COMPLAS 2021 is the 16th conference of the COMPLAS Series.
The COMPLAS conferences started in 1987 and have since become established events in the field of computational plasticity and related topics. The first fifteen conferences in the COMPLAS series were all held in the city of Barcelona (Spain) and were very successful from the scientific, engineering and social points of view. We intend to make the 16th edition another successful COMPLAS meeting.
The objectives of COMPLAS 2021 are to address both the theoretical bases for the solution of nonlinear solid mechanics problems, involving plasticity and other material nonlinearities, and the numerical algorithms necessary for efficient and robust computer implementation. COMPLAS 2021 aims to act as a forum for practitioners in the nonlinear structural mechanics field to discuss recent advances and identify future research directions.
Scope
Additive manufacturing (AM) is an advanced method of manufacturing complex parts layer by layer until the required design is achieved. Laser powder bed fusion (L-PBF) is used to produce parts with high resolution because of its low layer thickness. L-PBF is based on the interaction between the laser beam and the material, whereby the powder material is melted and then solidified. This occurs in a short time frame, of the order of 0.02 seconds, which makes the whole process challenging to study in real time. Studies have shown how numerical methods and simulation software can be used to understand the laser beam and material interaction. This phenomenon is key to understanding the material behavior during melting and the mechanical properties of the part produced by the L-PBF process, as it is directly linked with the solidification of the melted powder material. A detailed study of the laser beam and material interaction is needed at the microscale and mesoscale, as it provides a better understanding and supports the development of a given material for the L-PBF process. This review provides a comprehensive understanding of the background for the use of simulation in AM and of the different simulation scales for the features of interest. The main conclusion of this review is the need to develop a methodology for using simulation at the micro- and mesoscale to understand the laser beam and material interaction and, with this data, improve the efficiency of the L-PBF process.
Additive manufacturing (AM) has undergone different phases of technological change, from being a mere manufacturing method for consumer goods, prototyping, and tooling to industrial series production of functional end-use parts. The seven AM sub-categories allow the creation of unprecedented designs that are otherwise impossible using conventional manufacturing (CM) methods. The layer-by-layer approach to manufacturing enables the creation of metal components with hollows and overhangs, often requiring sacrificial support structures which are removed prior to or during the post-processing phase. Factors such as poor part quality, high investment cost, low material efficiency, and long manufacturing time hindered the widespread adoption of AM in the past. The adoption of laser-based powder bed fusion for metals was particularly hindered by the need for support structures, the demand for post-processing, the numerous processing parameters involved, and the lack of understanding of the interaction between laser beam and material. Technological advances in AM have helped users reduce or remove some of these limitations, for example through optimized support structures for better material efficiency. Simulation-driven tools are one means of achieving time-efficient product development and superior structural components alongside raw-material and cost reductions. This study elucidates how such benefits can be realized using simulation tools. Simulation-driven optimization of the product design, process, and manufacturing is shown to change the design, the support structures, and the post-processing required to bring parts to the required reliability. Virtual manufacturing planning also provides a prior understanding of how processing parameters such as laser scan velocity, laser power, scanning strategy, hatch distance and others can be controlled to achieve optimal interaction between laser beam and material for the required part quality.
Simulation-driven design for additive manufacturing (DfAM) allows for agile design optimization based on design parameters and rules, boosting resource efficiency and productivity. This research proposes a life cycle cost (LCC)-driven DfAM tool, which can potentially improve service life and life cycle cost. The results provide insight into the simulation-driven DfAM of laser-based PBF and demonstrate the potential of LCC-based approaches to enhance confidence in adopting PBF for metals.
B. Liu, C. Cantwell, D. Moxey, M. Green, S. Sherwin
eccomas2022.
Abstract
A highly efficient matrix-free Helmholtz operator with single-instruction multiple-data (SIMD) vectorisation is implemented in Nektar++ [1] and applied to the simulation of anisotropic heat transport in tokamak edge plasma. A tokamak is currently the leading candidate for a practical fusion reactor using the magnetic confinement approach to produce electricity through controlled thermonuclear fusion. Predicting the transport of heat in magnetized plasma is important for a safe tokamak design. Due to the ionized nature of plasma, heat conduction in magnetized plasma is highly anisotropic, occurring predominantly along the magnetic field lines. In this study, a variational form is proposed to simulate the anisotropic heat transport in magnetized plasma, and the details of its mathematical derivation and implementation are presented. To accurately approximate the thermal load deposited by the plasma on the wall of the tokamak chamber, highly scalable and efficient algorithms are crucial. To achieve this, a matrix-free Helmholtz operator is implemented in the Nektar++ framework, utilising sum-factorisation to reduce the operation count and increase arithmetic intensity, and leveraging SIMD vectorisation to accelerate the computation on modern hardware. The performance of the implementation is assessed by measuring throughput and speed-up of the operators using deformed and regular quadrilateral and triangular elements.
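The sum-factorisation step mentioned above can be illustrated with a small sketch (plain NumPy for exposition, not Nektar++ code; the basis and weight matrices are random stand-ins). Applying the dense elemental matrix to a quadrilateral element of order p costs O(p^4) operations, while exploiting the tensor-product structure reduces this to chains of small matrix products at O(p^3) cost:

```python
import numpy as np

p = 4                           # polynomial order (assumption for the demo)
q = p + 2                       # quadrature points per direction
B = np.random.rand(q, p + 1)    # 1D basis evaluated at quadrature points
W = np.diag(np.random.rand(q))  # 1D quadrature weights (diagonal)

def mass_apply_naive(u):
    """Dense elemental operator application: O((p+1)^4) work."""
    B2 = np.kron(B, B)                 # 2D basis via Kronecker product
    M = B2.T @ np.kron(W, W) @ B2      # full elemental matrix
    return M @ u

def mass_apply_sumfac(u):
    """Matrix-free, sum-factorised application: chains of small matmuls."""
    U = u.reshape(p + 1, p + 1)
    Uq = B @ U @ B.T                   # interpolate to quadrature points
    Uq = W @ Uq @ W                    # apply tensor-product weights
    return (B.T @ Uq @ B).reshape(-1)  # project back to coefficients

u = np.random.rand((p + 1) ** 2)
assert np.allclose(mass_apply_naive(u), mass_apply_sumfac(u))
```

The small matrix products in the sum-factorised version also have much higher arithmetic intensity, which is what makes SIMD vectorisation over batches of elements effective.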
We investigate the scaling and efficiency of the deep neural network multigrid method (DNN-MG), a novel neural-network-based technique for the simulation of the Navier-Stokes equations that combines an adaptive geometric multigrid solver with a recurrent neural network with memory. In DNN-MG, the neural network replaces one or more of the finest multigrid levels and provides a correction for the classical solve in the next time step. This leads to little degradation in solution quality while substantially reducing the overall computational cost. At the same time, the use of the multigrid solver at the coarse scales allows for a compact network that is easy to train, generalizes well, and allows for the incorporation of physical constraints. In this work, we investigate how the network size affects training, solution quality, and the overall runtime of the computations.
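The structure of DNN-MG can be sketched schematically (an illustration under stated assumptions, not the authors' code): a classical two-grid cycle for a 1D Poisson problem, with a stub marking where the trained recurrent network would supply the fine-level correction. Here the stub returns zero, so the cycle reduces to plain two-grid:

```python
import numpy as np

n = 31                                   # fine-grid interior points
h = 1.0 / (n + 1)
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # 1D Laplacian

def restrict(r):                         # full-weighting restriction
    return 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])

def prolong(e, n_fine):                  # linear interpolation to fine grid
    ef = np.zeros(n_fine)
    ef[1:-1:2] = e
    ef[0:-2:2] += 0.5 * e
    ef[2::2] += 0.5 * e
    return ef

def network_correction(residual):
    return np.zeros_like(residual)       # placeholder for the trained net

def two_grid(u, f, nu=3, omega=2 / 3):
    for _ in range(nu):                  # damped-Jacobi pre-smoothing
        u += omega * (f - A @ u) / np.diag(A)
    r = f - A @ u
    nc = (n - 1) // 2                    # coarse grid
    hc = 1.0 / (nc + 1)
    Ac = (np.diag(2 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
          - np.diag(np.ones(nc - 1), -1)) / hc**2
    u += prolong(np.linalg.solve(Ac, restrict(r)), n)  # coarse correction
    u += network_correction(f - A @ u)   # DNN-MG-style fine-level correction
    return u

f = np.ones(n)
u = np.zeros(n)
for _ in range(12):
    u = two_grid(u, f)
exact = np.linalg.solve(A, f)
assert np.linalg.norm(u - exact) < 1e-6 * np.linalg.norm(exact)
```

In DNN-MG the stub would be a recurrent network with memory, trained so that the cheap coarse solve plus the learned correction approaches the accuracy of a full fine-grid solve.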
J. Gratien, C. Chevalier, T. Guignon, X. Tunc, P. Have, S. De Chaisemartin
eccomas2022.
Abstract
Applications for solving large and complex partial differential equation systems nowadays often rely on frameworks like Arcane, Dune, or Feel++. Linear solver packages like PETSc or Trilinos are used to manage linear systems and provide access to a wide range of algorithms. With the evolution of high-performance computing, the variety of hardware features available in new architectures has increased considerably. ARM processors, AMD, Intel and Nvidia GP-GPUs, TPU and FPGA devices are now common. To handle the induced complexity, different strategies are adopted in each linear solver framework. One of them consists in introducing a new layer that provides abstractions to manage performance portability and to enable several parallel programming models. In this paper, we evaluate the performance of linear solver packages that rely on tools like SYCL [16], Kokkos [8] or HARTS [11] to handle runtime systems like OpenMP, TBB, or CUDA. A simulator for advection-diffusion problems has been developed with ALIEN, a C++ framework that provides a high-level, unified API for handling large distributed matrices and vectors. We have benchmarked different solver algorithms and evaluated the efficiency of their implementations and their capability to perform on different architectures, for instance large numbers of cores, GP-GPU accelerators, or processors with large SIMD instructions.
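The abstraction-layer idea can be sketched in a few lines (a hypothetical interface for exposition, not ALIEN's actual API): the solver algorithm is written once against an abstract operator application, so the backend kernel, which a runtime such as Kokkos or SYCL would dispatch to the hardware, can be swapped without touching the algorithm:

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=500):
    """CG written only in terms of an abstract operator application."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Backend: a 1D diffusion stencil applied matrix-free, standing in for a
# kernel that a performance-portability runtime would map to the hardware.
n = 50
def diffusion_apply(u):
    out = 2.0 * u
    out[:-1] -= u[1:]
    out[1:] -= u[:-1]
    return out

b = np.ones(n)
x = conjugate_gradient(diffusion_apply, b)
assert np.linalg.norm(diffusion_apply(x) - b) < 1e-8
```

Swapping `diffusion_apply` for a GPU or SIMD implementation leaves `conjugate_gradient` unchanged, which is the essence of the performance-portability layer benchmarked in the paper.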
A new hybrid algorithm for the LDU-factorization of large sparse matrices, combined with an iterative solver, is proposed; it retains the same accuracy as the classical factorization. The final Schur complement is generated by an iterative solver for multiple right-hand sides, using a block GCR method with a lower-precision factorization as preconditioner, which realizes mixed-precision arithmetic; the Schur complement is then factorized in higher precision. The essential procedure in this algorithm is the decomposition of the matrix into a union of moderate and hard parts, which is realized by LDU-factorization in lower precision with symmetric pivoting and a threshold postponing technique.
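The mixed-precision mechanism can be sketched as follows (a simplified stand-in: a dense float32 solve plays the role of the lower-precision LDU-factorization, and block GCR for multiple right-hand sides is reduced to iterative refinement with a single right-hand side):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)        # "factorization" held in low precision

x = np.zeros(n)
for _ in range(5):                # refinement loop in high precision
    r = b - A @ x                 # double-precision residual
    # low-precision solve used as preconditioner/correction
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d

assert np.linalg.norm(b - A @ x) < 1e-10 * np.linalg.norm(b)
```

The point mirrors the abstract: the expensive factorization is done cheaply in low precision, and the outer high-precision iteration recovers full accuracy, provided the hard part of the matrix is treated separately in higher precision.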
Graphics cards that are equipped with Tensor Core units designed for AI applications, for example the NVIDIA Ampere A100, promise very high peak rates concerning their computing power (156 TFLOP/s in single and 312 TFLOP/s in half precision in the case of the A100). This is only achieved when performing arithmetically intensive operations such as dense matrix multiplications in the aforementioned lower precision, which is an obstacle when trying to use this hardware for solving linear systems arising from PDEs discretized with the finite element method. In previous works, we delivered a proof of concept that the predecessor of the A100, the V100 and its Tensor Cores, can be exploited to a great extent when solving Poisson's equation on the unit square if a hardware-oriented direct solver based on prehandling via hierarchical finite elements and a Schur complement approach is used. In this work, using numerical results on an A100 graphics card, we show that the method also achieves a very high performance if Poisson's equation, which is discretized by linear finite elements, is solved on a more complex domain corresponding to a flow around a square configuration.
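A Schur-complement solve of the kind exploited here can be sketched in outline (a generic two-block elimination, not the authors' hierarchical finite-element prehandling): the dense matrix products that dominate the reduction are exactly the arithmetically intensive kernels a Tensor Core unit accelerates in lower precision:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 60, 40
A11 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + n1 * np.eye(n2)
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

# Schur complement S = A22 - A21 A11^{-1} A12; the dense products here
# are the operations that map well onto Tensor Core matmuls.
A11_inv_A12 = np.linalg.solve(A11, A12)
S = A22 - A21 @ A11_inv_A12

# Block forward/backward substitution.
y1 = np.linalg.solve(A11, b1)
x2 = np.linalg.solve(S, b2 - A21 @ y1)
x1 = y1 - A11_inv_A12 @ x2

A = np.block([[A11, A12], [A21, A22]])
x = np.concatenate([x1, x2])
assert np.linalg.norm(A @ x - np.concatenate([b1, b2])) < 1e-8
```

In the paper's setting, prehandling via hierarchical finite elements produces blocks for which this reduction is both stable and dominated by dense multiplications, which is why lower-precision hardware can be used without losing solution accuracy.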
Landslides triggered by earthquakes are one of the major seismic hazards and can cause extensive damage and fatalities. The material point method (MPM) has become a popular technique for modelling such large mass movements. A limitation of existing MPM implementations is the lack of appropriate boundary conditions for performing seismic response analyses of slopes. To bridge this gap, an extension to the basic MPM framework is presented for simulating the seismic triggering and subsequent collapse of slopes within a single analysis step. The concepts of a compliant base boundary and free-field columns are applied within the MPM framework, enabling the direct application of input ground motions and accounting for the absorption of outgoing waves.
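The compliant-base concept can be summarized with a minimal sketch (a generic Lysmer-Kuhlemeyer-style formulation for illustration, not the authors' MPM implementation): the input motion is injected as an equivalent shear traction while a viscous dashpot absorbs outgoing waves rather than reflecting them back into the model:

```python
def compliant_base_traction(rho, c_s, v_input, v_boundary):
    """Shear traction applied at the base of the model.

    rho        -- density of the underlying elastic half-space
    c_s        -- shear-wave velocity of the half-space
    v_input    -- particle velocity of the upward-propagating input wave
    v_boundary -- computed particle velocity at the boundary
    """
    # 2*rho*c_s*v_input injects the input motion as a stress;
    # -rho*c_s*v_boundary is the absorbing dashpot acting on the
    # total velocity at the boundary.
    return rho * c_s * (2.0 * v_input - v_boundary)

# If the boundary moves exactly with the incident wave, the traction
# reduces to the stress carried by the upward wave, rho*c_s*v_input:
assert compliant_base_traction(2000.0, 400.0, 0.1, 0.1) == 2000.0 * 400.0 * 0.1
```

The free-field columns mentioned in the abstract play an analogous role at the lateral boundaries, supplying the boundary velocities a one-dimensional free-field response would produce.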
In this paper, academic and industrial test cases have been conducted in order to validate the approach of coupling a Penalized Direct Forcing method with an immersed turbulent wall model. Good results are obtained compared to a body-fitted mesh with the Werner & Wengle wall model. In a forthcoming second step, we plan to couple the immersed wall law with a k-epsilon model, as well as to perform obstacle shape optimization during the flow computation.
L. Ménez, E. Goncalves, P. Parnaudeau, D. Colombet
eccomas2022.
Abstract
The aim of this work is to model compressible flows involving shock waves past a solid obstacle using a non-conformal mesh. An Immersed Boundary Method (IBM) with feedback forcing and a volume penalization method are considered and compared. Both methods are validated on various test-cases. Accuracy and computational cost are discussed.
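The volume penalization idea, one of the two methods compared, can be illustrated with a 1D toy problem (an illustrative sketch, not the authors' compressible solver): a Brinkman-type term -(chi/eta)(u - u_s) is added inside the solid mask, driving the fluid velocity toward the (here stationary) obstacle velocity as eta tends to zero:

```python
import numpy as np

n, visc, eta = 200, 0.01, 1e-4
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / visc                        # explicit diffusion stability
chi = ((x > 0.4) & (x < 0.6)).astype(float)    # mask: 1 inside the obstacle

u = np.ones(n)                                 # initial uniform velocity
for _ in range(2000):
    lap = np.zeros(n)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    # explicit diffusion; implicit treatment of the stiff penalty term
    u = (u + dt * visc * lap) / (1.0 + dt * chi / eta)
    u[0] = u[-1] = 1.0                         # Dirichlet ends

assert np.max(np.abs(u[chi == 1])) < 0.05      # velocity ~0 inside the solid
```

Treating the penalty term implicitly avoids the severe time-step restriction the stiff term would otherwise impose; the same trade-off between penalty stiffness, accuracy at the immersed boundary, and cost is what the comparison with the feedback-forcing IBM in the paper addresses.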