== Abstract ==
The largest runs to date are usually performed for simple symmetric positive definite systems. This is a reasonable approach when measuring the overall scalability of an algorithm or implementation. However, in order to have an impact on science and industry, we must extend scalability to the most challenging applications, since these are the ones that really require extreme-scale simulation tools, e.g., multiscale, multiphysics, nonlinear, and transient problems. In this talk, we will discuss some of our experiences in the development of FEMPAR, an in-house, massively parallel finite element multiphysics simulator.
  
On the one hand, we will talk about how to handle, in a parallel element-based environment, multiphysics simulations that involve interface coupling, e.g., fluid-structure interaction. Our approach is based on the partition of topological meshes, together with ghost element information, in order to define locally the degrees of freedom and the unknowns that must be communicated among processors.
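
As an illustration of this local definition of degrees of freedom, the following is a minimal sketch in Python (not FEMPAR code); the element-to-part and element-to-vertex maps, the use of mesh vertices as degrees of freedom, and the lowest-part-id ownership rule are simplifying assumptions made for the example.

<syntaxhighlight lang="python">
# Hypothetical sketch (not FEMPAR's API): with the element partition and the ghost
# elements at hand, each part decides locally which vertex DOFs it owns and which
# must be received from a neighbouring processor.
def local_dof_layout(my_part, elem_part, elem_verts):
    """elem_part: element id -> part id; elem_verts: element id -> tuple of vertex ids."""
    owned_elems = [e for e, p in elem_part.items() if p == my_part]
    my_verts = {v for e in owned_elems for v in elem_verts[e]}

    # Ghost elements: off-part elements sharing at least one vertex with an owned element.
    ghost_elems = [e for e, p in elem_part.items()
                   if p != my_part and my_verts & set(elem_verts[e])]

    owned_dofs, ghost_dofs = set(), {}
    for v in my_verts:
        # Parts touching this vertex, recoverable locally thanks to the ghost elements.
        touching = {elem_part[e] for e in owned_elems + ghost_elems if v in elem_verts[e]}
        owner = min(touching)      # convention: the lowest part id owns interface DOFs
        if owner == my_part:
            owned_dofs.add(v)
        else:
            ghost_dofs[v] = owner  # value to be communicated from its owning processor
    return owned_dofs, ghost_dofs, ghost_elems

# Two quadrilaterals sharing vertices 1 and 3, split across two parts:
parts = {0: 0, 1: 1}
verts = {0: (0, 1, 3, 2), 1: (1, 4, 5, 3)}
print(local_dof_layout(1, parts, verts))  # part 1 owns {4, 5} and receives 1, 3 from part 0
</syntaxhighlight>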
  
On the other hand, we will discuss how we deal with the resulting multiphysics (non)linear systems. We have two different approaches to the problem: block preconditioning and monolithic solvers. Block preconditioning techniques are interesting in the sense that they allow us to decouple complex multiphysics problems into simpler, typically single-physics, subproblems. However, in order for block preconditioners to be effective, we must define good approximations of the Schur complement systems, which can be a complicated (and very heuristic) task. We will show how we have implemented complex (recursive) block preconditioning strategies in FEMPAR using abstract definitions of operators, and how this framework has been applied to different multiphysics solvers.
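
To give a flavour of what such an abstract-operator framework looks like, below is a minimal sketch in Python/SciPy (not FEMPAR code) of a block lower-triangular preconditioner for a generic saddle-point system; the diagonal Schur complement approximation and the exact block solves are illustrative assumptions, not FEMPAR's actual choices.

<syntaxhighlight lang="python">
# Hypothetical sketch: a block lower-triangular preconditioner for the saddle-point
# system [[A, B^T], [B, 0]], built as an abstract operator so that the block solves
# can be swapped for single-physics solvers. The Schur complement is approximated
# cheaply by S ~ -B diag(A)^{-1} B^T, an assumption made for illustration only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_triangular_preconditioner(A, B):
    n, m = A.shape[0], B.shape[0]
    A_solve = spla.splu(sp.csc_matrix(A))              # (1,1)-block solver (AMG/DD in practice)
    S = -(B @ sp.diags(1.0 / A.diagonal()) @ B.T)      # cheap Schur complement approximation
    S_solve = spla.splu(sp.csc_matrix(S))

    def apply(r):
        ru, rp = r[:n], r[n:]
        u = A_solve.solve(ru)                          # solve with the A block first,
        p = S_solve.solve(rp - B @ u)                  # then with the approximate Schur block
        return np.concatenate([u, p])

    return spla.LinearOperator((n + m, n + m), matvec=apply, dtype=float)

# Usage with a Krylov method on the monolithic matrix (toy data):
A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(50, 50), format="csr")
B = sp.random(10, 50, density=0.2, format="csr") + sp.eye(10, 50, format="csr")
K = sp.bmat([[A, B.T], [B, None]], format="csr")
x, info = spla.gmres(K, np.ones(K.shape[0]), M=block_triangular_preconditioner(A, B))
</syntaxhighlight>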
  
We will also discuss how we reach sustained scalability up to large core counts (about 400,000 cores on a BG/Q). Our in-house numerical linear algebra solvers are based on multilevel domain decomposition techniques and on very efficient practical implementations that rely on overlapped and asynchronous techniques. We will consider two different approaches: the first is a combination of block preconditioning and multilevel domain decomposition, whereas the second is a truly monolithic domain decomposition approach.
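
As a point of reference, a generic two-level domain decomposition preconditioner of additive type can be written as below; this is a textbook prototype, not necessarily the exact FEMPAR formulation, and multilevel variants apply the same construction recursively to the coarse problem.

<math>M^{-1}=\Phi A_{0}^{-1}\Phi^{T}+\sum_{i=1}^{P}R_{i}^{T}A_{i}^{-1}R_{i},\qquad A_{0}=\Phi^{T}A\Phi ,</math>

where <math>R_{i}</math> restricts global vectors to the <math>i</math>-th subdomain, <math>A_{i}=R_{i}AR_{i}^{T}</math> is the local subdomain problem, and the columns of <math>\Phi</math> span a coarse space that propagates information across the whole domain.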
  
Many multiphysics simulations are also multiscale, and the use of adaptively refined meshes can reduce the computational cost of simulations by orders of magnitude with respect to uniformly refined meshes. The possibility of reaching extremely scalable adaptive multiphysics solvers would open the door to unprecedented simulations of challenging problems that are out of reach nowadays. In this sense, we will show how we are dealing with scalable adaptive solvers in FEMPAR, via a combination of the p4est library for parallel mesh refinement and dynamic load balancing in our element-based framework. Further, we will show how we modify our solvers to deal with nonconforming meshes across interfaces, and the effect of cheap space-filling-curve partitions on solver robustness.
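
To illustrate why space-filling-curve partitions are so cheap, here is a minimal sketch in Python (not p4est or FEMPAR code) of a Morton (Z-order) partition of a uniform 2D grid of cells; adaptive meshes apply the same idea to the leaves of the refinement tree.

<syntaxhighlight lang="python">
# Hypothetical sketch: partition the cells of a uniform 2D grid among processors by
# sorting them along a Morton (Z-order) space-filling curve and cutting the curve
# into equally sized contiguous chunks.
def morton_index(i, j, bits=16):
    """Interleave the bits of the integer cell coordinates (i, j)."""
    z = 0
    for b in range(bits):
        z |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return z

def sfc_partition(ncells_x, ncells_y, nparts):
    cells = [(i, j) for i in range(ncells_x) for j in range(ncells_y)]
    cells.sort(key=lambda c: morton_index(*c))
    chunk = (len(cells) + nparts - 1) // nparts
    return [cells[p * chunk:(p + 1) * chunk] for p in range(nparts)]

# e.g. sfc_partition(8, 8, 4) gives each of 4 parts a contiguous piece of the curve,
# which tends to yield load-balanced and reasonably compact subdomains at almost no cost.
print(sfc_partition(8, 8, 4)[0])
</syntaxhighlight>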
  
== Recording of the presentation ==
 
{| style="font-size:120%; color: #222222; border: 1px solid darkgray; background: #f3f3f3; table-layout: fixed; width:100%;"
|-  
 
| {{#evt:service=youtube|id=https://youtu.be/TyXa29cnWq8 | alignment=center}}
|- style="text-align: center;"  
 
| Location: San Servolo Complex.  
|- style="text-align: center;"
 
| Date: 18 - 20 May 2015, San Servolo Island, Venice, Italy.
|}
 
== General Information ==
* Location: San Servolo Complex, Venice, Italy.
* Date: 18 - 20 May 2015, San Servolo Island, Venice, Italy.
* Secretariat: [//www.cimne.com/ International Center for Numerical Methods in Engineering (CIMNE)].
  
<div id="6"></div>
+
== External Links ==
* [//congress.cimne.com/coupled2015/frontal/default.asp IV Coupled] Official Website of the Conference.
* [//www.cimnemultimediachannel.com/ CIMNE Multimedia Channel]
