 
===4.3.1 The reduced-order subspace of statically admissible stresses (''V''<sub>&sigma;</sub><sup>*</sup>)===

Similarly to the problem addressed in Chapter [[#3 Reduced-order model of the RVE|3]] concerning the reduced basis for the displacement fluctuations, the problem of constructing a <math display="inline">\mathcal{O}(n_u)</math>-dimensional representation of the stress field reduces, in principle, to finding a set of orthogonal basis functions <math display="inline">\{ \boldsymbol{\mathit{\Psi}}_1(\mathbf{x}), \boldsymbol{\mathit{\Psi}}_2(\mathbf{x}) \ldots \boldsymbol{\mathit{\Psi}}_{n_{\sigma }}(\mathbf{x})\} </math> (<math display="inline">n_{\sigma }= \mathcal{O}(n_u)</math>) whose span accurately approximates the set of all possible ''stress solutions'', that is, the set of all ''statically admissible stresses''. Accordingly, the procedure to compute the reduced basis for the stress field would be, ''mutatis mutandis'', formally identical to that explained earlier for the displacement fluctuations. Firstly, finite element stress distributions over the cell are computed for representative input macro-strain histories (the most practical, and arguably most consistent, choice is to use the same strain trajectories as in the computation of the displacement fluctuation snapshots). Then, the elastic/inelastic dimensionality reduction process set forth in Section [[#3.1.2 Elastic/Inelastic reduced basis functions|3.1.2]] is applied to the resulting ensemble of stress solutions <math display="inline">\{ \boldsymbol{\sigma }^1(\mathbf{x}),\boldsymbol{\sigma }^2(\mathbf{x}) \ldots  \boldsymbol{\sigma }^{n_{snp}}(\mathbf{x}) \} </math>, in order to identify both the elastic and the ''essential'' inelastic stress modes. The space spanned by these modes will be denoted hereafter by <math display="inline">\mathcal{V}_{\sigma }^{*}</math> and termed the ''reduced-order subspace of statically admissible stresses'':
  
 
<span id="eq-4.11"></span>
 
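The offline construction described above (finite element stress snapshots computed for representative macro-strain histories, followed by the elastic/inelastic dimensionality reduction) can be illustrated with a brief sketch. The following Python fragment is only indicative and rests on assumptions: the snapshot matrices <code>S_el</code> and <code>S_inel</code>, the tolerance, and the deflation of the elastic content from the inelastic snapshots are illustrative choices, not a reproduction of the algorithm of Section 3.1.2.

<syntaxhighlight lang="python">
import numpy as np

def pod_modes(S, tol=1e-8):
    """Plain POD via the SVD: keep the left singular vectors whose singular
    values exceed tol relative to the largest one (the L2(Omega) weighting
    discussed in Appendix A is omitted here for brevity)."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    if s.size == 0 or s[0] == 0.0:
        return U[:, :0]
    return U[:, : int(np.sum(s / s[0] > tol))]

def stress_basis(S_el, S_inel, n_inel):
    """Sketch of an elastic/inelastic partitioned POD of stress snapshots:
    all elastic modes are retained, their content is removed from the
    inelastic snapshots, and only the leading 'essential' inelastic modes
    are kept. The result spans an approximation of V_sigma^*."""
    Psi_el = pod_modes(S_el)                        # elastic stress modes
    S_def = S_inel - Psi_el @ (Psi_el.T @ S_inel)   # deflate elastic content
    Psi_inel = pod_modes(S_def)[:, :n_inel]         # essential inelastic modes
    return np.hstack([Psi_el, Psi_inel])

# Hypothetical usage: the columns of S_el / S_inel are vectorized FE stress
# fields computed for the same macro-strain trajectories employed for the
# displacement fluctuation snapshots.
# Psi = stress_basis(S_el, S_inel, n_inel=10)
</syntaxhighlight>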
  
 
One plausible route for determining a low-dimensional approximation space that embraces both statically admissible and statically inadmissible stresses might be to collect, during the offline finite element calculations, not only the converged stresses, but also the unconverged ones (i.e., those generated during the corresponding iterative algorithm), and then perform the POD-based dimensionality reduction over the whole ensemble of snapshots<span id="fnc-11"></span>[[#fn-11|<sup>1</sup>]]. In the present work, however, we pursue an approach that precludes the necessity of undertaking this computationally laborious, and in some respects objectionable, process (there is no guarantee that the span of selected, unconverged stress snapshots covers the entire space of statically inadmissible stresses). The idea behind the employed approach was originally conceived, but not fully developed, by the authors in a recent report <span id='citeF-36'></span>[[#cite-36|[36]]]. Here, the theory underlying such an idea is further elaborated and cast into the formalisms of functional analysis.

<span id="fn-11"></span>

<span style="text-align: center; font-size: 75%;">([[#fnc-11|<sup>1</sup>]]) Incidentally, this way of proceeding has the flavor of the nonlinear model reduction strategy advocated by Carlberg and co-workers <span id='citeF-78'></span><span id='citeF-80'></span>[[#cite-78|[78,80]]], in which the reduction is carried out over the linearized form of the pertinent governing equation.</span>
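For concreteness, the alternative route mentioned in the preceding paragraph, which is deliberately not adopted in this work, would amount to something like the sketch below; the snapshot matrices and the mode count are hypothetical placeholders.

<syntaxhighlight lang="python">
import numpy as np

def pooled_stress_basis(S_converged, S_unconverged, n_modes):
    """Alternative route (not pursued here): pool the converged stress
    snapshots with the unconverged iterates produced by the offline
    Newton-type solver and apply a single POD to the whole ensemble,
    hoping that the unconverged iterates also sample the space of
    statically inadmissible stresses."""
    S = np.hstack([S_converged, S_unconverged])
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :n_modes]
</syntaxhighlight>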
  
 
====4.3.3.1 Continuum formulation====
 
 
|}
 
 
  
 
<span id="fn-12"></span>
 
 
=Appendix A. Connection between POD and SVD=
 
We saw in Section [[#3.1.2 Proper Orthogonal Decomposition|3.1.2]] that the POD problem for the displacement fluctuations boils down to the solution of the following eigenvalue problem:
  
 
<span id="eq-A.1"></span>
 
 
| style="text-align: center;" | <math> = \displaystyle \sum _{k=1}^{n_{snp}} \left\langle {{\Delta \boldsymbol{u}^{*}}^k},{{\Delta \boldsymbol{u}^{*}}^k} \right\rangle_{L_2(\Omega )}      </math>
 
 
|-
 
| style="text-align: center;" | <math> =  \displaystyle \sum _{k=1}^{n_{snp}} \left(\sum _{I=1}^{n} \sum _{J=1}^{n} {\Delta \boldsymbol{u}^{*}}_{I}^{k} {\Delta \boldsymbol{u}^{*}}_{J}^{k}        \left\langle {N_{{I}}},{ N_{{J}}} \right\rangle_{L_2(\Omega )} \right)      </math>
 
|-
 
 
| style="text-align: center;" | <math> = \textrm{tr}( (\boldsymbol{X}_{u}- \boldsymbol{X}_{u}^{*})^T \mathbf{M}(\boldsymbol{X}_{u}-\boldsymbol{X}_{u}^{*})) </math>
 
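The POD/SVD connection developed in this appendix can also be checked numerically. The fragment below is a minimal, self-contained sketch under stated assumptions: a random symmetric positive-definite matrix <code>M</code> plays the role of the finite element Gram matrix with entries <math display="inline">\left\langle N_{I},N_{J}\right\rangle_{L_2(\Omega )}</math>, and the snapshot matrix <code>X</code> is random; the script verifies that the M-orthonormal modes obtained through an ordinary SVD of the mapped snapshots solve the corresponding eigenvalue problem.

<syntaxhighlight lang="python">
import numpy as np

def weighted_pod(X, M, n_modes):
    """POD of the snapshot matrix X (columns = nodal snapshot vectors) with
    respect to the inner product induced by the SPD matrix M: returns Phi
    with Phi.T @ M @ Phi = I, computed through an ordinary SVD after the
    change of variables Y = L.T @ X, with M = L @ L.T (Cholesky)."""
    L = np.linalg.cholesky(M)                     # M = L L^T
    U, s, _ = np.linalg.svd(L.T @ X, full_matrices=False)
    Phi = np.linalg.solve(L.T, U[:, :n_modes])    # Phi = L^{-T} U_n
    return Phi, s[:n_modes]

# Minimal numerical check (all data random, purely illustrative):
rng = np.random.default_rng(0)
n, n_snp, n_modes = 40, 15, 3
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)                       # SPD stand-in for the Gram matrix
X = rng.standard_normal((n, n_snp))

Phi, s = weighted_pod(X, M, n_modes)
# The modes are orthonormal in the M-inner product ...
assert np.allclose(Phi.T @ M @ Phi, np.eye(n_modes), atol=1e-8)
# ... and solve the eigenvalue problem  X X^T M Phi = Phi diag(s^2),
# i.e. the weighted POD problem is equivalent to an SVD of L^T X.
assert np.allclose(X @ (X.T @ (M @ Phi)), Phi * s**2, atol=1e-6)
</syntaxhighlight>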
  
 
<div id="cite-1"></div>
 
[[#citeF-1|[1]]] Ashby, M.F. (1992) "Physical modelling of materials problems", Volume 8. Maney Publishing. Materials Science and Technology 2 102&#8211;111
  
 
<div id="cite-2"></div>
 
[[#citeF-2|[2]]] Venkataraman, S. and Haftka, RT. (2004) "Structural optimization complexity: what has Moore’s law done for us?", Volume 28. Springer. Structural and Multidisciplinary Optimization 6 375&#8211;387
  
 
<div id="cite-3"></div>
 
[[#citeF-3|[3]]] Thimbleby, H. (1993) "Computerised Parkinson's law", Volume 4. IET. Computing & Control Engineering Journal 5 197&#8211;198
  
 
<div id="cite-4"></div>
 
[[#citeF-4|[4]]] Gross, D. and Seelig, T. (2011) "Fracture mechanics: with an introduction to micromechanics". Springer
  
 
<div id="cite-5"></div>
 
[[#citeF-5|[5]]] Hill, R. (1963) "Elastic properties of reinforced solids: some theoretical principles", Volume 11. Elsevier. Journal of the Mechanics and Physics of Solids 5 357&#8211;372
  
 
<div id="cite-6"></div>
 
[[#citeF-6|[6]]] Hashin, Z. and Shtrikman, S. (1963) "A variational approach to the theory of the elastic behaviour of multiphase materials", Volume 11. Elsevier. Journal of the Mechanics and Physics of Solids 2 127&#8211;140
  
 
<div id="cite-7"></div>
 
[[#citeF-7|[7]]] Mori, T. and Tanaka, K. (1973) "Average stress in matrix and average elastic energy of materials with misfitting inclusions", Volume 21. Elsevier. Acta metallurgica 5 571&#8211;574
  
 
<div id="cite-8"></div>
 
[[#citeF-8|[8]]] Bohm, H.J. (1998) "A short introduction to basic aspects of continuum micromechanics", Volume 3. Citeseer. CDL-FMD Report
  
 
<div id="cite-9"></div>
 
[[#citeF-9|[9]]] Zaoui, A. (2002) "Continuum micromechanics: survey", Volume 128. American Society of Civil Engineers. Journal of Engineering Mechanics 8 808&#8211;816
  
 
<div id="cite-10"></div>
 
[[#citeF-10|[10]]] Pindera, M.J. and Khatam, H. and Drago, A.S. and Bansal, Y. (2009) "Micromechanics of spatially uniform heterogeneous media: A critical review and emerging approaches", Volume 40. Elsevier. Composites Part B: Engineering 5 349&#8211;378
  
 
<div id="cite-11"></div>
 
[[#citeF-11|[11]]] Yuan, Z. and Fish, J. (2008) "Towards Realization of Computational Homogenization in Practice", Volume 73. Wiley Online Library. Int. J. Numer. Meth. Engng 361&#8211;380
  
 
<div id="cite-12"></div>
 
[[#citeF-12|[12]]] Geers, M.G.D. and Kouznetsova, VG and Brekelmans, WAM. (2010) "Multi-scale computational homogenization: Trends and challenges", Volume 234. Elsevier. Journal of computational and applied mathematics 7 2175&#8211;2182
  
 
<div id="cite-13"></div>
 
[[#citeF-13|[13]]] Feyel, F. and Chaboche, J.L. (2000) "FE-2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials", Volume 183. Elsevier. Computer methods in applied mechanics and engineering 3 309&#8211;330
  
 
<div id="cite-14"></div>
 
[[#citeF-14|[14]]] Oden, J.T. and Belytschko, T. and Fish, J. and Hughes, TJ and Johnson, C. and Keyes, D. and Laub, A. and Petzold, L. and Srolovitz, D. and Yip, S. (2006) "Revolutionizing engineering science through simulation". National Science Foundation (NSF), Blue Ribbon Panel on Simulation-Based Engineering Science [3, 7, 101, 123]
  
 
<div id="cite-15"></div>
 
[[#citeF-15|[15]]] Oskay, C. and Fish, J. (2007) "Eigendeformation-based reduced order homogenization for failure analysis of heterogeneous materials", Volume 196. Elsevier. Computer Methods in Applied Mechanics and Engineering 7 1216&#8211;1243
  
 
<div id="cite-16"></div>
 
[[#citeF-16|[16]]] Robert W. Batterman. (2011) "The Tyranny of Scales"
  
 
<div id="cite-17"></div>
 
[[#citeF-17|[17]]] Hill, R. (1965) "Continuum micro-mechanics of elastoplastic polycrystals", Volume 13. Elsevier. Journal of the Mechanics and Physics of Solids 2 89&#8211;101
  
 
<div id="cite-18"></div>
 
[[#citeF-18|[18]]] Dvorak, GJ and Wafa, AM and Bahei-El-Din, YA. (1994) "Implementation of the transformation field analysis for inelastic composite materials", Volume 14. Springer. Computational Mechanics 3 201&#8211;228
  
 
<div id="cite-19"></div>
 
[[#citeF-19|[19]]] Michel, J.C. and Suquet, P. (2003) "Nonuniform transformation field analysis", Volume 40. Elsevier. International journal of solids and structures 25 6937&#8211;6955
  
 
<div id="cite-20"></div>
 
[[#citeF-20|[20]]] Michel, J.C. and Suquet, P. (2004) "Computational analysis of nonlinear composite structures using the nonuniform transformation field analysis", Volume 193. Elsevier. Computer methods in applied mechanics and engineering 48-51 5477&#8211;5502
  
 
<div id="cite-21"></div>
 
[[#citeF-21|[21]]] Roussette, S. and Michel, J.C. and Suquet, P. (2009) "Nonuniform transformation field analysis of elastic-viscoplastic composites", Volume 69. Elsevier. Composites Science and Technology 1 22&#8211;27
  
 
<div id="cite-22"></div>
 
[[#citeF-22|[22]]] Fish, J. and Shek, K. and Pandheeradi, M. and Shephard, M.S. (1997) "Computational plasticity for composite structures based on mathematical homogenization: Theory and practice", Volume 148. Elsevier. Computer Methods in Applied Mechanics and Engineering 1-2 53&#8211;73
  
 
<div id="cite-23"></div>
 
[[#citeF-23|[23]]] Maday, Y. and Patera, AT and Turinici, G. (2002) "Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods"
  
 
<div id="cite-24"></div>
 
[[#citeF-24|[24]]] R.D. Cook. (1995) "Finite element modeling for stress analysis". John Wiley and Sons.
  
 
<div id="cite-25"></div>
 
[[#citeF-25|[25]]] Krysl, P. and Lall, S. and Marsden, JE. (2001) "Dimensional model reduction in non-linear finite element dynamics of solids and structures", Volume 51. Wiley Online Library. International Journal for Numerical Methods in Engineering 4 479&#8211;504
  
 
<div id="cite-26"></div>
 
[[#citeF-26|[26]]] Salomon, D. (2004) "Data compression: the complete reference". Springer-Verlag New York Incorporated
  
 
<div id="cite-27"></div>
 
[[#citeF-27|[27]]] Bishop, C.M. and SpringerLink (Service en ligne). (2006) "Pattern recognition and machine learning", Volume 4. springer New York
  
 
<div id="cite-28"></div>
 
[[#citeF-28|[28]]] Barrault, M. and Maday, Y. and Nguyen, N.C. and Patera, A.T. (2004) "An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations", Volume 339. Elsevier. Comptes Rendus Mathematique 9 667&#8211;672
  
 
<div id="cite-29"></div>
 
[[#citeF-29|[29]]] Grepl, M.A. and Maday, Y. and Nguyen, N.C. and Patera, A.T. (2007) "Efficient reduced-basis treatment of nonaffine and nonlinear partial differential equations", Volume 41. edpsciences. org. Mathematical Modelling and Numerical Analysis 3 575&#8211;605
  
 
<div id="cite-30"></div>
 
[[#citeF-30|[30]]] Nguyen, NC and Patera, AT and Peraire, J. (2008) "A best points interpolation method for efficient approximation of parametrized functions", Volume 73. Wiley Online Library. Int. J. Numer. Meth. Engng 521&#8211;543
  
 
<div id="cite-31"></div>
 
[[#citeF-31|[31]]] Chaturantabut, S. and Sorensen, D.C. (2010) "Discrete empirical interpolation for nonlinear model reduction". IEEE. Decision and Control, 2009 held jointly with the 2009 28th Chinese Control Conference. CDC/CCC 2009. Proceedings of the 48th IEEE Conference on 4316&#8211;4321
  
 
<div id="cite-32"></div>
 
[[#citeF-32|[32]]] Astrid, P. (2004) "Reduction of process simulation models: a proper orthogonal decomposition approach". Technische Universiteit Eindhoven
  
 
<div id="cite-33"></div>
 
[[#citeF-33|[33]]] An, S.S. and Kim, T. and James, D.L. (2009) "Optimizing cubature for efficient integration of subspace deformations", Volume 27. NIH Public Access. ACM transactions on graphics 5 165
  
 
<div id="cite-34"></div>
 
[[#citeF-34|[34]]] Kim, T. and James, D.L. (2009) "Skipping steps in deformable simulation with online model reduction". ACM. ACM SIGGRAPH Asia 2009 papers 1&#8211;9
  
 
<div id="cite-35"></div>
 
[[#citeF-35|[35]]] J. D. Hoffman. (2001) "Numerical Methods for Engineers and Scientists". Marcel Dekker
  
 
<div id="cite-36"></div>
 
[[#citeF-36|[36]]] J. A. Hernández and J. Oliver and    A.E. Huespe and  M. Caicedo. (2012) "High-performance model reduction procedures in multiscale simulations". CIMNE
  
 
<div id="cite-37"></div>
 
[[#citeF-37|[37]]] Boyaval, S. (2007) "Reduced-basis approach for homogenization beyond the periodic setting". Arxiv preprint math/0702674
  
 
<div id="cite-38"></div>
 
[[#citeF-38|[38]]] Yvonnet, J. and He, Q.C. (2007) "The reduced model multiscale method (R3M) for the non-linear homogenization of hyperelastic media at finite strains", Volume 223. Elsevier. Journal of Computational Physics 1 341&#8211;368
  
 
<div id="cite-39"></div>
 
[[#citeF-39|[39]]] Monteiro, E. and Yvonnet, J. and He, QC. (2008) "Computational homogenization for nonlinear conduction in heterogeneous materials using model reduction", Volume 42. Elsevier. Computational Materials Science 4 704&#8211;712
  
 
<div id="cite-40"></div>
 
[[#citeF-40|[40]]] Nguyen, NC. (2008) "A multiscale reduced-basis method for parametrized elliptic partial differential equations with multiple scales", Volume 227. Elsevier. Journal of Computational Physics 23 9807&#8211;9822
  
 
<div id="cite-41"></div>
 
[[#citeF-41|[41]]] Efendiev, Yalchin and Galvis, Juan and Gildin, Eduardo. (2012) "Local&#8211;global multiscale model reduction for flows in high-contrast heterogeneous media", Volume 231. Elsevier. Journal of Computational Physics 24 8100&#8211;8113
  
 
<div id="cite-42"></div>
 
[[#citeF-42|[42]]] Efendiev, Yalchin and Galvis, Juan and Thomines, Florian. (2012) "A systematic coarse-scale model reduction technique for parameter-dependent flows in highly heterogeneous media and its applications", Volume 10. SIAM. Multiscale Modeling & Simulation 4 1317&#8211;1343
  
 
<div id="cite-43"></div>
 
[[#citeF-43|[43]]] Abdulle, Assyr and Bai, Yun. (2012) "Reduced basis finite element heterogeneous multiscale method for high-order discretizations of elliptic homogenization problems", Volume 231. Elsevier. Journal of Computational Physics 21 7014&#8211;7036
  
 
<div id="cite-44"></div>
 
[[#citeF-44|[44]]] Abdulle, Assyr and Bai, Yun. (2013) "Adaptive reduced basis finite element heterogeneous multiscale method", Volume 257. Elsevier. Computer Methods in Applied Mechanics and Engineering 203&#8211;220
  
 
<div id="cite-45"></div>
 
[[#citeF-45|[45]]] Drago, A. and Pindera, M.J. (2007) "Micro-macromechanical analysis of heterogeneous materials: Macroscopically homogeneous vs periodic microstructures", Volume 67. Elsevier. Composites science and technology 6 1243&#8211;1263
  
 
<div id="cite-46"></div>
 
[[#citeF-46|[46]]] Miehe, C. and Schotte, J. and Schroder, J. (1999) "Computational micro-macro transitions and overall moduli in the analysis of polycrystals at large strains", Volume 16. Elsevier. Computational Materials Science 1-4 372&#8211;382
  
 
<div id="cite-47"></div>
 
[[#citeF-47|[47]]] de Souza Neto, EA and Feijóo, RA. (2006) "Variational foundations of multi-scale constitutive models of solid: small and large strain kinematical formulation", Volume 16. LNCC Research & Development Report
  
 
<div id="cite-48"></div>
 
[[#citeF-48|[48]]] Kouznetsova, V.G. (2002) "Computational homogenization for the multi-scale analysis of multi-phase materials". Technische Universiteit Eindhoven
  
 
<div id="cite-49"></div>
 
[[#citeF-49|[49]]] Michel, JC and Moulinec, H. and Suquet, P. (1999) "Effective properties of composite materials with periodic microstructure: a computational approach", Volume 172. Elsevier. Computer methods in applied mechanics and engineering 1-4 109&#8211;143
  
 
<div id="cite-50"></div>
 
[[#citeF-50|[50]]] Kanit, T. and Forest, S. and Galliet, I. and Mounoury, V. and Jeulin, D. (2003) "Determination of the size of the representative volume element for random composites: statistical and numerical approach", Volume 40. Elsevier. International Journal of Solids and Structures 13 3647&#8211;3679
  
 
<div id="cite-51"></div>
 
[[#citeF-51|[51]]] Nguyen, V.P. and Lloberas-Valls, O. and Stroeven, M. and Sluys, L.J. (2010) "On the existence of representative volumes for softening quasi-brittle materials-a failure zone averaging scheme". Elsevier. Computer Methods in Applied Mechanics and Engineering
  
 
<div id="cite-52"></div>
 
[[#citeF-52|[52]]] B. D. Reddy. (1998) "Introductory Functional Analysis". Springer-Verlag
  
 
<div id="cite-53"></div>
 
[[#citeF-53|[53]]] Giusti, SM and Blanco, PJ and de Souza Neto, EA and Feijóo, RA. (2009) "An assessment of the Gurson yield criterion by a computational multi-scale approach", Volume 26. Emerald Group Publishing Limited. Engineering Computations 3 281&#8211;301
  
 
<div id="cite-54"></div>
 
[[#citeF-54|[54]]] Couégnat, Guillaume. (2008) "Approche multiéchelle du comportement mécanique de matériaux composites a renfort tissé". Université Sciences et Technologies-Bordeaux I
  
 
<div id="cite-55"></div>
 
[[#citeF-55|[55]]] J. Lubliner. (1990) "Plasticity Theory". Macmillan
  
 
<div id="cite-56"></div>
 
[[#citeF-56|[56]]] Rozza, G. (2009) "Reduced basis methods for Stokes equations in domains with non-affine parameter dependence", Volume 12. Springer. Computing and Visualization in Science 1 23&#8211;35
  
 
<div id="cite-57"></div>
 
[[#citeF-57|[57]]] Hu, Y.H. and Hwang, J.N. and Perry, S.W. (2002) "Handbook of neural network signal processing", Volume 111. The Journal of the Acoustical Society of America 2525
  
 
<div id="cite-58"></div>
 
[[#citeF-58|[58]]] Bui-Thanh, T. (2007) "Model-constrained optimization methods for reduction of parameterized large-scale systems". Citeseer
  
 
<div id="cite-59"></div>
 
[[#citeF-59|[59]]] Bui-Thanh, T. and Willcox, K. and Ghattas, O. (2008) "Model reduction for large-scale systems with high-dimensional parametric input space", Volume 30. Citeseer. SIAM Journal on Scientific Computing 6 3270&#8211;3288
  
 
<div id="cite-60"></div>
 
[[#citeF-60|[60]]] Carlberg, K. and Farhat, C. (2008) "A Compact Proper Orthogonal Decomposition Basis for Optimization-Oriented Reduced-Order Models", Volume 5964. AIAA Paper 10&#8211;12
  
 
<div id="cite-61"></div>
 
[[#citeF-61|[61]]] Kunisch, K. and Volkwein, S. (2010) "Optimal snapshot location for computing POD basis functions", Volume 44. ESAIM: Mathematical Modelling and Numerical Analysis 3 509
  
 
<div id="cite-62"></div>
 
[[#citeF-62|[62]]] Rowley, C.W. and Colonius, T. and Murray, R.M. (2004) "Model reduction for compressible flows using POD and Galerkin projection", Volume 189. Elsevier. Physica D: Nonlinear Phenomena 1-2 115&#8211;129
  
 
<div id="cite-63"></div>
 
[[#citeF-63|[63]]] Smith, L.I. (2002) "A tutorial on principal components analysis", Volume 51. Cornell University, USA 52
  
 
<div id="cite-64"></div>
 
[[#citeF-64|[64]]] Carlberg, K. and Farhat, C. (2011) "A low-cost, goal-oriented ‘compact proper orthogonal decomposition’ basis for model reduction of static systems", Volume 86. Wiley Online Library. International Journal for Numerical Methods in Engineering 3 381&#8211;402
  
 
<div id="cite-65"></div>
 
[[#citeF-65|[65]]] A. Quarteroni and  R. Sacco and F. Saleri. (2000) "Numerical Mathematics". Springer
  
 
<div id="cite-66"></div>
 
[[#citeF-66|[66]]] Astrid, P. and Weiland, S. and Willcox, K. and Backx, T. (2008) "Missing point estimation in models described by proper orthogonal decomposition", Volume 53. IEEE. Automatic Control, IEEE Transactions on 10 2237&#8211;2251
  
 
<div id="cite-67"></div>
 
[[#citeF-67|[67]]] Maday, Y. and Nguyen, N.C. and Patera, A.T. and Pau, G.S.H. (2007) "A general, multipurpose interpolation procedure: the magic points"
  
 
<div id="cite-68"></div>
 
[[#citeF-68|[68]]] Chaturantabut, Saifon and Sorensen, Danny C. (2011) "Application of POD and DEIM on dimension reduction of non-linear miscible viscous fingering in porous media", Volume 17. Taylor & Francis. Mathematical and Computer Modelling of Dynamical Systems 4 337&#8211;353
  
 
<div id="cite-69"></div>
 
[[#citeF-69|[69]]] Galbally, D. and Fidkowski, K. and Willcox, K. and Ghattas, O. (2010) "Non-linear model reduction for uncertainty quantification in large-scale inverse problems", Volume 81. John Wiley & Sons. International Journal for Numerical Methods in Engineering 12 1581&#8211;1608
  
 
<div id="cite-70"></div>
 
[[#citeF-70|[70]]] Everson, R. and Sirovich, L. (1995) "Karhunen&#8211;Loeve procedure for gappy data", Volume 12. OSA. Journal of the Optical Society of America A 8 1657&#8211;1664
  
 
<div id="cite-71"></div>
 
[[#citeF-71|[71]]] DeVore, R.A. and Iserles, A. and Suli, E. (2001) "Foundations of computational mathematics". Cambridge Univ Pr
  
 
<div id="cite-72"></div>
 
[[#citeF-72|[72]]] Boyd, S.P. and Vandenberghe, L. (2004) "Convex optimization". Cambridge Univ Pr
  
 
<div id="cite-73"></div>
 
[[#citeF-73|[73]]] Ryckelynck, D. (2005) "A priori hyperreduction method: an adaptive approach", Volume 202. Elsevier. Journal of computational physics 1 346&#8211;366
  
 
<div id="cite-74"></div>
 
[[#citeF-74|[74]]] Ryckelynck, D. (2009) "Hyper-reduction of mechanical models involving internal variables", Volume 77. John Wiley & Sons. International Journal for Numerical Methods in Engineering 1 75&#8211;89
  
 
<div id="cite-75"></div>
 
[[#citeF-75|[75]]] Montgomery, D.C. and Runger, G.C. (2010) "Applied statistics and probability for engineers". Wiley
  
 
<div id="cite-76"></div>
 
[[#citeF-76|[76]]] Lovasz, L. and  Pelikan, J. and  Vesztergombi, K. (2003) "Discrete Mathematics: Elementary and Beyond". Springer
  
 
<div id="cite-77"></div>
 
[[#citeF-77|[77]]] J. C. Simo and T. J. R. Hughes. (1998) "Computational inelasticity". Springer
  
 
<div id="cite-78"></div>
 
[[#citeF-78|[78]]] Carlberg, K. and Bou-Mosleh, C. and Farhat, C. (2011) "Efficient non-linear model reduction via a least-squares Petrov&#8211;Galerkin projection and compressive tensor approximations", Volume 86. Wiley Online Library. International Journal for Numerical Methods in Engineering 2 155&#8211;181
  
 
<div id="cite-79"></div>
 
[[#citeF-79|[79]]] Hogben, L. (2006) "Handbook of linear algebra". Chapman & Hall/CRC


<div id="cite-80"></div>

[[#citeF-80|[80]]] Carlberg, K. and Cortial, J. and Amsallem, D. and Zahr, M. and Farhat, C. (2011) "The GNAT nonlinear model reduction method and its application to fluid dynamics problems". 6th AIAA Theoretical Fluid Mechanics Conference, Honolulu, Hawaii, June 2011. AIAA Paper 2011-3112


=Acknowledgments=

The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013), ERC Advanced Grant Project COMP-DES-MAT (Grant Agreement n. 320815), and from the Spanish Ministry of Science and Innovation under grant BIA2011-24258.

=Abstract=

The present work is concerned with the application of projection-based model reduction techniques to the efficient solution of the cell equilibrium equation appearing in (otherwise prohibitively costly) two-scale computational homogenization problems. The main original elements of the proposed Reduced-Order Model (ROM) are fundamentally three. Firstly, the reduced set of empirical, globally supported shape functions is constructed from pre-computed Finite Element (FE) snapshots by applying, rather than the standard Proper Orthogonal Decomposition (POD), a partitioned version of the POD that accounts for the elastic/inelastic character of the solution. Secondly, we show that, for purposes of fast evaluation of the nonaffine term (in this case, the stresses), the widely adopted approach of replacing such a term by a low-dimensional interpolant constructed from POD modes, obtained, in turn, from FE snapshots, leads invariably to ill-posed formulations. To safely avoid this ill-posedness, we propose a method that consists in expanding the approximation space for the interpolant so that it also embraces the gradient of the global shape functions. A direct consequence of such an expansion is that the spectral properties of the Jacobian matrix of the governing equation become affected by the number and particular placement of the sampling points used in the interpolation. The third innovative ingredient of the present work is a points selection algorithm that does acknowledge this peculiarity and chooses the sampling points guided not only by accuracy requirements, but also by stability considerations. The efficiency of the proposed approach is critically assessed in the solution of the cell problem corresponding to a highly complex porous metal material under plane strain conditions. The results obtained convincingly show that the computational complexity of the proposed ROM is virtually independent of the size and geometrical complexity of the considered representative volume, which affords performance gains of over three orders of magnitude with respect to finite element analyses without significantly sacrificing accuracy, hence the appellation High-Performance ROM.

=1 Introduction=

==1.1 Motivation==

===1.1.1 Moore's and Parkinson's laws in computational modeling===

A basic precept in constructing a successful computational model, one that strikes the right balance between accuracy and simplicity, is, quoting M. Ashby [1], “to unashamedly distort the inessentials in order to capture the features that really matter”. Yet with the general availability of fast computers with large memories, and the exponential growth of such capacities presaged by Moore's law, it is becoming increasingly difficult to resist the temptation to violate this basic precept and, in the quest for higher accuracy, indiscriminately include features that do not contribute significantly to the overall response of the modeled system. As pointed out by Venkataraman et al. [2], this has led to the paradox that certain engineering problems that were computationally tackled for the first time 40 years ago appear to remain comparatively costly, even though the capacity of computers has increased one trillion-fold during this period. This apparent paradox is of course not exclusive to computational physical modeling, but rather just another manifestation of the general adage that “software applications will invariably grow to fill up increased computer memory, processing capabilities and storage space”, known as the computerized Parkinson's law [3].

===1.1.2 The two-scale homogenization problem===

An engineering research area that seems to be falling into this paradigmatic trend is, as we intend to argue in the ensuing discussion, the two-scale modeling, via homogenization, of materials such as composites and polycrystalline metals, which exhibit a clearly heterogeneous composition at some lower length scale (the micro- or meso-scale) but can be regarded as homogeneous at the length scale at which engineering predictions are needed (the macro-scale). The actual challenge in the macro-scale continuum description of the mechanical behavior of such materials lies in the determination of a constitutive connection between macro-stresses and macro-strains that accurately reflects the material properties and geometrical arrangement of the distinct phases at the micro-/meso-scale. Under the hypotheses of, on the one hand, either periodicity or statistical homogeneity and, on the other hand, scale separation, it is well known [4] that this constitutive link can be systematically established by solving, for each point at the coarse scale, a boundary value problem (BVP) on a certain representative microscopic subdomain (the cell equilibrium problem). In a strain-driven formulation of this BVP, the macro-strain at a given point acts as a “loading parameter”, in the form of appropriate essential boundary conditions, whereas the associated macro-stress is obtained through volume averaging (i.e., homogenization) of the corresponding micro-stress field.
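As a purely schematic illustration of the strain-driven workflow just described, the macro-stress at a coarse-scale point is recovered by volume averaging the micro-stress field returned by the cell solver. The cell solver, the data layout and all names in the sketch below are assumptions made for illustration, not part of the original formulation.

<syntaxhighlight lang="python">
import numpy as np

def homogenized_stress(macro_strain, solve_cell_bvp, quad_weights):
    """Strain-driven homogenization at a single coarse-scale point (sketch).

    macro_strain   : macroscopic strain tensor acting as the "loading parameter"
    solve_cell_bvp : hypothetical cell solver; it imposes macro_strain through the
                     essential boundary conditions of the cell BVP and returns the
                     micro-stress at each quadrature point, shape (n_gauss, 3, 3)
    quad_weights   : quadrature weights times Jacobian determinants, shape (n_gauss,)
    """
    micro_stress = solve_cell_bvp(macro_strain)
    cell_volume = quad_weights.sum()
    # Macro-stress = volume average (homogenization) of the micro-stress field
    return np.tensordot(quad_weights, micro_stress, axes=1) / cell_volume
</syntaxhighlight>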

===1.1.3 Evolution of homogenization approaches===

When the discipline of continuum micromechanics began to flourish in the 1960s and 1970s, research focus was fundamentally directed, in the absence of powerful computational tools, towards the development of approximate, closed-form solutions of this BVP for certain types of geometrically and constitutively simple micro-structures. To arrive at these solutions, pioneers such as R. Hill [5], Z. Hashin [6] and K. Tanaka [7] struggled to identify and retain, guided sometimes by an uncanny physical intuition, only those features of the microstructural response that have a significant impact on the accuracy of coarse-scale predictions, thus filtering out the “inessentials”. The advent of increasingly fast computers, and the concomitant advancement of numerical methods for solving differential equations, progressively fostered a shift towards the development of homogenization techniques that rely less on analytical results and more on numerical solutions, thereby widening their scope; for comprehensive surveys of these semi-analytical homogenization methods, the reader is referred to Refs. [8,9,10]. The approach termed by some authors [11,12] computational homogenization (or direct computational homogenization) can be regarded as the culmination of this shift towards purely numerical methods: in this approach, the microscopic boundary value problem at each coarse-scale point is attacked using no other approximation than the spatial discretization of the pertinent solution strategy (the finite element method, for instance), thus circumventing the need for simplifying assumptions regarding the topological arrangement of the micro-phases and/or their collective constitutive behavior.

1.1.4 Infeasibility of direct computational homogenization methods

Although no doubt the most versatile and accurate homogenization technique, with no other limitation in scope than that imposed by the aforementioned hypotheses of statistical homogeneity and scale separation, the direct computational homogenization approach squarely violates the modeling precept outlined at the outset –-it does not discriminate between essential and irrelevant features in solving the fine-scale BVPs–-, causing the accuracy/parsimony balance to tilt unduly towards the accuracy side and far from the parsimony one. The consequence is its enormous computational cost (in comparison with analytical and semi-analytical homogenization techniques). For instance, when the Finite Element (FE) method is the strategy of choice for both scales –-commonly known as the multilevel finite element method, abbreviated FE² method [13]–-, one has to solve, at every increment and every Newton-Raphson iteration of the global, coarse-scale FE analysis, and for each Gauss point of the coarse-scale mesh, a local, non-linear finite element problem which, in turn, may involve several thousand degrees of freedom. To put it metaphorically, in the FE² method, the complexity of the overall analysis is subjected to the “tyranny of scales” [14,15,16] –-it depends on the resolution of both coarse- and fine-scale grids. This explains why, presently, almost five decades after R. Hill [17] laid the foundations of non-linear continuum micro-mechanics, and despite the dizzying speed of today's computers, the routine application of this general and versatile homogenization method to model the inelastic behavior of large structural systems featuring complex micro-/meso-structures is still considered intolerably costly, especially when the system has to be analyzed for various configurations, as in design optimization or inverse analysis.

1.1.5 Shortcomings of semi-analytical approaches

The foregoing review and critical appraisal of how things have evolved in the field of continuum micromechanics clearly teaches us that, to defeat the tyranny of scales, and develop a successful homogenization method, one should strive to adhere to Ashby's modeling principle and introduce –-as actually done in analytical and semi-analytical approaches–- appropriate simplifications in dealing with the fine-scale BVPs. Negating the need for such simplifications in the hope that the upcoming generations of peta-, exa- (and so forth) flops computers will eventually come to the rescue and cope with the resulting complexity, is, in the authors' opinion, a mistaken view, at odds with the true spirit of physical modeling, and the culprit for the paradoxical trend alluded to earlier.

Yet finding, for a given, arbitrarily complex microstructure, a suitable set of simplifying assumptions, let alone incorporating such assumptions in a consistent manner into the formulation of the local BVPs, is in general a formidably challenging endeavor. The admittedly brilliant idea underlying many advanced semi-analytical homogenization methods, such as the Transformation Field Analysis (TFA) [18] and variants thereof [19,20,21,22], of pre-computing certain characteristic operators (strain localization and influence tensors) by solving a carefully chosen battery of fine-scale BVPs has only partially relieved modelers of this burden: these methods are still predicated, to a greater or lesser extent, on ad-hoc assumptions connected with the constitutive description of the involved phases. Consideration of new materials with unstudied compositions will thereby require additional research efforts by specialists in the field and eventual modifications of the corresponding mathematical and numerical formulations –-in contrast to direct computational homogenization approaches, such as the FE² method, in which the formulation is “material-independent”, and hence more versatile.

1.2 Goal

The current state of affairs in the field of two-scale homogenization thus seems to call for a unified homogenization approach that combines, to some extent, the advantages of direct computational homogenization and semi-analytical techniques. It would be desirable to have a homogenization method with a computational cost virtually independent of the geometric complexity of the considered representative volume, as in semi-analytical techniques –-i.e., one that defies the tyranny of scales. At the same time, it would also be desirable to arrive at a method whose mathematical formulation dispenses with ad-hoc, simplifying assumptions related to the composition of the heterogeneous material; i.e., one enjoying the versatility, unrestricted applicability and “user-friendliness” –-insofar as it would totally relieve the modeler from the often exceedingly difficult task of envisaging such assumptions–- of direct computational homogenization methods. The goal of the present work is to show that these desirable, apparently conflicting attributes can be conceivably achieved, for arbitrarily complex heterogeneous materials well into the inelastic range, by using the so-called [23] Reduced-Basis (RB) approximation in the solution of the cell BVPs.

1.3 Reduced-basis approach

1.3.1 Essence of the approach

Generally speaking, the reduced-basis approximation is simply a class of Galerkin approximation procedures that employs, as opposed to the FE method, but similarly to classical Rayleigh-Ritz solution techniques [24], globally supported basis functions. The main difference with respect to classical Rayleigh-Ritz schemes is that these basis functions or modes are not constructed from globally supported polynomials or transcendental functions (sines, cosines ...), but rather are determined from a larger set of solutions of the BVP, previously computed –-using the finite element (FE) method or other classical solution techniques–- at appropriately selected values of the input of interest. These functions are commonly termed empirical basis functions [25], the qualifier empirical meaning “derived from computational experiments”.

As noted earlier, the input of interest or “loading” parameter in the fine-scale cell problem is the macro-scale strain tensor. Accordingly, the starting point for constructing the basis functions in the case under study would consist in solving a battery of cell BVPs for various, judiciously chosen macro-strain values. In the linear elastic regime, for instance, the displacement solution depends linearly on the prescribed macro-strain tensor, and hence it would suffice to perform 6 linear FE analyses (stretches in three directions and three shears); the corresponding modes would simply arise from orthonormalizing the displacement solutions of these problems, as sketched below. By constraining the cell to deform only into these 6 pre-defined shape functions, one automatically obtains a genuine reduced-order model (ROM) of the cell. Note that the dimension of this ROM is totally independent of the spatial discretization (and, therefore, of the geometric complexity of the cell) used in the preliminary or offline FE analyses.
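This offline step admits a very compact computational expression. The sketch below (Python/NumPy; array names are illustrative assumptions, and the plain Euclidean inner product is used for brevity) orthonormalizes the six elastic solutions into an elastic reduced basis via a thin QR factorization:

```python
import numpy as np

def elastic_reduced_basis(U_elastic):
    """Orthonormalize the elastic snapshots into a reduced basis.

    U_elastic : (n_dof, 6) array whose columns are the FE displacement
    fluctuation solutions for the six unit macro-strain states
    (three stretches and three shears).
    Returns an (n_dof, 6) matrix with orthonormal columns spanning the
    same space (thin QR = Gram-Schmidt in disguise).
    """
    Q, _ = np.linalg.qr(U_elastic)
    return Q

# Usage sketch: with q = Q.T @ u (Euclidean inner product assumed),
# any elastic solution u is recovered, up to FE accuracy, as Q @ q.
```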

1.3.2 Dimensionality reduction

Things get a little bit more complicated in the inelastic range. The solution of the problem not only ceases to bear a linear relation to the prescribed macro-strain tensor: it also depends in general on the entire history of this coarse-scale kinematic variable. As a consequence, instead of single linear FE analyses, it becomes necessary to perform nonlinear FE studies on the cell subjected to various, representative macro-strain histories. The outcome of these FE calculations is a data set comprising an ensemble of hundreds or even thousands (depending on the number of time steps into which the strain histories are discretized) of displacement field solutions (also called snapshots). Were all these snapshots barely correlated with each other, the dimension of the manifold spanned by them would prove overly high, rendering the entire approach impractical –-it would no longer qualify as a truly reduced basis method. Fortunately, as we show in the present work, most of these snapshots in general do display strong linear correlations with each other –-i.e., they carry redundant information–- and, in addition, contain deformation modes that are irrelevant to the quality of coarse-scale predictions –-in the sense that their associated stress fields have vanishing or negligible volume averages. Accordingly, all that is required to obtain a much lower dimensional representation of the solution data set, and therewith the desired reduced basis, is an automatic means to identify and remove this redundant and irrelevant information, while preserving, as much as possible, its essential features. This problem of removing unnecessary complexity from huge data sets so as to uncover dominant patterns is the central concern of disciplines such as digital image and video compression [26] and pattern recognition [27], to name but a few, and thereby many efficient dimensionality reduction (or data compression, in more common parlance) algorithms already exist to deal with it. In the present work, we employ the arguably simplest and most popular of such algorithms: the Proper Orthogonal Decomposition (POD).
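A minimal sketch of this dimensionality reduction step is given below (Python/NumPy). It performs a plain POD via the singular value decomposition of the snapshot matrix and truncates according to an energy criterion; the names, the Euclidean inner product and the tolerance are illustrative assumptions, not the exact procedure of Chapter 3 (which employs a finite-element-consistent inner product and an elastic/inelastic split).

```python
import numpy as np

def pod_basis(X, energy_tol=1.0e-4):
    """Dominant POD modes of the snapshot matrix X.

    X : (n_dof, n_snp) matrix whose columns are displacement
        fluctuation snapshots.
    energy_tol : fraction of the snapshot "energy" (sum of squared
        singular values) that the truncation is allowed to discard.
    """
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
    return U[:, :n_modes], s[:n_modes]
```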

It becomes clear from the above discussion that, in the reduced basis approach, the inescapable task of discriminating what is essential and what is not is automatically carried out by these dimensionality reduction methods. In other words, the “burden” of simplification of the cell BVP in the RB approach is entirely borne by the computer, and not by the modeler, as it occurs in analytical and, to a lesser extent, in semi-analytical homogenization methods. It is precisely this feature that confers the advantages of versatility and “user-friendliness” alluded to earlier.

1.3.3 Numerical integration

Once the global shape functions have been computed, the next step in the construction of the reduced-order model of the cell is to introduce an efficient method for numerically evaluating the integrals appearing in the weak form of the cell BVP. Of course one can simply use the same Gauss quadrature formulae and the same sampling points (whose total number scales with the number of mesh nodes) as the underlying finite element model. But this would be akin to integrating, say, a third-order polynomial function using thousands of sampling points –-a profligate waste of computational resources. Since displacement solutions for the cell BVP are constrained to lie in a reduced-order space of small dimension, it is reasonable to expect that the corresponding stresses, internal forces and Jacobians will also reside in reduced-order spaces of comparably small dimensions, and consequently, a commensurately small number of sampling points should in principle suffice to accurately evaluate the corresponding integrals. The challenging questions that have to be confronted are where to locate these sampling points and, loosely speaking, how to determine their associated weighting functions so that maximum accuracy in the integration is attained.

Approaches found in the model reduction literature that, directly or indirectly, deal with these fundamental questions can be broadly classified as either interpolatory approaches [28,29,30,31,32] or Gauss-type quadrature approaches [33,34], the former having a wider scope, for they also serve to reduce models derived from finite difference approximations [32,31]. The starting point for both is to collect, during the finite element calculations alluded to earlier, snapshots of the integrand or part of the integrand. In interpolatory approaches, the resulting ensemble of snapshots is submitted to a dimensionality reduction process, in the manner described previously for the solution basis functions, in order to compute a reduced set of dominant, orthogonal modes. Such orthogonal modes are employed to construct an interpolant of the integrand, using as interpolation points (the desired sampling points) those minimizing the interpolation error over the finite element snapshots. In the spirit of classical, interpolatory quadrature schemes such as Newton-Cotes [35], the resulting quadrature formula, and therefore the weighting functions, emerges from approximating the integrand by this reduced-order interpolant. In Gauss-type quadrature procedures [33,34], by contrast, the selection of sampling points and the calculation of the accompanying weighting factors are carried out simultaneously, guided by a criterion of minimum integration error over the snapshots –-in a vein, in turn, similar to that used in classical Gauss-Legendre quadrature rules [35]. A more comprehensive review of both types of integration methods can be found in Ref. [36].
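For concreteness, the sketch below reproduces the greedy point-selection step common to DEIM-type interpolatory methods; it is included only to fix ideas and is emphatically not the stability-aware selection algorithm proposed later in this work (Section 4.5). Names are illustrative.

```python
import numpy as np

def greedy_interpolation_points(Psi):
    """DEIM-style greedy selection of sampling (interpolation) points.

    Psi : (n, m) matrix whose columns are orthogonal modes of the
          quantity to be interpolated (e.g., modes of integrand snapshots).
    Returns m row indices at which the quantity is to be sampled.
    """
    n, m = Psi.shape
    idx = [int(np.argmax(np.abs(Psi[:, 0])))]
    for j in range(1, m):
        P = Psi[idx, :j]                     # modes restricted to chosen points
        c = np.linalg.solve(P, Psi[idx, j])  # interpolation coefficients
        r = Psi[:, j] - Psi[:, :j] @ c       # residual of the current interpolant
        idx.append(int(np.argmax(np.abs(r))))
    return idx
```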

In the BVP under consideration, the output of interest is the volume average of the stresses over the cell domain and, therefore, accuracy is required not only in the integration of the equilibrium equation, but also in the approximation of the stresses themselves. This is the reason why, notwithstanding the presumably higher efficiency of Gauss-type quadrature (less integration error for the same number of sampling points), attention is focused in the present work on interpolatory integration strategies, the variable subject to spatial interpolation being precisely the stresses.

1.4 Originality of this work

The idea of exploiting the synergistic combination of multiscale modeling and reduced basis approximation is admittedly not new. In the specific context of two-scale homogenization, it has been recently explored by Boyaval [37], Yvonnet et al. [38], and Monteiro et al. [39]. Traces of this idea can also be found in articles dealing with more general hierarchical multiscale techniques –-that do not presuppose either scale separation or periodicity/statistical homogeneity, or both–-, namely, in the multiscale finite element method [40,41,42] and in the heterogeneous multiscale method [43,44]. However, it should be noted that none of the above cited papers confronts the previously described, crucial question of how to efficiently integrate the resulting reduced-order equations, simply because, in most of them [37,40,41,42,43,44], integration is not an issue –-the fine-scale BVPs addressed in these works bear an affine relation to the corresponding coarse-scale input parameter, as in linear elasticity, and, consequently, all integrals can be pre-computed, i.e., evaluated offline, with no impact on the online computational cost. Thus, the development of cell reduced-order models endowed with efficient, mesh-size independent integration schemes –-able to handle any material composition–- is a research area that, to the best of the authors' knowledge, still remains uncharted.

1.4.1 Main original contribution

The theory underlying ROMs that incorporate efficient interpolatory integration schemes, henceforth termed High-Performance ROMs (HP-ROMs), is still at an embryonic stage of development –-the first general proposal for parametrized BVPs dates back to 2004 [28]–- and many fundamental issues remain to be addressed. Foremost among these is the crucial question of the well-posedness of the resulting system of algebraic equations: does the replacement of the integrand, or of the nonaffine term in the integrand, by a reduced-order interpolant always lead to a well-posed discrete problem? Examination of the reduced basis literature indicates that apparently no researcher has so far been confronted with ill-posed reduced-order equations, a fact that might certainly promote the view that uniqueness of solution can be taken for granted whenever the full-order model is well-posed. Unfortunately, this is not always so: we demonstrate in this work that the choice of the reduced-order space in which the interpolant of the integrand resides has a profound impact on the well-posedness of the discrete problem. In particular, we show that, in the case of the cell boundary-value problem, the widely adopted [29] approach of determining the basis functions for this space from (converged) FE snapshots invariably leads to ill-posed discrete formulations. The main original contribution of the present work to the field of reduced-order modeling is the development of an interpolatory integration method that safely overcomes this type of ill-posedness. The gist of the method is to expand the interpolation space so that it embraces, aside from the span of the POD stress basis functions, the space generated –-and herein lies the novelty–- by the gradient of the (reduced-order) shape functions.

1.4.2 Other original contributions

An inevitable consequence of adopting the aforementioned expanded space approach is that, in contrast to the situation encountered when using standard interpolatory schemes in other parametrized BVPs [29], in the proposed method the number and particular placement of sampling points within the integration domain notably influence the spectral properties (positive definiteness) of the Jacobian matrix of the governing equation, and therefore the convergence characteristics of the accompanying Newton-Raphson solution algorithm. Another innovative ingredient of the present work is a point selection algorithm that acknowledges this peculiarity and chooses the desired sampling points guided not only by accuracy requirements (minimization of the interpolation error over the FE stress snapshots), but also by stability considerations.

Lastly, a further original contribution of the present work is the strategy for computing the global shape functions and stress basis functions. Instead of directly applying the POD over the snapshots to obtain the dominant modes, we first decompose these snapshots into (mutually orthogonal) elastic and inelastic components, and then apply separately the standard POD to the resulting elastic and inelastic snapshots. In so doing, the resulting reduced-order model is guaranteed to deliver linear elastic solutions with the same accuracy as the underlying (full-order) finite element model.

1.5 Organization of the document

The remainder of this document is organized as follows. Chapter 2 is devoted to the formulation and finite element implementation of the cell equilibrium problem. Chapter 3, on the other hand, is concerned with the offline computation of the POD reduced basis. In Section 3.2, the Galerkin projection of the cell equilibrium equation onto the space spanned by the POD basis is presented. Chapter 4 outlines the procedure for efficiently integrating the reduced-order equilibrium equation. In Section 4.3, we discuss the crucial issue of where the low-dimensional approximation of the stress field should lie in order to obtain an accurate and at the same time well-posed reduced-order problem; in Section 4.3.3, the original proposal of expanding the stress basis with the gradient of the (reduced-order) shape functions is put forward. Section 4.4 delineates the derivation of the modal coefficients in the approximation of the stress field in terms of the stress values computed at the set of pre-specified sampling points. The determination of the optimal location of these sampling points is addressed in Section 4.5. For the reader's convenience and easy reference, both the offline and online steps leading to the proposed reduced-order model are summarized in Section 4.6. Chapter 5 is dedicated to numerically assessing the efficiency of the proposed model reduction strategy. Finally, in Chapter 6, some concluding remarks are presented.

In order to preserve the continuity of the presentation, details concerning the algorithmic implementation of the computation of the reduced basis and the selection of sampling points are relegated to the appendices.

2 First-order homogenization

2.1 Basic assumptions

The fundamental assumptions upon which the homogenization approach followed in this work rests are presented below. For a more in-depth description of the underlying axiomatic framework, the reader is referred to Refs. [45,46,47,48].

2.1.1 Existence of a representative subvolume

The homogenization approach employed in this work –-commonly known as first-order homogenization–- is only valid for materials that either display statistical homogeneity or spatial periodicity [45]. In both types of materials, it is possible to identify a subvolume that is representative, in a sense that will be properly defined later, of the heterogeneous material as a whole. Furthermore, this subvolume has to be small enough that it can approximately be regarded as a point at the coarse-scale level [4] (i.e., its characteristic length must be much smaller than that of the macro-continuum, see Figure 1). This is the so-called scale separation hypothesis. In micro-structures that exhibit statistical homogeneity, this domain receives the name of Representative Volume Element (RVE), whereas in micro-structures that display periodicity, it is commonly known as repeating unit cell (RUC), or simply unit cell [45]. In the sequel, both the acronym RVE and the more generic term “cell” will be used interchangeably to refer to this subvolume.

Figure 1: First-order homogenization.

2.1.2 Decomposition into macroscopic and microscopic contributions

The displacement at any point is assumed to be decomposed into macroscopic and microscopic parts; under the hypothesis of infinitesimal deformations, this decomposition can be written as:

(2.1)

where the first term stands1 for the macroscopic contribution, driven by the macroscopic strain tensor ε_M (the input parameter in the BVP we wish to efficiently solve), and the second term denotes the so-called displacement fluctuation field (in turn, the basic unknown of this BVP). The macroscopic term represents the displacements that would have been observed had the material been homogeneous, whereas the fluctuating contribution accounts for the deviations from this homogeneous state due to the presence of heterogeneities [49].

The decomposition of the microscopic strain tensor follows from simply differentiating Eq.(2.1):

(2.2)

Notice that, in writing Eq.(2.2), one is tacitly assuming that the macroscopic strain tensor is uniform over the spatial length associated to the cell size . This proviso renders the employed homogenization approach –-commonly termed first-order homogenization [12,48]–- not suitable for appropriately handling localization problems. For this reason, in the present work, we shall presuppose that localization of strains does not take place within the deforming cell.

Implicit in the scale separation assumption is the fact that fine-scale deformations only influence coarse-scale behavior through their volume average over the RVE; this implies that

(2.3)

where stands for the volume of the RVE:

(2.4)

By virtue of Eq.(2.2), this condition is equivalent to imposing that the volume average of the fluctuation contribution vanish:

(2.5)

(1) Some remarks concerning notation are in order here. Firstly, macroscopic variables will be identified by appending a subscript “M”, while variables associated with the fine scale will be designated by bare symbols. For instance, we shall write ε_M and ε to denote the macroscopic strain tensor and the fine-scale strain field, respectively. Secondly, readers accustomed to classical continuum mechanics notation are reminded that, in this work, the bare displacement symbol does not represent the displacement field, but rather the fluctuating part –-due to the presence of heterogeneities in the concerned RVE–- of such a field.

2.1.3 Hill-Mandel principle of macro-homogeneity

The scale bridging is completed by the Hill-Mandel principle of macro-homogeneity, which states that the stress power at any point of the macro-continuum must equal the volume average (over the RVE) of the corresponding fine-scale stress power field (thus making the coarse- and fine-scale continuum descriptions energetically equivalent [8]). The variational statement of this principle is as follows [47]: let σ be a statically admissible, fine-scale stress field, and σ_M the associated macroscopic stress tensor; then, the identity

(2.6)

must hold for any kinematically admissible strain rate. Inserting the rate form of Eq.(2.2) into the above equation, and by virtue of Eq.(2.3), it follows that, for identity (2.6) to hold, the macroscopic stress tensor σ_M must be the volume average of the microscopic stress field σ:

(2.7)

Another necessary condition for Eq.(2.6) to hold is that:

(2.8)

for any kinematically admissible displacement fluctuation field. It is shown in Ref. [47] that this condition amounts to requiring that the external surface tractions and body forces in the RVE be purely reactive –-i.e., a reaction to the kinematic constraints imposed upon the RVE. This is why the upcoming cell equilibrium equation contains neither external boundary traction nor body force terms.

2.2 RVE equilibrium problem

2.2.1 Boundary conditions

To address the issue of boundary conditions (BCs), it proves convenient first to recast, using Gauss' theorem, equation (2.5) in terms of boundary displacement fluctuations:

(2.9)

where represents the boundary of , is the outer unit normal vector to and the symbol denotes the symmetrized tensor product. Any type of boundary conditions prescribed on the cell must obey this condition.

The natural choice for a repeating unit cell (RUC) –-defined as the smallest volume that allows one to generate, by periodic repetition, the entire microstructure of a periodic medium [49]–- is to employ periodic boundary conditions for the displacement fluctuations. By definition, the boundary of an RUC comprises pairs of identical –-appropriately shifted in space–- surfaces and (), with the property that for every , , being, loosely speaking, the “counterpart” of in . Periodicity of displacement fluctuations implies that for every , . (See Refs. [8,49] for more details on this type of BCs).

In statistically homogeneous micro-structures, by contrast, the situation is not so clear-cut. The concept of RVE admits various, alternative interpretations and, as a consequence, there is a certain latitude in the choice of boundary conditions (vanishing fluctuations, uniform tractions, quasi-periodic conditions …); the reader interested in this topic is urged to consult Refs. [45,50,51]. Arguably, the most practical choice –-from an implementational point of view–- in a strain-driven finite element context (and the one adopted here) is to use vanishing boundary conditions for the displacement fluctuations (), and correspondingly determine the RVE as the smallest subvolume of the statistically homogeneous microstructure whose mechanical response is, under this type of boundary conditions, indistinguishable from that of the material-at-large.

2.2.2 Trial and test function spaces

It is not hard to see that both periodic and vanishing BCs are just particular instances of the more general case of homogeneous boundary conditions [52], i.e., conditions of the form:

(2.10)

being the corresponding linear operator. An immediate corollary of this observation is that the set of all kinematically admissible displacement fluctuation fields, henceforth denoted by , will form, for both types of boundary conditions, a vector space; this space is defined formally as:

(2.11)

Here, stands for the Sobolev space of functions possessing square integrable derivatives over . Since the test functions are kinematically admissible variations, we can write

(2.12)

from which it follows that , i.e., the spaces of trial and test functions coincide.

Observation 1: As will be further explained later, having the structure of a linear space confers a unique advantage –-not enjoyed by ROMs of BVPs with general inhomogeneous boundary conditions [25]–- for model reduction purposes: reduced-order responses will invariably and automatically conform to the imposed boundary conditions, regardless of the level of approximation.

2.2.3 Variational statement

Consider a time discretization of the interval of interest . The current value of the microscopic stress tensor at each time step is presumed to be entirely determined by, on the one hand, the current value of the microscopic strain tensor , and, on the other hand, a set of microscopic internal variables that encapsulate the history of microscopic deformations. The relationship between these variables is established by (phenomenological) rate constitutive equations; these equations may vary from point to point within the cell (multiphase materials). Likewise, the considered RVE may also contain voids distributed over the domain. The (incremental) RVE equilibrium problem at time can be stated as follows (see Ref. [47]): given the initial data and the prescribed macroscopic strain tensor , find such that

(2.13)

for all . The actual output of interest in this fine-scale BVP is not the displacement fluctuation field per se, but rather the macroscopic stress tensor :

(2.14)

In order to keep the notation uncluttered, the superscript “n+1” will hereafter be dropped and all quantities will be assumed to be evaluated at the current time step; the pertinent distinction will be introduced only when confusion is likely to arise.

2.3 Finite element formulation

Let be a finite element discretization of the cell. It will be assumed that this discretization is fine enough to consider the exact and FE approximated solutions indistinguishable at the accuracy level of interest. Let ( denotes the number of nodes of the discretization) be a set of shape functions associated to this discretization such that

(2.15)

In the case of vanishing displacement fluctuations, this can be achieved by simply ensuring that:

(2.16)

As for periodic boundary conditions, let us suppose, for simplicity, that the spatial grid is such that every node () has a “counterpart” node (and vice versa), and that, in addition, no nodes are placed at the intersection of adjoining surfaces. Such being the case, it can be readily shown that the conditions that the shape functions of a given node and its counterpart have to satisfy for proviso (2.15) to hold are:

(2.17)

(2.18)

The reader is referred to Refs. [49,53] for guidelines on how to enforce periodicity conditions in a more general scenario.

Now we approximate and as:

(2.19)

(2.20)

where and () denote the nodal values of the displacement fluctuations and test functions, respectively. Inserting these approximations in Eq.(2.13), and exploiting the arbitrariness of the coefficients (), one arrives at the following set of discrete equilibrium equations (repeated indices imply summation):

(2.21)

Introducing Voigt's notation1, the above equation can be expressed in matrix format as:

(2.22)

where represents, with a slight abuse of notation2, the column matrix form of the stress tensor ( and components for plane and 3D problems, respectively), and is the classical, global “B-matrix” connecting the strains at a given point with the vector containing all nodal displacements:

(2.23)

As usual, numerical evaluation of the integral in Eq.(2.22) is carried out by Gaussian quadrature:

(2.24)

Here, stands for the total number of Gauss points of the mesh; denotes the weight associated with the Gauss point (this weight includes both the quadrature weight itself and the corresponding Jacobian determinant); and and stand for the B-matrix and the stress vector at the Gauss point in question, respectively.
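For reference, the quadrature (2.24) amounts to the simple loop sketched below (Python/NumPy; names are illustrative, and dense, globally sized B-matrices are assumed for brevity):

```python
import numpy as np

def fe_internal_forces(B_gauss, sigma_gauss, weights):
    """Evaluate the FE equilibrium residual of Eq. (2.24).

    B_gauss     : list of (n_strain, n_dof) B-matrices, one per Gauss point
    sigma_gauss : list of (n_strain,) stress vectors at the Gauss points
    weights     : (n_gauss,) quadrature weights times Jacobian determinants
    """
    r = np.zeros(B_gauss[0].shape[1])
    for B, sig, w in zip(B_gauss, sigma_gauss, weights):
        r += w * (B.T @ sig)   # w_g * B_g^T sigma_g, summed over all Gauss points
    return r
```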

(1) Here, it is convenient to use the so-called modified Voigt's notation rather than the standard one. In the modified Voigt's notation, both stress and strain tensors are represented as column vectors ( and , respectively) in which the shear components are multiplied by √2. The advantage of this notation over the conventional, engineering Voigt's notation is the equivalence between norms; viz., the Euclidean norm of the column vector coincides with the Frobenius norm of the corresponding tensor. The reader is urged to consult [54] for further details on this notation.

(2) The same symbol is used to denote the stress tensor and its counterpart in Voigt's notation.

3 Reduced-order model of the RVE

3.1 Computation of reduced basis

A basic, intuitive picture of the strategy for computing the reduced basis onto which to project the cell equilibrium equation (2.13) was already given in the introductory Chapter. In the following, we put the idea behind this strategy on a more rigorous footing. We begin by noting that, from a functional analysis standpoint, the term model reduction is conceptually akin to the more common term model discretization, since both connote transitions from higher-dimensional to lower-dimensional solution spaces. Indeed, whereas model discretization is used to refer to the (classical) passage from the infinite dimensional space to the finite element subspace , model reduction denotes a transition from this finite dimensional space to a significantly smaller manifold –-the reduced-order space. This latter transition is not carried out directly, but in two sequential steps, namely, sampling of the parameter space and dimensionality reduction. The precise meaning of these terms is explained below.

3.1.1 Sampling of the input parameter space

In constructing the finite element space of kinematically admissible functions , the only restrictions placed on the motion of the mesh nodes are those imposed at the boundaries through conditions (2.16) (for RVEs) and (2.17), (2.18) (for RUCs). The finite element solution space, thus, does not presuppose any constraint on the motion of the interior nodes of the mesh.

However, in actuality, interior nodes cannot fluctuate freely and independently of each other; rather, they move according to deformational patterns dictated by the constitutive laws that govern the mechanical behavior of the distinct phases in the cell –-as noted by Lubliner [55], constitutive laws can be regarded as internal restrictions on the kinds of deformation a body can suffer. This means that the solution of the finite element equilibrium equation (2.13) for given values of the macro-strain tensor actually lives in a smaller subspace (in the parlance of model reduction [23,56], the manifold induced by the parametric dependence of the BVP on the input variables).

Yet, in general, this manifold cannot be precisely determined –-such a task would require finite element analyses of the cell under all conceivable strain paths. Rather, one has to be content with constructing an approximation of it as the span of the displacement fluctuation solutions obtained for a judiciously chosen set of input strain histories . Suppose, for simplicity, that each of these strain histories is discretized into an equal number of steps , and let

(3.1)

denote the displacement fluctuation solution at the time step of the strain history (, ). The approximating space for , henceforth called the snapshots space, is then defined as:

(3.2)

being the total number of snapshots. Likewise, the matrix containing, in columns, the nodal values of these displacement fluctuations solutions:

(3.3)

will correspondingly be termed the (displacement fluctuations) snapshot matrix.

Remark 1: The first error incurred in solving the cell equilibrium equations using the reduced basis approach arises in approximating by this space of snapshots . In order to keep this error to a minimum, one should strive to select the set of strain histories in such a way that the span of the corresponding displacement fluctuation solutions covers as much as possible of the space (or at least the region or regions of particular interest), while, at the same time, trying to keep the total number of snapshots in check –-the computational cost of the subsequent dimensionality reduction process grows considerably with the size of the snapshot matrix. In this respect, it may be interesting to note that this task of sampling the input parameter space (also known as “training”1) is somewhat akin to the experimental process whereby material parameters of standard phenomenological models are calibrated in a laboratory. In this analogy, the RVE plays the role of the corresponding experimental specimen, whereas the macro-strain training trajectories represent the loading paths of the pertinent calibration tests. As opposed to the situation encountered in standard laboratory experiments, however, in the training process one has “privileged” information regarding the phenomenological behavior of the constituents. Hindsight and elementary physical considerations can therefore aid in restricting the number of strain histories (and hence of snapshots) necessary to characterize the response. For instance, if the behavior of the materials that compose the cell is governed by rate-independent constitutive models, we know beforehand that it is not necessary to study the response under varying rates of deformation. Strategies for efficiently sampling the input parameter space in general model reduction contexts can be found in Refs. [58,59,60,61].

(1) The term “training”, which, incidentally, is borrowed from the neural network literature [57], is used throughout the text to refer to the offline generation of snapshots.

3.1.2 Dimensionality reduction

The next and definitive step in the transition from the high-dimensional finite element space to the desired reduced-order space –-in which the cell BVP is to be finally posed–- is the dimensionality reduction process, in which, as pointed out in the introductory Chapter, the dominant deformational patterns of the cell response are identified and unveiled by washing out the “inessentials”.

3.1.2.1 Proper Orthogonal Decomposition

To accomplish this central task, we employ here the Proper Orthogonal Decomposition (POD). The formal statement of the POD problem goes as follows: given the ensemble of snapshots , find a set of orthogonal basis functions () such that the error defined as

(3.4)

is minimized. Here, represents the projection of onto the subspace spanned by the basis functions , and symbolizes the norm. It is shown in Ref. [62] that the solution of this optimization problem can be obtained by first solving the following eigenvalue problem:

(3.5)

where is a symmetric matrix defined as:

(3.6)

i.e., is the inner product between snapshots and . In statistical terms, can be interpreted as a covariance matrix: the off-diagonal entries capture the degree of linear correlation or redundancy between pairs of snapshots (the covariance), whereas the diagonal terms are the variances [63]. The goal in diagonalizing by solving the eigenvalue problem (3.5) is to re-express the snapshot data in axes that filter out the redundancies and reveal the actual dominant displacement fluctuation patterns.

Since , expression (3.6) for the covariance matrix can be rewritten in terms of the snapshot matrix as follows:

(3.7)

or in matrix form:

(3.8)

where

(3.9)

Note that, except for the density factor, this matrix is similar to the “mass matrix” appearing in finite element implementations of dynamical problems.

Once the eigenvalues problem (3.5) has been solved, the desired reduced basis is calculated from the largest eigenvalues and associated eigenvectors through the following expression:

(3.10)

(modes are judged to be essential or dominant, and hence worthy of being included in the reduced basis set, if their associated eigenvalues have relatively large magnitudes). Substitution of into the above equation yields:

(3.11)

where1 is given by

(3.12)

and stands for the value of the basis function at the node of the finite element grid. The matrix defined by the above equation will be hereafter called the reduced basis matrix. Each column () of this matrix can be compactly expressed in terms of the snapshot matrix as follows:

(3.13)

3.1.2.2 Singular value decomposition

Instead of solving the eigenvalue problem (3.5), and then obtaining the reduced basis matrix from expression (3.12), one can alternatively compute this basis matrix using the Singular Value Decomposition. Indeed, let be the Cholesky decomposition of , and let denote the matrix defined as:

(3.14)

It can be shown (see Appendix A) that the column of the reduced basis matrix is related to the left singular vector of , denoted by , through expression

(3.15)
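A sketch of this alternative route is given below (Python/NumPy, dense matrices for brevity; names are illustrative). It assumes the snapshot matrix and the “mass-like” matrix of Eq. (3.9) are available, and mirrors relation (3.15) by working with the Cholesky factor of the latter:

```python
import numpy as np

def pod_basis_weighted(X, M, n_modes):
    """Weighted POD basis via the Cholesky/SVD route of Section 3.1.2.2.

    X : (n_dof, n_snp) snapshot matrix
    M : (n_dof, n_dof) symmetric positive definite "mass-like" matrix
    Returns n_modes basis vectors that are orthonormal in the M-inner product.
    """
    L = np.linalg.cholesky(M)                    # M = L L^T
    U, s, _ = np.linalg.svd(L.T @ X, full_matrices=False)
    Phi = np.linalg.solve(L.T, U[:, :n_modes])   # phi_i = L^{-T} u_i
    return Phi, s[:n_modes]
```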

3.1.2.3 Elastic/Inelastic reduced basis functions

The POD can be viewed as a multidimensional data fitting procedure intended to obtain a sequence of orthogonal basis functions whose span best approximate the space of snapshots. As such, the POD is a purely data-driven process, “agnostic” to the physical origin of the data [64,63]. For instance, for POD basis construction purposes, it is completely immaterial whether a given snapshot corresponds to a purely linear elastic solution or to a solution well into the inelastic regime. The task of discriminating which features of the cell response are essential and which are not is exclusively guided by statistical considerations: if the elastic response happens to be poorly represented within the snapshot ensemble, the POD may regard as unimportant the contribution of these snapshots, and, as a consequence, the basis functions with largest associated eigenvalues –-i.e., the essential modes–- would hardly contain any information of this range. To accurately replicate the apparently trivial linear elastic behavior, thus, one may be forced to take a relatively large number of basis functions, and this may translate into a significant increase in the overall online computational cost. This fact certainly places the POD-based reduced basis approach at a competitive disadvantage compared with semi-analytical homogenization approaches such as the Nonlinear Transformation Field Analysis [20], which do capture exactly (and effortlessly) the linear elastic response of the cell.

To eliminate this shortcoming, we propose here a slightly different strategy for constructing the reduced basis. The essence of the proposal is to partition the space of snapshots into elastic () and inelastic () subspaces:

(3.16)

( symbolizes the direct sum of subspaces [65]) and then obtain the reduced basis as the union of the bases for both subspaces. Below, we describe this strategy in more detail.

The first step is to determine an orthogonal basis for . One can do this by simply performing independent, linear elastic finite element analyses of the cell (6 for 3D problems, and 3 for plane strain), and then orthonormalizing the resulting displacement fluctuation fields. These elastic modes will be considered as the first basis functions of the reduced basis:

(3.17)

Once we have at our disposal this set of elastic basis functions, we compute the (orthogonal) projection of each snapshot onto the orthogonal complement of (which is precisely the aforementioned inelastic space ):

(3.18)

It is now on this ensemble of inelastic snapshots that the previously described POD is applied to obtain the remaining basis functions. Thus, we finally have:

(3.19)

for 3D problems, and

(3.20)

for plane strain.

Observation 2: In placing the elastic modes within the first positions, the reduced-order model is guaranteed to deliver linear elastic solutions with the same accuracy as the underlying (full-order) finite element model (obviously, provided that ).


Further details concerning the numerical implementation of this apparently novel –-to the best of the authors' knowledge–- basis construction strategy can be found in Appendix B.
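The two-stage construction may be summarized by the following sketch (Python/NumPy; names are illustrative and, for brevity, the Euclidean inner product replaces the finite-element one used in the actual implementation of Appendix B):

```python
import numpy as np

def elastic_inelastic_basis(X_elastic, X, n_inelastic):
    """Elastic/inelastic reduced basis of Section 3.1.2.3 (sketch).

    X_elastic   : (n_dof, 6) elastic FE solutions (3 columns for plane strain)
    X           : (n_dof, n_snp) full snapshot matrix
    n_inelastic : number of inelastic modes to retain
    """
    Q_el, _ = np.linalg.qr(X_elastic)      # orthonormal elastic modes
    X_inel = X - Q_el @ (Q_el.T @ X)       # projection onto the orthogonal
                                           # complement of the elastic subspace
    U, _, _ = np.linalg.svd(X_inel, full_matrices=False)
    return np.hstack([Q_el, U[:, :n_inelastic]])   # elastic modes come first
```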

(1) It is necessary at this point to further clarify the notation employed for referring to the basis functions. A bold italic “phi” symbol with one subscript is employed to denote the basis function itself, i.e., (). Bold normal “phi” symbols, on the other hand, are employed to represent values of such basis functions at the nodes of the underlying finite element mesh. For instance, the value of the basis functions at the node is symbolized as . When accompanied by only one (lowercase) subscript, the bold normal “phi” symbol denotes a column vector containing the values of the pertinent basis function at all nodes of the mesh: (). Lastly, when no subscript is attached, represents the reduced basis matrix: .

3.2 Galerkin projection onto the reduced subspace

We now seek to pose the boundary-value problem represented by Eq.(2.13) in the reduced-order space spanned by the basis functions . To this end, we approximate both test and trial functions by the following linear expansions:

(3.21)

(3.22)

and being the low-dimensional approximations of trial and test functions, respectively (hereafter, asterisked symbols will be used to denote low-dimensional approximations of the associated variables). Inserting Eqs. (3.21) and (3.22) into Eq.(2.13), and exploiting the arbitrariness of coefficients (), we arrive at the following set of equilibrium equations:

(3.23)

Expressing now the reduced basis functions in the above equation in terms of finite element shape functions (through expression ), we get (in Voigt's notation):

(3.24)

or more compactly:

(3.25)

Here, denotes the vector containing the reduced displacement fluctuations –-the basic unknowns of the reduced-order problem:

(3.26)

and stands for the reduced “B-matrix”, defined as:

(3.27)

and that connects the gradient of the displacement fluctuation field with the vector of reduced displacement fluctuations, i.e.:

(3.28)

For implementational purposes, it is more expedient to express Eq.(3.27) in terms of elemental matrices. To this end, we write:

(3.29)

where denotes the local B-matrix of element (, in turn, is the number of nodes in ). Thus,

(3.30)

In the above equation, represents the block matrix of corresponding to the nodes of finite element ().
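In code, the pre-computation of the reduced B-matrices implied by Eqs. (3.27)-(3.30) may be sketched as follows (Python/NumPy; names are illustrative):

```python
import numpy as np

def reduced_B_matrices(B_gauss, dofs_gauss, Phi):
    """Pre-compute the reduced B-matrices B*_g of Eq. (3.30).

    B_gauss    : list of elemental (n_strain, n_dof_e) B-matrices, one per Gauss point
    dofs_gauss : list of global DOF index arrays (length n_dof_e), one per Gauss point
    Phi        : (n_dof, n_u) reduced basis matrix of displacement fluctuations
    """
    return [B @ Phi[dofs, :] for B, dofs in zip(B_gauss, dofs_gauss)]

# Online use (Eqs. (2.2) and (3.28)): the strain at Gauss point g is
# eps_M + Bstar[g] @ U_star, U_star being the reduced displacement fluctuations.
```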

4 Numerical integration

4.1 Classical Gauss quadrature: the standard ROM

A straightforward –-but, as already mentioned, ostensibly inefficient–- route for numerically evaluating the integral appearing in the reduced-order equilibrium equation (3.24) is to simply use the same Gauss quadrature formulae and the same set of Gauss points as the underlying finite element model (see Eq.(2.24)):

(4.1)

Low-rank approximations that employ the underlying finite element Gauss points for numerically evaluating integrals in the weak statement of the problem are commonly known as standard reduced-order models [66].

4.2 Efficient numerical integration: the High-Performance ROM (HP-ROM)

4.2.1 Overview

As outlined in the introductory Chapter, approaches found in the model reduction literature that, directly or indirectly, deal with the still underexplored question of how to efficiently –-at an online computational cost independent of the dimension of the underlying finite element model–- integrate reduced-order equations can be broadly classified as either interpolatory approaches [28,29,30,31,32] or Gauss-type quadrature methods [33,34]. The integration strategy proposed in the present work falls into the former category of interpolatory approaches. Recall that the basic idea in such approaches is to replace the nonaffine term in the BVP by a low-dimensional approximation. In our case, a glance at the reduced-order equilibrium equation (3.24) reveals that such an “offending”, nonaffine term is the stress field –-the reduced B-matrix is independent of the input parameter and hence need not be subject to approximation. The proposed integration scheme, thus, is predicated on the assumption that not only the displacement fluctuations, but also the stress field over the cell admits an accurate, low-dimensional approximation. Numerical experiments confirm (see Chapter 5) that, fortunately, this premise generally holds whenever the displacement fluctuation field itself lives in a low-dimensional space.

Let () denote a set of orthogonal basis functions for such low-dimensional, stress approximation space. Then, the reduced-order approximation of the stress field may be written as:

(4.2)


(notice that, in keeping with the notational convention introduced in Section 3.2, the low-dimensional approximation of the stress field is represented by attaching an asterisk to the stress symbol). In the spirit of classical polynomial quadrature (such as Newton-Cotes formulae [35]), coefficients () in Eq.(4.2) are calculated by fitting this linear expansion to the stress values computed at a set of pre-specified sampling points:

(4.3)

Approximation (4.2) becomes therefore expressible as:

(4.4)

where () stands for the interpolation or, more generally, reconstruction operator1 at sampling point , whereas

(4.5)

represents the stress vector evaluated at sampling point through the pertinent constitutive relation .
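In computational terms, the fit (4.3)-(4.4) is a plain least-squares problem posed at the sampling points; a generic sketch follows (Python/NumPy; names are illustrative, and the specific form of the reconstruction operator is derived in Section 4.4):

```python
import numpy as np

def reconstruct_stresses(Psi, sample_rows, sigma_samples):
    """Least-squares stress reconstruction, Eqs. (4.2)-(4.4).

    Psi           : (n_strain * n_gauss, n_modes) stress basis, stacked Gauss point-wise
    sample_rows   : rows of Psi associated with the chosen sampling points
    sigma_samples : stress components evaluated (constitutively) at those rows
    """
    c, *_ = np.linalg.lstsq(Psi[sample_rows, :], sigma_samples, rcond=None)
    return Psi @ c   # reconstructed stresses at every Gauss point of the mesh
```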

Substitution of the above approximation into equation Eq.(3.25) leads to:

(4.6)

The bracketed integral (denoted by ) in the above equation is independent of the input parameter –-the macroscopic strain –-, and, hence, it can be entirely pre-computed offline, using, for instance, the full set of finite element Gauss points:

(4.7)

Introducing the above definition into Eq.(4.6) finally yields the quadrature formula for the reduced equilibrium equation:

(4.8)

It is noteworthy that this quadrature formula requires evaluation of stresses at only sampling points within the cell domain (in contrast to the standard ROM (4.1), that needs Gauss points). Furthermore, inserting Eq.(4.4) into Eq.(2.14), we get:

(4.9)

where

(4.10)

Note that () can also be pre-computed offline.

In summary, in projecting the cell equilibrium equation onto the reduced space of displacement fluctuations, and in additionally adopting the above described integration scheme, one automatically arrives at a reduced-order model in which the operation count –-the complexity–- of both solving the cell equilibrium equation and calculating the macroscopic stress tensor depends exclusively on the dimension of the reduced basis. We shall refer to this model as the High-Performance Reduced-Order Model (HP-ROM), to highlight the tremendous gains in performance that this model affords over the previously described standard ROM, let alone over the full-order finite element model discussed in Section 2.3 –-in the numerical example shown in Chapter 5, we report speedup factors of above three orders of magnitude.

(1) The term interpolation usually connotes that the fit is exact at the sampling points, which only occurs when the number of sampling points coincides with the number of stress modes.
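The offline/online split implied by Eqs. (4.7)-(4.10) can be sketched as follows (Python/NumPy; names are illustrative, and the reconstruction operators are lumped into a single pre-computed matrix):

```python
import numpy as np

def hp_rom_offline(Bstar, Psi, weights, sample_rows, n_strain, volume):
    """Pre-compute the HP-ROM operators of Eqs. (4.7) and (4.10).

    Bstar       : (n_strain * n_gauss, n_u) stacked reduced B-matrices
    Psi         : (n_strain * n_gauss, n_modes) stacked stress basis
    weights     : (n_gauss,) Gauss weights times Jacobian determinants
    sample_rows : stress-component rows associated with the sampling points
    """
    W = np.repeat(weights, n_strain)             # one weight per stress component
    R = Psi @ np.linalg.pinv(Psi[sample_rows])   # reconstruction: sigma* = R @ sigma_s
    WR = W[:, None] * R
    H = Bstar.T @ WR                                               # Eq. (4.7)
    G = WR.reshape(-1, n_strain, R.shape[1]).sum(axis=0) / volume  # Eq. (4.10)
    return H, G

def hp_rom_online(H, G, sigma_samples):
    """Online stage: reduced internal forces (Eq. 4.8) and macro-stress (Eq. 4.9)."""
    return H @ sigma_samples, G @ sigma_samples
```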

4.3 Stress approximation space

Two crucial aspects of the integration scheme sketched in the foregoing remain to be addressed, namely, the determination of the vector space in which the low-dimensional approximation of the stress field should lie in order to obtain an accurate and at the same time well-posed HP-ROM; and the calculation of the optimal location of the sampling or integration points at which the stress tensor is to be evaluated. Attention here and in the next Section is confined to the stress approximation space, while the discussion of the selection of sampling points is deferred to Section 4.5.

4.3.1 The reduced-order subspace of statically admissible stresses (Vσ*)

Similarly to the problem addressed in Chapter 3 concerning the reduced basis for the displacement fluctuations, the problem of constructing an O(n_u)-dimensional representation of the stress field reduces, in principle, to finding a set of orthogonal basis functions {Ψ_1(x), Ψ_2(x) … Ψ_{n_σ}(x)} (n_σ = O(n_u)) such that its span accurately approximates the set of all possible stress solutions –-that is, the set of all statically admissible stresses. Accordingly, the procedure to compute the reduced basis for the stress field would be, mutatis mutandis, formally identical to that explained earlier for the displacement fluctuations. Firstly, finite element stress distributions over the cell are computed for representative input macro-strain histories (the most practical and somewhat consistent choice regarding these strain trajectories is to use the same as in the computation of the displacement fluctuation snapshots). Then, the elastic/inelastic dimensionality reduction process set forth in Section 3.1.2.3 is applied to the resulting ensemble of stress solutions {σ^1(x), σ^2(x) … σ^{n_snp}(x)}, in order to identify both the elastic and the essential inelastic stress modes. The space spanned by these modes will be denoted hereafter by V_σ^* and termed the reduced-order subspace of statically admissible stresses:

V_σ^* := span{Ψ_1(x), Ψ_2(x), …, Ψ_{n_σ}(x)}    (4.11)

4.3.2 Ill-posedness of the HP-ROM

At first sight, it appears reasonable to simply construct the low-dimensional approximation required in the proposed integration method as a linear combination of the above described stress reduced basis –-hence making the stress approximation space coincide with V_σ^*–-; i.e.,

(4.12)

where (). This strategy of approximating the offending, nonaffine term in the BVP by a linear combination of pre-computed basis functions –-obtained, in turn, from samples of the nonaffine term evaluated at the solution–- has been successfully applied by several authors, with no apparent –-or at least not reported–- computational pitfalls, to a wide gamut of problems: nonlinear monotonic elliptic and nonlinear parabolic BVPs [67,29], nonlinear miscible viscous fingering in porous media [68,31], uncertainty quantification in inverse problems [69], and nonlinear heat conduction problems [32,66], to cite but a few.

However, a closer examination of the cell equilibrium problem reveals that, in this case, this “standard” strategy proves completely fruitless, for it leads to patently ill-posed reduced-order equations. To show this, let us first substitute approximation (4.12) into Eq.(3.24):

(4.13)

By virtue of Eq.(3.27), the bracketed integral in the preceding equation can be rephrased as:

(4.14)

Each basis function () is, by construction (see Chapter 3), a linear combination of the stress snapshots collected during the offline, finite element analysis; thus, we can write:

(4.15)

being the corresponding coefficients in the linear combination. Inserting the above equation into Eq.(4.14) and considering that () are finite element stress solutions –-and therefore fulfill the finite element equilibrium equation (2.22)–-, we finally arrive at:

(4.16)

that is, the integral (4.14) appearing in the equilibrium equation (4.13), and hence, the left-hand side of the equation itself, vanishes identically regardless of the value of the modal coefficients (), and therefore, regardless of the value of the reduced displacement fluctuations –-hence the ill-posedness.
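This conclusion is easy to verify numerically: with the discrete quantities introduced later in Section 4.3.3.2 (stacked reduced B-matrices, Gauss weights and POD stress modes built from converged snapshots), the reduced internal forces associated with every stress mode vanish to machine precision. A sketch of such a check (Python/NumPy; names are illustrative) is:

```python
import numpy as np

def check_self_equilibration(Bstar, weights, Psi, n_strain, tol=1.0e-10):
    """Check that POD stress modes obtained from converged FE snapshots are
    self-equilibrated, i.e. that sum_g w_g B*_g^T Psi_i(x_g) ~ 0 for every mode i.
    """
    W = np.repeat(weights, n_strain)
    residuals = Bstar.T @ (W[:, None] * Psi)               # one column per stress mode
    scale = np.linalg.norm(W[:, None] * Psi, axis=0) + 1.0e-30
    return np.linalg.norm(residuals, axis=0) / scale < tol
```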

4.3.3 Proposed remedy: the expanded space approach

It is clear from the foregoing discussion that the root cause of the ill-posedness lies in the fact that the set of all admissible stress fields () forms a vector space, and, consequently, the POD stress modes () –-and any linear combination of them–- turn out to be self-equilibrated fields. Thus, for the reduced-order problem to be well-posed, the approximation space cannot be only formed by statically admissible stresses, but it must also include statically inadmissible fields –-i.e. stress functions that do not satisfy the reduced-order equilibrium equation (3.24).

One plausible route for determining a low-dimensional approximation space that embraces both statically admissible and statically inadmissible stresses might be to collect, during the offline finite element calculations, not only the converged stresses, but also the unconverged ones –-i.e., those generated during the corresponding iterative algorithm–-, and then perform the POD-based dimensionality reduction over the whole ensemble of snapshots1. In the present work, however, we pursue an approach that precludes the necessity of undertaking this computationally laborious and in some respects objectionable process –-there is no guarantee that the span of selected, unconverged stress snapshots covers the entire space of statically inadmissible stresses. The idea behind the employed approach was originally conceived, but not fully developed, by the authors in a recent report [36]. Here, the theory underlying such an idea is further elaborated and cast into the formalisms of functional analysis.


(1) Incidentally, this way of proceeding has the flavor of the nonlinear model reduction strategy advocated by Carlberg and co-workers [78,80], in which the reduction is carried over the linearized form of the pertinent governing equation


4.3.3.1 Continuum formulation

To originate our considerations from a general standpoint, it proves convenient first to rephrase the left-hand side of the reduced-order equilibrium equation Eq.(3.24) as the action of a certain linear operator on the stress field over the cell:

(4.17)

Invoking now the orthogonal decomposition of induced by this operator, one obtains:

(4.18)

where stands for the nullspace of this operator. Since the cell equilibrium equation has a vanishing right-hand side, it follows that this nullspace is actually the space of statically admissible stress fields. Its orthogonal complement can therefore be construed as the aforementioned space of statically inadmissible stresses. The key fact here is that such a space is inherently low-dimensional (its dimension equals the number of reduced displacement fluctuation modes, n_u) and, thus, there is no need to perform any dimensionality reduction whatsoever over unconverged snapshots to arrive at the desired basis: the reduced strain-displacement functions themselves are linearly independent (albeit not orthogonal) and can thereby serve this very purpose.

According to the preceding decomposition, any can be resolved as (see Figure 2):

(4.19)

where and stand for the statically admissible and statically inadmissible components of , respectively. Following the standard approach, the statically admissible component –-i.e., the stress solution we wish to calculate for a given input –- is forced to lie in the span of the POD modes () obtained from converged snapshots:

(4.20)

() being the corresponding modal coefficients. The non-equilibrated component , on the other hand, resides naturally in the span of the reduced strain-displacement functions, so we can directly write–-i.e., without introducing further approximations–-:

(4.21)

with (). The low-dimensional approximation required in the proposed integration method, denoted in what follows by (the appended superscript “ex” means “stress approximated in the expanded space”), is finally obtained as the sum of Eq.(4.20) and Eq.(4.21):

(4.22)
Figure 2: Expanded space approach. The stress approximation space is expanded so that it embraces not only the span of the stress POD modes, but also the span of the reduced strain-displacement functions {B₁*, B₂*, …, Bnu*}. The reduced-order cell equilibrium problem boils down to finding the reduced displacement fluctuations vector U* that makes the non-equilibrated component σⁱⁿ vanish (σⁱⁿ(U*, ϵM) = 0).

Substituting the above approximation into the equilibrium equation, one gets:

(4.23)

Since are linearly independent functions, it becomes immediately clear that the above equation holds only if:

(4.24)

i.e., if the coefficients multiplying () are identically zero. Thus, in adopting the proposed integration approach, the reduced-order cell equilibrium problem (3.24) is transformed into the problem of finding, for a given input macroscopic strain tensor , the reduced displacement fluctuations vector that makes the non-equilibrated component (defined in Eq.(4.21)) vanish.

In a nutshell, the ill-posedness exhibited by the discrete problem when adopting the standard approach of using only POD modes is eliminated by expanding the stress approximation space so that it also embraces the span of the reduced strain-displacement functions (or strain modes2) ():

(4.25)

4.3.3.2 Discrete formulation

In typical finite element implementations, both stresses and gradients of shape functions are only calculated and stored at the Gauss points of the underlying spatial discretization. For practical reasons, thus, it proves imperative to reformulate the expanded space strategy explained above and treat both magnitudes as spatially discrete variables, defined only at such Gauss points.

The discrete counterparts of the continuously defined fields and () will be denoted by and , and termed the global stress vector, and the global matrix of strain modes, respectively. The global stress vector is constructed by stacking the stress vectors () at the Gauss points of the finite element grid into a single column vector:

(4.26)

Similarly, the global matrix of strain modes is constructed as:

(4.27)

With definitions (4.26) and (4.27) at hand, the standard ROM equilibrium equation (4.1) can be readily rephrased in the following compact, matrix form:

(4.28)

where is a diagonal matrix containing the weights at each Gauss point:

(4.29)

(here, denotes the identity matrix). Assuming that () –-Gauss quadrature rules with negative weights are excluded from our considerations–-, one can re-express Eq.(4.28) as:

(4.30)

where symbolizes the following inner product:

(4.31)

Equation (4.30) reveals that any statically admissible global stress vector is orthogonal to the global strain modes vectors () in the sense of the inner product induced by . In other words, in approximating the integral of the internal forces by Gauss quadrature, the orthogonality condition translates into orthogonality in the sense of Eq.(4.31). From the point of view of numerical implementation, however, it is preferable to cast the equilibrium condition and the subsequent developments in terms of the standard euclidean scalar product in –-working with the inner product defined by Eq.(4.31) is somewhat of a nuisance and unnecessarily complicates the algebra involved. This can be achieved by inserting the Cholesky decomposition of the weights matrix

(4.32)

into Eq.(4.26):

(4.33)

Defining now the weighted global stress vector and weighted matrix of strain modes as

(4.34)

and

(4.35)

respectively, and inserting these definitions into Eq.(4.26), one finally arrives at:

(4.36)

or equivalently:

(4.37)

which shows that any statically admissible weighted stress vector is orthogonal, in the sense of the standard euclidean inner product, to the weighted strain modes ().
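The following sketch mirrors this discrete weighting step with purely illustrative sizes and random data: the diagonal weight matrix is split into its square-root (Cholesky) factors, stresses and strain modes are premultiplied by that factor, and the weighted euclidean product is checked to reproduce the W-weighted product exactly.

```python
# Minimal sketch (NumPy) of the discrete weighting: W = W^{1/2} W^{1/2} for a
# diagonal matrix of positive Gauss weights, so that B^T W sigma = B_w^T sigma_w
# becomes a plain euclidean orthogonality condition.  Sizes/data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
ng, ns, nu = 6, 3, 2              # hypothetical: 6 Gauss points, 3 stress comps, 2 modes
w = rng.uniform(0.5, 1.5, ng)     # positive Gauss weights (negative weights excluded)

W = np.diag(np.repeat(w, ns))     # global diagonal weight matrix
W_sqrt = np.sqrt(W)               # Cholesky factor of a positive diagonal matrix

B = rng.standard_normal((ng * ns, nu))    # stand-in global matrix of strain modes
sigma = rng.standard_normal(ng * ns)      # stand-in global stress vector

B_w = W_sqrt @ B                  # weighted strain modes
sigma_w = W_sqrt @ sigma          # weighted stress vector

assert np.allclose(B_w.T @ sigma_w, B.T @ W @ sigma)   # identical inner products
```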

Comparing Eq.(4.36) with Eq.(4.17), it becomes clear that plays the same role as operator in Eq.(4.17). In analogy with Eq.(4.18), thus, we can write

(4.38)

where and denote the null space and the range (or column space) of and , respectively, and consequently decompose any as

(4.39)

with and . As in the continuous case (see Eq.(4.20)), the statically admissible component is now approximated by a linear combination of POD basis vectors obtained from converged stress snapshots (the methodology for obtaining these modes using the SVD is thoroughly explained in Appendix B.2):

(4.40)

where

(4.41)

denotes the (weighted) stress basis matrix and stands for the vector of modal coefficients associated to such a basis matrix. Likewise, since the non-equilibrated component pertains to the column space of , we can directly write

(4.42)

where . The low-dimensional (weighted) stress vector required in the proposed integration method is finally obtained as the sum of Eq.(4.42) and Eq.(4.40).

(4.43)

or in a more compact format:

(4.44)

where

(4.45)

and

(4.46)

The matrix defined by Eq.(4.45) will be hereafter called the expanded basis matrix for the (weighted) stresses, whereas will be correspondingly termed the expanded vector of modal coefficients. Inserting approximation (4.43) into Eq.(4.36), and considering that and that is a full rank matrix, one finally arrives at the same equilibrium condition derived in the continuum case (see Eq. 4.24):

(4.47)

Once the above equation is solved for , the desired equilibrated stress vector is obtained by evaluating Eq.(4.40):

(4.48)


(2) Indeed, functions () can be viewed as fluctuating strain modes, since they are the symmetric gradient of the displacement fluctuation modes, see Eq. 3.27.

4.4 Determination of modal coefficients

The next step in the development of the proposed integration scheme is to deduce closed-form expressions for the vectors of modal coefficients and in terms of the stress values computed at a set of pre-specified sampling points (to be chosen among the set of Gauss points of the underlying finite element mesh). To this end, we need first to introduce some notation and terminology.

4.4.1 Gappy vectors

Let denote the set of indices of sampling points. Notationally, we write to designate the subvector of containing the rows associated to these sampling points; viz.:

(4.49)

(When confusion is not apt to arise, the parenthetical subscript indicating the set of sampling indices will be dropped, and we shall simply write ). It proves conceptually advantageous to regard this restricted or “gappy” –-a terminology that goes back to the work of Everson et al. [70]–- stress vector as the result of the application of a certain boolean operator over the full vector :

(4.50)

We call the selection operator associated to sampling indices . This operator can be of course applied to any (). For instance, the restricted matrix of weighted strain modes would be defined as:

(4.51)

Furthermore, it is straightforward to show that

(4.52)

(here is the x identity matrix) and that

(4.53)

for any and .
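A minimal sketch of the selection operator follows; the sampling indices and matrix sizes are hypothetical. It shows that the "gappy" restriction is plain row-picking and that the operator applied to its own transpose yields the identity, as stated above.

```python
# Minimal sketch (NumPy) of the selection (boolean) operator associated to a set
# of sampling indices; it simply extracts the rows of the chosen sampling points.
import numpy as np

rng = np.random.default_rng(2)
n_rows, n_cols = 12, 3
B_w = rng.standard_normal((n_rows, n_cols))   # stand-in weighted strain-mode matrix
idx = np.array([1, 4, 7, 10])                 # hypothetical sampling (row) indices

P = np.zeros((idx.size, n_rows))              # boolean selection operator
P[np.arange(idx.size), idx] = 1.0

B_w_gappy = P @ B_w                           # restricted ("gappy") matrix
assert np.allclose(B_w_gappy, B_w[idx, :])    # same as plain row indexing
assert np.allclose(P @ P.T, np.eye(idx.size)) # selection times its transpose = identity
```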

4.4.2 Least-squares fit

In the spirit of classical polynomial quadrature, such as the Newton-Cotes formulae [35], the modal coefficients and are determined by fitting the low-dimensional approximation (4.43) to the weighted stresses calculated at the pre-specified sampling points. It should be noticed that, the variable subject to approximation –-the stress–- being a vector-valued function, the total number of discrete points to be fitted does not coincide with the number of spatial sampling points (), but rather is equal to the product of this number and the number of stress components (). The well-posedness of the fitting problem, thus, demands that:

(4.54)

i.e., the number of discrete points must be equal to or greater than the number of parameters to be adjusted. For the equality to hold, both and have to be multiples of ; thus, an exact fit is in general not possible for arbitrary values of and , and recourse must be made to an approximate fit. In this respect, we follow here the standard approach of using a least-squares, best-fit criterion, i.e., minimization of the squares of the deviations between “observed” () and fitted () values (in our context, “observed” signifies “calculated through the pertinent constitutive equation”). This minimization problem can be stated as:

(4.55)

where stands for the standard euclidean norm. Let be the gappy expanded basis matrix, and suppose that the sampling indices have been chosen so that has full rank, i.e.:

(4.56)

Then, it can be shown (see, for instance, Ref. [71]) that the solution of this standard, least-squares problem is provided by the following vector of coefficients:

(4.57)

where

(4.58)

is the so-called pseudo-inverse of matrix .

Recall that our ultimate aim is to derive closed-form expressions for and as functions of . Thus, it remains to extract these two sub-vectors from expression (4.57). This can be done by first partitioning both and in terms of the gappy stress basis matrix and the gappy matrix of strain modes :

(4.59)

Invoking the blockwise inverse formula for x block symmetric matrices [72], and after some tedious algebra –-relegated to Appendix C–-, one finally arrives at the following expressions for and

(4.60)

(4.61)

where denotes the pseudoinverse of the gappy stress basis matrix :

(4.62)

and is the so-called Schur complement [72] of in :

(4.63)

(note that this matrix is invertible by virtue of the hypothesis represented by Eq.(4.56)).
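The sketch below reproduces this least-squares fit with a randomly generated gappy expanded basis: the coefficients are first obtained with a generic least-squares solver, and then recovered with the standard pseudo-inverse/Schur-complement block formulas, an algebraic route equivalent to (though not necessarily in the exact form of) the expressions above. All data are placeholders.

```python
# Minimal sketch (NumPy) of the gappy least-squares fit over the expanded basis
# [Psi_g, B_g], and of the block (Schur-complement) recovery of the two
# coefficient sub-vectors.  The matrices below are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_sig, n_u = 20, 4, 3                    # observations, stress modes, strain modes
Psi_g = rng.standard_normal((n_obs, n_sig))     # gappy stress basis matrix
B_g = rng.standard_normal((n_obs, n_u))         # gappy matrix of strain modes
s_g = rng.standard_normal(n_obs)                # "observed" weighted stresses

M = np.hstack([Psi_g, B_g])                     # gappy expanded basis matrix
c_ref = np.linalg.lstsq(M, s_g, rcond=None)[0]  # reference least-squares solution

Psi_pinv = np.linalg.pinv(Psi_g)
P_psi = Psi_g @ Psi_pinv                        # orthogonal projector onto range(Psi_g)
S = B_g.T @ (np.eye(n_obs) - P_psi) @ B_g       # Schur complement of Psi_g^T Psi_g
c_eta = np.linalg.solve(S, B_g.T @ (np.eye(n_obs) - P_psi) @ s_g)
c_sig = Psi_pinv @ (s_g - B_g @ c_eta)

assert np.allclose(np.concatenate([c_sig, c_eta]), c_ref)   # same fit, two routes
```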

Reconstruction matrix

Let us first examine expression (4.60) for the modal coefficients –-those that multiply the statically admissible component of the global stress vector. Since, at the solution, , we have that:

(4.64)

(Notice that this result can also be obtained by directly solving minimization problem (4.55) with ). Substitution of this equation into Eq.(4.48) yields:

(4.65)

where

(4.66)

Inspection of Eq.(4.65) reveals that the matrix defined above is the operator that allows one to reconstruct the (weighted) statically admissible stress vector using only the (weighted) stress values () calculated at the pre-selected sampling points . For this reason, we shall use the term weighted reconstruction matrix (or simply reconstruction matrix) to refer to this operator. It must be emphasized here that this matrix only depends on the POD stress basis matrix and on the selected sampling indices –-i.e., it is independent of the input parameter, the macro-strain –-and, therefore, it can be pre-computed offline.
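A minimal sketch of this reconstruction operator follows, with a random orthonormal stress basis and hypothetical sampling rows: the matrix is precomputed from the basis and the sampling indices alone, and any stress vector that lies in the span of the POD modes is recovered exactly from its sampled entries.

```python
# Minimal sketch (NumPy) of the reconstruction matrix R = Psi_w @ pinv(Psi_w[idx, :]),
# precomputed offline (it does not depend on the macro-strain input).  Data and
# sampling indices are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_full, n_sig = 30, 4
Psi_w = np.linalg.qr(rng.standard_normal((n_full, n_sig)))[0]   # orthonormal stress basis
idx = np.array([0, 3, 6, 9, 15, 22, 27])                        # hypothetical sampling rows

R = Psi_w @ np.linalg.pinv(Psi_w[idx, :])   # reconstruction matrix (offline)

c_true = rng.standard_normal(n_sig)
sigma_full = Psi_w @ c_true                 # stress vector in the span of the POD modes
sigma_gappy = sigma_full[idx]               # "observed" values at sampling points only

assert np.allclose(R @ sigma_gappy, sigma_full)   # exact recovery for in-span stresses
```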

4.4.3 “Hyperreduced” cell equilibrium equation

As for the expression for the set of “statically inadmissible” coefficients , we know that, at the solution, these coefficients must vanish; thus, from Eq.(4.61), we have

(4.67)

Since is a nonsingular matrix, the above condition is equivalent to

(4.68)

Furthermore, examination of Eq.(4.66) and Eq.(4.68) readily shows that the bracketed term in Eq.(4.68) is nothing but the submatrix of the reconstruction matrix formed by the rows associated to sampling points , i.e.:

(4.69)

Substitution of expression (4.69) into Eq.(4.68) finally leads to:

(4.70)

As previously noted (see Figure 2), the purpose of enforcing condition is to ensure that the stress solution lies entirely in the space of equilibrated stresses. Equation (4.70) can be viewed, thus, as the “hyperreduced” form of the original cell equilibrium equation.

Observation 3: The “hyperreduced” qualifier –-coined by D. Ryckelynck [73,74]–- is used here to indicate that Eq.(4.70) is the result of two subsequent steps of complexity reduction: firstly, in the number of degrees of freedom (when passing from the finite element model to the standard ROM), and, secondly, in the number of integration points (when passing from this standard ROM to what we have baptized “High-Performance” ROM ). This double complexity reduction can be better appreciated by rephrasing both Eq.(4.70) and the FE cell equation (2.24) in a format similar to that of Eq.(4.36), viz.:

(4.71)

and

(4.72)

respectively (here, is the finite element counterpart of , defined in Eq.(4.27)). With Eq.(4.72), Eq.(4.36) and Eq.(4.71) at our disposal, the aforementioned process of complexity reduction can be symbolically represented as

(4.73)

the relation between , and being1

(4.74)

and

(4.75)

with . It is interesting to see how the reduction in complexity of the cell equilibrium equation is reflected in the gradual reduction of the dimensions of the “B” operators that act on the weighted vector of stresses.

Physical interpretation

Aside from being a “compressed” version of the original, full-order cell condition, the hyperreduced cell equation (4.70) can alternatively be interpreted as a balance between “observed” and “fitted” internal forces at the selected sampling points. Such an interpretation becomes readily identifiable by realizing that the product appearing in Eq.(4.70) is but the (weighted) vector of fitted stresses at the selected sampling points. Indeed, by virtue of Eq.(4.65) and considering the properties of the selection operator , we have that

(4.76)

Using the above equality, Eq.(4.70) is expressible as

(4.77)

or, reverting to the original, summation notation:

(4.78)

as

(4.79)

Note that both sides of the above equation represent the same physical quantity, namely, the sum of internal forces, in reduced coordinates, at the sampling Gauss points . The difference lies in the stresses employed for computing these internal forces. On the left-hand side, they are calculated using “observed” stresses –-stresses that arise directly from evaluating the corresponding constitutive equation–-, whereas, on the right-hand side, “fitted” stresses are used –-that is, stresses obtained from fitting the approximation constructed using the POD stress basis functions to the observed data. Thus, the HP-ROM equilibrium condition (4.79) tells us that, at the solution, the sums of internal forces –-at the pre-selected sampling points–- computed using observed and fitted stresses2 must coincide.

(1) is nothing but the compact form of the weighting factors introduced in Section 4.2.1 (see Eq.(4.7)).

(2) It should be mentioned in this respect that, in general, since, as expression (4.54) indicates, the number of data items to be fitted () is always greater than the number of stress modes (). Observed and fitted stresses coincide only when the stress vector one wishes to approximate pertains to the column space of the stress basis matrix (consult Appendix D.1 (Observation 8) for a demonstration of this property. )

4.4.4 Jacobian matrix

Needless to say, the dependence of the stresses on the reduced vector of displacement fluctuations is in general non-linear, and, therefore, an iterative method is required for solving Eq.(4.70). Here we employ the standard Newton-Raphson procedure. The iterative scheme corresponding to this procedure is given by the following expression (the parenthetical superscript indicates the iteration number):

(4.80)

where

(4.81)

and

(4.82)

In the above equation, denotes a block diagonal matrix containing the algorithmic, constitutive tangent matrices at each sampling point:

(4.83)
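To illustrate the structure of this iterative scheme, the toy Newton loop below assembles a residual and a Jacobian only from sampling-point quantities. The constitutive law is a hypothetical smooth stand-in (not the elastoplastic model used later in the paper), and the residual has the generic reduced form B-matrix transposed times sampling-point stresses rather than the exact HP-ROM expression of Eq.(4.70).

```python
# Toy Newton-Raphson sketch (NumPy) for a hyperreduced-type equilibrium residual
# assembled only at sampling points.  All data and the constitutive law are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(5)
p, ns, nu = 5, 3, 2                               # sampling points, stress comps, modes
Bg = rng.standard_normal((p * ns, nu)) * 0.3      # gappy (weighted) strain-mode matrix
eps_M = rng.standard_normal(p * ns) * 0.01        # macro-strain contribution at the points

def stress_and_tangent(eps):
    """Hypothetical smooth constitutive law: sigma = tanh(eps), C = diag(sech^2)."""
    return np.tanh(eps), np.diag(1.0 / np.cosh(eps) ** 2)

U = np.zeros(nu)                                  # reduced displacement fluctuations
for it in range(25):
    eps = eps_M + Bg @ U                          # strains at the sampling points
    sig, C = stress_and_tangent(eps)
    r = Bg.T @ sig                                # reduced residual (internal forces)
    if np.linalg.norm(r) < 1e-12:
        break
    K = Bg.T @ C @ Bg                             # reduced Jacobian from the tangents
    U -= np.linalg.solve(K, r)
print("converged in", it, "iterations; |r| =", np.linalg.norm(r))
```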

Positive definiteness

Because of its relevance to the overall robustness of the proposed method, it is worthwhile at this point to digress and discuss thoroughly the spectral properties of the Jacobian matrix represented by Eq.(4.82). In particular, it is of interest to ascertain whether positive definiteness of the algorithmic tangent matrices at the selected sampling points, and thus of matrix , ensures positive definiteness of the Jacobian matrix –-as occurs when using classical Gauss quadrature rules with positive weights–-, and, if not, which remedies can be applied to obtain such a desirable property.

Positive definiteness of the Jacobian matrix (4.82) requires that the function defined as

(4.84)

be positive for all non-zero . Since is a full rank matrix –-by virtue of Eq.(4.56)–-, condition is equivalent to:

(4.85)

for all non-zero .

To go further, we need to demonstrate that –-recall that is the matrix that maps the vector of “observed” stresses to the vector of fitted stresses –- actually represents an orthogonal projection1 onto the column space of the gappy stress basis matrix . This can be shown by simply noting that is, on the one hand, symmetric:

(4.86)

and, on the other hand, idempotent:

(4.87)
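These two properties are easy to verify numerically; the sketch below does so for a random, full-rank placeholder of the gappy stress basis matrix.

```python
# Minimal sketch (NumPy): Q = Psi_g @ pinv(Psi_g) is symmetric and idempotent,
# i.e. an orthogonal projector onto the column space of the gappy stress basis.
import numpy as np

rng = np.random.default_rng(7)
Psi_g = rng.standard_normal((12, 4))          # gappy stress basis (full column rank)
Q = Psi_g @ np.linalg.pinv(Psi_g)             # maps observed to fitted stresses

assert np.allclose(Q, Q.T)                    # symmetric
assert np.allclose(Q @ Q, Q)                  # idempotent
```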

With this property at hand, we can decompose any as

(4.88)

where –-the component of along the column space of –- and –-the component of along the orthogonal complement of . Introducing the above decomposition into Eq.(4.85), we arrive at

(4.89)

While the first term in the preceding equation is, by virtue of the positive definiteness of , strictly positive for all nonzero , nothing can be said in principle about the second term : numerical experience shows that the sign and relative magnitude of this term depend further on the chosen set of sampling indices .

Remark 2: From the above observation, it follows that the positive definiteness of the Jacobian matrix is determined, not only by the spectral properties of , but –-not surprisingly–- also by the number and the location within the RVE of the sampling points employed in the integration.

The foregoing remark naturally leads one to wonder whether it is possible to select the sampling indices so as to ensure the positive definiteness of (assuming, obviously, that enjoys this property). To shed light on this question, let us first divide Eq.(4.89) by (notice that hypothesis (4.56) precludes the possibility of being zero)

(4.90)

Suppose now, for the sake of argument, that is also symmetric. Such being the case, the above equation can be legitimately rewritten as:

(4.91)

where

(4.92)

In the above equation, symbolizes the inner product defined by (i.e., ), whereas denotes the norm associated to such an inner product (). From Eq.(4.90), it can be deduced that a sufficient (yet not necessary) condition for , and thus for to be positive definite, is that

(4.93)

for all nonzero , or equivalently (setting ):

(4.94)

for all nonzero .

Useful guidelines on how to choose so as to make the Jacobian matrix positive definite can be inferred from inequality (4.94). Firstly, given a fixed number of sampling points , expression (4.94) indicates that such points should be selected so that the columns of the gappy strain basis matrix are, loosely speaking, “as orthogonal as possible” to –-the column space of the gappy stress basis matrix . In so doing, the factor defined as

(4.95)

would diminish, and so would, consequently, the left-hand side of inequality Eq.(4.94). In practice, however, factor cannot be used as a criterion for guiding the selection of sampling points, simply because it is defined in terms of the norm induced by , and this matrix changes at virtually every time step and iteration. One has to be content with estimating this factor using another norm; for instance, employing the standard euclidean norm , one gets

(4.96)

where stands for the Frobenius norm.

Aside from seeking orthogonality between and , expression (4.95) suggests that another way of lowering factor may be to reduce the ratio defined as

(4.97)

Since and, consequently, are matrices representing orthogonal projections, we have that

(4.98)

and

(4.99)

Therefore,

(4.100)

Observation 4: From the above expression, thus, one can conclude that increasing the number of sampling points while keeping the number of stress modes constant also contributes to reducing the factor in Eq.(4.94) and, hence, to improving the spectral properties (positive definiteness) of the Jacobian matrix . Notice that this property is totally consistent with the fact that, in the limiting case of taking all Gauss points (), the reduced matrices and degenerate into their full-order counterparts and , for which the condition holds –-they span subspaces that are mutually orthogonal–-, hence making .

(1) is the so-called “hat” matrix of linear regression models [75].

4.5 Selection of sampling points

The last theoretical issue to be discussed in the present work is the selection –-among the full set of Gauss points of the underlying finite element mesh–- of appropriate sampling or interpolation points. At the very least, the set of sampling indices must be chosen so that the gappy expanded basis matrix has full rank (see section 4.4.2):

(4.101)

Any set of sampling indices fulfilling this necessary condition is said to be admissible.

4.5.1 Optimality criteria

Accuracy

As in any other model reduction problem, the overriding concern when choosing the sampling points is obviously the accuracy of the approximation: we would like to position such points so that maximum similarity between the “high-fidelity”, finite element solution and the reduced-order response is obtained. In particular, since the ultimate aim of solving the cell BVP is to, given an input macroscopic strain trajectory, calculate the associated macroscopic stress response, it seems reasonable to use as optimality indicator the following error estimate:

(4.102)

where

(4.103)

In the above equations, denotes the finite element, macroscopic stress response corresponding to the () time step of the “training” strain trajectory () –-i.e., the strain path employed in the construction of both the displacement and stress bases, see Chapter 3. The variable , on the other hand, stands for the low-dimensional approximation of , calculated through formula (4.65) using the stress basis matrix and the finite element solution at sampling points .

It is shown in Proposition D.1.1 of Appendix D.1 that an upper bound for is given by

(4.104)

where

(4.105)

( is the rank of the stress snapshot matrix) and

(4.106)

In the above equations, designates the matrix of trailing (or inessential) inelastic stress modes, while stand for the singular values associated to . Note that in Eq.(4.105) only depends on the number of stress modes employed in the approximation (but not on the employed sampling indices); thus, it provides an estimate of the stress truncation error. The term that actually measures the quality, in terms of accuracy, of a given set of admissible sampling points is the other one, –-it provides an (a priori) estimate of the stress reconstruction error. For this reason, and also because the cost of evaluating expression (4.106) is independent of the total number of snapshots and Gauss points –-and hence significantly lower than in the case of the original error estimate –-, we shall use in what follows as the error estimator for guiding the selection of sampling points.

Spectral properties

Yet the optimality of a given set of sampling points cannot be measured only in terms of accuracy of the approximation. As demonstrated in the preceding section, the number and particular placement of such points influence also the spectral properties (positive definiteness) of the Jacobian matrix of the equilibrium equation, and therefore, the convergence characteristics of the accompanying Newton-Raphson algorithm. We saw that, to preserve the positive definiteness of the full-order Jacobian matrix, one should strive to choose the sampling indices so as to make the factor –-defined previously in Eq.(4.96)–-:

(4.107)

as small as possible.

4.5.2 Optimization approach: basic and stabilizing sampling points

Unfortunately, the minimization of the approximation error represented by expression Eq.(4.106) and the minimization of Eq.(4.107) are in general conflicting goals. For instance, numerical experiments show that when the selection is driven exclusively by accuracy considerations, the resulting Jacobian matrix becomes indefinite at certain states of deformation –-especially when inelastic deformations are severe–-, leading occasionally to convergence failures. These goals must therefore be balanced in order to arrive at an accurate and, at the same time, robust solution scheme.

To accommodate these conflicting requirements, we propose here a heuristic strategy that basically consists in treating the minimization of Eq.(4.106) and Eq.(4.107) as two separate, sequential problems –-in the spirit of the so-called “greedy” optimization algorithms [76]. The set of sampling points is assumed to be divided into two disjoint subsets and :

(4.108)

The first subset is obtained as the minimizer of the error estimate given in Eq.(4.106), viz.:

(4.109)

Once the set is determined, the remaining sampling indices () are calculated as

(4.110)

Remark 3: It must be noted here that the minimization problem represented by Eq.(4.109) is in essence the same problem addressed in (standard) interpolatory-based model reduction approaches for determining, given a set of empirical basis functions, the optimal location of the associated interpolation points. For this reason, we shall refer to the set of points arising from solving this minimization problem as the standard or basic sampling points –-these are the Best Interpolation Points of Nguyen et al. [30], or the “magic points” of Maday et al. [67]. By contrast, the necessity of introducing points that attempt to solve problem (4.110) is a consequence of expanding the stress approximation space in the first place –-the main innovative feature of our approach–-, and it is therefore not present in other model reduction strategies. We shall call the set of stabilizing sampling points.

The number of basic sampling points must satisfy the necessary condition . In general, taking suffices to ensure highly satisfactory approximations. How many stabilizing sampling points, on the other hand, have to be added to safely render the Jacobian matrix positive definite –-for at least a representative range of macroscopic deformation states–- is a question that can only be answered empirically. In the examples presented in the ensuing Chapter, it has been found that a conservative answer is to use as many stabilizing sampling points as displacement basis modes ().

In order not to disrupt the continuity of the presentation, the discussion concerning the algorithms employed here for addressing optimization problems (4.109) and (4.110) is postponed to Appendix D.
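For the reader who wants a feel for how interpolation points are typically chosen for a given basis, the sketch below implements a generic greedy selection of the DEIM type; it is included purely for illustration and is not the HIPM algorithm (nor the stabilizing-point selection) described in Appendix D.

```python
# Generic greedy (DEIM-style) selection of interpolation rows for a given basis,
# included only to illustrate the flavor of such procedures.
import numpy as np

def deim_indices(U):
    """Pick one row index per basis column by greedily maximizing the
    interpolation residual of each new column (classical DEIM)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for k in range(1, m):
        c = np.linalg.solve(U[idx, :k], U[idx, k])   # interpolate column k at chosen rows
        r = U[:, k] - U[:, :k] @ c                   # interpolation residual
        idx.append(int(np.argmax(np.abs(r))))        # pick row of largest residual
    return np.array(idx)

rng = np.random.default_rng(8)
Psi = np.linalg.qr(rng.standard_normal((50, 6)))[0]  # orthonormal basis (placeholder)
print(deim_indices(Psi))                             # 6 candidate sampling rows
```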

4.6 Summary

Lastly, for the reader's convenience and easy reference, the online HP-ROM cell problem, along with the offline steps that lead to the hyperreduced operators appearing in the online problem, are summarized in Boxes 4.6.1 and 4.6.2.

  1. Compute FE displacement fluctuation and stress snapshots for representative, input macro-strain histories. Apply –-see Appendix B–- the elastic/inelastic POD to the resulting snapshot matrices to obtain the displacement fluctuation and stress basis matrices ( and , respectively).
  2. Calculate the weighted matrix of fluctuating strain modes using Eqs. (3.30) and (4.35).
  3. Select a set of sampling indices optimal for the basis matrices and , following the procedure sketched in Section 4.5 –-and described in more detail in Appendix D.
  4. Finally, using , and , construct the hyperreduced-order matrices and ; the expressions for these matrices read:
    (4.111)
  5. and

    (4.112)

    where and (Note: is the global matrix form of the weighting matrices presented previously in Eq.(4.10)).

Box 4.6.1 Offline stage. Pre-computation of reduced basis and hyperreduced operators (Note: ).


  1. Initial data: (reduced vector of displacement fluctuations at ), (macroscopic strain vector at ), and (internal variables at at the selected sampling points).
  2. Input data: (macroscopic strain vector at )
  3. Given the above initial and input data, find such that
    (4.113)
  4. where

    (4.114)

    (here, denotes the stress vector evaluated at the sampling point through the corresponding constitutive equation).

  5. Output data: Once Eq.(4.117) has been solved for , update the macroscopic stress vector as
    (4.115)
Box 4.6.2 Online stage (solution of the hyperreduced-order RVE equilibrium problem for given macroscopic strains).

5 Numerical results

This section is intended to illustrate the performance and assess the efficiency of the proposed model reduction strategy in solving the fine scale BVP corresponding to a porous metal material under plane strain conditions.

5.1 Microstructure description

The voids are elliptical in shape (with eccentricity equal to 0.3), randomly distributed (with porosity equal to 0.3), and have aligned major axes ranging in length –-according to the cumulative probability distribution displayed in figure 3.b–- from 0.2 to 1.5 mm.
a) Finite element mesh of the RVE corresponding to the porous metal material. b) Cumulative probability distribution followed by the length of the pore  major axes.
Figure 3: a) Finite element mesh of the RVE corresponding to the porous metal material. b) Cumulative probability distribution followed by the length of the pore major axes.

The mechanical behavior of the metal matrix is modeled by a rate-independent, Von Mises elastoplastic model endowed with the following non-linear, isotropic hardening saturation law (consult Ref. [77] for details on the implementation of this elastoplastic model):

(5.1)

Here, stands for the yield stress, denotes the equivalent plastic strain; and , , and are material constants. The Young's modulus and Poisson's coefficient, on the other hand, are equal to and , respectively (these material constants correspond approximately to those of aluminum).
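By way of illustration only, the snippet below evaluates a Voce-type saturation law of the general kind referred to in Eq.(5.1); the functional form and the numerical constants are assumptions chosen for demonstration, not the actual expression or material parameters used in the simulations.

```python
# Hedged sketch of a saturation-type isotropic hardening law (Voce-type); the
# constants below are purely illustrative, not the paper's material data.
import numpy as np

def yield_stress(alpha, sigma_0=100.0, sigma_inf=180.0, delta=20.0, H=10.0):
    """Yield stress grows from sigma_0 towards sigma_inf (plus a small linear
    term) as the equivalent plastic strain alpha increases."""
    return sigma_inf - (sigma_inf - sigma_0) * np.exp(-delta * alpha) + H * alpha

alpha = np.linspace(0.0, 0.3, 7)
print(np.round(yield_stress(alpha), 1))
```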

5.2 RVE and finite element discretization

The size of the RVE was determined by conducting finite element analyses on square domains of increasing size subject to vanishing displacement fluctuations boundary conditions. It was found that the macroscopic stress responses calculated under representative macroscopic strain paths (stretching along the longitudinal and transversal directions, and shearing) of all samples above x were practically indistinguishable. According to the definition of RVE provided in Section 2.1.1, this fact indicates that any subvolume of x can be considered as a Representative Volume Element (RVE) of the porous material under study.

The finite element discretization corresponding to the particular x RVE employed in the ensuing simulations is shown in figure 3.a. The number of (four-node bilinear) elements is , and the number of nodes . The employed quadrature formula, on the other hand, is the standard x Gauss rule, the total number of Gauss points thus amounting to . To overcome incompressibility issues while maintaining the displacement-based formulation presented in the preceding sections, the approach commonly known as “B-bar” is adopted. (This means that, in this case, the reduced “B-matrix” appearing in the formulation of the HP-ROM is not constructed using the gradients of the shape functions, as indicated by Eq.(3.27), but rather using the modified “B-matrix” emanating from the three-field Hu-Washizu variational principle [77]). The constitutive differential equations are integrated in time using the classical (fully implicit) backward-Euler scheme.

5.3 Sampling of parameter space

Figure 4: Macro-strain trajectories used for generating the displacement and stress snapshots.

The first step in the process of constructing the reduced basis is the sampling of the input parameter space; we saw in Section 3.1.1 that, in the cell equilibrium BVP, this process amounts to selecting representative macroscopic strain histories. The three macroscopic strain histories () used in the case under study are depicted in figure 4. In each of these strain trajectories, one of the (independent) strain components follows a linearly ascending path while the other two components are set to zero. The time domain for each strain history is discretized into equally spaced steps, resulting in a total number of snapshots.

5.4 Dimensionality reduction: a priori error analysis

The finite element displacement fluctuation and stress fields computed at each time step of the input strain trajectories shown above are multiplied by their corresponding weighting matrices ( and ) and stored in the snapshot matrices () and (), respectively. Then, these matrices are subjected to the SVD-based, elastic/inelastic dimensionality reduction process sketched in Section 3.1.2 –-and described in more detail in Appendix B–- in order to generate an optimal set of basis vectors for both the displacement fluctuation and stress solution spaces.

Figure 5: POD truncation error estimates (for the displacement fluctuations, see Eq.(5.2)) and (for the stresses, see Eq.(5.3)) versus number of basis vectors employed in the approximation ( and , respectively). The portion between 6 and 11 modes is shown in magnified form.


To elucidate which of these basis vectors constitute the “essential” modes of the response, we plot in Figure 5 the dimensionless POD truncation error estimates defined, for the displacement fluctuations, as:

(5.2)

and for the stresses:

(5.3)
and being the orthogonal projection of and onto the span of the first and basis vectors, respectively (Remark: these error measures can be expressed in terms of singular values arising from the SVD of the snapshot matrices and ; see Propositions B.1.1 and D.1).
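The remark above can be made concrete with a small sketch: for a synthetic snapshot matrix, the relative Frobenius error of projecting onto the first n left singular vectors coincides with the tail of the singular-value spectrum, which is how curves of the kind shown in Figure 5 can be evaluated directly from the SVD.

```python
# Minimal sketch (NumPy) of the a priori POD truncation error: the relative
# Frobenius projection error equals the singular-value tail.  The snapshot
# matrix below is synthetic.
import numpy as np

rng = np.random.default_rng(9)
X = rng.standard_normal((40, 15)) @ np.diag(1.0 / np.arange(1, 16))  # decaying spectrum
U, s, Vt = np.linalg.svd(X, full_matrices=False)

n = 5
X_proj = U[:, :n] @ (U[:, :n].T @ X)                   # projection onto first n modes
err_direct = np.linalg.norm(X - X_proj) / np.linalg.norm(X)
err_svd = np.sqrt((s[n:] ** 2).sum() / (s ** 2).sum()) # tail of the singular values
assert np.isclose(err_direct, err_svd)
```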
Figure 6: Contour plots of the euclidean norm of the first 6 displacement fluctuations modes (‖Φᵢ‖). Deformed shapes are scaled up by a factor of 15.

It can be observed in Figure 5 that both error measures decrease monotonically with increasing order of truncation –-this is a mere consequence of the optimality properties of the SVD–-, and at approximately the same rate; the decay is more pronounced from 1 to 6 modes, and becomes more gradual thereafter, tending asymptotically to zero as the number of modes increases. The truncation error for both stresses and displacement fluctuations at is around . In terms of dimensionality reduction, this means that the data contained in the snapshot matrices can be “compressed” to a factor of and still retain 95% of the information –-the essential information. The first 6 basis functions (3 elastic and 3 inelastic) for both stresses and displacement fluctuations, therefore, are to be regarded as essential modes in the characterization of the mechanical response of the concerned RVE. By way of illustration, we plot in Figure 6 the contour plots of the euclidean norm of such 6 essential displacement fluctuations modes (, ).

Observation 5: A noteworthy feature in Figure 5 is that the truncation errors and are of the same order of magnitude for all levels of truncation. This fact suggests that the optimum relation between orders of truncation may not be too far from . For this reason, and for the sake of simplicity in the ensuing parametric studies, we shall assume hereafter that (equal levels of truncation for both stresses and displacement fluctuations). Physically, this seems to be a fairly reasonable choice: the quality of the stress solution depends largely, through the pertinent constitutive relationships, on the quality of the displacement solution, and vice versa. Therefore, it seems pointless to increase the quality of the approximation in stresses without accompanying such an increase with a concomitant enlargement of the displacement solution space.

5.5 Sampling points

5.5.1 Basic sampling points

Once the stress and displacement fluctuation basis vectors have been determined, the next offline step consists in the selection –-among the full set of finite element Gauss points–-of an optimal set of sampling points. Following the strategy described in Section 4.5.2, we carry out such a selection by first computing the location of what we have called basic sampling points .
Figure 7: Estimates for the POD truncation error ẽσ^trun (see Eq.(5.3)) and total stress error ẽσ (see Eq.(5.4)) versus the number of basis vectors employed in the approximation (nσ). The total error estimate is computed using only the set of basic sampling points (ẽσ = ẽσ(nσ, Iσ), with pσ = nσ). The portion between 6 and 11 modes is shown in magnified form.

To assess the efficiency of the employed Hierarchical Interpolation Points Method, abbreviated HIPM (set out in Appendix D.2.1), we plot in Figure 7 the estimates for both the POD truncation (shown previously in Figure 5) and total stress error versus the number of stress modes (in using this algorithm, it is assumed that ). The total stress error estimate is defined as

(5.4)

where denotes the oblique projection (calculated using sampling points ) of onto the span of the first basis vectors (). The number of effective trailing modes employed in the calculations is . (The connection between this error estimate and the singular values of the trailing stress modes is disclosed in Appendix 10.1). It can be appreciated in Figure 7 that both the total error and the truncation error curves are practically coincident, a fact that indicates that the contribution of the reconstruction error:

(5.5)

(the error introduced as a result of using only sampling points instead of the entire set of finite element Gauss points, see Section 4.5.1) is negligible in comparison with the discrepancies due to truncation of the POD basis. For , for instance, the reconstruction error is less than of the total stress error. In view of these results, it becomes clear that further refinements in the algorithm for selecting the basic sampling points are in principle not necessary: the employed HIPM optimization algorithm, however heuristic, satisfactorily fulfills this purpose. If one wishes to lower the stress approximation error, it is far more effective to simply increase the level of truncation.

5.5.2 Stabilizing sampling points

Figure 8: a) Factor fF (defined in Eq.(4.98)) versus number of stabilizing sampling points pB for varying numbers of basic sampling points pσ (with pσ = nσ = nu). b) Minimum eigenvalue μmin^K (over all time steps and iterations for each pσ) of the symmetric part of the reduced-order Jacobian matrix K* versus number of stabilizing sampling points pB.


Concerning what we have termed “stabilizing sampling points”, Figure 8.a contains the graphs, for varying levels of truncation, of the factor defined in Eq.(4.96) as a function of the number of stabilizing sampling points . To study the influence of including such points on the spectral properties –-positive definiteness–- of the stiffness matrix, these graphs are accompanied, see Figure 8.b, by the plots of the minimum eigenvalue (over all time steps and iterations for each case) of the symmetric part of the reduced-order Jacobian matrix versus . It can be seen that decreases monotonically as the number of stabilizing sampling points increases, and such a decrease is reflected, as theoretically anticipated in Section 4.4.4, in the improvement of the spectral properties of the reduced-order Jacobian matrix (higher as rises). For clarity, the minimum number of stabilizing sampling points required, for each level of truncation, to render positive definite is plotted in Figure 9.
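The positive-definiteness check reported in Figure 8.b can be carried out, for any (generally non-symmetric) Jacobian, by examining the smallest eigenvalue of its symmetric part, as in the following sketch with a random placeholder matrix.

```python
# Minimal sketch (NumPy): a matrix K satisfies z^T K z > 0 for all nonzero z iff
# the minimum eigenvalue of its symmetric part is positive.  K is a placeholder.
import numpy as np

rng = np.random.default_rng(6)
K = rng.standard_normal((5, 5))               # stand-in (non-symmetric) Jacobian
K_sym = 0.5 * (K + K.T)                       # symmetric part
mu_min = np.linalg.eigvalsh(K_sym).min()      # real eigenvalues of a symmetric matrix
print("positive definite" if mu_min > 0 else "indefinite", "; mu_min =", mu_min)
```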


Figure 9: Minimum number of stabilizing sampling points required to make the Jacobian matrix K* positive definite for each level of truncation nσ = nu = pσ (deduced from Figure 8).


From this plot, it can be gleaned that, roughly, the higher the level of truncation (and thus the number of basic sampling points), the more stabilizing sampling points appear to be needed to ensure the positive definiteness of . For , adding just one stabilizing sampling point suffices, while for , 7 points are required.
Figure 10: Location within the RVE of the finite elements (marked in red) that contain the first pσ = pB = 6 basic and stabilizing sampling points.

Observation 6: The values shown in Figure 9 correspond to the minimum that leads to positive definite when the prescribed strain path coincides with any of the “training” strain trajectories (displayed in Figure 4). Unfortunately, there is no guarantee that the Jacobian matrix will also exhibit this desirable property for prescribed strain histories different from the training ones. Thus, in view of such uncertainty, and in the interest of robustness, it is preferable to err on the side of caution in this regard and use more stabilizing sampling points than the minimum number indicated by the analysis based on the training strain trajectories. It is the authors' experience that a “safe” estimate for is to simply take –-that is, an equal number of basic and stabilizing sampling points. In adopting such a rule, the authors have not observed any convergence failures whatsoever, either in the example under consideration or in other cases not shown here. The location of the first basic sampling points and the corresponding stabilizing sampling points is depicted in Figure 10.

5.6 A posteriori errors: consistency analysis

The error measures displayed previously in Figures 5 and 7 only depend on the outcome of the SVD of the snapshot matrices; they can be calculated, thus, before actually constructing the reduced-order model. Error analyses based on such measures serve the useful purpose of providing a first hint of how many stress and displacement fluctuations modes are needed to satisfactorily replicate the full-order, finite element solution, and thereby, of prospectively evaluating the viability of the reduced basis approach itself.

However, these a priori error estimates do not tell the whole story. Expression (5.4) for the stress approximation error presumes that the stress solution at the chosen sampling points is the one provided by the finite element model, thus ignoring the fact that, in the reduced-order model, and for the general case of nonlinear, dissipative materials –-for linear elasticity, the approximation is exact, see Remarks 1 and D.1.2–-, the stress information at such points at a given time step is already polluted by truncation (in displacement fluctuations and stresses) and reconstruction (in stresses) errors originating in previous time steps. To quantify the extent to which this amalgam of accumulated errors affects the predictions furnished by the HP-ROM, it is necessary to perform a consistency analysis.

Generally speaking, a reduced basis approximation is said to be consistent if, in the limit of no truncation, it introduces no additional error in the solution of the same problem for which the data used in constructing the basis functions were acquired [78]. In the BVP under consideration, thus, consistency implies that, when using as input macro-strain paths the same trajectories employed in the “training” process, results obtained with the HP-ROM should converge, as and increase, to the solution furnished by the full-order, finite element model. This condition can be checked by studying the evolution of the error measures defined as

(5.6)

for the displacement fluctuations, and

(5.7)

for the stresses. (The superscript “ROM” is appended to highlight that, unlike and in Eqs. (5.2) and (5.4), and are matrices of displacement fluctuation and stress snapshots computed using the HP-ROM). Figures 11.a and 11.b contain the graphs of these a posteriori error measures, along with their respective a priori counterparts (Eq. 5.2) and (Eq. 5.4), versus the level of truncation. It becomes clear from these graphs that consistency, in the sense given above, is observed in terms of both displacement fluctuations and stresses: the a posteriori error measures and essentially mimic the decreasing tendency of their a priori counterparts and , respectively. It can also be seen that the a priori error estimates and constitute (rather tight) lower bounds for their a posteriori counterparts and , respectively. This can be better appreciated, for the stresses, in Figure 12, where the ratio versus the level of truncation is plotted.

Figure 11: Comparison of the evolution of a priori and a posteriori error measures versus the level of truncation (using nu = nσ = pσ = pB). a) Displacement fluctuations (see Eqs. 5.2 and 5.6). b) Stresses (see Eqs. 5.4 and 5.7).
Figure 12: Ratio ẽσ^ROM/ẽσ between the a posteriori and a priori measures of the stress approximation error against the level of truncation (using nu = nσ = pσ = pB).
Figure 13: Longitudinal macroscopic stress versus longitudinal macroscopic strain computed using the FEM, the HP-ROM with nσ = nu = 6, 7, 8, and the elementary rule of mixtures.
Figure 14: Contour plot of transversal stresses computed at the end of the first “training” strain history using a) FEM, b) HP-ROM with nσ = nu = 6. Deformed shapes are exaggerated (by a factor of 20).

The degree of approximation that can be achieved using the proposed HP-ROM is quantified in a more “engineering” fashion in Figure 13, where we plot, for the case of the first training strain history (stretching in the longitudinal direction), the longitudinal, macroscopic stress-strain curves computed using the FE model, on the one hand, and the HP-ROM with modes, on the other hand. In order to highlight the non-trivial character of such a stress-strain response, we also include in Figure 13 the solution provided by the classical rule of mixtures (zero fluctuations at all points in the RVE [47]). Observe that the maximum deviation from the FE response when using 6 modes (3 elastic and 3 inelastic) takes place at the onset of plastic yielding and is below 8%; remarkably, as deformation continues, this deviation gradually diminishes, being practically negligible at the end of the process –-in stark contrast to the case of the simplistic rule of mixtures, which overpredicts the final stress by more than 300%. Furthermore, by just increasing the order of truncation to , differences between the HP-ROM and the FEM responses become virtually imperceptible at all levels of deformation. Resemblance between HP-ROM and FEM results can also be appreciated in terms of stress distribution in the contour plots shown in Figure 14. Visually, there are no discernible differences between the two contour plots.

5.7 “Training” errors

The studies presented in the preceding subsections were aimed at examining the errors incurred in approximating the snapshot solution space by the reduced-order subspace spanned by the POD basis vectors –-in the terminology of Section 3.1.1–-, and at checking that, when , the solution provided by the HP-ROM converges to that obtained with the FEM. But recall that the snapshot space is but a (presumably representative) subspace of , the manifold of induced by the parametric dependence of the cell BVP on the prescribed macroscopic strain history. Consequently, in general –-for an arbitrary input strain trajectory–- the HP-ROM solution will not converge to the solution provided by the FEM. To complete the error assessment, thus, it is necessary to also estimate the errors inherent to the sampling of the parameter space –-we call them training errors–- and judge whether the selected training strain trajectories generate a snapshot subspace that is indeed representative of such a solution space1.

Figure 15: a) First strain trajectory employed for assessing training errors. b) Macroscopic error estimator Ẽσ,M^ROM (see Eq.(5.8)) corresponding to this testing trajectory versus level of truncation (nσ = nu).

Ideally, one should carry out this error assessment by picking, guided by some sound, statistically-based procedure, a sufficiently large set of strain paths, and by comparing the solutions computed by the FEM and the HP-ROM under such input strain paths for varying levels of truncation. Such a degree of rigor, however, is beyond the scope of the present work. Here, we limit ourselves to analyzing the quality of the HP-ROM approximation obtained for two different input strain histories, namely, a uniaxial compression test and a biaxial loading/unloading test.

(1) To put it in less mathematical terms –-by appealing to the analogy, introduced in Remark 1, between the training of the cell reduced-order model and the calibration of standard phenomenological models–- we have “calibrated” our HP-ROM using the training tests displayed previously in Figure 4, and we have shown that the model is able to exactly replicate the behavior of the cell in these tests when is sufficiently large. Similarly to the situation encountered when dealing with standard phenomenological models, it remains now to assess the capability of the proposed HP-ROM to predict the behavior of the cell under conditions different from those used in the “calibration” (training) process.

5.7.1 Uniaxial compression

The first strain path employed for the assessment is displayed in Figure 15.a; it represents a monotonic compression in the transversal direction (the model, see Figure 4, was trained using only stretching and shear, but not compression, tests). For purposes of evaluating the quality of the HP-ROM approximation, it is convenient to introduce the following macroscopic1 stress error estimate:

(5.8)

where and denote the macroscopic stress at the time step computed by the FEM and the HP-ROM, respectively. This error estimate is plotted in Figure 15.b versus the level of truncation . Observe that the error goes to zero as the number of employed modes increases. In this particular case, thus, there is no additional error due to sampling of the parameter space.

Remark 4: This simple example fittingly illustrates one of the acclaimed advantages of POD/Galerkin reduced-order approaches over “black box” methods such as artificial neural networks –-which are also based on the partitioned offline-online computational paradigm–-: POD/Galerkin reduced-order approaches preserve the “physics” of the problem one wishes to model and, as a consequence, are able to make physically-based extrapolations. For instance, in this case, the reduced-order model is able to exactly replicate (for sufficiently large ) the macroscopic compressive behavior of the RVE, even though no information regarding this deformational state has been supplied to the model in the calibration (training) phase; the HP-ROM is “aware”, figuratively speaking, that the matrix material in the RVE exhibits similar behavior in tension and compression (J2 plasticity).

(1) Recall that the output of interest in solving the cell BVP is the macroscopic stress tensor; thus, the error estimate defined in Eq.(5.8) () provides a more meaningful indication of the quality of the approximation than the stress error measure defined previously in Eq.(5.7) (). The latter is more suited for examining convergence properties of the HP-ROM approximation, since the minimization problem that underlies the SVD is posed in terms of the Frobenius norm. Nevertheless, it is straightforward to show that, actually, is an upper bound for (see Proposition D.1).

5.7.2 Biaxial loading/unloading test

A more severe test for assessing errors associated to the training process is provided by the strain trajectory shown in Figure 16. Indeed, while the training strain histories of Figure 4 only included monotonic, uniaxial stretching, the strain history displayed in Figure 16 consists of a cycle of biaxial, loading/unloading stretching (time steps 1 to 100) and biaxial loading/unloading compression (time steps 101 to 200). The graph of the macroscopic error estimator (5.8) corresponding to this input strain path as a function of the level of truncation is represented in Figure 17.a. It can be readily perceived that, in this case, and in contrast to the situation encountered in the previously discussed input strain trajectory, the macroscopic stress error does not go to zero as the number of POD modes included in the basis increases. Rather, the graph drops sharply from 24% to approximately 5% at (second inelastic mode), and then fluctuates erratically, with no apparent trend, between and –-a level of accuracy that, nevertheless, may be deemed more than acceptable in most practical applications. A clearer picture of the accuracy of the approximation for the particular case of can be obtained from the stress-strain diagrams shown in Figure 18.

Figure 16: Second strain trajectory employed for assessing training errors.
Figure 17: a) Macroscopic error estimator Ẽσ,M^ROM (see Eq.(5.8)) versus level of truncation (nσ = nu) for the testing trajectory shown in Figure 16. b) Local speedup factor Sloc (defined in Eq.(5.9)) reported for this case versus level of truncation. This plot is accompanied by the graph of the ratio ng/p, where ng = 38984 is the total number of Gauss points of the finite element mesh, and p = 2nσ the number of sampling points employed for numerically integrating the HP-ROM.
Figure 18: Longitudinal and transversal macroscopic stress versus longitudinal macroscopic strain computed using the FEM and the HP-ROM with nσ = nu = 6 (for the testing trajectory shown in Figure 16).

5.8 Speedup analysis

Lastly, we turn our attention to one of the main concerns of the present work: the issue of computational efficiency. For a given error level, how many times can the proposed HP-ROM speed up the calculation of the cell response with respect to the reference finite element model? To answer this question, let us define the local speedup factor <math display="inline">S_{loc}</math> as the ratio

<math display="inline">S_{loc} := t_{\mathrm{FE}}/t_{\mathrm{ROM}}</math>          (5.9)

where <math display="inline">t_{\mathrm{FE}}</math> and <math display="inline">t_{\mathrm{ROM}}</math> denote the CPU times required to compute the FE and HP-ROM macro-stress responses, respectively, induced by a given input strain history(1). In Figure 17.b, we show the graph of the speedup factor reported in the case of the input strain path of Figure 16 as a function of the number of POD modes included in the analysis (recall in this respect that <math display="inline">n_{\sigma }= n_u</math> and <math display="inline">p = 2 n_{\sigma }</math>). We also plot in Figure 17.b the ratio <math display="inline">n_g/p</math>, i.e., the relation between the total number of integration points in the finite element model (<math display="inline">n_g = 38984</math>) and in the reduced-order model (<math display="inline">p</math>). It can be gleaned from Figure 17.b that the reported speedup factors are of the same order of magnitude as the ratio <math display="inline">n_g/p</math>; i.e.:

<math display="inline">S_{loc} \approx n_g/p</math>          (5.10)

(this indicates that the evaluation of the stresses at the integration points dominates the total computational cost). Although these results are no doubt influenced and biased by the particular programming language and coding style employed –-we use an in-house, non-vectorized Matlab program running on a Linux platform–- and, consequently, this trend may not be exactly observed when using other programming languages and/or platforms, they serve to give an idea of the tremendous gains in performance that can be achieved using the proposed ROM: for a moderate number of modes, the computational cost is reduced by a factor of the order of <math display="inline">n_g/p</math> (well above one thousand for the mesh employed here), while still capturing 95% of the full-order, high-fidelity information –-the essential information.
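As an illustrative order-of-magnitude check of Eq.(5.10) (using only the mesh data quoted in the caption of Figure 17, not the measured timings): for, say, <math display="inline">n_{\sigma }= 6</math> modes, the number of sampling points is <math display="inline">p = 2 n_{\sigma }= 12</math>, and hence the expected speedup is of the order of <math display="inline">n_g/p = 38984/12 \approx 3.2\cdot 10^{3}</math>.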

Remark 5: The foregoing analysis considers only the relation between speedup factors and error for a varying number of sampling points <math display="inline">p</math> in the reduced-order model and a fixed number of Gauss points <math display="inline">n_g</math> in the finite element model. One may wonder about the reverse behavior, i.e., what would happen if <math display="inline">n_g</math> increases while <math display="inline">p</math> is kept constant. In Reference [36], the authors investigate this issue, and prove that the approximation error is, interestingly, relatively insensitive to mesh refinement; that is, if the mesh is refined by a certain factor, one need not commensurately increase the number of sampling points in the reduced-order model to preserve the same level of accuracy. The striking implication of this property is that the speedup factors provided by the HP-ROM will invariably grow –-at a rate roughly proportional to the factor of refinement–- as the spatial discretization of the RVE becomes finer. Thus, if the mesh shown previously in Figure 3 is refined by a factor of, say, 3, speedup factors above 10000 can conceivably be obtained for approximately the same level of approximation.

(1) The computational cost associated to the offline stage –-generation of snapshots plus the comparatively negligible expense of applying the POD and selecting the sampling points–- has been deliberately excluded from this speedup analysis because, in two-scale homogenization contexts, the cell equilibrium problem has to be solved a huge number of times and, consequently, this overhead cost is quickly amortized.

6 Concluding remarks

It is an unarguable fact that, in constructing a computational model of a given physical system, one's perception of what is essential and what is not is strongly influenced by the capabilities of the computer that will carry out the pertinent simulation: the more powerful the computer, the more details are regarded as essential by the modeler –-at times unconsciously, but very often deliberately, to skip the ever difficult task of concocting appropriate simplifications–- and thus eventually included in the simulation. To reverse this seemingly natural, but regrettable tendency –-it is inexorably conducive to misuse of available computational resources–- we have proposed in the present work, for the particular problem of multiscale homogenization, an approach in which the task of reducing the inherent complexity of the governing equations (in this case, the cell equilibrium equation) is entirely delegated to the computer itself, and hence not affected by the vagaries of the modeler's appreciation.

The proposed approach revolves around the realization that, in two-scale homogenization problems, the fine-scale fields over the RVE induced by the parametric dependence on the input macroscopic strains can be accurately approximated by functions whose dimensionality is relatively low and, furthermore, totally independent of the geometric complexity of the RVE. Central to the approach is therefore the determination of optimal reduced-order spaces for these approximating functions. The optimality of these spaces depends, on the one hand, on the representativeness of the snapshots obtained in the a priori FE analysis, and, on the other hand, on the suitability of the dimensionality reduction tool employed to unveil the dominant modes from these snapshots. In the present work, attention has been focused on the latter aspect: we have developed a novel, partitioned POD that allows the elastic behavior of the RVE to be captured with the same accuracy as the underlying FE model. By contrast, the former aspect (the choice of the macro-strain values at which to obtain the snapshots) has been addressed only on an intuitive basis and should thus receive more careful consideration in future developments. In particular, it would be desirable to systematize this crucial task, as well as to provide some statistical means to certify, so to speak, the representativeness of the chosen snapshots.

One of the most striking features of the proposed theory is perhaps the conceptual simplicity of the cell equilibrium equation in its hyperreduced-order form: the sum of (reduced) internal forces at the pre-selected sampling points must give identical results whether calculated using observed stresses or fitted stresses. Although this condition appears, in hindsight, rather reasonable, even obvious –-it ensures maximum resemblance between reduced-order and full-order responses at the sampling points–- it would require uncommon physical intuition to arrive at it without the benefit of the integration procedure –-based on the notion of an expanded approximation space–- advocated in the present work.

The hyperreduced form of the cell equilibrium equation excels not only in its conceptual simplicity; the corresponding solution scheme is also very simple to implement. Taking an existing FE code as the point of departure, one has only to replace the typical loop over elements by a loop over the pre-selected sampling points. The stress vectors and corresponding constitutive tangent matrices obtained at each stage of the loop are stored in the gappy weighted stress vector and tangent matrix, respectively; the residual vector and the Jacobian matrix are then obtained as simple products of these arrays with the precomputed hyperreduced operators. Notice that no assembly process is needed, nor does one have to worry about imposing boundary conditions. Once convergence is achieved, the macroscopic stress is simply recovered as the product of the corresponding precomputed operator with the gappy stress vector. It should be emphasized again that the operation count in both solving this hyperreduced cell equation and updating the macroscopic stress vector depends exclusively on the reduced dimensions <math display="inline">n_u</math> and <math display="inline">p</math> (number of fluctuation modes and number of sampling points, respectively). Likewise, storage of history data (internal variables) is only required at the pre-selected sampling points. Computational savings accrue, thus, not only in terms of number of operations, but also in terms of memory requirements. A minimal sketch of such an online loop is given below.
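To make the preceding description more concrete, the following is a minimal sketch of such an online Newton loop (written here in Python/NumPy; it is not the authors' Matlab code). All operator names (B, R, S) and the data layout are hypothetical placeholders for the offline-computed hyperreduced operators referred to above, and the constitutive routine is likewise schematic.

import numpy as np

def solve_cell_hprom(eps_macro, ops, material, n_u, tol=1e-8, maxit=25):
    """Newton solution of the hyperreduced cell problem (schematic sketch).

    ops["B"][j], ops["R"][j], ops["S"][j]: hypothetical offline-computed reduced
    operators associated with sampling point j (local strain, reduced residual
    and macro-stress blocks, respectively).
    material[j](strain) -> (stress, tangent): constitutive update at point j.
    """
    q = np.zeros(n_u)                       # reduced fluctuation coefficients
    for _ in range(maxit):
        r = np.zeros(n_u)                   # reduced residual
        J = np.zeros((n_u, n_u))            # reduced Jacobian
        contrib = []                        # (S_j, sigma_j) pairs for the macro-stress
        for Bj, Rj, Sj, mat_j in zip(ops["B"], ops["R"], ops["S"], material):
            strain_j = eps_macro + Bj @ q   # local strain at sampling point j
            sigma_j, C_j = mat_j(strain_j)  # stress and constitutive tangent
            r += Rj @ sigma_j               # sum of reduced internal forces
            J += Rj @ (C_j @ Bj)            # corresponding tangent contribution
            contrib.append((Sj, sigma_j))
        if np.linalg.norm(r) < tol:         # converged: no assembly, no boundary conditions
            break
        q -= np.linalg.solve(J, r)          # Newton update of the modal coefficients
    sigma_macro = sum(Sj @ sig for Sj, sig in contrib)  # macroscopic stress recovery
    return q, sigma_macro

Note how the cost of each iteration scales only with the number of sampling points and the reduced dimension, in line with the discussion above.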

As already mentioned, the hyperreduced matrices appearing in the online solution of the HP-ROM are calculated in the offline stage, prior to the overall multiscale analysis. These operators –-whose dimensions are independent of the size of the underlying finite element mesh–- encode the essential information regarding the geometrical arrangement and interaction of the distinct phases and/or pores at the fine scale; they are, thus, somehow intrinsic to the micro-/meso-structure one wishes to macroscopically model. In the light of this observation, one may venture to think of creating an extensive digital material library in which to include, once their approximating quality has been properly certified, these operators together with the properties (Young's modulus, type of hardening law, etc.) exhibited by the heterogeneous material at the sampling points. Given the inherent modular character of hierarchical multiscale models, any user having a two-scale HP-ROM homogenization code could easily incorporate and leverage the information contained in this library.

A topic we have purposefully refrained from touching upon in this work is the application of model reduction when strain localization (formation of discontinuities) takes place in the RVE. The reason is that homogenization in the presence of such events is a controversial, diffuse subject in its own right –-even the existence of an RVE in such cases is questionable–- and, consequently, a rigorous treatment of it would have taken us too far afield. The most challenging and intriguing questions that the application of model reduction techniques to these problems will surely pose are connected with the aforementioned offline task of determining the dominant deformational and stress modes of the RVE. Can the deformational behavior of a cell affected by multiple propagating cracks also be represented in a parsimonious manner, as in the case of strain hardening? Or will the number of modes necessary to accurately replicate its response increase combinatorially with the number of potential crack propagation paths (i.e., with the geometrical complexity of the RVE)? Research on this front is currently in progress and will be reported in forthcoming publications.

Appendix A. Connection between POD and SVD

We saw in Section 3.1.2 that the POD problem for the displacement fluctuations boils down to the solution of the following eigenvalue problem:

(A.1)

where is the typical finite element “mass matrix” (with density equal to one):

(A.2)

The desired reduced basis matrix is then calculated from the largest eigenvalues and associated eigenvectors of such a problem by the following expression (see Eq. (3.13)):

(A.3)

Proposition 1: Let be the Cholesky decomposition of , and the matrix defined as:

(A.4)

Consider now the (reduced) Singular Value Decomposition (SVD) [79] of this matrix, that is, the factorization:

(A.5)

wherein ( is the rank of ) and stand for the matrices of right and left singular vectors, respectively; and is a diagonal matrix containing the singular values of . Then, the column of the basis matrix defined by Eq.(A.3) can be alternatively calculated as:

(A.6)

being the left singular vector of .

Proof. Post-multiplying both sides of Eq.(A.5) by , one gets:

(A.7)

Pre-multiplying now by , and considering Eq.(A.4), one arrives at:

(A.8)

The proof reduces, thus, to showing that the eigenvectors and eigenvalues of problem (A.1) are indeed the right singular vectors and the square root of the singular values, respectively, of the Singular Value Decomposition (SVD) of . This follows easily by, first, substituting the Cholesky decomposition into Eq.(A.1):

(A.9)

and then inserting Eq.(A.5) into the above equation:

(A.10)

Finally, setting in the above:

(A.11)

which proves, as asserted, that and ().
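For the reader's convenience, the proposition translates into a very short computation. The sketch below (Python/NumPy, illustrative only) assumes the convention in which the mass matrix factorizes as M = L Lᵀ, and returns a basis that is orthonormal in the M-inner product; the names are generic and do not reproduce the paper's notation.

import numpy as np

def pod_basis(X, M, n_modes):
    """POD of the snapshot matrix X (n_dof x n_snp) in the M-inner product."""
    L = np.linalg.cholesky(M)             # M = L @ L.T  (SPD "mass matrix")
    Xw = L.T @ X                          # weighted snapshot matrix
    U, s, _ = np.linalg.svd(Xw, full_matrices=False)   # reduced SVD
    Phi = np.linalg.solve(L.T, U[:, :n_modes])         # i-th mode: L^{-T} u_i
    return Phi, s

# sanity check: Phi.T @ M @ Phi is (numerically) the identity matrix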

Appendix B. Elastic/Inelastic reduced basis matrix

This appendix is devoted to providing further details concerning the actual numerical implementation of the elastic/inelastic partitioned strategy, presented in Section 3.1.2, for the computation of the reduced basis matrices for the displacement fluctuations and the stresses.

B.1 Displacement fluctuations

As pointed out in Section 3.1.2, the essence of this strategy lies in the orthogonal decomposition –-in the sense of the inner product–- of the space of snapshots into elastic () and inelastic () subspaces. This decomposition naturally translates into an orthogonal decomposition of the range of the snapshot matrix , denoted by , into pertinent elastic and inelastic manifolds. According to Eq.(3.8), orthogonality in the case of the displacement fluctuations is in the sense of the inner product induced by the “mass matrix” (see Eq. (3.9)):

(B.1)

The steps to arrive at the desired basis matrix are summarized in the following (a compact code sketch of the whole procedure is provided right after the list).

  1. Compute finite element solutions of the cell equilibrium problem for representative, input macro-strain histories.
  2. Store the displacement fluctuation solutions computed at each time step of these macro-strain trajectories in the displacement fluctuations snapshot matrix :
    (B.2)
  3. Pick from the snapshot matrix a minimum number of linearly independent columns corresponding to purely elastic solutions (6 for 3D problems, and 3 for plane strain). Store these columns in a separate matrix.
  4. Perform the reduced singular value decomposition (SVD) of the matrix defined as
    (B.3)
  5. where is the matrix of the Cholesky factorization of ( ). A basis matrix for is finally obtained as

    (B.4)

    being the matrix of left singular vectors arising from the SVD of . Notice that is columnwise orthogonal in the sense of the inner product defined by Eq.(B.1):

    (B.5)

    As such, it may be used in principle as the desired elastic basis matrix . However, does not enjoy any optimality property with respect to –-it is only optimal with respect to the matrix of chosen elastic snapshots.

  6. For consistency in the approximation, thus, it is preferable to derive from the the “elastic component” of –-the orthogonal projection of onto –-; the expression for this projection reads:
    (B.6)
  7. According to Eq.(A.5) of Proposition A.0.1, the elastic basis matrix can be finally calculated from as:

    (B.7)

    where is the matrix of left singular vectors emerging from the reduced SVD of ; i.e:

    (B.8)
  8. Calculate the “inelastic component” of the snapshot matrix as:
    (B.9)
  9. that is, is the orthogonal projection –-in the sense, again, of Eq.(B.1)–- of onto the orthogonal complement, in , of .

  10. It is now on this inelastic snapshot matrix that we apply the POD in order to identify and unveil the essential or most “energetic” inelastic fluctuation modes. According to Proposition A.0.1 in Appendix A, this is done by first carrying out the reduced SVD of :
    (B.10)
  11. The POD basis vector of is then given by:

    (B.11)
  12. The desired basis matrix adopts finally the form:
    (B.12)
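The steps above can be condensed into a few lines of linear algebra. The following sketch (Python/NumPy, illustrative only; the variable names and the convention M = L Lᵀ are assumptions, not the paper's notation) mirrors the elastic/inelastic partition: project the weighted snapshots onto the span of the elastic snapshots, apply the SVD separately to the elastic and inelastic components, and map the result back through L⁻ᵀ.

import numpy as np

def elastic_inelastic_basis(X, elastic_cols, M, n_inel):
    """Partitioned POD of displacement-fluctuation snapshots (schematic)."""
    L = np.linalg.cholesky(M)                      # M = L @ L.T
    Xw = L.T @ X                                   # weighted snapshots
    Ue, _, _ = np.linalg.svd(Xw[:, elastic_cols], full_matrices=False)
    X_el = Ue @ (Ue.T @ Xw)                        # projection onto the elastic range
    U_el, _, _ = np.linalg.svd(X_el, full_matrices=False)
    U_el = U_el[:, :len(elastic_cols)]             # elastic modes
    X_in = Xw - X_el                               # inelastic component of the snapshots
    U_in, s_in, _ = np.linalg.svd(X_in, full_matrices=False)
    U = np.hstack([U_el, U_in[:, :n_inel]])        # elastic + dominant inelastic modes
    Phi = np.linalg.solve(L.T, U)                  # back to M-orthonormal basis vectors
    return Phi, s_in                               # s_in: inelastic singular values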

B.1.1 Displacement truncation error

Proposition B.1.1: The fluctuation displacement truncation error defined in Eq.(3.4):

(B.13)

can be calculated as

(B.14)

where , and stands for the singular value of the inelastic snapshot matrix (see Eq.(B.10) ).

Proof. We begin the proof by showing that is equal to the Frobenius norm of the difference between the “weighted” snapshot matrix and its low-dimensional approximation . Indeed, following the same logic that led to Eq.(3.8), we get:

(B.15)

where

(B.16)

Inserting now the Cholesky decomposition of in the above equation, we arrive finally at

(B.17)

where .

On the other hand, can be written from Eq.(B.8) and Eq.(B.10) as:

(B.18)

where . Substitution of this decomposition in the expression for yields:

(B.19)

Using Eq.(B.18) and Eq.(B.19) in Eq.(B.17), and exploiting the orthonormality of the left and right singular vectors, one finally obtains:

(B.20)

as asserted.
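For reference, the final step rests on the standard SVD truncation identity: for any matrix <math display="inline">\mathbf{A}</math> with singular values <math display="inline">\sigma _1 \geq \sigma _2 \geq \ldots \geq \sigma _r</math>, the rank-<math display="inline">k</math> truncation <math display="inline">\mathbf{A}_k</math> of its SVD satisfies <math display="inline">\| \mathbf{A}- \mathbf{A}_k \|_F = \sqrt{\sum _{i=k+1}^{r} \sigma _i^2}</math>. Proposition B.1.1 applies this identity to the weighted inelastic snapshot matrix.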

B.2 Stresses

The methodology for constructing the reduced basis matrix for the stresses parallels, in essence, that explained in the preceding discussion. Nevertheless, for completeness, we summarize below the steps to be followed in this case.

  1. Compute finite element stress solutions for representative, input macro-strain histories. The most practical and somehow consistent choice regarding these strain trajectories is to use the same as in the computation of the displacement fluctuations snapshots.
  2. Store the weighted global stress vectors () (see Eq.(4.26)) computed at each time step in the snapshot matrix :
  3. (B.21)
  4. Select from a minimum of linearly independent columns corresponding to purely elastic solutions. Store these columns in a matrix .
  5. Determine an orthogonal basis matrix for as the matrix of left singular vectors, denoted by , arising from the reduced SVD of .
  6. Compute the “elastic component” of , i.e., the orthogonal projection of onto :
    (B.22)
  7. Perform the reduced SVD of to finally arrive at the desired elastic basis matrix :
    (B.23)
  8. Compute the “inelastic” snapshot matrix as
    (B.24)
  9. Perform the SVD of :
    (B.25)
  10. The remaining basis vectors –-the essential or dominant inelastic modes–- are the first columns of the matrix of left singular vectors:

    (B.26)
  11. The desired basis matrix adopts finally the form:
    (B.27)

B.2.1 Generalized elastic/inelastic SVD

For formulation purposes (see Appendix D.2.2), it is convenient to decompose matrix in equation Eq.(B.25) as

(B.28)

Here, denotes the matrix of the inessential inelastic modes, and and their associated matrix of singular values and right singular vectors, respectively. Having decompositions (B.23) and (B.28) at hand, we can finally write the snapshot matrix as:

(B.29)

Appendix C. Block matrix pseudoinverse of the expanded basis matrix

The inverse of a 2 x 2 symmetric block matrix is given by the following expression (see, for instance, Ref. [72]):

(C.1)

where

(C.2)

is the so-called Schur complement. This formula can be used to derive closed-form expressions for the modal coefficients (see Section 4.4.2). The departure point is Eq.(4.59):

(C.3)

where designates the pseudo-inverse of the gappy expanded basis matrix. By setting:

(C.4)

and by inserting Eq.(C.1) into Eq.(C.3), one obtains upon expansion:

(C.5)

and

(C.6)

By substituting back Eq.(C.4) into the above equation, and taking into account that:

(C.7)

one finally gets:

(C.8)

(C.9)

where

(C.10)
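Since the intermediate expressions above may be difficult to follow without the original symbols, the snippet below (Python/NumPy, purely illustrative, with generic matrix names) numerically verifies the standard 2 x 2 block-inverse formula via the Schur complement to which Eqs.(C.1)-(C.2) refer.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 10.0 * np.eye(4)   # SPD upper-left block
B = rng.standard_normal((4, 2))                                   # off-diagonal block
C = rng.standard_normal((2, 2)); C = C @ C.T + 10.0 * np.eye(2)   # SPD lower-right block
K = np.block([[A, B], [B.T, C]])                                  # symmetric block matrix
S = C - B.T @ np.linalg.solve(A, B)                               # Schur complement of A in K
Ai, Si = np.linalg.inv(A), np.linalg.inv(S)
K_inv = np.block([[Ai + Ai @ B @ Si @ B.T @ Ai, -Ai @ B @ Si],
                  [-Si @ B.T @ Ai,               Si         ]])
print(np.allclose(K_inv, np.linalg.inv(K)))                       # expected output: True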

Appendix D. Sampling points selection algorithms

D.1 Upper bound for the macroscopic stress error

Proposition D.1.1: Let be the error estimate defined as (see Section 4.5.1):

(D.1)

Here, denotes the finite element, macroscopic stress solution corresponding to the “training snapshot”, whereas stands for the low-dimensional approximation of , calculated through formula (4.65) using the stress basis matrix and the sampling indices . On the other hand, consider the following decomposition of the weighted snapshot matrix –-this decomposition is derived in Appendix B.2.1:

(D.2)

where denotes the basis matrix of elastic and dominant (or essential) inelastic modes; designates the basis matrix of trailing (or inessential) inelastic modes (recall that ); and, finally, and stand for the singular values and right singular vectors, respectively, associated to . Then, it can be shown that:

(D.3)

where

(D.4)

( is the rank of ), and

(D.5)

with and . The variable in Eq.(D.4) represents an upper bound for the stress truncation error, while is an upper bound for the stress reconstruction error.

Proof. We begin the proof by applying to Eq.(D.1) the Cauchy-Schwarz inequality:

(D.6)

Approximating now the integral in the above equation by Gauss quadrature, and using the weighted global stress vectors introduced in Section 4.3.3, one gets

(D.7)

where

(D.8)

and

(D.9)

The error estimate for the macroscopic stresses defined in Eq.(4.102) is, thus, bounded above by the Frobenius norm of the difference between the (weighted) stress snapshot matrix and its low-dimensional approximation . This bound will be hereafter designated by

(D.10)

The link between and , on the other hand, is established through the reconstruction matrix defined in Section 4.4.2:

(D.11)

where . Inserting decomposition Eq.(D.2) into Eq.(D.11), and exploiting the fact that represents an oblique projection onto –-and therefore –-, one obtains

(D.12)

Substitution of Eqs. (D.12) and (D.2) into expression (D.10) yields:

(D.13)

Since the column space of and are, by construction, mutually orthogonal, it follows that:

(D.14)

Replacing now in the preceding equation by its singular value decomposition, one finally arrives at

(D.15)

The proof is completed by noting that , and are columnwise orthonormal matrices, and, therefore:

(D.16)

as asserted.

Observation 7: From Eq.(D.3) and Eq.(D.4), we have that:

(D.17)

i.e., the bound for the reconstruction error diminishes as the truncation error is reduced. Furthermore, since and , it follows that and , and consequently

(D.18)

It may be inferred from the above that the only contribution to the reconstruction error that can grow unboundedly depending on the chosen sampling points is . A poorly conditioned matrix is likely to give rise to large reconstruction errors. Therefore, one must seek to choose such points so that the columns –-and hence the rows–- of are “as linearly independent as possible”. To put it alternatively, the stress information associated to the chosen sampling points should be as “uncorrelated” as possible. This observation concurs with one's intuitive expectations, for it seems to make little sense to choose, say, spatially close points at which stress responses are expected to be very similar.

Observation 8: Any stress snapshot pertaining to the column space of the stress basis matrix is exactly approximated regardless of the chosen set of admissible sampling points; that is, if then

(D.19)

for all admissible . The proof follows easily from the observation, made earlier when deriving Eq.(D.12), that represents actually an oblique projection onto . A far-reaching implication of this property is that, the elastic stress modes being contained in , the reduced-order model will furnish linear elastic solutions with the same accuracy as the underlying finite element model regardless of the set of admissible sampling points used for reconstructing the stress field.

D.2 Basic sampling points

D.2.1 Hierarchical Interpolation Points Method

The algorithm employed in the present work to deal with the discrete minimization problem (4.109) is inspired by the Hierarchical Interpolation Points (HPI) method proposed by Nguyen et al. [30]. In fact, the only difference with respect to the proposal by Nguyen et al. is the format of the equation for the objective function. In Ref. [30], the reconstruction error is directly calculated as the norm of the difference between the coefficients of the orthogonal projection and the oblique projection (using the selected sampling points) of each snapshot onto the column space of the basis matrix. However, we have just shown in Proposition D.1.1 that the expression for this estimate can be notably simplified by introducing the singular value decomposition of the snapshot matrix; the final result reads (see Eq.(D.5)):

(D.20)

In principle, the summation in the preceding equation should extend over all inessential or trailing modes. In practice, however, there is no need to include all such trailing modes in the evaluation of the objective function: the contribution of each such mode is weighted by its corresponding singular value, whose magnitude is already comparatively small –-this is the very reason why they are deemed inessential–- and, furthermore, decays exponentially as the mode index increases. For practical purposes, including only the first few trailing modes suffices to satisfactorily approximate the objective function:

(D.21)

We call these first inessential basis vectors the effective trailing modes. The advantage that accrues from using Eq.(D.21) instead of the objective function employed in [30] is evident: the operation count in evaluating the objective function becomes independent of the total number of snapshots (which may be arbitrarily large), thus significantly reducing the overall cost of the algorithm.

D.2.2 Proposed algorithm

The essence of the proposed algorithm is to construct, in a greedy fashion(1), the set of indices by solving a sequence of one-dimensional minimization problems. Let us define for this purpose a matrix containing the dominant stress modes and the first trailing stress modes:

(D.22)

Likewise, the matrix of singular values associated to these modes is defined as:

(D.23)

Here, the two blocks are the matrices of singular values corresponding to the dominant and the first trailing inelastic modes, respectively.

The index corresponding to the first basic sampling point is obtained as that minimizing the reconstruction error obtained when only the first dominant mode is used:

(D.24)

Proceeding inductively, the index would be determined as that minimizing the reconstruction error obtained when only the first () dominant modes are included in the basis matrix (note that, in using this algorithm, we are tacitly assuming that ):

(D.25.a)
(D.25.b)
(D.25.c)

where

(D.26)

(1) A greedy method is any algorithm that solves the problem by making the locally optimal choice at each step with the hope of finding the global optimum.
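To fix ideas, the following is a schematic greedy loop of the kind described above (Python/NumPy). It is not the authors' algorithm verbatim: the exact objective of Eqs.(D.20)-(D.26) is not reproduced; instead, each candidate index is scored by how well the effective trailing modes are reconstructed, via gappy least squares onto the dominant modes, with each trailing mode weighted by its singular value. All names are placeholders.

import numpy as np

def greedy_sampling_points(Phi, Psi_trail, s_trail, n_points):
    """Greedy selection of sampling (row) indices; schematic variant only.

    Phi       : (n_rows x n)  dominant stress modes
    Psi_trail : (n_rows x nt) effective trailing modes
    s_trail   : (nt,)         their singular values
    """
    z = []                                          # selected row indices
    for _ in range(n_points):
        best_j, best_err = None, np.inf
        for j in range(Phi.shape[0]):
            if j in z:
                continue
            rows = z + [j]
            # gappy (least-squares) reconstruction of the trailing modes
            coef = np.linalg.lstsq(Phi[rows, :], Psi_trail[rows, :], rcond=None)[0]
            resid = Psi_trail - Phi @ coef
            err = float(np.sum(s_trail**2 * np.sum(resid**2, axis=0)))
            if err < best_err:
                best_j, best_err = j, err
        z.append(best_j)
    return z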

D.3 Complementary sampling points

The heuristic employed for addressing the minimization problem (4.110) is also based on the greedy paradigm. The () index is selected by solving the following, one-dimensional minimization problem:

(D.27.a)
(D.27.b)

i.e.:

(D.28.a)
(D.28.b)


BIBLIOGRAPHY

[1] Ashby, M.F. (1992) "Physical modelling of materials problems", Volume 8. Maney Publishing. Materials Science and Technology 2 102–111

[2] Venkataraman, S. and Haftka, RT. (2004) "Structural optimization complexity: what has Moore’s law done for us?", Volume 28. Springer. Structural and Multidisciplinary Optimization 6 375–387

[3] Thimbleby, H. (1993) "Computerised Parkinson's law", Volume 4. IET. Computing & Control Engineering Journal 5 197–198

[4] Gross, D. and Seelig, T. (2011) "Fracture mechanics: with an introduction to micromechanics". Springer

[5] Hill, R. (1963) "Elastic properties of reinforced solids: some theoretical principles", Volume 11. Elsevier. Journal of the Mechanics and Physics of Solids 5 357–372

[6] Hashin, Z. and Shtrikman, S. (1963) "A variational approach to the theory of the elastic behaviour of multiphase materials", Volume 11. Elsevier. Journal of the Mechanics and Physics of Solids 2 127–140

[7] Mori, T. and Tanaka, K. (1973) "Average stress in matrix and average elastic energy of materials with misfitting inclusions", Volume 21. Elsevier. Acta metallurgica 5 571–574

[8] Bohm, H.J. (1998) "A short introduction to basic aspects of continuum micromechanics", Volume 3. Citeseer. CDL-FMD Report

[9] Zaoui, A. (2002) "Continuum micromechanics: survey", Volume 128. American Society of Civil Engineers. Journal of Engineering Mechanics 8 808–816

[10] Pindera, M.J. and Khatam, H. and Drago, A.S. and Bansal, Y. (2009) "Micromechanics of spatially uniform heterogeneous media: A critical review and emerging approaches", Volume 40. Elsevier. Composites Part B: Engineering 5 349–378

[11] Yuan, Z. and Fish, J. (2008) "Towards Realization of Computational Homogenization in Practice", Volume 73. Wiley Online Library. Int. J. Numer. Meth. Engng 361–380

[12] Geers, M.G.D. and Kouznetsova, VG and Brekelmans, WAM. (2010) "Multi-scale computational homogenization: Trends and challenges", Volume 234. Elsevier. Journal of computational and applied mathematics 7 2175–2182

[13] Feyel, F. and Chaboche, J.L. (2000) "FE-2 multiscale approach for modelling the elastoviscoplastic behaviour of long fibre SiC/Ti composite materials", Volume 183. Elsevier. Computer methods in applied mechanics and engineering 3 309–330

[14] Oden, J.T. and Belytschko, T. and Fish, J. and Hughes, TJ and Johnson, C. and Keyes, D. and Laub, A. and Petzold, L. and Srolovitz, D. and Yip, S. (2006) "Revolutionizing engineering science through simulation". National Science Foundation (NSF), Blue Ribbon Panel on Simulation-Based Engineering Science

[15] Oskay, C. and Fish, J. (2007) "Eigendeformation-based reduced order homogenization for failure analysis of heterogeneous materials", Volume 196. Elsevier. Computer Methods in Applied Mechanics and Engineering 7 1216–1243

[16] Robert W. Batterman. (2011) "The Tyranny of Scales"

[17] Hill, R. (1965) "Continuum micro-mechanics of elastoplastic polycrystals", Volume 13. Elsevier. Journal of the Mechanics and Physics of Solids 2 89–101

[18] Dvorak, GJ and Wafa, AM and Bahei-El-Din, YA. (1994) "Implementation of the transformation field analysis for inelastic composite materials", Volume 14. Springer. Computational Mechanics 3 201–228

[19] Michel, J.C. and Suquet, P. (2003) "Nonuniform transformation field analysis", Volume 40. Elsevier. International journal of solids and structures 25 6937–6955

[20] Michel, J.C. and Suquet, P. (2004) "Computational analysis of nonlinear composite structures using the nonuniform transformation field analysis", Volume 193. Elsevier. Computer methods in applied mechanics and engineering 48-51 5477–5502

[21] Roussette, S. and Michel, J.C. and Suquet, P. (2009) "Nonuniform transformation field analysis of elastic-viscoplastic composites", Volume 69. Elsevier. Composites Science and Technology 1 22–27

[22] Fish, J. and Shek, K. and Pandheeradi, M. and Shephard, M.S. (1997) "Computational plasticity for composite structures based on mathematical homogenization: Theory and practice", Volume 148. Elsevier. Computer Methods in Applied Mechanics and Engineering 1-2 53–73

[23] Maday, Y. and Patera, AT and Turinici, G. (2002) "Reliable real-time solution of parametrized partial differential equations: Reduced-basis output bound methods"

[24] R.D. Cook. (1995) "Finite element modeling for stress analysis". John Wiley and Sons.

[25] Krysl, P. and Lall, S. and Marsden, JE. (2001) "Dimensional model reduction in non-linear finite element dynamics of solids and structures", Volume 51. Wiley Online Library. International Journal for Numerical Methods in Engineering 4 479–504

[26] Salomon, D. (2004) "Data compression: the complete reference". Springer-Verlag New York Incorporated

[27] Bishop, C.M. and SpringerLink (Service en ligne). (2006) "Pattern recognition and machine learning", Volume 4. springer New York

[28] Barrault, M. and Maday, Y. and Nguyen, N.C. and Patera, A.T. (2004) "An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations", Volume 339. Elsevier. Comptes Rendus Mathematique 9 667–672

[29] Grepl, M.A. and Maday, Y. and Nguyen, N.C. and Patera, A.T. (2007) "Efficient reduced-basis treatment of nonaffine and nonlinear partial differential equations", Volume 41. edpsciences. org. Mathematical Modelling and Numerical Analysis 3 575–605

[30] Nguyen, NC and Patera, AT and Peraire, J. (2008) "A best points interpolation method for efficient approximation of parametrized functions", Volume 73. Wiley Online Library. Int. J. Numer. Meth. Engng 521–543

[31] Chaturantabut, S. and Sorensen, D.C. (2010) "Discrete empirical interpolation for nonlinear model reduction". IEEE. Decision and Control, 2009 held jointly with the 2009 28th Chinese Control Conference. CDC/CCC 2009. Proceedings of the 48th IEEE Conference on 4316–4321

[32] Astrid, P. (2004) "Reduction of process simulation models: a proper orthogonal decomposition approach". Technische Universiteit Eindhoven

[33] An, S.S. and Kim, T. and James, D.L. (2009) "Optimizing cubature for efficient integration of subspace deformations", Volume 27. NIH Public Access. ACM transactions on graphics 5 165

[34] Kim, T. and James, D.L. (2009) "Skipping steps in deformable simulation with online model reduction". ACM. ACM SIGGRAPH Asia 2009 papers 1–9

[35] J. D. Hoffman. (2001) "Numerical Methods for Engineers and Scientists". Marcel Dekker

[36] J. A. Hernández and J. Oliver and A.E. Huespe and M. Caicedo. (2012) "High-performance model reduction procedures in multiscale simulations". CIMNE

[37] Boyaval, S. (2007) "Reduced-basis approach for homogenization beyond the periodic setting". Arxiv preprint math/0702674

[38] Yvonnet, J. and He, Q.C. (2007) "The reduced model multiscale method (R3M) for the non-linear homogenization of hyperelastic media at finite strains", Volume 223. Elsevier. Journal of Computational Physics 1 341–368

[39] Monteiro, E. and Yvonnet, J. and He, QC. (2008) "Computational homogenization for nonlinear conduction in heterogeneous materials using model reduction", Volume 42. Elsevier. Computational Materials Science 4 704–712

[40] Nguyen, NC. (2008) "A multiscale reduced-basis method for parametrized elliptic partial differential equations with multiple scales", Volume 227. Elsevier. Journal of Computational Physics 23 9807–9822

[41] Efendiev, Yalchin and Galvis, Juan and Gildin, Eduardo. (2012) "Local–global multiscale model reduction for flows in high-contrast heterogeneous media", Volume 231. Elsevier. Journal of Computational Physics 24 8100–8113

[42] Efendiev, Yalchin and Galvis, Juan and Thomines, Florian. (2012) "A systematic coarse-scale model reduction technique for parameter-dependent flows in highly heterogeneous media and its applications", Volume 10. SIAM. Multiscale Modeling & Simulation 4 1317–1343

[43] Abdulle, Assyr and Bai, Yun. (2012) "Reduced basis finite element heterogeneous multiscale method for high-order discretizations of elliptic homogenization problems", Volume 231. Elsevier. Journal of Computational Physics 21 7014–7036

[44] Abdulle, Assyr and Bai, Yun. (2013) "Adaptive reduced basis finite element heterogeneous multiscale method", Volume 257. Elsevier. Computer Methods in Applied Mechanics and Engineering 203–220

[45] Drago, A. and Pindera, M.J. (2007) "Micro-macromechanical analysis of heterogeneous materials: Macroscopically homogeneous vs periodic microstructures", Volume 67. Elsevier. Composites science and technology 6 1243–1263

[46] Miehe, C. and Schotte, J. and Schroder, J. (1999) "Computational micro-macro transitions and overall moduli in the analysis of polycrystals at large strains", Volume 16. Elsevier. Computational Materials Science 1-4 372–382

[47] de Souza Neto, EA and Feijóo, RA. (2006) "Variational foundations of multi-scale constitutive models of solid: small and large strain kinematical formulation", Volume 16. LNCC Research & Development Report

[48] Kouznetsova, V.G. (2002) "Computational homogenization for the multi-scale analysis of multi-phase materials". Technische Universiteit Eindhoven

[49] Michel, JC and Moulinec, H. and Suquet, P. (1999) "Effective properties of composite materials with periodic microstructure: a computational approach", Volume 172. Elsevier. Computer methods in applied mechanics and engineering 1-4 109–143

[50] Kanit, T. and Forest, S. and Galliet, I. and Mounoury, V. and Jeulin, D. (2003) "Determination of the size of the representative volume element for random composites: statistical and numerical approach", Volume 40. Elsevier. International Journal of Solids and Structures 13 3647–3679

[51] Nguyen, V.P. and Lloberas-Valls, O. and Stroeven, M. and Sluys, L.J. (2010) "On the existence of representative volumes for softening quasi-brittle materials-a failure zone averaging scheme". Elsevier. Computer Methods in Applied Mechanics and Engineering

[52] B. D. Reddy. (1998) "Introductory Functional Analysis". Springer-Vedag

[53] Giusti, SM and Blanco, PJ and de Souza Neto, EA and Feijóo, RA. (2009) "An assessment of the Gurson yield criterion by a computational multi-scale approach", Volume 26. Emerald Group Publishing Limited. Engineering Computations 3 281–301

[54] Couégnat, Guillaume. (2008) "Approche multiéchelle du comportement mécanique de matériaux composites a renfort tissé". Université Sciences et Technologies-Bordeaux I

[55] J. Lubliner. (1990) "Plasticity Theory". McMillan

[56] Rozza, G. (2009) "Reduced basis methods for Stokes equations in domains with non-affine parameter dependence", Volume 12. Springer. Computing and Visualization in Science 1 23–35

[57] Hu, Y.H. and Hwang, J.N. and Perry, S.W. (2002) "Handbook of neural network signal processing", Volume 111. The Journal of the Acoustical Society of America 2525

[58] Bui-Thanh, T. (2007) "Model-constrained optimization methods for reduction of parameterized large-scale systems". Citeseer

[59] Bui-Thanh, T. and Willcox, K. and Ghattas, O. (2008) "Model reduction for large-scale systems with high-dimensional parametric input space", Volume 30. Citeseer. SIAM Journal on Scientific Computing 6 3270–3288

[60] Carlberg, K. and Farhat, C. (2008) "A Compact Proper Orthogonal Decomposition Basis for Optimization-Oriented Reduced-Order Models", Volume 5964. AIAA Paper 10–12

[61] Kunisch, K. and Volkwein, S. (2010) "Optimal snapshot location for computing POD basis functions", Volume 44. ESAIM: Mathematical Modelling and Numerical Analysis 3 509

[62] Rowley, C.W. and Colonius, T. and Murray, R.M. (2004) "Model reduction for compressible flows using POD and Galerkin projection", Volume 189. Elsevier. Physica D: Nonlinear Phenomena 1-2 115–129

[63] Smith, L.I. (2002) "A tutorial on principal components analysis", Volume 51. Cornell University, USA 52

[64] Carlberg, K. and Farhat, C. (2011) "A low-cost, goal-oriented ‘compact proper orthogonal decomposition’basis for model reduction of static systems", Volume 86. Wiley Online Library. International Journal for Numerical Methods in Engineering 3 381–402

[65] A. Quarteroni and R. Sacco and F. Saleri. (2000) "Numerical Mathematics". Springer

[66] Astrid, P. and Weiland, S. and Willcox, K. and Backx, T. (2008) "Missing point estimation in models described by proper orthogonal decomposition", Volume 53. IEEE. Automatic Control, IEEE Transactions on 10 2237–2251

[67] Maday, Y. and Nguyen, N.C. and Patera, A.T. and Pau, G.S.H. (2007) "A general, multipurpose interpolation procedure: the magic points"

[68] Chaturantabut, Saifon and Sorensen, Danny C. (2011) "Application of POD and DEIM on dimension reduction of non-linear miscible viscous fingering in porous media", Volume 17. Taylor & Francis. Mathematical and Computer Modelling of Dynamical Systems 4 337–353

[69] Galbally, D. and Fidkowski, K. and Willcox, K. and Ghattas, O. (2010) "Non-linear model reduction for uncertainty quantification in large-scale inverse problems", Volume 81. John Wiley & Sons. International Journal for Numerical Methods in Engineering 12 1581–1608

[70] Everson, R. and Sirovich, L. (1995) "Karhunen–Loeve procedure for gappy data", Volume 12. OSA. Journal of the Optical Society of America A 8 1657–1664

[71] DeVore, R.A. and Iserles, A. and Suli, E. (2001) "Foundations of computational mathematics". Cambridge Univ Pr

[72] Boyd, S.P. and Vandenberghe, L. (2004) "Convex optimization". Cambridge Univ Pr

[73] Ryckelynck, D. (2005) "A priori hyperreduction method: an adaptive approach", Volume 202. Elsevier. Journal of computational physics 1 346–366

[74] Ryckelynck, D. (2009) "Hyper-reduction of mechanical models involving internal variables", Volume 77. John Wiley & Sons. International Journal for Numerical Methods in Engineering 1 75–89

[75] Montgomery, D.C. and Runger, G.C. (2010) "Applied statistics and probability for engineers". Wiley

[76] Lovasz, L. and Pelikan, J. and Vesztergombi, K. (2003) "Discrete Mathematics: Elementary and Beyond". Springer

[77] J. C. Simo and T. J. R. Hughes. (1998) "Computational inelasticity". Springer

[78] Carlberg, K. and Bou-Mosleh, C. and Farhat, C. (2011) "Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations", Volume 86. Wiley Online Library. International Journal for Numerical Methods in Engineering 2 155–181

[79] Hogben, L. (2006) "Handbook of linear algebra". Chapman & Hall/CRC

[80] Carlberg, K., Cortial, J., Amsallem, D., Zahr, M. and Farhat, C. (2011) "The gnat nonlinear model reduction method and its application to fluid dynamics problems", In 6th AIAA Theoretical Fluid Mechanics Conference, Honolulu, Hawaii, June, volume 2730, pages 2011–3112


Document information

Published on 01/01/2014

Licence: CC BY-NC-SA license
