Abstract

In order to impact physical mechanical system design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages of the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, as full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods avoid the need for time-consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component/subassembly validation data, which is valuable for qualification. These methods leverage the fact that many engineered systems are inherently modular, being composed of a hierarchy of components and subassemblies that are individually modified or replaced to define new system designs. By doing so, these methods enable rapid model development and the incorporation of uncertainty quantification earlier in the design process. The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models, which exchange solution information at component boundaries. We present a pair of approaches for propagating uncertainty in this type of decomposed system and provide implementations in the form of an open-source software library. We demonstrate these tools on a variety of applications and examine the impact of problem-specific details on the performance and accuracy of the resulting UQ analysis. This work represents the most comprehensive investigation of these network uncertainty propagation methods to date.

1 Introduction

The majority of systems in science and engineering are intentionally designed as collections of components and subassemblies connected in a tree-like structure. This type of design is highly advantageous as it allows for interoperability of components, parallel design efforts, and straightforward approaches to quality assurance. Unfortunately, the most common approaches to modeling and simulation in support of the design of these systems are not similarly constructed. Often, these modeling efforts are interested in system-level quantities of interest (QoIs) and the entire system is treated in a monolithic way. This approach necessitates an undesirable tradeoff: either this monolithic system model is extremely complicated and expensive to construct and evaluate, or it involves significant simplifying assumptions, such as neglecting physics and/or geometric features of components and subassemblies. This latter approach of using simplified representations of components and subassemblies is the most common.

While in many cases, these simplifying assumptions are justified and defensible, the impact that these simplifications have on system-level QoIs is difficult to quantify, representing a challenge for high consequence applications for which uncertainty quantification (UQ) is a required piece of the qualification process. Additionally, these simplified models represent a break from the more detailed models commonly used for the design and qualification of individual components and subassemblies. This means that the monolithic system-level model must be validated against system-level test data, which are often very expensive to generate and thus sparse. System-level validation is also inherently challenging because it is difficult to determine the source of any discrepancy. Hierarchical validation allows for discrepancy sources to be more easily isolated. Additional negative consequences of relying exclusively on system-level validation data are increased uncertainty in the predictions of the system-level model and reduced confidence in the resulting predictions.

Researchers have developed a variety of methods to address these challenges [1–3]. In this work, we provide efficient implementations of two of these methods in the form of an open-source software library, pynetuq. We publish our open-source tools for uncertainty propagation in networks and remark that these tools are extensible to be used with any simulation tool (including arbitrary surrogate modeling tools) through the construction of a simple Python interface class. Additional interface classes for a variety of simulation tools are currently under active development. We evaluate the convergence behavior of these network UQ problems and the impact of system-level boundary condition types on this network performance. Finally, we present demonstration uncertainty propagation results for a series of increasingly complex exemplar applications.

The engineering exemplar problems used for demonstration in this work are finite element models, solving either thermal or mechanical responses to stimuli external to the system. QoIs for such exemplars are either thermal or mechanical state variables, such as temperatures at locations within the system or spatial deflections. Such state variables are often of interest for full-system analyses because they can indicate when a component has reached the edge of its designed performance window. Understanding these responses often requires a full-system model because the response of one component typically depends on the responses of its neighbors. Uncertain variables in these systems could include material properties such as thermal conductivity, specific heat, and emissivity, or geometric tolerances; however, only boundary conditions are treated as uncertain in this work.

The remainder of the document will be organized as follows. Section 2 will introduce the mathematics underpinning the network-based UQ approach, how domain decomposition is utilized, and the implementation of those two concepts for this work. Engineering examples of increasing complexity will then be used in Sec. 3 to demonstrate the approach's performance for different boundary conditions and equation sets. The UQ approach will also be demonstrated on a large scale exemplar engineering system within this section to show how the approach can apply to relevant applications. Lastly, conclusions and future plans for the UQ approach will be presented in Sec. 4.

2 Methods

We have implemented the network UQ method [1,4] in an extensible Python library (pynetuq), which interfaces with a variety of physics simulation tools to evaluate component UQ problems. Although the network UQ method is agnostic to the functional representation used for the underlying random variables, we will restrict ourselves to polynomial chaos expansions (PCEs) [5–8].

In the sections to follow, we will outline the approach taken and provide some details on the implementation. Section 2.1 will provide basic background on polynomial chaos and introduce some relevant notation. Section 2.2 will then describe the ways in which the domain is decomposed and the two relevant uncertainty propagation methods implemented in pynetuq. Finally, Sec. 2.3 will contain a discussion on implementation details.

2.1 Polynomial Chaos Expansions.

In the broadest terms, we seek to determine a probabilistic characterization of the system response, QoI, to uncertainty in parameter values, $u$. The parameter values are parameterized using the germ, $\xi = \{\xi_1, \xi_2, \ldots\}$, a vector of independent random variables (RVs), such that
$$u \equiv u(\xi) \qquad (1)$$
When using PCEs, we represent the dependence of the solution on the germ through the series
$$\mathrm{QoI}(\xi) \approx \sum_k c_k \Psi_k(\xi) \qquad (2)$$
where the $\Psi_k$'s are suitably chosen orthogonal functionals of the random variables and $c_k$ are deterministic coefficients given by
$$c_k = \frac{\langle \mathrm{QoI}(\xi)\,\Psi_k(\xi) \rangle}{\langle \Psi_k^2(\xi) \rangle} \qquad (3)$$
with $\langle \cdot \rangle$ an appropriate inner product. The choice of $\Psi$ depends upon the distribution of $\xi$. For the examples to follow, Hermite polynomials are used, which is appropriate for cases where $\xi$ is Gaussian. We approximate the inner product in the numerator using nonintrusive spectral projection (NISP) and quadrature integration
$$\langle \mathrm{QoI}(\xi)\,\Psi_k(\xi) \rangle = \int \mathrm{QoI}(\xi)\,\Psi_k(\xi)\,\pi(\xi)\,d\xi \approx \sum_q \mathrm{QoI}(\xi^{(q)})\,\Psi_k(\xi^{(q)})\,w_q \qquad (4)$$
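To make the projection in Eqs. (2)–(4) concrete, the following minimal sketch performs NISP for a scalar QoI that depends on a single standard-normal germ, using NumPy's probabilists' Hermite utilities. It is purely illustrative: pynetuq delegates quadrature generation and projection to UQTk, and the function name and signature here are our own.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def nisp_pce_coefficients(qoi_fn, order=3, n_quad=4):
    """Project a scalar QoI(xi) onto probabilists' Hermite polynomials for a
    single standard-normal germ xi, approximating Eq. (4) by quadrature."""
    nodes, weights = hermegauss(n_quad)        # rule for the weight exp(-xi^2 / 2)
    weights = weights / sqrt(2.0 * pi)         # renormalize to the N(0, 1) density
    samples = np.array([qoi_fn(x) for x in nodes])   # deterministic solves at xi^(q)
    coeffs = np.empty(order + 1)
    for k in range(order + 1):
        basis_k = np.zeros(k + 1)
        basis_k[k] = 1.0
        psi_k = hermeval(nodes, basis_k)       # Psi_k evaluated at the quadrature nodes
        coeffs[k] = np.dot(samples * psi_k, weights) / factorial(k)   # <Psi_k^2> = k!
    return coeffs

# A linear QoI is recovered exactly: coefficients ~ [750, 20, 0, 0]
print(nisp_pce_coefficients(lambda xi: 750.0 + 20.0 * xi))
```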

This process is visualized for a single-component system in Fig. 1(a). The sampling step corresponds to the selection of the quadrature points $\xi^{(q)}$ in Eq. (4) and the deterministic solutions correspond to the evaluations of $\mathrm{QoI}(\xi^{(q)})$. For domain-decomposed problems, this process becomes somewhat more complicated, as shown in Figs. 1(b) and 1(c), depending upon the approach taken.

Fig. 1: Graphical illustrations of the different approaches used for uncertainty propagation in this paper: (a) monolithic, (b) deterministic domain decomposition, and (c) NetUQ

2.2 Model Decomposition.

In this section, we will describe the domain decomposition approach used and two different uncertainty propagation algorithms applicable to domain-decomposed problems. Our approach is analogous to domain decomposition methods developed for parallel computing [9,10] but is implemented in a less intrusive manner. The ideal monolithic system model is described by $M: u \mapsto \mathrm{QoI}$. As previously described, this system model is generally impractical to construct for complex systems. Instead, the system model is decomposed into a set of $n$ local models $M_i: (y_i, u_i) \mapsto x_i,\ i = 1, \ldots, n$. We define $u_i \subseteq u$ to be the set of exogenous input parameters relevant to component $i$ and $y_i$ to be the corresponding set of endogenous inputs. The endogenous inputs are derived from the outputs $x_i$ of neighboring components (e.g., interface conditions).

For all of the results in this paper, an overlapping domain decomposition technique with Dirichlet–Dirichlet coupling terms is utilized. However, the pynetuq software is agnostic to the domain decomposition scheme used, working instead at a higher level of abstraction (inputs/outputs) and leaving the coupling details to the individual component models. As such, the library is also applicable to nonoverlapping domain decomposition techniques with Dirichlet–Neumann or Robin–Robin coupling terms. More information on the specific coupling scheme used is provided in Sec. 2.3.

There are two uncertainty propagation methods applicable to systems decomposed in this way. The first approach, which we will term deterministic domain decomposition (deterministic DD), effectively decouples the domain decomposition algorithm from the uncertainty propagation algorithm. The second approach, which we term network uncertainty quantification (NetUQ), combines the domain decomposition and uncertainty propagation algorithms in order to realize improved efficiency.

2.2.1 Uncertainty Propagation Via Deterministic Domain Decomposition.

Deterministic domain decomposition (deterministic DD) is an approach for simulating a large complex physical system that has been spatially decomposed into a network of interconnected components. The components exchange information along their interfaces and the full-system solution is achieved through iteration. In our implementation, the forward problems at each iteration for all components are solved simultaneously using the additive Schwarz method [9,11].

Uncertainty propagation for deterministic DD is performed in exactly the same way as when the ideal monolithic system model is available. The only difference is the substitution of the alternative deterministic solution method used for evaluating the simulation outputs at each sample point. This is seen visually by comparing Figs. 1(a) and 1(b). The domain decomposition is deterministic and the uncertainty propagation algorithm wraps the deterministic system solution procedure.

In pynetuq, we use a PCE to represent the uncertainty in any QoIs and utilize a quadrature rule for determining the sample points. pynetuq leverages UQTk [12] to generate either full or sparse quadrature rules for evaluating Eq. (4). Since the UQ method wraps around the deterministic DD algorithm, a single global quadrature rule for integrating over $\xi$ is defined for the entire system and a separate deterministic DD iterative solution must be performed at each quadrature point, $\xi^{(q)}$.

2.2.2 Uncertainty Propagation Via NetUQ.

In the network UQ (NetUQ) method illustrated in Fig. 1(c), the interfaces between components are treated as stochastic rather than deterministic [1,4]. We define $u_i \subseteq u$ to be the set of exogenous input random variables relevant to component $i$ and $y_i$ to be the corresponding set of endogenous input random variables. The endogenous inputs are derived from the outputs $x_i$ of neighboring components (e.g., interface conditions). Note that in contrast to the deterministic DD approach, $x_i$ and $y_i$ are treated as stochastic in the NetUQ approach.

We represent each random variable using a PCE. In order to propagate uncertainty through each component, we define a local quadrature rule for integrating over $\xi$ for each component. The local quadrature rule allows for different stochastic representations for different components, adding flexibility and potentially reducing the number of simulations required. These quadrature points $\xi^{(q)}$ define samples for both the exogenous inputs $u_i^{(q)} = \sum_k (u_i)_k \Psi_k(\xi^{(q)})$ and endogenous inputs $y_i^{(q)} = \sum_k (y_i)_k \Psi_k(\xi^{(q)})$. The forward problem $M_i$ is then solved for each quadrature point and the resulting outputs $x_i^{(q)}$ are used to update the PCE coefficients using Galerkin projection as in Eq. (3).
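The per-component projection just described can be sketched as follows for a single shared standard-normal germ. This is a simplification for readability (in general the germ is multidimensional and each component may carry its own quadrature rule), and the function is our own illustration rather than the pynetuq API.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def component_update(model, u_coeffs, y_coeffs, order=3, n_quad=4):
    """One NetUQ component evaluation: sample the exogenous (u) and endogenous (y)
    input PCEs at the local quadrature points, run the deterministic component
    model at each sample, and project the output back onto the Hermite basis."""
    nodes, weights = hermegauss(n_quad)
    weights = weights / sqrt(2.0 * pi)                      # integrate against N(0, 1)
    x_samples = np.array([
        model(hermeval(xi, u_coeffs), hermeval(xi, y_coeffs))   # M_i(u_i^(q), y_i^(q))
        for xi in nodes
    ])
    return np.array([
        np.dot(x_samples * hermeval(nodes, np.eye(order + 1)[k]), weights) / factorial(k)
        for k in range(order + 1)
    ])

# Example: a linear "component" x = 0.5 * (u + y) with u ~ N(1000, 40^2)
u_pce = np.array([1000.0, 40.0, 0.0, 0.0])
y_pce = np.array([500.0, 20.0, 0.0, 0.0])     # current PCE of the endogenous input
print(component_update(lambda u, y: 0.5 * (u + y), u_pce, y_pce))  # ~[750, 30, 0, 0]
```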

The network uncertainty propagation problem for the full system may be written as a fixed-point problem. We stack up the inputs and outputs from each component, $\tilde{u} := [u_1^T \cdots u_n^T]^T$, $\tilde{y} := [y_1^T \cdots y_n^T]^T$, and $\tilde{x} := [x_1^T \cdots x_n^T]^T$, and define a full-system uncertainty propagation operator $\tilde{M}: (\tilde{y}, \tilde{u}) \mapsto \tilde{x}$. The endogenous outputs of each component are routed to the appropriate endogenous inputs of neighboring components. These relationships are encoded into the adjacency matrix $I_{xy} \in \{0,1\}^{n_y \times n_x}$, which satisfies
$$\tilde{y} = I_{xy}\,\tilde{x} \qquad (5)$$
where $n_x$ and $n_y$ are the total numbers of endogenous output and input values, respectively, across all $n$ components. The result is the fixed-point problem
$$\tilde{x} = \tilde{M}(I_{xy}\,\tilde{x},\, \tilde{u}) \qquad (6)$$

which can be solved via Jacobi iteration.
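A schematic of this fixed-point solve, including the relative step-size test used as the convergence criterion in Sec. 2.3 (Eq. (7)) and the stall check, is sketched below. It is a generic illustration (the routing $I_{xy}$ is assumed to be applied inside the full-system operator, mirroring how pynetuq never forms the adjacency matrix explicitly) and not the library's actual interface.

```python
import numpy as np

def jacobi_fixed_point(M_tilde, u_tilde, x0, tol=1e-5, max_iter=100, max_stall=10):
    """Solve x = M_tilde(I_xy x, u) by Jacobi iteration: every component is
    updated once per outer iteration (these updates can run in parallel)."""
    x = np.asarray(x0, dtype=float)
    best_step, stalled = np.inf, 0
    for it in range(max_iter):
        x_new = M_tilde(x, u_tilde)                       # all component solves at once
        step = np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-30)
        x = x_new
        if step < tol:                                    # relative criterion, Eq. (7)
            return x, it + 1
        if step < best_step:
            best_step, stalled = step, 0
        else:
            stalled += 1
            if stalled >= max_stall:                      # stop if the step stops shrinking
                break
    return x, it + 1
```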

In our implementation, we never explicitly form the adjacency matrix. Rather, it is represented implicitly in the boundary conditions specified for each component model Mi. In this way, the pynetuq library operates only in terms of component inputs and outputs.

As long as the fixed-point operator is a contraction, the fixed-point iterations generated by the Jacobi algorithm converge q-linearly [1]. This is rarely competitive with other solution methods for nonlinear equations (e.g., Newton's method). Thus, relaxation methods [13–15] are typically used only where these types of solvers are not applicable.

In this work, we seek to improve the convergence rate by employing Anderson acceleration [16–18]. Anderson acceleration works by modifying the fixed-point-iteration updates in a way that is similar to the generalized minimum residual (GMRES) method for linear problems [19] and multisecant methods for nonlinear problems [17].
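For reference, a compact version of Anderson acceleration applied to a generic fixed-point map $x \leftarrow g(x)$, following the difference formulation of Walker and Ni [19], is sketched below. The memory size and the absence of damping are illustrative choices, and this is not the pynetuq implementation.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-5, max_iter=100):
    """Anderson-accelerated fixed-point iteration: extrapolate each update using a
    small least-squares combination of the last m residual differences."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                                     # fixed-point residual
    dG, dF = [], []                                # histories of update/residual differences
    for it in range(max_iter):
        if np.linalg.norm(f) / max(np.linalg.norm(x), 1e-30) < tol:
            return x, it
        if dF:
            gamma, *_ = np.linalg.lstsq(np.column_stack(dF), f, rcond=None)
            x_new = gx - np.column_stack(dG) @ gamma
        else:
            x_new = gx                             # plain fixed-point step to start
        gx_new = g(x_new)
        f_new = gx_new - x_new
        dF.append(f_new - f)
        dG.append(gx_new - gx)
        if len(dF) > m:                            # truncate the memory window
            dF.pop(0)
            dG.pop(0)
        x, gx, f = x_new, gx_new, f_new
    return x, max_iter

# Example: solve x = cos(x); converges to ~0.739 in a handful of iterations.
print(anderson_fixed_point(np.cos, np.array([1.0])))
```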

2.3 Implementation.

For the results to follow, we exclusively use an overlapping domain decomposition technique with Dirichlet boundary conditions imposed on both sides of the interface. As an example, consider a system with two components, $M_1$ and $M_2$, which operate on the domains shown in Fig. 2. The boundary conditions at $x_{L,1}$ and $x_{R,2}$, along with any other parameters, comprise potential exogenous inputs $u$. The boundary conditions at $x_{R,1}$ and $x_{L,2}$ are endogenous inputs to $M_1$ and $M_2$, respectively.

Fig. 2: Schematic of overlapping domain decomposition strategy

For deterministic DD, these are simply the solution variables of the other component(s) evaluated at the interface locations. For NetUQ, these are the PCE representations of the solution variables of the other component(s) evaluated at the interface locations and sampled at the appropriate quadrature point.

In the pynetuq implementation, each component model is responsible for extracting its own endogenous boundary conditions from the outputs of each neighboring component. At each iteration, the library provides a dummy version of these neighbors' outputs populated with the appropriate information for each required point in probability space and executes the component model. These component model evaluations are all performed in parallel within each network iteration.
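The kind of interface a component model exposes to the network driver can be sketched as the following abstract class. The class and method names here are hypothetical and intended only to convey the division of responsibilities described above; the actual pynetuq interface may differ.

```python
from abc import ABC, abstractmethod

class ComponentModel(ABC):
    """Hypothetical component interface: the driver supplies sampled inputs and
    neighbor outputs, and the component runs its own solver and returns outputs."""

    @abstractmethod
    def set_exogenous_inputs(self, u_sample):
        """Apply sampled exogenous parameters/boundary conditions for one point
        in probability space."""

    @abstractmethod
    def set_endogenous_inputs(self, neighbor_outputs):
        """Extract this component's interface boundary conditions from the
        neighbors' outputs provided by the driver."""

    @abstractmethod
    def solve(self):
        """Run the deterministic forward problem (e.g., a finite element solve)."""

    @abstractmethod
    def get_outputs(self):
        """Return the interface fields that neighboring components will read at
        the next network iteration."""
```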

Iterations continue until either a convergence tolerance is met,
$$\frac{\| \tilde{x}_{i+1} - \tilde{x}_i \|_2}{\| \tilde{x}_i \|_2} < \varepsilon \qquad (7)$$

or until the step size fails to decrease for ten consecutive iterations. In practice, $\| \tilde{x}_i \|_2$ will only approach zero in degenerate cases where the solution is trivial. For the results to follow, a convergence tolerance of $\varepsilon = 10^{-5}$ is used.

As will be seen, an important parameter for transient simulations is the output frequency. Since the pynetuq library couples adjacent components using these output files, the output frequency must be sufficient for time accuracy. Infrequent outputs result in temporally underresolved boundary conditions for neighboring components. Since each component manages its own time-stepping scheme and output frequency, care must be taken to ensure that sufficiently resolved transient information is made available to the network.
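The sketch below illustrates this coupling-in-time and why a coarse output frequency under-resolves fast transients; the boundary-condition profile and the output interval are invented for illustration only.

```python
import numpy as np

def interface_value(t_query, out_times, out_values):
    """Linearly interpolate a neighbor's output history to the time at which a
    component needs a boundary-condition value."""
    return np.interp(t_query, out_times, out_values)

# A rapidly rising boundary temperature written to file only every 100 s is
# noticeably under-resolved at early times.
true_bc = lambda t: 1000.0 - 700.0 * np.exp(-t / 25.0)
coarse_times = np.arange(0.0, 501.0, 100.0)
print(interface_value(50.0, coarse_times, true_bc(coarse_times)))  # ~644 (interpolated)
print(true_bc(50.0))                                               # ~905 (actual)
```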

3 Results

Since monolithic system models are difficult to construct, we have chosen simple example problems for which $M$ is easily constructed, or we have used the deterministic domain decomposition method described in Sec. 2.2.1 as a substitute, for the results to follow. In Sec. 3.1, we analyze the effect that different boundary conditions have on the network convergence for thermal and mechanical problems by focusing on simple 1D domains. In Sec. 3.2, we extend this discussion to the propagation of uncertainty in a set of increasingly complex thermal systems. The first is a 1D steady-state linear heat conduction problem, the second is a 1D transient heat conduction problem, and the third is a complex system transient thermal model with multiple sources of nonlinearity.

3.1 Boundary Condition Effects on Network Convergence.

In this section, we apply the network approach to a set of thermal and mechanical problems to analyze the convergence behavior. Both problems are pseudo-1D in that they are mathematically one-dimensional but are analyzed using a 3D finite element mesh. Both problems are also linear and transient. In all cases, we will apply a fixed Dirichlet boundary condition to one side and vary the boundary condition applied to the other side in order to determine what effect, if any, the boundary condition has on the network convergence behavior.

3.1.1 Mechanical Network Results.

For this first example, we analyze a linear elastic cantilevered beam with a Young's modulus of $10^6$ and a Poisson's ratio of 1/4. The beam is 10 units long with a constant square cross section of unit area. One end of the beam ($x = 0$) is held fixed. Dirichlet and Neumann boundary conditions were applied to the other (free, $x = 10$) end of the beam in order to investigate the efficiency of the network for simple systems. The governing equations are given by
$$\left(EI\,u_{xx}\right)_{xx} + m\,u_{tt} = 0, \qquad u(0,t) = 0, \qquad u_x(0,t) = 0 \qquad (8)$$

with Young's modulus $E$, mass per unit length $m$, and second moment of area $I$. For this geometry, $I = 1/12$.

The Dirichlet boundary condition applied is a linear ramp from zero displacement to the maximum tip displacement over 2 ms, such that
$$u(L,t) = u_{\max}\,\frac{t}{2\ \mathrm{ms}} \qquad (9)$$
The prescribed displacement is applied quasi-statically. The solution, in this case, is given by
$$u(x,t) = u(L,t)\,\frac{x^2\,(3L - x)}{2L^3} \qquad (10)$$
The Neumann boundary condition applied is a constant load perpendicular to the beam, such that
$$\left(EI\,u_{xx}\right)_x\big|_{x=L} = F \qquad (11)$$
with $F$ being the prescribed load. The solution, in this case, is given by
$$u(x,t) = \frac{F\,x^2\,(3L - x)}{6EI} \qquad (12)$$

Our quantities of interest are the tip and midpoint deflections at the final time-step.
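As a quick check of the analytical expressions, the snippet below evaluates Eqs. (10) and (12) at the midpoint and the tip for the stated geometry and material. The load value F = 250 (which produces a unit tip deflection) is our own worked number, not one quoted above.

```python
# Beam parameters from the text: E = 1e6, square unit cross section (I = 1/12), L = 10.
E, I, L = 1.0e6, 1.0 / 12.0, 10.0

def dirichlet_deflection(x, u_tip):
    """Eq. (10): deflection shape when the tip displacement u(L) is prescribed."""
    return u_tip * x**2 * (3.0 * L - x) / (2.0 * L**3)

def neumann_deflection(x, F):
    """Eq. (12): deflection under a constant transverse tip load F."""
    return F * x**2 * (3.0 * L - x) / (6.0 * E * I)

F = 3.0 * E * I / L**3                           # load giving a unit tip deflection: 250
print(dirichlet_deflection(L / 2.0, 1.0))        # midpoint deflection = 0.3125
print(neumann_deflection(L, F), neumann_deflection(L / 2.0, F))   # 1.0 and 0.3125
```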

Two solution methods were contrasted: (1) a monolithic approach (labeled "classic" in the figures) and (2) the NetUQ approach described in Sec. 2.2.2. The monolithic case is the traditional FEM analysis of a cantilever beam using a single mesh. For the NetUQ approach, the domain is divided into two overlapping regions, $x \in [0, 5.5]$ and $x \in [4.5, 10]$. The pynetuq library is used to couple separate simulations of the two components. Convergence in this context refers to the network's ability to approach the monolithic solution within the prescribed tolerance and is the main metric used when analyzing performance and efficiency.

Boundary conditions play a significant role in determining network convergence behavior for the system. Figure 3 illustrates the difference in behavior of the midpoint and tip displacements at the final time-step of each iteration of the network analysis of the transient mechanical model when using either Dirichlet or Neumann boundary conditions. Figure 3(a) imposes a linear-in-time displacement resulting in a unit tip displacement at the final time (2 ms), whereas Fig. 3(b) imposes a force necessary to create a maximum tip deflection of approximately one unit. Figures 3(c) and 3(d) double the imposed conditions from Figs. 3(a) and 3(b), respectively, corresponding to a tip deflection of two.

Fig. 3: Dirichlet and Neumann boundary conditions for the mechanical case: (a) Dirichlet condition with a final tip displacement of 1, (b) Neumann condition with a final tip displacement of ≈1, (c) Dirichlet condition with a final tip displacement of 2, (d) Neumann condition with a final tip displacement of ≈2, (e) Dirichlet condition with a final tip displacement of 5, and (f) Neumann condition with a final tip displacement of ≈2.5

The Neumann boundary condition converges more quickly for smaller tip deflections, as shown in Fig. 3(b); however, for the larger deflections given in Figs. 3(c) and 3(d), the network converges more quickly for the Dirichlet boundary condition. For the Dirichlet cases, the tip deflection is prescribed, forcing the network to simulate the largest deflection at the tip from the first iteration; the network therefore begins with the largest displacement variances and then gradually converges to the appropriate solution. For small deflections, the network initially overestimates the center displacement, and it underestimates for larger deflections, whereas the Neumann case always appears to initially underestimate the deflection of the component for similar tip displacements. (This is related to the boundary condition used at the component interfaces to initialize the network.) This does not impact the network extensively; however, for large deflections, it can increase the number of iterations needed to achieve the specified tolerance and thus slow the convergence.

Results after imposing an extreme Dirichlet displacement of 5 are shown in Fig. 3(e). Analyzing even a slightly larger tip displacement with the Neumann boundary condition exposes a convergence error. In this case, the applied force results in a maximum tip displacement of 2.5. Figure 3(f) shows that, although the system has satisfied the imposed convergence tolerance, the network converges to a slightly different final tip displacement. This is due to sensitivity to the output frequency. The network is not required to communicate information at every time-step for each component within the pynetuq framework. Rather, each component has a defined output frequency, which may be adjusted individually, and linear interpolation is used when a neighboring component requires information at an intermediate time point. Infrequent outputs (especially early in time) can result in incorrect behavior for networks with Neumann boundary conditions. Although this same frequency can produce accurate results for the extreme Dirichlet condition, it is not sufficient for larger Neumann boundary conditions. This sensitivity for the Neumann condition highlights the influence the boundary conditions have on the network.

3.1.2 Thermal Network Results.

In this section, we analyze the network convergence for a transient linear heat conduction application. The governing equations are given by
$$u_t - \alpha\,u_{xx} = 0, \qquad u(0,t) = T_0, \qquad u(L,t) = T_L, \qquad u(x,0) = T_{\mathrm{init}} \qquad (13)$$
with $T_{\mathrm{init}} = 300$, $T_0 = 1000$, and $\alpha = 0.01$. $T_L$ is considered a variable boundary condition. The series solution from separation of variables is
$$u(x,t) = T_0 + \frac{x}{L}\,(T_L - T_0) + \sum_{n=1}^{\infty} B_n\,C_n(x,t)$$
$$B_n = \frac{4}{n\pi}\,(T_{\mathrm{init}} - T_0) - \frac{2}{n\pi}\,(-1)^n\,(T_0 - T_L)$$
$$C_n(x,t) = \sin\!\left(\frac{n\pi x}{L}\right)\exp\!\left(-\frac{\alpha t}{L^2}\,(n\pi)^2\right) \qquad (14)$$
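The series in Eq. (14) is straightforward to evaluate at the midpoint QoI location, as sketched below; at x = L/2 the even terms vanish. The slab length L = 10 is an assumed value for illustration, since it is not stated explicitly for the thermal cases.

```python
import numpy as np

def midpoint_temperature(t, T0=1000.0, TL=500.0, Tinit=300.0, alpha=0.01, L=10.0, n_terms=2000):
    """Evaluate the series solution of Eq. (14) at the slab midpoint x = L/2."""
    u = T0 + 0.5 * (TL - T0)
    n = np.arange(1, n_terms + 1)
    B_n = 4.0 / (n * np.pi) * (Tinit - T0) - 2.0 / (n * np.pi) * (-1.0)**n * (T0 - TL)
    C_n = np.sin(n * np.pi / 2.0) * np.exp(-alpha * t * (n * np.pi / L)**2)
    return u + np.sum(B_n * C_n)

print(midpoint_temperature(0.0))      # recovers the initial condition, ~300
print(midpoint_temperature(1.0e6))    # approaches the steady-state value, 750
```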

Once again, two solution methods are contrasted: (1) a monolithic approach (labeled "classic" in the figures) and (2) the NetUQ approach described in Sec. 2.2.2. The classic approach is verified to agree with the analytical solution given by Eq. (14) to within the specified tolerances ($10^{-6}$ for this example).

The network solves the thermal case more easily than the mechanical case; a smooth, monotonic convergence is seen in Fig. 4(a). This is due to the different equation sets: the mechanical system requires the solution of a fourth-order PDE, whereas the thermal case is governed by a second-order PDE. Even when a larger temperature gradient is imposed, the system converges as quickly as for the lower-temperature boundary condition, as shown in Fig. 4(c).

Fig. 4: Network convergence results for a Dirichlet boundary condition for the thermal case with TR = 500 for (a) and (b) and TR = 5500 for (c) and (d). For the transient plots, (b) and (d), dark lines are early iterations and later iterations are progressively lighter: (a) midpoint temperature at the final time, U(L/2, 500), (b) transient evolution of the midpoint temperature U(L/2, t) at different iterations, (c) midpoint temperature at the final time, U(L/2, 500), and (d) transient evolution of the midpoint temperature U(L/2, t) at different iterations.

Varying the Dirichlet boundary conditions on the thermal case has a negligible impact on the network performance. Examining the Neumann boundary conditions reveals a similar lack of sensitivity. Figure 5 shows the results of applying either a negative (top) or positive (bottom) heat flux to the right-hand boundary. The network performed equally well when the flux was varied from positive to negative. Figures 5(b) and 5(d) show that the network converges to the monolithic solution in a similar manner as for the Dirichlet boundary conditions, whereas in the mechanical case the convergence behavior depends on the boundary conditions imposed. Thus, we conclude that the network performance for thermal systems is not influenced by the type of boundary conditions imposed on the system.

Fig. 5: Network convergence results for a Neumann boundary condition for the thermal case with (a) and (b) a negative flux applied to the right boundary and (c) and (d) a positive flux applied to the right boundary. For the transient plots, (b) and (d), dark lines are early iterations and later iterations are progressively lighter: (a) midpoint temperature at the final time, U(L/2, 500), (b) transient evolution of the midpoint temperature U(L/2, t) at different iterations, (c) midpoint temperature at the final time, U(L/2, 500), and (d) transient evolution of the midpoint temperature U(L/2, t) at different iterations.

3.2 Uncertainty Quantification Results.

The results from Sec. 3.1 confirm that pynetuq provided the correct mean behavior and that the iterative solvers are (mostly) performing robustly. In this section, those results will be expanded upon to demonstrate that pynetuq is correctly propagating uncertainty through the system. We do this by examining both steady-state and transient results for linear heat conduction and a transient simulation of a complex thermal model which includes numerous nonlinearities. In all cases, the uncertain input parameters are assumed to be Gaussian and PCE coefficients are determined through NISP using a full quadrature.

3.2.1 Quasi-1D Steady-State Thermal.

First, consider the 1D steady-state linear heat transfer problem defined by
$$u_{xx} = 0, \qquad u(0) = T_0, \qquad u(L) = T_L \qquad (15)$$
The resulting solution is
$$u(x) = T_0 + \frac{x}{L}\,(T_L - T_0) \qquad (16)$$
Let $T_0 \sim N(\mu = 1000, \sigma^2 = 40^2)$ and $T_L \sim N(\mu = 500, \sigma^2 = 20^2)$. Our quantity of interest is the temperature at the midpoint of the slab
$$u(L/2) \sim N\!\left(\mu = 750,\ \sigma^2 = \frac{40^2 + 20^2}{4}\right) \qquad (17)$$
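A simple Monte Carlo check of Eq. (17), independent of any PCE machinery, confirms the analytical mean and variance of the midpoint temperature; the sample size and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T0 = rng.normal(1000.0, 40.0, size=200_000)
TL = rng.normal(500.0, 20.0, size=200_000)
u_mid = T0 + 0.5 * (TL - T0)          # Eq. (16) evaluated at x = L/2

print(u_mid.mean(), u_mid.std())      # ~750 and ~sqrt((40**2 + 20**2) / 4) = 22.36
```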

A coarse hex mesh (Fig. 6(a)) is used to solve Eq. (15) numerically in three different ways. The first is the monolithic problem in which the heat equation is solved concurrently on all three blocks. Next, the domain is decomposed into two components. The first component consists of the left and middle blocks and the second component consists of the middle and right blocks. The middle block is the overlap region shared by both components. The decomposed problem is then solved using both the deterministic domain decomposition and NetUQ methods. All forward propagation steps are computed using SIERRA:Aria [20].

Fig. 6: Meshes used for steady-state verification problems

For the monolithic and deterministic domain decomposition approaches, a third-order Gauss–Hermite polynomial chaos is used to represent the uncertainty in the quantity of interest (the centerpoint temperature). The coefficients of the PCE are generated through NISP using a full quadrature with four points per stochastic dimension. For the NetUQ approach, both components use a third-order Gauss–Hermite polynomial chaos to represent the solution uncertainty at the endogenous nodes. Since the QoI lies within the overlap region, a unique PCE for the QoI may be extracted from each component. However, since both components use the same order PCE, these predictions are identical (within the convergence tolerance), as shown in Fig. 7(a).

Fig. 7: QoI output PDFs from solving the steady-state linear heat conduction problem in four different ways: (a) consistent meshes in the overlap region and (b) inconsistent meshes in the overlap region

To demonstrate the flexibility of the overlapping domain decomposition approaches, the mesh for the left (purple + gray) component is changed to a tet mesh (Fig. 6(b)). The only nodes that are consistent between the hex and tet meshes are the ones at the eight corners of the overlap region. Admittedly, for a 1D problem this is more than enough to uniquely specify the temperature throughout the overlap region, so this is a fairly weak test, but it does exercise the spatial interpolation between meshes. The results for this case are shown in Fig. 7(b).

3.2.2 Transient Linear Heat Conduction.

Consider now the analogous transient thermal problem. This is the same problem previously described in Sec. 3.1.2. Once again, let $T_0 \sim N(\mu = 1000, \sigma^2 = 40^2)$ and $T_L \sim N(\mu = 500, \sigma^2 = 20^2)$. Our quantity of interest is now the time evolution of the temperature at the midpoint of the slab, $u(L/2, t)$. The solution is given by
$$u(L/2, t) \sim N\!\left(\mu = \mu(t),\ \sigma^2 = \sigma(t)^2\right) \qquad (18)$$
with
$$\mu(t) = 750 + \sum_{n=1}^{\infty} B_n\,C_n(L/2, t) \qquad (19)$$
and
$$\sigma(t) = \sqrt{\sigma_{T_0}^2 + \sigma_{T_L}^2}$$
$$\sigma_{T_0} = 40\left(\frac{1}{2} - \sum_{n=1}^{\infty}\left(\frac{4}{n\pi} + \frac{2}{n\pi}(-1)^n\right) C_n(L/2, t)\right)$$
$$\sigma_{T_L} = 20\left(\frac{1}{2} + \sum_{n=1}^{\infty}\frac{2}{n\pi}(-1)^n\,C_n(L/2, t)\right) \qquad (20)$$
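These expressions can be evaluated directly, as in the sketch below, which reuses the midpoint series from Sec. 3.1.2 with the same assumed slab length L = 10. The limiting values at t = 0 (the deterministic initial condition) and at late time (the steady-state result of Eq. (17)) provide a useful sanity check.

```python
import numpy as np

def midpoint_stats(t, Tinit=300.0, alpha=0.01, L=10.0, n_terms=2000):
    """Mean and standard deviation of u(L/2, t) from Eqs. (19) and (20)."""
    n = np.arange(1, n_terms + 1)
    C_n = np.sin(n * np.pi / 2.0) * np.exp(-alpha * t * (n * np.pi / L)**2)
    B_n = 4.0 / (n * np.pi) * (Tinit - 1000.0) - 2.0 / (n * np.pi) * (-1.0)**n * (1000.0 - 500.0)
    mu = 750.0 + np.sum(B_n * C_n)
    s_T0 = 40.0 * (0.5 - np.sum((4.0 / (n * np.pi) + 2.0 / (n * np.pi) * (-1.0)**n) * C_n))
    s_TL = 20.0 * (0.5 + np.sum(2.0 / (n * np.pi) * (-1.0)**n * C_n))
    return mu, np.sqrt(s_T0**2 + s_TL**2)

print(midpoint_stats(0.0))       # (~300, ~0): the initial state carries no uncertainty
print(midpoint_stats(1.0e6))     # (750, ~22.36): matches the steady-state result
```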

As a demonstration, we solve this transient heat conduction problem with the more complicated network of components shown in Fig. 8. The solution, given by Eq. (18), varies only in the x direction despite the higher-dimensional mesh and more complex network of components. We also adjust the time-steps for the two components to be different in order to test the temporal interpolation.

Fig. 8: Network of six overlapping components

Figures 9(a) and 9(b) illustrate that the results for this problem are consistent with those generated with the decomposition shown in Fig. 6(a) as well as the analytical series solution. This demonstrates the validity of the method for larger numbers of components that interact with each other in more complex ways.

Fig. 9: Time evolution of the stochastic representation of the midpoint temperature, u(L/2, t): (a) expected value and (b) variance

3.2.3 U-Bomb.

As a final demonstration, a large-scale exemplar problem was selected for evaluating the pynetuq capability. The pynetuq framework was applied to an abnormal thermal simulation using an unclassified exemplar bomb (U-Bomb) geometry shown in Fig. 10 [21]. The U-Bomb is examined in two configurations: a simplified low-fidelity configuration and a detailed configuration with enhanced geometric fidelity. The low-fidelity configuration is represented by a network of four components (red, green, blue, and orange). The detailed U-Bomb geometry is divided into seven components, with the forward-most (green) component having been subdivided further into four separate components as shown in Fig. 11. An uncertain radiative boundary condition is applied to the exterior surface to approximate a fully engulfing hydrocarbon fuel fire environment. The radiative boundary condition is assumed to follow a normal distribution $N(\mu = 1000, \sigma^2 = 40^2)$.

Fig. 10: Unclassified bomb geometry. The U-bomb is examined in two configurations: a simplified low-fidelity configuration (a) and a detailed configuration with enhanced geometric fidelity (b). The detailed U-bomb geometry is divided into seven components for network analysis: (a) low-fidelity and (b) detailed.

Fig. 11: Graph representation of detailed U-bomb geometry

The outer case is constructed of aluminum 7075. Components 1, 2, and 3 are potted with PMDI foam. An organic material decomposition model [22] is used for these blocks. Components 1a, 1b, and 1c are constructed of stainless steel 304 L. All other unnamed internal potted components are constructed of either stainless steel 304 L or a representative laminate with anisotropic thermal conductivity representative of a printed circuit board. The thermal properties of the aluminum and steel are treated as nonlinear functions of temperature.

Eight different QoIs are defined for the U-Bomb application corresponding to temperature-time histories at different locations of interest within the system. The locations of the different QoIs are shown in Fig. 12. The minimum temperature is selected for the blocks colored purple while the maximum temperature is selected for blocks colored orange.

Fig. 12: Average response and 1-σ uncertainty bounds for eight different quantities of interest for the U-bomb problem computed using both the deterministic domain decomposition and NetUQ approaches. For each component, the minimum temperature over the purple block provides one quantity of interest, while the maximum temperature over the orange block provides another. In all four cases, the purple block minimum temperature is lower than the orange block maximum temperature.

A workflow was developed in which a pynetuq model was constructed for the low-fidelity configuration and then updated to include the added geometric fidelity of the detailed configuration. The components were all meshed independently with no requirements for mesh consistency at the component interfaces (although approximate geometric consistency is required). Component 0 remained unchanged between configurations. The network was initially solved using the low-fidelity configuration. The low-fidelity configuration solution was then used to initialize the network for the detailed configuration. This procedure significantly reduced the number of iterations required to achieve convergence for the detailed configuration.

Due to the lack of a conformal mesh for these scenarios, no monolithic simulation is attempted for validation. Instead, the network model is analyzed using both the deterministic domain decomposition and NetUQ methods implemented in the pynetuq library. The results of this comparison are shown in Fig. 12. The results agree very well including the variability. Where the solutions differ, it is expected that the NetUQ solution is more accurate due to the ability to apply a tighter convergence tolerance to the NetUQ simulation owing to its more rapid convergence relative to the deterministic domain decomposition method.

The faster convergence of the NetUQ method, in this case, is due primarily to the fact that Anderson acceleration is only implemented for the NetUQ method in the pynetuq library. Unfortunately, this lack of consistency within the library makes apples-to-apples timing comparisons impossible. However, we can assert that the primary advantage of the NetUQ approach relative to the deterministic domain decomposition approach is in terms of flexibility rather than performance. The NetUQ approach allows for unique uncertainty representations for each component which may allow for computational savings if, e.g., a simple Gaussian is sufficient for capturing the response for a linear or nearly linear component. For this application, the same uncertainty representation was used for all components to be consistent with the results from the deterministic domain decomposition method.

The differences in QoI responses shown in Fig. 12 are due to the thermal paths to and through the components, differences in thermal diffusivities of the blocks, and using block minimums versus maximums. The large temperature difference between QoIs seen in Fig. 12(a) is largely due to the fact that the purple block is composed of a low-conductivity material, fiberite. As a result, the location of the minimum temperature of that block (which is surrounded by low-conductivity PMDI foam encapsulant) is extremely well-insulated. In contrast, the orange block is in intimate contact with the metallic case such that its maximum temperature more closely tracks the imposed boundary condition. Temperatures of the orange and purple blocks in Fig. 12(b) are much more similar, with the difference being primarily attributed to the orange block using the maximum temperature and the purple block using the minimum. Both blocks shown in Fig. 12(c) are fully suspended in foam encapsulant causing more muted responses. The difference in those blocks' responses is mainly due to a difference in thermal diffusivity. The component in Fig. 12(d) is the orange block in Fig. 12(a), with QoIs now being internal blocks within the component. Note that the thermal responses of the orange blocks in Figs. 12(a) and 12(d) are nearly identical due to the orange block in Fig. 12(d) being in contact with the external face of the component. In order for heat to reach the purple block within Fig. 12(d), it primarily travels along a path through contacting blocks within the component which significantly delays heat reaching the purple block compared to the orange block that resides at the start of the path.

The magnitude of variability found for all responses largely tracks with the temperature reached. Due to nonlinearities within the model including internal radiation enclosures, temperature-dependent thermal properties, and decomposing foam, it is expected that the QoI response distributions are non-Gaussian. Unfortunately, the quadrature results were no longer available at the time of documentation, so this expected nonlinear distribution propagation could not be confirmed.

4 Conclusions

A network uncertainty quantification technique was implemented and evaluated using a number of example applications. This involved the creation of the open source pynetuq library to facilitate the creation of network models of surrogate problems of interest. These problems spanned a wide range of complex physics, including thermal and mechanical simulations at a range of scales. The library itself represents a flexible tool for performing network-based uncertainty propagation and is integrated with a number of production codes. Additionally, it is easily extensible to be compatible with new physics applications.

Pseudo-1D steady-state and transient thermal problems were used to verify the network solver implementation, and transient thermal and mechanical problems were used to evaluate the solver performance. For steady-state and transient thermal problems, the network behaves robustly, but for transient solid mechanics problems, the behavior is more complicated. For solid mechanics problems with fixed-displacement boundary conditions, the network converges robustly, but for fixed-load boundary conditions, the accuracy of the converged solution is dependent on the load applied and the output frequency of the network components. Infrequent outputs and large loads result in inaccuracies due to the linear interpolation in time used between components. This difficulty may be overcome by increasing the output frequency during time periods with large temporal solution variations, particularly at early times.

Additionally, we applied the approach to a complex system exemplar model, the U-Bomb. This demonstrated the applicability of the approach to complex system-scale problems and highlights some advantages and challenges of the approach. The network approach essentially decouples the model development pipelines for different components and subassemblies. This results in increased agility and the potential to incorporate UQ into the fast-moving design cycle. Individual component geometries and models may be easily updated in a modular way, reducing dependencies and enabling analyst access to a parallel model development paradigm. The network approach also directly ties plentiful component and subassembly validation evidence to full-system model predictions. This approach would result in fewer simplifications and a more robust credibility package.

Unfortunately, the modularity and flexibility come at a cost. Network uncertainty quantification techniques are considerably slower (in terms of computing cost) than monolithic techniques. The advantage lies exclusively in the modularity and the potential to speed up model development at the expense of slower model execution. In many applications, this is a trade well worth making. Additionally, reduced-order and surrogate models provide an avenue to avoid this sacrifice in model execution speed [2327].

Expensive component models may be seamlessly replaced by inexpensive surrogates within the network architecture without sacrificing accuracy. These surrogates may either be trained through a bottom-up approach without prior knowledge of the intercomponent boundary conditions or by utilizing partially converged network results.

The network uncertainty propagation techniques are also inherently iterative. This is not itself a problem but does pose additional challenges when operating within existing high performance computing (HPC) infrastructure. The current HPC paradigm assumes jobs to be independent and optimizes for high throughput (at the expense of latency). The iterative nature of these network UQ techniques introduces dependencies between jobs, and long queue times can be compounded. This can be circumvented by either packaging all of the jobs together when submitting to the HPC system or eschewing the HPC systems for alternative low-latency resources.

Acknowledgment

This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. This work was funded by the Advanced Simulation and Computing program and the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title, and interest in and to the article and is solely responsible for its contents. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan.

Funding Data

  • Sandia National Laboratories (Grant No. 213014; Funder ID: 10.13039/100006234).

Nomenclature

c = polynomial chaos expansion coefficients

DD = domain decomposition

M = deterministic operator

PCE = polynomial chaos expansion

QoI = quantity of interest

T = temperature

u = exogenous input variable

UQ = uncertainty quantification

w = quadrature weights

y = endogenous input variable

References

1. Carlberg, K., Guzzetti, S., Khalil, M., and Sargsyan, K., 2019, "The Network Uncertainty Quantification Method for Propagating Uncertainties in Component-Based Systems," arXiv:1908.11476 [math.NA].
2. Liao, Q., and Willcox, K., 2015, "A Domain Decomposition Approach for Uncertainty Analysis," SIAM J. Sci. Comput., 37(1), pp. A103–A133. 10.1137/140980508
3. Martin, J. D., and Simpson, T. W., 2006, "A Methodology to Manage System-Level Uncertainty During Conceptual Design," ASME J. Mech. Des., 128(4), pp. 959–968. 10.1115/1.2204975
4. Rojas, E., and Tencer, J., 2021, "Performance of Iterative Network Uncertainty Quantification for Multicomponent System Qualification," ASME Paper No. IMECE2021-72345. 10.1115/IMECE2021-72345
5. Le Maître, O., and Knio, O. M., 2010, Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics, Springer Science & Business Media, Berlin, Germany.
6. Ghanem, R. G., and Spanos, P. D., 2003, Stochastic Finite Elements: A Spectral Approach, Courier Corporation, New York.
7. Xiu, D., and Karniadakis, G. E., 2002, "The Wiener–Askey Polynomial Chaos for Stochastic Differential Equations," SIAM J. Sci. Comput., 24(2), pp. 619–644. 10.1137/S1064827501387826
8. Xiu, D., 2010, Numerical Methods for Stochastic Computations, Princeton University Press, Princeton, NJ.
9. Smith, B. F., 1997, "Domain Decomposition Methods for Partial Differential Equations," Parallel Numerical Algorithms, Springer, Berlin, Germany, pp. 225–243.
10. Chan, T. F., and Mathew, T. P., 1994, "Domain Decomposition Algorithms," Acta Numer., 3, pp. 61–143. 10.1017/S0962492900002427
11. Toselli, A., and Widlund, O., 2004, Domain Decomposition Methods—Algorithms and Theory, Vol. 34, Springer Science & Business Media, Berlin, Germany.
12. Sargsyan, K., Safta, C., Johnston, K., Khalil, M., Chowdhary, K. S., Rai, P., Casey, T. A., Boll, L. D., Zeng, X., and Debusschere, B., 2021, "UQTk Version 3.1.1 User Manual," Sandia National Laboratories, Albuquerque, NM, Report No. SAND2021-3655.
13. Yang, X. I., and Mittal, R., 2014, "Acceleration of the Jacobi Iterative Method by Factors Exceeding 100 Using Scheduled Relaxation," J. Comput. Phys., 274, pp. 695–708. 10.1016/j.jcp.2014.06.010
14. Pratapa, P. P., Suryanarayana, P., and Pask, J. E., 2016, "Anderson Acceleration of the Jacobi Iterative Method: An Efficient Alternative to Krylov Methods for Large, Sparse Linear Systems," J. Comput. Phys., 306, pp. 43–54. 10.1016/j.jcp.2015.11.018
15. Eyert, V., 1996, "A Comparative Study on Methods for Convergence Acceleration of Iterative Vector Sequences," J. Comput. Phys., 124(2), pp. 271–285. 10.1006/jcph.1996.0059
16. Anderson, D. G., 1965, "Iterative Procedures for Nonlinear Integral Equations," J. ACM, 12(4), pp. 547–560. 10.1145/321296.321305
17. Fang, H.-R., and Saad, Y., 2009, "Two Classes of Multisecant Methods for Nonlinear Acceleration," Numer. Linear Algebra Appl., 16(3), pp. 197–221. 10.1002/nla.617
18. Toth, A., and Kelley, C., 2015, "Convergence Analysis for Anderson Acceleration," SIAM J. Numer. Anal., 53(2), pp. 805–819. 10.1137/130919398
19. Walker, H. F., and Ni, P., 2011, "Anderson Acceleration for Fixed-Point Iterations," SIAM J. Numer. Anal., 49(4), pp. 1715–1735. 10.1137/10078356X
20. SIERRA Thermal/Fluid Development Team, 2019, "SIERRA Multimechanics Module: Aria User Manual—Version 4.52," Sandia National Laboratories, Albuquerque, NM, Report No. SAND2019-3786.
21. Schroeder, B., Hetzler, A., Mills, B., and Shelton, J., 2018, "An Effort Towards a Consistent VVUQ Approach for Thermal Systems Analyses," Sandia National Laboratories, Albuquerque, NM, Report No. SAND2018-5411 PE.
22. Scott, S. N., Dodd, A. B., Larsen, M. E., Suo-Anttila, J. M., and Erickson, K. L., 2016, "Validation of Heat Transfer, Thermal Decomposition, and Container Pressurization of Polyurethane Foam Using Mean Value and Latin Hypercube Sampling Approaches," Fire Technol., 52(1), pp. 121–147. 10.1007/s10694-014-0448-8
23. Benner, P., Gugercin, S., and Willcox, K., 2015, "A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems," SIAM Rev., 57(4), pp. 483–531. 10.1137/130932715
24. Peherstorfer, B., Willcox, K., and Gunzburger, M., 2018, "Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization," SIAM Rev., 60(3), pp. 550–591. 10.1137/16M1082469
25. Brunini, V., Parish, E. J., Tencer, J., and Rizzi, F., 2022, "Projection-Based Model Reduction for Coupled Conduction–Enclosure Radiation Systems," ASME J. Heat Transfer-Trans. ASME, 144(6), p. 062101. 10.1115/1.4053994
26. Gelsomino, F., and Rozza, G., 2011, "Comparison and Combination of Reduced-Order Modelling Techniques in 3D Parametrized Heat Transfer Problems," Math. Comput. Modell. Dyn. Syst., 17(4), pp. 371–394. 10.1080/13873954.2011.547672
27. Brunton, S. L., and Kutz, J. N., 2022, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, Cambridge University Press, Cambridge, UK.