Abstract

This paper describes the process for assessing the predictive capability of the Consortium for Advanced Simulation of Light Water Reactors (CASL) Virtual Environment for Reactor Applications code suite (VERA-CS) for different challenge problems. The assessment process is guided by two qualitative frameworks: the phenomena identification and ranking table (PIRT) and the predictive capability maturity model (PCMM). The capability and credibility of the VERA codes (individual and coupled simulation codes) are evaluated. Capability refers to evidence of the functionality required to capture the phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements (based on the PIRT) against which the VERA software is evaluated. This approach, in turn, enables the focused assessment of only those capabilities that are relevant to the challenge problem. The credibility assessment using the PCMM is based on decision attributes that encompass verification, validation, and uncertainty quantification (VVUQ) of the CASL codes. For each attribute, a maturity score from zero to three is assigned to ascertain the maturity level attained by the VERA codes with respect to the challenge problem. Credibility in the assessment is established by mapping relevant evidence obtained from VVUQ of the codes to the corresponding PCMM attribute. The proposed approach is illustrated using one of the CASL challenge problems, Chalk River unidentified deposit (CRUD) induced power shift (CIPS). The assessment framework described in this paper can be considered applicable to other M & S code development efforts.

1 Introduction

Rapid growth in the computing power available to Modeling and Simulation (M & S) tools has brought tremendous advances in engineering and science. This is particularly true in fields where system-prototypic experimentation is difficult and/or expensive. In nuclear engineering, M & S tools are used extensively to support decisions regarding the design, operation, and safety assessment of nuclear power plants. Consequently, comprehensive methodologies and standard procedures have been developed to guide the evolution of M & S tools and assess their adequacy for the intended use. The "code scaling, applicability, and uncertainty" (CSAU) methodology [1] was the first effort led by the United States Nuclear Regulatory Commission (U.S. NRC) to provide standard procedures and guidelines for uncertainty quantification and adequacy assessment of system analysis codes for design basis accidents. Later, the "evaluation model development and assessment process" [2] was developed to provide a systematic process for the development and assessment of evaluation models for transient and/or accident analysis of nuclear power plants during design basis accidents. The "predictive capability maturity model" (PCMM) [3] is a qualitative decision framework developed by Sandia National Laboratories that adopts a graded approach (based on qualitative criteria) for assessing the maturity of a simulation tool with respect to the consequence of its application. The PCMM is the de facto assessment framework for National Nuclear Security Administration funded M & S conducted in the United States.

Both M & S and experiments provide approximate representations of real systems. Because of uncertainty and incomplete information, deciding whether a simulation code is adequate for an intended application, particularly a high-consequence nuclear reactor safety application, can be difficult. Subjective decision-making by the code development team regarding adequacy and rigor is common in the development and assessment of computational codes for complex nuclear reactor applications. Owing to this subjectivity, it becomes important to organize information and evidence systematically so that they can be clearly traced to the decision regarding the adequacy of the M & S code for a specific application.

The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a U.S. Department of Energy (DOE) sponsored energy innovation hub for M & S of nuclear reactor applications. The primary objective of CASL is to develop modeling and simulation capabilities to support decisions regarding the safe and efficient operation of commercial nuclear power reactors. CASL has identified and developed different simulation codes with the capability to model different physics, e.g., the coolant-boiling in rod arrays-two fluids (CTF) code and computational fluid dynamics (CFD) simulation for reactor core thermal-hydraulics, the Michigan parallel characteristics transport code (MPACT) for reactor core neutronics, the BISON code for fuel performance, and the MPO advanced model for boron analysis (MAMBA) code for coolant chemistry and Chalk River unidentified deposit (CRUD) [4]. Although most of these codes are mature with respect to their individual physics domains, their extension to multiphysics and multiscale CASL challenge problems (CPs) requires further capability development and extensive verification, validation, and uncertainty quantification (VVUQ) work [5]. This paper describes the process of assessing the predictive capability of the CASL Virtual Environment for Reactor Applications code suite (VERA-CS) for different CPs and also serves as first-of-a-kind documentation of the application of PCMM to complex multiphysics codes and problems. The proposed approach is based on the logical separation of capability and credibility assessment. The capability and credibility assessments are based on (1) the phenomena identification and ranking table (PIRT) and (2) the PCMM, respectively. The PIRT is used for complexity resolution and, ultimately, to determine the capability of VERA-CS with respect to the CP. Grading scales are used to specify the degree of knowledge, importance, modeling capability, and existing gaps corresponding to each phenomenon identified by the PIRT.
The PCMM is used to define a set of decision attributes and assessment criteria to determine the attained degree of maturity of VERA-CS with respect to the CP. A similar gradation scheme is used to provide granularity in assessing the level of maturity. Given the large volume of heterogeneous data (or evidence) produced by the various modeling and VVUQ activities of the different codes related to the CPs, the PCMM serves as a convenient tool for organization and credibility assessment. However, the PCMM needs a formal structure and the ability to incorporate evidence that can support claims regarding the degree of maturity of VERA-CS for a specific CP. To justify the assigned grades and enhance transparency, a systematic scheme for evidence documentation and citation is incorporated in this work. The proposed approach is illustrated using CRUD induced power shift (CIPS) as a CP, though the same approach has been applied to other CPs.

The organization of the paper is as follows. Section 2 provides a summary of VERA-CS and the CPs. Section 3 describes the technique for complexity resolution using the PIRT. Section 4 describes the elements of the PCMM and its assessment criteria. Section 5 describes the proposed approach for capability and credibility assessment of CASL VERA for a specific CP (i.e., CIPS), and Sec. 6 provides the conclusions.

2 Virtual Environment for Reactor Applications Code Suite and Challenge Problems

CASL is charged with developing computational modeling and simulation capability for light-water-moderated commercial nuclear power reactors. The VERA-CS [6] includes a collection of tools for the simulation of neutronics, thermal-hydraulics, chemistry, and fuel performance (solid mechanics and heat transfer) in an integrated and coupled computational environment. These tools are generally designed to be employed in a high-performance computing environment and are highly parallelized. Computational fluid dynamics also plays an important role, though not within VERA. The current, main CASL toolset includes the following software modules:

  • vera,

  • mpact for reactor core neutronics (neutron and gamma transport) [7–13],

  • ctf for reactor core thermal-hydraulics [14–24],

  • mamba for coolant chemistry and CRUD [25–27],

  • bison for fuel performance [28–30],

  • starccm+ for computational fluid dynamics [31].

The Consortium for Advanced Simulation of Light Water Reactors has been organized around a handful of CPs [32,33]. These CPs have been identified by the nuclear industry as important for the safe and reliable operation of the current nuclear reactors. Each CP has a unique set of phenomena that may span multiple traditional disciplines. CASL has addressed seven CPs:

  • CIPS,

  • CRUD induced localized corrosion (CILC),

  • Pellet-cladding interactions (PCI),

  • Grid to rod fretting (GTRF),

  • Departure from nucleate boiling (DNB),

  • Loss of coolant accident (LOCA),

  • Reactivity insertion accident (RIA).

In aggregate, the CASL CPs serve to define the intended purpose of VERA-CS. In this paper, the process for predictive capability assessment of VERA-CS is illustrated with respect to one of these CASL CPs, i.e., CIPS. The clear articulation of an intended purpose is critical for assessing both code capability and credibility. CRUD in CIPS refers to the deposition of porous corrosion products on the surface of the nuclear fuel rods. These chemical products are iron- and nickel-based compounds produced by corrosion of the metallic surfaces of the steam generator in pressurized water reactors (PWRs). Some of the corrosion products are released into the coolant in particulate form and eventually find their way to the reactor fuel rods. The deposition of CRUD leads to poor heat transfer, changes in the flow pattern, and accelerated corrosion. CRUD formation is accelerated under subcooled boiling conditions. Furthermore, boron compounds accumulate inside the porous CRUD. As boron is a neutron poison, a shift in the power profile is observed. This shift is termed CIPS [34].

The M & S efforts for the CASL CPs are centered around two types of activities: (1) model development activities (focused on the development of specific capabilities for CP applications), and (2) VVUQ activities (which enhance the trustworthiness, or credibility, of the developed capabilities for CP applications). The model development activities are guided by the PIRT, while the VVUQ activities are guided by the PCMM. More details on the use of PIRT and PCMM are discussed in Secs. 3 and 4.

3 Complexity Resolution Using Phenomena Identification and Ranking Table

Natural systems are often fundamentally complex. Analysis and understanding of a complex system require segregating the system into less intricate parts or subsystems with distinctive forms or characteristics. Herbert A. Simon, in his classic paper on "the architecture of complexity," describes how different complex systems exhibit hierarchical structure and similar properties (in the context of architecture or structural organization) regardless of their specific content [35]. He explains that two types of interactions are prominent in a hierarchical system: (1) interactions within subsystems (or intracomponent linkages), and (2) interactions among subsystems (or intercomponent linkages). It is the nature of these interactions that guides the decomposition of a complex system. A complex system can be considered nearly decomposable when interactions among subsystems are feeble in strength compared to interactions within a subsystem [35].
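Simon's near-decomposability criterion can be made concrete by modeling the system as an interaction-strength matrix and checking that every between-subsystem entry is weak relative to the within-subsystem entries. The sketch below is purely illustrative; the matrix values, block grouping, and threshold ratio are hypothetical and not drawn from any CASL system:

```python
# Hypothetical interaction-strength matrix for a four-component system
# grouped into two subsystems: {0, 1} and {2, 3}. Values are illustrative.
A = [
    [1.0, 0.8, 0.05, 0.02],
    [0.8, 1.0, 0.03, 0.04],
    [0.05, 0.03, 1.0, 0.9],
    [0.02, 0.04, 0.9, 1.0],
]
blocks = [[0, 1], [2, 3]]

def nearly_decomposable(A, blocks, ratio=0.2):
    """True when every between-subsystem interaction is weak relative
    to the weakest within-subsystem interaction."""
    within = min(A[i][j] for b in blocks for i in b for j in b if i != j)
    between = max(
        A[i][j]
        for bi, b1 in enumerate(blocks)
        for bj, b2 in enumerate(blocks) if bi != bj
        for i in b1 for j in b2
    )
    return between <= ratio * within

print(nearly_decomposable(A, blocks))  # True: cross-block coupling is feeble
```

Strengthening any cross-block entry past the threshold makes the predicate fail, mirroring Simon's point that decomposition is justified only while intersubsystem interactions stay weak.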

The PIRT is a classical approach for the complexity resolution of nuclear reactor applications for modeling and simulation. It was formally introduced by the U.S. NRC as part of the CSAU methodology in 1988. Similar procedures [36,37] were published in 1987 and 1989 by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency for the analysis of thermal-hydraulic phenomena related to the emergency core cooling system and LOCA in light water reactors. In the past decades, PIRT has been used extensively to resolve several issues, such as large break loss of coolant accidents [38], small break loss of coolant accidents [39], fire modeling in nuclear power plants [40], and the design of next-generation nuclear power plants [41]. The PIRT involves the identification and ranking of the different phenomena relevant to the figure of merit [2]. The major steps of the PIRT process are listed below [42]:

  • Define the problem and PIRT objectives.

  • Specify the scenario (transient or steady-state). In the case of a transient process, the scenario is partitioned into time phases based on the dominant process/mechanism.

  • Identify and define the figures of merit.

  • Identify and review all the relevant literature (experimental and analytical data).

  • Identify phenomena relevant to the figures of merit.

  • Rank all the phenomena based on knowledge and importance (with respect to the figures of merit).

  • Document all the findings.

Simon [35] describes two types of descriptors that can be used for solving a problem involving a complex system: state descriptors and process descriptors. A state descriptor provides criteria for identifying an object or state of the system, while process descriptors relate to the processes or actions that lead to that particular state. He further explains, "We pose a problem by giving state description of the solution. The task is to discover a sequence of processes that will produce the goal state from an initial state" [35]. In the context of the PIRT, the figures of merit may be considered state descriptors, while the phenomena/processes that impact the figures of merit may be considered process descriptors. Understanding the sequence and relation of the different phenomena becomes crucial for the successful formulation of the problem. The structure of a PIRT is governed by the nature of the problem being analyzed. The PIRT for accident scenarios like LOCA resolves complexity by dividing the transient into time phases (blowdown, refill, and reflood) based on the dominant mechanism or other factors (operator actions or valve openings and closings). The phenomena identified by the PIRT process are arranged hierarchically based on the transient phase, system components, and underlying phenomena. The PIRT for high-fidelity simulation of the CASL CPs involves system decomposition with respect to the governing physics (neutronics, fuel performance, coolant chemistry, and thermal-hydraulics) and the scale (microscale, meso-scale, and macroscale) of the underlying phenomena. Hence, scale separation and physics decoupling are the two elementary principles that guide complexity resolution for the CASL CPs. The outcome of the PIRT process is governed by the experts' knowledge and understanding of the problem of interest; therefore, PIRT is subject to large epistemic uncertainty. Human factors related to the oratory skills and persuasiveness of the participating experts can also introduce biases.

In CASL, the PIRT is employed for the conception of governing mechanisms and underlying physical processes (complexity resolution), guiding model development, and identifying issues and data needs. In this way, the PIRT helps in prioritizing the research and development needs for the different CPs. The strategy for formulating the PIRT for a CASL CP is based on the identification of key phenomena with respect to the governing physics. Numerical grading (viz., 0, 1, 2, 3) is used to specify the degree of knowledge, importance, modeling capability, and existing gaps corresponding to each phenomenon identified by the PIRT. As there may be disparities among the inputs of different experts, the grades are averaged to obtain the final assessment.
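The grade-averaging step can be sketched in a few lines. The phenomenon names below echo the CIPS PIRT, but the per-expert grades are hypothetical, chosen only to show the mechanics:

```python
from statistics import mean

# Hypothetical PIRT survey responses: each expert grades a phenomenon's
# importance and knowledge level on the 0-3 scale described above.
responses = {
    "CRUD porosity": {"importance": [3, 3, 2], "knowledge": [2, 2, 1]},
    "CRUD chimney density": {"importance": [3, 2, 3], "knowledge": [2, 1, 2]},
}

def aggregate(responses):
    """Average the per-expert grades into one score per attribute."""
    return {
        phenom: {attr: round(mean(grades), 1) for attr, grades in attrs.items()}
        for phenom, attrs in responses.items()
    }

scores = aggregate(responses)
print(scores["CRUD porosity"])  # {'importance': 2.7, 'knowledge': 1.7}
```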

4 Predictive Capability Maturity Model

Assessing the credibility of predictions made using scientific computer codes is a complex and multifaceted topic that is also relatively new compared to the technical fields for which the codes are written. This problem has become more challenging as scientific software has become more capable and now includes multiple physical phenomena. Within CASL, the credibility of VERA-CS is assessed using the PCMM [3]. The PCMM was originally developed by Sandia National Laboratories for the maturity assessment of computational simulations concerning nuclear weapon applications. The original PCMM matrix consists of six decision attributes for code maturity assessment, namely, (1) representation and geometric fidelity (RGF), (2) physics and material model fidelity (PMMF), (3) code verification (CVER), (4) solution verification (SVER), (5) model validation (SVAL), and (6) uncertainty quantification and sensitivity analysis (UQSA). The evaluation of the different PCMM attributes is performed by defining maturity levels based on the consequence of the application. According to Oberkampf et al. [43], the maturity levels in the PCMM are based on two distinct information attributes [44]: (1) intrinsic information quality (related to the objectivity and fidelity of information), and (2) contextual information quality (related to the thoroughness, volume, and level of detail of information). The maturity levels guide the evaluation of intellectual artifacts or evidence obtained from the different M & S activities [43]. Categorically, all the data and/or information related to the M & S activities contribute to the body of evidence for maturity assessment. Given the nature of the information attributes in the PCMM levels, the required quality and quantity of evidence increase at higher maturity levels. A descriptive set of qualitative assessment criteria is specified in the PCMM matrix to guide the evaluation of the PCMM attributes at the different levels.
The target maturity level for each PCMM attribute is decided based on the consequence of the application. It should be noted that the assessment criteria and maturity levels in the PCMM matrix make extensive use of qualitative classifiers such as "little," "some," "minimum," "all," "low," "medium," and "high." These classifiers provide a basis for assessment but do not provide an absolute metric for evaluating the different attributes. While the level classifiers are nonquantitative, they rely heavily on objective assessment of evidence, and the four levels of granularity still enable meaningful resolution between maturity levels. Moreover, the focus of the PCMM is the maturity assessment of a simulation tool for an application based on the different processes and activities that enhance confidence in its use; it is therefore difficult to provide a quantitative metric for assessment.

For the CASL CP applications, two modifications have been made to the original PCMM matrix: the separation of software quality assurance (SQA) and software quality engineering (SQE) from the code verification category, and the separation of separate effects test (SET) validation from integral effects test (IET) validation. The distinction between SET and IET validation is analogous to that between unit tests and integration tests in code verification: both strategies exploit the hierarchical structure of their respective areas.

Within CASL, a relatively high level of effort and rigor has been expended on SQA/SQE practices, while less effort has been expended on mathematical code verification activities such as demonstrating the expected order of convergence. Separating SQA/SQE from code verification permits a more precise assessment and communication of expectations and achievements for each aspect. Furthermore, Ref. [45] recognizes SQA/SQE and numerical algorithm verification as separate types of activities, although both are intended to minimize or eliminate unexpected bugs, errors, blunders, and mistakes that could corrupt predictions. Similarly, for validation, the separation of IET validation from SET validation permits more resolution in the assessment and clearer identification of the expectations and accomplishments. Figure 1 shows the PCMM matrix used for the CASL CP applications, with all eight attributes and their assessment criteria for each maturity level.

Fig. 1
PCMM matrix used in credibility assessment of VERA-CS for different CPs

The predictive capability maturity model requires a detailed analysis of each element to decide its level of maturity. The current evaluation is based on a qualitative assessment of each code (in VERA—CS) based on the descriptors (decision criteria) for each element (or decision attribute) in the PCMM matrix. If a code satisfies all the descriptors at a particular level, it is assumed to reach the maturity corresponding to that level. If the descriptor for an element is partially satisfied, a fractional scale (e.g., 0.25, 0.5, 1.25…) is used to express maturity between the two levels. In this way, PCMM provides a qualitative decision model for the evaluation of codes using a graded approach. The primary use of the PCMM matrix in CASL can be summarized as:

  1. CASL consists of a large team of researchers from different institutions working on different aspects of code development and assessment. The PCMM matrix helps in communicating and elucidating the different attributes that can impact the credibility of a simulation. The CASL researchers direct their M & S development and assessment efforts according to the target maturity levels.

  2. It provides a basis for discussion about the M & S needs and critical developments for different applications (challenge problems) to the decision-maker (CASL leadership team and council), stakeholders, and decision facilitators (researcher working on code development and VVUQ).

  3. It enables tracking the progress in the development and assessment of codes for specific CPs and directing resources toward critical areas.
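The graded, fractional scoring described in this section can be sketched as a small helper: an attribute climbs a level only when all descriptors at that level are satisfied, and a partially satisfied level contributes fractional credit. The function and its inputs below are illustrative, not part of the CASL toolchain:

```python
# Illustrative PCMM scoring helper. satisfied_fractions[k] is the fraction
# (0..1) of level-(k+1) descriptors judged satisfied, ordered by increasing
# maturity level.
def maturity_score(satisfied_fractions):
    """Sum full levels until a partially satisfied level is reached;
    that level contributes fractional credit and the climb stops."""
    score = 0.0
    for frac in satisfied_fractions:
        if frac >= 1.0:
            score += 1.0       # level fully achieved, keep climbing
        else:
            score += frac      # partial credit, stop at this level
            break
    return score

# Levels 1 and 2 fully met, half of the level-3 descriptors met.
print(maturity_score([1.0, 1.0, 0.5]))  # 2.5
```

This reproduces the fractional scale mentioned earlier (e.g., a score of 1.25 when level 1 is met and a quarter of the level-2 descriptors are satisfied).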

5 Assessment Methodology

This section describes the process of assessment of VERA—CS for CASL CPs. The assessment process is guided by PIRT and PCMM while the illustration of different steps is described using CIPS as a CP. Figure 2 shows the different steps involved in this assessment process. A description of each step with an example is shown below.

Fig. 2
Challenge problem driven phenomenology based assessment

5.1 Step 1: Challenge Problems Specification.

The first step in this assessment process involves CP specification based on the purpose of analysis, system conditions, and figure of merit [1,2], e.g., the specification for CIPS is defined as:

  • Purpose of analysis: Assess the adequacy of VERA—CS for simulation of CIPS.

  • System condition: PWR system conditions during transient and normal operation (with changing fuel burn-up and CRUD deposition).

  • Figures of merit: Boron mass distribution (vector), Boron mass (scalar), and axial offset (scalar).
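For illustration, the three elements of a CP specification could be captured in a small structured record; the type and field names below are hypothetical, with values taken from the CIPS example above:

```python
from dataclasses import dataclass, field

# Hypothetical record type mirroring the three elements of a CP
# specification: purpose of analysis, system condition, figures of merit.
@dataclass
class ChallengeProblemSpec:
    name: str
    purpose: str
    system_condition: str
    figures_of_merit: list = field(default_factory=list)

cips = ChallengeProblemSpec(
    name="CIPS",
    purpose="Assess the adequacy of VERA-CS for simulation of CIPS",
    system_condition=("PWR system condition during transient and normal "
                      "operation (with changing fuel burn-up and CRUD "
                      "deposition)"),
    figures_of_merit=["Boron mass distribution (vector)",
                      "Boron mass (scalar)",
                      "Axial offset (scalar)"],
)
print(cips.figures_of_merit[-1])  # Axial offset (scalar)
```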

5.2 Step 2: Complexity Resolution by Phenomena Identification and Ranking Table.

The second step in the assessment process involves complexity resolution using the classical PIRT methodology [1,2]. Complexity resolution using PIRT for the multiphysics CASL CPs involves system decomposition with respect to the governing physics (neutronics, fuel performance, coolant chemistry, and thermal hydraulics) and scale (microscale, meso-scale, and macroscale) to identify important phenomena or processes that can impact the figure of merit. The identified phenomena are ranked based on importance and knowledge information. Code adequacy is also determined at this step. The importance of a phenomenon is defined based on its relevance to the figure of merit; e.g., boron exchange in and out of CRUD (see Table 1) is considered a high-importance phenomenon for determining the boron mass distribution. The knowledge level expresses the level of understanding of a phenomenon based on available models, experimental data, and existing literature. Code adequacy reflects the current capability of the individual codes in VERA-CS to simulate the phenomena in the PIRT. Tables 2–5 provide the code adequacy assessment for the different phenomena identified by the PIRT for CIPS. These rankings, along with an assessment of the cost of implementation, can be used to set funding and development priorities. The PIRT assessment directly informs the evaluation of capability, as it links the required phenomenology with the code components designed to represent it. CP specification (step 1 in Fig. 2) and complexity resolution using PIRT (step 2 in Fig. 2) are based on the input of subject matter experts (SMEs) and focus area leads in CASL.
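PIRT rankings are often converted into development priorities by flagging phenomena that combine high importance with low knowledge. The sketch below shows one such heuristic; the gap metric and its weighting are illustrative, not a CASL prescription, while the grades are taken from the 2017 CIPS coolant-chemistry PIRT update:

```python
# Illustrative gap metric: importance weighted by the knowledge shortfall
# on the 0-3 PIRT scale. High values suggest development priority.
def gap_score(importance, knowledge, max_grade=3.0):
    return importance * (max_grade - knowledge) / max_grade

pirt_2017 = {
    # (importance, knowledge) from the CIPS coolant-chemistry PIRT update
    "Boron exchange in and out of the CRUD": (3.0, 1.0),
    "Overlooked chemical reactions/species": (1.8, 1.0),
    "CRUD porosity": (2.8, 1.8),
}

ranked = sorted(pirt_2017, key=lambda p: gap_score(*pirt_2017[p]), reverse=True)
print(ranked[0])  # Boron exchange in and out of the CRUD
```

Under this heuristic, the newly identified boron-exchange phenomenon (importance 3.0, knowledge 1.0) ranks first, consistent with its treatment as a key gap in the CIPS PIRT.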

Table 1

Phenomena identified by PIRT for CIPS in coolant chemistry

Phenomenon | Importance (2017 update) | Knowledge (2017 update) | Importance (2014 Mini-PIRT) | Knowledge (2014 Mini-PIRT)
Local changes (near the rod) in the equation of state | 2.4 | 1.3 | 3.0 | 3.0
Chemical reaction rates are based on lower temperature and pressures | 2.0 | 1.3 | 2.0 | 2.0
Overlooked chemical reactions/species | 1.8 | 1.0 | 3.0 | 2.0
CRUD porosity | 2.8 | 1.8 | 2.0 | 2.0
CRUD permeability | 2.0 | 1.5 | 2.0 | 2.0
CRUD chimney density | 2.6 | 1.6 | 2.0 | 1.0
Water pH effect on steam generator corrosion | 2.8 | 1.3 | 2.0 | 2.0
Water pH effect on CRUD deposition | 2.3 | 1.5 | 2.0 | 2.0
Boron exchange in and out of the CRUD (new phenomenon) | 3.0 | 1.0 | — | —
Table 2

Mapping phenomena identified in subchannel hydraulic to VERA codes for CIPS CP

Phenomena | Code adequacy
Steaming rate | H
Subcooled boiling on a clean metal surface | H
Subcooled boiling in CRUD | L
Bulk coolant temperature | H
Heat flux | H
Wall roughness | L
Single-phase heat transfer | H
Mass balance of nickel and iron | L
CRUD erosion | M, M
Initial CRUD thickness (mass) | L, L
Initial coolant boron concentration | H
Initial coolant nickel concentration | L, L
CRUD source term from steam generators and other surfaces | M
CRUD induced change in boiling efficiency | L
Heat flux distribution (new phenomenon): CRUD-fluid heat transfer model | M, M
Table 3

Mapping phenomena identified in fuel modeling to VERA codes for CIPS CP

Phenomena | Code adequacy
Local changes in rod power due to burn-up | H, H
Fuel thermal conductivity changes as a function of burn-up | H
Changes in effective CRUD conductivity due to internal fluid flow and boiling | H
CRUD removal due to transient power changes | L
Fission product gas | H
Pellet swelling | H
Contact between the pellet and the clad | H
Table 4

Mapping phenomena identified in neutronics to VERA codes for CIPS CP

Phenomena | Code adequacy
Local boron density increases absorption | H
Moderator displaced by CRUD and replaced with an absorber | H
Xenon impact on steady-state transients | M
Geometry changes in the pellet | M
Cross section changes | M
Fission product production | M
Fission product decay constant | M
Simplified decay chain | M
Boron induced shift in neutron spectrum | H
Boron depletion due to exposure to neutron flux in the coolant | M
Boron depletion due to exposure to neutron flux in the CRUD | L
Fuel depletion and neutron flux calculation resolution disparity | L
Boron concentration computation method | L
Iron and nickel neutron absorption (new phenomenon) | M
Table 5

Mapping phenomena identified in coolant chemistry to VERA codes for CIPS CP

Phenomena | Code adequacy
Local changes (near the rod) in the equation of state | M
Temperature-dependent chemical reaction rates | M
CRUD porosity | M
CRUD permeability | M
CRUD chimney density | L
Water pH effect on steam generator corrosion | L
Water pH effect on CRUD deposition | M
Boron exchange in and out of the CRUD (new phenomenon) | M

Within CASL, the CIPS CP involves four codes: MPACT, CTF, BISON, and MAMBA. The conceptual, physics-based understanding of computational modeling for CIPS can be described as a series of steps. First, the simulation must compute a neutron flux that produces energy from fission (deposited in the fuel and the coolant); boron in CRUD, fuel temperature, moderator density, and moderator temperature are all feedback mechanisms. Next, the computation must conduct the energy in the fuel radially out from the center, across the gap, through the clad, and finally through the CRUD into the coolant, while the fuel is changing with burn-up and the gap is shrinking. Subsequently, the code must remove the heat from the clad to the coolant and advect it out of the core. Finally, the simulation must predict how CRUD is exchanged between the fuel pin surface and the coolant (boiling and nonboiling) and how boron is deposited in and on the CRUD.
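This feedback chain amounts to a fixed-point (Picard) iteration among the physics operators: power sets the clad temperature, temperature drives subcooled boiling and boron hideout in CRUD, and the deposited boron depresses the power in turn. The sketch below iterates a deliberately simplified scalar analog of that loop; every model and coefficient is a stand-in placeholder, not the actual MPACT/BISON/CTF/MAMBA physics:

```python
# Toy Picard iteration over the CIPS feedback chain. Each "solve" is a
# stand-in scalar model, not the actual coupled VERA solvers.
def solve_neutronics(boron_in_crud):
    # power drops as boron in CRUD absorbs neutrons
    return 1.0 / (1.0 + 0.5 * boron_in_crud)

def solve_fuel_and_coolant(power):
    # clad surface temperature rises with power (illustrative K-like scale)
    return 550.0 + 50.0 * power

def solve_crud_chemistry(clad_temp):
    # subcooled boiling above a threshold temperature drives boron hideout
    boiling = max(0.0, (clad_temp - 570.0) / 30.0)
    return 0.4 * boiling

power = 1.0
for it in range(100):
    clad_temp = solve_fuel_and_coolant(power)
    boron = solve_crud_chemistry(clad_temp)
    power_new = solve_neutronics(boron)
    if abs(power_new - power) < 1e-10:
        break  # fixed point of the feedback loop reached
    power = power_new
print(round(power, 4))  # converges near 0.8656 for these placeholder models
```

The converged state reflects the CIPS signature in miniature: boron accumulated in the CRUD suppresses the local power below its boron-free value.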

The PIRT for CIPS is constructed by identifying phenomena and processes in each governing physics area, i.e., neutronics, subchannel thermal-hydraulics, fuel modeling, and coolant chemistry. Tables 6–8 and 1 show the phenomena identified by PIRT for the CIPS CP in subchannel thermal-hydraulics, fuel modeling, neutronics, and coolant chemistry, respectively. The complete PIRT for the different challenge problems (CIPS, DNB, and PCI), with descriptions of all phenomena, is reported in the CASL V & V assessment for VERA [46]. The CIPS PIRT results presented in this paper represent two specific PIRT exercises: a preliminary, or Mini-PIRT, conducted in 2014 and a Mini-PIRT update conducted in 2017. The PIRT update for the CIPS CP was executed in two phases. First, the phenomena identified in the previous Mini-PIRT for CIPS were organized into a survey that was made available electronically to CIPS experts within CASL; the survey included the ability to suggest additional phenomena for consideration. The electronic survey was completed by several CASL researchers with expertise in different areas of M & S. Once the PIRT survey results were obtained, an extended discussion with all the participants was conducted to work through items with significant disagreement among the survey responses. This proved relatively efficient, since items on which participants were already well converged could be passed quickly, leaving more time for items with greater disagreement. The final score for each phenomenon was obtained by averaging the scores provided by the participants. The comparison of the 2014 and 2017 PIRT responses indicates some changes:

Table 6

Phenomena identified by PIRT for CIPS in subchannel thermal-hydraulics

| Phenomenon | Importance (2017 update) | Knowledge (2017 update) | Importance (2014 Mini-PIRT) | Knowledge (2014 Mini-PIRT) |
| --- | --- | --- | --- | --- |
| Steaming rate | 3.0 | 2.0 | 3.0 | 2.0 |
| Subcooled boiling on a clean metal surface | 3.0 | 3.0 | 3.0 | 3.0 |
| Subcooled boiling in CRUD | 3.0 | 1.0 | 3.0 | 1.0 |
| Bulk coolant temperature | 3.0 | 3.0 | 2.0 | 2.0 |
| Heat flux | 3.0 | 2.2 | 3.0 | 3.0 |
| Wall roughness | 2.0 | 1.0 | 1.0 | 1.0 |
| Single phase heat transfer | 2.0 | 2.5 | 1.0 | 2.0 |
| Mass balance of nickel and iron | 3.0 | 1.8 | 3.0 | 1.0 |
| Boron mass balance | 2.5 | 2.6 | 1.0 | 3.0 |
| CRUD erosion | 2.2 | 1.3 | 3.0 | 1.0 |
| Initial CRUD thickness (mass) | 2.5 | 2.0 | 3.0 | 1.0 |
| Initial coolant nickel and boron concentration | 2.7 | 2.3 | 3.0 | 1.0 |
| CRUD source term from steam generators and other surfaces | 3.0 | 1.7 | 3.0 | 1.0 |
| CRUD induced change in boiling efficiency | 2.7 | 1.3 | 1.0 | 2.0 |
| CRUD induced change in flow area | 0.7 | 1.4 | 1.0 | 2.0 |
| CRUD induced change in friction pressure drop | 1.0 | 1.6 | 1.0 | 1.0 |
| Change in thermal hydraulic equation of state due to chemistry | 1.8 | 1.3 | 1.0 | 1.0 |
| Change in local heat flux to the coolant from the fuel due to CRUD buildup | 1.7 | 1.5 | 3.0 | 1.0 |
| Heat flux distribution (new phenomenon) | 3.0 | 1.0 | — | — |
Table 7

Phenomena identified by PIRT for CIPS in fuel modeling

| Phenomenon | Importance (2017 update) | Knowledge (2017 update) | Importance (2014 Mini-PIRT) | Knowledge (2014 Mini-PIRT) |
| --- | --- | --- | --- | --- |
| Local changes in rod power due to burn-up | 2.0 | 2.2 | 3.0 | 2.0 |
| Fuel thermal conductivity changes as a function of burn-up | 1.5 | 1.8 | 3.0 | 2.0 |
| Changes in effective CRUD conductivity due to internal fluid flow and boiling | 2.0 | 1.0 | 3.0 | 2.0 |
| CRUD removal due to transient power changes | 2.0 | 1.0 | 3.0 | 2.0 |
| Fission product gas | 1.0 | 1.3 | 1.0 | 2.0 |
| Pellet swelling | 1.0 | 1.3 | 3.0 | 2.0 |
| Contact between the pellet and the clad | 1.0 | 1.3 | 3.0 | 2.0 |
Table 8

Phenomena identified by PIRT for CIPS in neutronics

| Phenomenon | Importance (2017 update) | Knowledge (2017 update) | Importance (2014 Mini-PIRT) | Knowledge (2014 Mini-PIRT) |
| --- | --- | --- | --- | --- |
| Local boron density increases absorption | 2.5 | 2.8 | 3.0 | 3.0 |
| Moderator displaced by CRUD and replaced with an absorber | 1.6 | 2.0 | 1.0 | 3.0 |
| Xenon impact on steady-state transients | 1.0 | 1.8 | 3.0 | 3.0 |
| Geometry changes in the pellet | 0.5 | 1.3 | 1.0 | 2.0 |
| Cross section changes | 2.7 | 2.7 | 3.0 | 2.0 |
| Fission product production | 1.3 | 1.7 | 2.0 | 2.0 |
| Fission product decay constants | 1.3 | 1.7 | 3.0 | 3.0 |
| Simplified decay chains | 1.0 | 1.0 | 2.0 | 2.0 |
| Boron induced shift in neutron spectrum | 1.5 | 2.0 | 2.0 | 2.0 |
| Boron depletion due to exposure to neutron flux in the coolant | 2.0 | 2.2 | 1.0 | 1.0 |
| Boron depletion due to exposure to neutron flux in the CRUD | 3.0 | 2.0 | 1.0 | 1.0 |
| Fuel depletion and neutron flux calculation resolution disparity | 1.0 | 1.8 | 1.0 | 1.0 |
| Boron concentration computation method | 0.8 | 1.6 | 1.0 | 1.0 |
| Iron and nickel neutron absorption (new phenomenon) | 2.0 | 3.0 | — | — |
  • new phenomena were added in the 2017 PIRT update (e.g., heat flux distribution was added and considered important for modeling the subchannel thermal-hydraulics physics, and boron exchange in and out of the CRUD was added and considered important for modeling the coolant chemistry), and

  • scores for several phenomena were changed in the 2017 PIRT update.

The above-mentioned changes can be attributed to the increased understanding of CIPS phenomena gained from model development and VVUQ after the first iteration of the PIRT in 2014. However, due to the subjective nature of the PIRT process, the differences may also stem from bias and disparity in the opinions of the 2014 and 2017 expert groups. Therefore, neither the preliminary PIRT nor the update should be considered exhaustive, and this is acknowledged as a current shortcoming of the V & V assessment. Given increased priority and resources in the future, or for any new CPs undertaken, a more comprehensive PIRT should be conducted.

Disagreement among the experts' opinions can be minimized by adopting the argumentation technique [47,48]. The argumentation technique uses explicit classifiers such as "claim," "argument," "justification," "assumption," "context," and "evidence" to represent information. Any claim regarding the knowledge and importance of a phenomenon needs to be supported by relevant evidence or justification. During a PIRT exercise, the experts may present data or evidence to support their opinions (or claims); however, this information sometimes gets lost in the discussion and is not properly documented. By conducting the PIRT in a structured framework created by the argumentation technique, the uncertainties due to the subjective nature of PIRT can be minimized.
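The survey workflow described above (collect per-expert scores, average them, then focus discussion on high-disagreement items) can be sketched as follows. The data layout and the 0.75 spread threshold are assumptions for illustration; CASL did not publish a numerical disagreement criterion.

```python
from statistics import mean, stdev

def aggregate_pirt(responses, spread_threshold=0.75):
    """Average expert scores per phenomenon and flag high-disagreement items.

    responses maps a phenomenon name to a list of (importance, knowledge)
    pairs, one pair per survey participant (both on the 0-3 PIRT scale).
    """
    summary = {}
    for phenomenon, scores in responses.items():
        importance = [imp for imp, _ in scores]
        knowledge = [knw for _, knw in scores]
        summary[phenomenon] = {
            "importance": round(mean(importance), 1),
            "knowledge": round(mean(knowledge), 1),
            # items with a wide spread are queued for the follow-up discussion
            "needs_discussion": (stdev(importance) > spread_threshold
                                 or stdev(knowledge) > spread_threshold),
        }
    return summary
```

Items flagged `needs_discussion` correspond to the survey entries that consumed most of the extended discussion, while well-converged items could be passed over quickly.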

5.3 Step 3: Define Requirement for Model Development and Assessment.

The third step in the assessment process defines the requirements for code capability development and assessment, i.e., code VVUQ. Requirements for code capability development are defined by determining the model and data needs with respect to the phenomena in the PIRT. Requirements for code assessment are defined based on the target maturity levels of the PCMM attributes for the respective CP.

5.4 Step 4: Map Requirements to Code Capability.

The fourth step in the assessment process is focused on mapping the code development and assessment requirements to the relevant single-physics and/or coupled simulation codes in VERA—CS. This step can be considered a transition from qualitative requirements to quantitative ones. For example, if the effect of CRUD deposition on cladding temperature is identified as an important phenomenon, then the associated requirement would be that the code must be able to compute CRUD deposition with a specified accuracy, precision, and range of validity. Mapping the requirements to code capability also helps in identifying gaps (due to model deficiencies and/or lack of data for validation).

A backlog of code requirements is established by examining the cost of implementation and the importance of pay-off for each of the phenomena. The code development teams bear the responsibility for model development and VVUQ. Each code development team examines their resources (i.e., developer time, computing hardware, etc.) and decides how much of the code requirement backlog they can address.
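A minimal sketch of that triage, assuming each backlog item carries its PIRT importance and knowledge scores plus an implementation cost (e.g., developer-weeks); the field names and the pay-off formula are illustrative, not the CASL procedure:

```python
def select_backlog(requirements, capacity):
    """Greedily fill a team's capacity with the highest pay-off-per-cost items.

    Each requirement is a dict with "name", PIRT "importance" and "knowledge"
    scores (0-3), and an implementation "cost" (e.g., developer-weeks).
    """
    def payoff_per_cost(req):
        # High importance combined with low current knowledge marks the
        # biggest gap worth closing; dividing by cost favors cheap wins.
        return req["importance"] * (3.0 - req["knowledge"]) / req["cost"]

    selected, spent = [], 0.0
    for req in sorted(requirements, key=payoff_per_cost, reverse=True):
        if spent + req["cost"] <= capacity:
            selected.append(req["name"])
            spent += req["cost"]
    return selected
```

Whatever falls outside the team's capacity remains in the backlog as a documented gap for the maturity assessment.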

5.5 Step 5: Assemble Evidence for Maturity Assessment.

The fifth step in the assessment process is focused on the accumulation of VVUQ evidence to support the PCMM assessment. The assembly of evidence for PCMM is supported by the input of CASL researchers and code teams working on different aspects of model development and VVUQ activities in CASL. Therefore, the collection of evidence is guided by:

  • Direct statements from the code teams working on the development and VVUQ of the codes.

  • Data/information gathered from user and theory manuals for the various codes as well as documentation of VVUQ activities such as verification test problems (e.g., observing the correct order of convergence for the numerical discretization schemes used in the codes) or comparison to validation data and uncertainty or sensitivity studies.

These two pieces of information are closely related, as statements from CASL code teams regarding the PCMM attributes need to be backed up by supporting data or information from CASL technical milestone documentation. A careful review of relevant documents is required to gather the specific set of information that verifies the assessment of a PCMM attribute at a specific level. During this process, gaps in code functionality and VVUQ are also identified. Gaps act as counterevidence in the PCMM assessment, as they undermine confidence in the codes' capability. In this way, a body of evidence is assembled that forms the sole basis for the maturity assessment. There is some subjectivity in assessing this evidence, and the authors acknowledge that there may be some disagreement about the numerical scores. Complete documentation of evidence related to the CIPS CP, as reported earlier in the CASL V & V assessment [46], is shown in Tables 9–19.
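The bookkeeping described here can be sketched as tagging each evidence item with the PCMM attributes it supports (or, for gaps, undermines) and rolling those tags up into a 0–3 score per attribute. The roll-up rule below is an illustrative placeholder; the actual CASL scores were assigned by expert judgment against the PCMM criteria.

```python
def score_attribute(evidence, attribute, target):
    """Roll evidence items up into a 0-3 maturity score for one PCMM attribute.

    evidence: list of dicts with an "id", a set of attribute "tags"
    (e.g., {"CVER"}), and a boolean "gap" marking counterevidence.
    target: the challenge problem's target maturity level (0-3).
    """
    support = [e for e in evidence if attribute in e["tags"] and not e["gap"]]
    gaps = [e for e in evidence if attribute in e["tags"] and e["gap"]]
    if not support:
        return 0                         # no evidence, no claimed maturity
    score = min(target, len(support))    # evidence can justify at most the target
    return max(0, score - len(gaps))     # each open gap undermines the claim
```

The key structural point is that gaps subtract from, rather than merely annotate, the claimed maturity, mirroring how counterevidence is treated in the assessment.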

Table 9

Evidence reference and description (for SQA and CVER of MPACT)

| Index | Category | Description | Relevance/Comments |
| --- | --- | --- | --- |
| MP.1.1.1 | HLE | Comprehensive MPACT V&V manual [7–10,13]. | |
| MP.1.1.2 | HLE | Comprehensive unit tests and regression tests support the SQA of MPACT [9]. | SQA |
| MP.1.1.3 | HLE | Some peer reviews conducted. | Need tracking of issues and resolution |
| MP.1.1.4 | HLE | Rigorous version control [9,11]. | SQA |
| MP.1.2.1 | MLE | Unit tests for individual functions and subroutines [11]. | SQA |
| MP.1.2.2 | MLE | Regression tests involve functional tests encompassing different sections of the code with various inputs [9–11]. | CVER |
| MP.1.2.3 | MLE | MPACT software test plan, requirements, and test report [11]. | SQA |
| MP.1.2.4 | MLE | Work is in progress to implement both the consistency test and the MMS test in the MPACT reactor code as part of the code verification and overall quality assessment effort for MPACT [13]. | CVER |
| MP.1.3.1 | LLE | Unit tests for solver kernels test against analytical solutions [9]. | Including CVER |
| MP.1.3.2 | LLE | Key capabilities tested [9]: geometry (RGF); transport solvers: P0 and Pn 2D method of characteristics (MOC), P0 and Pn 2D-1D with SP3 and nodal expansion method (NEM) solvers (PMMF); other solvers: depletion search (boron, rod), multistate, Eq. Xe/Sm, cross section shielding, coarse mesh finite difference (CMFD), cusping treatment; parallel solvers: message passing interface (MPI), Open MPI. | Including CVER |
| MP.1.3.3 | LLE | Code verification using the method of exact solutions: benchmark problem 3.4 in Ganapol [49] has been used as a code verification test for MPACT; MPACT agreed with all cases to within a few pcm [13]. | CVER |
| MP.1.3.4 | LLE | Code verification using the method of manufactured solutions (MMS): MMS was applied to the C5G7 benchmark problem to verify the 2D multigroup neutron transport solver. The relative error of the scalar flux is ∼1 × 10−8 for the first energy group and close to ∼1 × 10−5 for the thermal energy group; this close-to-zero error indicates that the scalar flux from the fixed-source problem converges to the same solution as from the eigenvalue calculation [13]. | CVER |
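Verification entries such as MP.1.2.4 and MP.2.2.2 rest on checking that discretization error shrinks at the scheme's formal rate as the mesh is refined. A generic way to compute the observed order of accuracy from errors on successively refined meshes (standard MMS/exact-solution practice, not MPACT-specific code):

```python
import math

def observed_order(errors, refinement_ratio=2.0):
    """Observed order of accuracy between successive mesh levels.

    errors: discretization errors (e.g., from an MMS comparison) on meshes
    where each level is `refinement_ratio` times finer than the previous one.
    A p-th order scheme should yield values approaching p.
    """
    return [math.log(coarse / fine) / math.log(refinement_ratio)
            for coarse, fine in zip(errors, errors[1:])]
```

For a second-order scheme, halving the mesh spacing should cut the error by a factor of four, so an error sequence like 1.0e-2, 2.5e-3, 6.25e-4 yields observed orders of 2.0.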
Table 10

Evidence related to MPACT SVER, RGF, and PMMF

| Index | Category | Description | Relevance/Comments |
| --- | --- | --- | --- |
| MP.2.1.1 | HLE | Supported by a test involving mesh convergence analysis and the method of manufactured solutions [9,13]. | SVER |
| MP.2.1.2 | HLE | Numerical effects are quantitatively estimated to be small on some SRQs (system response quantities) [9,13]. | SVER |
| MP.2.1.3 | HLE | I/O independently verified [9]. | |
| MP.2.1.4 | HLE | Some peer reviews conducted. | Need tracking of issues and resolution |
| MP.2.2.1 | MLE | Mesh convergence analysis: work is based on evaluating the sensitivity of k-effective to different MOC parameters (flat source region mesh, angular quadrature, ray spacing) for the VERA benchmark problems [9]. | SVER |
| MP.2.2.2 | MLE | The method of manufactured solutions will be used to quantify the rate of convergence of the solution with respect to the MOC parameters [7,10]. | Gap |
| MP.2.3.1 | LLE | Tests performed for a regular pin cell (VERA—CS benchmark problem 1a) and assembly (VERA—CS benchmark problem 1a) [9]. | RGF |
| MP.2.3.2 | LLE | The tests encompass radial and azimuthal discretization, ray spacing, angular quadrature, and the coupling between discretization parameters [9]. | RGF |
| MP.2.3.3 | LLE | MPACT library generation procedure [9]. | PMMF |
| MP.2.3.4 | LLE | Testing (and improvement) of the nuclide transmutation solver (ORIGEN) application programming interface (API) [9]. | PMMF |
| MP.2.3.5 | LLE | Extensive solution verification tests performed for 3D assembly geometry and 2D pin geometry [13]. | SVER |
Table 11

Evidence related to MPACT validation

| Index | Category | Description | Relevance/Comments |
| --- | --- | --- | --- |
| MP.3.1.1 | HLE | Quantitative assessment of predictive accuracy for key SRQs from IETs and SETs [7,10]. | SVAL, IVAL |
| MP.3.1.2 | HLE | MPACT validation is supported [8,9] by measured data from different criticality tests, operating nuclear power plants, measured isotopes from irradiated fuel, calculations from continuous-energy Monte Carlo simulation, and postirradiation examination (PIE) tests used for evaluation and validation of the isotopic depletion capability in MPACT. | SVAL, IVAL |
| MP.3.1.3 | HLE | Demonstrated capability to support CIPS. | Validation of phenomena using experimental data |
| MP.3.1.4 | HLE | Additional validation is required. | Gap |
| MP.3.2.1 | MLE | Criticality tests encompass: critical condition, fuel rod fission rate distribution, control rod/burnable poison worth, isothermal temperature coefficient. | |
| MP.3.2.2 | MLE | Operating nuclear power plants: critical soluble boron concentration, MOC physics parameters (control rod worth, temperature coefficient, fission rates). | RGF |
| MP.3.2.3 | MLE | Measured isotopes from postirradiation examination: gamma scans of Cs-137, burnup based on Nd-148, full radiochemical assay of the major actinides and fission products. | |
| MP.3.2.4 | MLE | Continuous-energy Monte Carlo simulation: 3D core pin-by-pin fission rates at operating conditions, intrapin distribution of fission and capture rates, reactivity, pin power distribution, gamma transport, thick radial core support structure effects. | |
| MP.3.3.1 | LLE | Babcock & Wilcox critical experiments (validation based on fast flux, fission power, and cross section data). | Successful validation shows adequate quality in RGF and PMMF |
| MP.3.3.2 | LLE | Development of preliminary VERA—CS CRUD induced localized corrosion modeling capability (milestone L2:PHI.P17.03) [53]. | RGF, PMMF |
| MP.3.3.3 | LLE | Special power excursion reactor test (SPERT) (validation based on fast flux, fission power, and cross section data). | IVAL, RGF, PMMF |
| MP.3.3.4 | LLE | DIMPLE critical experiments (validation based on fast flux, fission power, and cross section data). | IVAL, RGF, PMMF |
| MP.3.3.5 | LLE | Watts Bar Nuclear plant (validation based on fast flux, fission power, isotopics, boron feedback to neutronics, cross section data, burnup); MPACT validation for the WB2 startup tests (Godfrey, 2017) [54]. | IVAL, RGF, PMMF |
| MP.3.3.6 | LLE | Benchmark for evaluation and validation of reactor simulations (BEAVRS): validation based on fission power, isotopics, boron feedback to neutronics, cross section data, burnup. | IVAL, RGF, PMMF |
| MP.3.3.7 | LLE | Validation by code-to-code comparisons using the Monte Carlo N-Particle (MCNP) transport code. | IVAL, RGF, PMMF |
| MP.3.3.8 | LLE | Reaction rate analysis. | |
| MP.3.3.9 | LLE | VERA progression problems 1–4. | RGF, PMMF |
| MP.3.3.10 | LLE | Extensive PWR pin and assembly benchmark problems. | RGF, PMMF |
Table 12

Evidence related to SQA and verification (CVER, SVER) of CTF

| Index | Category | Description | Relevance/Comments |
| --- | --- | --- | --- |
| CT.1.1.1 | HLE | SQA is based on unit tests and regression tests. | SQA |
| CT.1.1.2 | HLE | Documentation of the SQA of the base code is required. | Gap |
| CT.1.1.3 | HLE | Code verification work is insufficient. | Gap |
| CT.1.1.4 | HLE | Solution verification study performed via a mesh refinement study. | SVER |
| CT.1.2.1 | MLE | Unit tests: tests for different classes/procedures. | SQA |
| CT.1.2.2 | MLE | Regression tests: unit tests, verification problems, and validation problems used as regression tests. | SQA |
| CT.1.2.3 | MLE | Code verification: few models have been verified using an analytical solution. | Limited CVER |
| CT.1.2.4 | MLE | Solution verification by a mesh refinement study for progression problem 6. | Limited SVER |
| CT.1.3.1 | LLE | (Unit tests) Cover input reading, fluid properties, units, etc. | SQA |
| CT.1.3.2 | LLE | (Regression tests) Cover both steady-state and transient simulation; all V&V test inputs are part of the CTF repository; PHI continuous testing system. | SQA |
| CT.1.3.3 | LLE | Tested phenomena: single-phase wall shear, grid heat transfer enhancement, isokinetic advection, shock tube, water faucet. | CVER |
| CT.1.3.4 | LLE | Tests performed with and without spacer grids; QoI: total pressure drop across the assembly. | SVER |
| CT.1.3.5 | LLE | Validation tests are used as regression tests, run on a continual basis to demonstrate that code results are not changing. | SQA |
| CT.1.3.6 | LLE | Code-to-code benchmarking with the subchannel code VIPRE-01. | SQA |
| CT.1.3.7 | LLE | Comparison of CTF-predicted rod surface temperature with STAR-CCM+-predicted rod surface temperature. | SQA |
| CT.1.3.8 | LLE | Details on CTF coverage by code and solution verification are provided in the latest CTF code and solution verification report. There are some gaps in the assessment (grid shear enhancement and grid heat transfer enhancement are not tested); convergence behavior and numerical errors need to be quantified [16]. | CVER, SVER, Gap |
| CT.1.3.9 | LLE | Solution verification tests were conducted [16]: the first solution verification problem in assembly geometry is a modification of Problem 3 in CASL's Progression Test Suite (Godfrey) for decoupled codes; the second is a modification of Problem 6 in the Progression Test Suite, which emphasizes coupled CTF and MPACT calculations using VERA—CS. These solution verification tests represent a nearly complete integration of the physics capabilities in assembly geometry. | SVER |
| CT.1.3.10 | LLE | Solution and code verification of the wall friction model in CTF [18]. | CVER, SVER |
| CT.1.3.11 | LLE | Solution verification on the governing equations for the water faucet problem [55]. | SVER |
| CT.1.3.12 | LLE | Two-phase pressure drop code verification study. | CVER |
Table 13

Evidence related to validation and RGF of CTF

| Index | Category | Description | Relevance/Comments |
| --- | --- | --- | --- |
| CT.2.1.1 | HLE | Lack of separate-effect validation. | Gap |
| CT.2.1.2 | HLE | Extensive integral-effect validation was done. | |
| CT.2.2.1 | MLE | Testing of component models (correlations). | See Table 14 for details |
| CT.2.2.2 | MLE | Integral-effect test validation. | See Table 15 for details |
| CT.2.3.1 | LLE | High-to-low fidelity simulation using STAR-CCM+ was used to improve the grid heat transfer effect for rod bundle geometry. | Accuracy improvement |
| CT.2.3.2 | LLE | Development of preliminary VERA—CS CRUD induced localized corrosion modeling capability (milestone L2:PHI.P17.03) [53]. | RGF |
| CT.2.3.3 | LLE | Improvement in the representation and geometric fidelity of CTF was shown by the calibration study using measured plant data (Watts Bar nuclear plant) and experimental loop data (Westinghouse Advanced Loop Tester, WALT) [55]. | RGF |
Table 14

Testing of component models for CTF [51]

| Phenomenon | Model | Validation test status | Verification test status |
| --- | --- | --- | --- |
| Single-phase convection | Dittus–Boelter | Completed | |
| Subcooled boiling heat transfer | Thom | Completed | |
| Single-phase grid spacer pressure loss | Form loss | Completed | |
| Single-phase wall shear | Darcy–Weisbach | Completed | Completed |
| Grid heat transfer enhancement | Yao–Hochreiter–Leech | | |
| Single-phase turbulent mixing | Mixing-length theory | Completed | Completed |
| Pressure-directed cross flow | Transverse momentum equation | | |
Table 15

CTF integral effect test validation [51]

| Effect | Experiments |
| --- | --- |
| Pressure drop | BWR full-size fine-mesh bundle test (BFBT), FRIGG test loop |
| Void/quality | PWR subchannel bundle test (PSBT), FRIGG test loop |
| Single-phase turbulent mixing | General Electric (GE) 3 × 3 bundle tests, Combustion Engineering (CE) 5 × 5 rod bundle tests, RPI |
| Turbulent mixing/void drift | GE 3 × 3 bundle tests, BFBT |
| DNB | Harwell high-pressure loop test, Takahama |
| Heat transfer | CE 5 × 5 rod bundle test |
| Natural circulation | Pacific Northwest National Laboratory (PNNL) 2 × 6 rod array |
| Fuel temperature | Halden test |
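Integral-effect validation like that summarized in Table 15 ultimately reduces to quantitative accuracy statements for the measured SRQs. A generic sketch of such a comparison is below; the metric choice (signed bias plus RMS relative error) and the data are illustrative, not the comparison measures CASL reports.

```python
import math

def validation_metrics(predicted, measured):
    """Mean (signed bias) and RMS relative error of predictions vs. data.

    predicted/measured: paired SRQ values, e.g., bundle pressure drops from
    a subchannel calculation versus the corresponding loop measurements.
    """
    relative = [(p - m) / m for p, m in zip(predicted, measured)]
    n = len(relative)
    mean_error = sum(relative) / n                            # signed bias
    rms_error = math.sqrt(sum(r * r for r in relative) / n)   # overall spread
    return mean_error, rms_error
```

A near-zero bias with a nonzero RMS, for instance, indicates scatter without a systematic over- or under-prediction.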
Table 16

Evidence related to SQA, V&V, and UQ of MAMBA

Index | Category | Description | Relevance/comments
MA.1.1.1 | HLE | For the MAMBA3D refactoring, the developers are implementing a unit and regression testing protocol that should result in robust source code verification when the code is completed at the end of PoR15. | Gap
MA.1.1.2 | HLE | SQA needs some improvement. | Gap
MA.1.1.3 | HLE | Low-level code verification was performed. | Gap
MA.1.1.4 | HLE | Solution verification is not done for CASL CPs. | Gap
MA.1.1.5 | HLE | Some validation work was performed (SET, IET, and plant analysis). | Gap
MA.1.2.1 | MLE | Solution verification and code verification using analytical solutions are in progress [56]. |
MA.1.2.2 | MLE | Simulation of the Westinghouse WALT experiment: |
  • Comparison of cladding temperature versus rod power and crud thickness against the WALT data.
MA.1.2.3 | MLE | An initial CIPS study compared axial offset predicted by coupled MAMBA/CTF/MPACT with plant data for Watts Bar. | Multiple codes
MA.1.2.4 | MLE | Plant analysis: | Multiple codes
  • CIPS study by coupled MAMBA (1D)/CTF/MPACT simulations compared with plant data.
  • Oxide thickness and morphology compared with an operating plant.
MA.1.3.1 | LLE | Software quality assurance: | SQA Gap
  • Unit testing (water properties).
  • Unit test coverage is good and most of the important routines are tested.
  • The automatic test coverage feature reported coverage of ∼98%.
  • Source properties and steam generator properties are not tested in the assessed version of MAMBA [56].
MA.1.3.1 | LLE | Comparisons between the model in the FACTSAGE code and MAMBA [57]. | SQA, PMMF
MA.1.3.2 | LLE | Comparison to the BOA 3.0 code for the heat transfer/chimney boiling model, mass evaporation rate versus crud thickness, pin power, and thermochemistry. | Quasi-CVER
MA.1.3.3 | LLE | Comparison to MAMBA-BDM to verify cladding temperature and boiling velocity. | Quasi-SVER
MA.1.3.4 | LLE | Convergence studies for the main quantities of interest as a function of the radial mesh density are completed. Convergence studies with respect to the internal time-step size are completed [56]. |
MA.1.3.5 | LLE | Code verification and solution verification tests conducted [56]: | CVER and SVER
  • The thermal and mass transport solvers were compared to analytical solutions for a simple diffusion problem (no convection or sinks/sources).
  • A simplified thermal diffusion problem with a sink term was solved by introducing a few minor code changes and compared to the form of the corresponding analytical solution.
  • A simplified convection-diffusion problem was implemented by setting reaction rates for internal chemical reactions to zero and choosing the concentrations of Li and B to avoid precipitation of Li2B4O7.
  • The solution to the CRUD growth rate equation was verified by comparison to an analytical solution.
MA.1.3.6 | LLE | Inference of CRUD model parameters from plant data [55]. | IVAL (partial credit) calibration study
MA.1.3.7 | LLE | Improvement in the MAMBA source term model was achieved by calibration using measured plant data and experimental loop data. The calibration process was able to estimate thermophysical and growth rate parameters in MAMBA given experimental evidence in the form of flux maps and thermocouple measurements. The small-scale WALT loop calibration demonstrated the ability to perform statistical inference of the thermophysical crud parameters present in MAMBA given an experimental data set from a small-scale crud test loop using a Markov chain Monte Carlo sampler [55]. | IVAL (partial credit) calibration study
MA.1.3.8 | LLE | Improvement in representation and geometric fidelity of MAMBA was shown by the calibration study using measured plant data (Watts Bar Nuclear Plant) and experimental loop data (Westinghouse advanced loop tester, WALT) [55]. | RGF
MA.1.3.9 | LLE | Development of preliminary VERA—CS CRUD induced localized corrosion modeling capability (milestone: L2:PHI.P17.03) [53]. | RGF
Table 17

Evidence related to validation of BISON

Index | Category | Description | Relevance/comments
BI.2.1.1 | HLE | IET and SET validation work performed for key physical phenomena related to CASL quantities of interest [58,59]. |
BI.2.2.1 | MLE | LWR validation (48 cases). |
BI.2.2.2 | MLE | Validation metrics: |
  • Fuel centerline temperature through all phases of fuel life.
  • Fission gas release.
  • Clad diameter (PCMI).
BI.2.3.1 | LLE | LWR fuel benchmark: reasonable prediction of centerline temperature. |
BI.2.3.2 | LLE | LWR fuel benchmark: rod diameter prediction with large errors. | Gap
BI.2.3.3 | LLE | LWR fuel benchmark: large uncertainty in key models: | Gap (need SVER)
  • Relocation (and recovery).
  • Fuel (swelling) and clad creep.
  • Frictional contact.
  • Gaseous swelling (at high temperature).
BI.2.3.4 | LLE | L3:FMC.CLAD.P13.04 – cluster dynamics modeling of hydride precipitation. | UQ (data assessment)
BI.2.3.5 | LLE | SET (bursting experiments) and IET validation of BISON for LOCA behavior [59]. Validation of BISON against integral LWR experiments (IET validation) [58]. | SVAL and IVAL
Table 18

Evidence related to VERA—CS verification and validation

Index | Category | Description | Relevance/comments
VE.1.1.1 | HLE | The initial VERA—CS validation efforts with WB Unit 1 and BEAVRS provide a sufficient basis to propose metrics that can be used to assess the adequacy of PWR core follow calculations for addition to the VERA—CS validation base. |
VE.1.1.2 | HLE | For every new VERA—CS reactor analyzed, the metrics shown in Table 19 were suggested as an initial proposal. |
VE.1.2.1 | MLE | Specific attention/analysis would be expected for any plants/cycles/measurements that fall outside of these metrics (VE.1.1.2). |
VE.1.2.2 | MLE | A red-flag condition would be automatically generated for results outside this metric (VE.1.1.2) and require reevaluation and review before the data are admitted to the validation base. |
VE.1.2.3 | MLE | The TIAMAT code for MPACT-BISON code coupling requires significant V&V work. | References [60–62]; Gap
VE.1.3.1 | LLE | Godfrey [54,63] successfully demonstrated the ability of VERA—CS to model the operating history of Watts Bar Unit 1 Cycles 1–12 and Watts Bar Unit 2. A rigorous benchmark was performed using criticality measurements, physics testing results, critical soluble boron concentrations, and measured in-core neutron flux distributions. | PMMF, RGF
VE.1.3.2 | LLE | The measured data provided by BEAVRS include Cycle 1 and 2 ZPPT results, power escalation and HFP measured flux maps, and HFP critical boron concentration measurements for both cycles. In general, the VERA—CS predictions for Cycle 1 are in good agreement with the plant data. | PMMF, RGF
VE.1.3.3 | LLE | Cycle 2 of BEAVRS has been completed and similar results were observed (to be documented). | PMMF, RGF
VE.1.3.4 | LLE | Need to verify the MPACT-CTF coupling for a more general range of applications, including: | Gap
  • Nonsquare cells, complex composition mixtures such as coolant + grid mixtures, and regions with major variation (e.g., above/below the region CTF models).
  • The impact of thermal expansion on the verification of the MPACT-CTF coupling.
VE.1.3.5 | LLE | L2:VMA.P12.01 – data assimilation and uncertainty quantification using VERA—CS for a core-wide LWR problem with depletion [64]. | UQ
VE.1.3.6 | LLE | L2:VMA.VUQ.P11.04 – uncertainty quantification analysis using VERA—CS for a PWR fuel assembly with depletion. | UQ
VE.1.3.7 | LLE | L2:VMA.P13.03 – initial UQ of CIPS [65]. | UQ
VE.1.3.8 | LLE | Uncertainty quantification and sensitivity analysis with the CASL core simulator VERA—CS [66]. | UQ/SA
VE.1.3.9 | LLE | Uncertainty quantification and data assimilation (UQ/DA) study on a VERA core simulator component for CRUD analysis [67]. | UQ
VE.1.3.10 | LLE | Improvement in representation and geometric fidelity of VERA—CS (MAMBA and CTF) was shown by the calibration study using measured plant data (Watts Bar Nuclear Plant) and experimental loop data (WALT experiment) [55]. | RGF
VE.1.3.11 | LLE | Development of preliminary VERA—CS CRUD induced localized corrosion modeling capability (milestone: L2:PHI.P17.03) [53]. | RGF (MAMBA, MPACT, and CTF)
Table 19

Metrics for the evaluation of VERA validation [52]

Startup | State point
HZP boron: ±20 ppm | HFP boron: ±35 ppm
Rod worth: ±7% | AO: ±3%
ITC: ±1 pcm/°F | Pin power distribution and peaking factors: ±2%

5.6 Step 6: Evidence Classification and Organization.

The sixth step involves the classification and organization of evidence for the maturity assessment. Evidence is classified based on the type of code, relevance to the PCMM attributes, and level of detail. An identifier is assigned to each piece of evidence for each software tool (e.g., MPACT, CTF); the identifier itself encodes information about the evidence. Identifiers take the form
AB.x.y.z

where AB identifies the code to which the evidence refers, x corresponds to the PCMM attribute or set of attributes for which the evidence was identified, y is a level identifier that indicates the level of detail of the evidence, and z is a counter that differentiates between multiple pieces of evidence. The level of detail of evidence is represented using three levels:

  • High-level evidence (HLE): Global statement or activity related to model development and VVUQ of code,

  • Medium-level evidence (MLE): Specific task to support the high-level evidence,

  • Low-level evidence (LLE): Reference to performance or test details.

Due to space constraints, it is not possible to include all the details from the CASL milestone reports and documentation in the evidence tables. Therefore, key information is abstracted from the relevant sources and included as the evidence description. The “relevance/comment” column in the evidence tables clarifies or contains the following information:

  • what is the relevant PCMM attribute for the evidence?

  • does the evidence refer to a gap in M & S functionality or assessment (VVUQ)?

  • any comment or further detail related to the evidence.

The evidence in Tables 9–18 is classified and organized according to the aforementioned scheme. As an example, consider MP.1.1.1 (in Table 9), a high-level piece of evidence stating that comprehensive documentation related to the verification and validation of MPACT exists. However, this statement does not include any specific details about the type of tests performed or their results. Such information is captured by low-level evidence; e.g., evidence MP.1.3.1 indicates that a code verification exercise was performed using the method of exact solutions: benchmark problem 3.4 in Ganapol [49] was used as a code verification test for MPACT, and MPACT agreed with all cases to within a few per cent mille (pcm) [13].
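The identifier scheme can be sketched programmatically. The snippet below is an illustrative parser; the function and class names are hypothetical, and only the AB.x.y.z convention itself comes from the text above.

```python
# Sketch of the evidence-identifier scheme AB.x.y.z described above.
# The level digits (1 = HLE, 2 = MLE, 3 = LLE) follow the paper's
# convention; the helper names themselves are illustrative.
from dataclasses import dataclass

LEVELS = {1: "HLE", 2: "MLE", 3: "LLE"}

@dataclass
class EvidenceId:
    code: str       # software tool, e.g., "MP" for MPACT
    attribute: int  # PCMM attribute (set) the evidence supports
    level: str      # level of detail: HLE, MLE, or LLE
    counter: int    # distinguishes multiple pieces of evidence

def parse_evidence_id(identifier: str) -> EvidenceId:
    """Parse an identifier of the form AB.x.y.z (e.g., 'MP.1.3.1')."""
    code, x, y, z = identifier.split(".")
    return EvidenceId(code=code, attribute=int(x),
                      level=LEVELS[int(y)], counter=int(z))

print(parse_evidence_id("MP.1.3.1"))
# → EvidenceId(code='MP', attribute=1, level='LLE', counter=1)
```

For instance, MA.1.2.3 parses to the MAMBA code, attribute set 1, medium level of evidence, third entry.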

5.7 Step 7: Map Predictive Capability Maturity Model Attribute to Supporting Evidence.

The seventh step in the assessment process is focused on mapping the evidence to the relevant PCMM attributes. CASL VERA—CS is developed for the different applications mentioned in Sec. 2, and a common body of evidence is used for its assessment across the different CPs. However, the significance of a piece of evidence is governed by its relevance to the specific CP (through the PIRT) and to the corresponding PCMM attribute. Three levels, i.e., high (H), medium (M), and low (L), are used to specify the level of significance. Mapping evidence to PCMM attributes helps in assigning the PCMM score (maturity level) and provides credibility to the assessment of the attributes. Tables 20 and 21 show the results of mapping the evidence for the assessment of different PCMM attributes of VERA for CIPS. The evidence in these tables is graded based on its significance level with respect to the PCMM attributes and CIPS; e.g., evidence MP.1.3.1 and MP.1.3.4 are graded at a high significance level (H) as they directly support code verification of the neutronics component (MPACT) in VERA—CS. As another example, consider evidence MP.3.3.6, MP.3.3.7, MP.3.3.9, and MP.3.3.10 in Table 11. This evidence is related to validation of MPACT using data from different test facilities and benchmark problems. However, it adds value to the assessment of three different attributes at different significance levels: (1) integral effect test validation of VERA—CS at a low significance level (supporting validation of the neutronics component in VERA—CS) (see Table 21), (2) representation and geometric fidelity of VERA—CS at a medium significance level (see Table 20), and (3) physics and material model fidelity at a low significance level (see Table 20). The significance level for this evidence is medium or low because of scaling issues and assumptions pertaining to the test facilities and benchmark problems.
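The mapping described above can be pictured as a set of (attribute, evidence, significance) triples. The sketch below uses a few entries drawn from the mapping for CIPS; the data structure and variable names are illustrative, not CASL tooling.

```python
# Sketch: organizing evidence-to-attribute mappings by significance
# (H/M/L). The four example triples reflect gradings discussed in the
# text (MP.3.3.6 is medium for RGF but low for PMMF); the container
# itself is a hypothetical illustration.
from collections import defaultdict

mapping = [
    ("RGF", "MP.1.3.2", "H"),   # high significance for geometric fidelity
    ("RGF", "MP.3.3.6", "M"),   # benchmark evidence, medium significance
    ("PMMF", "MP.3.3.6", "L"),  # same evidence, low significance for PMMF
    ("PMMF", "MA.1.3.2", "H"),
]

by_attribute = defaultdict(lambda: {"H": [], "M": [], "L": []})
for attribute, evidence, significance in mapping:
    by_attribute[attribute][significance].append(evidence)

print(by_attribute["RGF"]["M"])   # ['MP.3.3.6']
```

Grouping the triples this way makes it straightforward to review, per attribute, which high-significance evidence supports a given maturity score.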

Table 20

Evidence supporting the assessment of VERA—CS for CIPS-CP (RGF, PMMF)

PCMM attribute | significance of evidence (H/M/L) | Gap/overall evaluation

RGF: representation and geometric fidelity (overall: Marginal [1.5])
  H: MP.1.3.2, MP.2.3.1, MP.2.3.2, MP.3.2.2, MA.1.3.8, MA.1.3.9, CT.2.3.2, CT.2.3.3, VE.1.3.10, VE.1.3.11
  M: MP.3.3.1, MP.3.3.3, MP.3.3.4, MP.3.3.5, MP.3.3.6, MP.3.3.7, MP.3.3.9, MP.3.3.10, CT.2.2.2, VE.1.3.1, VE.1.3.2, VE.1.3.3

PMMF: physics and material model fidelity (overall: Marginal [1.5])
  H: MA.1.3.2, MP.2.3.3, MP.2.3.4, VE.1.3.1, VE.1.3.2, VE.1.3.3
  M: MP.3.3.1, MP.3.3.3, MP.3.3.4, MP.3.3.5
  L: MP.3.3.6, MP.3.3.7, MP.3.3.9, MP.3.3.10
Table 21

Evidence supporting the assessment of VERA—CS for CIPS-CP (VVUQ)

Significance
PCMM attribute | H | M | L | Gap/overall evaluation
SQA: software quality assurance (including documentation)
  MA.1.3.2 | MP.1.1.2 | CT.1.3.2 | MP.1.1.1 | CT.1.1.2
  MP.1.1.3 | MP.1.1.4 | CT.1.3.5 | MP.1.2.1 | MA.1.1.1
  CT.1.1.1 | CT.1.2.1 | CT.1.3.6 | MP.1.2.2 | MA.1.1.2
  MA.1.3.1 | CT.1.2.2 | CT.1.3.7 | MP.1.3.1 | Marginal [1.5] (MAMBA)
  CT.1.3.1 | MP.1.3.2
CVER: code verification
  MP.1.2.2 | MP.1.3.1 | MP.2.2.2
  MP.2.3.4 | MP.1.3.2 | CT.1.1.3
  MP.1.3.3 | CT.1.3.3 | CT.1.2.3
  MP.1.3.4
  MA.1.1.3
  MP.1.2.3
  VE.1.3.4
  CT.1.2.3
  Need improvement [1]
  CT.1.3.8
  CT.1.3.10
  CT.1.3.12
  MA.1.3.2
  MA.1.3.4
  MA.1.3.5
SVER: solution verification
  MP.2.1.1 | MP.2.1.2 | MP.2.2.1 | MP.2.2.2
  MP.2.1.4 | MP.2.1.3 | MP.2.3.1 | CT.1.2.4
  MP.2.3.5 | MP.2.3.3 | MP.2.3.2 | MA.1.1.4
  CT.1.1.4 | MP.2.3.4 | MP.3.2.4 | MA.1.2.1
  CT.1.2.4 | CT.1.3.4 | VE.1.3.4
  CT.1.3.9 | Need improvement [1]
  CT.1.3.11
  MA.1.3.3
  MA.1.3.5
SVAL: separate effects validation
  MP.3.1.1 | MP.2.3.1 | MP.3.2.1 | MP.3.1.4
  BI.2.3.5 | MP.3.1.3 | MP.3.2.4 | CT.2.1.1
  CT.2.2.1 | MP.3.3.1 | MA.1.1.5
  MP.3.3.7 | Need improvement [1] (MAMBA)
  MP.3.3.8
  MP.3.3.9
  MP.3.3.10
IVAL: integral effects validation
  MP.3.1.1 | MP.3.1.2 | MP.3.2.2 | MP.3.1.4
  MA.1.2.2 | MP.3.1.3 | MP.3.2.3 | MA.1.1.5
  MA.1.2.3 | CT.2.1.2 | MP.3.3.3 | CT.2.3.1
  MA.1.2.4 | MP.3.3.4
  MP.3.3.5
  VE.1.1.2 | MP.3.3.6
  VE.1.2.1 | CT.2.2.2 | Marginal [1.5]
  VE.1.2.2 | MA.1.3.6
  BI.2.3.5 | MA.1.3.7
UQSA: uncertainty quantification and sensitivity analysis
  VE.1.3.5 | None [0]
  VE.1.3.6
  VE.1.3.7
  VE.1.3.8
  VE.1.3.9

The assessment of VERA—CS for a CP is an iterative process, and at the end of each iteration, information related to gaps in modeling capability (see Fig. 2 for an illustration), data needs, and the status of VVUQ is obtained. This information guides the development and assessment process for the subsequent iteration. If the target maturity level for all PCMM attributes is achieved with credible evidence, the assessment is complete. The current assessments of VERA—CS (MPACT, MAMBA, and CTF) for CIPS are shown in Table 22. The PCMM scoring scheme is described in the next section.

Table 22

PCMM scoring for CIPS CP

PCMM attribute | MPACT | CTF | MAMBA
Representation and geometric fidelity | 3 | 2 | 2
Physics and material model fidelity | 3 | 2 | 1.5
Software quality assurance | 2 | 2 | 1
Code verification | 2 | 2 | 1
Solution verification | 2 | 2 | 1.5
Separate effects validation | 2 | 1 | 0
Integral effects validation | 2 | 2 | 1
Uncertainty quantification | 0 | 0 | 0
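Because the assessment iterates until every attribute reaches its target maturity, the end-of-iteration gap check can be sketched as follows. The two score rows are taken from Table 22; the target level of 2 is an assumed value for illustration only.

```python
# Hypothetical sketch of the end-of-iteration gap check: compare
# achieved PCMM scores (two rows from Table 22) against an assumed
# target maturity level, and list attribute/code pairs that still
# need development or VVUQ work in the next iteration.
scores = {
    "Separate effects validation": {"MPACT": 2, "CTF": 1, "MAMBA": 0},
    "Uncertainty quantification": {"MPACT": 0, "CTF": 0, "MAMBA": 0},
}
TARGET = 2  # assumed target maturity level, not taken from the paper

gaps = [(attr, code)
        for attr, per_code in scores.items()
        for code, score in per_code.items()
        if score < TARGET]
print(gaps)  # five (attribute, code) pairs remain below the target
```

Pairs that appear in the gap list are exactly the entries that would guide development priorities for the subsequent iteration.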

6 Predictive Capability Maturity Model Scoring Technique

This section describes the process for making scoring decisions in the current assessment. PCMM is based on qualitative descriptors or criteria, and there is no quantitative measure that clearly defines the transition from one level to the next. However, the four maturity levels in PCMM span a very wide range, and the provided qualitative descriptors are sufficient to resolve them. In the current assessment, the scores are determined by carefully reviewing the evidence against the qualitative assessment criteria for the attributes at the specified target levels. During evidence classification and organization, gaps are clearly identified and documented. These gaps help in determining the completeness of a body of evidence for a specific maturity level.

For all PCMM attributes, the decision process is supported by the phenomenology identified in the PIRT for each CP. For representation and geometric fidelity and for physics and material model fidelity, the maturity scoring is based on the ability of the code(s) to address the dependent phenomenology identified for each CP. For example, CRUD formation involves porosity and chimneys that promote boiling, and current modeling does not resolve these features. For software quality assurance and engineering, the concept of regression test line coverage was used to help support the decision-making between maturity scores. For code verification, particular attention was paid to the partial differential equations relevant to simulating the phenomena of interest; the code verification evidence was considered in light of which partial differential equations and associated solvers are tested for convergence behavior. For solution verification, the numerical effects on the SRQs relevant to the CPs were analyzed. For both separate and integral effects validation, the phenomena of interest were closely compared to the available validation data and the associated comparisons to modeled results. For every CP, there is insufficient validation data to support the validation of every identified phenomenon; to distinguish between maturity levels, a simple “majority rule” of validated phenomena was utilized. For uncertainty quantification, only the simulation of quantities of interest relating to the particular CP was considered. Future research in this area should involve a quantitative assessment of uncertainties that can further drive phenomena identification.
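As a simplified illustration (not the actual CASL scoring procedure, which also weighs the qualitative PCMM criteria), the “majority rule” for the validation attributes can be sketched as:

```python
# Simplified sketch of the "majority rule" used to help distinguish
# validation maturity levels: a phenomenon counts as covered when
# validation data and model comparisons exist for it.
def majority_validated(phenomena_status: dict) -> bool:
    """True if more than half of the identified phenomena are validated."""
    covered = sum(1 for ok in phenomena_status.values() if ok)
    return covered > len(phenomena_status) / 2

# Illustrative (hypothetical) status flags for a few CIPS-related phenomena
status = {
    "subcooled boiling heat transfer": True,
    "single-phase turbulent mixing": True,
    "pressure-directed cross flow": False,
}
print(majority_validated(status))  # True: 2 of 3 phenomena are covered
```

The rule is deliberately coarse; it only separates maturity levels when the validation evidence cannot cover every phenomenon identified in the PIRT.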

7 Conclusion

This paper summarizes the process of assessing VERA—CS for the CASL CPs. The classical PIRT methodology is adopted to identify the phenomena relevant to a CP application. Based on the identified phenomena, requirements for model development and assessment are defined and mapped to the different codes in VERA—CS. Evaluation of VERA—CS is performed by assessing different PCMM attributes related to the VVUQ of the codes. Credibility in the assessment is established by mapping relevant evidence obtained from the VVUQ of the codes. The approach described herein has been iteratively applied to VERA—CS, and the incremental findings from each report have been utilized to prioritize code development and VVUQ activities within the CASL program. Given the large volume of heterogeneous data (or evidence) from the various modeling and VVUQ activities of different codes related to the CPs, PCMM serves as a convenient tool for predictive capability evaluation. However, it needs a formal structure and the ability to incorporate evidence that can support claims regarding the maturity levels achieved by a particular code. Evidence forms the basis of any PCMM assessment. The current assessment takes into account the relevance and level of detail of the evidence; however, this is not sufficient. Assumptions and justifications related to the evidence and its grading also need to be incorporated, and incorporating such details is difficult without formalizing the PCMM.

The assessment methodology presented here has certain drawbacks. The process of collecting evidence involves a certain degree of subjectivity. PCMM lacks a quantitative basis for measuring the evidence, and qualitative classifiers such as “low,” “medium,” “high,” “some,” and “many” are extensively used. Both PCMM and PIRT rely on expert opinion; therefore, the assessment may be affected by the knowledge and expertise of the people conducting them, and both are subject to bias and disparity in the experts' opinions. One way to minimize this is to make use of argumentation techniques. However, this is a topic for further research, and the needs and requirements for such a framework can only be understood after performing PCMM at an initial stage. In this work, a systematic scheme for evidence classification and organization is incorporated to support scoring.

An extension of this work involves the formalization of PCMM as a decision model using argumentation theory and Bayesian networks; Ref. [50] demonstrates a quantitative approach for maturity assessment using a case study of CTF.

The predictive capability maturity model is focused not just on the results of the M & S activities but on the quality and rigor of the different processes (VVUQ) used to enhance confidence in a simulation tool for a specific application. Assessment using PCMM is based on heterogeneous data from different M & S activities and is therefore qualitative in nature. The framework presented here for assessing the predictive capability and maturity of CASL VERA—CS can be utilized by researchers and code developers seeking to assess other M & S codes for other problems. In particular, the authors believe that there is a need to formalize such a framework to address the widespread subjectivity and fallacious “appeal to authority” arguments for asserting M & S code adequacy. Of particular importance are the identification and decomposition of an intended problem of interest and the alignment of evidence and decision attributes for asserting maturity. Furthermore, the generation, documentation, and archival of objective evidence of maturity is a critical prerequisite for this process. The alignment of the problem of interest and the evidence is accomplished in the present methodology via the PIRT process and the mapping of evidence to PCMM attributes. The PCMM matrix uses high-level criteria for the assessment of each PCMM attribute; however, a comprehensive in-depth assessment requires further details regarding the relevant subattributes. A hierarchical model may be suitable for this purpose, but the qualitative nature of PCMM makes a hierarchy difficult to adopt. Thus, the assessment methodology presented here can serve as a starting point for developing such a framework.

Acknowledgment

The work was performed with support from the U.S. Department of Energy (DOE) via the Consortium for Advanced Simulation of Light Water Reactors (CASL) under Contract No. DEAC05-00OR22725. The authors would like to express their gratitude to the CASL code team, CASL leadership team, and CASL council for providing the information, data, and evidence that form the basis of the CASL VERA assessment. The authors are also grateful to the reviewers for their insightful comments and suggestions for improving the quality of this paper.

Funding Data

  • U.S. Department of Energy (Grant No.: DEAC05-00OR22725; Funder ID: 10.13039/100000015).

Nomenclature

Abbreviation

  • CASL = consortium for advanced simulation of light water reactors
  • CIPS = CRUD induced power shift
  • CP = challenge problem
  • CRUD = chalk river unidentified deposits
  • CSAU = code scaling, applicability, and uncertainty
  • CVER = code verification
  • DNB = departure from nucleate boiling
  • IVAL = integral effect test validation
  • MAMBA = MPO advanced model for boron analysis code
  • MPACT = Michigan parallel characteristics transport code
  • PCI = pellet clad interactions
  • PCMM = predictive capability maturity model
  • PIRT = phenomena identification and ranking table
  • PMMF = physics and material model fidelity (a PCMM attribute)
  • QOI = quantity of interest
  • RGF = representation and geometric fidelity (a PCMM attribute)
  • SQA = software quality assurance
  • SQE = software quality engineering
  • SVAL = separate effect test validation
  • SVER = solution verification
  • U.S. NRC = United States Nuclear Regulatory Commission
  • VERA—CS = virtual environment for reactor applications code suite
  • VVUQ = verification, validation, and uncertainty quantification

References

1.
CSAU
,
1989
, “
Quantifying Reactor Safety Margins: Application of Code Scaling Applicability and Uncertainty Evaluation Methodology to a Large-Break Loss-of-Coolant Accident
,” U.S. Nuclear Regulatory Commission, Office of Nuclear Reactor Regulation, Rockville, MD, Standard No.
NUREG/CR-5249
.https://www.nrc.gov/docs/ML0303/ML030380473.pdf
2.
EMDAP
,
2005
, “
Regulatory Guide 1.203: Transient and Accident Analysis Methods
,” U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Rockville, MD.
3.
Oberkampf
,
W. L.
,
Pilch
,
M.
, and
Trucano
,
T. G.
,
2007
, Predictive Capability Maturity Model for Computational Modeling and Simulation,
Sandia National Laboratories
,
Albuquerque, NM
, Stadnard No.
SAND2007-5948
.https://cfwebprod.sandia.gov/cfdocs/CompResearch/docs/Oberkampf-Pilch-Trucano-SAND2007-5948.pdf
4.
Turinsky
,
P. J.
, and
Kothe
,
D. B.
,
2016
, “
Modeling and Simulation Challenges Pursued by the Consortium for Advanced Simulation of Light Water Reactors (CASL)
,”
J. Comput. Phys.
,
313
, pp.
367
376
.10.1016/j.jcp.2016.02.043
5.
Rider
,
J.
,
Kamm
,
J. R.
,
Weirs
,
V. G.
, and
Cacui
,
D. G.
,
2010
,
SAND2010-234P: Verification, Validation and Uncertainty Quantification Workflow in CASL
,
Sandia National Laboratories
,
Albuquerque, NM
.
6.
Turner
,
J. A.
,
Clarno
,
K.
,
Sieger
,
M.
,
Bartlett
,
R.
,
Collins
,
B.
,
Pawlowski
,
R.
,
Schmidt
,
R.
, and
Summers
,
R.
,
2016
, “
The Virtual Environment for Reactor Applications (VERA): Design and Architecture
,”
J. Comput. Phys.
,
326
, pp.
544
568
.10.1016/j.jcp.2016.09.003
7.
Downar
,
T.
,
2018
, “
Update MPACT Documentation – Theory Manual, V&V Documentation
,” Consortium for Advanced Simulation of Light Water Reactors, CASL Report, Oak Ridge, TN, Report No. CASL-U-2018-1641-000.
8.
Downar
,
T.
,
Kochunas
,
B.
, and
Collins
,
B.
,
2015
, “
MPACT Verification and Validation: Status and Plans
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2015-0134-000.
9.
Downar
,
T.
,
Kochunas
,
B.
, and
Collins
,
B.
,
2017
, “
MPACT Verification and Validation Manual (Rev 2)
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2016-1199-000.
10.
Downar
,
T.
,
Kochunas
,
B.
,
Liu
,
Y.
,
Collins
,
B.
, and
Stimpson
,
S.
,
2018
, “
MPACT Verification and Validation Manual (Rev 4)
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2018-1641-000.
11.
Kochunas
,
B.
,
2019
, “
MPACT Software Test Plan, Requirements, and Test Report
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2019-1858-000.
12.
Pandya
,
T. M.
,
Davidson
,
G. G.
,
Evans
,
T. M.
, and
Johnson
,
S. R.
,
2016
, “
Shift Validation Plan for CASL
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2016-1186-000.
13.
Pilch
,
M.
,
Wang
,
J.
,
Martin
,
W.
,
Downar
,
T.
,
Kochunas
,
B.
,
Andrews
,
N.
, and
Gilkey
,
L.
,
2019
, “
MPACT Code Verification and Solution Verification
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2019-1935-000.
14.
Avramova
,
M. N.
,
2019
,
CTF 4.0 User's Manual
,
Oak Ridge National Laboratory
,
Oak Ridge, TN
.
15.
Avramova
,
M. N.
,
2016
, “
CTF User's Manual
,” North Carolina State University, Raleigh, NC, Report No. CASL-U-2016-1111-000.
16.
Pilch
,
M.
, and
Salko
,
R.
,
2019
, “
CTF Code Verification and Solution Verification
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-X-201X-0XXX-000.
17.
Porter
,
N.
,
Mousseau
,
V.
, and
Salko
,
R. K.
, January
2017
, “
V&V for Residual Formulation CTF
,” CASL Verification Workshop, Oak National Laboratory, Oak Ridge, TN.
18.
Salko
,
R. K.
,
2019
,
CTF 4.0 Validation and Verification
,
Oak Ridge National Laboratory
,
Oak Ridge, TN
.
19.
Salko
,
R. K.
,
2016
,
CTF Validation and Verification
,
Penn State University
,
Pennsylvania, PA
.
20.
Salko
,
R. K.
, and
Avramova
,
M. N.
,
2019
,
CTF 4.0 Theory Manual
,
Oak Ridge National Laboratory
,
Oak Ridge, TN
.
21.
Salko
,
R. K.
, and
Avramova
,
M. N.
,
2016
,
CTF Theory Manual
,
North Carolina State University
,
Raleigh, NC
.
22.
Salko
,
R. K.
,
Delchini
,
M. O.
,
Zhao
,
X.
,
Pointer
,
D.
, and
Gurecky
,
W.
,
2017
, “
Summary of CTF Accuracy and Fidelity Improvements in FY17
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2017-1428-000.
23.
Toptan
,
A.
,
Porter
,
N. W.
,
Salko
,
R. K.
, and
Avramova
,
M. N.
,
2018
, “
Implementation and Assessment of Wall Friction Models for LWR Core Analysis
,”
Ann Nucl Energy
,
115
, pp.
565
572
.10.1016/j.anucene.2018.02.022
24.
Wysocki
,
A.
,
Hu
,
J.
,
Salko
,
R.
, and
Kochunas
,
B.
, April 30,
2018
, “
Improvement of CTF for RIA Analyses
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2018-1608-000.
25.
Anderson
,
D.
, and
Kendrick
,
B.
,
2016
, “MAMBA Validation and Verification Plan,” CASL, Oak Ridge, TN, Report No. CASL-I-2016-1132-000.
26.
Kendrick
,
B.
, and
Barber
,
J.
, October 08,
2012
, “
Initial Validation and Benchmark Study of 3D MAMBA v2.0 Against the Walt Loop Experiment and BOA 3.0
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-I-2012-0238-000 (L2:MPO.CRUD.P5.02).
27.
Okhyusen
,
2018
, “MAMBA Theory Manual,” CASL, Oak Ridge, TN, Report No. CASL-I-2018-1975-000.
28.
Hales
,
J.
,
2017
, “
Bison Verification and Validation Update
,” CASL Verification Workshop, Oak National Laboratory, Oak Ridge, TN.
29.
PItts
,
2018
, “
Bison Documentation Expansion for Tensor Mechanics and Layered 1D Capabilities
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2018-1644-000.
30.
Williamson
,
R.
,
2016
, “
BISON Verification and Validation Plan for LWR Fuel: Status and Plans
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-X-2016-1060-000.
31.
Pointer
,
W. D.
,
2016
, “
Star CCM+ Verification and Validation Plan
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2016-1198-000.
32.
CASL,
2014
, “
Phase 2 Proposal: The Consortium for Advanced Simulation of Light Water Reactors (CASL)
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-I-2014-0046-000.
33.
Karoutas
,
Z. E.
,
2010
, “
Challenge Problem Technical Specification
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-I-2010-0009-000.
34.
Jina
,
M.
, and
Short
,
M.
,
2014
, “
L3: MPO.CRUD.P8.02 Two-Phase Fluid Flow Modeling in CRUD Using MAMBA-BDM
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2014-0143-000.
35.
Simon
,
H.
,
1962
, “
The Architecture of Complexity
,”
Proc. Am. Philos. Soc.
,
106
(
6
), pp.
467
482
.
36.
Aksan
,
S. R.
,
Bessette
,
D.
,
Brittain
,
I.
,
D'Auria
,
F. S.
,
Gruber
,
P.
,
Holmstrom
,
H. L. O.
,
Landry
,
R.
,
Naff
,
S.
,
Pochard
,
R.
, and
Preusche
,
G.
,
1987
, “
CSNI Code Validation Matrix of Thermo-Hydraulic Codes for LWR LOCA and Transients
,” Committee on the Safety of Nuclear Installations, OECD Nuclear Energy Agency, Paris, France.
37.
Lewis
,
M. J.
,
Pochard
,
R.
,
D'Auria
,
F. S.
,
Karwat
,
H.
,
Wolfert
,
K.
,
Yadigaroglu
,
G.
, and
Holmstrom
,
H. L. O.
,
1989
, “
Thermohydraulics of Emergency Core Cooling in Light Water Reactors-A State-of-the-Art Report. CSNI Report
,” Committee on the Safety of Nuclear Installations, OECD Nuclear Energy Agency, Paris, France.
38.
Boyack
,
B.
,
Duffey
,
R.
,
Wilson
,
G.
,
Griffith
,
P.
,
Lellouche
,
G.
,
Levy
,
S.
,
Rohatgi
,
U.
,
Wulff
,
W.
, and
Zuber
,
N.
,
1989
, “
NUREG/CR-5249: Quantifying Reactor Safety Margins: Application of Code Scaling, Applicability, and Uncertainty Evaluation Methodology to a Large-Break, Loss-of-Coolant Accident
,” Nuclear Regulatory Commission, Rockville, MD.
39.
Griffiths
,
M.
,
Schlegel
,
J. P.
,
Hibiki
,
T.
,
Ishii
,
M.
,
Kinoshita
,
I.
, and
Yoshida
,
Y.
,
2014
, “
Phenomena Identification and Ranking Table for Thermal-Hydraulic Phenomena During a Small-Break LOCA With Loss of High Pressure Injection
,”
Prog. Nucl. Energy
,
73
, pp.
51
63
.10.1016/j.pnucene.2014.01.008
40.
Olivier
,
T. J.
, and
Nowlen
,
S.
,
2008
, “
A Phenomena Identification and Ranking Table (PIRT) Exercise for Nuclear Power Plant Fire Modeling Applications
,” U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Rockville, MD.
41.
Diamond
,
D.
,
Edgar
,
C.
,
Fratoni
,
M.
,
Gougar
,
H.
,
Hawari
,
A.
,
Hu
,
J.
,
Hudson
,
N.
,
Llas
,
D.
,
Maldonado
,
I.
,
Petrovic
,
B.
,
Rahnema
,
F.
,
Serghiuta
,
D.
, and
Zhang
,
D.
,
2016
, “
Phenomena Identification and Ranking Tables (PIRT) Report for Fluoride High-Temperature Reactor (FHR) Neutronics
,” U. S. Department of Energy, N.E.U.P, Idaho Falls, ID, Report No. CRMP-2016-08-001.
42.
Wilson
,
G. E.
, and
Boyack
,
B. E.
,
1998
, “
The Role of the PIRT Process in Experiments, Code Development and Code Applications Associated With Reactor Safety Analysis
,”
Nucl. Eng. Des.
,
186
(
1–2
), pp.
23
37
.10.1016/S0029-5493(98)00216-7
43.
Oberkampf
,
W. L.
, and
Roy
,
C. J.
,
2010
,
Verification and Validation in Scientific Computing
,
Cambridge University Press
,
Cambridge, UK
.
44.
Wang
,
R. Y.
, and
Strong
,
D. M.
,
1996
, “
Beyond Accuracy: What Data Quality Means to Data Consumers
,”
J. Manage. Inf. Syst.
,
12
(
4
), pp.
5
33
.10.1080/07421222.1996.11518099
45.
Oberkampf
,
W. L.
, and
Trucano
,
T. G.
,
2002
, “
Verification and Validation in Computational Fluid Dynamics
,”
Prog. Aerosp. Sci.
,
38
(
3
), pp.
209
272
.10.1016/S0376-0421(02)00005-2
46.
Jones
,
C.
,
Mattie
,
P.
,
Dinh
,
N.
,
Athe
,
P.
, and
Moore
,
L.
,
2019
, “
Updated Verification and Validation Assessment for VERA
,” Consortium for Advanced Simulation of Light Water Reactors, CASL, Oak Ridge, TN, Report No. CASL-U-2019-1864-000.
47.
Toulmin
,
S. E.
,
2003
,
The Uses of Arguments
,
Cambridge University Press
,
Cambridge, UK
.
48.
Kelly
,
T. P.
,
1999
,
Arguing Safety: A Systematic Approach to Managing Safety Cases
,
University of York
,
York, UK
.
49.
Ganapol
,
B.
,
2008
, “
Analytical Benchmarks for Nuclear Engineering Applications
,” Case Studies in Neutron Transport Theory, Nuclear Energy Agency (NEA), Organisation for Economic Co-operation and Development (OECD), Paris, France.
50.
Athe
,
P.
, and
Dinh
,
N.
,
2019
, “
A Framework for Assessment of Predictive Capability Maturity and Its Application in Nuclear Thermal Hydraulics
,”
Nucl. Eng. Des.
,
354
, p.
110201
.10.1016/j.nucengdes.2019.110201
51.
Salko
,
R. K.
,
2016
,
CTF Validation and Verification
,
Penn State University
,
University Park, PA
.
52.
Palmtag
,
S.
,
2016
, “Investigation of Thermal Expansion Effects in MPACT,” CASL, Oak Ridge, TN, Report No. CASL-U-2016-1015-000.
53.
Salko
,
R.
,
Slattery
,
S.
,
Lange
,
T.
,
Delchini
,
M.-O.
,
Gurecky
,
W.
,
Tatli
,
E.
, and
Collins
,
B.
,
2018
, “
Development of Preliminary VERA—CS Crud-Induced Localized Corrosion Modeling Capability (Milestone: L2:PHI.P17.03)
,” CASL, Oak Ridge, TN, Report No. CASL-U-2018-1617-000.
54.
Godfrey
,
A. T.
,
Collins
,
B. S.
,
Gentry
,
C. A.
,
Stimpson
,
S. G.
, and
Ritchie
,
J. A.
,
2017
, “
Watts Bar Unit 2 Startup Results With VERA
,” CASL, Oak Ridge, TN, Report No. CASL-U-2017-1306-000.
55.
Collins
,
B.
,
Gurecky
,
W.
,
Elliott
,
A.
,
Lindsay
,
G.
,
Coleman
,
K.
,
Smith
,
R.
, and
Andersson
,
D.
,
2019
, “
Inference of Crud Model Parameters From Plant Data
,” CASL, Oak Ridge, TN, Report No. Milestone FY19.CASL.005.
56.
Anderson
,
D.
,
2019
,
MAMBA Code and Solution Verification Status Report
,
Los Alamos National Laboratory
,
Los Alamos, NM
.
57.
Rizk
,
J.
,
Wirth
,
B. D.
, and
McMurray
,
J.
,
2019
, “
CALPHAD Modeling of PWR CRUD Internal Chemistry
,” CASL, Oak Ridge, TN, Report No. CASL-U-2019-4026-000.
58.
Williamson
,
R. L.
,
Gamble
,
K. A.
,
Perez
,
D. M.
,
Novascone
,
S. R.
,
Pastore
,
G.
,
Gardner
,
R. J.
,
Hales
,
J. D.
,
Liu
,
W.
, and
Mai
,
A.
,
2016
, “
Validating the BISON Fuel Performance Code to Integral LWR Experiments
,”
Nucl. Eng. Des.
,
301
, pp.
232
244
.10.1016/j.nucengdes.2016.02.020
59.
Williamson
,
R. L.
,
Pastore
,
G.
,
Gardner
,
R. J.
,
Gamble
,
K. A.
,
Novascone
,
S.
,
Tompkins
,
J.
, and
Liu
,
W.
,
2019
, “
LOCA Challenge Problem Final Report
,” CASL, Oak Ridge, TN, Report No. CASL-U-2019-1856-000.
60.
Hamilton
,
S.
,
Berrilla
,
M. A.
,
Clarno
,
K. T.
, and
Pawlowski
,
R. P.
,
2014
, “
An Assessment of Coupling Algorithms for Nuclear Reactor Core Physics Simulations
,” CASL, Oak Ridge, TN, Report No. CASL-U-2014-0149-000-b.
61.
Berrilla
,
M. A.
,
Clarno
,
K. T.
, and
Hamilton
,
S. P.
,
2014
, “
Evaluation of Coupling Approaches
,” CASL, Oak Ridge, TN, Report No. CASL-U-2014-0081-000.
62.
Clarno
,
K. T.
, and
Pawlowski
,
R. P.
,
2014
, “
Incorporate MPACT Into TIAMAT and Demonstrate Pellet-Clad Interaction (PCI) Calculations
,” CASL, Oak Ridge, TN, Report No. CASL-U-2015-0022-000.
63.
Godfrey
,
A. T.
,
Collins
,
B. S.
,
Kim
,
K. S.
,
Lee
,
R.
,
Powers
,
J.
,
Salko
,
R.
,
Stimpson
,
S. G.
,
Wieselquist
,
W. A.
,
Montgomery
,
R.
,
Montgomery
,
R.
,
Kochunas
,
B.
,
Jabaay
,
D. R.
,
Capps
,
N.
, and
Secker
,
J.
, June 26,
2015
, “
VERA Benchmarking Results for Watts Bar Nuclear Plant
,” CASL, Oak Ridge, TN, Report No. CASL-U-2015-0206-000.
64.
Khuwaileh
,
B. A.
, and
Turinsky
,
P. J.
,
2016
, “
Data Assimilation and Uncertainty Quantification Using VERA—CS for a Core Wide LWR Problem With Depletion (L2:VMA.P12.01)
,” CASL, Oak Ridge, TN, Report No. CASL-U-2016-1054-000.
65.
Hooper
,
R.
,
2016
, “
Initial UQ of CIPS
,” CASL, Oak Ridge, TN, Report No. CASL-U-2016-XXXX-XXX.
66.
Brown
,
C. S.
, and
Zhang
,
H.
,
2016
, “
Uncertainty Quantification and Sensitivity Analysis With CASL Core Simulator VERA—CS
,” Idaho National Laboratory, Idaho Falls, ID, Report No.
INL/JOU-16-38314
.https://www.osti.gov/pages/servlets/purl/1357750
67.
Khalik
,
H. S. A.
,
2013
, “
Uncertainty Quantification and Data Assimilation (UQ/DA) Study on a VERA Core Simulator Component for CRUD Analysis
,” CASL, Oak Ridge, TN, Report No. CASL-U-2013-0184-000.