In the discussion, the definitions of the truth T, experimental result D, modeled solution M, and simulation solution S seem to differ from ours. These key variables were defined using a sequence of initial boundary value problems (IBVPs) summarized in the paper and derived in [13].

The IBVP for T by definition contains no modeling or numerical errors. Approximate solutions for T are provided by experimental, analytical, and simulation methods. Experimental methods use measurement systems and data acquisition and reduction procedures to provide D with error $\delta_D = D - T$. Analytical and simulation methods reformulate the IBVP for T using approximate models for the partial differential equation operators and initial and/or boundary conditions. Analytical methods solve the IBVP for M, and hence the modeling error $\delta_{SM} = M - T$, exactly, and thus are limited to simple fluid mechanics problems. The continuous IBVP for M is reduced to a discrete IBVP for S, which is solved by the CFD computer code, introducing additional numerical errors. The numerical error $\delta_{SN} = S - M$ is defined by transforming the discrete IBVP back to a continuous IBVP. As this shows, we believe that D, M, and S inherently have errors, which are estimated using experimental uncertainty analysis and verification and validation (V&V) methodologies and procedures, respectively.
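As a numerical illustration of this error decomposition, the following sketch uses invented values (in practice the truth T is of course unknown and only the errors can be estimated):

```python
# Hypothetical values for illustration only; T is unknown in practice.
T = 1.000   # truth: exact solution of the unmodeled IBVP
D = 1.020   # experimental result
M = 0.990   # exact (analytical) solution of the modeled IBVP
S = 0.985   # discrete simulation solution of the modeled IBVP

delta_D  = D - T   # experimental error
delta_SM = M - T   # simulation modeling error
delta_SN = S - M   # simulation numerical error
delta_S  = S - T   # total simulation error

# The total simulation error decomposes into modeling plus numerical parts:
# delta_S = delta_SM + delta_SN.
assert abs(delta_S - (delta_SM + delta_SN)) < 1e-12
```

The final assertion simply restates the identity $(S - T) = (M - T) + (S - M)$, which holds for any values of T, M, and S.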

Response to criticism (1)

The focus of our paper is on V&V methodology and procedures for CFD simulations with an already developed CFD code. It is implicitly assumed that code verification and software quality assurance issues have already been addressed during code development.

Response to criticism (2)

T in Eq. (10) has been clearly defined as the truth, which differs from the simulation result S by the simulation error $\delta_S$ and from the experimental result D by the experimental error $\delta_D$. Therefore, we have in no way introduced experimental measurement or simulation modeling error in discussing verification and deriving Eq. (10).
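Written out under these definitions (a restatement, not a new result), the truth cancels from any comparison of D and S:

```latex
D = T + \delta_D, \qquad S = T + \delta_S
\quad\Longrightarrow\quad
D - S = \delta_D - \delta_S ,
```

so the difference between experiment and simulation involves only the errors, never T itself.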

Response to criticism (3)

The experimental result D in Eqs. (13), (14), and (18) has been clearly defined as the experimental result with error $\delta_D$ and associated uncertainty $U_D$. D is not an individual measurement, but is based on appropriate averaging. S is the simulation result with simulation error $\delta_S$ and associated uncertainty $U_S$, obtained from the addition of the numerical and modeling errors and the root-sum-square of the numerical and modeling uncertainties, as defined by Eqs. (1) and (2), respectively. S is based on iterative and input-parameter convergence studies using multiple solutions and systematic parameter refinement; the value used is usually that for the finest value of the input parameter.

The numerical and modeling error and uncertainty estimates are not arbitrary: the V&V procedures described in our paper provide quantitative estimates for the levels of numerical and modeling errors and uncertainties. We have not used application requirements in defining validation, but rather used our validation definition to assess application requirements. The level of validation is important in that it determines one’s ability to discriminate among modeling assumptions/approaches and to judge whether a particular application requirement has been met. We have already addressed in our paper the issues that the validation uncertainty excludes modeling assumption uncertainty and that “noisy” data and solutions are easier to validate, and have no further comment.
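A minimal sketch of the resulting validation comparison, assuming (as one common reading of this type of methodology, with hypothetical values and function names) that a validation uncertainty is formed by root-sum-square of the experimental and simulation numerical uncertainties and compared against |D - S|:

```python
import math

def validation_uncertainty(U_D, U_SN):
    """Root-sum-square combination of experimental uncertainty U_D
    and simulation numerical uncertainty U_SN (hypothetical sketch)."""
    return math.sqrt(U_D**2 + U_SN**2)

def is_validated(D, S, U_V):
    """Validated at the U_V level when the comparison error |D - S|
    falls within the validation uncertainty."""
    return abs(D - S) < U_V

# Hypothetical values for illustration only.
U_V = validation_uncertainty(U_D=0.02, U_SN=0.01)   # ~0.0224
print(is_validated(D=1.02, S=1.00, U_V=U_V))        # |D - S| = 0.02 < U_V
```

The point of the sketch is the last sentence of the response: validation is a quantitative comparison at a determinable level U_V, not an appeal to application requirements.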