Abstract

In recent years, identification of nonlinear dynamical systems from data has become increasingly popular. Sparse regression approaches, such as sparse identification of nonlinear dynamics (SINDy), fostered the development of novel governing equation identification algorithms assuming the state variables are known a priori and the governing equations lend themselves to sparse, linear expansions in a (nonlinear) basis of the state variables. In the context of the identification of governing equations of nonlinear dynamical systems, one faces the problem of identifiability of model parameters when state measurements are corrupted by noise. Measurement noise affects the stability of the recovery process yielding incorrect sparsity patterns and inaccurate estimation of coefficients of the governing equations. In this work, we investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements and numerically estimate the state-time derivatives to improve the accuracy and robustness of two sparse regression methods to recover governing equations: sequentially thresholded least squares (STLS) and weighted basis pursuit denoising (WBPDN) algorithms. We empirically show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point. We additionally compare generalized cross-validation (GCV) and Pareto curve criteria as model selection techniques to automatically estimate near optimal tuning parameters and conclude that Pareto curves yield better results. The performance of the denoising strategies and sparse regression methods is empirically evaluated through well-known benchmark problems of nonlinear dynamical systems.

1 Introduction

Engineers rely heavily on mathematical models of real systems. In the design process, engineers seek the best models that describe the physical system they aim to design and optimize. However, physical processes are often highly complex and inadequately understood, and the sources of noise depend critically on the nature of the nonlinearities. System identification allows any system, however complex or obscure, to be modeled solely from measurements.

Recent advances in data acquisition systems along with modern data science techniques have fostered the development of accurate data-driven approaches [1] for modeling of complex nonlinear dynamical systems with the aim of understanding their behavior and predicting future states. A current trend to identify dynamical system models relies on approximating the governing equations in an over-complete basis of the state variables and eliminating the expansion terms that do not carry significant signal energy. Examples of this approach include polynomial NARMAX [2], symbolic polynomial regression [3,4], and sparse (polynomial) regression [5,6] dubbed sparse identification of nonlinear dynamics (SINDy) in Ref. [6].

System identification via sparse (polynomial) regression employs techniques from compressed sensing—specifically regularization via sparsity promoting norms, such as ℓ0- and ℓ1-norms, to identify an a priori unknown subset of the basis describing the dynamics. The identified dynamics may then be further analyzed to understand the physical behavior of the dynamical system and can be integrated in time to predict future states of the system. Stage-wise regression [7], matching pursuits [8–10], least absolute shrinkage and selection operator (LASSO) [11], least angle regression (LARS) [12], sequentially thresholded least squares (STLS) [6], and basis pursuit denoising (BPDN) [13] are some sparsity promoting algorithms that may be used for model recovery. For the interested reader, Ref. [14] provides a thorough comparison of different state-of-the-art sparse regression algorithms. For high-dimensional systems and large data sets, sparse regression methods often suffer from the curse of dimensionality since they can be prohibitively expensive in terms of computational cost and storage capacity. Even though this work explores tractable low-dimensional systems, several methods have been proposed to mitigate the curse of dimensionality for high-dimensional systems such as coordinate reduction via auto-encoders [15,16], or efficient array manipulation via low-rank tensor decompositions [17].

In practice, analysts only have access to state measurements from real sensors that are invariably corrupted by noise. Although sparse regression methods perform well for low levels of measurement noise, the sparsity pattern and accuracy of the recovered coefficients are usually very sensitive to higher noise levels. Several techniques have been proposed to mitigate the effect of noise when recovering governing equations from state measurements. Denoising strategies for governing equation recovery can be divided into two categories: simultaneous denoising and equation recovery, and a priori state denoising. The former strategy generally involves nonlinear optimization procedures where the loss function usually contains a measurement data loss, which measures the mismatch between state measurements and state predictions, and a state derivative error, which measures the deviation between the state-time derivatives and the estimated governing equation. Some of the simultaneous methods employ neural networks [18] and automatic differentiation [19] to decompose the noisy measurements into deterministic and random noise components so as to recover the governing equations more accurately. Others use smoothing splines [20] and deep neural networks [21] as a surrogate model to approximate the state variables for the data loss, which can then be differentiated to obtain the state-time derivatives required for the state derivative error. However, simultaneous methods often require solving challenging nonlinear optimization problems and fine tuning of the hyperparameters of the algorithm. The a priori denoising strategy, adopted in this article, focuses on estimating accurate and smooth state measurements and their derivatives as a preprocessing step for sparse regression algorithms, producing significant improvements in the accuracy of the identified dynamics. We adopt the SINDy framework, as proposed in Ref. [6], and compare different approaches to denoise the state measurements and estimate accurate time derivatives with no prior knowledge of the process that corrupted the measurements. Typically, numerical differentiation is carried out using finite differences. However, since data measured by digital equipment are inherently discrete and contain noise due to device properties and quantization effects, naive finite differences produce inaccurate derivative estimates [22–24]. A common approach to obtain derivative estimates is to approximate the underlying signal by a fitting function of the measured data. One then hopes that the derivative of this smooth approximation is sufficiently accurate once the noise has been filtered out.

There exists extensive work on nonparametric regression smoothing and numerical differentiation of noisy data [24–30], also referred to as denoising techniques in this article. The underlying assumption of nonparametric smoothing techniques is that the signal is smooth and the derivatives up to a certain order exist. Then, to filter out undesired noise components from the signal, one poses the smoothing task as a tradeoff between the accuracy and the smoothness of the filtered solution. Denoising strategies can be split into local and global techniques. Local denoising methods fit a low-degree polynomial to the nearest neighbors of each data point with a weight proportional to the inverse distance from the neighbor to the data point [28]. Global denoising methods use the entire data set to estimate a denoised version of the original data. Some of the common denoising techniques include local kernel smoothers [31,32], smoothing splines [33], Tikhonov regularization [34,35], total variation regularization [36,37], Fourier-based smoothing [29,38], wavelet-based denoising [39,40], or singular spectrum analysis [41,42]. More recent trends use deep neural networks for signal denoising [43–45]. However, these methods often require large amounts of data and several trajectory realizations, making them unsuitable for the present application, where we focus on a single trajectory. This article compares the performance of the Savitzky–Golay filter and LOWESS as local methods, and smoothing splines, the Tikhonov smoother, and ℓ1-trend filtering as global smoothers.

A key challenge of smoothing methods is the selection of the smoothing hyperparameter that controls the fidelity of the filtered solution, relative to the original, noisy measurements, and its smoothness after removing undesired (high-frequency) noise components. In the statistics literature, the compromise between accuracy and smoothness is often referred to as the bias–variance tradeoff [46]. In general, the filtered solution is highly sensitive to the smoothing hyperparameter, and a suitable selection criterion is of paramount importance. Several methods have been proposed to select appropriate smoothing hyperparameters, such as the Akaike information criterion (AIC) [47], the Bayesian information criterion (BIC) [48], Mallow’s Cp [49], the discrepancy principle [50,51], cross-validation (CV) [52,53], and the Pareto curve (also L-curve) criterion [54,55], among others. An ideal model selection criterion should yield a solution as close as possible to the unknown dynamical process that generates the exact (i.e., noiseless) data. Hence, the selection of the smoothing algorithm and of the model selection criterion are inherently linked and should be studied simultaneously to select a smoothing parameter that is near optimal with respect to a suitable error measure. Due to their favorable properties, this article focuses on two robust model selection criteria: generalized cross-validation (GCV) and the Pareto curve criterion.

1.1 Contribution of this Work.

The purpose of this article is to present an overview of major techniques for measurement denoising and numerical differentiation as a preprocessing step to recover the governing equations of nonlinear dynamical systems. Our aim is to provide an empirical assessment of several best-performing filtering techniques for system identification rather than delivering a thorough theoretical analysis.

In this work, we study and assess the performance of local and global filtering regression techniques to denoise state measurements and accurately estimate state-time derivatives. To the best of our knowledge, no prior work has been done on both denoising and numerical differentiation tailored to data-driven governing equation recovery. We extend the findings in Refs. [24,28,29,56,57] and apply smoothing techniques to dynamical systems, where the state trajectories are assumed to be smooth on the Poincaré map. One of the critical aspects of filtering and regularization techniques is the automatic selection of a suitable hyperparameter that controls the smoothness, for filtering, and the sparsity, for equation recovery, of the solution. We also compare simple and robust model selection techniques to automatically select appropriate smoothing and regularization parameters. Although this article focuses on the differential form of sparse identification algorithms, the findings can also be applied to any identification algorithm that requires accurate state measurement data and their time derivatives. For example, the smoothed data could be fed into equation-error methods, typical in aircraft system identification [58,59].

We begin, in the next section, by presenting a background on recovering dynamical system equations from state measurements using sparsity promoting regression techniques. In Sec. 3, we present an overview of different local and global smoothing techniques to denoise state measurements and estimate accurate state-time derivatives. In Sec. 4, we introduce two hyperparameter selection techniques, namely generalized cross-validation and Pareto curve, to automatically select near optimal regularization parameters for smoothing and sparse regression methods. In Sec. 5, we compare the performance of the different smoothing, numerical differentiation, and model selection techniques and assess the performance of sparse regression methods to recover the governing equations of three well-known dynamical systems. Finally, in Sec. 6, we draw conclusions and discuss relevant aspects of the various smoothing methods and offer directions for improvements of denoising techniques.

2 Problem Statement and Solution Strategies

Throughout this work, we assume that a dynamical system has the form
$$\dot{\mathbf{x}}(t) = \frac{d\mathbf{x}(t)}{dt} = \mathbf{f}(\mathbf{x}(t)), \qquad \mathbf{x}(0) = \mathbf{x}_0$$
(1)
where $\mathbf{x}(t) \in \mathbb{R}^n$ are the known and measurable state variables of the system at time t ∈ [0, T] and $\mathbf{f}(\mathbf{x}(t)): \mathbb{R}^n \to \mathbb{R}^n$ is an unknown state-dependent vector field that describes the motion of the system. An important observation is that in many systems f(x) := f(x(t)) is a simple function of the state variables x := x(t) in that only a small set of state-dependent quantities contribute to the dynamics.
Given that f(x) is unknown and following Refs. [5,6], we assume that each state dynamics $\dot{x}_j := \dot{x}_j(t)$ or, equivalently, $f_j(\mathbf{x})$, $j = 1, \ldots, n$, is spanned by a set of p candidate nonlinear (in the state variables) basis functions $\phi_i(\mathbf{x})$ weighted by unknown coefficients $\xi_{ji}$, i.e.,
$$\dot{x}_j = \sum_{i=1}^{p} \xi_{ji}\,\phi_i(\mathbf{x}), \qquad j = 1, \ldots, n$$
(2)
We assume that the true dynamics may be described by only a subset of the considered basis {ϕi(x)}, and thus the coefficients ξji are sparse. Exploiting this sparsity in identifying the governing equations f(x) is the key idea behind SINDy algorithms [6].
To determine f(x) via Eq. (2), we must numerically estimate the state-time derivatives at each time instance $t_i$ from discrete state observations $\mathbf{y}_j \in \mathbb{R}^m$. In practice, exact state measurements are not available and we only have access to discrete noisy state observations. In this article, we assume a signal-plus-noise measurement model of the form
$$y_j^{(i)} = x_j(t_i) + \epsilon_j, \qquad j = 1, \ldots, n$$
(3)
where $i = 1, \ldots, m$, $m$ is the number of measurements sampled at a rate $\Delta t$, $x_j(t_i)$ is the $j$th state variable evaluated at the time instances $\{t_i\}_{i=1}^{m}$, and we assume that the random perturbations $\epsilon_j$ are i.i.d. with zero mean $\mathbb{E}[\epsilon_j] = 0$ and constant variance $\mathbb{V}[\epsilon_j] = \sigma^2$. Hence, the discretized system given by Eq. (2) may be written in matrix form as
$$\dot{\mathbf{y}}_j = \mathbf{\Phi}(\mathbf{y})\,\hat{\boldsymbol{\xi}}_j, \qquad j = 1, \ldots, n$$
(4)
where
$$\dot{\mathbf{y}}_j = [\dot{y}_j(t_1), \dot{y}_j(t_2), \ldots, \dot{y}_j(t_m)]^T \in \mathbb{R}^m$$

$$\mathbf{\Phi}(\mathbf{y}) = \begin{bmatrix} \phi_1(\mathbf{y}(t_1)) & \phi_2(\mathbf{y}(t_1)) & \cdots & \phi_p(\mathbf{y}(t_1)) \\ \phi_1(\mathbf{y}(t_2)) & \phi_2(\mathbf{y}(t_2)) & \cdots & \phi_p(\mathbf{y}(t_2)) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(\mathbf{y}(t_m)) & \phi_2(\mathbf{y}(t_m)) & \cdots & \phi_p(\mathbf{y}(t_m)) \end{bmatrix} \in \mathbb{R}^{m \times p}$$

and

$$\hat{\boldsymbol{\xi}}_j = [\hat{\xi}_{1j}, \hat{\xi}_{2j}, \ldots, \hat{\xi}_{pj}]^T \in \mathbb{R}^p$$
Noise introduces perturbations on the basis functions {ϕi(x)} and complicates the numerical estimation of state-time derivatives. Figure 1 illustrates the effect of measurement noise on the sparse linear system in Eq. (4) we aim to solve. First, noise is amplified in the state-time derivative vector y˙j if numerical differentiation is not carefully performed. Second, the basis functions in the linear expansion in Eq. (4) act as nonlinear transformations of the noise components. For example, the monomial basis function ϕ(x) = x1x2, once evaluated at noisy measurements, yields ϕ(y) = (x1 + ε1)(x2 + ε2) = x1x2 + x1ε2 + x2ε1 + ε1ε2, where the second and third terms are signal-dependent errors, and the last term stems from pure noise errors. The linear system given in Eq. (4) can then be expressed in terms of the unavailable state variables as
$$\dot{\mathbf{x}}_j + \mathbf{e}_j = \left[\mathbf{\Phi}(\mathbf{x}) + \mathbf{E}\right]\boldsymbol{\xi}_j$$
(5)
where $\mathbf{e}_j$ and $\mathbf{E}$ are, implicitly, the vector and matrix errors arising from the numerical estimation of the state-time derivatives and from the evaluation of the noisy measurements through the basis matrix $\mathbf{\Phi}$, respectively. Noise perturbations change the correlation between the state-time derivative vector and the basis functions, making the recovery of the coefficient vector a challenging task. The aim of smoothing and numerical differentiation techniques is to reduce $\mathbf{e}_j$ and $\mathbf{E}$ as much as possible so as to achieve a more accurate and stable recovery of the sparse coefficient vector $\boldsymbol{\xi}_j$. Using a priori denoising methods, we replace $\dot{\mathbf{y}}_j$ and $\mathbf{y}_j$ with denoised versions of the state variables, $\hat{\dot{\mathbf{x}}}_j$ and $\hat{\mathbf{x}}_j$, and the system in Eq. (4) becomes
$$\hat{\dot{\mathbf{x}}}_j = \mathbf{\Phi}(\hat{\mathbf{x}})\,\hat{\boldsymbol{\xi}}_j, \qquad j = 1, \ldots, n$$
(6)
Fig. 1
Schematic of the linear system in Eq. (4). Solid lines represent exact signals, whereas dots are their noisy counterparts. Measurement noise affects the state-time derivative accuracy and perturbs the space spanned by the basis functions represented by the columns of the measurement matrix Φ.
Remark 1

The problem in Eq. (5) can be seen as an errors-in-variables model in which perturbations are present in both the dependent and independent variables. There are specialized methods to solve this problem, such as total least squares [60,61]. Instead, this work focuses on reducing the size of the perturbations via filtering, where no specific structure is assumed in ej and E.

Notation. In the interest of simpler notation, we henceforth drop the subscript j from x˙j and ξj in Eq. (4). Unless otherwise stated, x˙ refers to the measurements of x˙j and not the dynamics x˙ in Eq. (1).

2.1 Sparse Regression.

Dynamical systems arising from physics often contain forcing terms, such as damping, viscous, elastic, or gravitational forces, to name a few, that can be expressed in a basis where only a few terms are active (i.e., nonzero). Other physical systems that satisfy invariances and symmetries are governed by dynamics that also contain few terms of a specific basis. Sparse regression seeks a solution to a linear system of equations in which only a few components of the solution vector are nonzero, thereby avoiding overfitting and the associated degradation of the prediction performance of the identified dynamical model. The sparse regression problem can be mathematically stated as
$$\min_{\boldsymbol{\xi}} \|\boldsymbol{\xi}\|_0 \quad \text{subject to} \quad \|\mathbf{\Phi}(\mathbf{x})\boldsymbol{\xi} - \dot{\mathbf{x}}\|_2 \le \varrho$$
(7)
where $\|\boldsymbol{\xi}\|_0 = \#\{i : \xi_i \neq 0,\ i = 1, \ldots, p\}$ is the ℓ0-pseudonorm, which counts the number of nonzero elements in ξ, and ϱ is a tolerance that controls the size of the residual. The optimization problem in Eq. (7) is known to be an NP-hard combinatorial problem [62]. Several alternatives have been proposed to approximate the solution of Eq. (7) in a computationally tractable manner: from greedy algorithms to convex optimization procedures (see Refs. [14,63] for an overview). In general, the accuracy of the recovered coefficients may strongly depend on the choice of the optimization strategy, especially in large measurement noise regimes [16,64]. In this article, we focus on STLS, used in the original SINDy algorithm [6], and weighted basis pursuit denoising (WBPDN) [64,65].

2.2 Sequential Thresholded Least Squares.

The STLS algorithm [6] iteratively solves a least squares regression problem and hard-thresholds the coefficients to promote sparsity and thereby regularize the regression problem. The procedure is repeated on the nonzero entries of ξ until the solution converges or the algorithm reaches a maximum number of iterations. Specifically, let $\mathcal{S}(\boldsymbol{\xi}) := \{i : \xi_i \neq 0\}$ denote the support of an instance of ξ. At the (k + 1)th iteration of STLS, $\boldsymbol{\xi}^{(k+1)}$ is computed from a least squares problem over $\mathcal{S}(\boldsymbol{\xi}^{(k)})$, and its components with magnitude smaller than some threshold parameter γ > 0 are set to zero
$$(\text{STLS}) \qquad \begin{aligned} \boldsymbol{\xi}^{(k+1)} &\leftarrow \arg\min_{\boldsymbol{\xi}} \left\{ \|\mathbf{\Phi}(\mathbf{x})\boldsymbol{\xi} - \dot{\mathbf{x}}\|_2^2 \;\; \text{subject to} \;\; \mathcal{S}(\boldsymbol{\xi}) = \mathcal{S}(\boldsymbol{\xi}^{(k)}) \right\} \\ \boldsymbol{\xi}^{(k+1)} &\leftarrow \mathcal{T}(\boldsymbol{\xi}^{(k+1)}; \gamma) \end{aligned}$$
(8)
where the thresholding operator T(;γ) is defined as
$$\mathcal{T}_i(\boldsymbol{\xi}; \gamma) = \begin{cases} \xi_i & \text{if } |\xi_i| > \gamma \\ 0 & \text{otherwise} \end{cases} \qquad i = 1, \ldots, p$$
(9)

The sufficient conditions for general convergence, rate of convergence, conditions for one-step recovery, and a recovery result with respect to the condition number and noise are addressed in Ref. [66]. Algorithm 1 summarizes a practical implementation of STLS.

Algorithm 1 Sequentially thresholded least squares (STLS)

 1: procedure STLS(Φ(x), x˙, γ)
 2:    Solve Φ(x)ξ^(0) = x˙ for ξ^(0) using least squares.
 3:    Apply thresholding ξ^(0) ← T(ξ^(0); γ).
 4:    while not converged or k < k_max do
 5:       Delete the columns of Φ(x) whose corresponding ξ^(k) component is 0, keeping only the active columns of Φ(x).
 6:       Solve the reduced system Φ(x)ξ^(k) = x˙ for ξ^(k) using least squares.
 7:       Apply thresholding ξ^(k) ← T(ξ^(k); γ).
 8:       k = k + 1.
 9:    end while
10: end procedure
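For concreteness, the following is a minimal numpy sketch of Algorithm 1; the variable names and the fixed iteration cap are our own choices, not the authors' implementation (which used publicly available matlab codes):

```python
import numpy as np

def stls(Phi, dx, gamma, k_max=10):
    """Sequentially thresholded least squares (a sketch of Algorithm 1).

    Phi   : (m, p) basis matrix evaluated at the (denoised) measurements
    dx    : (m,)   state-time derivative estimates for one state variable
    gamma : hard-thresholding parameter
    """
    xi = np.linalg.lstsq(Phi, dx, rcond=None)[0]   # initial least squares solution
    xi[np.abs(xi) < gamma] = 0.0                   # hard thresholding
    for _ in range(k_max):
        support = np.abs(xi) > 0                   # indices of the retained basis functions
        if not support.any():
            break
        coef = np.linalg.lstsq(Phi[:, support], dx, rcond=None)[0]
        xi = np.zeros_like(xi)
        xi[support] = coef
        xi[np.abs(xi) < gamma] = 0.0               # re-threshold the refitted coefficients
    return xi
```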

2.3 Weighted Basis Pursuit Denoising.

The WBPDN is an iteratively reweighted version of basis pursuit denoising (BPDN) [5] in which the solution at the current iteration is weighted by solution-dependent weights from the previous iteration. To improve the robustness of SINDy with respect to the state and state derivative noise, WBPDN regularizes the regression problem involving Eq. (4) via the weighted ℓ1-norm of ξ
$$\|\mathbf{W}\boldsymbol{\xi}\|_1 = \sum_{i=1}^{p} w_i |\xi_i|$$
(10)
Here, $\mathbf{W} \in \mathbb{R}^{p \times p}$ is a diagonal matrix with diagonal entries wi > 0, i = 1, …, p. WBPDN is inspired by the work in Refs. [67–71] from the statistics, compressed sensing, and function approximation literature, where the weighted ℓ1-norm has been shown to outperform the standard ℓ1-norm in promoting sparsity, especially in the case of noisy measurements or when the solution of interest is not truly sparse, i.e., many entries of ξ are near zero [68–71].
More specifically, we solve the weighted variant of the BPDN problem
$$(\text{WBPDN}) \qquad \min_{\boldsymbol{\xi}} \|\mathbf{\Phi}(\mathbf{x})\boldsymbol{\xi} - \dot{\mathbf{x}}\|_2^2 + \lambda \|\mathbf{W}\boldsymbol{\xi}\|_1$$
(11)
which coincides with the adaptive LASSO approach of Ref. [67]. Depending on the choice of $\mathbf{W}$, $\|\mathbf{W}\boldsymbol{\xi}\|_1$ gives a closer approximation to the ℓ0-norm of ξ than the ℓ1-norm, and thus better enforces sparsity in ξ. The main goal of using a weighted ℓ1-norm—instead of its standard counterpart—is to place a stronger penalty on the coefficients ξi that are anticipated to be small (or zero). As proposed in Refs. [67,68], the weights are set according to
$$w_i^{(k+1)} = \frac{1}{|\xi_i^{(k)}|^q + \upsilon}$$
(12)
where q > 0 represents the strength of the penalization and υ is a small parameter to prevent numerical issues when ξi is zero. In our numerical experiments, we set q = 2 and $\upsilon = 10^{-4}$ as they robustly produce accurate coefficient recovery. The problem in Eq. (11) may be solved via BPDN solvers for standard ℓ1-minimization, such as SparseLab [72], SPGL1 [73], or CVX [74,75] to name a few, with the simple transformations $\tilde{\boldsymbol{\xi}} := \mathbf{W}\boldsymbol{\xi}$ and $\tilde{\mathbf{\Phi}}(\mathbf{x}) = \mathbf{\Phi}(\mathbf{x})\mathbf{W}^{-1}$, i.e.,
$$\min_{\tilde{\boldsymbol{\xi}}} \|\tilde{\mathbf{\Phi}}(\mathbf{x})\tilde{\boldsymbol{\xi}} - \dot{\mathbf{x}}\|_2^2 + \lambda \|\tilde{\boldsymbol{\xi}}\|_1$$
(13)
Given the solution $\tilde{\boldsymbol{\xi}}$ to Eq. (13), ξ is then computed from $\boldsymbol{\xi} = \mathbf{W}^{-1}\tilde{\boldsymbol{\xi}}$. Algorithm 2, adopted from Ref. [68], outlines the steps involved in WBPDN.

Algorithm 2 Iteratively reweighted weighted basis pursuit denoising (WBPDN), adopted from Ref. [68]

 1: procedure WBPDN(Φ(x), x˙, λ, q, υ)
 2:    Set the iteration counter to k = 0 and w_i^(0) = 1, i = 1, …, p.
 3:    while not converged or k < k_max do
 4:       Solve the WBPDN problem in Eq. (11) for a specific λ to obtain ξ^(k).
 5:       Update the weights w_i^(k+1) according to Eq. (12) for i = 1, …, p.
 6:       k = k + 1.
 7:    end while
 8: end procedure
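The following is a minimal sketch of Algorithm 2 using CVXPY as a generic BPDN solver through the variable transformation of Eq. (13); the solver choice, iteration count, and function names are assumptions for illustration rather than the authors' implementation:

```python
import numpy as np
import cvxpy as cp

def wbpdn(Phi, dx, lam, q=2.0, eps=1e-4, k_max=5):
    """Iteratively reweighted BPDN (a sketch of Algorithm 2)."""
    p = Phi.shape[1]
    w = np.ones(p)                            # initial weights w_i^(0) = 1
    xi = np.zeros(p)
    for _ in range(k_max):
        # Solve the standard BPDN problem in the transformed variable z = W xi,
        # with Phi W^{-1} obtained by scaling the columns of Phi (Eq. (13)).
        z = cp.Variable(p)
        objective = cp.sum_squares((Phi / w) @ z - dx) + lam * cp.norm1(z)
        cp.Problem(cp.Minimize(objective)).solve()
        xi = z.value / w                      # map back: xi = W^{-1} z
        w = 1.0 / (np.abs(xi) ** q + eps)     # weight update, Eq. (12)
    return xi
```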

3 Measurement Denoising and Derivative Estimation

In the present work, the aim of denoising techniques is to obtain an approximate model of the state variables that is as accurate as possible with respect to an appropriate error measure while being as smooth as possible. The strategy common to all the methods is to first approximate the state variables x(t) and then estimate the time derivatives by differentiating the approximation. The underlying assumption is that if the approximation $\hat{x}(t)$ is close to the original, unknown state $x(t)$, then the estimated time derivative $\hat{\dot{x}}(t)$ is also close to $\dot{x}(t)$. Mathematically, this can be expressed as $\|\hat{\dot{x}}(t) - \dot{x}(t)\| \le C\,\|\hat{x}(t) - x(t)\|$ in a suitable norm, usually the ℓ2-norm, for some positive constant C that depends on the sampling rate Δt.

One can divide smoothing techniques into local and global methods. On the one hand, local methods fit a regression function locally in time using neighboring measurements around a specific data point and ignore data outside the neighborhood. On the other hand, global methods employ all the data available to estimate a smooth approximation over the entire time domain. Inspired by the findings in Refs. [24,28,29,56,57], this article compares several best-performing local and global techniques for data denoising and numerical time differentiation.

3.1 Local Methods.

Local smoothing methods fit a model locally around a neighborhood of each query point. That is, for any given t, local methods fit a smooth regression function x(t) using the data points within a local neighborhood of a specified width. The local time derivative can then be estimated by differentiating the approximation $\hat{x}(t)$ analytically. The width of the neighborhood, h(t), called the window size or bandwidth, localizes the data around t. Usually, local regression methods approximate x(t) using a low-degree polynomial, where the local coefficients are estimated via least squares. The benefit of local polynomial regression is that it denoises the data and estimates the derivative simultaneously. Alternative local smoothing methods, originally developed for smooth surface reconstruction from scattered data points, can also be applied; examples include moving least squares [76–78] and radial basis function smoothing [79]. The local polynomial of degree d at t0 is given by a local basis expansion of the form
$$x(t; t_0) = \sum_{k=1}^{d+1} \theta_k(t_0)\, b_k(t)$$
(14)
where $\{\theta_k(t_0)\}_{k=1}^{d+1}$ are the local polynomial coefficients at t0 and the local basis functions $b_k(t) = (t - t_0)^{k-1}$, k = 1, …, d + 1, are monomials of increasing degree. Local polynomial regression estimates the local coefficients via weighted least squares as
$$\hat{\boldsymbol{\theta}}(t_0) = \arg\min_{\boldsymbol{\theta} \in \mathbb{R}^{d+1}} (\mathbf{y} - \mathbf{B}\boldsymbol{\theta})^T \mathbf{W}(t_0)\, (\mathbf{y} - \mathbf{B}\boldsymbol{\theta})$$
(15)
where $\mathbf{B} \in \mathbb{R}^{m \times (d+1)}$ is the basis matrix such that $\{\mathbf{B}\}_{ik} = b_k(t_i)$, $\boldsymbol{\theta} = [\theta_0, \theta_1, \ldots, \theta_d]^T \in \mathbb{R}^{d+1}$ is the coefficient vector, and $\mathbf{W}(t_0) \in \mathbb{R}^{m \times m}$ is the diagonal weight matrix defined as $\{\mathbf{W}(t_0)\}_{ii} = K_h(t_0, t_i)$, $K_h$ being the kernel function. The kernel function localizes the data around the point t0 by weighting its neighbors according to a distance metric. For example, the popular tricube kernel is given by [80]
$$K_h(t_0, t) = D\!\left(\frac{|t - t_0|}{h}\right)$$
(16)
with
$$D(u) = \begin{cases} (1 - |u|^3)^3 & \text{if } |u| \le 1 \\ 0 & \text{otherwise} \end{cases}$$
(17)
Data points whose distance from t0 exceeds the bandwidth h receive zero weight, meaning that they are excluded from the local regression. Figure 2 shows a schematic of local polynomial regression at a specific data point with two different kernels, represented by the shaded regions. The solution to Eq. (15) at t0 is given by
$$\hat{\boldsymbol{\theta}}(t_0) = \left(\mathbf{B}^T \mathbf{W}(t_0)\, \mathbf{B}\right)^{-1} \mathbf{B}^T \mathbf{W}(t_0)\, \mathbf{y}$$
(18)
and the estimate and derivative at t0 are given by $\hat{x}(t_0) = \hat{\theta}_0(t_0)$ and $\hat{\dot{x}}(t_0) = \hat{\theta}_1(t_0)$.
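As an illustration, a minimal numpy implementation of the local estimate in Eq. (18) with the tricube kernel of Eq. (17) might look as follows; the function and argument names are our own:

```python
import numpy as np

def local_poly(t, y, t0, h, d=2):
    """Local polynomial fit at t0 via weighted least squares, Eq. (18).

    Returns the smoothed value and the first-derivative estimate at t0.
    """
    u = np.abs(t - t0) / h
    w = np.where(u <= 1.0, (1.0 - u**3) ** 3, 0.0)    # tricube kernel weights, Eq. (17)
    B = np.vander(t - t0, N=d + 1, increasing=True)    # columns 1, (t - t0), (t - t0)^2, ...
    BtW = B.T * w                                      # B^T W(t0) with diagonal weights
    theta = np.linalg.solve(BtW @ B, BtW @ y)          # Eq. (18)
    return theta[0], theta[1]                          # x(t0) = theta_0, dx/dt(t0) = theta_1
```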
Fig. 2
Illustration of the local polynomial regression method using a local quadratic polynomial (solid line) over a window of size 30% of the entire data range. The shaded regions correspond to the rectangular and Epanechnikov [81] kernel functions

The quality of the local estimates is highly dependent on the local bandwidth h(t) and the degree of the polynomial. Large bandwidths tend to under-parametrize the regression function and increase the estimate bias because of curvature effects [82]. Small bandwidths, on the other hand, over-parametrize the unknown function, increase the variance, and produce noisy estimates. A similar tradeoff occurs with the degree of the polynomial, although the resulting estimates are less sensitive as compared to the bandwidth [31]. For a fixed bandwidth h, a large degree d reduces the bias but increases the variance, whereas a low order reduces the variance and increases the bias due to curvature effects. As recommended in Ref. [83], and since we are interested in first-order derivatives, we use a fixed polynomial degree d = 2 in the numerical experiments of this work. How to select an appropriate bandwidth h will be discussed in Sec. 4. In this article, we focus on the Savitzky–Golay (S–G) filter [84] and a locally weighted polynomial regression [85].

Remark 2

Reference [83] suggests a polynomial degree d = ν + 1, where ν is the order of the derivative (ν = 0 is the regression function estimate). Since we expect higher error on the derivative estimates than on the function estimates, we focus on first-order derivatives and set d = 2 for computing the function and derivative estimates.

3.1.1 Savitzky–Golay Filter.

Savitzky and Golay [84] proposed a simplified least squares polynomial-fit convolution for smoothing and estimating derivatives of a set of sequential data. The convolution can be understood as a weighted moving average filter whose weights are given by a low-degree polynomial within the filter window (see Ref. [86] for a detailed analysis). The S–G filter was designed to preserve higher moments within the data and to reduce the bias. In certain fields, such as econometrics and statistics, the S–G filter is often referred to as locally estimated scatterplot smoothing (LOESS) when the bandwidth and spacing are constant. The S–G filter belongs to the class of local polynomial regression methods whose kernel is the rectangular function with compact support given by Eq. (16) with
$$D(u) = \begin{cases} 1 & \text{if } |u| \le 1 \\ 0 & \text{otherwise} \end{cases}$$
(19)
The kernel in Eq. (19) acts as a window centered at t0 where data points outside of it are discarded.
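In practice, this filtering and differentiation is exactly what library implementations such as savgol_filter in scipy (listed in Table 1) provide; a short usage sketch on synthetic data, with an assumed window length, is shown below:

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 0.01
t = np.arange(0.0, 2.0, dt)
y = np.sin(3 * t) + 0.05 * np.random.randn(t.size)   # synthetic noisy measurements

window, degree = 31, 2   # odd window length (an assumption); d = 2 as in Remark 2
x_hat = savgol_filter(y, window, degree)                       # denoised state
dx_hat = savgol_filter(y, window, degree, deriv=1, delta=dt)   # derivative estimate
```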

3.1.2 Locally Weighted Polynomial Regression.

Locally weighted polynomial regression [85], often termed locally weighted scatterplot smoothing (LOWESS), is a generalization of S–G filter with different kernel functions, and possibly variable bandwidth and nonuniform spacing. Following the extensive work by Refs. [31,82,83,87], we use the Epanechnikov kernel function of the form Eq. (16) with
$$D(u) = \begin{cases} \tfrac{3}{4}(1 - u^2) & \text{if } |u| \le 1 \\ 0 & \text{otherwise} \end{cases}$$
(20)
The Epanechnikov kernel has compact support and is the optimal kernel function in the sense that it minimizes the mean squared error (MSE) in the interior of the data domain [81].

3.2 Global Methods.

Global methods yield an estimate over the entire data domain by imposing a smoothing constraint to the optimization problem. In this work, we consider smoothing algorithms whose global loss function is of the form
$$L(\mathbf{x}) = \mathrm{SSE}(\mathbf{x}) + \lambda R(\mathbf{x})$$
(21)
where the first term is the sum of squared errors, $\mathrm{SSE}(\mathbf{x}) = \|\mathbf{y} - \mathbf{x}\|_2^2$, the second term promotes smoothness of the solution via a regularization function R, and λ ≥ 0 is the regularization (also smoothing) parameter. The latter controls the tradeoff between closeness to the data and smoothness of the function estimates. Figure 3 illustrates the behavior of the global smoothing methods as the regularization parameter is varied. When there is no regularization, i.e., λ = 0, the estimate interpolates the data, whereas when the regularization parameter is gradually increased, the resulting estimate tends to the best-linear approximation [46]. The optimal λ resides between these two extreme cases and must be conveniently selected. In Sec. 4, we present methods to automatically determine near optimal regularization parameters. This article considers three well-known global smoothing algorithms: smoothing splines, the Tikhonov smoother (a.k.a. the Hodrick–Prescott (H–P) filter), and ℓ1-trend filtering.
Fig. 3
Schematic of data filtering via global smoothing methods. The regularization parameter can be varied from λ = 0, where data are interpolated, to infinity, which yields the best-linear fit. The optimal λ lives within the two extreme cases. The λ parameter was normalized in the [0, 1] range.

3.2.1 Smoothing Splines.

Smoothing splines is a variant of regression splines [46] for function approximation based on a truncated piece-wise polynomial basis, where a regularization term is added to penalize high curvature and hence produce a smooth estimate. The method uses a basis expansion to approximate the function, given by
$$x(t) = \sum_{k=1}^{p} N_k(t)\,\theta_k$$
(22)
where $\{N_k(t)\}_{k=1}^{p}$ is the p-dimensional natural spline basis over the knots t1, …, tm, and $\{\theta_k\}_{k=1}^{p}$ are the parameters weighting each basis function. Denoting by $\hat{\mathbf{x}} = \mathbf{N}\boldsymbol{\theta}$ the vector of m fitted values at the time instances ti, the smoothing splines problem can be defined as
$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta} \in \mathbb{R}^p} \|\mathbf{y} - \mathbf{N}\boldsymbol{\theta}\|_2^2 + \lambda\, \boldsymbol{\theta}^T \boldsymbol{\Omega}_N \boldsymbol{\theta}$$
(23)
where $\mathbf{N} \in \mathbb{R}^{m \times p}$ is the basis matrix of natural splines, $\boldsymbol{\theta} \in \mathbb{R}^p$ is the vector of parameters, and $\boldsymbol{\Omega}_N = \int (\mathbf{N}''(\tau))^T \mathbf{N}''(\tau)\, d\tau \in \mathbb{R}^{p \times p}$ is the matrix that penalizes the curvature. The unique minimizer of Eq. (23) is a natural cubic spline with knots at $\{t_i\}_{i=1}^{m}$, where regularization is introduced through the penalty term $\boldsymbol{\Omega}_N$ by weighting the basis coefficients θ [25]. The limit λ → 0 corresponds to an estimate $\hat{\boldsymbol{\theta}}$ that interpolates the data, whereas as λ → ∞ the estimate tends to the best-linear fit since no second-order derivative is permitted [46]. The problem in Eq. (23) admits a closed-form solution given by
$$\hat{\boldsymbol{\theta}} = \left(\mathbf{N}^T\mathbf{N} + \lambda\,\boldsymbol{\Omega}_N\right)^{-1} \mathbf{N}^T \mathbf{y}$$
(24)
Once the parameters θ^ are estimated, the function approximation yields
$$\hat{x}(t) = \sum_{j=1}^{p} N_j(t)\,\hat{\theta}_j$$
(25)
and one can easily obtain derivatives by differentiating Eq. (25). Smoothing splines can also be thought of as a linear filter that projects the input measurement vector y onto a subspace spanned by the columns of a smoother matrix $\mathbf{S}_\lambda$ as
$$\hat{\mathbf{x}} = \mathbf{S}_\lambda \mathbf{y}$$
(26)
where $\mathbf{S}_\lambda = \mathbf{N}(\mathbf{N}^T\mathbf{N} + \lambda\,\boldsymbol{\Omega}_N)^{-1}\mathbf{N}^T$. By choosing B-spline basis functions, the smoothing splines estimate $\hat{\mathbf{x}}$ in Eq. (22) can be computed in O(m) arithmetic operations (see Ref. [33] for an efficient implementation).
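A lightweight way to reproduce this behavior in practice is the UnivariateSpline routine from scipy listed in Table 1; in the sketch below the smoothing factor s plays the role of the regularization parameter, and its value is an arbitrary assumption:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.arange(0.0, 2.0, 0.01)
y = np.sin(3 * t) + 0.05 * np.random.randn(t.size)   # synthetic noisy measurements

spl = UnivariateSpline(t, y, k=3, s=0.5)   # cubic smoothing spline; s controls smoothness
x_hat = spl(t)                             # denoised state estimate
dx_hat = spl.derivative()(t)               # analytic differentiation of the spline
```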

3.2.2 Tikhonov Smoother.

The Tikhonov smoother, often referred to as the H–P filter [35] when used as a data smoother, incorporates an ℓ2-norm regularization term into the loss function of a least squares problem to control the regularity of the solution. Specifically, for any given regularization parameter λ ≥ 0, the solution of the Tikhonov smoother is obtained by solving the convex optimization problem
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x} \in \mathbb{R}^m} \|\mathbf{y} - \mathbf{x}\|_2^2 + \lambda \|\mathbf{D}^{(2)}\mathbf{x}\|_2^2$$
(27)
where $\mathbf{D}^{(2)} \in \mathbb{R}^{(m-2) \times m}$ is the second-order difference matrix (see Appendix A). The term $\|\mathbf{D}^{(2)}\mathbf{x}\|_2^2$ penalizes the curvature of x, yielding smooth approximations. For a fixed λ, the problem in Eq. (27) admits the closed-form, unique solution
$$\hat{\mathbf{x}} = \left(\mathbf{I} + \lambda\, \mathbf{D}^{(2)T}\mathbf{D}^{(2)}\right)^{-1} \mathbf{y}$$
(28)
The relative fitting error of Tikhonov smoother satisfies the following property [88]:
$$\frac{\|\mathbf{y} - \hat{\mathbf{x}}\|_2}{\|\mathbf{y}\|_2} \le \frac{32\lambda}{1 + 32\lambda}$$
(29)
The Tikhonov smoother offers linear computational complexity, and the solution in Eq. (28) can be computed in O(m) arithmetic operations [89]. After computing the state estimate $\hat{\mathbf{x}}$, we use natural cubic splines to fit the state estimate continuously and approximate the state-time derivative via differentiation. As with smoothing splines, we can project the input data onto the subspace of basis functions weighted by λ via $\hat{\mathbf{x}} = \mathbf{S}_\lambda \mathbf{y}$, where $\mathbf{S}_\lambda = (\mathbf{I} + \lambda\,\mathbf{D}^{(2)T}\mathbf{D}^{(2)})^{-1}$ is the smoother matrix.
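Because the system in Eq. (28) is banded, it can be solved efficiently with sparse linear algebra; a minimal sketch (function name ours) is:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def tikhonov_smoother(y, lam):
    """Tikhonov (Hodrick-Prescott) smoother, Eq. (28)."""
    m = y.size
    # Second-order difference matrix D^(2) of size (m - 2) x m
    D = sparse.diags([1.0, -2.0, 1.0], offsets=[0, 1, 2], shape=(m - 2, m))
    A = (sparse.identity(m) + lam * (D.T @ D)).tocsc()
    return spsolve(A, y)                      # closed-form solution of Eq. (28)
```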

3.2.3 ℓ1-Trend Filtering.

The ℓ1-trend filtering is a variation of the Hodrick–Prescott filter in which the ℓ2-norm of the regularization term is replaced by the ℓ1-norm to penalize the variations of the estimated trend. As opposed to smoothing splines and the Tikhonov smoother, ℓ1-trend filtering does not possess a closed-form solution, and optimization algorithms, such as interior-point methods, must be employed to solve the problem. Originally, ℓ1-trend filtering was proposed to produce piece-wise linear trend estimates [88]. However, more recent work in Ref. [90] extended ℓ1-trend filtering to piece-wise polynomials of any degree, leading to smoother estimates. They empirically observed that ℓ1-trend filtering estimates achieve better local adaptivity than smoothing splines and offer similar performance to locally adaptive smoothing splines [91] at reduced cost. A kth-order ℓ1-trend filter fits a piece-wise polynomial of degree k to the data. In the specific case of k = 0, ℓ1-trend filtering reduces to the well-known one-dimensional total variation regularization [36], or the one-dimensional fused LASSO [92]. For a given integer k ≥ 0, the kth-order ℓ1-trend filtering estimate is defined as
$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x} \in \mathbb{R}^m} \frac{1}{2}\|\mathbf{y} - \mathbf{x}\|_2^2 + \lambda \|\mathbf{D}^{(k+1)}\mathbf{x}\|_1$$
(30)
where λ is the regularization parameter and $\mathbf{D}^{(k+1)} \in \mathbb{R}^{(m-k-1) \times m}$ is the difference matrix of order k + 1 (see Appendix A). For any order k ≥ 0 and a prechosen λ, the ℓ1-trend filtering estimate $\hat{\mathbf{x}}$ is the unique minimizer of Eq. (30) since the problem is strictly convex.
The ℓ1-trend filtering estimates are not linear functions of the input data. Thus, there does not exist a smoother matrix that projects the input data onto its column space. Similar to smoothing splines and Tikhonov smoother, the ℓ1-trend filtering possesses interesting convergence properties, as studied in Refs. [88,93]. In particular, the maximum fitting error is bounded as
$$\|\mathbf{y} - \hat{\mathbf{x}}\|_\infty \le 4\lambda$$
(31)
where $\|\cdot\|_\infty := \max|\cdot|$ is the infinity norm. The bound in Eq. (31) indicates that as λ → 0 the estimate $\hat{\mathbf{x}}$ tends to the original input data. At the other extreme, as λ → ∞ the estimate tends to the best-linear fit. In fact, there exists a finite value $\lambda_{\max} = \|(\mathbf{D}^{(k+1)}\mathbf{D}^{(k+1)T})^{-1}\mathbf{D}^{(k+1)}\mathbf{y}\|_\infty$ at which this limit is already attained; that is, for λ ≥ λmax, the estimate $\hat{\mathbf{x}}$ corresponds to the same optimal linear fit. The computational complexity of ℓ1-trend filtering is O(m) per iteration, and the problem can be solved efficiently via specialized primal–dual interior-point methods [90]. If speed and robustness are a concern, Ramdas and Tibshirani [94] propose an enhanced and more efficient ℓ1-trend filtering solution strategy.
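Since Eq. (30) has no closed-form solution, a generic convex solver can be used in place of a specialized interior-point method; the following CVXPY sketch (CVXPY is the python option listed in Table 1) builds the difference matrix by repeated first differences:

```python
import numpy as np
import cvxpy as cp

def l1_trend_filter(y, lam, k=1):
    """k-th order l1-trend filtering estimate, Eq. (30)."""
    m = y.size
    D = np.diff(np.eye(m), n=k + 1, axis=0)   # (k+1)-th order difference matrix
    x = cp.Variable(m)
    objective = 0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x)
    cp.Problem(cp.Minimize(objective)).solve()
    return x.value
```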

We note that one can directly estimate derivatives by reformulating ℓ1-trend filtering in terms of an integral operator and regularizing the derivative estimates using the ℓ1-norm. As with the Tikhonov smoother, we did not perceive significant differences in the results compared to differentiation via spline interpolation of the denoised states.

4 Hyperparameter Selection Methods

Model selection is the task of optimally selecting model hyperparameters from a set of candidates, given a set of data. In the context of state variable denoising, the selection criterion aims to optimize the predictive quality of the model, i.e., the denoised states, by selecting the one that best balances closeness of fit and complexity. All the methods described in Secs. 2.1 and 3 contain a parameter that controls this tradeoff: a sparse regularization parameter in the case of the sparse regression algorithms, and a smoothing parameter (the bandwidth h for local smoothing and λ for global smoothing) in the case of the denoising methods. A considerable number of selection methods have been proposed, following different philosophies and exhibiting varying performance; see Refs. [46,95,96] for a review. In this work, we focus on two data-driven selection methods widely used in regularized regression without prior knowledge of the noise process: GCV and the Pareto curve criterion.

4.1 Generalized Cross-Validation.

Cross-validation is a general approach for choosing near optimal regularization parameters λ and estimating the prediction error in regression problems. CV separates the data samples into training and validation sets and uses the training set to compute the solution which is then used to predict the elements of the validation set. By repeating this procedure for different regularization parameters, one can select the regularizer that gives the minimum estimated mean squared error. K-fold cross-validation splits the data into K roughly equal-sized portions. For the kth portion, the model is fitted to the other K − 1 parts of the samples, and the prediction error of the fitted model is estimated using the kth part (see Ref. [46] for more details). The case K = m is known as leave-one-out cross-validation and all the data are used to fit the model except one sample. Leave-one-out CV can be computationally expensive since a model must be fit m times for each regularizer. GCV approximates the leave-one-out cross-validation in a tractable and convenient manner. For smoothing splines, GCV is more reliable than ordinary cross-validation in the sense that it has less tendency to under-smooth the data [97,98]. The GCV function is defined as
$$\mathrm{GCV}(\lambda) = \frac{m\,\|\mathbf{y} - \hat{\mathbf{x}}\|_2^2}{(m - \mathrm{df}_\lambda)^2}$$
(32)
where $\hat{\mathbf{x}}$ is the estimate of x for a given regularization parameter λ, and $\mathrm{df}_\lambda$ is the number of degrees of freedom of the regularized regression method. For linear smoothers, the number of degrees of freedom is defined as the trace of the smoother matrix, $\mathrm{df}_\lambda = \mathrm{trace}(\mathbf{S}_\lambda)$ [46]. The degrees of freedom measure the number of effective model parameters and, therefore, the complexity of the resulting model.
In the case of ℓ1-regularized methods, which do not admit a closed-form solution, the definition of the degrees of freedom depends on the problem. Specifically, for LASSO-type problems, the number of nonzero coefficients of the solution ξ is an unbiased estimate of the degrees of freedom, i.e., $\mathrm{df}_\lambda = \|\boldsymbol{\xi}\|_0$ [99], whereas for ℓ1-trend filtering the degrees of freedom are defined as $\mathrm{df}_\lambda = \|\mathbf{D}^{(k+1)}\hat{\mathbf{x}}\|_0 + k + 1$ [90]. The regularization parameter λ is chosen as the minimizer of the GCV function
$$\lambda_{\mathrm{GCV}} = \arg\min_{\lambda} \mathrm{GCV}(\lambda)$$
(33)
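As an illustration, a GCV sweep for a linear smoother such as the Tikhonov smoother of Sec. 3.2.2 can be implemented directly from Eq. (32), since the smoother matrix and its trace are available in closed form; the candidate grid of λ values in the usage line is an assumption:

```python
import numpy as np

def gcv_tikhonov(y, lambdas):
    """Select lambda for the Tikhonov smoother by minimizing the GCV function, Eq. (32)."""
    m = y.size
    D = np.diff(np.eye(m), n=2, axis=0)                # second-order difference matrix
    scores = []
    for lam in lambdas:
        S = np.linalg.inv(np.eye(m) + lam * D.T @ D)   # smoother matrix S_lambda, Eq. (28)
        x_hat = S @ y
        df = np.trace(S)                               # degrees of freedom = trace(S_lambda)
        scores.append(m * np.sum((y - x_hat) ** 2) / (m - df) ** 2)
    return lambdas[int(np.argmin(scores))]

# Example usage with a logarithmic grid of candidate smoothing parameters:
# lam_gcv = gcv_tikhonov(y, np.logspace(-2, 4, 50))
```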
GCV is generally an accurate and robust method to compute estimates with minimum prediction error. However, it has some limitations. For ℓ2-regularization problems, GCV may suffer from severe under-smoothing [96], and the region around the minimum of the GCV function is often very flat; depending on the resolution, the minimum may be difficult to locate numerically. Additionally, GCV may fail to select near optimal regularization parameters when the errors are highly correlated. According to Ref. [100], GCV may mistake correlated errors for part of the exact trajectory and therefore tends toward regularization parameters that only filter out the uncorrelated component of the noise. For ℓ1-regularization problems, the GCV function may tend toward zero as the regularization parameter approaches zero, and this global minimum may hide the local minimum that yields optimal prediction errors [101].

For local methods, the GCV functions are defined in terms of the kernel bandwidth h. There are two alternatives to select suitable bandwidths using GCV: varying the bandwidth locally or globally. In the former case, the local bandwidth is varied within a range h(t0) ∈ [hmin(t0), hmax(t0)] for each data point t0, and the one that minimizes the local GCV is selected. In the latter case, a local regression for all data points with a fixed bandwidth is performed and repeated with different bandwidths within a range h ∈ [hmin, hmax], and the one that minimizes the global GCV function is chosen. In our numerical experiments, we did not observe significant differences in performance between the two approaches. We therefore only present the global bandwidth approach in Sec. 5.

4.2 Pareto Curve Criterion.

The Pareto curve is a graph that traces the tradeoff between the residual, i.e., the SSE in Eq. (21), and the regularization constraint as the regularization parameter λ is varied. Typically, the Pareto curve is an L-shaped graph generated by plotting the norm of the regularization constraint, i.e., R in Eq. (21), versus the norm of the residual on a log–log scale for different values of λ. The Pareto curve we refer to in this work is often called the L-curve for ℓ2-regularization [54] and the Pareto frontier for ℓ1-regularization [102]. We will refer to the Pareto curve when the ℓ2-norm is employed for the residual and either the ℓ1- or ℓ2-norm is used for the regularization constraint, and we will be specific in cases where this creates confusion. The Pareto curve often has a corner located where the solution changes from being dominated by the regularization error, corresponding to the steepest part of the curve, to being dominated by the residual error, where the curve becomes flat. The Pareto curve criterion relies on this observation: the optimal value of λ is taken to correspond to the curve’s corner, where a good balance between the regularization and residual errors is achieved.

For continuous Pareto curves, defined for all λ ∈ [0, ∞), Hansen [54] suggests defining the corner point as the point with maximum curvature. For discrete curves, however, it becomes more complicated to define the location of the corner point. Hansen et al. [103] highlight the difficulties in computing the corner point from discrete L-curves built using a finite number of λ values. The discrete curves may contain small local corners other than the global one that may give suboptimal regularization parameters. To alleviate this issue, they propose an adaptive pruning algorithm, where the best-corner point is computed by using candidate corners from curves at different resolutions. Since the L-curves must be sampled at different scales, the pruning algorithm can be computationally expensive. We instead compute the corner points using a simpler method proposed in Cultrera et al. [104]. The algorithm iteratively employs the Menger curvature [105] of a circumcircle and the golden section search method to efficiently locate the point on the curve with maximum curvature.
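The key ingredient of the corner-finding algorithm in Ref. [104] is the Menger curvature of three points sampled on the log–log Pareto curve; a minimal sketch of that building block is given below (the function name is ours, and the golden-section search over λ that iterates on this quantity is omitted):

```python
import numpy as np

def menger_curvature(p1, p2, p3):
    """Curvature of the circumcircle through three points of the (log-log) Pareto curve."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    # Twice the signed triangle area via the 2D cross product
    area2 = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return 2.0 * area2 / (a * b * c)   # Menger curvature = 4 * area / (a * b * c)
```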

As compared to the GCV criterion, the Pareto curve criterion for ℓ2-regularization is often more robust to correlated errors since it combines information about the residual norm and the norm of the solution, whereas GCV only uses information about the residual norm [54]. The Pareto curve criterion also presents some limitations. The L-curve is likely to fail for problems with very smooth solutions [106], and nonconvergence may occur when the generalized Fourier coefficients decay at the same rate or less rapidly than the singular values of the basis matrix, Φ in this article (see Ref. [107] for details). Additionally, the location of the corner is dependent on the scale in which the Pareto curve is considered, and it may not appear in certain scales [108]. In the numerical examples section, we compare the performance and discuss limitations of GCV and Pareto curve for the smoothing and sparse regression methods considered in this article.

5 Numerical Examples

In this section, we present numerical examples to assess the performance of the smoothing and numerical differentiation methods presented in Sec. 3, as well as the model selection methods in Sec. 4. We then compare the accuracy of the sparse regression algorithms in Sec. 2, specifically WBPDN and STLS, in recovering the governing equations of three nonlinear dynamical systems using the preprocessed data. In all cases, we assume no prior knowledge of the noise values and of the dynamics that generated the data, except that the governing equations can be sparsely expressed in a multivariate monomial basis in the state variables. We only have access to the noisy state measurements at discrete times, represented by the measurement vectors $\mathbf{y}_j \in \mathbb{R}^m$, $j = 1, \ldots, n$, sampled every Δt units of time. The exact state variables are computed by integrating the nonlinear ordinary differential equations (ODE) using the fourth-order explicit Runge–Kutta (RK4) integrator with a tolerance of $10^{-10}$. In this work, we assume that the state variables are contaminated with independent zero-mean additive noise with variance $\sigma^2$. Specifically, for all the numerical examples, we employ a white Gaussian noise model given by $\epsilon_j \sim \mathcal{N}(0, \sigma^2)$, $j = 1, \ldots, n$, and vary σ to study different noise levels. We additionally study the effect of Gaussian distributed colored noise generated with a power law spectrum [109].

While here we work with a Gaussian noise model, we note that more general anisotropic and correlated noise can be easily incorporated through the generalized least squares formulation of the smoothing algorithms. We refer the interested reader to Ref. [110] for a thorough discussion on the generalized least squares, and Section 4.6.2. of Ref. [97] to estimate the weighting matrix involved in the method. To measure the signal magnitude relative to the noise level, we provide, in Appendix  B, the signal-to-noise ratio (SNR) for each state j.

The sampling time is fixed to Δt = 0.01, and the number of samples is fixed to 201, which are enough samples to capture the essential behavior of each dynamical system. We run 100 realizations for the training set for each of the numerical examples and report the mean and variance of the errors. The state-time derivatives are computed over an interval that is 5% (from each side) larger than the intended training time span, but only the data over the original training time are retained to compute ξ^. To show the quality of the resulting state and derivative estimates and state predictions, we report the relative errors
$$e_{\mathbf{X}} = \frac{\|\hat{\mathbf{X}} - \mathbf{X}^*\|_F}{\|\mathbf{X}^*\|_F}, \qquad e_{\dot{\mathbf{X}}} = \frac{\|\hat{\dot{\mathbf{X}}} - \dot{\mathbf{X}}^*\|_F}{\|\dot{\mathbf{X}}^*\|_F}$$
(34)
where $\mathbf{X} = [\mathbf{x}(t_1), \mathbf{x}(t_2), \ldots, \mathbf{x}(t_m)] \in \mathbb{R}^{n \times m}$, $\|\cdot\|_F$ is the Frobenius norm, and $\hat{(\cdot)}$ and $(\cdot)^*$ represent the estimated and true values, respectively.
We approximate the governing equations of each example by a multivariate monomial basis of total degree d in n state variables. That is, ϕi(x) in Eq. (2) are given by
$$\phi_i(\mathbf{x}) = \prod_{j=1}^{n} x_j^{i_j} \quad \text{such that} \quad \sum_{j=1}^{n} i_j \le d, \qquad i_j \in \mathbb{N} \cup \{0\}$$
(35)
where the nonnegative integer ij denotes the degree of the monomial in state variable xj. The size of the approximation basis is then $p = \binom{n+d}{n}$. For all test cases, we set the total degree d of the basis to one more than the highest total degree monomial present in the governing equations. We run the WBPDN optimization procedure using the solveBP routine in matlab from the open-source SparseLab 2.1 package [72,111] and employ the publicly available matlab codes for STLS. We report the relative solution error defined as
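A monomial basis matrix of this form can be assembled in a few lines; the sketch below uses itertools to enumerate the multi-indices (the helper name is ours):

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(X, d):
    """Evaluate all monomials of total degree <= d, Eq. (35), at the rows of X.

    X : (m, n) array of state samples; returns the (m, p) basis matrix Phi,
    with p = binom(n + d, n) columns, including the constant term.
    """
    m, n = X.shape
    columns = [np.ones(m)]                          # degree-0 (constant) monomial
    for deg in range(1, d + 1):
        for idx in combinations_with_replacement(range(n), deg):
            col = np.ones(m)
            for j in idx:
                col = col * X[:, j]                 # multiply the selected state columns
            columns.append(col)
    return np.column_stack(columns)
```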
$$e_{\boldsymbol{\xi}} = \frac{\|\hat{\boldsymbol{\xi}} - \boldsymbol{\xi}^*\|_2}{\|\boldsymbol{\xi}^*\|_2}$$
(36)
where $\hat{\boldsymbol{\xi}}$ and $\boldsymbol{\xi}^*$ are the approximate and exact solution vectors for each state variable, respectively.

The considered nonlinear dynamical systems are the Lorenz 63 system as a base model for identifying chaotic dynamics, and the Duffing and Van der Pol oscillators as nonlinear stiffness and damping models. These dynamical systems are well-known reference problems and have been extensively studied in the literature.

We provide in Table 1 a compendium of available codes in matlab, python, and R for each of the smoothing methods. We highlighted in bold the codes we used in this article for smoothing splines and ℓ1-trend filtering. For local methods and Tikhonov smoother, we used our own implementation.

Table 1
Compendium of filtering software codes and packages available in matlab, python, and R

Filter             | matlab                         | python                      | R
Savitzky–Golay     | sgolayfilt (signal processing) | savgol_filter (scipy)       | sgolayfilt (signal)
LOWESS             | lowess (curve fitting)         | lowess (lowess)             | lowess (stats)
Tikhonov           | hpfilter (econometrics)        | hpfilter (statsmodels)      | hpfilter (mFilter)
Smoothing splines  | csaps (curve fitting)          | UnivariateSpline (scipy)    | smooth.spline (stats)
ℓ1-trend filter    | l1_tf                          | ℓ1-trend filtering (CVXPY)  | trendfilter (genlasso)

Note: We provide routine names and packages/toolboxes (enclosed in parentheses).

5.1 Lorenz 63 System.

The Lorenz 63 system is a canonical model for nonlinear chaotic dynamics that was developed by E. Lorenz as a simplified model for atmospheric convection [112]. This system of nonlinear ODEs has been fundamental to the study of chaotic systems, wherein the future states of the system are highly sensitive to initial conditions. The state trajectories are three-dimensional, chaotic, deterministic, nonperiodic, and confined within a butterfly-shaped attractor, making them hard to predict. The Lorenz 63 model is given by the following system of first-order equations:
$$\dot{x}_1 = \gamma(x_2 - x_1), \qquad x_1(0) = x_{1,0}$$
(37a)
$$\dot{x}_2 = x_1(\rho - x_3) - x_2, \qquad x_2(0) = x_{2,0}$$
(37b)
$$\dot{x}_3 = x_1 x_2 - \beta x_3, \qquad x_3(0) = x_{3,0}$$
(37c)
where the parameter values are set to γ = 10, ρ = 28, and β = 8/3, and the initial condition is (x1,0, x2,0, x3,0) = ( − 8, 7, 27). Assuming a total degree d = 3 for the polynomial basis, the right-hand-side of Eq. (37) is described exactly by seven of the p = 20 monomials. We simulated the Lorenz 63 system from t = 0 to t = 2.2 time units to obtain the state trajectories. We then sampled the exact state variables at Δt = 0.01 resulting in m = 221 samples, and perturbed them with 100 random noise realizations. We repeated the process for four different noise levels σ.
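The training data for this example can be reproduced along the following lines; here scipy's solve_ivp with tight tolerances stands in for the explicit RK4 integrator described earlier, and the value of σ shown is one of the noise levels considered:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, rho, beta = 10.0, 28.0, 8.0 / 3.0           # Lorenz 63 parameters, Eq. (37)

def lorenz(t, x):
    return [gamma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

t_eval = np.arange(0.0, 2.2 + 1e-12, 0.01)         # dt = 0.01, m = 221 samples
sol = solve_ivp(lorenz, (0.0, 2.2), [-8.0, 7.0, 27.0],
                t_eval=t_eval, rtol=1e-10, atol=1e-10)

sigma = 0.1                                         # example noise level
Y = sol.y + sigma * np.random.randn(*sol.y.shape)   # noisy measurements, Eq. (3)
```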

Figure 4 compares the relative state (top row) and state-time derivative (bottom row) errors defined in Eq. (34) after running each of the local and global smoothing methods using GCV (left column) and the Pareto curve criterion (right column) to automatically select the regularization parameter λ. The error trend with respect to different noise levels behaves as expected, with a smooth error increase as the noise level grows. We notice that, in general, global methods outperform local methods for all noise levels. This observation is aligned with the conclusions drawn in Ref. [29]. Specifically, LOWESS with the optimal Epanechnikov kernel is superior to the Savitzky–Golay filter with a uniform kernel. For the estimation of state variables, the difference in accuracy between local and global methods is not as evident as in the case of state-time derivative estimates. We also observe that more accurate state estimates yield more accurate state-time derivatives for the different filters presented. In this example, global methods perform similarly, with ℓ1-trend filtering yielding the best results in general. As discussed in Ref. [90], the local adaptivity of ℓ1-trend filtering produces better estimates as compared to cubic smoothing splines. However, we do not notice significant superiority in terms of overall accuracy.

Fig. 4
Comparison of the relative state (top row) and state-time derivative (bottom row) errors for the Lorenz 63 system using local and global smoothers. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).

Figure 5 compares the GCV function defined in Eq. (32) for the Savitzky–Golay filter and LOWESS as a function of the global bandwidth h at different noise levels. The solid circles and crosses correspond to the minimum error—i.e., optimal bandwidth—and the minimum of the GCV function, respectively. The general trend for both filters is consistent with noise levels: the bandwidth h increases as the noise level is increased. The interpretation is that as noise levels increase, the local features of the signal are progressively lost and a larger neighborhood is needed to estimate the signal. Reference [113] provides an explanation of this phenomenon from a geometric viewpoint. However, GCV tends to select larger bandwidths than the optimal ones, thus yielding over-smoothed estimates.

Fig. 5
GCV functions for each state variable of Lorenz 63 system using Savitzky–Golay and LOWESS filters at different noise levels for an arbitrary realization. Solid circles represent the global bandwidth h that yield the minimum estimation error defined in Eq. (34). Crosses represent the minima of the GCV functions for the GCV selection strategy.

For the global methods, we also illustrate the behavior of GCV and Pareto curves to select near optimal smoothing parameters λ. Figure 6 shows the Pareto curves (top panels) and GCV functions (bottom panels) for smoothing splines at different noise levels for one state trajectory realization. In the case of the Pareto curve criterion, the crosses represent the corners of the L-shaped curves found via the algorithm in Ref. [104]. In the case of the GCV criterion, the crosses represent the minimum of the GCV function computed via Eq. (32). In both criteria, the dots represent the λ that yields the lowest relative coefficient error eξ. The first observation is that the Pareto curves consistently select larger λ—smoothing increases as we move along the Pareto curve from top to bottom—as we increase the noise level. We noticed that the L-shaped structure becomes clearer and sharper in regions between the low and high noise level extremes. Overall, in all noise cases, the λ corresponding to the corner points of the Pareto curves almost coincide with the optimal ones. In the GCV case, we observe a flat region, where the minimum is located, and a steep region, where over-smoothing errors become more noticeable. As with Pareto curves, GCV also results in a good model selection strategy for selecting the smoothing parameter λ. We omit the plots for the Tikhonov smoother since it yields similar trends and observations. The Pareto curves for ℓ1-trend filtering, as shown in Fig. 7 (top panels), become better defined than in the smoothing splines and Tikhonov smoother cases, yielding smoothing parameters closer to the optimal ones. However, in the case of GCV, we notice that the behavior of the GCV function around the region of small λ is inconsistent and may yield minima resulting in under-smoothed state estimates. We also observed that by removing the small λ region corresponding to the noisy GCV behavior, the λ corresponding to the new minima matched the optimal ones well (see, for example, the σ = 0.1 case for state x1 in Fig. 7). The results reported for ℓ1-trend filtering using GCV in Fig. 4 were obtained after truncating the region of small λ with irregularities in the GCV function.

Next, we present the performance of WBPDN and STLS for estimating the governing equation coefficients $\hat{\boldsymbol{\xi}}_j$, $j = 1, \ldots, n$, using the data filtered by each of the global smoothing methods. For each method, we used the data filtered via the Pareto curve criterion since it slightly outperformed GCV. We omit the analysis of local methods since we observed that global methods generally offer better accuracy for the state and state-time derivative estimates in the examples of this article. Figure 8 compares the relative solution error, given in Eq. (36), for WBPDN (top panels) and STLS (bottom panels) at different noise levels, and using GCV (dashed lines) and Pareto curves (solid lines) as automatic hyperparameter selection techniques. Generally, as noted in Ref. [64], WBPDN significantly outperforms STLS for the hyperparameter selection methods presented in this work. The error trend consistently increases as we increase the noise level for both sparse regression methods. Overall, WBPDN is more robust to noise and yields more accurate coefficient estimates. Similar to the filtering stage, the Pareto curve criterion results in superior performance as compared to GCV for the sparse regression algorithms presented in this article.
We also note that even though ℓ1-trend filtering yields slightly better state and state time-derivative estimates than smoothing splines and the Tikhonov smoother, this difference is not noticeable in the error of the coefficient estimates. Thus, the global smoothing methods combined with the Pareto curve criterion give similar results for the recovery of the coefficients. Figure 9 shows the Pareto curves and GCV functions for the zeroth iteration (i.e., no weighting) of WBPDN with data filtered via ℓ1-trend filtering. We emphasize that good coefficient estimates at the zeroth iteration of WBPDN are crucial for the subsequent iterations to enhance sparsity, reduce the error, and converge. As can be observed, the corner point criterion of the Pareto curves results in accurate estimates of near optimal regularization parameters λ, whereas the minima of the GCV functions generally fall at smaller λ than the optimal ones. Similar results were observed for the Tikhonov smoother and smoothing splines.
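A simple way to locate such a corner point on a discrete Pareto curve is to pick the point of maximum Menger curvature (cf. Ref. [105]) over consecutive triplets in log–log coordinates. The sketch below is in the same spirit as, but not identical to, the iterative algorithm of Ref. [104]; the clipping value eps is an illustrative safeguard for zero penalties.

```python
# Corner-point heuristic for a discrete Pareto (L-) curve in log-log coordinates.
import numpy as np

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three 2D points (zero if collinear)."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p3 - p1)
    twice_area = abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return 0.0 if a * b * c == 0.0 else 2.0 * twice_area / (a * b * c)

def pareto_corner(residuals, penalties, eps=1e-12):
    """Index of the maximum-curvature point of the log-log Pareto curve."""
    pts = np.column_stack([np.log10(np.maximum(residuals, eps)),
                           np.log10(np.maximum(penalties, eps))])
    curvatures = [menger_curvature(pts[i - 1], pts[i], pts[i + 1])
                  for i in range(1, len(pts) - 1)]
    return 1 + int(np.argmax(curvatures))

# Usage with the (residual, penalty) pairs and lambda grid from the previous sketch:
# i_corner = pareto_corner(*zip(*pareto))
# lam_corner = lambdas[i_corner]
```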

Fig. 6
Pareto curves (top panels) and GCV functions (bottom panels) for each state variable of the Lorenz 63 system using smoothing splines at different noise levels for an arbitrary realization. Solid circles represent the regularization parameters λ that yield the minimum estimation error defined in Eq. (34). Crosses represent the converged corner points for Pareto curves (using the iterative algorithm in Ref. [104]) and the minima of the GCV functions for GCV. Note that the corners of the Pareto curves for small noise levels are not visible because of the scale of the plot; the corners exist, however, if one zooms in on that region.
Fig. 7
Pareto curves (top panels) and GCV functions (bottom panels) for each state variable of the Lorenz 63 system using ℓ1-trend filtering at different noise levels for an arbitrary realization. Solid circles represent the regularization parameters λ that yield the minimum estimation error defined in Eq. (34). Crosses represent the converged corner points for Pareto curves and the minima of the GCV functions for GCV.
Fig. 8
Relative coefficient errors of Lorenz 63 for WBPDN (top panels) and STLS (bottom panels) with respect to different noise levels σ with measurements preprocessed via smoothing splines, Tikhonov smoother, and ℓ1-trend filtering. The dashed and solid lines correspond to the solution using GCV and Pareto curves for selecting λ, respectively.
Fig. 9
Pareto curves (top panels) and GCV functions (bottom panels) for each state variable of Lorenz 63 system for WBPDN using ℓ1-trend filtering at different noise levels for an arbitrary realization. Solid circles represent the regularization parameters λ that yield the minimum estimation error defined in Eq. (34). Crosses represent the converged corner points for Pareto curves and the minima of the GCV functions for GCV. Similar trends were observed for smoothing splines and Tikhonov smoother.

For STLS, we noticed that both the GCV functions and the Pareto curves were nonsmooth compared to the WBPDN case, possibly yielding suboptimal λ estimates. We also observed that measuring the complexity of the model (the vertical axis of the Pareto curve log–log plots) using the ℓ0- and ℓ1-norms for STLS did not produce well-defined L-shaped curves and corners, resulting in poor estimates of the optimal λ. Instead, we used 1/λ as the complexity measure. Figure 10 shows the GCV functions and Pareto curves for STLS with data filtered using the Tikhonov smoother. As seen, the ℓ0-Pareto curve (top panels) is inconsistent with the typical L shape of Pareto curves. Similar results were observed for the ℓ1-Pareto curve. Both the 1/λ-Pareto curves and the GCV functions present discontinuities for STLS, which may be caused by the hard-thresholding step of the algorithm.

Next, we assess the prediction accuracy of the identified governing equations for the best-performing procedure, i.e., ℓ1-trend filtering for denoising the data and estimating derivatives and WBPDN for recovering the Lorenz 63 system dynamics. For the lowest noise case σ = 0.001, even though the coefficients were recovered with high accuracy, we noticed an abrupt departure of the predicted state trajectory from the exact one at around eight time units. Therefore, we only report the prediction of the identified dynamics up to eight time units for all noise cases. Figure 11 illustrates the measurements used for training (left panel) and the state predictions of the mean recovered model (right panel). We generated 100 realizations of the training measurements, estimated the coefficients for each realization, and averaged them to obtain the mean recovered model. We can see that the butterfly-shaped behavior of the Lorenz 63 system is captured for all noise cases. However, the σ = 1 case results in poor prediction performance, as the state trajectory diverges significantly from the exact one. Table 2 provides the quantitative assessment of the prediction accuracy up to eight time units for all noise cases.
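To make the prediction step concrete, the following sketch integrates a recovered polynomial model ẋ = Θ(x) Ξ forward in time, as is done to produce the trajectories compared in Fig. 11. The monomial ordering, the library degree, and the Lorenz-63-like coefficient values are illustrative placeholders rather than the coefficients actually recovered in this work.

```python
# Sketch: integrate an identified polynomial model x_dot = Theta(x) @ Xi to
# produce state predictions (cf. Fig. 11). Coefficients below are placeholders.
import numpy as np
from itertools import combinations_with_replacement
from scipy.integrate import solve_ivp

def monomial_library(x, degree):
    """Evaluate all monomials of the state x up to the given total degree."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            feats.append(np.prod([x[i] for i in idx]))
    return np.array(feats)

def identified_rhs(t, x, Xi, degree):
    """Right-hand side of the recovered model; each column of Xi drives one state."""
    return monomial_library(x, degree) @ Xi

# Degree-2 library in 3 states has 10 monomials, ordered as:
# 1, x1, x2, x3, x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2.
Xi = np.zeros((10, 3))
Xi[1, 0], Xi[2, 0] = -10.0, 10.0                  # x1_dot = -10 x1 + 10 x2
Xi[1, 1], Xi[2, 1], Xi[6, 1] = 28.0, -1.0, -1.0   # x2_dot = 28 x1 - x2 - x1 x3
Xi[3, 2], Xi[5, 2] = -8.0 / 3.0, 1.0              # x3_dot = -(8/3) x3 + x1 x2

sol = solve_ivp(identified_rhs, (0.0, 8.0), [-8.0, 7.0, 27.0],
                args=(Xi, 2), max_step=0.01)      # predict up to eight time units
```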

Fig. 10
ℓ0- and 1/λ-Pareto curves (top and central panels) and GCV functions (bottom panels) for each state variable of Lorenz 63 system for STLS using Tikhonov smoother at different noise levels for an arbitrary realization. Solid circles represent the regularization parameters λ that yield the minimum estimation error defined in Eq. (34). Crosses represent the converged corner points for Pareto curves and the minima of the GCV functions for GCV. Similar trends were observed for smoothing splines and ℓ1-trend filtering.
Fig. 11
Predicted states for Lorenz 63 from the recovered governing equations for different noise levels using ℓ1-trend filter/Pareto for filtering and WBPDN/Pareto for sparse regression. Left: The true dynamics are highlighted in a solid line, and discrete observations used for training are represented by dots. The cross indicates the initial condition. Right: State predictions for different noise levels up to eight time units.
Table 2

Mean prediction errors and their standard deviations (enclosed in parentheses) up to eight time units for Lorenz 63 for different noise levels using ℓ1-trend filter/Pareto for filtering and WBPDN/Pareto for sparse regression

Noise   σ = 0.001                    σ = 0.01                     σ = 0.1                      σ = 1
e_x1    3.46 × 10⁻² (3.05 × 10⁻²)    4.24 × 10⁻¹ (3.90 × 10⁻¹)    8.47 × 10⁻¹ (2.69 × 10⁻¹)    1.27 × 10⁰ (1.89 × 10⁻¹)
e_x2    4.60 × 10⁻² (4.07 × 10⁻²)    4.67 × 10⁻¹ (3.85 × 10⁻¹)    8.68 × 10⁻¹ (2.58 × 10⁻¹)    1.25 × 10⁰ (1.54 × 10⁻¹)
e_x3    2.19 × 10⁻² (1.89 × 10⁻²)    1.18 × 10⁻¹ (7.88 × 10⁻²)    2.39 × 10⁻¹ (7.71 × 10⁻²)    4.03 × 10⁻¹ (7.66 × 10⁻²)
e_X     2.19 × 10⁻² (1.93 × 10⁻²)    2.18 × 10⁻¹ (1.78 × 10⁻¹)    4.10 × 10⁻¹ (1.19 × 10⁻¹)    5.93 × 10⁰ (7.87 × 10⁻²)

Notes: For each of the recovered models from the 100 noisy trajectory realizations, we predicted the state trajectories by integrating the identified governing equations. We then computed the mean and standard deviation of the prediction errors defined in Eq. (34).

Remark 3

In the σ = 0.1 and σ = 1 cases, especially the latter, we noticed that the coefficients recovered for some of the 100 realizations yielded unstable dynamical models that could not be integrated for state prediction. We removed those cases when computing the average and the standard deviation used to generate the plot in Fig. 11 (right panel) and to assess the prediction accuracy in Table 2.
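A sketch of the bookkeeping described in Remark 3 is given below: each recovered model is integrated, realizations whose trajectories fail to integrate or blow up are discarded, and the prediction-error statistics are computed over the remaining ones. The blow-up threshold and the assumed signature rhs(t, x, Xi) are illustrative choices, not the authors' implementation.

```python
# Discard realizations whose identified models blow up or fail to integrate,
# then average the relative prediction errors over the remaining, stable ones.
import numpy as np
from scipy.integrate import solve_ivp

def stable_prediction_stats(models, rhs, x0, t_eval, true_traj, blowup=1e3):
    """Mean/std of the relative prediction error over realizations that integrate stably."""
    errors = []
    for Xi in models:                              # one coefficient matrix per realization
        sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), x0, t_eval=t_eval, args=(Xi,))
        ok = (sol.success and sol.y.shape[1] == len(t_eval)
              and np.max(np.abs(sol.y)) < blowup)
        if not ok:
            continue                               # unstable or failed realization
        errors.append(np.linalg.norm(sol.y - true_traj) / np.linalg.norm(true_traj))
    return np.mean(errors), np.std(errors)
```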

Finally, we study the performance of the local and global smoothing techniques for additive Gaussian colored noise. We generated noise ε_j for each state variable j and different noise levels using a power spectral density (PSD) per unit of bandwidth proportional to 1/|f|^d, where f denotes frequency. Specifically, we simulated pink, blue, and brown noise, corresponding to exponents d = 1, −1, and −2, respectively, and added it to the true state trajectories.
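The following minimal sketch shows one common way to generate such power-law noise, by shaping the spectrum of white Gaussian noise (in the spirit of generators such as Ref. [109]). The rescaling to a target standard deviation σ and the handling of the DC component are illustrative choices, not necessarily the procedure used here.

```python
# Sketch: Gaussian colored noise with PSD ~ 1/|f|^d via spectral shaping of white noise.
import numpy as np

def colored_noise(m, d, sigma, rng=None):
    """Length-m Gaussian noise sequence with power spectral density ~ 1/|f|^d."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(m)
    freqs = np.fft.rfftfreq(m)
    amplitude = np.ones_like(freqs)
    amplitude[1:] = freqs[1:] ** (-d / 2.0)   # PSD ~ |f|^-d  =>  amplitude ~ |f|^(-d/2)
    amplitude[0] = 0.0                        # drop the DC component
    shaped = np.fft.irfft(np.fft.rfft(white) * amplitude, n=m)
    return sigma * shaped / np.std(shaped)    # rescale to the requested noise level

# e.g., pink noise (PSD ~ 1/f) at noise level sigma = 0.1 for a length-m trajectory:
# eps = colored_noise(m=1000, d=1, sigma=0.1)
```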

Figure 12 shows the relative state error against the noise level for all local and global smoothers using GCV and Pareto curves as hyperparameter selection methods. As illustrated, the color of the noise does not affect the performance of the smoothing methods, which yield almost identical results. The same conclusion as in the white noise case can be drawn: global smoothers outperform local smoothers, and ℓ1-trend filtering is slightly more accurate than the rest of the global methods. For completeness, the error performance on the time derivatives can be found in Appendix D. We omit the analysis of the governing equation recovery performance since the smoothing results with colored noise are similar to those of the white noise case (Fig. 4).

Fig. 12
Comparison of the relative state errors for the Lorenz 63 system using local and global smoothers with pink, blue, and brown noise (from left to right). The regularization parameters were computed using GCV (top row) and the Pareto curve criterion (bottom row).

5.2 Duffing and Van der Pol Oscillators.

The Duffing [114] and Van der Pol [115] oscillators are classical benchmark problems in nonlinear dynamics which feature a cubic nonlinearity. The equation describing the dynamics of the unforced oscillators is given by
\ddot{\zeta} + D(\zeta)\,\dot{\zeta} + K(\zeta)\,\zeta = 0
(38)
where D(ζ) and K(ζ) are the nonlinear damping and stiffness functions. The second-order system in Eq. (38) can be transformed into a first-order system by setting x1 = ζ and x2 = ζ̇, giving
\dot{x}_1 = x_2, \quad x_1(0) = x_{1,0}
(39a)
\dot{x}_2 = -D(x_1)\,x_2 - K(x_1)\,x_1, \quad x_2(0) = x_{2,0}
(39b)
The Duffing oscillator physically models a spring–damper–mass system with a spring whose stiffness is K(x1) = κ + εx1² and constant damping D(x1) = δ. We focus on the nonchaotic, stable spiral case around the equilibrium (x1,e, x2,e) = (0, 0), where the parameters of the system are set to κ = 1, δ = 0.1, and ε = 5, and the initial condition to (x1,0, x2,0) = (1, 0). The Van der Pol system is a second-order oscillator with a nonlinear damping term of the form D(x1) = −γ + μx1² and constant stiffness K(x1) = 1. It was originally proposed by Van der Pol as a model to describe the oscillation of a triode in an electrical circuit. The Van der Pol oscillator exhibits limit cycle behavior around the origin (x1,e, x2,e) = (0, 0). For this case, we set the parameters to γ = 2 and μ = 2, and the initial condition to (x1,0, x2,0) = (0, 1). For both nonlinear oscillators, the number of state variables is n = 2 and the degree of the polynomial basis is set to d = 4, yielding p = 15 monomial terms. Out of these, only four describe the dynamics. In this case, the displacement x1 and the velocity x2 are measured, and we used 201 samples over two time units, from t = 0.1 to t = 2.1, to recover the system.
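As a concrete illustration of this setup, the sketch below integrates the two oscillators with the stated parameters and initial conditions, samples 201 points over [0.1, 2.1], and verifies that a degree-4 monomial library in n = 2 states has p = 15 terms. The solver and its tolerances are illustrative assumptions.

```python
# Sketch: training trajectories for the Duffing and Van der Pol oscillators and
# the size of the degree-4 monomial library in two state variables.
import numpy as np
from math import comb
from scipy.integrate import solve_ivp

def duffing(t, x, delta=0.1, kappa=1.0, eps=5.0):
    # D(x1) = delta, K(x1) = kappa + eps*x1**2  (cf. Eqs. (39a)-(39b))
    return [x[1], -delta * x[1] - (kappa + eps * x[0] ** 2) * x[0]]

def van_der_pol(t, x, gamma=2.0, mu=2.0):
    # D(x1) = -gamma + mu*x1**2, K(x1) = 1
    return [x[1], -(-gamma + mu * x[0] ** 2) * x[1] - x[0]]

t_eval = np.linspace(0.1, 2.1, 201)            # 201 samples over two time units
duff = solve_ivp(duffing, (0.0, 2.1), [1.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-10)
vdp = solve_ivp(van_der_pol, (0.0, 2.1), [0.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-10)

n, d = 2, 4
p = comb(n + d, d)                              # number of monomials up to degree 4
assert p == 15
```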

We present the performance of the different filtering strategies for denoising the states and estimating the time derivatives in Fig. 13 for the Duffing oscillator and Fig. 14 for the Van der Pol oscillator. Similar to the Lorenz 63 example, global methods outperform local methods for the model selection techniques used in this article. Again, ℓ1-trend filtering yields slightly better results than smoothing splines and the Tikhonov smoother, especially when estimating derivatives. Both GCV and Pareto curves robustly select suitable λ and perform similarly for the global methods. Similar observations can be made for the identification of both nonlinear oscillators. Figure 15 shows the accuracy of WBPDN for recovering the governing equation coefficients of the Duffing (top panels) and Van der Pol (bottom panels) oscillators. State measurements preprocessed with the global filtering strategies yield similar recovery performance using WBPDN, and Pareto curves outperform GCV for estimating near optimal regularization parameters. We do not show the plots of the Pareto curves and GCV functions and omit the comparison with STLS for these examples since we observed similar trends as in the Lorenz 63 case.

Finally, we report the prediction accuracy qualitatively in Fig. 16, for the Duffing oscillator, and Fig. 17, for the Van der Pol oscillator. We also report the prediction errors up to 20 time units in Tables 3 and 4 for the Duffing and Van der Pol oscillators, respectively. For these examples, we integrated the identified governing equations up to 20 time units, about 10 times the time span used for training. As highlighted in Remark 3, we removed those realizations that produced unstable dynamics for the higher noise levels. As shown, the predictions match the exact trajectory well for noise levels up to σ = 0.01 and diverge for the worst noise case σ = 0.1. It is remarkable that the recovered models capture the dynamical behavior of the Duffing and Van der Pol oscillators using a small number of discrete state samples. In the Duffing case, both the damping rate and the cubic nonlinearity are well identified, except for the highest noise case. For the Van der Pol example, the identified models recover the limit cycle behavior even though the training set does not include any sample along the limit cycle trajectory.

Fig. 13
Comparison of the relative state (top row) and state-time derivative (bottom row) errors for the Duffing oscillator using local and global smoothers. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 14
Comparison of the relative state (top row) and state-time derivative (bottom row) errors for the Van der Pol oscillator using local and global smoothers. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 15
Relative coefficient errors for Duffing (top panels) and Van der Pol (bottom panels) oscillators via WBPDN with measurements preprocessed using smoothing splines, Tikhonov smoother, and ℓ1-trend filtering. The dashed and solid lines correspond to the solution using GCV and Pareto curves for selecting λ, respectively.
Fig. 16
Predicted states for the Duffing oscillator from the recovered governing equations for different noise levels using ℓ1-trend filter/Pareto for filtering and WBPDN/Pareto for sparse regression. Left: The true dynamics are highlighted in a solid line, and discrete observations used for training are represented by dots. The cross indicates the initial condition. Right: State predictions for different noise levels up to 20 time units.
Fig. 17
Predicted states for the Van der Pol oscillator from the recovered governing equations for different noise levels using ℓ1-trend filter/Pareto for filtering and WBPDN/Pareto for sparse regression. Left: The true dynamics are highlighted in a solid line, and discrete observations used for training are represented by dots. The cross indicates the initial condition. Right: State predictions for different noise levels up to 20 time units.
Table 3

Mean prediction errors and their standard deviations (enclosed in parentheses) up to 20 time units for the Duffing oscillator for different noise levels using ℓ1-trend filter/Pareto for filtering and WBPDN/Pareto for sparse regression

Noise   σ = 0.0001                   σ = 0.001                    σ = 0.01                     σ = 0.1
e_x1    9.06 × 10⁻⁴ (6.96 × 10⁻⁴)    8.39 × 10⁻³ (5.79 × 10⁻³)    3.07 × 10⁻¹ (3.78 × 10⁻¹)    9.76 × 10⁻¹ (2.89 × 10⁻¹)
e_x2    7.34 × 10⁻⁴ (5.57 × 10⁻⁴)    6.80 × 10⁻³ (4.62 × 10⁻³)    2.76 × 10⁻¹ (3.55 × 10⁻¹)    9.33 × 10⁻¹ (4.59 × 10⁻¹)
e_X     7.35 × 10⁻⁴ (5.58 × 10⁻⁴)    6.81 × 10⁻³ (4.63 × 10⁻³)    2.76 × 10⁻¹ (3.54 × 10⁻¹)    9.33 × 10⁻¹ (4.60 × 10⁻¹)

Notes: For each of the recovered models from the 100 noisy trajectory realizations, we predicted the state trajectories by integrating the identified governing equations. We then computed the mean and standard deviation of the prediction errors defined in Eq. (34).

Table 4

Mean prediction errors and their standard deviations (enclosed in parentheses) up to 20 time units for the Van der Pol oscillator for different noise levels using ℓ1-trend filter/Pareto for filtering and WBPDN/Pareto for sparse regression

Noise   σ = 0.0001                   σ = 0.001                    σ = 0.01                     σ = 0.1
e_x1    2.01 × 10⁻⁴ (1.51 × 10⁻⁴)    1.88 × 10⁻³ (1.52 × 10⁻³)    1.56 × 10⁻² (1.11 × 10⁻²)    1.62 × 10⁰ (1.47 × 10⁰)
e_x2    4.40 × 10⁻⁴ (3.30 × 10⁻⁴)    4.12 × 10⁻³ (3.31 × 10⁻³)    3.45 × 10⁻² (2.41 × 10⁻²)    7.46 × 10⁻¹ (3.68 × 10⁻¹)
e_X     4.35 × 10⁻⁴ (3.27 × 10⁻⁴)    4.07 × 10⁻³ (3.28 × 10⁻³)    3.42 × 10⁻² (2.38 × 10⁻²)    1.64 × 10⁰ (1.35 × 10⁰)

Notes: For each of the recovered models from the 100 noisy trajectory realizations, we predicted the state trajectories by integrating the identified governing equations. We then computed the mean and standard deviation of the prediction errors defined in Eq. (34).

6 Conclusion

The motivation of this work has been to investigate and compare the performance of several filtering strategies and model selection techniques to improve the accuracy and robustness of sparse regression methods that recover governing equations of nonlinear dynamical systems from noisy state measurements. We presented several best-performing local and global noise filtering techniques for a priori denoising of state measurements and estimation of state time-derivatives without stringent assumptions on the noise statistics. In general, we observed that global smoothing methods, combined with either GCV or Pareto curves for selecting near optimal regularization parameters λ, outperform local smoothers with GCV as the bandwidth selector. Of the global methods, ℓ1-trend filtering yields the best accuracy for state measurement denoising and estimation of state time-derivatives. However, the superior accuracy of ℓ1-trend filtering did not translate into significant differences in coefficient recovery via WBPDN or STLS compared with smoothing splines and the Tikhonov smoother. WBPDN produces smoother GCV functions and Pareto curves than STLS, and the selected regularization parameters are closer to the optimal ones. We conjecture that this is the reason why WBPDN yields more accurate governing equation coefficients. Similar to the findings in Ref. [64], the corner point criterion of Pareto curves often yields better estimates of near optimal regularization parameters than cross-validation strategies.

We emphasize that the filtering strategies and model selection techniques presented in this article can be used as guidelines for other identification algorithms that may require measurement smoothing or numerical differentiation to enhance their performance. For example, integral-based approaches, e.g., Refs. [116–118], which do not require numerical differentiation to recover governing equations, can also benefit from the filtering strategies presented in this article.

Footnote

2

MATLAB routines for STLS can be found at https://faculty.washington.edu/kutz/

Acknowledgment

This work was supported by the National Science Foundation (Grant CMMI-1454601).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

Appendix A: Difference Matrices

Here, we provide the construction of the difference matrices used in Tikhonov smoother and ℓ1-trend filtering. The (k + 1)st order difference matrix, k ≥ 0, is given by the following recursive formula [90]:
D^{(k+1)} = D^{(1)} D^{(k)} \in \mathbb{R}^{(m-k-1)\times m}
(A1)
where the first-order difference matrix corresponding to k = 0 is given by
D^{(1)} = \begin{bmatrix} -1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 1 \end{bmatrix} \in \mathbb{R}^{(m-1)\times m}
(A2)
For example, the second- and third-order difference matrices read
D^{(2)} = \begin{bmatrix} 1 & -2 & 1 & 0 & \cdots & 0 \\ 0 & 1 & -2 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & -2 & 1 \end{bmatrix} \in \mathbb{R}^{(m-2)\times m}
(A3)
and
D^{(3)} = \begin{bmatrix} -1 & 3 & -3 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 3 & -3 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & -1 & 3 & -3 & 1 \end{bmatrix} \in \mathbb{R}^{(m-3)\times m}
(A4)
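For reference, a small NumPy sketch of the recursion in Eq. (A1) is given below; dense arrays are used for clarity, though sparse storage would be preferable for long records.

```python
# Recursive construction of higher-order difference matrices, cf. Eq. (A1).
import numpy as np

def first_difference(m):
    """First-order difference matrix D^(1) of shape (m-1, m), rows (-1, 1)."""
    return np.diff(np.eye(m), axis=0)

def difference_matrix(m, order):
    """order-th difference matrix D^(order) of shape (m-order, m)."""
    D = first_difference(m)
    for _ in range(order - 1):
        D = first_difference(D.shape[0]) @ D
    return D

# Rows of D^(2) are (1, -2, 1) and rows of D^(3) are (-1, 3, -3, 1),
# matching Eqs. (A3) and (A4):
assert np.allclose(difference_matrix(6, 2)[0, :3], [1, -2, 1])
assert np.allclose(difference_matrix(6, 3)[0, :4], [-1, 3, -3, 1])
```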

Appendix B: Signal-to-Noise Ratios

In this section, we provide the signal-to-noise ratios (SNR) of each state variable of each dynamical system for all noise levels studied in this article. The SNR, expressed in decibels, is defined for each state variable j = 1, …, n as
(\mathrm{SNR}_j)_{\mathrm{dB}} = 10 \log_{10}\left( \frac{\sum_{k=1}^{m} x_j(t_k)^2}{\sigma^2} \right)
(B1)
Tables 5, 6, and 7 show the state variable SNR for the Lorenz 63 system, Duffing, and Van der Pol oscillators, respectively.
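A one-line evaluation of Eq. (B1) for a single state variable, given its noise-free samples x_j(t_k) and the noise standard deviation σ, could look like the following sketch.

```python
# Sketch: signal-to-noise ratio in dB per Eq. (B1) for one state variable.
import numpy as np

def snr_db(x_samples, sigma):
    """SNR in decibels: 10*log10( sum_k x_j(t_k)^2 / sigma^2 )."""
    return 10.0 * np.log10(np.sum(np.asarray(x_samples) ** 2) / sigma ** 2)
```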
Table 5

Signal-to-noise ratios of each state variable for the Lorenz 63 system

             SNR (dB)
Noise        State x1    State x2    State x3
σ = 0.001    101.06      102.76      110.53
σ = 0.01      81.06       82.76       90.53
σ = 0.1       61.06       62.76       70.53
σ = 1         41.06       42.76       50.53
Table 6

Signal-to-noise ratios of each state variable for the Duffing oscillator

             SNR (dB)
Noise        State x1    State x2
σ = 0.0001    99.47      105.85
σ = 0.001     79.47       85.85
σ = 0.01      59.47       65.85
σ = 0.1       39.47       45.85
Table 7

Signal-to-noise ratios of each state variable for the Van der Pol oscillator

             SNR (dB)
Noise        State x1    State x2
σ = 0.0001   105.64      104.34
σ = 0.001     85.64       84.34
σ = 0.01      65.64       64.34
σ = 0.1       45.64       44.34

Appendix C: State, State-Time Derivative, and Coefficient Error Variances

This section compares the variances of the relative state and state time-derivative errors defined in Eq. (34) after running each of the local and global smoothing methods over 100 noisy trajectory realizations, using GCV (left column) and the Pareto curve criterion (right column) to automatically select the regularization parameter λ (Figs. 18–23).

Fig. 18
Variances of the relative state (top row) and state time-derivative (bottom row) errors for the Lorenz 63 system using local and global smoothers. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 19
Variances of the relative coefficient errors for the Lorenz 63 system using global smoothers and WBPDN. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 20
Variances of the relative state (top row) and state time-derivative (bottom row) errors for the Duffing oscillator using local and global smoothers. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 21
Variances of the relative coefficient errors for the Duffing oscillator using global smoothers and WBPDN. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 22
Variances of the relative state (top row) and state time-derivative (bottom row) errors for the Van der Pol oscillator using local and global smoothers. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).
Fig. 23
Variances of the relative coefficient errors for the Van der Pol oscillator using global smoothers and WBPDN. The regularization parameters were computed using GCV (left column) and the Pareto curve criterion (right column).

Appendix D: State-Time Derivative Error Performance With Colored Noise

Refer to Fig. 24.

Fig. 24
Comparison of the relative state time-derivative errors for the Lorenz 63 system using local and global smoothers with pink, blue, and brown noise (from left to right). The regularization parameters were computed using GCV (top row) and the Pareto curve criterion (bottom row).

References

1. Fassois, S., and Rivera, D., 2007, "Applications of System Identification," IEEE Control Syst. Mag., 27(5), pp. 24–26.
2. Leontaritis, I., and Billings, S. A., 1985, "Input–Output Parametric Models for Non-linear Systems Part I: Deterministic Non-linear Systems," Int. J. Control, 41(2), pp. 303–328.
3. Schmidt, M., and Lipson, H., 2009, "Distilling Free-Form Natural Laws From Experimental Data," Science, 324(5923), pp. 81–85.
4. Bongard, J., and Lipson, H., 2007, "Automated Reverse Engineering of Nonlinear Dynamical Systems," Proc. Natl. Acad. Sci. U. S. A., 104(24), pp. 9943–9948.
5. Wang, W. X., Yang, R., Lai, Y.-C., Kovanis, V., and Grebogi, C., 2011, "Predicting Catastrophes in Nonlinear Dynamical Systems by Compressive Sensing," Phys. Rev. Lett., 106(15), p. 154101.
6. Brunton, S. L., Proctor, J. L., and Kutz, J. N., 2016, "Discovering Governing Equations From Data by Sparse Identification of Nonlinear Dynamical Systems," Proc. Natl. Acad. Sci. U. S. A., 113(15), pp. 3932–3937.
7. Goldberger, A. S., 1961, "Stepwise Least Squares: Residual Analysis and Specification Error," J. Am. Stat. Assoc., 56(296), pp. 998–1000.
8. Mallat, S. G., and Zhang, Z., 1993, "Matching Pursuits With Time–Frequency Dictionaries," IEEE Trans. Signal Process., 41(12), pp. 3397–3415.
9. Pati, Y. C., Rezaiifar, R., and Krishnaprasad, P. S., 1993, "Orthogonal Matching Pursuit: Recursive Function Approximation With Applications to Wavelet Decomposition," Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov. 1–3, IEEE, pp. 40–44.
10. Needell, D., and Tropp, J. A., 2009, "Cosamp: Iterative Signal Recovery From Incomplete and Inaccurate Samples," Appl. Comput. Harmonic Anal., 26(3), pp. 301–321.
11. Tibshirani, R., 1996, "Regression Shrinkage and Selection Via the Lasso," J. R. Stat. Soc.: Ser. B (Methodol.), 58(1), pp. 267–288.
12. Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R., 2004, "Least Angle Regression," Ann. Stat., 32(2), pp. 407–499.
13. Chen, S., Donoho, D., and Saunders, M., 2001, "Atomic Decomposition by Basis Pursuit," SIAM Rev., 43(1), pp. 129–159.
14. Bertsimas, D., Pauphilet, J., and Van Parys, B., 2020, "Sparse Regression: Scalable Algorithms and Empirical Performance," Stat. Sci., 35(4), pp. 555–578.
15. Champion, K., Lusch, B., Kutz, J. N., and Brunton, S. L., 2019, "Data-Driven Discovery of Coordinates and Governing Equations," Proc. Natl. Acad. Sci. U. S. A., 116(45), pp. 22445–22451.
16. Fukami, K., Murata, T., Zhang, K., and Fukagata, K., 2021, "Sparse Identification of Nonlinear Dynamics With Low-Dimensionalized Flow Representations," J. Fluid Mech., 926, pp. 1–35.
17. Gelß, P., Klus, S., Eisert, J., and Schütte, C., 2019, "Multidimensional Approximation of Nonlinear Dynamical Systems," ASME J. Comput. Nonlinear Dyn., 14(6), p. 061006.
18. Rudy, S. H., Kutz, J. N., and Brunton, S. L., 2019, "Deep Learning of Dynamics and Signal-Noise Decomposition With Time-Stepping Constraints," J. Comput. Phys., 396, pp. 483–506.
19. Kaheman, K., Brunton, S. L., and Kutz, J. N., 2020, "Automatic Differentiation to Simultaneously Identify Nonlinear Dynamics and Extract Noise Probability Distributions From Data," arXiv preprint arXiv:2009.08810.
20. Sun, F., Liu, Y., Sun, H., and Zhou, Z.-H., 2021, "Physics-Informed Spline Learning for Nonlinear Dynamics Discovery," 30th International Joint Conference on Artificial Intelligence (IJCAI-21), Montreal, Canada, Aug. 19–27, Z.-H. Zhou, ed., IJCAI, pp. 2054–2061.
21. Chen, Z., Liu, Y., and Sun, H., 2021, "Physics-Informed Learning of Governing Equations From Scarce Data," Nat. Commun., 12(1), pp. 1–13.
22. Cullum, J., 1971, "Numerical Differentiation and Regularization," SIAM J. Numer. Anal., 8(2), pp. 254–265.
23. Lu, S., and Pereverzev, S., 2006, "Numerical Differentiation From a Viewpoint of Regularization Theory," Math. Comput., 75(256), pp. 1853–1870.
24. Chartrand, R., 2011, "Numerical Differentiation of Noisy, Nonsmooth Data," ISRN Appl. Math., 2011, pp. 1–11.
25. Green, P. J., and Silverman, B. W., 1993, Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach, Chapman and Hall/CRC, London, UK.
26. Bowman, A. W., and Azzalini, A., 1997, Applied Smoothing Techniques for Data Analysis: The Kernel Approach With S-Plus Illustrations, Vol. 18, Oxford University Press, Oxford.
27. Aydin, D., 2007, "A Comparison of the Nonparametric Regression Models Using Smoothing Spline and Kernel Regression," World Acad. Sci. Eng. Technol., 1(12), pp. 253–257.
28. Knowles, I., and Renka, R. J., 2014, "Methods for Numerical Differentiation of Noisy Data," Electron. J. Differ. Equ., 21, pp. 235–246. https://ejde.math.txstate.edu/
29. Ahnert, K., and Abel, M., 2007, "Numerical Differentiation of Experimental Data: Local Versus Global Methods," Comput. Phys. Commun., 177(10), pp. 764–774.
30. Lubansky, A., Yeow, Y. L., Leong, Y.-K., Wickramasinghe, S. R., and Han, B., 2006, "A General Method of Computing the Derivative of Experimental Data," AIChE J., 52(1), pp. 323–332.
31. Fan, J., and Gijbels, I., 1996, Local Polynomial Modelling and Its Applications: Monographs on Statistics and Applied Probability, Vol. 66, Chapman and Hall/CRC, London, UK.
32. Loader, C., 2006, Local Regression and Likelihood, Springer Science & Business Media, New York.
33. De Boor, C., 1978, A Practical Guide to Splines, Vol. 27, Springer-Verlag, New York.
34. Tikhonov, A. N., Goncharsky, A., Stepanov, V., and Yagola, A. G., 2013, Numerical Methods for the Solution of Ill-Posed Problems, Vol. 328, Springer Science & Business Media, New York.
35. Hodrick, R. J., and Prescott, E. C., 1997, "Postwar US Business Cycles: An Empirical Investigation," J. Money Credit Bank., 29(1), pp. 1–16.
36. Rudin, L. I., Osher, S., and Fatemi, E., 1992, "Nonlinear Total Variation Based Noise Removal Algorithms," Physica D, 60(1–4), pp. 259–268.
37. Chambolle, A., 2004, "An Algorithm for Total Variation Minimization and Applications," J. Math. Imag. Vis., 20(1–2), pp. 89–97.
38. Brunton, S. L., and Kutz, J. N., 2019, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, Cambridge University Press, Cambridge, UK.
39. Mallat, S. G., 1989, "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Trans. Pattern Anal. Mach. Intell., 11(7), pp. 674–693.
40. Donoho, D. L., and Johnstone, I. M., 1995, "Adapting to Unknown Smoothness Via Wavelet Shrinkage," J. Am. Stat. Assoc., 90(432), pp. 1200–1224.
41. Elsner, J. B., and Tsonis, A. A., 2013, Singular Spectrum Analysis: A New Tool in Time Series Analysis, Springer Science & Business Media, New York.
42. Golyandina, N., and Zhigljavsky, A., 2013, Singular Spectrum Analysis for Time Series, Springer, Berlin/Heidelberg.
43. Zhu, W., Mousavi, S. M., and Beroza, G. C., 2019, "Seismic Signal Denoising and Decomposition Using Deep Neural Networks," IEEE Trans. Geosci. Remote Sens., 57(11), pp. 9476–9488.
44. Antczak, K., 2018, "Deep Recurrent Neural Networks for ECG Signal Denoising," arXiv preprint arXiv:1807.11551.
45. Fan, G., Li, J., and Hao, H., 2020, "Vibration Signal Denoising for Structural Health Monitoring by Residual Convolutional Neural Networks," Measurement, 157, p. 107651.
46. Hastie, T., Tibshirani, R., and Friedman, J., 2009, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, New York.
47. Akaike, H., 1974, "A New Look at the Statistical Model Identification," IEEE Trans. Autom. Control, 19(6), pp. 716–723.
48. Schwarz, G., 1978, "Estimating the Dimension of a Model," Ann. Stat., 6(2), pp. 461–464.
49. Mallows, C. L., 2000, "Some Comments on CP," Technometrics, 42(1), pp. 87–94.
50. Morozov, V. A., 1966, "On the Solution of Functional Equations by the Method of Regularization," Dokl. Akad. Nauk, 167(3), pp. 510–512. http://mi.mathnet.ru/eng/dan/v167/i3/p510
51. Morozov, V. A., 2012, Methods for Solving Incorrectly Posed Problems, Springer, New York.
52. Stone, M., 1974, "Cross-Validatory Choice and Assessment of Statistical Predictions," J. R. Stat. Soc.: Ser. B (Methodol.), 36(2), pp. 111–133.
53. Golub, G. H., Heath, M., and Wahba, G., 1979, "Generalized Cross-Validation As a Method for Choosing a Good Ridge Parameter," Technometrics, 21(2), pp. 215–223.
54. Hansen, P. C., 1992, "Analysis of Discrete Ill-Posed Problems by Means of the L-Curve," SIAM Rev., 34(4), pp. 561–580.
55. Hansen, P. C., 1999, "The L-Curve and Its Use in the Numerical Treatment of Inverse Problems," Computational Inverse Problems in Electrocardiology, WIT Press, Southampton, UK, pp. 119–142.
56. D'Amico, M., and Ferrigno, G., 1992, "Comparison Between the More Recent Techniques for Smoothing and Derivative Assessment in Biomechanics," Med. Biol. Eng. Comput., 30(2), pp. 193–204.
57. Stickel, J. J., 2010, "Data Smoothing and Numerical Differentiation by a Regularization Method," Comput. Chem. Eng., 34(4), pp. 467–475.
58. Morelli, E. A., and Klein, V., 2016, Aircraft System Identification: Theory and Practice, Vol. 2, Sunflyte Enterprises, Williamsburg, VA.
59. Jategaonkar, R. V., 2006, Flight Vehicle System Identification: A Time Domain Methodology, American Institute of Aeronautics and Astronautics, Reston, VA.
60. Golub, G. H., and Van Loan, C. F., 1980, "An Analysis of the Total Least Squares Problem," SIAM J. Numer. Anal., 17(6), pp. 883–893.
61. Van Huffel, S., and Vandewalle, J., 1991, The Total Least Squares Problem: Computational Aspects and Analysis, SIAM, Philadelphia, PA.
62. Natarajan, B. K., 1995, "Sparse Approximate Solutions to Linear Systems," SIAM J. Comput., 24(2), pp. 227–234.
63. Eldar, Y. C., and Kutyniok, G., 2012, Compressed Sensing: Theory and Applications, Cambridge University Press, Cambridge, UK.
64. Cortiella, A., Park, K.-C., and Doostan, A., 2021, "Sparse Identification of Nonlinear Dynamical Systems Via Reweighted ℓ1-Regularized Least Squares," Comput. Methods Appl. Mech. Eng., 376, p. 113620.
65. Candès, E. J., 2008, "The Restricted Isometry Property and Its Implications for Compressed Sensing," C. R. Math., 346(9), pp. 589–592.
66. Zhang, L., and Schaeffer, H., 2019, "On the Convergence of the Sindy Algorithm," Multiscale Model. Simul., 17(3), pp. 948–972.
67. Zou, H., 2006, "The Adaptive Lasso and Its Oracle Properties," J. Am. Stat. Assoc., 101(476), pp. 1418–1429.
68. Candès, E. J., Wakin, M. B., and Boyd, S. P., 2008, "Enhancing Sparsity by Reweighted ℓ1 Minimization," J. Fourier Anal. Appl., 14(5), pp. 877–905.
69. Yang, X., and Karniadakis, G. E., 2013, "Reweighted ℓ1 Minimization Method for Stochastic Elliptic Differential Equations," J. Comput. Phys., 248, pp. 87–108.
70. Peng, J., Hampton, J., and Doostan, A., 2014, "A Weighted ℓ1-Minimization Approach for Sparse Polynomial Chaos Expansions," J. Comput. Phys., 267, pp. 92–111.
71. Adcock, B., 2017, "Infinite-Dimensional ℓ1 Minimization and Function Approximation From Pointwise Data," Constr. Approx., 45(3), pp. 345–390.
72. Donoho, D., Drori, I., Stodden, V., and Tsaig, Y., 2019, "Sparselab: Seeking Sparse Solutions to Linear Systems of Equations," http://sparselab.stanford.edu
73. Van den Berg, E., and Friedlander, M., 2019, SPGL1: A Solver for Large-Scale Sparse Reconstruction, December. https://friedlander.io/spgl1
74. Grant, M., and Boyd, S., 2014, CVX: Matlab Software for Disciplined Convex Programming, Version 2.1.
75. Diamond, S., and Boyd, S., 2016, "Cvxpy: A Python-Embedded Modeling Language for Convex Optimization," J. Mach. Learn. Res., 17(1), pp. 2909–2913.
76. Lancaster, P., and Salkauskas, K., 1981, "Surfaces Generated by Moving Least Squares Methods," Math. Comput., 37(155), pp. 141–158.
77. Sober, B., and Levin, D., 2019, "Manifold Approximation by Moving Least-Squares Projection (MMLS)," Constr. Approx., 52, pp. 1–46.
78. Sober, B., Aizenbud, Y., and Levin, D., 2021, "Approximation of Functions Over Manifolds: A Moving Least-Squares Approach," J. Comput. Appl. Math., 383, p. 113140.
79. Carr, J. C., Beatson, R. K., McCallum, B. C., Fright, W. R., McLennan, T. J., and Mitchell, T. J., 2003, "Smooth Surface Reconstruction From Noisy Range Data," Proceedings of the First International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, Melbourne, Australia, Feb. 11–14, pp. 119–126.
80. Cleveland, W. S., 1979, "Robust Locally Weighted Regression and Smoothing Scatterplots," J. Am. Stat. Assoc., 74(368), pp. 829–836.
81. Epanechnikov, V. A., 1969, "Non-parametric Estimation of a Multivariate Probability Density," Theory Probab. Appl., 14(1), pp. 153–158.
82. Fan, J., and Gijbels, I., 1995, "Data-Driven Bandwidth Selection in Local Polynomial Fitting: Variable Bandwidth and Spatial Adaptation," J. R. Stat. Soc.: Ser. B (Methodol.), 57(2), pp. 371–394.
83. Fan, J., and Gijbels, I., 1995, "Adaptive Order Polynomial Fitting: Bandwidth Robustification and Bias Reduction," J. Comput. Graph. Stat., 4(3), pp. 213–227.
84. Savitzky, A., and Golay, M. J., 1964, "Smoothing and Differentiation of Data by Simplified Least Squares Procedures," Anal. Chem., 36(8), pp. 1627–1639.
85. Cleveland, W. S., and Devlin, S. J., 1988, "Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting," J. Am. Stat. Assoc., 83(403), pp. 596–610.
86. Schafer, R. W., 2011, "What Is a Savitzky–Golay Filter? [Lecture Notes]," IEEE Signal Process. Mag., 28(4), pp. 111–117.
87. Fan, J., Gijbels, I., Hu, T.-C., and Huang, L.-S., 1996, "A Study of Variable Bandwidth Selection for Local Polynomial Regression," Stat. Sinica, 6(1), pp. 113–127. https://www.jstor.org/stable/24306002
88. Kim, S.-J., Koh, K., Boyd, S., and Gorinevsky, D., 2009, "ℓ1 Trend Filtering," SIAM Rev., 51(2), pp. 339–360.
89. Kim, S., Koh, K., Lustig, M., Boyd, S., and Gorinevsky, D., 2007, "An Interior-Point Method for Large-Scale ℓ1-Regularized Least Squares," IEEE J. Sel. Top. Signal Process., 1(4), pp. 606–617.
90. Tibshirani, R. J., 2014, "Adaptive Piecewise Polynomial Estimation Via Trend Filtering," Ann. Stat., 42(1), pp. 285–323.
91. Mammen, E., and van de Geer, S., 1997, "Locally Adaptive Regression Splines," Ann. Stat., 25(1), pp. 387–413.
92. Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., and Knight, K., 2005, "Sparsity and Smoothness Via the Fused Lasso," J. R. Stat. Soc.: Ser. B (Stat. Methodol.), 67(1), pp. 91–108.
93. Tibshirani, R. J., and Taylor, J., 2011, "The Solution Path of the Generalized Lasso," Ann. Stat., 39(3), pp. 1335–1371.
94. Ramdas, A., and Tibshirani, R. J., 2016, "Fast and Flexible Admm Algorithms for Trend Filtering," J. Comput. Graph. Stat., 25(3), pp. 839–858.
95. Ding, J., Tarokh, V., and Yang, Y., 2018, "Model Selection Techniques: An Overview," IEEE Signal Process. Mag., 35(6), pp. 16–34.
96. Hansen, P. C., 2010, Discrete Inverse Problems: Insight and Algorithms, SIAM, Philadelphia, PA.
97. Ramsay, J. O., and Silverman, B. W., 2005, Functional Data Analysis, Vol. 4, Springer, New York.
98. Gu, C., 2013, Smoothing Spline ANOVA Models, Vol. 297, Springer, New York.
99. Zou, H., Hastie, T., and Tibshirani, R., 2007, "On the 'Degrees of Freedom' of the Lasso," Ann. Stat., 35(5), pp. 2173–2192.
100. Wahba, G., 1990, Spline Models for Observational Data, SIAM, Philadelphia, PA.
101. Jansen, M., 2015, "Generalized Cross Validation in Variable Selection With and Without Shrinkage," J. Stat. Plann. Inference, 159, pp. 90–104.
102. Van Den Berg, E., and Friedlander, M. P., 2009, "Probing the Pareto Frontier for Basis Pursuit Solutions," SIAM J. Sci. Comput., 31(2), pp. 890–912.
103. Hansen, P. C., Jensen, T. K., and Rodriguez, G., 2007, "An Adaptive Pruning Algorithm for the Discrete L-Curve Criterion," J. Comput. Appl. Math., 198(2), pp. 483–492.
104. Cultrera, A., and Callegaro, L., 2020, "A Simple Algorithm to Find the L-Curve Corner in the Regularisation of Ill-Posed Inverse Problems," IOP SciNotes, 1(2), p. 025004.
105. Léger, J.-C., 1999, "Menger Curvature and Rectifiability," Ann. Math., 149(3), pp. 831–869.
106. Hanke, M., 1996, "Limitations of the L-Curve Method in Ill-Posed Problems," BIT Numer. Math., 36(2), pp. 287–301.
107. Vogel, C. R., 1996, "Non-convergence of the L-Curve Regularization Parameter Selection Method," Inverse Prob., 12(4), p. 535.
108. Regińska, T., 1996, "A Regularization Parameter in Discrete Ill-Posed Problems," SIAM J. Sci. Comput., 17(3), pp. 740–749.
109. Timmer, J., and Koenig, M., 1995, "On Generating Power Law Noise," Astron. Astrophys., 300, p. 707.
110. Kariya, T., and Kurata, H., 2004, Generalized Least Squares, John Wiley & Sons, Chichester, UK.
111. Donoho, D. L., Stodden, V. C., and Tsaig, Y., 2007, About SparseLab.
112. Lorenz, E. N., 1963, "Deterministic Nonperiodic Flow," J. Atmos. Sci., 20(2), pp. 130–141.
113. Little, A. V., Maggioni, M., and Rosasco, L., 2017, "Multiscale Geometric Methods for Data Sets I: Multiscale SVD, Noise and Curvature," Appl. Comput. Harmonic Anal., 43(3), pp. 504–567.
114. Duffing, G., 1918, "Forced Oscillations With Variable Natural Frequency and Their Technical Relevance," Heft 41/42, Vieweg, Braunschweig.
115. Van der Pol, B., 1920, "A Theory of the Amplitude of Free and Forced Triode Vibrations," Radio Rev., 1, pp. 701–710, 754–762.
116. Schaeffer, H., and McCalla, S. G., 2017, "Sparse Model Selection Via Integral Terms," Phys. Rev. E, 96(2), p. 023302.
117. Messenger, D. A., and Bortz, D. M., 2021, "Weak Sindy: Galerkin-Based Data-Driven Model Selection," Multiscale Model. Simul., 19(3), pp. 1474–1497.
118. Messenger, D. A., and Bortz, D. M., 2021, "Weak Sindy for Partial Differential Equations," J. Comput. Phys., 443, p. 110525.