Wearable sensors embedded with inertial measurement units have become commonplace for the measurement of head impact biomechanics, but individual systems often suffer from a lack of measurement fidelity. While some researchers have focused on developing highly accurate, single sensor systems, we have taken a parallel approach in investigating optimal estimation techniques with multiple noisy sensors. In this work, we present a sensor network methodology that utilizes multiple skin patch sensors arranged on the head and combines their data to obtain a more accurate estimate than any individual sensor in the network. Our methodology visually localizes subject-specific sensor transformations, and based on rigid body assumptions, applies estimation algorithms to obtain a minimum mean squared error estimate. During mild soccer headers, individual skin patch sensors had over 100% error in peak angular velocity magnitude, angular acceleration magnitude, and linear acceleration magnitude. However, when properly networked using our visual localization and estimation methodology, we obtained kinematic estimates with median errors below 20%. While we demonstrate this methodology with skin patch sensors in mild soccer head impacts, the formulation can be generally applied to any dynamic scenario, such as measurement of cadaver head impact dynamics using arbitrarily placed sensors.

Introduction

Inertial measurement units (IMUs), which contain a triaxial linear accelerometer and triaxial gyroscope for six degree-of-freedom dynamic measurement, are now ubiquitous in biomechanical measurement systems due to their low cost and portability. Motion capture systems have long been the gold standard for high fidelity biomechanics measurements of relatively slow motions, but they are often expensive, require trained personnel to operate, depend on environmental factors such as lighting and camera placement [1,2], and are not appropriate for measurement of accelerations due to amplified noise in differentiation [3,4]. The field of concussion biomechanics has been particularly hampered by the limitations of motion capture systems. Measurement of concussive head impacts must generally be performed on the field in contact sports athletes, who are naturally at high risk of sustaining concussions. Additionally, motion measurements such as linear accelerations, which correlate with impact force, and angular velocities and accelerations, which are thought to correlate with brain tissue strain [5–9], cannot be directly quantified using motion capture systems without differentiation, which amplifies high frequency noise [3,4]. Thus, researchers have been developing alternative methods for collecting dynamic measurements from the field to study concussion [10–14].

Embedding an IMU into a wearable device has been a logical approach to obtaining dynamic head impact measurements, but a single IMU lacks the ability to make direct measurements of angular acceleration, relying instead on differentiation of the angular velocity measurement obtained by the gyroscope. To address this limitation, some researchers have developed special arrangements of multiple linear accelerometers and/or IMUs to make direct measurements of angular acceleration through rigid body assumptions [15–20]. Some arrangements typically used to measure dynamics in anthropomorphic headforms require precise orthogonal placement of linear accelerometers and/or IMUs with respect to one another to compute angular acceleration directly, but are generally not practical for use in vivo [16,20,21]. One arrangement consisting of six arbitrarily placed linear accelerometers can make direct measurements of angular acceleration, but sacrifices the ability to measure angular velocity directly and relies on the assumption that the initial angular velocity is 0 rad/s [17,18]. Finally, a 12-accelerometer arrangement can make measurements of angular velocity and angular acceleration, but requires noncoplanar placement of sensors [15,19].

Despite these constraints on various IMU and linear accelerometer arrangements, researchers have developed measurement systems mounted to headgear such as the helmet [11,17,22], soft tissue such as the skin [23], or bony landmarks such as the upper dentition [15,24–26]. One such device, the head impact telemetry system [11,17], has been used by researchers to collect thousands of head impact events over the past decade [10–14]. These data have in turn been analyzed to identify concussion risk factors with the goal of developing quantitative, real-time diagnostic tools for concussion [5], and have contributed greatly to our understanding of concussion biomechanics. However, validation studies of sensor systems have shown that most sensors suffer from substantial measurement errors in linear acceleration, angular velocity, and angular acceleration [22–24,27] due to significant decoupling from the skull. As a result, we need a more accurate sensing system to confirm current hypotheses regarding the underlying biomechanics of concussions.

Because of the limitations of previous sensor systems, there is now a push toward developing new sensor systems that couple more tightly to the skull for accurate measurement of head impact biomechanics [15,25]. Recent literature has suggested that the instrumented mouthguard, which couples directly to the upper dentition and thus has a more rigid coupling to the skull, can measure linear acceleration, angular velocity, and angular acceleration magnitudes to within 20% error [15,23–26]. This is a significant improvement over previous systems, which were shown to have over 200% error in measuring peak linear acceleration, angular velocity, and angular acceleration in vivo [23].

However, not all contact sports in which concussion biomechanics are studied use mouthguards. Sports such as soccer and cycling have high concussion rates and cannot be universally studied with an instrumented mouthguard. Thus, the goal of this work is to develop and evaluate a general methodology for making direct, accurate estimates of head impact linear acceleration, angular velocity, and angular acceleration using a network of arbitrarily placed, potentially noisy, IMU sensor systems. While individual sensor systems commonly suffer from substantial measurement errors, we hypothesize that applying optimal estimation techniques to a network of IMU sensor systems can account for individual sensor errors to achieve a more accurate global estimate of head impact kinematics.

Previously, researchers have used optimal estimation techniques and IMUs to accurately measure head orientation [28–36]. However, these methodologies are not suitable for dynamic events, such as head impacts, because of an underlying assumption that gravitational acceleration dominates the linear acceleration measurement [37]. One previous investigation developed a method for fusing multiple IMUs [38] that is capable of making estimates of dynamic variables such as angular acceleration, but did so in the context of augmenting navigation and position tracking with a global positioning system. Thus, the formulation was not evaluated for its ability to measure dynamic variables of interest in head impact biomechanics. The formulations presented here are designed to capture dynamic motions and are evaluated for their ability to estimate accelerations and velocities. Furthermore, these formulations can be extended to make biomechanical measurements of other highly dynamic events commonly associated with injury.

Materials and Methods

Instrumentation.

As a demonstration of our sensor network methodology, we used a network of four X2 skin patch sensors (X2 Biosystems, Seattle, WA) arranged on the head to collect head impact biomechanics measurements and compared them against a reference instrumented mouthguard (Fig. 1). It was previously shown that individual X2 skin patch sensors suffer from up to 200% measurement error in linear acceleration, angular velocity, and angular acceleration magnitudes [23]. The skin patch sensor's poor individual performance enables a compelling evaluation and demonstration of our methodology's capabilities. The sensor network's kinematics estimates were compared to a reference instrumented mouthguard, which was chosen as the reference sensor because of its previously demonstrated accuracy in measuring head impact kinematics in vivo [23].

Fig. 1
Instrumentation: (a) X2 skin patch and instrumented mouthguard, (b) representation of the X2 skin patch sensor network and reference instrumented mouthguard, and (c) image of the sensor network on a human subject

Each X2 skin patch sensor was adhered to the skin using a hypoallergenic adhesive. The recommended placement of the X2 skin patch sensor is behind the ear on the mastoid process, so we adhered an X2 skin patch sensor behind each ear. We also placed an X2 skin patch sensor below the eyes on the inferior edge of the orbit to create a four-sensor network for measuring head impact kinematics.

An instrumented mouthguard was custom made for each subject by pressure forming soft ethyl–vinyl–acetate onto subject-specific upper dentition molds. This method provided a tight coupling between the instrumented mouthguard and the upper dentition, which is coupled to the skull via stiff ligaments [25,26]. Electronics for the instrumented mouthguard were embedded in layers of ethyl–vinyl–acetate.

Both the instrumented mouthguard and the X2 skin patch sensors contained an IMU measuring head impact linear accelerations and angular velocities at 1000 Hz. The triaxial linear accelerometer in both instruments (part H3LIS331DL) was filtered with a CFC 180, 300 Hz fourth-order Butterworth low-pass filter (Society of Automotive Engineers standard) [25,26,39]. The triaxial angular rate gyroscopes (part ITG3500A and part L3G4200D for the instrumented mouthguard and X2 skin patch, respectively) were filtered with fourth-order Butterworth low-pass filters with cutoffs based on manufacturer specifications (184 Hz and 110 Hz for the instrumented mouthguard and X2 skin patch, respectively). For analysis of individual sensors, angular acceleration was obtained through five-point stencil differentiation of the angular velocity [40].
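As an illustration of this preprocessing, the sketch below implements a fourth-order Butterworth low-pass filter and five-point stencil differentiation in Python with NumPy/SciPy. The array names, sampling constants, and the zero-phase (filtfilt) choice are our own assumptions for demonstration, not details taken from the devices' firmware.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate (Hz), per the text

def lowpass(x, cutoff_hz, fs=FS, order=4):
    """Fourth-order Butterworth low-pass filter (zero-phase for illustration)."""
    b, a = butter(order, cutoff_hz / (fs / 2.0))  # normalized cutoff
    return filtfilt(b, a, x, axis=0)

def five_point_derivative(w, dt=1.0 / FS):
    """Five-point stencil differentiation of angular velocity [40]."""
    alpha = np.zeros_like(w)
    alpha[2:-2] = (-w[4:] + 8.0 * w[3:-1] - 8.0 * w[1:-3] + w[:-4]) / (12.0 * dt)
    return alpha

# accel = lowpass(raw_accel, 300.0)         # both accelerometers (CFC 180)
# gyro_mg = lowpass(raw_gyro_mg, 184.0)     # mouthguard gyroscope
# gyro_patch = lowpass(raw_gyro_xp, 110.0)  # skin patch gyroscope
# alpha = five_point_derivative(gyro_mg)    # angular acceleration
```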

Both the instrumented mouthguard and the X2 skin patch were calibrated prior to testing to remove offsets in linear acceleration and angular velocity. During calibration, sensors were statically oriented in their neutral configuration with respect to the ground, and 500 linear acceleration and angular velocity samples were collected and averaged to obtain the calibration offsets. Note that this calibration removes the measurement of linear acceleration resulting from gravity, which other formulations use to estimate sensor orientation.
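A minimal sketch of this offset calibration, assuming the static recordings are stored as (N × 3) NumPy arrays (the names are illustrative):

```python
import numpy as np

def calibration_offsets(static_accel, static_gyro, n_samples=500):
    """Average 500 static samples in the neutral orientation (per the text)."""
    return (static_accel[:n_samples].mean(axis=0),
            static_gyro[:n_samples].mean(axis=0))

# a_offset, w_offset = calibration_offsets(static_accel, static_gyro)
# accel -= a_offset  # also removes the gravity component, as noted above
# gyro -= w_offset
```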

To record impacts, each sensor was set to trigger whenever its linear acceleration magnitude exceeded 3.5 g. Given that the sensors were all triggered with the same linear acceleration threshold, we assumed that their data were time synchronized. Kinematics were recorded for a total of 100 ms, with 30 ms pretrigger and 70 ms post-trigger.
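The triggering and windowing logic can be sketched as follows, again with illustrative names; at 1000 Hz, the 30 ms pretrigger and 70 ms post-trigger windows correspond to 30 and 70 samples:

```python
import numpy as np

def extract_impact(accel, threshold_g=3.5, pre=30, post=70):
    """Return the 100 ms window around the first sample exceeding threshold."""
    mag = np.linalg.norm(accel, axis=1)  # resultant magnitude in g
    over = mag > threshold_g
    if not over.any():
        return None                      # no trigger in this recording
    idx = int(np.argmax(over))           # first triggering sample
    return accel[max(idx - pre, 0):idx + post]
```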

Mild Head Impacts.

To assess the accuracy of the sensor network approach, we recruited and consented four male subjects under Stanford Institutional Review Board protocol No. 40350 to perform ten mild soccer head impacts each in the laboratory. Soccer balls were delivered using a ball launcher (Sports Tutor, Burbank, CA) at speeds of up to 7 m/s, which were expected to deliver impacts below 10 g. This procedure was similar to the one used to assess the accuracy of individual sensors [23].

Theory and Calculation

We compared individual skin patch kinematics measurements against reference mouthguard kinematics to provide a baseline performance for our sensor network. For the sensor network, we extended a previous multiple-IMU fusion algorithm [38] to optimally combine data from individual sensors for estimation of dynamic head impact kinematics, which were then compared to the mouthguard reference kinematics. Our sensor network formulation was based on the assumption that the individual sensors were affixed to a single rigid body (the head). The implication of this assumption is that the sensors return to the same position and orientation with respect to the head (and each other) before and after each impact. However, as we observed in previous work evaluating skin patch sensor accuracy [23], the sensors themselves can oscillate due to underlying soft tissue motion during impacts, and this motion must be accounted for to obtain the true motion of the underlying rigid body head. Thus, the sensors in the network were assumed to obey the laws of rigid body dynamics.

The formulation then required two steps. First, a sensor localization method obtained relative positions and orientations of the X2 skin patches in the network with respect to one another. These relative positions and orientations were necessary for relating the accelerations and velocities of each individual sensor with others in the network using rigid body assumptions. Second, we implemented optimal estimation algorithms utilizing rigid body system dynamics to combine data from individual sensors.

Sensor Localization.

To optimally combine data from multiple IMUs located on the head, we first had to identify their relative locations and orientations with respect to one another. Most wearable IMU systems for measuring head impact kinematics require some knowledge of their position and orientation with respect to the head center of gravity. This is because linear accelerations are projected to the head center of gravity, and kinematics are assessed with respect to the anatomical coordinate frame [22,26]. These systems, however, typically assume a sensor position and orientation with respect to the head center of gravity based on 50th percentile male head properties and an assumed sensor location, neither of which is subject specific.

For our sensor network approach, we first assumed a rough approximation of the sensors' relative orientations and positions based on sensor placements and head geometry, similar to how previous sensor systems generalized a “one-size-fits-all” sensor transform. Next, we also developed a visual localization methodology to determine relative sensor position and orientation to give more accurate and subject specific measures.

For the visual localization methodology, we first adhered fixed-size checkerboards with a known 1.27 mm (0.05 in) square size to each X2 skin patch sensor. We then took still images of the sensor network, ensuring that each still image contained a pair of sensors. This required us to adhere two extra checkerboards on the left and right cheeks between the eye and the ear. For each still image, we used the MATLAB command "findCheckerboards" from the Computer Vision Toolbox (MathWorks, Natick, MA) to locate the checkerboard on each sensor. We then used the "extrinsics" MATLAB command to identify the relative position and orientation of the camera with respect to each checkerboard in the image, and combined these to get the relative position and orientation of the sensor pair. Given still images of sensor pairs, we could sequentially determine the relative positions and orientations of each sensor with respect to the others (Fig. 2). Multiple images of each pair of sensors were obtained and averaged to get more precise estimates of relative position and orientation.
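The transform-chaining step can be sketched in Python, assuming the camera extrinsics for each checkerboard have already been assembled into 4 × 4 homogeneous transforms (the function and variable names here are our own):

```python
import numpy as np

def relative_transform(T_cam_a, T_cam_b):
    """Pose of checkerboard B with respect to checkerboard A from one image."""
    return np.linalg.inv(T_cam_a) @ T_cam_b

# Chain pairwise poses around the head to localize sensors that never appear
# together in a single image, e.g., eye -> cheek (image 1), cheek -> ear
# (image 2):
# T_eye_cheek = relative_transform(T_cam1_eye, T_cam1_cheek)
# T_cheek_ear = relative_transform(T_cam2_cheek, T_cam2_ear)
# T_eye_ear = T_eye_cheek @ T_cheek_ear
```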

Fig. 2
Visual localization: (a) we placed checkerboards with known square sizes and obtained images of sensor pairs. Using the known checkerboard sizes, we then determined the relative translation and orientation between imaged sensor pairs, and (b) built a map of the entire sensor network. Note, while six sensors are used in the visual localization, only four sensors actively record impact data. The remaining two (placed on the cheeks) were positioned as intermediaries because the ear sensors and the eye sensors could not be reliably captured in a single image.

Rigid Body Dynamics.

Because we assumed that the head was a single rigid body to which the IMUs were attached, the linear acceleration, angular velocity, and angular acceleration measured by the IMUs should obey rigid body dynamics. This allowed us to take advantage of two relationships in rigid body dynamics. First, for a rigid body, the angular velocity and the angular acceleration are identical at all points on the rigid body at a given instant in time. Second, the linear accelerations at two points on a rigid body (a_p, a_q) are related by the angular velocity (ω) and the angular acceleration (α) of the rigid body, and the vector from the first point to the second (r_q/p), as follows:

$$a_q = a_p + \omega \times (\omega \times r_{q/p}) + \alpha \times r_{q/p} \tag{1}$$

This allowed us to relate the angular velocities and linear accelerations as measured by all of the sensors in the network given that we knew their relative orientations and positions (which were determined either using assumed values or through visual localization).
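As a worked example of Eq. (1), the following snippet transfers a linear acceleration between two points on a rigid body; the numeric values are illustrative only:

```python
import numpy as np

def transfer_acceleration(a_p, omega, alpha, r_qp):
    """Eq. (1): a_q = a_p + w x (w x r_q/p) + alpha x r_q/p."""
    return a_p + np.cross(omega, np.cross(omega, r_qp)) + np.cross(alpha, r_qp)

a_p = np.array([5.0, 0.0, 0.0])      # m/s^2 at point p
omega = np.array([0.0, 10.0, 0.0])   # rad/s
alpha = np.array([0.0, 500.0, 0.0])  # rad/s^2
r_qp = np.array([0.0, 0.0, 0.1])     # m, from point p to point q

print(transfer_acceleration(a_p, omega, alpha, r_qp))  # [55., 0., -10.]
```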

Estimation Algorithms.

With the rigid body assumptions and the relative orientation and positions of the sensors in the network, we developed algorithms for optimally combining data from multiple sensors to get better estimates of head impact kinematics. We designed (1) a sensor fusion algorithm to estimate the mouthguard linear acceleration, angular velocity, and angular acceleration state from sensor measurements at each time-step; (2) a Kalman filter to integrate angular accelerations to obtain another estimate of angular velocity; and (3) a particle filter to address nonlinearities in the rigid body formulation for more accurate estimates of the state.

Sensor Fusion.

Our first algorithm was a sensor fusion implementation that combined sensor data at each time-step independently. The general form for the sensor fusion algorithm is given in Eqs. (2)–(4) [41]. Equation (2) describes the observation process for observations z_t of the state x_t through linear process H with Gaussian process noise ν_t having covariance Σ_ν. Equation (3) describes the covariance of the resulting estimate, and Eq. (4) describes how a minimum mean squared error state estimate x̂_t is obtained from fused observations. The a priori state estimate x̂_apriori and a priori state covariance Σ_x represent any known information about the state prior to making measurements:

$$z_t = H x_t + \nu_t \tag{2}$$

$$P_t = \left(\Sigma_x^{-1} + H^T \Sigma_\nu^{-1} H\right)^{-1} \tag{3}$$

$$\hat{x}_t = P_t H^T \Sigma_\nu^{-1}\left(z_t - H\hat{x}_{apriori}\right) + \hat{x}_{apriori} \tag{4}$$
In our network, the observations for a single skin patch sensor (z_t^s) were defined as the triaxial angular velocities (ω) and the triaxial linear accelerations (a^s) at the sensor location, as defined in a common reference frame (Eq. (5)). The state (x_t) represented the kinematics of interest: triaxial angular velocity (ω), triaxial angular acceleration (α), and triaxial linear acceleration at the mouthguard for comparison to the reference (a^mg) (Eq. (6)):

$$z_t^s = \left[\omega_x, \omega_y, \omega_z, a_x^s, a_y^s, a_z^s\right]^T \tag{5}$$

$$x_t = \left[\omega_x, \omega_y, \omega_z, \alpha_x, \alpha_y, \alpha_z, a_x^{mg}, a_y^{mg}, a_z^{mg}\right]^T \tag{6}$$
Because of the orthogonality of the sensor measurements and the definition of the errors, we note that the rotation of the kinematic measurements from each individual sensor frame (SF) to the common reference frame through an orthonormal rotation matrix R does not affect the observation covariance Σν, as given below:

$$\Sigma_\omega^{SF} = \sigma_\omega I_{3\times3} \tag{7}$$

$$\omega^{CF} = R\,\omega^{SF} \quad\Rightarrow\quad \Sigma_\omega^{CF} = R\,\Sigma_\omega^{SF} R^T \tag{8}$$

$$\Sigma_\omega^{CF} = \sigma_\omega R\, I_{3\times3}\, R^T \tag{9}$$

$$\Sigma_\omega^{CF} = \sigma_\omega I_{3\times3} = \Sigma_\omega^{SF} \tag{10}$$
The relation between the observation from a single sensor z_t^s and the state x_t was then defined by the rigid body assumptions outlined previously. However, because the equation relating the linear accelerations between two points on a rigid body (Eq. (1)) has a nonlinear term (ω×(ω×r_q/p), commonly referred to as the Coriolis term), we needed to linearize the process. This was done by assuming that the Coriolis term was substantially smaller than the other terms and thus could be ignored. This differs from previous multiple-IMU formulations, which use a first-order Taylor expansion to linearize the Coriolis contribution [38]. The linear observation process for a single sensor (H^s) is then given by the following equation:

$$H^s = \begin{bmatrix} I_{3\times3} & 0_{3\times3} & 0_{3\times3} \\[4pt] 0_{3\times3} & \begin{matrix} 0 & r_z^{mg/s} & -r_y^{mg/s} \\ -r_z^{mg/s} & 0 & r_x^{mg/s} \\ r_y^{mg/s} & -r_x^{mg/s} & 0 \end{matrix} & I_{3\times3} \end{bmatrix} \tag{11}$$
To represent observations from multiple sensors in the network, we stacked the individual observation vectors and the individual process matrices in the following equations:

$$z_t = \left[\left(z_t^{s=1}\right)^T, \left(z_t^{s=2}\right)^T, \ldots, \left(z_t^{s=n}\right)^T\right]^T \tag{12}$$

$$H = \left[\left(H^{s=1}\right)^T, \left(H^{s=2}\right)^T, \ldots, \left(H^{s=n}\right)^T\right]^T \tag{13}$$
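A sketch of this construction in Python, assuming each mouthguard-to-sensor vector r_mg/s is expressed in the common frame (the helper names are our own):

```python
import numpy as np

def skew(r):
    """Cross-product matrix [r]_x such that skew(r) @ v = r x v."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def H_single(r_mg_s):
    """6x9 linearized observation process for one sensor (Eq. (11))."""
    H = np.zeros((6, 9))
    H[0:3, 0:3] = np.eye(3)      # angular velocity observed directly
    H[3:6, 3:6] = -skew(r_mg_s)  # alpha x r_mg/s contribution to a_s
    H[3:6, 6:9] = np.eye(3)      # mouthguard linear acceleration
    return H

def H_network(positions):
    """Stack per-sensor observation matrices (Eq. (13))."""
    return np.vstack([H_single(r) for r in positions])
```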

Here, we note that to ensure full observability of the state, we required H to have at least rank 9. In the case of a single sensor, the rank of H = H^{s=1} is trivially 6. However, with two sensors in the network, the rank of the full observation process matrix H is only 8 due to the skew symmetric block. To show this, we first see that the upper three rows of H^{s=2} are redundant with those of H^{s=1} and thus do not add rank. Then, we note that there exists a rotation matrix R that rotates the common frame such that the vector pointing from sensor s = 1 to sensor s = 2 is aligned with an axis (in Fig. 3, the vector is aligned with the x-axis). In this frame, the vectors from the mouthguard to the two sensors, r_mg/s, differ only along that axis, resulting in an additional shared row between H^{s=1} and H^{s=2}. The addition of H^{s=2} to the full observation process matrix H thus adds only two to the rank, bringing the total to 8. In fact, with any number of collinear sensors, the rank of the full observation process matrix H remains 8.

Fig. 3
Undetermined angular acceleration with collinear sensors: The observation process H matrix does not have rank 9 with collinear sensors because the reference frame can be rotated such that two components in the location vector rmg/s are equal. This results in an inability to determine the angular acceleration about the common line.

Thus, for our network formulation to have full observability of the desired state, we required at least one set of three noncollinear sensors. This requirement can also be intuitively understood through the rigid body formulation, wherein the angular acceleration about the vector between two sensors (or aligned with collinear sensors) cannot be determined because the cross product between the angular acceleration along the vector and the vector itself is always the zero vector.
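This observability argument is easy to check numerically with the observation matrix builder sketched above; three collinear sensors yield rank 8, while a noncollinear set recovers the full rank of 9:

```python
import numpy as np
# assumes H_network from the previous sketch

collinear = [np.array([0.1, 0.0, 0.0]),
             np.array([0.2, 0.0, 0.0]),
             np.array([0.3, 0.0, 0.0])]
noncollinear = [np.array([0.1, 0.0, 0.0]),
                np.array([0.0, 0.1, 0.0]),
                np.array([0.0, 0.0, 0.1])]

print(np.linalg.matrix_rank(H_network(collinear)))     # 8
print(np.linalg.matrix_rank(H_network(noncollinear)))  # 9
```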

Because we assumed no a priori knowledge of the state, we represented this mathematically as in Eq. (14). We defined expected errors in the IMU angular velocity (Eq. (15)) and linear acceleration (Eq. (16)) based on a previous study using similar in vivo head impacts [23]. Angular velocity error was modeled as a Gaussian with a standard deviation of 50% of the estimated angular velocity magnitude at the previous time-step, so long as this was greater than a minimum of 2 rad/s representing the sensor noise band. For linear acceleration at the location of the IMU, we modeled the error as a Gaussian with a standard deviation of 5% of the peak linear acceleration at the previous time-step, so long as this was greater than a minimum of 0.5 g representing the sensor noise band. This scaling error represented an expectation that sensors would experience greater noise during dynamic impact events, scaled with the severity of the impact. The measurement process covariance Σν was then defined in Eq. (17):

$$\Sigma_x^{-1} = 0_{9\times9}, \qquad \hat{x}_{apriori} = 0_{9\times1} \tag{14}$$

$$\sigma_\omega = \max\left(0.5\,\lVert\hat{\omega}_{t-1}\rVert,\ 2.0\right) \tag{15}$$

$$\sigma_a = \max\left(0.05\,\lVert\hat{a}_{t-1}^{CG}\rVert,\ 0.5\right) \tag{16}$$

$$\Sigma_\nu = \begin{bmatrix} \sigma_\omega I_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & \sigma_a I_{3\times3} \end{bmatrix} \tag{17}$$
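The measurement update can then be sketched as below, with the impact-scaled noise model applied identically to every sensor (a simplifying assumption of this sketch) and no a priori information, per Eqs. (3), (4), and (14)–(17):

```python
import numpy as np

def noise_covariance(w_prev_mag, a_prev_mag, n_sensors):
    """Block-diagonal observation covariance per Eqs. (15)-(17)."""
    sigma_w = max(0.5 * w_prev_mag, 2.0)   # rad/s noise floor
    sigma_a = max(0.05 * a_prev_mag, 0.5)  # g noise floor
    block = np.diag([sigma_w] * 3 + [sigma_a] * 3)
    return np.kron(np.eye(n_sensors), block)

def fuse(z, H, Sigma_v):
    """Minimum mean squared error estimate with no prior (Eqs. (3) and (4))."""
    Si = np.linalg.inv(Sigma_v)
    P = np.linalg.inv(H.T @ Si @ H)  # Eq. (3) with Sigma_x^-1 = 0
    x_hat = P @ H.T @ Si @ z         # Eq. (4) with x_apriori = 0
    return x_hat, P
```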

Kalman Filter.

We next implemented a Kalman filter by including a time update step (Eqs. (18) and (19)) with the previously described measurement update (Eqs. (2) and (4)). In the general Kalman filter formulation, the time update step propagates the state (Eq. (18)) and state covariance (Eq. (19)) one time-step forward. While in the sensor fusion approach we assumed no a priori knowledge of the state, we could now use this integrated state update estimate and covariance as the a priori estimate and covariance. In our network formulation, the time update step involved integrating the angular acceleration to obtain an estimate of the angular velocity. The linear integration process is given in Eq. (20), and we assumed no process noise associated with the dynamics (Eq. (21)). Again, we assumed a linear form of the rigid body dynamics formulation:

$$\hat{x}_{apriori} = F x_{t-1} + w_t \tag{18}$$

$$\Sigma_x = F P F^T + \Sigma_w \tag{19}$$

$$F = \begin{bmatrix} I_{3\times3} & I_{3\times3}\,dt & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \end{bmatrix} \tag{20}$$

$$\Sigma_w = 0_{9\times9}, \qquad w_t = 0_{9\times1} \tag{21}$$
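A minimal sketch of this time update, mirroring the sparse F of Eq. (20) (only the angular velocity block receives an integrated a priori estimate):

```python
import numpy as np

DT = 0.001  # time-step at 1000 Hz

def time_update(x_post, P_post, dt=DT):
    """Propagate state and covariance per Eqs. (18)-(21), with zero noise."""
    F = np.zeros((9, 9))
    F[0:3, 0:3] = np.eye(3)       # carry angular velocity forward
    F[0:3, 3:6] = np.eye(3) * dt  # integrate angular acceleration
    x_prior = F @ x_post          # Eq. (18), w_t = 0
    P_prior = F @ P_post @ F.T    # Eq. (19), Sigma_w = 0
    return x_prior, P_prior
```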

Particle Filter.

Finally, we also extended our Kalman filter using a particle filter to properly account for the nonlinearity in the rigid body formulation (Eq. (1)). Briefly, the particle filter represents state estimates as a collection of weighted particles, which together represent the possibly non-Gaussian distribution of the state. Particles can then be propagated individually through the state dynamics (Eq. (18)) and then individually weighted through comparison to measurements made by individual skin patch sensors through the nonlinear measurement process to obtain a new particle representation of the state distribution. The following equation (a specific instance of the rigid body Eq. (1)) represents the nonlinear measurement process for sensor linear acceleration (replacing Eq. (2) of the Kalman filter):

$$a_t^s = a_t^{mg} + \omega \times \left(\omega \times r_{mg/s}\right) + \alpha \times r_{mg/s} \tag{22}$$

Our particle filter utilized 20,000 particles with a systematic resampling technique [42–44]. The number of particles was determined by assessing the repeatability of filter results (peak magnitude kinematics within 5%). Because angular accelerations and linear accelerations were not propagated through the state dynamics (as seen in the sparsity of the F dynamic propagation matrix in Eq. (20)), particle values for triaxial angular acceleration and triaxial linear acceleration were resampled from a Gaussian distribution with mean equal to the state at the previous time-step and standard deviations of 200 rad/s² and 2 g, respectively. These standard deviations were chosen because they spanned the acceleration differences between time-steps observed in individual skin patch sensors during the mild soccer head impacts.
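The systematic resampling step [42–44] can be sketched as follows; it draws one uniform offset and steps evenly through the cumulative weights:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Resample particle rows in proportion to their normalized weights."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n  # evenly spaced pointers
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    return particles[np.searchsorted(cumulative, positions)]
```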

Evaluation.

We post-processed sensor data using combinations of the sensor localization and sensor fusion algorithms to understand the contributions of each step. There were six total combinations, pairing the two sensor localization approaches (assumed transform and visual localization) with the three algorithms for combining sensor data (sensor fusion, Kalman filter, and particle filter).

We initially evaluated the performance of the full four-sensor network. We then compared the performance of the best algorithm utilizing the full four-sensor network with a minimum required three-sensor subset. We chose the right ear, right eye, and left eye sensors because we expected these sensors to have the least error in location and orientation following visual localization. Indeed, error in location and orientation likely compounded in visual localization, and the final sensor observed (the left ear sensor) was expected to have the highest error.

Performance of each methodology was evaluated by comparing the peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude (max(‖method‖)) against the mouthguard reference (max(‖mg‖)) (Eq. (23)). This is standard in the field, as peak kinematic magnitudes are thought to be predictive of mild traumatic brain injuries [15,23–25,27]. In addition, we computed the normalized root-mean-square (NRMS) error of the angular velocity magnitude, angular acceleration magnitude, and linear acceleration magnitude time traces (n samples) between the reference mouthguard and the methodologies to evaluate performance over the entire impact period (Eq. (24)). We performed paired one-sided t-tests to determine which methods had significantly smaller errors than others:

$$\text{percent error} = \frac{\max(\lVert mg\rVert) - \max(\lVert method\rVert)}{\max(\lVert mg\rVert)} \times 100\% \tag{23}$$

$$\text{NRMS error} = \frac{\sqrt{\frac{1}{n}\sum_{n}\left(\lVert mg\rVert - \lVert method\rVert\right)^2}}{\max(\lVert mg\rVert)} \times 100\% \tag{24}$$
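Both metrics operate on the magnitude time traces and can be sketched directly from Eqs. (23) and (24):

```python
import numpy as np

def percent_error(mg, method):
    """Eq. (23): signed error in peak magnitude relative to the reference."""
    return (mg.max() - method.max()) / mg.max() * 100.0

def nrms_error(mg, method):
    """Eq. (24): root-mean-square trace error normalized by the reference peak."""
    return np.sqrt(np.mean((mg - method) ** 2)) / mg.max() * 100.0
```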

Results

For this study, we recruited four male subjects with an average age of 25.5 ± 2.1 years, an average weight of 76.3 ± 8.3 kg, and an average height of 179.7 ± 9.4 cm. Each subject performed ten head impacts, but only a total of 35 impacts (8 + 8 + 9 + 10 head impacts) had recorded data from all sensors in the network and the reference mouthguard. This yielded a sufficient number of impacts with which to perform statistical analysis on the proposed filtering methods, and the impacts between subjects were found to have similar reference peak kinematics (p > 0.05), except for one subject who had significantly (p < 0.05) lower peak linear acceleration magnitudes than the others. We visually inspected sensor placement to obtain the following generic sensor transformations:
$$T_{patch\_left\_ear} = \begin{bmatrix} 0.000 & 0.000 & 1.000 & 0.102 \\ 0.000 & 1.000 & 0.000 & 0.051 \\ 1.000 & 0.000 & 0.000 & 0.051 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{25}$$

$$T_{patch\_left\_eye} = \begin{bmatrix} 1.000 & 0.000 & 0.000 & 0.000 \\ 0.000 & 0.000 & 1.000 & 0.025 \\ 0.000 & 1.000 & 0.000 & 0.051 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{26}$$

$$T_{patch\_right\_eye} = \begin{bmatrix} 0.000 & 1.000 & 0.000 & 0.000 \\ 0.000 & 0.000 & 1.000 & 0.000 \\ 1.000 & 0.000 & 0.000 & 0.051 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{27}$$

$$T_{patch\_right\_ear} = \begin{bmatrix} 0.000 & 0.000 & 1.000 & 0.102 \\ 1.000 & 0.000 & 0.000 & 0.025 \\ 0.000 & 1.000 & 0.000 & 0.051 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{28}$$
For comparison, we obtained the following sensor transforms using the visual localization technique with one of our subjects:

$$T_{patch\_left\_ear} = \begin{bmatrix} 0.010 & 0.115 & 0.993 & 0.686 \\ 0.805 & 0.591 & 0.061 & 0.013 \\ 0.594 & 0.799 & 0.099 & 0.073 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{29}$$

$$T_{patch\_left\_eye} = \begin{bmatrix} 0.620 & 0.756 & 0.211 & 0.000 \\ 0.344 & 0.019 & 0.939 & 0.025 \\ 0.706 & 0.654 & 0.272 & 0.051 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{30}$$

$$T_{patch\_right\_eye} = \begin{bmatrix} 0.544 & 0.838 & 0.047 & 0.001 \\ 0.380 & 0.196 & 0.904 & 0.068 \\ 0.748 & 0.509 & 0.425 & 0.053 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{31}$$

$$T_{patch\_right\_ear} = \begin{bmatrix} 0.157 & 0.006 & 0.988 & 0.091 \\ 0.915 & 0.378 & 0.144 & 0.040 \\ 0.373 & 0.926 & 0.065 & 0.079 \\ 0.000 & 0.000 & 0.000 & 1.000 \end{bmatrix} \tag{32}$$

Figure 4 outlines the accuracy of each individual skin patch sensor and each of the methodologies in estimating peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude when compared with the mouthguard reference. These results are also presented in Table 1, along with the reference measurements of peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude. We observed that individual skin patch sensors exhibited median errors in excess of 100%, with the exception of the right eye sensor in estimating peak linear acceleration magnitude. This was consistent with previous work describing the performance of skin patch mounted sensors in estimating head impacts in vivo.

Fig. 4
Aggregated results show that particle filter with visual localization transforms has best accuracy: We assessed the accuracy of individual skin patch sensors and our methodologies in estimating (a) peak angular velocity magnitude, (b) peak angular acceleration magnitude, and (c) peak linear acceleration magnitude compared with the reference instrumented mouthguard. Using generic transforms gives significantly (p < 0.05) better estimates of linear and angular acceleration than individual sensors; however, using sensor transformations obtained from visual localization improves accuracy further. Significant improvement in angular velocity estimates is only obtained with visual localization, with the particle filter having significantly lower errors than all other methods. For each box and whisker, the central line represents the median, the box represents 25th and 75th percentiles, and the whiskers extend a maximum of 1.5 times the interquartile range with outliers represented by "+" symbols.
Table 1
Reference measurements and aggregated errors

Method/sensor | Angular velocity error (%), mean ± std. dev. | Angular acceleration error (%), mean ± std. dev. | Linear acceleration error (%), mean ± std. dev.
Reference mouthguard(a) | 4.21 ± 1.46 rad/s | 633.06 ± 220.46 rad/s² | 6.06 ± 1.60 g
Left ear skin patch sensor | 178.51 ± 111.11 | 286.38 ± 210.06 | 290.69 ± 143.96
Left eye skin patch sensor | 207.51 ± 128.04 | 342.44 ± 304.69 | 190.29 ± 101.40
Right eye skin patch sensor | 202.41 ± 136.39 | 295.97 ± 268.45 | 87.61 ± 60.10
Right ear skin patch sensor | 135.97 ± 118.77 | 200.48 ± 216.77 | 121.72 ± 115.12
Sensor fusion, generic transforms | 14.22 ± 32.15 | 4.54 ± 36.96 | 11.82 ± 23.36
Kalman filter, generic transforms | 59.96 ± 38.22 | 4.54 ± 36.96 | 11.82 ± 23.36
Particle filter, generic transforms | 42.99 ± 35.50 | 6.47 ± 36.16 | 15.39 ± 23.85
Sensor fusion, visual localization | 33.33 ± 43.45 | −14.27 ± 32.41 | 3.82 ± 25.10
Kalman filter, visual localization | 20.76 ± 29.08 | −14.27 ± 32.41 | 3.82 ± 25.10
Particle filter, visual localization | 19.32 ± 25.81 | −12.65 ± 32.33 | 6.17 ± 25.37

(a) Reference mouthguard values are reported in original units (angular velocity in rad/s, angular acceleration in rad/s², and linear acceleration in g). All other values are relative errors (%) with respect to the reference mouthguard.

Using generic sensor transformations with each of our three estimation methodologies, we observed significant (p < 0.05) improvements in the estimate of peak linear acceleration and peak angular acceleration magnitude. We further improved estimate accuracy in peak linear acceleration magnitude and peak angular acceleration magnitude using transformations obtained from visual localization with our estimation methodologies (p < 0.05). For angular velocity, estimate error was significantly reduced when using sensor transforms obtained through visual localization. Furthermore, the particle filter formulation had significantly lower error in angular velocity magnitude estimates than the sensor fusion approach (p < 0.05); however, the particle filter did not have significantly lower errors than the Kalman filter approach (p = 0.7).

The significant improvement in measurement accuracy using the particle filter and visual localization compared with individual skin patch sensors was also observed in an example time history of a single impact (Fig. 5). The time history trace shows that the individual skin patch sensors had large oscillations compared with the reference instrumented mouthguard as a result of the underlying skin motion. Our methodology, however, accurately captured both the peak magnitude kinematics as well as the magnitude kinematics time history. This was evidenced quantitatively in the NRMS error, in which individual patch sensors had greater than 94%, 95%, and 55% error in angular velocity magnitude, angular acceleration magnitude, and linear acceleration magnitude, respectively, and the particle filter with visual localization had 35%, 23%, and 19% NRMS error, respectively.

Fig. 5
Kinematics time history demonstrating oscillations in individual sensors smoothed by filter: Time histories of (a) angular velocity magnitude, (b) angular acceleration magnitude, and (c) linear acceleration magnitude as measured by the reference instrumented mouthguard, the individual skin patch sensors, and the best performing particle filter utilizing sensor transforms obtained from visual localization. While individual skin patch sensors exhibit large oscillations in angular velocity measurement (which are amplified through differentiation in angular acceleration and propagated through projection in linear acceleration), our methodology successfully accounts for individual sensor errors and returns a smooth estimate closely matching the ground truth measurement.

Finally, we assessed the performance of our best visual localization and particle filter methodology using the full four-sensor network and a minimum three-sensor subset (Fig. 6). The three-sensor network was significantly worse at estimating peak angular velocity magnitude (p < 0.05) compared with the four-sensor network over the 35 impacts analyzed. However, there was no significant difference in peak angular acceleration magnitude and peak linear acceleration magnitude estimates.

Fig. 6
Improved angular velocity accuracy when using more sensors: Our sensor fusion methodologies can accommodate any number of sensors; however, the rigid body formulation requires at least three. We compared our visual localization sensor transforms and particle filter methodology when using the full four-sensor network against the minimal three-sensor network. We observed significantly higher (p < 0.05) errors when using the minimal three-sensor network in estimating (a) peak angular velocity magnitude; however, there was no significant change in accuracy for (b) peak angular acceleration magnitude and (c) peak linear acceleration magnitude estimates.

Discussion

In this work, we demonstrate methodologies for combining kinematic data from a network of noisy, wearable inertial measurement units. Our most accurate methodology combines a visual localization method for obtaining the relative positions and orientations of sensors in the network with a particle filter utilizing rigid body assumptions. This methodology achieved median estimate errors below 20% in peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude, whereas individual sensors in the network had over 100% error in all kinematic estimates.

To understand why this combination yields the best performance, we examine the other methodologies presented here. First, we observe that the rigid body formulation (Eq. (1)) relies heavily on the relative positions of each sensor to relate linear accelerations at two different points with the angular velocity and angular acceleration. The relative orientations of each sensor are also used to properly rotate measurements to the reference mouthguard frame. Errors in relative sensor position and orientation lead to inconsistencies between the sensor measurements and the rigid body assumptions, which manifest as inaccurate estimates of head kinematics. Thus, the more accurate and subject-specific visual localization technique gives significantly more accurate estimates of head kinematics.

Second, individual IMU sensors only directly measure angular velocity and linear acceleration at the sensor location. Thus, angular acceleration is typically obtained through differentiation, which amplifies any noise in angular velocity, and linear acceleration is typically projected to an anatomical location, which propagates noise in angular acceleration into the linear acceleration estimate. Thus, all of our estimation techniques are expected to improve estimates of both linear acceleration and angular acceleration as they make direct estimates of angular acceleration using the rigid body formulation. This eliminates the amplification of noise through differentiation in angular acceleration estimates, and the subsequent projection of noise in linear acceleration estimates.

Finally, we observe differences between the individual optimal estimation algorithms in conjunction with sensor transforms obtained from visual localization. The particle filter formulation has significantly lower errors in peak angular velocity magnitude than the sensor fusion algorithm. The Kalman filter algorithm also has lower errors in peak angular velocity magnitude than the sensor fusion algorithm, though the difference did not reach significance (p = 0.066). This indicates that the system dynamics update step included in the Kalman filter and particle filter implementations is important for making accurate estimates of angular velocity. Indeed, the only difference between the sensor fusion and Kalman filter algorithms is the inclusion of the system dynamics to obtain an a priori estimate of the state angular velocity and its covariance for the measurement update (or sensor fusion) step, and we note that the process for estimating linear acceleration and angular acceleration is equivalent for the sensor fusion and Kalman filter algorithms.

Curiously, the inclusion of the nonlinear Coriolis term in the rigid body formulation does not appear to affect results significantly in our experiments. The particle filter peak angular velocity magnitude error is lower than the Kalman filter peak angular velocity magnitude error, but not significantly so (p = 0.7). To understand why the Coriolis term (ω×(ω×r_q/p)) does not affect results significantly, we observe the term's contribution to the formulation presented in Eq. (1), compared with the other two terms (α×r_q/p and a_p), with r_q/p = [0.1, 0.1, 0.1] m (Fig. 7). While the other two terms have similar magnitude, the Coriolis term has negligible contribution. This suggests that the removal of the Coriolis term in the Kalman filter may be sufficient for dynamic estimation using a sensor network in the case of head impacts, which is advantageous because the Kalman filter runs in near real-time, whereas the particle filter containing 20,000 particles of a nine-dimensional state took minutes to complete a single trial. In cases where the relative contribution of the Coriolis term is significant (when ‖ω‖² is comparable to ‖α‖), the particle filter or a proper linearization of the Coriolis term (such as with an unscented or extended Kalman filter [38]) is expected to have superior accuracy.
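A quick numeric illustration, using kinematics on the order of the mild headers measured here (roughly 5 rad/s and 600 rad/s², per Table 1) and the same r_q/p:

```python
import numpy as np

omega = np.array([5.0, 0.0, 0.0])    # rad/s, order of the measured peaks
alpha = np.array([600.0, 0.0, 0.0])  # rad/s^2
r = np.array([0.1, 0.1, 0.1])        # m

coriolis = np.cross(omega, np.cross(omega, r))
tangential = np.cross(alpha, r)
print(np.linalg.norm(coriolis), np.linalg.norm(tangential))
# ~3.5 vs ~85 m/s^2: the Coriolis term is more than an order of
# magnitude smaller than the alpha x r term at these severities
```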

Fig. 7
Coriolis term has negligible contribution in rigid body formulation: With r_q/p = [0.1, 0.1, 0.1] m, we compute the contribution of each term in the rigid body formulation (Eq. (1)). The α×r_q/p and measured linear acceleration (a_p) terms have similar magnitudes, while the Coriolis term (ω×(ω×r_q/p)) is negligible in comparison. This indicates that the linearization of the rigid body formulation in the Kalman filter may be sufficient for sensor network estimation of dynamic events.

While we demonstrated the ability of the sensor network formulation to measure head impact kinematics using skin patch sensors, this formulation can be extended to other areas using any combination of sensors (including the instrumented mouthguard), with the caveat that each sensor must have independent noise (as required by the optimal estimation techniques). One particular area of interest is the instrumentation of cadavers for head impact reconstructions [25]. Instrumentation in anthropomorphic test dummies (ATDs) is well documented [16,20,21], and some of these techniques have been utilized in cadavers as well. However, these instrumentation setups either require specific placements of sensors that necessitate sensor blocks for precise placement [20] or differentiation of gyroscope angular velocity to obtain angular acceleration, which is prone to errors from amplified noise [25,45]. The presented methodology allows for arbitrary placement of three noncollinear IMUs for direct estimation of angular velocity, angular acceleration, and linear acceleration at the head center of gravity. The methodology can also be applied more widely to other areas in the injury biomechanics field where high fidelity measurement of dynamic events using noisy sensors is an ongoing challenge.

The methodologies presented here represent a first step toward addressing the challenge of obtaining accurate, direct measurement of head impact linear acceleration, angular velocity, and angular acceleration; however, there are still some limitations. First, the head impacts used in this study were mild in nature and less severe than median level American football impacts [11,13,14], and far less severe than concussive level events [9]. In particular, the predicted Gaussian error from soft tissue motion used in this paper was characterized based on similar low severity impacts, and may not hold at higher severities. Thus, further evaluation of the underlying error models, and testing of the presented methodologies, must be performed before deployment to the field.

Second, the impact parameters were limited to sagittal plane soccer ball headers, which do not necessarily represent the performance of the network in other header directions. It is known that wearable sensors have varying performance depending on the impact location, so it is plausible that the network too will have varying performance. This is particularly important because it is well known that the brain is more susceptible to certain directions of motion than others, and it is imperative that sensor systems measure kinematics in these directions accurately [9,46,47]. While we only evaluated the sensor network in sagittal plane soccer ball headers, we suspect that the network will be more robust to impact conditions because the estimation techniques upon which our methodologies are based are designed to account for individual sensor susceptibilities.

Third, while the visual localization technique for obtaining sensor transformations is expected to be superior to using generic "one-size-fits-all" transforms, it has not been validated against a ground truth. A more robust validation would utilize imaging technologies such as CT scans to obtain highly accurate, ground truth sensor transformations. Furthermore, we did not evaluate whether these sensor transforms remained constant throughout testing, though we did not see any significant change in performance between the first header and the final header that would indicate changes in sensor transforms. Practically, these sensor transforms should be evaluated every time sensors are applied (or reapplied) to the head. Future work could also include a parameter estimation component that would automatically optimize sensor transform parameters using techniques such as the dual filter approach [48]. This would be more robust to sensors that displace during impacts, but would be computationally challenging for real-time analysis and would rely more heavily on the proposed sensor error model. Finally, we note that the proposed visual localization can be extended to currently available single sensor systems to obtain subject-specific transforms between sensors and the head.

Finally, we made the assumption that the sensors were all time synchronized because they were set to trigger at the same linear acceleration threshold. Because of inherent sensor noise, and because the linear acceleration is known to differ at different points on a rigid body (Eq. (1)), this was not necessarily true. A sensor network in the true sense, with sensors capable of communicating with one another and triggering simultaneously, would likely yield even greater improvements in estimate accuracy [38]. Despite this, our sensor network was still capable of estimating the dynamic response with below 20% median errors.

Conclusion

In conclusion, we explored several methodologies for combining data from individual inertial measurement units to obtain a more accurate estimate of head impact kinematics using optimal estimation techniques. In addition, we present a visual localization technique for obtaining subject-specific estimates of sensor locations and orientations on the head. In conjunction, these techniques allow users to obtain significantly more accurate estimates of head impact kinematics, with the combination of a particle filter and visual localization yielding the lowest errors.

The general mathematical framework presented here has been designed such that it can be applied generally to problems involving the measurement of dynamic events with noisy measurement systems. While we present a case using a network of skin patch sensors in soccer heading scenarios, this formulation can support any sensor form factor and any impact scenario. As we begin to further apply wearable instrumentation, it is important we develop such estimation techniques to obtain more accurate measurements than those obtained when treating sensors independently. Thus, with this work, we seek to lay the groundwork for further development of sensor networks for the measurement of dynamic events.

Acknowledgment

We would like to thank Professor Mykel J. Kochenderfer for providing technical assistance and guidance.

Funding Data

  • National Science Foundation (Graduate Student Fellowship Program).

  • Office of Naval Research Young Investigator Program (N00014-16-1-2949).

  • Stanford Bio-X (Graduate Student Fellowship Program).

Nomenclature

aq =

linear acceleration at an arbitrary point q on a rigid body as defined in the rigid body fixed frame

CF =

a common rigid body fixed frame

F =

linear time dynamics propagation process

H =

linear measurement process relating the system state to the measurement. Assumed to be time invariant

mg =

the mouthguard reference sensor (e.g., amg represents the linear acceleration at the mouthguard reference sensor location)

||mg|| =

resultant of a vector (angular velocity, angular acceleration, or linear acceleration) as measured by the reference mouthguard

||method|| =

resultant of a vector (angular velocity, angular acceleration, or linear acceleration) as measured by a sensor or data fusion method

Pt =

state covariance at time t

rq/p =

vector originating at an arbitrary point p on a rigid body and terminating at an arbitrary point q on the rigid body, defined in the rigid body fixed frame

R =

the rotation between sensor aligned fixed frame and common rigid body fixed frame

s =

a single IMU sensor (e.g., as represents the linear acceleration at the IMU sensor location)

SF =

a sensor aligned fixed frame

Tsensor =

homogenous transformation matrix describing the relative translation and orientation of a sensor with respect to the mouthguard in the rigid body fixed frame

νt =

measurement process noise at time t

vectorx,y,z =

x-, y-, or z-component of a vector defined in the rigid body fixed frame

wt =

time dynamics propagation process noise at time t

xt =

system state at time t

x̂t =

estimate of system state at time t

x̂apriori =

a priori estimate of system state

zt =

measurement taken at time t

zts =

measurement from a single IMU sensor s taken at time t

α =

angular acceleration of a rigid body as defined in the rigid body fixed frame

Σν =

measurement process covariance

Σw =

time dynamics propagation covariance

Σx =

a priori state covariance

σa =

error in IMU linear acceleration measurement

σω =

error in IMU angular velocity measurement

ω =

angular velocity of a rigid body as defined in the rigid body fixed frame

References

1. Moeslund, T. B., and Granum, E., 2001, "A Survey of Computer Vision-Based Human Motion Capture," Comput. Vision Image Understanding, 81(3), pp. 231–268.
2. Moeslund, T. B., Hilton, A., and Krüger, V., 2006, "A Survey of Advances in Vision-Based Human Motion Capture and Analysis," Comput. Vision Image Understanding, 104(2–3), pp. 90–126.
3. Harvey, P. S., and Gavin, H., 2014, "Assessing the Accuracy of Vision-Based Accelerometry," Exp. Mech., 54(2), pp. 273–277.
4. Leifer, J., Weems, B., Kienle, S. C., and Sims, A. M., 2011, "Three-Dimensional Acceleration Measurement Using Videogrammetry Tracking Data," Exp. Mech., 51(2), pp. 199–217.
5. Gabler, L. F., Crandall, J. R., and Panzer, M. B., 2016, "Assessment of Kinematic Brain Injury Metrics for Predicting Strain Responses in Diverse Automotive Impact Conditions," Ann. Biomed. Eng., 44(12), pp. 3705–3718.
6. Ji, S., Zhao, W., Li, Z., and McAllister, T. W., 2014, "Head Impact Accelerations for Brain Strain-Related Responses in Contact Sports: A Model-Based Investigation," Biomech. Model. Mechanobiol., 13(5), pp. 1121–1136.
7. Takhounts, E. G., Ridella, S. A., Rowson, S., and Duma, S. M., 2010, "Kinematic Rotational Brain Injury Criterion (BRIC)," 22nd International Technical Conference on the Enhanced Safety of Vehicles, Washington, DC, June 13–16, pp. 1–10. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.454.2216&rep=rep1&type=pdf
8. Kleiven, S., 2006, "Evaluation of Head Injury Criteria Using a Finite Element Model Validated Against Experiments on Localized Brain Motion, Intracerebral Acceleration, and Intracranial Pressure," Int. J. Crashworthiness, 11(1), pp. 65–79.
9. Hernandez, F., Wu, L. C., Yip, M. C., Laksari, K., Hoffman, A. R., Lopez, J. R., Grant, G., Kleiven, S., and Camarillo, D. B., 2015, "Six Degree-of-Freedom Measurements of Human Mild Traumatic Brain Injury," Ann. Biomed. Eng., 43(8), pp. 1918–1934.
10. Naunheim, R. S., Standeven, J., Richter, C., and Lewis, L. M., 2000, "Comparison of Impact Data in Hockey, Football, and Soccer," J. Trauma, 48(5), pp. 938–941.
11. Duma, S. M., Manoogian, S. J., Bussone, W. R., Brolinson, P. G., Goforth, M. W., Donnenwerth, J. J., Greenwald, R. M., Chu, J. J., and Crisco, J. J., 2005, "Analysis of Real-Time Head Accelerations in Collegiate Football Players," Clin. J. Sports Med., 15(1), pp. 3–8.
12. Brolinson, P. G., Manoogian, S., McNeely, D., Goforth, M., Greenwald, R., and Duma, S., 2006, "Analysis of Linear Head Accelerations From Collegiate Football Impacts," Curr. Sports Med. Rep., 5(1), pp. 23–28.
13. Crisco, J. J., Wilcox, B. J., Beckwith, J. G., Chu, J. J., Duhaime, A.-C., Rowson, S., Duma, S. M., Maerlender, A. C., McAllister, T. W., and Greenwald, R. M., 2011, "Head Impact Exposure in Collegiate Football Players," J. Biomech., 44(15), pp. 2673–2678.
14. Mihalik, J. P., Bell, D. R., and Marshall, S. W., 2007, "Measurement of Head Impacts in Collegiate Football Players: An Investigation of Positional and Event-Type Differences," Neurosurgery, 61(6), pp. 1229–1235.
15. Bartsch, A., Samorezov, S., Benzel, E., Miele, V., and Brett, D., 2014, "Validation of an 'Intelligent Mouthguard' Single Event Head Impact Dosimeter," Stapp Car Crash J., 58, pp. 1–27.
16. Kang, Y.-S., Moorhouse, K., and Bolte, J. H., 2011, "Measurement of Six Degrees of Freedom Head Kinematics in Impact Conditions Employing Six Accelerometers and Three Angular Rate Sensors (6aω Configuration)," ASME J. Biomech. Eng., 133(11), p. 111007.
17. Crisco, J. J., Chu, J. J., and Greenwald, R. M., 2004, "An Algorithm for Estimating Acceleration Magnitude and Impact Location Using Multiple Nonorthogonal Single-Axis Accelerometers," ASME J. Biomech. Eng., 126(6), pp. 849–854.
18. Franck, J. A., Blume, J., Crisco, J. J., and Franck, C., 2015, "Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements," ASME J. Biomech. Eng., 137(9), p. 091004.
19. Zappa, B., Legnani, G., Van Den Bogert, A. J., and Adamini, R., 2001, "On the Number and Placement of Accelerometers for Angular Velocity and Acceleration Determination," ASME J. Dyn. Syst. Meas. Control, 123(3), pp. 552–553.
20. Yoganandan, N., Zhang, J., Pintar, F. A., and Liu, Y. K., 2006, "Lightweight Low-Profile Nine-Accelerometer Package to Obtain Head Angular Accelerations in Short-Duration Impacts," J. Biomech., 39(7), pp. 1347–1354.
21. Padgaonkar, A. J., Krieger, K., and King, A., 1975, "Measurement of Angular Acceleration of a Rigid Body Using Linear Accelerometers," ASME J. Appl. Mech., 42(3), pp. 552–556.
22. Campbell, K. R., Warnica, M. J., Levine, I. C., Brooks, J. S., Laing, A. C., Burkhart, T. A., and Dickey, J. P., 2016, "Laboratory Evaluation of the gForce Tracker, a Head Impact Kinematic Measuring Device for Use in Football Helmets," Ann. Biomed. Eng., 44(4), pp. 1246–1256.
23. Wu, L. C., Nangia, V., Bui, K., Hammoor, B., Kurt, M., Hernandez
,
F.
,
Kuo
,
C.
, and
Camarillo
,
D. B.
,
2016
, “
In Vivo Evaluation of Wearable Head Impact Sensors
,”
Ann. Biomed. Eng.
,
44
(
4
), pp.
1234
1245
.
24.
Siegmund
,
G. P.
,
Guskiewicz
,
K. M.
,
Marshall
,
S. W.
,
DeMarco
,
A. L.
, and
Bonin
,
S. J.
,
2016
, “
Laboratory Validation of Two Wearable Sensor Systems for Measuring Head Impact Severity in Football Players
,”
Ann. Biomed. Eng.
,
44
(
4
), pp.
1257
1274
.
25.
Kuo
,
C.
,
Wu
,
L. C.
,
Hammoor
,
B. T.
,
Luck
,
J. F.
,
Cutcliffe
,
H. C.
,
Lynall
,
R. C.
,
Kait
,
J. R.
,
Campbell
,
K. R.
,
Mihalik
,
J. P.
,
Bass
,
C. R.
, and
Camarillo
,
D. B.
,
2016
, “
Effect of the Mandible on Mouthguard Measurements of Head Kinematics
,”
J. Biomech.
,
49
(
9
), pp.
1845
1853
.
26.
Camarillo
,
D. B.
,
Shull
,
P. B.
,
Mattson
,
J.
,
Shultz
,
R.
, and
Garza
,
D.
,
2013
, “
An Instrumented Mouthguard for Measuring Linear and Angular Head Impact Kinematics in American Football
,”
Ann. Biomed. Eng.
,
41
(
9
), pp.
1939
1949
.
27.
Jadischke
,
R.
,
Viano
,
D. C.
,
Dau
,
N.
,
King
,
A. I.
, and
McCarthy
,
J.
,
2013
, “
On the Accuracy of the Head Impact Telemetry (HIT) System Used in Football Helmets
,”
J. Biomech.
,
46
(
13
), pp.
2310
2315
.
28.
Foxlin
,
E.
,
1996
, “
Inertial Head-Tracker Sensor Fusion by a Complimentary Separate-Bias Kalman Filter
,”
IEEE
Virtual Reality Annual International Symposium
, Santa Clara, CA, Mar. 30–Apr. 3, p.
185
.
29.
Khaleghi
,
B.
,
Khamis
,
A.
,
Karray
,
F. O.
, and
Razavi
,
S. N.
,
2013
, “
Multisensor Data Fusion: A Review of the State-of-the-Art
,”
Inf. Fusion
,
14
(
1
), pp.
28
44
.
30.
Sabatini
,
A. M.
,
2006
, “
Quaternion-Based Extended Kalman Filter for Determining Orientation by Inertial and Magnetic Sensing
,”
IEEE Trans. Biomed. Eng.
,
53
(
7
), pp.
1346
1356
.
31.
Zihajehzadeh
,
S.
,
Loh
,
D.
,
Lee
,
M.
,
Hoskinson
,
R.
, and
Park
,
E. J.
,
2014
, “
A Cascaded Two—Step Kalman Filter for Estimation of Human Body Segment Orientation Using MEMS—IMU
,”
36th Annual International Conference of the IEEE Engineering in Medicince and Biology Society
(
EMBC
), Chicago, IL, Aug. 26–30, pp.
6270
6273
.
32.
Won
,
S. H. P.
,
Melek
,
W. W.
, and
Golnaraghi
,
F.
,
2010
, “
A Kalman/Particle Filter-Based Position and Orientation Estimation Method Using a Position Sensor/Inertial Measurement Unit Hybrid System
,”
IEEE Trans. Ind. Electron.
,
57
(
5
), pp.
1787
1798
.
33.
Yun
,
X.
, and
Bachmann
,
E.
,
2006
, “
Design, Implementation, and Experimental Results of a Quaternion-Based Kalman Filter for Human Body Motion Tracking
,”
IEEE Trans. Rob.
,
22
(
6
), pp.
1216
1227
.
34.
Luinge
,
H. J.
, and
Veltink
,
P. H.
,
2005
, “
Measuring Orientation of Human Body Segments Using Miniature Gyroscopes and Accelerometers
,”
Med. Biol. Eng. Comput.
,
43
(
2
), pp.
273
282
.
35.
Dissanayake
,
G.
,
Sukkarieh
,
S.
,
Nebot
,
E.
, and
Durrant-Whyte
,
H.
,
2001
, “
The Aiding of a Low-Cost Strapdown Inertial Measurement Unit Using Vehicle Model Constraints for Land Vehicle Applications
,”
IEEE Trans. Rob. Autom.
,
17
(
5
), pp.
731
747
.
36.
Roetenberg
,
D.
,
Luinge
,
H. J.
,
Baten
,
C. T. M.
, and
Veltink
,
P. H.
,
2005
, “
Compensation of Magnetic Disturbances Improves Inertial and Magnetic Sensing of Human Body Segment Orientation
,”
IEEE Trans. Neural Syst. Rehabil. Eng.
,
13
(
3
), pp.
395
405
.
37.
Kim
,
A.
, and
Golnaraghi
,
M. F.
,
2004
, “
Initial Calibration of an Inertial Measurement Unit Using an Optical Position Tracking System
,”
Position Location and Navigation Symposium
(
PLANS
), Monterey, CA, Apr. 26–29, pp.
96
101
.
38.
Bancroft
,
J. B.
, and
Lachapelle
,
G.
,
2011
, “
Data Fusion Algorithms for Multiple Inertial Measurement Units
,”
Sensors
,
11
(
7
), pp.
6771
6798
.
39.
SAE
,
1995
, “
Instrumentation for Impact Test—Part 1: Electronic Instrumentation
,” SAE Paper No.
SAE J211-1
.https://www.sae.org/standards/content/j211/1_199503/
40.
Abramowitz
,
M.
, and
Stequn
,
I.
,
1965
,
Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables
, Vol.
55
,
Dover
, Mineola, NY.
41.
Hall
,
D. L.
, and
Llinas
,
J.
,
1997
, “
An Introduction to Multisensor Data Fusion
,”
Proc. IEEE
,
85
(
1
), pp.
6
23
.
42.
Arulampalam
,
M. S.
,
Maskell
,
S.
,
Gordon
,
N.
, and
Clapp
,
T.
,
2002
, “
A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking
,”
IEEE Trans. Signal Process.
,
50
(
2
), pp.
174
188
.
43.
Hol
,
J. D.
,
Schön
,
T. B.
, and
Gustafsson
,
F.
,
2006
, “
On Resampling Algorithms for Particle Filters
,”
IEEE Nonlinear Statistical Signal Processing Workshop
(
NSSPW
), Cambridge, UK, Sept. 11–15, pp. 79–82.
44.
Douc
,
R.
,
Cappé
,
O.
, and
Moulines
,
E.
,
2005
, “
Comparison of Resampling Schemes for Particle Filtering
,”
Fourth International Symposium on Image and Signal Processing and Analysis
(
ISPA
), Zagreb, Croatia, Sept. 15–17, pp.
64
69
.
45.
Wu
,
L. C.
,
Laksari
,
K.
,
Kuo
,
C.
,
Luck
,
J. F.
,
Kleiven
,
S.
,
Dale' Bass
,
C. R.
, and
Camarillo
,
D. B.
,
2016
, “
Bandwidth and Sample Rate Requirements for Wearable Head Impact Sensors
,”
J. Biomech.
,
39
(
13
), pp.
2918
2924
.
46.
Kuo
,
C.
,
Wu
,
L.
,
Zhao
,
W.
,
Fanton
,
M.
,
Ji
,
S.
, and
Camarillo
,
D. B.
,
2017
, “
Propagation of Errors From Skull Kinematic Measurements to Finite Element Tissue Responses
,”
Biomech. Modeling Mechanobiol.
,
17
(1), pp.
235
247
.
47.
Zhao
,
W.
,
Ford
,
J. C.
,
Flashman
,
L. A.
,
McAllister
,
T. W.
, and
Ji
,
S.
,
2016
, “
White Matter Injury Susceptibility Via Fiber Strain Evaluation Using Whole-Brain Tractography
,”
J. Neurotrauma
,
33
(
20
), pp.
1834
1847
.
48.
Wan
,
E. A.
, and
Nelson
,
A. T.
,
2001
, “
Dual Extended Kalman Filter Methods
,”
Kalman Filtering Neural Networks
, Wiley, New York, pp.
123
173
.