Wearable sensors embedded with inertial measurement units have become commonplace for the measurement of head impact biomechanics, but individual systems often suffer from a lack of measurement fidelity. While some researchers have focused on developing highly accurate, single sensor systems, we have taken a parallel approach in investigating optimal estimation techniques with multiple noisy sensors. In this work, we present a sensor network methodology that utilizes multiple skin patch sensors arranged on the head and combines their data to obtain a more accurate estimate than any individual sensor in the network. Our methodology visually localizes subject-specific sensor transformations, and based on rigid body assumptions, applies estimation algorithms to obtain a minimum mean squared error estimate. During mild soccer headers, individual skin patch sensors had over 100% error in peak angular velocity magnitude, angular acceleration magnitude, and linear acceleration magnitude. However, when properly networked using our visual localization and estimation methodology, we obtained kinematic estimates with median errors below 20%. While we demonstrate this methodology with skin patch sensors in mild soccer head impacts, the formulation can be generally applied to any dynamic scenario, such as measurement of cadaver head impact dynamics using arbitrarily placed sensors.
Introduction
Inertial measurement units (IMUs), which contain a triaxial linear accelerometer and triaxial gyroscope for six degree-of-freedom dynamic measurement, are now ubiquitous in biomechanical measurement systems due to their low cost and portability. Motion capture systems have long been the gold standard for high fidelity biomechanics measurements of relatively slow motions, but they are often expensive, require trained personnel to operate, are dependent on environmental factors such as lighting and camera placement [1,2], and are not appropriate for measurement of accelerations due to amplified noise in differentiation [3,4]. The field of concussion biomechanics has been particularly hampered by the limitations of motion capture systems. Measurement of concussive head impacts must generally be performed on the field with contact sports athletes, who are naturally at high risk of sustaining concussions. Additionally, motion measurements such as linear accelerations, which correlate with impact force, and angular velocities and accelerations, which are thought to correlate with brain tissue strain [5–9], cannot be directly quantified using motion capture systems without differentiation, which amplifies high frequency noise [3,4]. Thus, researchers have been developing alternative methods for collecting dynamic measurements from the field to study concussion [10–14].
Embedding an IMU into a wearable device has been a logical approach to obtaining dynamic head impact measurements, but a single IMU lacks the ability to make direct measurements of angular acceleration, relying instead on differentiation of the angular velocity measurement obtained by the gyroscope. To address this limitation, some researchers have developed special arrangements of multiple linear accelerometers and/or IMUs to make direct measurements of angular acceleration through rigid body assumptions [15–20]. Some arrangements typically used to measure dynamics in anthropomorphic headforms require precise orthogonal placement of linear accelerometers and/or IMUs with respect to one another to compute angular acceleration directly, but are generally not practical for use in vivo [16,20,21]. One arrangement consisting of six arbitrarily placed linear accelerometers can make direct measurements of angular acceleration, but sacrifices the ability to measure angular velocity directly and relies on the assumption that the initial angular velocity is zero [17,18]. Finally, a 12-linear-accelerometer arrangement can make measurements of angular velocity and angular acceleration, but requires noncoplanar placement of sensors [15,19].
Despite these constraints on various IMU and linear accelerometer arrangements, researchers have developed measurement systems mounted to headgear such as the helmet [11,17,22], soft tissue such as the skin [23], or bony landmarks such as the upper dentition [15,24–26]. One such device, the Head Impact Telemetry System [11,17], has been used by researchers to collect thousands of head impact events over the past decade [10–14]. These data have in turn been analyzed to identify concussion risk factors with the goal of developing quantitative, real-time diagnostic tools for concussion [5], and have contributed greatly to our understanding of concussion biomechanics. However, validation studies of sensor systems have shown that most sensors suffer from substantial measurement errors in linear acceleration, angular velocity, and angular acceleration [22–24,27] due to significant decoupling from the skull. As a result, we need a more accurate sensing system to confirm current hypotheses regarding the underlying biomechanics of concussions.
Because of the limitations of previous sensor systems, there is now a push toward developing new sensor systems that better couple to the skull for accurate measurement of head impact biomechanics [15,25]. Recent literature has suggested that the instrumented mouthguard, which couples directly to the upper dentition and thus has a more rigid coupling to the skull, can achieve within 20% error in measuring linear acceleration, angular velocity, and angular acceleration magnitudes [15,23–26]. This is a significant improvement over previous systems, which were shown to have over 200% error in measuring peak linear acceleration, angular velocity, and angular acceleration in vivo [23].
However, not all contact sports in which concussion biomechanics are studied use mouthguards. Sports such as soccer and cycling have high concussion rates and cannot be universally studied with an instrumented mouthguard. Thus, the goal of this work is to develop and evaluate a general methodology for making direct, accurate estimates of head impact linear acceleration, angular velocity, and angular acceleration using a network of arbitrarily placed, potentially noisy, IMU sensor systems. While individual sensor systems commonly suffer from substantial measurement errors, we hypothesize that applying optimal estimation techniques to a network of IMU sensor systems can account for individual sensor errors to achieve a more accurate global estimate of head impact kinematics.
Previously, researchers have used optimal estimation techniques and IMUs to accurately measure head orientation [28–36]. However, these methodologies are not suitable for dynamic events, such as head impacts, because of an underlying assumption that gravitational acceleration dominates the linear acceleration measurement [37]. One previous investigation developed a method for fusing multiple IMUs [38] that is capable of making estimates of dynamic variables such as angular acceleration, but did so in the context of augmenting navigation and position tracking with a global positioning system. Thus, the formulation was not evaluated for its ability to measure dynamic variables of interest in head impact biomechanics. The formulations presented here are designed to capture dynamic motions, and are evaluated for their ability to estimate accelerations and velocities. Furthermore, these formulations can be extended to make biomechanical measures of other highly dynamic events commonly associated with injury.
Materials and Methods
Instrumentation.
As a demonstration of our sensor network methodology, we used a network of four X2 skin patch sensors (X2 Biosystems, Seattle, WA) arranged on the head to collect head impact biomechanics measurements and compared them against a reference instrumented mouthguard (Fig. 1). It was previously shown that individual X2 skin patch sensors suffer from up to 200% measurement error in linear acceleration, angular velocity, and angular acceleration magnitudes [23]. This poor individual performance enables a compelling evaluation and demonstration of our methodology's capabilities. The instrumented mouthguard was chosen as the reference sensor because of its previously demonstrated accuracy in measuring head impact kinematics in vivo [23].

Instrumentation: (a) X2 skin patch and instrumented mouthguard, (b) representation of the X2 skin patch sensor network and reference instrumented mouthguard, and (c) image of the sensor network on a human subject
Each X2 skin patch sensor was adhered to the skin using a hypoallergenic adhesive. The recommended placement of the X2 skin patch sensor is behind the ear on the mastoid process, so we adhered an X2 skin patch sensor behind each ear. We also placed an X2 skin patch sensor below the eyes on the inferior edge of the orbit to create a four-sensor network for measuring head impact kinematics.
An instrumented mouthguard was custom made for each subject by pressure forming soft ethylene-vinyl acetate onto subject-specific upper dentition molds. This method provided a tight coupling between the instrumented mouthguard and the upper dentition, which is coupled to the skull via stiff ligaments [25,26]. Electronics for the instrumented mouthguard were embedded between layers of ethylene-vinyl acetate.
Both the instrumented mouthguard and the X2 skin patch sensors contained an IMU measuring head impact linear accelerations and angular velocities at 1000 Hz. The triaxial linear accelerometer in both instruments (H3LIS331DL) was filtered with a CFC 180 (Society of Automotive Engineers standard), 300 Hz fourth-order Butterworth low-pass filter [25,26,39]. The triaxial angular rate gyroscope (ITG3500A and L3G4200D for the instrumented mouthguard and X2 skin patch, respectively) was filtered with a fourth-order Butterworth low-pass filter with cutoffs based on manufacturer specifications (184 Hz and 110 Hz for the instrumented mouthguard and X2 skin patch, respectively). For analysis of individual sensors, angular acceleration was obtained through five-point stencil differentiation of the angular velocity [40].
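As a concrete illustration of this preprocessing, the sketch below applies a fourth-order Butterworth low-pass filter at the stated cutoffs and five-point stencil differentiation; it is a minimal reconstruction assuming SciPy, with all variable names illustrative rather than taken from the original implementation.

```python
# Sketch of the per-sensor preprocessing described above, assuming 1000 Hz
# sampling; filter orders and cutoffs follow the text.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # Hz, sampling rate of both devices

def lowpass(signal, cutoff_hz, fs=FS, order=4):
    """Zero-phase fourth-order Butterworth low-pass filter."""
    b, a = butter(order, cutoff_hz / (fs / 2.0))
    return filtfilt(b, a, signal, axis=0)

def five_point_stencil(omega, dt=1.0 / FS):
    """Differentiate angular velocity (N x 3) to angular acceleration using
    f'(t) = (-f(t+2h) + 8 f(t+h) - 8 f(t-h) + f(t-2h)) / (12 h)."""
    alpha = np.zeros_like(omega)
    alpha[2:-2] = (-omega[4:] + 8 * omega[3:-1]
                   - 8 * omega[1:-3] + omega[:-4]) / (12 * dt)
    return alpha

# Example: filter a skin patch gyroscope trace at its 110 Hz cutoff (184 Hz
# would be used for the mouthguard), then differentiate.
omega_raw = np.random.randn(100, 3)          # stand-in for recorded data
omega = lowpass(omega_raw, cutoff_hz=110.0)
alpha = five_point_stencil(omega)
```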
Both the instrumented mouthguard and the X2 skin patch were calibrated prior to testing to remove offsets in linear acceleration and angular velocity. During calibration, sensors were statically oriented in their neutral configuration with respect to the ground, and 500 linear acceleration and angular velocity samples were collected and averaged to obtain the calibration offsets. Note that calibration removes the measurement of linear acceleration resulting from gravity, which other formulations use to estimate sensor orientation.
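A minimal sketch of this static offset calibration, assuming the 500 neutral-orientation samples are already loaded as arrays (function names are illustrative):

```python
# Average 500 stationary samples per channel and subtract them from later
# recordings; the accelerometer offset here also absorbs gravity.
import numpy as np

def compute_offsets(static_accel, static_gyro):
    """static_accel, static_gyro: (500 x 3) samples recorded with the sensor
    held in its neutral orientation. Returns per-axis offsets."""
    return static_accel.mean(axis=0), static_gyro.mean(axis=0)

def apply_calibration(accel, gyro, accel_offset, gyro_offset):
    """Remove the static offsets from an impact recording."""
    return accel - accel_offset, gyro - gyro_offset
```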
To record impacts, each sensor was set to capture an impact whenever its linear acceleration magnitude exceeded 3.5 g. Given that the sensors were all triggered with the same linear acceleration threshold, we assumed that their data were time synchronized. Kinematics were recorded for a total of 100 ms, with 30 ms pre-trigger and 70 ms post-trigger.
Mild Head Impacts.
To assess the accuracy of the sensor network approach, we recruited and consented four male subjects under Stanford Institutional Review Board protocol No. 40350 to perform ten mild soccer head impacts each in the laboratory. Soccer balls were delivered using a ball launcher (Sports Tutor, Burbank, CA) at speeds of up to 7 m/s, which were expected to deliver impacts below 10 g. This procedure was similar to the one used to assess the accuracy of individual sensors [23].
Theory and Calculation
We compared individual skin patch kinematics measurements against reference mouthguard kinematics to provide a baseline performance for our sensor network. For the sensor network, we extended a previous multiple-IMU fusion algorithm [38] to optimally combine data from individual sensors for estimation of dynamic head impact kinematics, which were then compared to the mouthguard reference kinematics. Our sensor network formulation was based on the assumption that individual sensors were affixed to a single rigid body (the head). The implication of this assumption is that the sensors return to the same position and orientation with respect to the head (and each other) before and after each impact. However, as we observed in previous work evaluating skin patch sensor accuracy [23], the sensors themselves can oscillate due to underlying soft tissue motion during impacts, and this motion must be accounted for to obtain the true motion of the underlying rigid body head. Thus, the sensors in the network were assumed to obey the laws of rigid body dynamics.
The formulation then required two steps. First, a sensor localization method obtained relative positions and orientations of the X2 skin patches in the network with respect to one another. These relative positions and orientations were necessary for relating the accelerations and velocities of each individual sensor with others in the network using rigid body assumptions. Second, we implemented optimal estimation algorithms utilizing rigid body system dynamics to combine data from individual sensors.
Sensor Localization.
To optimally combine data from multiple IMUs located on the head, we first had to identify their relative locations and orientations with respect to one another. Most wearable IMU systems for measuring head impact kinematics require some knowledge of their position and orientation with respect to the head center of gravity. This is because linear accelerations are projected to the head center of gravity, and kinematics are assessed with respect to the anatomical coordinate frame [22,26]. These systems, however, typically assume a sensor position and orientation with respect to the head center of gravity based on 50th percentile male head properties and an assumed sensor location, neither of which is subject specific.
For our sensor network approach, we first assumed a rough approximation of the sensors' relative orientations and positions based on sensor placements and head geometry, similar to how previous sensor systems generalized a “one-size-fits-all” sensor transform. We then developed a visual localization methodology to determine relative sensor positions and orientations, giving more accurate, subject-specific measures.
For the visual localization methodology, we first adhered checkerboards with a fixed square size of 1.27 mm (0.05 in) to each X2 skin patch sensor. We then took still images of the sensor network, ensuring that each still image contained a pair of sensors. This required us to adhere two extra checkerboards on the left and right cheeks between the eye and the ear. For each still image, we used the MATLAB command “findCheckerboards” from the Computer Vision Toolbox (MathWorks, Natick, MA) to locate checkerboards on each sensor. We then used the “extrinsics” MATLAB command to identify the relative position and orientation of the camera with respect to each checkerboard in the image, and combined these to obtain the relative position and orientation of the sensor pair. Given still images of sensor pairs, we could sequentially determine the relative positions and orientations of each sensor with respect to the others (Fig. 2). Multiple images of each pair of sensors were obtained and averaged to get more precise estimates of relative position and orientation.
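The pairwise-then-chained structure of this procedure can be sketched as follows, assuming each image yields 4 × 4 homogeneous camera-to-checkerboard transforms (the helper names are hypothetical, and NumPy stands in for the MATLAB toolbox calls):

```python
# Sketch of chaining pairwise camera extrinsics into a network map. Assumes
# each still image produces a homogeneous transform from the camera frame to
# each of the two visible checkerboards.
import numpy as np

def relative_transform(T_cam_to_a, T_cam_to_b):
    """Pose of checkerboard b expressed in checkerboard a's frame."""
    return np.linalg.inv(T_cam_to_a) @ T_cam_to_b

def chain(pairwise_transforms):
    """Compose pairwise transforms (a->b, b->c, ...) into a->z."""
    T = np.eye(4)
    for T_step in pairwise_transforms:
        T = T @ T_step
    return T

# e.g., chaining right ear -> right cheek -> right eye, averaged over several
# images, gives the right ear sensor's pose in the right eye sensor's frame.
```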

Visual localization: (a) we placed checkerboards with known square sizes and obtained images of sensor pairs. Using the known checkerboard sizes, we determined the relative translation and orientation between imaged sensor pairs, and (b) built a map of the entire sensor network. Note that while six checkerboards are used in the visual localization, only four sensors actively record impact data; the remaining two checkerboards (placed on the cheeks) served as intermediaries because the ear sensors and the eye sensors could not be reliably captured in a single image.
Rigid Body Dynamics.
Based on the rigid body assumption, the linear acceleration at an arbitrary point q on the head is related to the linear acceleration at another point p through the angular velocity and angular acceleration of the body:

a_q = a_p + α × r_q/p + ω × (ω × r_q/p)    (1)

where r_q/p is the position vector between the two points. This allowed us to relate the angular velocities and linear accelerations as measured by all of the sensors in the network given that we knew their relative orientations and positions (which were determined either using assumed values or through visual localization).
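A direct transcription of Eq. (1), assuming all quantities are expressed in a common body-fixed frame and that r_q/p points from the reference point p toward the predicted point q (this sign convention is our assumption):

```python
# Minimal sketch of Eq. (1): predict the linear acceleration at point q from
# the kinematics at point p of the same rigid body. Assumes r_qp points from
# p to q and that all vectors share one body-fixed frame.
import numpy as np

def accel_at_point(a_p, omega, alpha, r_qp):
    """a_p: linear acceleration at p (m/s^2); omega (rad/s) and alpha
    (rad/s^2): body angular velocity and acceleration; r_qp (m)."""
    return a_p + np.cross(alpha, r_qp) + np.cross(omega, np.cross(omega, r_qp))
```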
Estimation Algorithms.
With the rigid body assumptions and the relative orientation and positions of the sensors in the network, we developed algorithms for optimally combining data from multiple sensors to get better estimates of head impact kinematics. We designed (1) a sensor fusion algorithm to estimate the mouthguard linear acceleration, angular velocity, and angular acceleration state from sensor measurements at each time-step; (2) a Kalman filter to integrate angular accelerations to obtain another estimate of angular velocity; and (3) a particle filter to address nonlinearities in the rigid body formulation for more accurate estimates of the state.
Sensor Fusion.
Here, we note that to ensure full observability of the state, we required H to have at least rank 9. In the case of a single sensor, the rank of H_s is trivially 6. However, with two sensors in the network, the rank of the full observation process H matrix is only 8 due to the skew symmetric block. To show this, we first see that the upper three rows of H_2 are redundant with those of H_1 and thus do not add rank. Then, we note that there exists a rotation matrix R that rotates the common frame such that the vector pointing from sensor s = 1 to sensor s = 2 is aligned with an axis (in Fig. 3, the vector is aligned with the x-axis). Thus, the vector to each sensor from the mouthguard has two equal components, resulting in an additional shared row between H_1 and H_2. The addition of H_2 to the full observation process H matrix therefore adds only 2 to the rank, bringing the total to 8. In fact, with any number of collinear sensors, the rank of the full observation process H matrix remains 8.

Undetermined angular acceleration with collinear sensors: The observation process H matrix does not have rank 9 with collinear sensors because the reference frame can be rotated such that two components in the location vector r_mg/s are equal. This results in an inability to determine the angular acceleration about the common line.
Thus, for our network formulation to have full observability of the desired state, we required at least one set of three noncollinear sensors. This requirement can also be intuitively understood through the rigid body formulation, wherein the angular acceleration about the vector between two sensors (or aligned with collinear sensors) cannot be determined because the cross product between the angular acceleration along the vector and the vector itself is always the zero vector.
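This rank argument can be checked numerically. The sketch below assumes a state ordering of [ω, α, a_mg] and the linearized (Coriolis-free) observation blocks implied by Eq. (1); the exact H construction in the original may differ:

```python
# Numerical check of the observability argument: two collinear sensors give
# rank 8, three noncollinear sensors give the required rank 9.
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def H_sensor(R, r):
    """6x9 observation block for one sensor with orientation R and offset r
    from the mouthguard; uses alpha x r = -skew(r) @ alpha."""
    Z = np.zeros((3, 3))
    return np.block([[R, Z, Z],
                     [Z, -R @ skew(r), R]])

I3 = np.eye(3)
r1, r2 = np.array([1.0, 0, 0]), np.array([2.0, 0, 0])  # collinear offsets
r3 = np.array([0, 1.0, 0])                             # breaks collinearity

H_two_collinear = np.vstack([H_sensor(I3, r1), H_sensor(I3, r2)])
H_three = np.vstack([H_sensor(I3, r1), H_sensor(I3, r2), H_sensor(I3, r3)])

print(np.linalg.matrix_rank(H_two_collinear))  # 8: rotation about the line lost
print(np.linalg.matrix_rank(H_three))          # 9: full state observability
```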
Kalman Filter.
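A generic linear Kalman predict/update step, written with the F, H, Σ_w, and Σ_v processes defined in the Nomenclature; this is a structural sketch of such a filter, not the exact implementation used here:

```python
# One Kalman filter step: a time update through the linear dynamics F,
# followed by a measurement update fusing the stacked sensor measurements.
import numpy as np

def kalman_step(x_hat, P, z, F, H, Sigma_w, Sigma_v):
    # Time update: propagate the state estimate and covariance (a priori).
    x_prior = F @ x_hat
    P_prior = F @ P @ F.T + Sigma_w
    # Measurement update: fuse the stacked measurement vector z.
    S = H @ P_prior @ H.T + Sigma_v       # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_hat)) - K @ H) @ P_prior
    return x_post, P_post
```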
Particle Filter.
Our particle filter utilized 20,000 particles with a systematic resampling technique [42–44]. The number of particles was determined by assessing the repeatability of filter results (peak magnitude kinematics within 5%). Because angular accelerations and linear accelerations were not propagated through state dynamics (observed by the sparsity of the F dynamic propagation matrix in Eq. (20)), particle values for triaxial angular acceleration and triaxial linear acceleration were resampled from Gaussian distributions with means equal to the state at the previous time-step; the linear acceleration standard deviation was 2 g, with the angular acceleration standard deviation chosen on the same basis. These standard deviations were chosen as they spanned the acceleration differences between time steps observed in individual skin patch sensors during the mild soccer head impacts.
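For reference, the standard systematic resampling scheme cited above [42–44] can be sketched as follows; the particle layout and names are illustrative:

```python
# Systematic resampling: one uniform draw offsets an evenly spaced comb over
# the weight CDF, so particles are kept in proportion to their weights with
# low variance.
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng()):
    """particles: (N x d) array; weights: length-N, normalized to sum to 1."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating point round-off
    indexes = np.searchsorted(cumulative, positions)
    return particles[indexes]
```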
Evaluation.
We post-processed sensor data using combinations of the sensor localization and sensor fusion algorithms to understand the contributions of each step. There were six combinations in total, pairing the two sensor localization approaches (assumed transforms and visual localization) with the three algorithms for combining sensor data (sensor fusion, Kalman filter, and particle filter).
We first evaluated the performance of the full four-sensor network. We then compared the performance of the best algorithm utilizing the full four-sensor network with a minimum required three-sensor subset. We chose the right ear, right eye, and left eye sensors because we expected these sensors to have the least error in location and orientation following visual localization. Indeed, errors in location and orientation likely compounded across the sequential visual localization steps, so the final sensor localized (the left ear sensor) was expected to have the highest error.
Results
Figure 4 outlines the accuracy of each individual skin patch sensor and each of the methodologies in estimating peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude when compared with the mouthguard reference. These results are also presented in Table 1 along with the reference measurements of peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude. We observed that individual skin patch sensors exhibited median errors in excess of 100%, with the exception of the right eye sensor in estimating peak linear acceleration magnitude. This was consistent with previous work describing the performance of skin patch mounted sensors in measuring head impacts in vivo.

Aggregated results show that particle filter with visual localization transforms has best accuracy: We assessed the accuracy of individual skin patch sensors and our methodologies in estimating (a) peak angular velocity magnitude, (b) peak angular acceleration magnitude, and (c) peak linear acceleration magnitude compared with the reference instrumented mouthguard. Using generic transforms gives significantly (p < 0.05) better estimates of linear and angular acceleration than individual sensors; however, using sensor transformations obtained from visual localization improves accuracy further. Significant improvement in angular velocity estimates is only obtained with visual localization, with the particle filter having significantly lower errors than all other methods. For each box and whisker, the central line represents the median, the box represents 25th and 75th percentiles, and the whiskers extend a maximum of 1.5 times the interquartile range with outliers represented by “+” symbols.
Table 1: Reference measurements and aggregated errors

| Method/sensor | Angular velocity error (%), mean ± std. dev. | Angular acceleration error (%), mean ± std. dev. | Linear acceleration error (%), mean ± std. dev. |
|---|---|---|---|
| Reference mouthguardᵃ | 4.21 ± 1.46 rad/s | 633.06 ± 220.46 rad/s² | 6.06 ± 1.60 g |
| Left ear skin patch sensor | 178.51 ± 111.11 | 286.38 ± 210.06 | 290.69 ± 143.96 |
| Left eye skin patch sensor | 207.51 ± 128.04 | 342.44 ± 304.69 | 190.29 ± 101.40 |
| Right eye skin patch sensor | 202.41 ± 136.39 | 295.97 ± 268.45 | 87.61 ± 60.10 |
| Right ear skin patch sensor | 135.97 ± 118.77 | 200.48 ± 216.77 | 121.72 ± 115.12 |
| Sensor fusion, generic transforms | 14.22 ± 32.15 | 4.54 ± 36.96 | 11.82 ± 23.36 |
| Kalman filter, generic transforms | 59.96 ± 38.22 | 4.54 ± 36.96 | 11.82 ± 23.36 |
| Particle filter, generic transforms | 42.99 ± 35.50 | 6.47 ± 36.16 | 15.39 ± 23.85 |
| Sensor fusion, visual localization | 33.33 ± 43.45 | −14.27 ± 32.41 | 3.82 ± 25.10 |
| Kalman filter, visual localization | 20.76 ± 29.08 | −14.27 ± 32.41 | 3.82 ± 25.10 |
| Particle filter, visual localization | 19.32 ± 25.81 | −12.65 ± 32.33 | 6.17 ± 25.37 |

ᵃReference mouthguard values reported in original units (angular velocity in rad/s, angular acceleration in rad/s², and linear acceleration in g). All other values are reported as relative errors with respect to the reference mouthguard.
Using generic sensor transformations with each of our three estimation methodologies, we observed significant (p < 0.05) improvements in the estimate of peak linear acceleration and peak angular acceleration magnitude. We further improved estimate accuracy in peak linear acceleration magnitude and peak angular acceleration magnitude using transformations obtained from visual localization with our estimation methodologies (p < 0.05). For angular velocity, estimate error was significantly reduced when using sensor transforms obtained through visual localization. Furthermore, the particle filter formulation had significantly lower error in angular velocity magnitude estimates than the sensor fusion approach (p < 0.05); however, the particle filter did not have significantly lower errors than the Kalman filter approach (p = 0.7).
The significant improvement in measurement accuracy using the particle filter and visual localization compared with individual skin patch sensors was also observed in an example time history of a single impact (Fig. 5). The time history trace shows that the individual skin patch sensors had large oscillations compared with the reference instrumented mouthguard as a result of the underlying skin motion. Our methodology, however, accurately captured both the peak magnitude kinematics and the magnitude kinematics time history. This was evidenced quantitatively in the normalized root-mean-square (NRMS) error: individual patch sensors had greater than 94%, 95%, and 55% error in angular velocity magnitude, angular acceleration magnitude, and linear acceleration magnitude, respectively, whereas the particle filter with visual localization had 35%, 23%, and 19% NRMS error, respectively.

Kinematics time history demonstrating oscillations in individual sensors smoothed by filter: Time histories of (a) angular velocity magnitude, (b) angular acceleration magnitude, and (c) linear acceleration magnitude as measured by the reference instrumented mouthguard, the individual skin patch sensors, and the best performing particle filter utilizing sensor transforms obtained from visual localization. While individual skin patch sensors exhibit large oscillations in angular velocity measurement (which are amplified through differentiation in angular acceleration and propagated through projection in linear acceleration), our methodology successfully accounts for individual sensor errors and returns a smooth estimate closely matching the ground truth measurement.
Finally, we assessed the performance of our best visual localization and particle filter methodology using the full four-sensor network and a minimum three-sensor subset (Fig. 6). The three-sensor network was significantly worse at estimating peak angular velocity magnitude (p < 0.05) compared with the four-sensor network over the 35 impacts analyzed. However, there was no significant difference in peak angular acceleration magnitude and peak linear acceleration magnitude estimates.

Improved angular velocity accuracy when using more sensors: Our sensor fusion methodologies can accommodate any number of sensors; however, the rigid body formulation requires at least three. We compared our visual localization sensor transforms and particle filter methodology when using the full four-sensor network against the minimal three-sensor network. We observed significantly higher (p < 0.05) errors when using the minimal three-sensor network in estimating (a) peak angular velocity magnitude, but no significant change in accuracy for (b) peak angular acceleration magnitude and (c) peak linear acceleration magnitude estimates.
Discussion
In this work, we demonstrate methodologies for combining kinematic data from a network of noisy, wearable inertial measurement units. Our most accurate methodology combines a visual localization method for obtaining the relative positions and orientations of sensors in the network with a particle filter utilizing rigid body assumptions. This methodology achieved a median estimate error of within 20% in peak angular velocity magnitude, peak angular acceleration magnitude, and peak linear acceleration magnitude, whereas individual sensors in the network had over 100% error in all kinematic estimates.
To understand why this combination yields the best performance, we examine the other methodologies presented here. First, we observe that the rigid body formulation (Eq. (1)) relies heavily on the relative positions of each sensor to relate linear accelerations at two different points with the angular velocity and angular acceleration. The relative orientations of each sensor are also used to properly rotate measurements to the reference mouthguard frame. Errors in relative sensor position and orientation lead to inconsistencies between the sensor measurements and the rigid body assumptions, which manifest as inaccurate estimates of head kinematics. Thus, the more accurate and subject-specific visual localization technique gives significantly more accurate estimates of head kinematics.
Second, individual IMU sensors only directly measure angular velocity and linear acceleration at the sensor location. Thus, angular acceleration is typically obtained through differentiation, which amplifies any noise in angular velocity, and linear acceleration is typically projected to an anatomical location, which propagates noise in angular acceleration into the linear acceleration estimate. Accordingly, all of our estimation techniques are expected to improve estimates of both linear acceleration and angular acceleration, as they make direct estimates of angular acceleration using the rigid body formulation. This eliminates the amplification of noise through differentiation in angular acceleration estimates and the subsequent projection of that noise into linear acceleration estimates.
Finally, we observe differences between the individual optimal estimation algorithms in conjunction with sensor transforms obtained from visual localization. The particle filter formulation has significantly lower errors in peak angular velocity magnitude than the sensor fusion algorithm. The Kalman filter algorithm also has lower errors in peak angular velocity magnitude than the sensor fusion algorithm, though the difference did not reach significance (p = 0.066). This indicates that the system dynamics update step included in the Kalman filter and particle filter implementations is important for making accurate estimates of angular velocity. Indeed, the only difference between the sensor fusion and Kalman filter algorithms is the inclusion of the system dynamics to obtain an a priori estimate of the state angular velocity and its covariance for the measurement update (or sensor fusion) step; the process for estimating linear acceleration and angular acceleration is equivalent for the sensor fusion and Kalman filter algorithms.
Curiously, the inclusion of the nonlinear Coriolis term in the rigid body formulation does not appear to affect results significantly in our experiments. The particle filter peak angular velocity magnitude error is lower than the Kalman filter peak angular velocity magnitude error, but the difference is not significant (p = 0.7). To understand why the Coriolis term (ω × (ω × r_q/p)) does not affect results significantly, we observe the term's contribution to the formulation presented in Eq. (1) compared with the other two terms, α × r_q/p and the measured linear acceleration a_p (Fig. 7). While the other two terms have similar magnitude, the Coriolis term has a negligible contribution. This suggests that the removal of the Coriolis term in the Kalman filter may be sufficient for dynamic estimation using a sensor network in the case of head impacts, which is advantageous because the Kalman filter runs in near real-time, whereas the particle filter containing 20,000 particles of a nine-dimensional state took minutes to complete a single trial. In cases where the relative contribution of the Coriolis term is significant (i.e., when angular velocities are large, since the term scales with the square of ω), the particle filter or a proper linearization of the Coriolis term (such as with an unscented or extended Kalman filter [38]) is expected to have superior accuracy.
Coriolis term has negligible contribution in rigid body formulation: With r_q/p = [0.1, 0.1, 0.1] cm, we compute the contribution of each term in the rigid body formulation (Eq. (1)). The α × r_q/p and measured linear acceleration (a_p) terms have similar magnitudes, while the Coriolis term (ω × (ω × r_q/p)) is negligible in comparison. This indicates that the linearization of the rigid body formulation in the Kalman filter may be sufficient for sensor network estimation of dynamic events.
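As a rough numerical check of this scaling argument, the sketch below evaluates the three Eq. (1) terms at the mean reference peaks from Table 1; the sensor offset vector is an assumed illustrative value (in meters), not the value used in Fig. 7:

```python
# Back-of-the-envelope comparison of the Eq. (1) terms using the mean
# reference peaks from Table 1; r is an assumed illustrative offset.
import numpy as np

omega = np.array([4.21, 0.0, 0.0])    # rad/s, Table 1 reference peak
alpha = np.array([0.0, 633.06, 0.0])  # rad/s^2, Table 1 reference peak
r = np.array([0.1, 0.1, 0.1])         # m, assumed sensor offset

euler = np.cross(alpha, r)                      # scales linearly with alpha
coriolis = np.cross(omega, np.cross(omega, r))  # scales with omega squared
print(np.linalg.norm(euler), np.linalg.norm(coriolis))  # ~89.5 vs ~2.5 m/s^2

# The ratio of the Coriolis to the Euler term scales roughly as omega^2/alpha
# (~0.03 here) independent of |r|, which is why the term is negligible for
# these head impacts.
```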
While we demonstrated the ability of the sensor network formulation to measure head impact kinematics using skin patch sensors, this formulation can be extended to other areas using any combination of sensors (including the instrumented mouthguard), with the caveat that each sensor has independent noise (as required by the optimal estimation techniques). One particular area of interest is the instrumentation of cadavers for head impact reconstructions [25]. Instrumentation in anthropomorphic test dummies (ATD) is well documented [16,20,21], and some of these techniques have been utilized in cadavers as well. However, these instrumentation setups either require specific placements of sensors that necessitate sensor blocks for precise placement [20] or differentiation of gyroscope angular velocity to obtain angular acceleration, which is prone to errors from amplified noise [25,45]. The presented methodology allows for arbitrary placement of three noncollinear IMUs for direct estimation of angular velocity, angular acceleration, and linear acceleration at the head center of gravity. The methodology can also be applied more widely to other areas in the injury biomechanics field where high fidelity measurement of dynamic events using noisy sensors is an ongoing challenge.
The methodologies presented here represent a first step toward addressing the challenge of obtaining accurate, direct measurement of head impact linear acceleration, angular velocity, and angular acceleration; however, there are still some limitations. First, the head impacts used in this study were mild in nature and less severe than median level American football impacts [11,13,14], and far less severe than concussive level events [9]. In particular, the predicted Gaussian error from soft tissue motion used in this paper was characterized based on similar low severity impacts, and may not hold at higher severities. Thus, further evaluation of the underlying error models, and testing of the presented methodologies, must be performed before deployment to the field.
Second, the impact parameters were limited to sagittal plane soccer ball headers, which do not necessarily represent the performance of the network in other header directions. Wearable sensors are known to have varying performance depending on impact location, so it is plausible that the network will as well. This is particularly important because the brain is known to be more susceptible to certain directions of motion than others, and it is imperative that sensor systems measure kinematics in these directions accurately [9,46,47]. While we only evaluated the sensor network in sagittal plane soccer ball headers, we suspect that the network will be more robust to varying impact conditions than individual sensors because the estimation techniques upon which our methodologies are based are designed to account for individual sensor susceptibilities.
Third, while the visual localization technique for obtaining sensor transformations is expected to be superior to using generic “one-size-fits-all” transforms, it has not been validated against a ground truth. A more robust validation would utilize imaging technologies such as CT scans to obtain highly accurate, ground truth sensor transformations. Furthermore, we did not evaluate whether these sensor transforms remained constant throughout testing, though we did not see any significant change in performance between the first header and the final header that would indicate changes in sensor transforms. Practically, these sensor transforms should be evaluated every time sensors are applied (or reapplied) to the head. Future work could also include a parameter estimation component that would automatically optimize sensor transform parameters using techniques such as the dual filter approach [48]. This would be more robust to sensors that may displace during impacts, but would be computationally challenging for real-time analysis and would rely more heavily on the proposed error model for the sensors. Finally, we note that the proposed visual localization can be extended to currently available single sensor systems to obtain subject-specific transforms between sensors and the head.
Finally, we made the assumption that the sensors were all time synchronized because they were set to trigger at the same linear acceleration threshold. Because of inherent sensor noise, and because linear acceleration is known to differ at different points on a rigid body (Eq. (1)), this was not necessarily true. A sensor network in the true sense, with sensors capable of communicating with one another and triggering simultaneously, would likely yield even greater improvements in estimate accuracy [38]. Despite this, our sensor network was still capable of estimating the dynamic response with below 20% median errors.
Conclusion
In conclusion, we explored several methodologies for combining data from individual inertial measurement units to obtain a more accurate estimate of head impact kinematics using optimal estimation techniques. In addition, we present a visual localization technique for obtaining subject-specific estimates of sensor locations and orientations on the head. In conjunction, these techniques allow users to obtain significantly more accurate estimates of head impact kinematics, with the combination of a particle filter and visual localization yielding the lowest errors.
The general mathematical framework presented here has been designed such that it can be applied generally to problems involving the measurement of dynamic events with noisy measurement systems. While we present a case using a network of skin patch sensors in soccer heading scenarios, this formulation can support any sensor form factor and any impact scenario. As we begin to further apply wearable instrumentation, it is important we develop such estimation techniques to obtain more accurate measurements than those obtained when treating sensors independently. Thus, with this work, we seek to lay the groundwork for further development of sensor networks for the measurement of dynamic events.
Acknowledgment
We would like to thank Professor Mykel J. Kochenderfer for providing technical assistance and guidance.
Funding Data
National Science Foundation (Graduate Student Fellowship Program).
Office of Naval Research Young Investigator Program (N00014-16-1-2949).
Stanford Bio-X (Graduate Student Fellowship Program).
Nomenclature
- a_q =
linear acceleration at an arbitrary point q on a rigid body as defined in the rigid body fixed frame
- CF =
a common rigid body fixed frame
- F =
linear time dynamics propagation process
- H =
linear measurement process relating the system state to the measurement, assumed to be time invariant
- mg =
the mouthguard reference sensor (e.g., a_mg represents the linear acceleration at the mouthguard reference sensor location)
- ‖·‖_mg =
resultant of a vector (angular velocity, angular acceleration, or linear acceleration) as measured by the reference mouthguard
- ‖·‖_est =
resultant of a vector (angular velocity, angular acceleration, or linear acceleration) as measured by a sensor or data fusion method
- P_t =
state covariance at time t
- r_q/p =
vector originating at an arbitrary point p on a rigid body and terminating at an arbitrary point q on the rigid body, defined in the rigid body fixed frame
- R =
the rotation between a sensor aligned fixed frame and the common rigid body fixed frame
- s =
a single IMU sensor (e.g., a_s represents the linear acceleration at the IMU sensor location)
- SF =
a sensor aligned fixed frame
- T_sensor =
homogeneous transformation matrix describing the relative translation and orientation of a sensor with respect to the mouthguard in the rigid body fixed frame
- v_t =
measurement process noise at time t
- x, y, z =
x-, y-, or z-component of a vector defined in the rigid body fixed frame
- w_t =
time dynamics propagation process noise at time t
- x_t =
system state at time t
- x̂_t =
estimate of system state at time t
- x̂⁻_t =
a priori estimate of system state
- z_t =
measurement taken at time t
- z_s,t =
measurement from a single IMU sensor s taken at time t
- α =
angular acceleration of a rigid body as defined in the rigid body fixed frame
- Σ_v =
measurement process covariance
- Σ_w =
time dynamics propagation covariance
- Σ_x =
a priori state covariance
- σ_a =
error in IMU linear acceleration measurement
- σ_ω =
error in IMU angular velocity measurement
- ω =
angular velocity of a rigid body as defined in the rigid body fixed frame