Measurement of stride-related biomechanical parameters is the common basis for objective gait impairment scoring. State-of-the-art double-integration approaches for extracting these parameters from inertial sensor data are, however, limited in their clinical applicability by their underlying assumptions. To overcome this, we present a method based on deep convolutional neural networks that translates the abstract information provided by wearable sensors into context-related expert features. Applied to mobile gait analysis, this enables integration-free, data-driven extraction of a set of eight spatio-temporal stride parameters. To this end, two modeling approaches are compared: a combined network that estimates all parameters of interest jointly, and an ensemble approach that trains a less complex network for each parameter individually. The ensemble approach outperforms the combined model in the present application. On a clinically relevant and publicly available benchmark dataset, we estimate stride length, stride width and medio-lateral change in foot angle with accuracies of -0.15 ± 6.09 cm, -0.09 ± 4.22 cm and 0.13 ± 3.78°, respectively. Stride, swing and stance time as well as heel and toe contact times are estimated to within ± 0.07, ± 0.05, ± 0.07, ± 0.07 and ± 0.12 s, respectively. These results are comparable to, and in parts outperform or define, the state of the art. They further indicate that the proposed change in methodology could replace assumption-driven double-integration methods and enable mobile assessment of spatio-temporal stride parameters in clinically critical situations, e.g., in the case of spastic gait impairments.
- Convolutional neural networks (CNNs)
- Deep learning
- Mobile gait analysis
- Spatio-temporal gait parameters
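The two modeling strategies contrasted in the abstract, one network with an eight-dimensional regression head versus an ensemble of single-output networks, can be sketched as follows. This is a minimal NumPy illustration, not the authors' architecture: the channel count of 6 (3-axis accelerometer plus 3-axis gyroscope per stride), the kernel size, the filter count and the single convolutional layer are all hypothetical choices made only to show the structural difference between the combined and ensemble heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b):
    # x: (channels, T) stride signal; w: (filters, channels, k); b: (filters,)
    f, c, k = w.shape
    T_out = x.shape[1] - k + 1
    out = np.empty((f, T_out))
    for i in range(T_out):
        # contract over channel and kernel dimensions at each position
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(out, 0.0)  # ReLU activation

def forward(x, params):
    h = conv1d_relu(x, params["w"], params["b"])
    pooled = h.mean(axis=1)          # global average pooling over time
    return pooled @ params["head"]   # linear regression head

def init(n_out, channels=6, k=15, filters=16):
    # hypothetical layer sizes; random weights stand in for trained ones
    return {"w": rng.normal(0, 0.1, (filters, channels, k)),
            "b": np.zeros(filters),
            "head": rng.normal(0, 0.1, (filters, n_out))}

x = rng.normal(size=(6, 200))  # one segmented stride, 6 IMU channels

# combined model: one network, 8 stride parameters at once
combined = forward(x, init(8))

# ensemble: 8 smaller single-output networks, one per parameter
ensemble = np.array([forward(x, init(1))[0] for _ in range(8)])
```

Both variants map one stride to eight parameter estimates; the ensemble simply distributes the regression across independent, less complex models, which the abstract reports as the better-performing option here.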