Wearable sensors enable objective collection of functional data from patients before total knee replacement (TKR) and at clinical follow-ups after surgery, whereas traditional evaluation has relied solely on self-reported, subjective measures. The timed-up-and-go (TUG) test is widely used to evaluate function but is commonly reported as total completion time alone, which captures neither joint function nor test completion strategy. The current work employs machine learning techniques to distinguish patient groups based on functional metrics derived from the TUG test and to expose clinically important functional parameters that are predictive of patient recovery. Patients scheduled for TKR (n=70) were recruited and instrumented with a wearable sensor system while performing three TUG test trials. Of these, 68 patients also completed three TUG trials at their 2-, 6-, and 13-week follow-ups, and 36 patients participated up to their 26-week appointment. Custom-developed software was used to segment the recorded tests into sub-activities and extract 54 functional metrics evaluating operative and non-operative knee function. All preoperative TUG samples, represented by their standardized metrics, were clustered into two unlabelled groups using the k-means algorithm. Both groups were then tracked forward to determine how their early functional parameters translated to functional improvement at the three-month assessment, with total completion time used to estimate overall improvement and to relate findings to existing literature. Patients who completed their 26-week tests were tracked further to their most recent timepoint.
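The clustering step can be sketched as follows. This is a minimal illustration assuming the 54 metrics per preoperative trial are assembled into a feature matrix; the file name, the use of scikit-learn, and the k-means settings are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical input: one row per preoperative TUG trial, one column per
# functional metric (54 columns). The file name is a placeholder.
preop_metrics = np.loadtxt("preop_tug_metrics.csv", delimiter=",")

# Standardize each metric to zero mean and unit variance so that metrics with
# large numeric ranges (e.g., times in seconds vs. angles in degrees) do not
# dominate the Euclidean distances used by k-means.
scaler = StandardScaler()
X = scaler.fit_transform(preop_metrics)

# Partition the preoperative samples into two unlabelled groups.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# The cluster labels can then be carried forward to the follow-up visits to
# compare functional improvement (e.g., change in total completion time)
# between the two preoperative groups.
print(np.bincount(labels))  # number of trials assigned to each group
```

Standardization before clustering is the key design choice here: without it, metrics measured on larger numeric scales would dominate the distance computation and effectively decide the grouping on their own.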
The emergence of low-cost wearable systems has permitted extended data collection for unsupervised subject monitoring. Recognizing the individual activities performed during these sessions gives context to the recorded data and is an important first step towards automated motion analysis. Convolutional neural networks (CNNs) have been used with great success to detect patterns of pixels for object detection and recognition in many different imaging applications. This work proposes a novel image encoding scheme that creates images from time-series activity data and uses CNNs to accurately classify 13 daily activities performed by instrumented subjects. Twenty healthy subjects were instrumented with a previously developed wearable sensor system consisting of four inertial sensors mounted above and below each knee. Each subject performed eight static and five dynamic activities: standing, sitting in a chair/cross-legged, kneeling on the left/right/both knees, squatting, lying, walking/running, biking, and ascending/descending stairs. Data from each sensor were synchronized, windowed, and encoded as images using the novel encoding scheme. Two CNNs were designed and trained to classify the encoded images of static and dynamic activities separately. Network performance was evaluated using twenty iterations of a leave-one-subject-out validation process, in which a single subject was withheld as test data to estimate performance on future unseen subjects.
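The proposed encoding scheme itself is not reproduced here. The sketch below only illustrates the general pipeline of windowing multi-channel inertial data, arranging each window as a 2D channels-by-time array (a simple, generic encoding, not the scheme described in this work), and classifying the resulting "images" with a small CNN. The window length, channel count, network architecture, and class count are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def window_signal(signal, win_len=128, step=64):
    """Split a (channels, samples) array into overlapping windows.

    Each window is returned as a (1, channels, win_len) image so that a 2D
    CNN can treat sensor channels and time as spatial dimensions.
    """
    windows = []
    for start in range(0, signal.shape[1] - win_len + 1, step):
        windows.append(signal[np.newaxis, :, start:start + win_len])
    return np.stack(windows).astype(np.float32)

class ActivityCNN(nn.Module):
    """Small 2D CNN over channels-by-time windows (illustrative architecture)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: 24 channels (e.g., 4 IMUs x 6 axes) sampled for 10 s at 100 Hz.
raw = np.random.randn(24, 1000)           # stand-in for synchronized sensor data
images = torch.from_numpy(window_signal(raw))
model = ActivityCNN(n_classes=5)          # e.g., the five dynamic activities
logits = model(images)                    # one class-score vector per window
print(logits.shape)                       # (n_windows, n_classes)
```

For leave-one-subject-out evaluation, a model of this form would be retrained twenty times, each time withholding all windows from one subject as the test set, so that reported accuracy reflects performance on subjects never seen during training.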