Avoid injuries, improve training – with self-learning sensors
Interview with Kaustubh Gandhi, Senior Product Manager, Bosch Sensortec
Artificial intelligence, sensors, wearables: they all collect and process data from their wearers. They are particularly popular in sports, because users no longer have to rely on intuition alone, but can optimize their training based on sober, exact data. However, wearables are often criticized for being not just practical gadgets but also data-hungry collectors of personal information.
A sensor from Bosch is now entering the ring to counter this prejudice: it can be integrated into many different devices, but keeps the user's data to itself, helping users to train better and, above all, more safely. In an interview with MEDICA-tradefair.com, Kaustubh Gandhi explains how the sensor collects data and processes it securely.
Mr. Gandhi, which properties does your sensor have?
Kaustubh Gandhi: In the fitness world, it is quite common to have accelerometers and gyroscopes in fitness trackers. Our sensor is a system-in-package that combines an accelerometer and a gyroscope with a programmable microcontroller and several smart features. These include, for example, an AI function for fitness tracking, swimming analysis, and position and orientation tracking.
For which purposes or sports can the sensor be used?
Gandhi: The sensor can be used for a wide variety of human activities, such as high-intensity home workouts or sessions in the gym, but also for basic activities such as running or walking, as well as custom activities.
The sensor identifies the type of activity – how does that work?
Gandhi: Typically, the sensor is integrated into an end-user electronic device like a smartwatch, or into accessories such as shoe insoles, earphones and so on. Once the user starts an activity, the self-learning AI function inside the sensor tracks the data sensed during the activity and automatically matches it to previously self-learnt patterns to identify the type of activity, without any need for manual instruction or intervention.
How does it learn independently?
Gandhi: This function automatically generates patterns from the data sensed by the accelerometer and gyroscope. These patterns basically work like fingerprints of each unique exercise: when a user performs an activity for about 30 seconds at a time, the sensor looks for what appears to be a repeating pattern – if a segment repeats, it is likely to be relevant to the user. The sensor captures that segment and saves it as the fingerprint for that exercise. The patterns can be named by the end user, by the manufacturer of the device or by Bosch during production of the sensor itself.
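The fingerprint idea can be illustrated with a small autocorrelation sketch: if a motion signal repeats every p samples, one period of it can be stored as a template. This is a hypothetical illustration of the principle, not Bosch's actual algorithm; the function name, window sizes and the 0.5 similarity threshold are assumptions.

```python
import numpy as np

def find_repeating_pattern(signal, min_period=20, max_period=200):
    """Look for a repeating segment in a 1-D motion signal via autocorrelation.

    Illustrative sketch only: returns one period of the signal as an
    exercise "fingerprint", or None if nothing repetitive is found.
    """
    x = signal - np.mean(signal)
    # Autocorrelation: a high value at lag p means the signal repeats every p samples.
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / (ac[0] + 1e-12)                      # normalize so ac[0] == 1
    lags = np.arange(min_period, min(max_period, len(ac)))
    best = lags[np.argmax(ac[lags])]               # most self-similar lag
    if ac[best] < 0.5:                             # not repetitive enough
        return None
    # Save one period of the raw signal as the fingerprint for this exercise.
    return signal[:best].copy()
```

Applied to a motion signal that repeats every 50 samples, the sketch returns a 50-sample fingerprint; on non-repetitive data it returns None.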
Once an activity-specific pattern has been learned, the sensor's AI matches it against incoming data and recognizes the activity during the workout. Since this runs on the sensor itself, the user does not need to explicitly tell the device what they are doing – the automatic tracking function works in real time. Even when the user changes the activity or takes a break in between, the automatic tracking simply isolates the relevant part of the activity.
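The matching step described here can be sketched as a nearest-template comparison: an incoming sensor window is correlated against each stored fingerprint, and the best match above a threshold wins. The exercise names and the 0.6 threshold below are illustrative assumptions, not details of the product.

```python
import numpy as np

def classify_window(window, templates):
    """Match a live sensor window against stored exercise fingerprints.

    `templates` maps an exercise name to its saved pattern. Returns the
    best-matching name, or None for an unknown activity or a break.
    """
    best_name, best_score = None, 0.6          # minimum similarity to accept
    for name, tpl in templates.items():
        n = min(len(window), len(tpl))
        a = window[:n] - np.mean(window[:n])
        b = tpl[:n] - np.mean(tpl[:n])
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = float(np.dot(a, b) / denom) if denom else 0.0
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Because windows that match no fingerprint return None, pauses or unknown movements are naturally isolated from the recognized activity, as described above.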
The sensor has another advantage: individual users around the world have varying styles during their workouts, there is a wide variety of coaching styles for the same activity, and user demographics differ. To address this, the self-learning function inside the sensor automatically helps the device personalize and adapt to the user's style. If the user performs an exercise differently from the sensor's pre-set pattern, the sensor adapts the original pattern to the user's style and detects that type of activity without any confusion for the user.
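One simple way to model this adaptation is to blend the pre-set template with the movement actually observed. The exponential-moving-average update and the learning rate below are my illustrative assumptions; the interview does not specify how the sensor updates its patterns.

```python
import numpy as np

def adapt_template(template, observed, rate=0.2):
    """Nudge a stored fingerprint toward the user's own movement style.

    A simple exponential-moving-average blend: the template keeps most of
    its shape but drifts toward repeated observations of the user.
    """
    n = min(len(template), len(observed))
    return (1 - rate) * template[:n] + rate * observed[:n]
```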
The Bosch sensor automatically detects the fitness exercise being performed and does not need to be operated manually during the workout.
Is it possible to improve the training with the sensor, for example by giving advice?
Gandhi: Yes. Typically, the sensor provides orientation tracking, which can be used to give feedback on how the user's movement differs from a gold standard.
The sensor includes Edge AI – what is Edge AI and how does it work?
Gandhi: In simple terms, Edge AI enables artificial intelligence to learn or track directly on a consumer device in real time, typically very close to the sensor – right at the place where the data is generated. Social media or e-commerce platforms, for example, personalize their services by computing in the cloud. In contrast, Edge AI enables personalization directly on the consumer's device, which gives the user 100 percent data privacy.
Is it possible to give any advice with this data, for example to reduce the risk of injury or improve the training?
Gandhi: Indeed, the sensor can give feedback to the user. As mentioned, the feedback is generally given in terms of orientation, such as where to move the hand or leg to minimize the risk of injury. First, the sensor classifies the type of activity; then it looks at the intensity of the activity. After that, it can give feedback like "put less force on the heels" or "try to maintain the balance between the two legs". This is what we call contextual feedback for reducing the risk of injury.
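The two-stage logic described here – classify the activity first, then judge intensity and posture – can be sketched as a simple rule table. The metric names, thresholds and messages below are invented placeholders for the kind of contextual feedback meant in the interview.

```python
def contextual_feedback(activity, metrics):
    """Map a recognized activity plus simple posture metrics to advice.

    `metrics` is a dict of hypothetical values in [0, 1], e.g. heel_force
    and left_right_balance (0.5 = perfectly balanced).
    """
    tips = []
    if activity == "running" and metrics.get("heel_force", 0.0) > 0.7:
        tips.append("Put less force on the heels.")
    if abs(metrics.get("left_right_balance", 0.5) - 0.5) > 0.1:
        tips.append("Try to maintain the balance between the two legs.")
    return tips
```

In a real product the rules would come from physiotherapeutic know-how in the higher-level application, with the sensor supplying the activity label and orientation data.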
Is the sensor safe to use for individual lay people?
Gandhi: The sensor only senses the data, filters it and provides information about the type of activity and orientation to a higher-level application. Typically, this information should be sufficient for the higher-level application, combined with physiotherapeutic know-how, to provide insights to users on how to improve their performance or achieve goals, such as getting fitter.
How do you think sensors and wearable technologies can improve people's health?
Gandhi: I think the great thing about sensors is that they are passive in nature, which means they do not make users conscious of being tracked. The user can therefore fully concentrate on the activity. In contrast, with camera-based devices, as soon as the user becomes conscious of being tracked, their behaviour changes. This creates confusion on the user's side about whether to concentrate on the health aspect or the privacy aspect.
The metrics generated by the sensors are also very insightful for users and for improving their health, because they are fine-grained and exact and can even give a prediction about the "future me". For example, if you constantly do activities that involve movements of the back, there is a certain risk of back pain. The sensor can measure whether there actually is a risk of injury, or whether the user does activities that involve both the abs and the back so that they balance each other out. Overall, this means a system comprising a higher-level application and the sensor can give very good recommendations built on this data basis.