What do you need to use the application?
Noll: You need a conventional ultrasound machine, an AR headset and a PC to run the required calculations via our software. Every ultrasound machine typically has a central processing unit, but you need access to that system. Alternatively, you can use a separate PC to feed the data directly to the AR headset.
One of the most important components in this setup is the tracking system, which computes the position of the ultrasound probe relative to the headset. This is essential for displaying the ultrasound image in place on the ultrasound probe, so that the image keeps the correct spatial position and orientation in the user's field of view. Without this step, the ultrasound plane would float somewhere in space with no correlation to the ultrasound probe. For our purposes, we chose an external optical tracking system that covers both the ultrasound probe and the AR glasses. It uses two cameras to determine the position of the AR headset worn by the user and the position of the ultrasound probe, which is fitted with reflective tracking markers.
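To make the transform chain concrete, here is a minimal sketch in Python/NumPy, assuming the optical tracker reports the headset and probe poses as 4x4 homogeneous matrices in its own coordinate frame. All function names, the image-to-probe calibration, and the demo numbers are illustrative assumptions, not the project's actual software.

```python
import numpy as np

def pose(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from rotation R and translation t (mm)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def probe_in_headset(T_tracker_headset, T_tracker_probe):
    """Chain the two tracked poses: probe pose expressed in the headset frame."""
    return np.linalg.inv(T_tracker_headset) @ T_tracker_probe

def image_point_in_headset(uv, pixel_size_mm, T_probe_image, T_headset_probe):
    """Map an ultrasound pixel (u, v) into the headset frame so the image
    plane can be rendered anchored to the probe."""
    u, v = uv
    p_image = np.array([u * pixel_size_mm, v * pixel_size_mm, 0.0, 1.0])
    return (T_headset_probe @ T_probe_image @ p_image)[:3]

if __name__ == "__main__":
    # Hypothetical tracker readings: headset 1 m in front of the cameras,
    # probe 20 cm to its right.
    T_th = pose(t=(0, 0, 1000))
    T_tp = pose(t=(200, 0, 1000))
    # Hypothetical image-to-probe calibration, determined once, offline.
    T_pi = pose(t=(0, -40, 0))
    T_hp = probe_in_headset(T_th, T_tp)
    print(image_point_in_headset((64, 128), 0.3, T_pi, T_hp))
```

Without the inverse in `probe_in_headset`, the rendered plane would be placed in the tracker's frame rather than the user's, which is exactly the "somewhere in space" failure described above.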
What is the current status of the project?
Noll: Right now, the system is a demonstrator model that we hope to use in an OR demonstration laboratory later this year. The next step is to find the right technology partner to join us in advancing this technology or bringing it to market. To make this a reality, parts of the system would have to be integrated into or connected to an ultrasound machine.
What is the hidden potential of the AR system and some possible future developments?
Noll: A conceivable application would be to use artificial intelligence to automatically detect physical anomalies in the ultrasound image. These could be displayed live as virtual content in the physician's headset and highlighted with a border or arrows, for example. Another idea is to overlay useful real-time information for the user, such as the size or volume of the anomaly. I could also envision automated segmentation of specific structures during scanning, which could subsequently be displayed as a stationary 3D model in the presentation space. You could use this to create a 3D surface model of an organ, for example. The technology creates many opportunities and offers numerous conceivable scenarios and options for further development.
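As a rough illustration of the volume readout idea: assuming a per-frame segmentation mask and known pixel spacing plus frame spacing along the tracked sweep, the anomaly volume can be estimated by simple voxel counting. This is a minimal sketch under those assumptions; the function names and the voxel-counting approach are illustrative, not the team's implementation.

```python
import numpy as np

def anomaly_volume_ml(masks, pixel_spacing_mm, slice_thickness_mm):
    """Estimate anomaly volume by voxel counting.

    masks: sequence of 2D boolean arrays, one per ultrasound frame,
           where True marks pixels labelled as anomalous.
    pixel_spacing_mm: (row, col) size of one image pixel in millimetres.
    slice_thickness_mm: spacing between consecutive frames along the sweep.
    """
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    n_voxels = sum(int(np.count_nonzero(m)) for m in masks)
    return n_voxels * voxel_mm3 / 1000.0  # mm^3 -> millilitres

if __name__ == "__main__":
    # Hypothetical sweep: 30 frames, each with a 40x40-pixel anomaly region.
    frame = np.zeros((256, 256), dtype=bool)
    frame[100:140, 100:140] = True
    print(f"{anomaly_volume_ml([frame] * 30, (0.3, 0.3), 1.0):.1f} ml")
```

A figure like this could then be drawn next to the highlighted anomaly in the headset, updating as the sweep progresses.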