In my view, and others may disagree, the full immersion of a user into an Augmented Reality world requires two levels of user interfaces. The first is designed along with the hardware that technically enables the fusion of virtual and real objects. The second level allows the user to interact with this composed world. Control devices on this second level can in turn become part of the Augmented Reality scene, either as virtual or as real input devices. The design of both UIs needs to be driven by the required interactive tasks, the workflow, and the working environment of the designated application.
When Augmented Reality supports surgical applications, the designers of these user interfaces need to take sterility into account. Around the surgeon there is a sterile area in the operating theatre, and anything inside or entering this area must remain sterile. The introduction of a navigation system that lies partially outside this sterile area, e.g. the tracking cameras or the computer running the navigation software, can therefore become a severe problem if the surgeon is to take full control of the system. Pressing a button on the PC's keyboard to change a certain view of the navigation, for instance, can become an impossible task for the operating surgeon. He would need to ask a nurse working in the non-sterile area of the operating theatre to make that adjustment. When the system allows for more complex configurations, frustration due to communication problems becomes inevitable: “Press that button! NO! The other one!”, “The left one?”, “Right!”, “Now, ok?”, “NO, I meant the right button!”
Researchers have introduced pedals, voice recognition, touch interfaces, gesture tracking (e.g. with the Kinect), and more, but each of these interfaces has its drawbacks. In most cases they claim space, the surgeon's hands, or the surgeon's voice, all of which are typically occupied already by other tasks.
Tomas Brusell, dental surgeon and CEO of Brusell Communications, is developing an input device that belongs to this second level of user interfaces when it becomes part of an Augmented Reality application.
Tomas, can you describe your input device?
Lipit is a patented, truly hands-free, voice-free and mobile input device, and it would fit the bill for Health Tech biosafety improvements. It is an extraoral mini touchpad, touched by the lip, that gives full control of the ICT.
Tell me more about your ideas on how to apply it in clinical scenarios and in combination with Augmented Reality!
I know what I can do with Lipit, behind the face mask, in the rather safe dental environment, and I have a vivid vision of what could make Lipit a preferred UI in emergency situations. Post mortem, the surface of an Ebola patient's body is high-risk material, and touching anything in the vicinity of an infected person is a question of high-alert mobility planning. In combination with emerging Augmented Reality devices, Lipit will make information and communication more intuitively accessible. Many apps are conceivable for Augmented Reality goggle technologies, but input is obviously a hassle. With our device there is no need to finger a touchpad or screen, and no embarrassing gestures or foolish talking to the machine, to achieve 100% biosafety. Regarding the treatment of Ebola patients, with Lipit combined with Augmented Reality, e.g. transparent (HUD) screens, you are still able to concentrate on what you touch and on the humans around you, and you can talk to colleagues in a normal way. You can fully focus on what is best for patient treatment and hygiene control. I'm convinced that our technology in combination with AR technology can help medical personnel do a better and more biosafe job.
Have you already tested it in a clinical environment? Is it available on the market yet?
At my clinic I have tested Lipit for 15 months; the technology is stable, with no Bluetooth or software bugs. Android and Windows PC applications are available, and the prototype is ready for production, both as a standalone device and in combination with Augmented Reality technologies. It took us almost 10 years to take the project from my initial idea to a working prototype, and I'm just about to reach out and declare that our technology is now ready for industrialization and that we are ready to talk licensing and cooperation with an AR/VR player. Currently I'm looking for the right individuals at the emerging Augmented Reality forefront: people who understand how crucial the UI is for optimal convenience and precision in Augmented Reality control, people who design and develop the protection gear, and people who supply the Augmented Reality technology.
Thank you for your time!