Exhibition Combining an Anatomical Wax Cabinet and Art

Are you looking for inspiration to design Augmented Reality based views into the body and to visualize anatomy? Then take a look at historical approaches to achieving appealing, instructive views into the human body.

Recently, the exhibition Blicke ! Körper ! Sensationen ! Ein anatomisches Wachskabinett und die Kunst (Views! Bodies! Sensations! An Anatomical Wax Cabinet and Art) opened its doors in Dresden, Germany.

The exhibition is based on a collection that was established around 1900 in Dresden and is now presented in a museum for the first time. For decades these pieces were shown at fun fairs to educate the audience about health care, but also for pure amusement. (See also the press release, which is in German only.)

You can visit the exhibition from 11 October 2014 until 19 April 2015 at the Deutsches Hygiene-Museum in Dresden.

Learning from Artists How to Design the Immersive View

When I started developing Augmented Reality views into the body, I was greatly inspired by artists who have designed such views in different media formats. One of the most exciting artists in this field is Alexander Tsiaras, founder, CEO and Editor-in-Chief of TheVisualMD.com, who creates beautiful illustrations of the interior of the human body.

Recently, I stumbled upon an image of a man whose entire head is covered with an anatomical tattoo showing parts of the skull, the brain and muscles. While this is an exceptional way to tell the public that he fell in love with the subject of anatomy, I found it to be definitively great artwork. A less durable approach is presented by Danny Quirk, a body painting artist from Springfield, MA, USA, who “creates body paintings with latex, markers and some acrylic that appear as if his models’ skin is peeled back.” (see the full article at smithsonian.com)

Here is another example of an anatomical body, part of the traveling exhibition Body Worlds.

Transferring the artists’ knowledge and skills regarding the design of immersive views into the body to the computer graphics and image composition of Augmented Reality scenes is certainly an important task for creating appealing as well as useful Augmented Reality experiences in the health care sector. Of course, these visualizations need to be adapted to the designated application, such as teaching and training or surgical navigation.

Christoph Bichlmeier

Enthusiast of Augmented Reality for Medical Applications.


German Science TV Show Introduces Magic Mirror for Visualizing Anatomy In-Situ

The German science TV show “Wissen Vor Acht” has introduced the prototype system mirracle of TU Munich, Germany.

Now, a commercial version, the “medicalAR Mirror”, is available, which allows this Augmented Reality interface to be used in places and environments outside the research context of TU Munich.

The company medicalAR UG (haftungsbeschränkt) offers software development services to adapt the capabilities of the “medicalAR Mirror” to meet customers’ special requirements.


Interview with Terry Peters – Bringing Research to Clinical Practice

Terry Peters, who runs the Imaging Research Laboratories at the Robarts Research Institute of the University of Western Ontario in London, Canada, gave an impressive keynote talk at ISMAR 2014 in Munich. This year, Terry Peters received the 2014 MICCAI Enduring Impact Award for pioneering contributions in CT imaging and for bringing computer-assisted intervention (CAI) techniques to clinical practice. His home base, the Robarts Research Institute, is one of the leading research institutions in Canada, pushing basic and clinical research to find innovative solutions “to bring medical discoveries and new technologies faster to clinical trial, faster to market, and, ultimately, faster to you”.

In his keynote talk “The Role of Augmented Reality Displays for Guiding Intra-cardiac Interventions” at ISMAR 2014, Peters introduced a guidance platform, which he calls an “Augmented Virtuality System”, to be used for the minimally invasive “repair of a mitral-valve leaflet, and the replacement of aortic valves”. The following videos show animations explaining the standard procedures for mitral valve repair and Transcatheter Aortic Valve Implantation (TAVI).

Regarding the repair of the mitral valve, the platform is used for tracking a 3D trans-esophageal echo (TEE) probe, mostly showing images in the bi-plane mode, as well as additional surgical instruments. The key aspect of fusing imaging data and instrumentation is the augmentation of the ultrasound images “with virtual elements representing the instrument and target”, namely the aortic and the mitral annulus. In a number of user studies, the research team around Terry Peters has shown the advantages of their system with respect to guiding instruments to the operation site. Measurements of the “total tool distance from ideal pathway”, the time to navigate the instruments, and the “total tool path lengths” demonstrate better navigation performance when applying their system. In addition, entering “unsafe zones in the heart” could be avoided.
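To make these evaluation metrics concrete, here is a minimal sketch of how “total tool path length” and the deviation from an ideal straight-line pathway could be computed from a tracked tool trajectory. This assumes NumPy and an N x 3 array of tool-tip positions; the function names are my own illustration, not part of Peters’ platform.

```python
import numpy as np

def path_length(positions):
    """Total length of a tracked tool trajectory (N x 3 array of positions)."""
    steps = np.diff(positions, axis=0)          # displacement between samples
    return float(np.linalg.norm(steps, axis=1).sum())

def distance_from_ideal(positions, start, target):
    """Mean perpendicular distance of each trajectory sample from the
    straight line (the 'ideal pathway') joining start and target."""
    direction = target - start
    direction = direction / np.linalg.norm(direction)
    rel = positions - start
    # subtract each point's component along the ideal line to get
    # its perpendicular offset from that line
    perp = rel - np.outer(rel @ direction, direction)
    return float(np.linalg.norm(perp, axis=1).mean())
```

For both metrics, lower values indicate more direct navigation, which is the kind of improvement the user studies report.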

Aortic valve replacement is usually performed under fluoroscopy imaging. The main objectives are navigation to the operating site and positioning of the artificial valve. In addition to the problem of high radiation doses, the target area is hardly visible under x-ray. Peters’ team again used cardiac phantoms to evaluate their ultrasound approach, which had already shown promising results for mitral valve repair. The purely ultrasound-driven navigation showed similarly accurate results to the x-ray guided approach.

Augmented Virtuality

Image courtesy of Terry Peters

In an email interview, I got the chance to gain some more insight into Terry Peters’ research and visions, which I would like to share with you.

Prof. Peters, can you describe some of the highlights of your current research projects?
Current research projects span image-guidance in the brain, spine, abdomen and thoracic cavity as well as the heart. Highlights include the ability to use multi-parametric MRI to predict abnormalities at the cellular level when identifying pathological tissue involved with epilepsy (while not directly related to Augmented Reality, this information will be displayed in an AR-like environment to surgeons in the OR); the results presented at the conference (author’s note: ISMAR 2014) that demonstrated the role of AR in improving the speed and safety of the cardiac valve repair procedure; the evaluation of an AR environment to plan brain-tumor removal; and the development of an ARF-enhanced system for ultrasound-guided spinal injections. Key people involved in this project were John Moore, Dr. Feng Li, Jonathan McLeod for Cardiac; Dr Ali Khan, Dr Maged Goubran, Diego Cantor for Neuro; Dr Kamyar Abhari, Dr Roy Eagleson, Dr Sandrine deRibaupierre for Neurosurgical simulation; Dr Elvis Chen and Golafsoun Ameri for Spinal interventions.

What are the key features that a system needs to provide with regard to guiding an instrument to the operation site vs. positioning an implant?
Many procedures can be broken up into navigation and guidance components. Navigation requires that the context of the target be provided to the image-guided platform via registration with some pre-operative data – performing the role of a GPS. But since many targets are moving and deformable, one should rely on intra-operative imaging once it has been determined that the therapeutic tool has been safely navigated to the vicinity of the target. The key differences are that the former needs to provide a broad view of the scene that allows the operator to avoid danger zones if applicable, and guides the surgeon to the vicinity of the target safely, while the latter is a narrow view that accurately portrays the target in real time.
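The “GPS”-style registration step Peters mentions is commonly realized as landmark-based rigid registration between pre-operative image coordinates and the tracker’s coordinate frame. A minimal sketch using the well-known Kabsch/Procrustes method follows; it assumes NumPy and corresponding 3D landmark pairs, and is an illustration of the general technique, not Peters’ actual implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping the src landmarks onto
    the dst landmarks (both N x 3 arrays), via the Kabsch method."""
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    # SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(src_centered.T @ dst_centered)
    # guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In practice the residual error after this fit (the fiducial registration error) is monitored, since a poor fit would misplace all the virtual overlays.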

How would you approach clinical evaluation of a system like your cardiac guidance platform and show its benefit for the patient and the surgeon?
We begin with extensive animal trials that allow procedures to be performed with and without AR, and compare various metrics such as time, tool path-length and safety. These results would form part of an application to FDA (CE mark) for regulatory approval. To evaluate clinically, one would like to see a trial where half the patients were operated using the AR system, while the other half were treated using the current standard of care without the AR augmentation. Evaluation would seek data on time, tool path-length and danger zone incursion, as well as looking at multi-year follow-up of patients. Having said this, I think it would be very difficult to set up such a trial because of logistical and expense issues. One would probably instead look at a meta study that collected data from many different sites, where some used the technique and others did not.

In one of your papers from 2006 you state that the “greatest challenge to the successful adoption of image-guided surgery systems relates not to technology, but to the user interface.” Can you describe a strategy for designing a suitable UI for a given procedure, and for successfully integrating this UI into the clinical workflow?
The strategy we are following is, in consultation with the surgeon, to develop several alternative approaches and have the surgeon take each through its paces. Each system can be evaluated in terms of its efficiency and accuracy in guiding an instrument to its target. It is also important to evaluate the UI in terms of its cognitive load via a rating scale such as the NASA TLX scale. One thing we learned was that building a system to the initial specifications of a surgeon is probably NOT going to result in a system that provides the data they need without overloading their cognitive channels. Because an operating room environment is already a highly stressful one, additional information needs to be kept simple.
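As a side note, the NASA TLX scale Peters refers to combines six subscale ratings (mental, physical and temporal demand, performance, effort, frustration) into a single weighted workload value, with the weights derived from 15 pairwise comparisons between the subscales. A minimal sketch of that computation (illustrative only; the function name is my own):

```python
def tlx_score(ratings, weights):
    """Weighted NASA-TLX workload score.

    ratings: dict mapping subscale name -> rating on the 0-100 scale.
    weights: dict mapping subscale name -> number of times that subscale
             was chosen in the 15 pairwise comparisons (weights sum to 15).
    Returns the weighted-average workload on the 0-100 scale.
    """
    total_weight = sum(weights.values())  # 15 in the standard protocol
    return sum(ratings[k] * weights[k] for k in ratings) / total_weight
```

A lower score for one UI variant than another, on the same task, suggests the variant imposes less cognitive load.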

In the same paper from 2006 you stated that “It ultimately must be possible to operate computer systems in the OR without the use of a keyboard or complicated switching devices.” Which human-computer interfaces will make their way into the operating room in the next 10 years?
This is a tough question. Voice commands and gestures have been tried without much traction. It could be that some combination of eyewear like “Google Glass” or an equivalent, coupled with a simple gesture detector or even skin-mounted electrodes detecting neuromuscular signals, could be used to select between a limited number of menu items on an unobtrusive HMD (Head Mounted Display). A significant challenge will be simply getting a surgeon to accept the use of such an HMD in the OR.

Which roles will robots take over in the operating room, and in health care in general, in 10 years?
Robots are already used extensively for many surgical procedures with the highest prevalence being the removal of the prostate, but the da Vinci robot is used in dozens of other procedures as well. While the data on long term improvements in patient outcome are still sparse, the use of the robot, often in conjunction with image-guided procedures, certainly decreases patient trauma and length of hospital stay. Note that none of these procedures use autonomous robots – they are all master-slave devices – and I do not expect this to change in the next 10 years. A potential role for an autonomous robot is however to replace some personnel functions in the operating room. I am aware of a robotic “scrub nurse” which can ensure that the surgeon is handed the correct instrument for the stage of the surgical procedure. Robot data will also be used increasingly to evaluate surgical skills and errors.
