Dr. Marco Feuerstein

Interview with Dr. Marco Feuerstein – Augmented Reality in Laparoscopic Surgery

In his dissertation at the Chair for Computer Aided Medical Procedures & Augmented Reality, TU München, Germany, Dr. Marco Feuerstein addressed the topic “Augmented Reality in Laparoscopic Surgery – New Concepts for Intraoperative Multimodal Imaging”. He submitted his thesis in 2007 and continued his research activities in Japan at Prof. Kensaku Mori’s laboratory. Today, he is a co-founder of the spin-off microDimensions. This interview took place in Munich, Germany, on August 2, 2011. (German Version)

Hello Dr. Feuerstein! Please give us a short summary of your dissertation.
I developed a system that uses an endoscopic camera to see into the liver without cutting the organ open. The system visualizes vessels of the organ that have to be cut or clipped during an operation. An exemplary application of the system could be preoperative planning.

How did you get the idea to do a doctorate about this topic?
Actually, I had already approached the topic in my diploma thesis. At that time, the plan was to improve heart surgeries that were performed using a human-controlled robot equipped with endoscopic cameras (the DaVinci® robot). We tried to find out how to technically augment the system in order to see a little more than what the video images provided by the endoscope cameras show. The project had been started at the German Heart Center Munich, TU München, and was continued at the trauma surgery department of the Chirurgische Klinik und Poliklinik – Innenstadt des Klinikums der Universität München. In conjunction with the move to the Klinikum Innenstadt, we also shifted our focus to another body region: we then addressed abdominal surgery in cooperation with the local visceral surgeons.

How did you get in contact with surgeons at Klinikum Innenstadt?
We went to the surgeons and proposed the research topic. The surgeons were immediately interested, in particular Dr. Sandro Michael Heining. He knew the hospital’s specialist in laparoscopy, Dr. Thomas Mussack, and recommended him to us.

What is the motivation for combining Augmented Reality and Endoscopy?
In particular in liver surgery, which I chose as an exemplary application, it is very important to visualize structures beneath the surface of the organ in order to avoid potential complications. For instance, one could accidentally cut a vessel. In this case, the procedure has to be converted from keyhole surgery to open surgery. This is, of course, quite disadvantageous for the operation and the therapy and means unnecessary additional pain for the patient. So the visualization of hidden and occluded structures in particular is important. An improved quality of the camera images comes for free: the images of the endoscope camera are usually distorted due to the fish-eye lens. When applying augmented reality, real and virtual camera images have to be superimposed exactly, so the undistortion has to be calculated anyway. The resulting images then get a more natural appearance.
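For readers who want to see what such an undistortion step looks like in practice, here is a minimal sketch in C++ with OpenCV. It is not code from the dissertation; the calibration parameters are assumed to be known already (e.g. from a checkerboard calibration with cv::calibrateCamera), and the variable names are illustrative.

    #include <opencv2/opencv.hpp>

    // Minimal sketch: remove the lens distortion of an endoscope frame once the
    // camera has been calibrated. K holds the intrinsic parameters, dist the
    // distortion coefficients; both are assumed to be known here. The same
    // intrinsics are then used for the virtual camera that renders the overlay,
    // so real and virtual images line up.
    cv::Mat undistortFrame(const cv::Mat& frame, const cv::Mat& K, const cv::Mat& dist)
    {
        cv::Mat undistorted;
        cv::undistort(frame, undistorted, K, dist);
        return undistorted;
    }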

You also thought about the optimal integration of your system into the surgical workflow.
With my “registration-free” approach, instead of the patient we optically track (i.e. localize in 3D) a mobile X-ray device, which generates a 3D data set of the patient. Using this idea, one can avoid one registration step, and all tracked devices such as the laparoscope camera, the X-ray device, and further endoscopic instruments are handled within a single coordinate system. The technique allows the acquired 3D data to be projected directly onto the patient.
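To make the geometry behind this concrete: because the X-ray device (C-arm) and the laparoscope are tracked by the same optical tracking system, overlaying a point of the intraoperative 3D data set onto the camera image only requires chaining the tracked poses; the patient itself never has to be registered. The following C++/OpenCV sketch is not taken from the dissertation; the pose names, the hand-eye calibration matrix, and the plain pinhole model are illustrative assumptions.

    #include <opencv2/opencv.hpp>

    // Sketch of the transform chain in a "registration-free" setup.
    // T_a_from_b maps points from frame b into frame a. The tracker reports the
    // poses of the C-arm and the laparoscope in its world frame; T_lap_from_cam
    // comes from a one-time hand-eye calibration between the tracked laparoscope
    // body and its camera optics. All names here are illustrative.
    cv::Point2d projectPointFromCt(const cv::Matx44d& T_world_from_carm,
                                   const cv::Matx44d& T_world_from_lap,
                                   const cv::Matx44d& T_lap_from_cam,
                                   const cv::Matx33d& K,      // camera intrinsics
                                   const cv::Vec3d& p_ct)     // point in the 3D data set
    {
        // Chain the tracked poses: CT volume -> tracker world -> laparoscope body -> camera.
        cv::Matx44d T_cam_from_ct =
            T_lap_from_cam.inv() * T_world_from_lap.inv() * T_world_from_carm;

        cv::Vec4d p_cam = T_cam_from_ct * cv::Vec4d(p_ct[0], p_ct[1], p_ct[2], 1.0);

        // Pinhole projection into the (undistorted) laparoscope image.
        double u = K(0, 0) * p_cam[0] / p_cam[2] + K(0, 2);
        double v = K(1, 1) * p_cam[1] / p_cam[2] + K(1, 2);
        return cv::Point2d(u, v);
    }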

Which applications are appropriate regarding the accuracy of the system?
If the system is applied to procedures inside the body, one should try to be as accurate as possible. However, this strongly depends on the task. Our system operates with an accuracy of a few millimeters. Breathing and patient movement are always factors that work against accuracy. Physicians told us that the present accuracy is sufficient since they are able to mentally compensate for the body’s motion due to breathing. They know at which moment of the breathing cycle the superimposition of the data onto the organ is correct and at which it is not.

In addition, you developed an extension of the system that integrates ultrasound.
With laparoscopic ultrasound, in addition to the laparoscopic camera one inserts an ultrasound probe through a second port. Here, the probe is installed at the tip of a long instrument. The tip of the instrument can be rotated about two axes, which allows for a little more freedom of movement. The surgeon then touches the surface of the liver with the ultrasound probe and scans the organ. In contrast to the previously explained “registration-free” approach, the advantage of the ultrasound approach is the availability of live images. The “registration-free” approach only provides the surgeon with one snapshot at the beginning of the surgical procedure. Once the patient is moved, or breathing and other deforming events take place, the registration is not valid anymore. With the ultrasound approach, one tracks the camera and the ultrasound probe in real time, so patient movement does not affect the system’s quality. It is always possible to geometrically relate both imaging modalities to each other and visualize the resulting images in combination. The surgeon can see the ultrasound plane projected into the laparoscopic video images. For this reason, the physician does not need to mentally position the ultrasound image, which is usually presented on a separate monitor, within the body of the patient.
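As an illustration of that projection step (again a sketch, not the original implementation; the probe calibration, the pose inputs, and the simple alpha blending are assumptions), the tracked B-scan plane can be drawn into the camera image roughly like this:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch: overlay the tracked ultrasound B-scan onto the laparoscope image.
    // rvec/tvec give the pose of the ultrasound image plane in the camera frame
    // (from the optical tracker plus the probe calibration); K and dist are the
    // laparoscope intrinsics and distortion coefficients; mmPerPixel is the
    // ultrasound pixel size. usImg is assumed to be a single-channel grayscale
    // frame, lapImg a BGR video frame. All inputs are assumed to be known.
    void overlayUltrasound(cv::Mat& lapImg, const cv::Mat& usImg,
                           const cv::Vec3d& rvec, const cv::Vec3d& tvec,
                           const cv::Mat& K, const cv::Mat& dist, double mmPerPixel)
    {
        const float w = static_cast<float>(usImg.cols * mmPerPixel);
        const float h = static_cast<float>(usImg.rows * mmPerPixel);

        // Corners of the B-scan plane in its own (metric) coordinate frame.
        std::vector<cv::Point3f> corners = {
            {0.f, 0.f, 0.f}, {w, 0.f, 0.f}, {w, h, 0.f}, {0.f, h, 0.f}};

        // Project the corners into the laparoscope image.
        std::vector<cv::Point2f> proj;
        cv::projectPoints(corners, rvec, tvec, K, dist, proj);

        // Warp the ultrasound frame into the projected quadrilateral.
        std::vector<cv::Point2f> src = {
            {0.f, 0.f}, {float(usImg.cols), 0.f},
            {float(usImg.cols), float(usImg.rows)}, {0.f, float(usImg.rows)}};
        cv::Mat H = cv::getPerspectiveTransform(src, proj);
        cv::Mat warped, warpedBgr;
        cv::warpPerspective(usImg, warped, H, lapImg.size());
        cv::cvtColor(warped, warpedBgr, cv::COLOR_GRAY2BGR);

        // Simple alpha blend; a real system would add proper depth cues.
        cv::addWeighted(lapImg, 1.0, warpedBgr, 0.5, 0.0, lapImg);
    }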

Is there already a commercially available system that combines laparoscopy and augmented reality?
To my knowledge, there is no product so far. However, there are navigation systems for bronchoscopy, which one may call virtual reality systems. But those are by definition not Augmented Reality systems; in this case, one navigates through CT (computed tomography) data. In our project, we developed a prototype in cooperation with a company producing tracking systems. However, the project ended there. At that time, the big companies developing laparoscope systems were not ready to invest.

What is the missing part that prevents such a system from being applied in an operating theatre?
In particular, the integration of the hardware needs to become much easier. Endoscopes have to be equipped with a standard, sterilizable mount that allows optical marker spheres to be attached for an optical tracking system. In addition, the tracking cameras have to become standard equipment in operating theatres. These cameras must not hinder the surgical staff, yet they need unrestricted views of the tracked objects. This still requires a lot of engineering effort. In addition, one has to tackle the problem of handling patient motion, e.g. in the abdominal region. The easiest way to start applying the system would be to use it in surgeries that deal with rigid anatomy, for example in orthopedic surgery.

Your prediction! When can we expect an AR system to be used in the operating room? Will it ever get there?
Yes, I guess so! In particular for orthopedic procedures, we will see such systems in action soon. In laparoscopy or in flexible endoscopy, it will take a little longer, I guess: not within the next 10 years.

What are the most promising medical application fields of Augmented Reality from your point of view?
Regarding intraoperative applications, I see potential when dealing with rigid body regions. In addition, medical education and training can benefit from simulation systems based on AR technology. I do not see too many advantages for diagnostic tasks, e.g. those based on CT data.

How can I prepare myself if I want to start doing research in this field?
It is essential to have a solid background in math, in particular geometry, in order to understand camera geometry. This is part of the research domain of computer vision, which aims to teach a computer how to see and understand its environment. Medical knowledge can be acquired by talking to physicians or by reviewing the literature if needed. Here it is important to be proactive and to contact physicians. You should be skilled in computer science as well; it is very important to know how to program in C++ for medical image processing. If somebody wants to continue with my project, they should contact Prof. Navab.

Which follow-up projects are available?
One important task would be the evaluation of the system. We completed an initial study using a body phantom. Here, one could start a follow-up study using a better phantom, one that is deformable and more appropriate for generating realistic ultrasound images. The body phantom could be built in-house, or one could imagine collaborating with a company that has experience in building anatomical manikins. In addition, a successor could evaluate alternative tracking options that allow for higher accuracy and fewer disturbances in the tracking data. Even though electromagnetic tracking systems tend to produce more errors, one could push this tracking approach further and find better algorithms; one may even think of exclusively using electromagnetic tracking. The ultrasound component of the system can also be optimized by using an ultrasound device that produces fewer errors. Another task would be the integration of the system into the surgical workflow and working environment. Of course, there are also many scientific problems waiting for a solution. Regarding visualization, it would be interesting to implement a bird’s-eye view in order to provide the physician with an overview for understanding the relative positions of the instruments. It is also very important to find a sophisticated presentation of the ultrasound plane in the camera view. The current approach of naively superimposing the camera image and the ultrasound plane is not optimal yet, since depth perception of the ultrasound image is difficult. One may think of a virtual window installed on the surface of the organ that reveals the view into the organ and the ultrasound plane. The most important issue when starting a project is always the discussion with the surgeons in order to find out what they expect from an Augmented Reality based view into the body.

Which were the most important steps of your project?
The most important step was the development of the software. Here, we started from scratch, and many things that looked trivial at first turned out to be quite complicated. It was really nice when we finally managed to superimpose a virtual camera image onto a real one. Unfortunately, at the beginning of the project at the heart center, we had problems getting an interface from the robot manufacturer for receiving the robot’s tracking data. There is no point in installing an additional tracking system when the tracking information is already there but cannot be provided due to company policy. The reorganization of the project with the new focus on laparoscopy consumed some time. Then my supervisor wanted to start an animal study. It was quite laborious to organize that experiment and to get all the surgeons together. At the second attempt, everything went fine and we had an experienced veterinarian in our team. It was a very intense experience; however, in the future I would rather go for more extensive lab tests using better body phantoms. The animal study showed that the system works in general, but there is still a lot of room for improvement. That optimization can be achieved using phantom models; there is no point in conducting further animal studies at this stage.

Which were the most important factors, when working on your subject and finally completing the dissertation?
The most important factor was my personal interest in Augmented Reality and Visualization. At that time, Prof. Klinker sent me to Prof. Navab since she found that he was the optimal supervisor for a project in the research field of Medical Augmented Reality. Furthermore, there was a really nice working atmosphere at Prof. Navab’s chair. In addition, it was very important for me to be engaged in a project with the objective of helping patients, and even saving lives. A big advantage for my work was the chair’s strong collaboration with various physicians and the labs it had installed at the local hospitals. Working at those labs helped me not to lose the focus of the project and allowed me to discuss with physicians their expectations of the system on a daily basis.

Which companies were open minded regarding your subject and supported you with hardware?
Advanced Realtime Tracking GmbH provided us with tracking cameras, and the company Storz gave us a new endoscope system. The endoscope system could be arranged thanks to our surgeons, who receive medical equipment from that company for their daily surgical tasks. Both companies are very communicative and open-minded. Unfortunately, they were not yet ready to sponsor my position as a PhD student. I guess, at that time, the subject was still too far away from application in real working environments.

What are you doing now?
Already at the end of my time as a PhD student, I thought about different options for starting a spin-off. When I came back to Germany after my stay in Nagoya, Japan, Dr. Martin Groher and I decided to start our own venture outside of the university. At that time, Dr. Martin Groher was a member of a research project that had a very good industrial partner. We found that we could develop the existing prototype into a marketable product. Dr. Hauke Heibel was also excited about the project and became our third team member. Together, we submitted a proposal to get financial support for 1.5 years through the governmental program EXIST Forschungstransfer (website: microDimensions).

Start-up project microDimensions: The project addresses microscopic samples that can be viewed through a light microscope. Such samples are cut into thin, light-transmissive slices. When cutting these 3D samples, cracks, folds, or deformations can occur. For this reason, it is difficult to reassemble these 2D slices into the original 3D sample. We develop software to automate this cumbersome and time-consuming manual process. For example, this is interesting for the analysis of vascularization: Where are the vessels, what is their volume, and what does their branching structure look like? These findings are interesting for pharmaceutical research in oncology, because there are usually many vessels growing around tumors that feed the tumor tissue with nutrients. Currently, we are programming a lot and have finished a first prototype with a graphical user interface. In the near future, we plan to present our software at one or two conferences. In September, a fourth person will join our team to take care of business-related tasks, e.g. the optimization of our business plan. Among other activities, we plan to participate in a business plan competition, e.g. MBPW or science4life. It is a lot of fun to see a project not only from the scientific point of view but also from an economic perspective.

Thank you for the interview!

(Videos of Dr. Feuerstein’s publications: video 1, video 2)

 

Christoph Bichlmeier

Enthusiast of Augmented Reality for Medical Applications.
