Interview with Jörg Traub, SurgicEye GmbH – Augmented Reality for Intraoperative Imaging and Navigated Surgery

Dr. Joerg Traub is one of the co-founders and managing director of SurgicEye GmbH. As CEO, he is responsible for general management, team development, business development, production, marketing and sales. During his Computer Science studies at Technische Universität München (TUM), Dr. Traub worked for several high-tech companies in Germany, the USA and Singapore, including BMW, Siemens and multiple internet start-up companies. In 2005 he received the Werner von Siemens Excellence Award for his exceptional thesis in the field of novel visualization concepts for minimally invasive surgery; in 2008 he received his Ph.D. in medical imaging and navigation with highest distinction. Dr. Traub coordinated the Image Guided Surgery Group at the Department „Computer Aided Medical Procedures“ at TUM, where he prepared several industrial and clinical cooperations. He is author or co-author of over 40 international papers and eight patent applications. Under his supervision, several new image guided surgery systems were designed, developed and evaluated.

Your company is selling a navigation system that uses Augmented Reality technology. Please give us a short introduction to your product, declipseSPECT.

Image courtesy of SurgicEye GmbH

Our system generates radioactive SPECT imaging data and visualizes the data in situ, i.e. at the location where the data has been measured. In fact, we never tried to sell a pure Augmented Reality device. We rather integrated Augmented Reality technology into our system declipseSPECT. Here, we use a gamma probe to measure the radioactive radiation emitted by technetium that has been injected into the anatomical region of interest before surgery. The gamma probe is essentially a Geiger-Mueller counter able to measure medical radioactivity. The radioactive substance accumulates in particular in tumorous tissue. The gamma probe is then used to scan the patient.

In addition to this imaging system, we integrated a navigation system into our product. For this reason, we measure the position of the probe during the scanning procedure. This part of the workflow uses Augmented Reality to show the scan region augmented with the scan result. The Augmented Reality views are created from the perspective of a video camera rigidly mounted on top of the optical tracking system. The visualization of the scanned radioactivity is then superimposed onto the video images. Since the camera is rigidly attached to our system, it only allows for 2D views from a fixed position; rotating the patient or the system to obtain additional views showing the 3D structure of the measured radioactivity is not practical. For this reason, we developed a second, Virtual Reality based visualization mode that allows for 3D views. Here, we control the position of the data volume storing the measured radioactivity as well as the position of the virtual camera using the tracked gamma probe that acquired the data before. In this view mode, the virtual camera is positioned at the tip of the gamma probe.
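To illustrate the overlay step described above, here is a minimal sketch (not SurgicEye's actual code) of how reconstructed activity positions, expressed in the tracking system's coordinate frame, could be projected into the image of the rigidly mounted video camera. It assumes a calibrated pinhole camera and a known rigid transform between tracker and camera; all names and numbers are illustrative.

```python
import numpy as np

def project_activity(points_tracker, T_tracker_to_cam, K):
    """Project 3D activity points (N, 3) from tracker coordinates into pixel coordinates.

    T_tracker_to_cam: 4x4 rigid transform (from the tracker-to-camera calibration).
    K: 3x3 pinhole intrinsic matrix of the video camera.
    Returns pixel coordinates (N, 2) and a mask of points lying in front of the camera.
    """
    n = points_tracker.shape[0]
    homogeneous = np.hstack([points_tracker, np.ones((n, 1))])   # (N, 4)
    cam = (T_tracker_to_cam @ homogeneous.T).T[:, :3]            # points in camera frame
    visible = cam[:, 2] > 0                                      # only points in front of the lens
    projected = (K @ cam.T).T
    pixels = projected[:, :2] / projected[:, 2:3]                # perspective divide
    return pixels, visible

# Example: overlay one high-activity spot onto the current video frame.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                                  # illustrative intrinsics
T = np.eye(4)                                                    # illustrative extrinsics
hot_spots = np.array([[0.02, -0.01, 0.30]])                      # meters, tracker frame
pixels, visible = project_activity(hot_spots, T, K)
print(pixels[visible])
```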

Which surgical applications can benefit from your system?
This type of radioactive imaging is currently used for sentinel lymph nodes in breast cancer, skin cancer (melanoma), and sometimes in the head and neck region, e.g. tongue carcinoma. Sentinel lymph node biopsy is rather an introductory application, since it works well and the use of radioactivity is widely established there. At the moment, we are starting to test our system for finding primary breast tumors, which is actually our core application. Here, radioactive imaging has not been established yet; one still needs to slightly adapt the clinical protocol in order to apply radioactivity. About 20 to 25% of all surgeries need subsequent interventions to complete the tumor resection. A recently published meta-analysis, gathering results from multiple studies, states that the application of radioactivity reduces the need for additional surgeries to entirely resect tumorous tissue to 8-10%. That means that instead of every 4th patient only every 10th patient would need a subsequent surgical treatment, which is a big step forward in this medical field.

Would the application of your system cause a delay in surgery time?
Currently it does, which can be explained by a certain training effect. However, in our case studies applying the system to the resection of melanomas, surgeons reported that they could save time, since they immediately knew where to cut thanks to our visualization methods. We need 1.5 minutes to generate imaging data while moving the gamma probe to scan the region of interest. Then, our reconstruction algorithm, which computes a 3D image from the scanned data, needs an additional 30 seconds. In April, we released a new algorithm that allows 3D reconstruction in almost real time (two seconds) to create previews of slightly lower quality than our optimal result. This preview was integrated because we experienced that surgeons get impatient when they have to wait for the images to be displayed after the scanning procedure. The high-quality reconstruction is then computed in the background and displayed as soon as the original algorithm has finished (after about 30 seconds). In a standard breast surgery, we estimate a delay of three to five minutes. However, we can reduce the invasiveness of skin incisions. For this reason, the procedure of targeting and finding the tumor needs less time; there is no need for digging through tissue to detect those tumors. Regarding lymph nodes, we could find tumors that had not been detected by standard techniques. Currently, we are running a study at Klinikum Rechts der Isar, TUM München to further explore this effect.
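The preview-then-refine behavior described above can be sketched roughly as follows (purely illustrative, not the product's actual software): a fast, coarse reconstruction is shown immediately while the slower high-quality reconstruction runs in the background and replaces the preview once it is ready. The functions, data, and timings below are placeholders.

```python
import concurrent.futures
import time

def reconstruct_preview(readings):
    # Fast, coarse reconstruction shown immediately (placeholder computation, ~2 s).
    time.sleep(2)
    return {"quality": "preview", "n_readings": len(readings)}

def reconstruct_full(readings):
    # High-quality reconstruction computed in the background (placeholder, ~30 s).
    time.sleep(30)
    return {"quality": "full", "n_readings": len(readings)}

def display(volume):
    print(f"Displaying {volume['quality']} reconstruction "
          f"from {volume['n_readings']} probe readings")

readings = [0.0] * 1000  # stand-in for the recorded (probe pose, count rate) samples

with concurrent.futures.ThreadPoolExecutor() as pool:
    full_job = pool.submit(reconstruct_full, readings)   # start the slow job in the background
    display(reconstruct_preview(readings))               # show the fast preview right away
    display(full_job.result())                           # swap in the full result when ready
```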

Who is controlling the device during surgery?
That depends on the field of application. Usually, there is one unsterile person controlling the system. Our newest release will allow operating the imaging system as well as the navigation system with only two buttons operated by a foot pedal. Actually, there is almost no interaction required anymore. Generally, user interaction is a big problem regarding the acceptance of navigation systems, in particular also of Augmented Reality systems. Interaction has to be intuitive! Today, navigation systems are not really accepted in neurosurgery and orthopedic surgery. One of the reasons is the need for a technical specialist to install, prepare, and operate the system. There are only a few cases where such navigation systems are used on a daily basis. Our very first system was just as bulky and too complex to be used by a surgeon. We have gathered lots of experience from studies at Klinikum Rechts der Isar, TUM München, in Würzburg, and in Amsterdam, which showed us that the system is acceptable under study conditions. However, when the system is to be used in daily clinical routine, the surgeon must not have to think about the extensive preparation of the system, which button to press to get a certain effect, etc. At best, the complexity of the technology is entirely hidden. However, it is not possible to avoid interaction entirely.

How many surgeons are currently using the system and how many patients have been operated?
Overall, we have sold 20 systems, 10 of which are currently in use. Some of the devices are used in nuclear medicine research. Most of the devices are applied in ENT surgery. Sentinel lymph node procedures in the head and neck region are not as common as in breast cancer and melanoma; however, the advantage there is bigger. There are three clinical centers using our system for breast cancer, one for sentinel lymph nodes, one for primary tumors, and two for multiple applications. We have customers in the EU, and already a few in the US and the Middle East.

Please describe the type of surgeon who benefits most from your system.

Image courtesy of SurgicEye GmbH

There is not only one answer to this question. We noticed that more experienced, older surgeons consider the new technology rather as a medium to teach their operation techniques to others, as long as we make sure that their thoughts and workflows are implemented in the system. Generally, younger surgeons are rather open-minded with respect to novel technology; they experiment with new gadgets and try to establish them. In particular, the interpretation of the Augmented Reality visualization comes more easily to the younger generation, which grew up with computer games. However, we strongly benefit from our collaborations with more experienced surgeons, since we do not want to take their knowledge away but to integrate it into standardized processes to be used by the next generations.

Were there major changes since the initial prototype?
There were a few minor changes. For instance, there was no Augmented Reality visualization at the beginning. It was added at one point as a gimmick. The production cost of that visualization mode was small compared to other components of the system, so we left it in as an innovative feature. We also extended the field of application of our system. At the beginning, we strongly focused on breast cancer. We understood that there is an advantage regarding the resection of the primary tumor. However, as I said previously, the use of radioactivity had not been established in the related treatment plan at that time, and it still has not been established. With respect to sentinel lymph nodes, the advantage is not big enough to justify investing here and applying the system. For this reason, we counted on interdisciplinary projects and analyzed melanoma surgery, ENT surgery, and breast surgery, but also interventions in urology. We simply tried to find surgical applications that can benefit from radioactive imaging and navigation.

Can you describe in which situations Virtual Reality is favored over Augmented Reality and vice versa?

Image courtesy of SurgicEye GmbH

This depends on the stage of the surgery. Augmented Reality is rather used for diagnostic tasks, in order to assess the position, distribution, and amount of radiation before the skin incision is made. This is the big difference to competing products, which only provide 2D visualizations and do not allow for a combined view of the patient and the location of the source of the imaging data. When it comes to navigating to the tumor, we noticed that surgeons preferred the 3D visualization mode (Virtual Reality), which allows the user to perceive depth information, i.e. how deep a certain structure is seated inside the patient and how deep, how long, and in which direction to cut.

Wouldn’t it be possible to track the scalpel cutting the tissue to display it in combination with the point cloud?
This has not been done due to regulatory affairs, since we cannot describe the exact deformation of soft tissue. We try to visualize tumors that move all the time, and the surrounding tissue is permanently being deformed as well. That is exactly the reason why we chose this kind of imaging modality. We do not believe that preoperative imaging data such as CT or MRI can be deformed in a way that matches our intraoperative situation. This is to some extent valid in neurosurgery, orthopedics, and trauma surgery, but it is never valid in surgeries addressing soft tissue. For this reason, we decided to work with intraoperative imaging. In fact, our data still reflects only a snapshot of the moment of image acquisition. As soon as we start cutting, the tissue gets deformed again. Thus, we do not track the tissue but the gamma probe. When no radioactivity is measured at the location of a previously assessed radioactive structure, we know that the tissue has been deformed and our current scan does not reflect reality anymore. Then we need to go back and start a new data acquisition process.
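The staleness check mentioned above can be sketched as follows (an illustrative simplification, not the vendor's algorithm): compare the live probe reading at the tracked tip position with the activity the reconstructed volume predicts there; a large mismatch suggests the tissue has deformed and a new scan is needed. All names, units, and the threshold are assumptions.

```python
import numpy as np

def expected_counts(volume, voxel_size, origin, probe_pos):
    """Look up the reconstructed activity (counts/s) at the tracked probe tip position."""
    idx = np.floor((np.asarray(probe_pos) - origin) / voxel_size).astype(int)
    if np.any(idx < 0) or np.any(idx >= volume.shape):
        return 0.0                                   # probe is outside the scanned region
    return float(volume[tuple(idx)])

def scan_is_stale(volume, voxel_size, origin, probe_pos, live_counts, rel_tolerance=0.5):
    """Flag the reconstruction as outdated when the live probe reading no longer
    matches the activity predicted at that location (e.g. after tissue deformation)."""
    predicted = expected_counts(volume, voxel_size, origin, probe_pos)
    if predicted == 0.0:
        return live_counts > 0.0                     # activity where none was reconstructed
    return abs(live_counts - predicted) / predicted > rel_tolerance

# Example: a 64^3 volume with 5 mm voxels; one reconstructed hot spot at voxel (32, 32, 10).
volume = np.zeros((64, 64, 64))
volume[32, 32, 10] = 120.0
origin = np.zeros(3)
if scan_is_stale(volume, 0.005, origin, probe_pos=[0.16, 0.16, 0.05], live_counts=5.0):
    print("Reading no longer matches the reconstruction - acquire a new scan.")
```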

Please tell us about your previous entrepreneurial milestones.
We founded the company in 2008. Until 2010, we strongly focused on R&D, regulatory affairs, and a few first studies with clinical partners. In 2011, we had quite good sales results, since we spent most of our time on the market launch, sales strategies, etc. Members of the development team also spent time on marketing issues and on improving the usability of the system. The first feedback from one surgeon was quite negative: he said that he could not work with our device and wanted to throw it away. We then spent lots of energy on improving the user interface to make the usage of the system more intuitive. This year, we invested in expanding our sales department, hired additional people, and reorganized our structures. For this reason, Thomas and I, as the heads of this venture, can again spend more time on R&D issues, securing IP, and finding new application fields, e.g. urology, laparoscopy, and the fusion of imaging data from different sources.

Where do you see your company in five years? Will there be more products and applications?
Of course! Either we will have further products of our own, or we will develop products and integrate them into another company's product, e.g. that of a company producing laparoscopes or ultrasound devices. We want to go further in the field of intraoperative imaging and navigation using Augmented Reality. Augmented Reality needs to be integrated into the workflow, the devices, and the working environment of surgeons. In particular, Augmented Reality needs to be integrated into imaging devices. CAMC (Camera Augmented C-Arm), from my research days at TUM, was a good example of an intelligent integration of Augmented Reality: one has imaging data that is embedded into an Augmented Reality and navigation environment.

Regarding CAMC, Augmented Reality has been technically integrated into the C-arm, an X-ray imaging system. Your solution requires a new cart to be moved into the operating theatre. What is your vision of the future operating room?
Using a cart is rather a temporary solution for all manufacturers, because interfaces between solutions are difficult to handle in the surgical environment. There are lots of discussions about open interfaces. However, regulatory issues stop those approaches. In the end, there is always someone who has to assume liability that the device works. In a perfect world, I would throw the cart out of the operating room. I can imagine attaching navigation and imaging devices to the ceiling and presenting data on one of the five to six monitors that are already installed in the OR. Take our device as an example of how difficult it is to interface with one of those monitors: in the end, someone has to guarantee that my visualization is not falsified by interpolation, distortion, or changes of color and contrast ranges. As long as you work with a closed system, you can control all parameters. However, as soon as the system is connected to another system, one needs to define all interfaces in a complete and rigid way, which is in reality still not possible. Everybody is complaining about having to deal with a huge pool of equipment parked in the corridor in front of the OR, but this situation will not change unless there is a major change in the structure of regulatory standards. My preferred image of the OR of the future is taken from the movie „The Island“: one scene shows a liver resection using robot arms, and all devices of the operating environment are seamlessly combined. This scene exactly reflects my vision of future ORs.

Thanks a lot for your time.

Christoph Bichlmeier

Enthusiast of Augmented Reality for Medical Applications.
