Segmentation of Medical Imaging Data Using Augmented Reality

Segmentation of volumetric medical imaging data such as CT or MRI data is the process of selecting structures and regions within such data. The result of this selection can usually be stored and then used for various kinds of secondary processing. One exemplary application building on segmentation is the computation of surface models around the contours of segmented structures. These models can then be used for 3D visualization of the imaging data, measuring structures in 3D, building VR surgery simulators, additive manufacturing of anatomical models, designing patient-specific implants, and much more.
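
As a rough illustration of that surface model step, a binary segmentation mask can be turned into a triangle mesh with a marching cubes implementation. The following is only a minimal sketch using scikit-image and a synthetic mask; the article does not name any particular library or mesh algorithm:

    import numpy as np
    from skimage import measure

    # Hypothetical binary segmentation result (a block stands in for an organ)
    mask = np.zeros((64, 64, 64), dtype=float)
    mask[20:44, 20:44, 20:44] = 1.0

    # Extract a triangle mesh along the 0.5 iso-surface of the segmentation mask
    verts, faces, normals, values = measure.marching_cubes(mask, level=0.5)
    print(len(verts), "vertices,", len(faces), "triangles")

The resulting vertices and faces can then be exported, e.g. for 3D printing or visualization.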

In standard segmentation software, manually controlled as well as semi-automatic tools help the user select and separate regions of interest in such data. These tools typically use grayscale values but also contour information to identify corresponding regions in the data. Although the user of such software deals with 3D information acquired from a three-dimensional subject, the patient, they generally work with standard computer interfaces, namely mouse and keyboard. Alternatively, 3D mice and haptic devices have been introduced to interact with the data. However, all these user interfaces still require a 2D monitor showing either 2D slices of the volume data or a projection of a 3D visualization of the data.
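
As a rough illustration of the grayscale-based part of such tools, a simple threshold selection can be sketched in a few lines of Python with NumPy (a generic sketch, not taken from any particular segmentation package):

    import numpy as np

    def threshold_select(volume, lower, upper):
        """Return a boolean mask marking voxels whose grayscale value
        lies within the interval [lower, upper]."""
        return (volume >= lower) & (volume <= upper)

    # Hypothetical example: pick bone-like intensities in a synthetic CT volume
    volume = np.random.randint(-1000, 2000, size=(64, 64, 64))
    bone_mask = threshold_select(volume, 300, 1900)
    print(bone_mask.sum(), "voxels selected")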

Now imagine the process of segmenting 3D structures from the 3D data were comparable to modeling pottery. What if one could hold the data in one's hand or place it on the table and then use scrapers and brushes guided with the other hand to model the regions of interest? What if we succeeded in transferring today's computer- and 2D-screen-based segmentation tasks to a more analog 3D working environment?

Dr. Takehiro Tawara follows this vision and presents an Augmented Reality approach that provides an intuitive manual 3D working environment for segmenting medical imaging data. The data set is registered with a standard optical tracking marker and rendered with volume rendering, giving a visual indication of the different regions in the data set. Thanks to this registration, the data can simply be positioned on the working desk in front of the user. The user holds an optically tracked, partially virtual stylus in one hand to point at regions inside the data set. Since the real part of that stylus is a Wii controller, its standard buttons can be used to change the function of the tool. The controller can, for example, be used to position cross sections through the volume data, i.e. the 3D data is cut and a slice view is shown at the cutting plane. In addition, the tip of a virtual extension of the stylus can be used to start a segmentation process at a 3D point inside the data. There, a so-called seed point is set for an adapted version of the dynamic region growing algorithm. This algorithm allows for a semi-automatic segmentation of regions enclosed by a common contour, e.g. a certain organ or vessel structure.
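
The adapted dynamic region growing algorithm itself is not spelled out here, but the basic seed-based idea can be sketched as follows. This is a minimal Python/NumPy sketch assuming a fixed intensity tolerance and 6-connected neighbors; the "dynamic" variant would adapt its acceptance criterion while the region grows:

    from collections import deque
    import numpy as np

    def region_grow(volume, seed, tolerance):
        """Grow a region from a seed voxel, accepting 6-connected neighbors
        whose intensity stays within `tolerance` of the seed intensity."""
        mask = np.zeros(volume.shape, dtype=bool)
        seed_value = int(volume[seed])
        queue = deque([seed])
        mask[seed] = True
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                        and not mask[n]
                        and abs(int(volume[n]) - seed_value) <= tolerance):
                    mask[n] = True
                    queue.append(n)
        return mask

In Dr. Tawara's system, the seed coordinate would come from the tip of the tracked stylus rather than from a mouse click, and the growth stops where the intensity criterion no longer holds, i.e. at the contour of the organ or vessel.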

The image in the background of Dr. Tawara's poster suggests that the AR scene can be presented on one of the AR head-mounted displays made by Vuzix. With Dr. Takehiro Tawara's system, the task of segmentation becomes very comparable to that of children painting in a coloring book. It suddenly becomes extremely intuitive. And that is what an optimal human-computer interface should be!