Patient-specific models of anatomical structures and pathologies generated from volumetric medical images play an increasingly central role in many aspects of patient care. A key task in generating these models is the segmentation of the anatomical structures and pathologies of interest. Although numerous segmentation methods have been developed, they often produce erroneous delineations that require time-consuming manual correction, which significantly hampers the use of the resulting 3D models in the clinic.
We present a 3D segmentation annotation and correction system based on hand gestures tracked in real time with an optical tracking device.
We have developed a new semi-automatic method for 3D segmentation in volumetric medical scans that uses natural input from the user. The input consists of hand motions and gestures tracked in real time with the Leap Motion device. The inputs are scribbles drawn by hand gestures across several slices of the CT scan; the output is the corrected segmentation and its 3D surface mesh. Our method consists of four steps: 1) acquisition of two initial scribbles, one on the anatomy of interest and one on the background; 2) segmentation of the structure of interest with the Grow-Cut segmentation algorithm using the initial scribbles and the corrections; 3) 3D correction of segmentation leaks on the mesh using hand gestures; 4) iterative repetition of steps 2-3 until a satisfactory result is obtained.
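To illustrate the segmentation step, the sketch below is a minimal 2D NumPy implementation of the classic Grow-Cut cellular automaton (Vezhnevets and Konouchine, 2005), not the authors' actual code: each pixel holds a label and a strength, and a labeled neighbor "conquers" a pixel when its strength, attenuated by the intensity difference, exceeds the pixel's own. The function name, the 4-connected neighborhood, and the seed encoding (1 = foreground scribble, 2 = background) are our assumptions for this example.

```python
import numpy as np

def grow_cut(image, seeds, max_iter=200):
    """Minimal 2D Grow-Cut sketch (illustrative, not the paper's implementation).

    image: 2D float array of intensities.
    seeds: 2D int array; 0 = unlabeled, 1 = foreground scribble, 2 = background scribble.
    Returns a label map in which scribble labels have propagated over the image.
    """
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)        # scribbled pixels start fully confident
    max_diff = float(image.max() - image.min()) or 1.0

    # 4-connected neighborhood, expressed as array shifts
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(max_iter):
        changed = False
        for dy, dx in shifts:
            # bring each neighbor's label/strength/intensity into place
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            nb_img = np.roll(image, (dy, dx), axis=(0, 1))
            # np.roll wraps around; zero the wrapped border so it cannot attack
            if dy == 1:  nb_str[0, :] = 0
            if dy == -1: nb_str[-1, :] = 0
            if dx == 1:  nb_str[:, 0] = 0
            if dx == -1: nb_str[:, -1] = 0
            # attack strength decays with intensity difference
            g = 1.0 - np.abs(image - nb_img) / max_diff
            attack = g * nb_str
            win = attack > strength
            if win.any():
                labels[win] = nb_lab[win]
                strength[win] = attack[win]
                changed = True
        if not changed:                          # converged: no pixel changed hands
            break
    return labels
```

On a toy image split into two constant-intensity halves, one foreground seed and one background seed suffice: strength propagates freely inside each homogeneous region, while the attack strength drops to zero across the intensity boundary, so the labels stop there. In the full 3D method, the user's scribbles and gesture-based corrections play the role of the seed map.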
To evaluate our method, we performed a qualitative analysis on healthy soft-tissue anatomies and a quantitative evaluation on liver tumor scans from 10 patients. For the liver tumors, the method yields a Volume Overlap Error of 15.1% (std=4.6) and a Volume Similarity Error of 8.7% (std=4.8).
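For reference, the two reported metrics can be computed from binary masks as sketched below. These are common definitions (VOE as one minus the Jaccard overlap; volume similarity error as the absolute volume difference relative to the mean volume); the exact denominators vary between papers, and the function names are our own.

```python
import numpy as np

def voe(seg, ref):
    """Volumetric Overlap Error in percent: 100 * (1 - |A ∩ B| / |A ∪ B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    return 100.0 * (1.0 - inter / union)

def vol_similarity_error(seg, ref):
    """Volume difference relative to the mean volume, in percent
    (one common definition; exact denominators differ across papers)."""
    a, b = float(seg.sum()), float(ref.sum())
    return 100.0 * abs(a - b) / ((a + b) / 2.0)
```

For example, a 4-voxel segmentation that covers a 2-voxel reference entirely has an intersection of 2 and a union of 4, giving a VOE of 50% and a volume similarity error of 66.7%.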
Participants: S. Shenzis, M. Samson, R. Vivanti, L. Joskowicz, CASMIP Lab.