Experimental Results of 2D Depth-Depth Matching Algorithm Based on Depth Camera Kinect v1

Open Access

Abstract: Last year, we proposed a smart transcription algorithm in which a real liver is captured by a 3D depth camera. In contrast, a virtual liver is represented as a polyhedron in STL (stereolithography, also known as Standard Triangulated Language) format, constructed from DICOM (Digital Imaging and Communications in Medicine) data captured by MRI (magnetic resonance imaging) and/or a CT (computed tomography) scanner. By comparing the depth image of the real world with the Z-buffer of the virtual world on a GPU (graphics processing unit), we quickly identify the translational and rotational differences between the real and virtual livers. Then, using a randomized steepest descent method driven by these differences, we can rapidly copy the real liver's motion to the virtual liver. In this paper, the performance (i.e., motion precision and calculation time) of the proposed algorithm is evaluated through several kinds of experiments based on the depth camera Kinect v1. This is the first attempt to use real-virtual depth-image matching in our algorithm, running in 3D AR (augmented reality) with overlapping real and virtual environments.
Keywords: depth camera image, Z-buffer, steepest descent method, GPU, parallel processing
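The matching scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rendering step, perturbation scale, pose parameterization, and all function names are assumptions, and the real system renders the STL liver model into a GPU Z-buffer rather than using the stand-in shown here.

```python
import numpy as np

def depth_difference(real_depth, virtual_zbuffer):
    """Matching score: sum of squared per-pixel depth differences
    between the camera's depth image and the rendered Z-buffer."""
    return float(np.sum((real_depth - virtual_zbuffer) ** 2))

def render_zbuffer(pose, base_depth):
    # Hypothetical stand-in for rendering the virtual liver at `pose`.
    # Here the depth map simply shifts with the z-translation component;
    # the actual system rasterizes the STL polyhedron on the GPU.
    return base_depth + pose[2]

def randomized_steepest_descent(real_depth, base_depth, iters=200, seed=0):
    """Randomly perturb the 6-DOF pose (tx, ty, tz, rx, ry, rz) and keep
    only perturbations that reduce the depth-difference score."""
    rng = np.random.default_rng(seed)
    pose = np.zeros(6)
    best = depth_difference(real_depth, render_zbuffer(pose, base_depth))
    for _ in range(iters):
        candidate = pose + rng.normal(scale=0.05, size=6)
        score = depth_difference(real_depth,
                                 render_zbuffer(candidate, base_depth))
        if score < best:
            pose, best = candidate, score
    return pose, best
```

Under these assumptions, a real depth image that is offset in z from the rendered view pulls the estimated pose toward the true offset as the score decreases.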

Hiroshi Noborio, Kaoru Watanabe, Masahiro Yagi, Yasuhiro Ida, Shigeki Nankaku, Katsuhiko Onishi, Masanao Koeda, Masanori Kon, Kosuke Matsui and Masaki Kaibori


Volume 1, Issue 1


Pages 38–44


