5 research outputs found
A Visual Sensor for Domestic Service Robots
In this study, we present a visual sensor for domestic service robots that captures both color and three-dimensional information in real time by calibrating a time-of-flight camera with two CCD cameras. The problem of occlusions is handled by the proposed occlusion detection algorithm: since the sensor uses two CCD cameras, color information missing for pixels occluded in one camera is compensated by the other. We conduct several evaluations to validate the proposed sensor, including an object recognition task in occluded scenes. The results demonstrate the effectiveness of the proposed visual sensor.
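The two-camera compensation idea can be illustrated with a minimal sketch. The arrays, occlusion flags, and fusion rule below are illustrative assumptions, not the thesis's actual calibration or occlusion-detection algorithm: each 3D point is assumed to have already been projected into both CCD images and tested for occlusion.

```python
import numpy as np

def fuse_colors(colors_a, colors_b, occ_a, occ_b):
    """Per-point color fusion for a hypothetical two-camera rig.

    colors_a, colors_b: (N, 3) RGB samples of each 3D point in cameras A and B
    occ_a, occ_b:       (N,) boolean occlusion flags per camera
    Returns (N, 3) fused colors; points occluded in both cameras stay NaN.
    """
    fused = np.full_like(colors_a, np.nan, dtype=float)
    fused[~occ_a] = colors_a[~occ_a]        # camera A sees the point: use A
    only_b = occ_a & ~occ_b                 # hidden from A but visible to B
    fused[only_b] = colors_b[only_b]        # fall back to camera B
    return fused
```

With two cameras at different viewpoints, a pixel occluded in one image is often visible in the other, which is what this rule exploits.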
Probabilistic three-dimensional object tracking based on adaptive depth segmentation
Object tracking is one of the fundamental topics of computer vision, with diverse applications. The challenges arising in tracking, such as cluttered scenes, occlusion, complex motion, and illumination variations, have motivated the use of depth information from 3D sensors. However, current 3D trackers are not applicable to unconstrained environments without a priori knowledge. As an important object detection module in tracking, segmentation subdivides an image into its constituent regions. Nevertheless, existing range segmentation methods in the literature are difficult to run in real time due to their slow performance. In this thesis, a 3D object tracking method based on adaptive depth segmentation and particle filtering is presented. In this approach, the segmentation method, as the bottom-up process, is combined with the particle filter, as the top-down process, to achieve efficient tracking under challenging circumstances. The experimental results demonstrate the efficiency and robustness of the tracking algorithm on real-world range information.
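The coupling of a bottom-up segmentation cue with a top-down particle filter can be sketched as one predict-update-resample cycle. This is a generic sketch under stated assumptions, not the thesis's algorithm: the observation is assumed to be the centroid of the segmented depth region, and the random-walk motion model and Gaussian likelihood are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_std=0.05, obs_std=0.1):
    """One cycle of a particle filter tracking a 3D object centroid.

    particles:   (N, 3) hypothesized object positions
    weights:     (N,) current particle weights
    observation: (3,) centroid of the segmented depth region (bottom-up cue)
    """
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by a Gaussian likelihood of the observation.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

Iterating this cycle concentrates the particle set around the segmented region, so the top-down prior (motion model) and bottom-up evidence (segmentation) correct each other.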
Precise Depth Image Based Real-Time 3D Difference Detection
3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of that object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous work, the proposed approach detects geometric differences in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the scan position closer to the object to inspect details or to bypass occlusions.
The main research questions addressed by this thesis are:
Q1: How can 3D differences be detected in real time and from arbitrary viewpoints using a single depth camera?
Q2: Extending the first question, how can 3D differences be detected with a high precision?
Q3: What accuracy can be achieved with concrete setups of the proposed concept for real-time, depth image based 3D difference detection?
This thesis answers Q1 by introducing a real-time approach for depth image based 3D difference detection. The real-time difference detection is based on an algorithm which maps the 3D measurements of a depth camera onto an arbitrary 3D model in real time by fusing computer vision (depth imaging and pose estimation) with a computer graphics based analysis-by-synthesis approach.
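The per-pixel comparison at the core of such an analysis-by-synthesis pipeline can be sketched as follows. The pose estimation and the rendering of the synthetic depth image from the 3D model are assumed to happen elsewhere, and the threshold value is illustrative, not taken from the thesis:

```python
import numpy as np

def detect_differences(measured_depth, synthetic_depth, threshold=0.008):
    """Flag pixels where the captured depth image deviates from a depth
    image synthesized from the reference 3D model at the same camera pose.

    measured_depth, synthetic_depth: (H, W) depth maps in metres
    threshold: deviation (metres) above which a pixel counts as a difference
    Invalid measurements (NaN or non-positive depth) are ignored.
    """
    valid = np.isfinite(measured_depth) & (measured_depth > 0)
    deviation = np.abs(measured_depth - synthetic_depth)
    return valid & (deviation > threshold)
```

Because both inputs are expressed in the same camera frame, the comparison itself is viewpoint-independent, which is what allows the scan position to change on the fly.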
Then, this thesis answers Q2 by providing solutions for enhancing the 3D difference detection accuracy, both by precise pose estimation and by reducing depth measurement noise. A precise variant of the 3D difference detection concept is proposed, which combines two main aspects. First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a very precise coordinate measuring machine. Second, measurement noise in the captured depth images is reduced, and missing depth information is filled in, by extending the 3D difference detection with 3D reconstruction.
The accuracy of the proposed 3D difference detection is quantified in a quantitative evaluation, which answers Q3. The accuracy is evaluated both for the basic setup and for the variants that focus on high precision. The evaluation on real-world data covers the accuracy achievable both with a time-of-flight camera (SwissRanger 4000) and with a structured-light depth camera (Kinect). With the basic setup and the structured-light depth camera, differences of 8 to 24 millimeters can be detected from one meter measurement distance. With the enhancements proposed for precise 3D difference detection, differences of 4 to 12 millimeters can be detected from the same distance using the same depth camera.
By solving the challenges described by the three research questions, this thesis provides a solution for precise real-time 3D difference detection based on depth images. With the proposed approach, dense 3D differences can be detected in real time and from arbitrary viewpoints using a single depth camera. Furthermore, by coupling the depth camera with a coordinate measuring machine and by integrating 3D reconstruction into the 3D difference detection, 3D differences can be detected in real time and with high precision.
Probabilistic Knowledge Representation by Multilayering Multimodal Latent Dirichlet Allocation
In recent years, research aimed at the coexistence of robots and humans has been actively pursued. Although a variety of robots have been developed with current robot technology, most of them execute specific tasks in restricted environments, and humans must specify everything in advance: the actions required for the task, the responses to input patterns, and so on. For robots to live naturally with humans, they must understand human language and act by interpreting the latent meaning behind the words. For communication, it is also desirable that robots generate language expressing their own intentions. Traditional artificial intelligence research has treated words as mere symbols and has tried to understand language within a world closed under those symbols; natural language processing and understanding strongly inherit this tradition. In contrast, recent robotics and AI research, motivated by the symbol grounding problem, has begun to address the essential meaning of language, but a fundamental solution to language understanding and generation is still far away. This thesis presents a new direction that addresses this problem by having a robot form diverse concepts from the multimodal information obtained through its own experiences, and by grounding language understanding and generation in these concepts. Here, a concept is defined as a category formed by classifying multimodal information, and "understanding" as the ability to make various predictions through such concepts. Language, in turn, consists of phoneme labels bound to these concepts, which can be acquired through natural interaction with humans. The model proposed in this thesis thus realizes understanding and generation of linguistic meaning by forming concepts from information a robot can obtain through its everyday activities, and by acquiring the bindings between concepts and phoneme labels, as well as the grammar conveyed by word order, in a bottom-up manner.
Multimodal object categorization has previously been proposed by Nakamura et al., who showed that categorizing information a robot obtains through its own experience yields object concepts close to human intuition, and that the formed concepts can be used to predict unobserved information, so that object understanding, in the sense defined above, is possible. However, object concepts alone are clearly insufficient for the kind of flexible understanding humans exhibit: most objects are associated with the people who use them, the motions of those people, and the places where they are used, and an object cannot be said to be understood unless such information can be predicted. Therefore, a robot must learn not only object concepts but also diverse concepts such as motion concepts and place concepts, and simultaneously acquire the relationships among them. Acquiring such diverse concepts becomes possible by extending multimodal categorization to hierarchical categorization of multimodal information. Ultimately, this thesis aims to show that this constitutes a computational model of genuine understanding of things by robots; this is the goal of the thesis.
Chapter 2 considers robots working in home environments, taking as an example a cleaning task performed by a humanoid previously developed by the author. To carry out the cleaning task, "cleaning" must be defined, and the visual recognition system and task control required to realize the task under this definition are implemented. Although object recognition and grasping within the defined scope were thereby achieved, the robot could not perform the task flexibly in unknown environments. Based on this result, the essential meaning of "cleaning" is examined. For example, the action "to vacuum" can be regarded as holding a vacuum cleaner and moving it over fine dust, i.e., a concept formed from the interrelation of the object concept "vacuum cleaner" and the motion concept "moving something over something". In other words, "cleaning" is a concept composed of hierarchical interdependencies among diverse concepts, and forming such diverse concepts and constructing their hierarchical structure is essential as robot knowledge.
Based on this discussion, Chapter 3 proposes a hierarchical multimodal categorization method for probabilistic knowledge representation in robots: multilayered multimodal latent Dirichlet allocation (multilayered MLDA; mMLDA), which organizes multimodal latent Dirichlet allocation (Multimodal Latent Dirichlet Allocation; MLDA) into layers. The lower-layer MLDAs form the lower-level concepts of objects, motions, places, and people; the upper-layer MLDA integrates these to form higher-level concepts. Using this model, lower-level concepts such as the object concept "juice", the motion concept "bringing something to the mouth", and the place concept "dining room" are formed, and the upper layer learns their relationships, forming the action concept "drinking". This makes it possible to predict unobserved information: on seeing juice, the robot can predict the "drinking" action of bringing it to the mouth, and that this "drinking" action takes place in the "dining room".
Chapter 4 examines a method for describing observed scenes in sentences by exploiting the formed concepts while simultaneously acquiring vocabulary and grammar. The problem addressed here is lexical acquisition over hierarchical concepts: which word should be bound to which concept in which layer. This thesis proposes a method that automatically estimates which word is inherently connected to which concept using the mutual information between words and concepts. This makes it possible to learn the bindings between words and concepts, and to estimate the concept class (object, place, person, and so on) corresponding to each word. Consequently, by learning the order in which concept classes occur in teaching utterances, a grammar represented as transition probabilities over concept classes can be learned. This enables robots to understand and generate the meaning of language.
Real communication, however, cannot succeed without considering context, such as background knowledge and the surrounding situation. To understand things more flexibly, a robot must take various contexts into account on top of the diverse concepts it has learned. Chapter 5 discusses how a robot living with humans should decide its actions in various contexts, and proposes a method for determining appropriate actions by integrating the acquired concepts with context. For example, if the robot knows that a person usually eats snacks and drinks tea while watching TV on the sofa, then even if speech recognition errs when the person commands "bring me the snacks", the robot may still judge correctly and take the right action by using the context "watching TV on the sofa and drinking tea".
Chapter 6 concludes the thesis and discusses future work.
The University of Electro-Communications, 201
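The mutual-information criterion for binding words to concept classes can be sketched as follows. This is a simplified variant under stated assumptions, not the thesis's exact formulation: word-class co-occurrence counts are assumed to be given, and each word is assigned to the class with the largest pointwise mutual-information contribution.

```python
import numpy as np

def assign_words_to_concepts(counts):
    """Assign each word to a concept class via a mutual-information score.

    counts[w, c]: number of utterances in which word w co-occurs with a
    category of concept class c (object, motion, place, person, ...).
    Returns, per word, the index of the class maximizing the pointwise
    mutual-information contribution p(w,c) * log(p(w,c) / (p(w) p(c))).
    """
    counts = np.asarray(counts, dtype=float)
    p_wc = counts / counts.sum()                 # joint P(w, c)
    p_w = p_wc.sum(axis=1, keepdims=True)        # marginal P(w)
    p_c = p_wc.sum(axis=0, keepdims=True)        # marginal P(c)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.where(p_wc > 0,
                       p_wc * np.log(p_wc / (p_w * p_c)),
                       0.0)                       # zero counts contribute 0
    return pmi.argmax(axis=1)
```

A word that co-occurs almost exclusively with object categories thus gets bound to the object class, while a word spread evenly over all classes carries no information about any of them.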