139 research outputs found

    Synthesizing a virtual imager with a large field of view and a high resolution for micromanipulation.

    No full text
    A photon microscope connected to a camera is the usual imager in micromanipulation applications. This microimager gives high-resolution views, but the corresponding field of view is very narrow and does not cover the entire workfield. The classical solution uses a multiple-view imaging system: a high-resolution imager for the local view and a low-resolution imager for the global view. We are developing an alternative solution based on image mosaicing that requires only one microimager. The views from this real microimager are combined to build a virtual microimager that offers both a large field of view and a high resolution.
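The core idea of this abstract can be sketched in a few lines: overlapping high-resolution views are pasted into one large virtual image, averaging where they overlap. A minimal NumPy sketch, assuming the views are already registered and placed at known (row, col) offsets; a real system would estimate those offsets by image registration, and all names here are illustrative:

```python
import numpy as np

def compose_mosaic(tiles, offsets, mosaic_shape):
    """Paste overlapping high-resolution tiles into one large virtual image.

    tiles: list of 2-D arrays (grayscale views from the real microimager)
    offsets: list of (row, col) top-left positions for each tile
    mosaic_shape: (H, W) of the virtual image
    Overlapping regions are resolved by averaging the contributing tiles.
    """
    acc = np.zeros(mosaic_shape, dtype=float)
    weight = np.zeros(mosaic_shape, dtype=float)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        acc[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    weight[weight == 0] = 1.0   # avoid division by zero where no tile landed
    return acc / weight
```

In practice the averaging step would be replaced by feathered blending to hide seams, but the accumulation structure stays the same.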

    A Study on Real-Time Video Mosaicing and Stabilization Using High-Speed Vision

    Get PDF
    Hiroshima University, Doctor of Engineering (doctoral thesis)

    Design of Immersive Online Hotel Walkthrough System Using Image-Based (Concentric Mosaics) Rendering

    Get PDF
    Conventional hotel booking websites present their facilities only through 2D photos, which are static and cannot be moved or rotated. An image-based virtual walkthrough is a promising technology for the hospitality industry to attract more customers. In this project, an image-based rendering (IBR) virtual walkthrough and a panoramic-based walkthrough are created using only Macromedia Flash Professional 8, Photovista Panorama 3.0 and Reality Studio for image interaction. The web pages are built with Macromedia Dreamweaver Professional 8, and the images are displayed in Adobe Flash Player 8 or higher. The image-based walkthrough uses the concentric mosaics technique, while the panoramic-based walkthrough applies image mosaicing. The two walkthroughs are compared, with particular attention to the trade-off between the number of pictures and the smoothness of the walkthrough. Each technique has its advantages: the image-based walkthrough supports real-time navigation, since the user can move right, left, forward and backward, whereas the panoramic-based walkthrough does not, because the user can only view 360 degrees from a fixed spot.

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    Get PDF
    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. A scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by a robust super-resolution algorithm to produce the final high-resolution mosaic. Two types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. Five performance metrics are employed for evaluation: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the metrics quantify this improvement accurately.
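Two of the metrics named above, mean square error and peak signal-to-noise ratio, are straightforward to compute. A minimal NumPy sketch; the function names are illustrative, not from the dissertation:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference image and a test image."""
    return float(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(ref, img)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

For super-resolution evaluation, `ref` would be a ground-truth high-resolution frame and `img` the reconstructed mosaic resampled to the same grid.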

    Vision based robot assistance in TTTS fetal surgery

    Get PDF
    © 2019 IEEE. This paper presents an accurate and robust vision-based tracking algorithm for Fetoscopic Laser Photocoagulation (FLP) surgery for Twin-Twin Transfusion Syndrome (TTTS). The proposed method assists surgeons during anastomosis localization, coagulation and review using a tele-operated robotic system. The algorithm computes the relative position of the fetoscope tool tip with respect to the placenta via local vascular structure registration. It uses image features (local superficial vascular structures on the placenta's surface) to automatically match consecutive fetoscopic images, in three sequential steps: image processing (filtering, binarization and vascular structure segmentation); selection of relevant Points Of Interest (POIs); and image registration between consecutive images. The algorithm has to cope with the low quality of fetoscopic images: the liquid, murky intra-uterine environment, the thin diameter of the fetoscope optics and the low amount of ambient light yield blurred, noisy images with very poor color content. The tracking system has been tested on real video sequences of FLP surgery for TTTS. Its computational performance enables real-time tracking, locally guiding the robot over the placenta's surface with sufficient accuracy.
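The consecutive-frame registration step can be illustrated with classic phase correlation, which recovers the dominant translation between two frames from the cross-power spectrum. This is a minimal NumPy sketch of that generic technique, not the paper's actual method (which matches segmented vascular structures):

```python
import numpy as np

def phase_correlation_shift(moved, ref):
    """Estimate the cyclic (row, col) shift s with moved ~ np.roll(ref, s, axis=(0, 1)).

    The normalized cross-power spectrum keeps only phase information; the
    peak of its inverse FFT sits at the dominant translation. Suitable for
    small inter-frame motion, e.g. consecutive frames after segmentation.
    """
    F1 = np.fft.fft2(moved)
    F2 = np.fft.fft2(ref)
    R = F1 * np.conj(F2)
    R /= np.abs(R) + 1e-12                    # normalize: keep phase only
    corr = np.real(np.fft.ifft2(R))
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half of each axis to negative shifts
    if dr > moved.shape[0] // 2:
        dr -= moved.shape[0]
    if dc > moved.shape[1] // 2:
        dc -= moved.shape[1]
    return int(dr), int(dc)
```

Pure translation is a crude motion model for a fetoscope; the paper's feature-based registration additionally absorbs rotation and perspective changes.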

    Developing object detection, tracking and image mosaicing algorithms for visual surveillance

    Get PDF
    Visual surveillance systems have become increasingly important in recent decades due to the proliferation of cameras. These systems are widely used in scientific, commercial and end-user applications, where they can store, extract and infer huge amounts of information automatically, without human help. In this thesis, we focus on developing object detection, tracking and image mosaicing algorithms for a visual surveillance system. First, we review real-time object detection algorithms that exploit motion cues, and we enhance one that is suitable for dynamic scenes. This algorithm adopts a nonparametric probabilistic model over the whole image and exploits pixel adjacencies to detect foreground regions even under small baseline motion. We then develop a multiple-object tracking algorithm that uses this detector as its detection step. It analyzes multiple-object interactions in a probabilistic framework, using virtual shells to track objects through severe occlusions. The final part of the thesis is devoted to an image mosaicing algorithm that stitches ordered images into a large, visually attractive mosaic for long image sequences. The proposed method avoids nonlinear optimization techniques and is capable of real-time operation on large datasets. Experimental results show that the developed algorithms work successfully in dynamic, cluttered environments with real-time performance.
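As a simple stand-in for the nonparametric background model described above, a per-pixel temporal median over recent frames gives a crude nonparametric background estimate; pixels far from it are flagged as foreground. A minimal NumPy sketch, with illustrative names and threshold, not the thesis's algorithm:

```python
import numpy as np

def detect_foreground(history, current, thresh=0.25):
    """Flag foreground pixels by their distance to a temporal-median background.

    history: stack of recent frames, shape (T, H, W), values in [0, 1]
    current: frame to segment, shape (H, W)
    Returns a boolean mask where the current frame deviates from the
    per-pixel median background by more than `thresh`.
    """
    background = np.median(history, axis=0)   # nonparametric per-pixel estimate
    return np.abs(current - background) > thresh
```

The thesis's model goes further by modeling a full per-pixel density and by using pixel adjacencies, which makes detection robust to small baseline motion; the median is only the simplest member of that family.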

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    Get PDF
    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by seeing beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
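For the stereo-based reconstruction methods covered by such reviews, depth follows from disparity by triangulation: Z = f·B/d for a rectified camera pair. A minimal NumPy sketch of that standard relation (the numbers in the usage below are illustrative, not from any surveyed system):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Triangulate depth Z = f * B / d for a rectified stereo pair.

    disparity: per-pixel horizontal disparity in pixels
    focal_px: focal length in pixels; baseline_m: camera separation in metres
    Pixels with non-positive disparity are marked invalid (NaN).
    """
    d = np.asarray(disparity, dtype=float)
    z = np.full(d.shape, np.nan)
    valid = d > 0
    z[valid] = focal_px * baseline_m / d[valid]
    return z
```

The hard part in laparoscopy is obtaining reliable disparities on wet, specular, weakly textured tissue; the triangulation itself is the easy step.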

    Tele-immersive display with live-streamed video.

    Get PDF
    Tang Wai-Kwan. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 88-95). Abstracts in English and Chinese.
    Contents:
    Chapter 1 Introduction: Applications; Motivation and Goal; Thesis Outline
    Chapter 2 Background and Related Work: Panoramic Image Navigation; Image Mosaicing (Image Registration, Image Composition); Immersive Display; Video Streaming (Video Coding, Transport Protocol)
    Chapter 3 System Design: System Architecture (Video Capture, Video Streaming, Stitching and Rendering, Display Modules); Design Issues (Modular Design, Scalability, Workload Distribution)
    Chapter 4 Panoramic Video Mosaic: Video Mosaic to Image Mosaic (Assumptions, Processing Pipeline); Camera Calibration (Perspective Projection, Distortion, Calibration Procedure); Panorama Generation (Cylindrical and Spherical Panoramas, Homography, Homography Computation, Error Minimization, Stitching Multiple Images, Seamless Composition); Image Mosaic to Video Mosaic (Varying Intensity, Video Frame Management)
    Chapter 5 Immersive Display: Human Perception System; Creating the Virtual Scene; VisionStation (F-Theta Lens, VisionStation Geometry, Sweet Spot Relocation and Projection, Sweet Spot Relocation in Vector Representation)
    Chapter 6 Video Streaming: Video Compression; Transport Protocol; Latency and Jitter Control; Synchronization
    Chapter 7 Implementation and Results: Video Capture; Video Streaming (Video Encoding, Streaming Protocol); Implementation Results (Indoor Scene, Outdoor Scene); Evaluation
    Chapter 8 Conclusion: Summary; Future Directions
    Appendix A Parallax
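The homography computation that this thesis's panorama generation relies on is classically done with the direct linear transform (DLT). A minimal NumPy sketch under the usual assumptions (at least four correspondences, no three collinear), not the thesis's implementation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (direct linear transform).

    src, dst: (N, 2) arrays of matched points, N >= 4, no three collinear.
    Each correspondence contributes two rows of the 2N x 9 system A h = 0,
    solved by taking the right singular vector of the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                  # fix the projective scale ambiguity
```

Production stitchers normalize the point coordinates before the SVD and wrap the DLT in RANSAC to reject bad matches; this sketch shows only the core linear step.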