135 research outputs found

    Amorphous silicon 3D sensors applied to object detection

    Nowadays, the 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS imagers, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data-processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors (PSDs), with the final goal of integrating them in 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, which were produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the starting point was the fabrication of the sensors and the study of their static and dynamic specifications, as well as their signal conditioning, in relation to the existing scientific and technological knowledge. Subsequently, the relevant data acquisition and suitable signal-processing electronics were assembled. Various prototypes were developed for the 32- and 128-line PSD array sensors. Appropriate optical solutions were integrated with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be obtained. All control, data acquisition and 3D rendering platform software was implemented for the existing systems. These components were combined to form several integrated systems for the 32- and 128-line PSD 3D sensors. The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128-line PSD array sensor systems. Sensor channel non-linearities of approximately 4 to 7% were obtained. The overall results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and at high frame rates. The minimum detail or gap that can be detected by the sensor system is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and less simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly and accurately, also at high resolution, using this sensor and system platform. The nip-structure sensor system can detect primary and even derived colors of objects through proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometer-scale objects using the 32-line PSD sensor system. This kind of setup makes it possible to detect whether a micro-object is moving, what its dimensions are and what its position is in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm
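
    The abstract does not spell out the signal chain, but the way a 1D position sensitive detector reports a light-spot location, and how that displacement maps to an object height under a first-order triangulation model, can be sketched as follows. This is a minimal illustration only: the function names, the photocurrent inputs and the optical parameters (active length, magnification, scanning angle) are assumptions, not values or code from the thesis.

        # Illustrative sketch: spot position on a 1D PSD and first-order triangulated height.
        # All names and parameter values are assumptions for illustration only.

        import math

        def psd_spot_position(i_a: float, i_b: float, length_mm: float) -> float:
            """Estimate the light-spot offset from the centre of a 1D position
            sensitive detector of active length `length_mm`, using the
            photocurrents i_a and i_b collected at its two electrodes."""
            total = i_a + i_b
            if total <= 0.0:
                raise ValueError("no photocurrent detected")
            return (length_mm / 2.0) * (i_b - i_a) / total

        def height_from_displacement(displacement_mm: float,
                                     scan_angle_deg: float,
                                     magnification: float) -> float:
            """First-order laser-triangulation relation: an image displacement on
            the detector corresponds to an object height change of roughly
            displacement / (magnification * sin(triangulation angle))."""
            return displacement_mm / (magnification * math.sin(math.radians(scan_angle_deg)))

        if __name__ == "__main__":
            # Hypothetical photocurrents and optics, for illustration only.
            x = psd_spot_position(i_a=1.2e-6, i_b=1.5e-6, length_mm=10.0)
            z = height_from_displacement(x, scan_angle_deg=45.0, magnification=0.5)
            print(f"spot offset {x:.3f} mm -> height {z:.3f} mm")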

    A Structured-Light Approach for the Reconstruction of Complex Objects


    Extracting Depth Information from a Stereo Vision System Using Correlation and Feature-Based Methods

    This thesis presents a new method to extract depth information from stereo-vision acquisitions using feature-based and correlation-based approaches. The main application of the proposed method is in the area of autonomous pick-and-place using a robotic manipulator. Current vision-guided robotics is still based on a priori training and teaching steps, and still suffers from long response times. The study uses a stereo triangulation setup in which two Charge-Coupled Devices (CCDs) are arranged to acquire the scene from two different perspectives. The study discusses the details of two methods to calculate the depth. First, a correlation matching routine is programmed using a Square Sum Difference (SSD) algorithm to search for corresponding points in the left and right images. The SSD is further modified using an adjustable Region Of Interest (ROI) along with center-of-gravity based calculations. Furthermore, the two perspective images are rectified to reduce the required processing time. Second, a feature-based approach is proposed to match the objects from the two perspectives. The proposed method implements a search kernel based on the 8-connected neighbor principle. The reported error in depth using the feature method is found to be around 1.2 m
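
    A minimal sketch of the correlation stage described above (SSD block matching on a rectified stereo pair, followed by depth from disparity via the standard pinhole relation Z = f·B/d) is given below. The window size, disparity search range and camera parameters are placeholders, and the adjustable ROI and center-of-gravity refinements from the thesis are not reproduced.

        # Illustrative SSD block matching on a rectified grayscale stereo pair.
        # Window size, search range and camera parameters are placeholders.

        import numpy as np

        def ssd_disparity(left: np.ndarray, right: np.ndarray,
                          window: int = 7, max_disp: int = 64) -> np.ndarray:
            """Dense disparity map obtained by minimising the sum of squared
            differences between a window in the left image and horizontally
            shifted windows in the right image (rectified images assumed)."""
            half = window // 2
            h, w = left.shape
            disparity = np.zeros((h, w), dtype=np.float32)
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    patch = left[y - half:y + half + 1, x - half:x + half + 1]
                    best_d, best_ssd = 0, np.inf
                    for d in range(max_disp):
                        cand = right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]
                        ssd = np.sum((patch - cand) ** 2)
                        if ssd < best_ssd:
                            best_ssd, best_d = ssd, d
                    disparity[y, x] = best_d
            return disparity

        def depth_from_disparity(disparity: np.ndarray,
                                 focal_px: float, baseline_m: float) -> np.ndarray:
            """Standard pinhole relation Z = f * B / d (zero disparity -> infinity)."""
            with np.errstate(divide="ignore"):
                return np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)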

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of performing teleoperation using mixed reality techniques. I proposed a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that no wearable device is needed, providing minimal intrusiveness and accommodating the user's eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras and robotic arm. Given the purpose of the system, the calibration accuracy must also be kept within the millimeter level. The follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica using commodity devices, for better alignment of the video frame. Conventional 3D scanners either lack depth resolution or are very expensive. We proposed a structured-light scanning based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. An extensive user study demonstrates the performance of our proposed algorithm. To compensate for the desynchronization between the local station and the remote station caused by the latency introduced during data sensing and communication, a one-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a system of linear equations with a smoothing coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts, such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image
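
    The abstract does not give the predictor's equations, but a one-step-ahead predictor with a smoothing coefficient in [0, 1] is commonly realised as an exponential-smoothing style extrapolation of the delayed operator commands. The sketch below is only one plausible reading of that description; the class and parameter names are invented.

        # Illustrative one-step-ahead predictor with a smoothing coefficient in [0, 1].
        # Only one plausible reading of the abstract; names are invented.

        class OneStepPredictor:
            def __init__(self, alpha: float):
                if not 0.0 <= alpha <= 1.0:
                    raise ValueError("smoothing coefficient must lie in [0, 1]")
                self.alpha = alpha
                self.level = None   # smoothed command
                self.trend = 0.0    # estimated per-step change

            def update(self, command: float) -> float:
                """Ingest the latest (delayed) operator command and return the
                command predicted one step ahead, to be sent to the remote robot."""
                if self.level is None:
                    self.level = command
                    return command
                new_level = self.alpha * command + (1.0 - self.alpha) * (self.level + self.trend)
                self.trend = new_level - self.level
                self.level = new_level
                return self.level + self.trend   # one-step-ahead extrapolation

        if __name__ == "__main__":
            predictor = OneStepPredictor(alpha=0.6)
            for cmd in [0.0, 0.1, 0.25, 0.4, 0.5]:
                print(predictor.update(cmd))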

    Morphological analysis for improving clinical diagnosis of skin cancer


    Digital Techniques for Documenting and Preserving Cultural Heritage

    In this unique collection the authors present a wide range of interdisciplinary methods to study, document, and conserve material cultural heritage. The methods used serve as exemplars of best practice, with a wide variety of cultural heritage objects having been recorded, examined, and visualised. The objects range in date, scale, materials, and state of preservation and so pose different research questions and challenges for digitization, conservation, and ontological representation of knowledge. Heritage science and specialist digital technologies are presented in a way approachable to non-scientists, while a separate technical section provides details of methods and techniques, alongside examples of notable applications of spatial and spectral documentation of material cultural heritage, with selected literature and identification of future research. This book is an outcome of interdisciplinary research and debates conducted by the participants of the COST Action TD1201, Colour and Space in Cultural Heritage, 2012–16, and is an Open Access publication available under a CC BY-NC-ND licence.
    https://scholarworks.wmich.edu/mip_arc_cdh/1000/thumbnail.jp

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 344)

    This bibliography lists 125 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during January 1989. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance

    Perception de la géométrie de l'environnement pour la navigation autonome (Perception of environment geometry for autonomous navigation)

    The goal of mobile robotics research is to give robots the capability to accomplish missions in an environment that is not perfectly known. To accomplish its mission, the robot needs to execute a given set of elementary actions (movement, manipulation of objects, etc.), which requires an accurate localisation of the robot as well as the construction of a good geometric model of the environment. To do so, the robot must make the most of its own sensors, of external sensors, of information coming from other robots, and of existing models, for example from a Geographic Information System. The element common to all of this information is the geometry of the environment. The first part of the manuscript covers the different methods for extracting geometric information. The second part presents the construction of a geometric model using a graph structure, along with a method to retrieve information from the graph and allow the robot to localise itself in the environment
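
    As a rough illustration of the graph-based geometric model mentioned above, the environment can be stored as landmarks connected by co-visibility edges and queried for the nearest landmark during localisation. The node and edge contents here are assumptions for illustration; the thesis' actual representation may differ.

        # Rough illustration of a graph model of environment geometry.
        # Node/edge contents are assumptions, not the thesis' actual representation.

        import math
        from dataclasses import dataclass, field

        @dataclass
        class Landmark:
            ident: str
            x: float
            y: float

        @dataclass
        class GeometricGraph:
            nodes: dict = field(default_factory=dict)   # ident -> Landmark
            edges: dict = field(default_factory=dict)   # ident -> set of neighbour idents

            def add_landmark(self, landmark: Landmark) -> None:
                self.nodes[landmark.ident] = landmark
                self.edges.setdefault(landmark.ident, set())

            def connect(self, a: str, b: str) -> None:
                """Record that two landmarks were observed from the same place."""
                self.edges[a].add(b)
                self.edges[b].add(a)

            def closest_landmark(self, x: float, y: float) -> Landmark:
                """Crude localisation query: which stored landmark is nearest
                to an observed position expressed in the map frame?"""
                return min(self.nodes.values(),
                           key=lambda n: math.hypot(n.x - x, n.y - y))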

    A Simplified Phase Display System for 3D Surface Measurement and Abnormal Surface Pattern Detection

    Today’s engineering products demand increasingly strict tolerances. The shape of a machined surface plays a critical role in the desired functionality of a product. Even a small error can be the difference between a successful product launch and a major delay. It is important to develop tools that confirm the quality and accuracy of manufactured products. The key to assessing quality is robust measurement and inspection tools combined with advanced analysis. This research is motivated by the goals of 1) developing an advanced optical metrology system that provides accurate 3D profiles of target objects with curvature and irregular texture and 2) developing algorithms that can recognize and extract meaningful surface features with consideration of machining process information. A new low-cost measurement system with a simple coherent interferometric fringe projection setup is developed. Compared with existing optical measurement systems, the developed system generates fringe patterns on the object surface through a pair of optical fibers in a relatively simple and flexible configuration. Three-dimensional measurements of a variety of curved surfaces demonstrate the applicability and flexibility of the developed system. An improved phase unwrapping algorithm based on a flood-fill method is developed to enhance the performance of the image processing. The developed algorithm performs phase unwrapping under the guidance of a hybrid quality map that is generated by considering the quality of both the acquired original intensity images and the calculated wrapped phase map. Advances in metrology systems enable engineers to obtain a large amount of surface information. A systematic framework for surface shape characterization and abnormal pattern detection is proposed to take advantage of the availability of high-definition surface measurements from advanced metrology systems. The proposed framework evaluates a measured surface in two stages. The first step focuses on the extraction of the general shape (e.g., surface form) from the measurement, for surface functionality evaluation and process monitoring. The second step focuses on the extraction of application-specific surface details with consideration of process information (e.g., surface waviness). Applications of automatic abnormal surface pattern detection are demonstrated. In summary, this research focuses on two core areas: 1) developing a metrology system that is capable of measuring engineered surfaces accurately; and 2) proposing a methodology that can extract meaningful information from high-definition measurements with consideration of process information and product functionality.
    PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/136999/1/xinweng_1.pd
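
    A minimal sketch of quality-guided flood-fill phase unwrapping, in the spirit of the algorithm described above, is shown below. Here the quality map is simply taken as a given input array; the hybrid quality map constructed from the intensity images and the wrapped phase map is the thesis' own contribution and is not reproduced.

        # Minimal sketch of quality-guided flood-fill phase unwrapping.
        # The quality map is a given input here; the thesis builds a hybrid map
        # from intensity images and the wrapped phase, which is not shown.

        import heapq
        import numpy as np

        def unwrap_quality_guided(wrapped: np.ndarray, quality: np.ndarray) -> np.ndarray:
            """Unwrap a 2D phase map by flood-filling from the highest-quality
            pixel, always growing through the best-quality frontier pixel first."""
            h, w = wrapped.shape
            unwrapped = wrapped.astype(np.float64).copy()
            visited = np.zeros((h, w), dtype=bool)

            seed = np.unravel_index(np.argmax(quality), quality.shape)
            visited[seed] = True
            frontier = []   # max-heap via negated quality: (-q, y, x, y_from, x_from)

            def push_neighbours(y, x):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                        heapq.heappush(frontier, (-quality[ny, nx], ny, nx, y, x))

            push_neighbours(*seed)
            while frontier:
                _, y, x, fy, fx = heapq.heappop(frontier)
                if visited[y, x]:
                    continue
                visited[y, x] = True
                # Remove the 2*pi ambiguity relative to the already-unwrapped neighbour.
                diff = wrapped[y, x] - wrapped[fy, fx]
                unwrapped[y, x] = unwrapped[fy, fx] + diff - 2 * np.pi * np.round(diff / (2 * np.pi))
                push_neighbours(y, x)
            return unwrapped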