
    Improving Quantitative Infrared Imaging for Medical Diagnostic Applications

    Infrared (IR) thermography is a non-ionizing and non-invasive imaging modality that allows the measurement of the spatial and temporal variations of the infrared radiation emitted by the human body. The emitted radiation, and the skin surface temperature that can be derived from it, carry a wealth of information about different processes within the human body. To advance the quantitative use of IR thermography in medical diagnostics, this dissertation investigates several issues critical to the demands imposed by clinical applications. We developed a computational thermal model of the human skin with multiple layers and a near-surface lesion to understand the thermal behavior of skin tissue in dynamic infrared imaging. With the aid of this model, various cooling methods and conditions suitable for the clinical application of dynamic IR imaging are critically evaluated. The analysis of skin cooling provides a quantitative basis for the selection and optimization of cooling conditions in the clinical practice of dynamic IR imaging. To improve the quantitative accuracy of the analysis of dynamic IR imaging, we proposed a motion tracking approach using a template-based algorithm. The motion tracking approach is capable of following the involuntary motion of the subject in the IR image sequence, thereby allowing us to track the temperature evolution for a particular region on the skin. In addition, to compensate for the measurement artifacts induced by surface curvature in IR thermography, a correction formula was developed based on an emissivity model and phantom experiments. The correction formula was integrated into a 3D imaging procedure based on a system combining Kinect and IR cameras. We demonstrated the feasibility of mapping 2D IR images onto the 3D surface of the human body. The accuracy of temperature measurement was improved by applying the correction method. Finally, we designed a variety of quantitative approaches to analyze the clinical data acquired from patient studies of pigmented lesions and hemangiomas. These approaches allow us to evaluate the thermal signatures of lesions with different characteristics, measured in both static and dynamic IR imaging. The collection of methodologies described in this dissertation, leading to improved ease of use and accuracy, can contribute to the broader implementation of quantitative IR thermography in medical diagnostics.
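
    As a rough illustration of the template-based motion tracking summarised above (not the author's implementation), the following sketch tracks a skin region across an IR sequence by normalized cross-correlation, assuming OpenCV and NumPy; the region-of-interest convention and all names are illustrative.

        import cv2
        import numpy as np

        def track_roi(frames, roi):
            """Track a skin region through an IR image sequence.
            frames: iterable of 2D grayscale arrays; roi: (x, y, w, h) in the first frame."""
            x, y, w, h = roi
            template = frames[0][y:y + h, x:x + w].astype(np.float32)
            positions = []
            for frame in frames:
                # Normalized cross-correlation of the template over the current frame.
                res = cv2.matchTemplate(frame.astype(np.float32), template,
                                        cv2.TM_CCOEFF_NORMED)
                _, _, _, max_loc = cv2.minMaxLoc(res)
                positions.append(max_loc)  # top-left corner of the best match
            return positions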

    Precision bin-picking using a 3D sensor and a 1D laser sensor (Bin-picking de precisão usando um sensor 3D e um sensor laser 1D)

    The technique by which a robot grasps objects that are randomly placed inside a box or on a pallet is called bin-picking. This process is of great interest in an industrial environment as it provides enhanced automation, increased production and cost reduction. Bin-picking has evolved greatly over the years thanks to advances in vision technology, software, and gripping solutions, which are in constant development. However, the creation of a versatile system, capable of collecting any type of object without deforming it, regardless of the disordered environment around it, remains a challenge. To this end, the use of 3D perception is unavoidable. Still, the information acquired by some lower-cost 3D sensors is not very precise; therefore, combining this information with that of other devices is an approach still under study. The main goal of this work is to develop a solution for the execution of a precise bin-picking process capable of grasping small and fragile objects without breaking or deforming them. This may be done by combining the information provided by two sensors: a 3D sensor (Kinect) used to analyse the workspace and identify the object, and a 1D laser sensor to determine the exact distance to the object when approaching it. Additionally, the developed system may be placed at the end of a manipulator in order to become an active perception unit. Once the global system of sensors, their controllers and the robotic manipulator are integrated into a ROS (Robot Operating System) infrastructure, the data provided by the sensors can be analysed and combined to provide a bin-picking solution. Finally, the testing phase demonstrated, after some adjustments to the laser sensor measurements, the viability and reliability of the developed bin-picking process.
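
    A hypothetical ROS node illustrating the kind of sensor combination described above; this is a minimal sketch only, and the topic names, message types and switching logic are assumptions, not taken from the thesis.

        #!/usr/bin/env python
        import rospy
        from std_msgs.msg import Float32
        from sensor_msgs.msg import Range
        from geometry_msgs.msg import PointStamped

        class ApproachRefiner(object):
            """Combine a coarse object position from the Kinect with the 1D laser
            range during the final approach (topic names are illustrative)."""
            def __init__(self):
                self.coarse_distance = None
                rospy.Subscriber('/kinect/object_position', PointStamped, self.object_cb)
                rospy.Subscriber('/laser/range', Range, self.range_cb)
                self.pub = rospy.Publisher('/bin_picking/approach_distance',
                                           Float32, queue_size=1)

            def object_cb(self, msg):
                # Coarse distance to the object estimated from the Kinect data.
                self.coarse_distance = msg.point.z

            def range_cb(self, msg):
                if self.coarse_distance is None:
                    return
                # Once the laser returns a valid reading, trust its more precise
                # 1D measurement over the noisier Kinect depth.
                distance = msg.range if msg.range < msg.max_range else self.coarse_distance
                self.pub.publish(Float32(distance))

        if __name__ == '__main__':
            rospy.init_node('approach_refiner')
            ApproachRefiner()
            rospy.spin()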

    A study of high-speed three-dimensional shape inspection based on a self-projection method (自己投影法に基づく高速三次元形状検査の研究)

    Hiroshima University (広島大学), Doctor of Engineering (博士(工学))

    A coordinated UAV deployment based on stereovision reconnaissance for low risk water assessment

    Biologists and management authorities such as the World Health Organisation require monitoring of water pollution for adequate management of aquatic ecosystems. Current water sampling techniques based on human samplers are time consuming, slow and restrictive. This thesis takes advantage of the recent affordability and higher flexibility of Unmanned Aerial Vehicles (UAVs) to provide innovative solutions to the problem. The proposed solution involves having one UAV, “the leader”, equipped with sensors that are capable of accurately estimating the wave height in an aquatic environment; if the region identified by the leader is characterised as having a low wave height, the area is deemed suitable for landing. A second UAV, “the follower”, equipped with a payload such as an Autonomous Underwater Vehicle (AUV), can proceed to the location identified by the leader, land and deploy the AUV into the water body for the purposes of water sampling. The thesis acknowledges that there are two main challenges to overcome in order to develop the proposed framework: firstly, developing a sensor to accurately measure the height of a wave and, secondly, achieving cooperative control of two UAVs. Two identical cameras utilising a stereovision approach were developed for capturing three-dimensional information of the wave distribution in a non-invasive manner. As with most innovations, laboratory-based testing was necessary before a full-scale implementation could be attempted. Preliminary results indicate that, provided a suitable stereo matching algorithm is applied, one can generate a dense 3D reconstruction of the surface to allow estimation of the wave height parameters. Stereo measurements show good agreement with the results obtained from a wave probe in both the time and frequency domain. The mean absolute error for the average wave height and the significant wave height is less than 1 cm for the acquired time series data set. A formation-flying algorithm was developed to allow cooperative control between two UAVs. Results show that the follower was able to successfully track the leader’s trajectory and, in addition, maintain the given separation distance from the leader to within a 1 m tolerance through the course of the experiments despite windy conditions, a low sampling rate and the poor accuracy of the GPS sensors. In the closing section of the thesis, near real-time dense 3D reconstruction and wave height estimation from the reconstructed 3D points is demonstrated for an aquatic body using the leader UAV. Results show that for a pair of images taken at a resolution of 320 by 240 pixels, up to 21,000 3D points can be generated to provide a dense 3D reconstruction of the water surface within the field of view of the cameras.
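
    To make the reported wave statistics concrete, the following is a minimal sketch (an assumed, conventional formulation, not the thesis code) that derives individual wave heights from a surface-elevation time series by zero-upcrossing analysis and then computes the average wave height and the significant wave height as the mean of the highest third of waves.

        import numpy as np

        def wave_heights(elevation):
            """Individual crest-to-trough wave heights from a surface-elevation
            time series, using zero-upcrossing analysis about the mean level."""
            eta = np.asarray(elevation, dtype=float)
            eta = eta - eta.mean()
            # Indices where the signal crosses the mean level going upward.
            up = np.where((eta[:-1] < 0) & (eta[1:] >= 0))[0]
            heights = [eta[i0:i1].max() - eta[i0:i1].min()
                       for i0, i1 in zip(up[:-1], up[1:])]
            return np.array(heights)

        def wave_statistics(elevation):
            h = np.sort(wave_heights(elevation))[::-1]
            if h.size == 0:
                return float('nan'), float('nan')
            h_avg = h.mean()
            h_sig = h[:max(1, h.size // 3)].mean()  # mean of the highest third
            return h_avg, h_sig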

    Analysis and Modelling of Dynamic Three-Dimensional Scenes Using a Time-of-Flight Camera (Analyse und Modellierung dynamischer dreidimensionaler Szenen unter Verwendung einer Laufzeitkamera)

    Many applications in computer vision require the automatic analysis and reconstruction of static and dynamic scenes. The automatic analysis of three-dimensional scenes is therefore an area which is intensively investigated. Most approaches focus on the reconstruction of rigid geometry because the reconstruction of non-rigid geometry is far more challenging and requires that three-dimensional data be available at high frame rates. Rigid scene analysis is used, for example, in autonomous navigation, for surveillance and for the conservation of cultural heritage. The analysis and reconstruction of non-rigid geometry, on the other hand, opens up many more possibilities, not only for the above-mentioned applications. In the production of media content for television or cinema, the analysis, recording and playback of full 3D content can be used to generate new views of real scenes or to replace real actors by animated artificial characters. The most important requirement for the analysis of dynamic content is the availability of reliable three-dimensional scene data. Stereo methods have mostly been used to compute the depth of scene points, but these methods are computationally expensive and do not provide sufficient quality in real time. In recent years the so-called Time-of-Flight cameras have left the prototype stage and are now capable of delivering dense depth information in real time at reasonable quality and price. This thesis investigates the suitability of these cameras for the purpose of dynamic three-dimensional scene analysis. Before a Time-of-Flight camera can be used to analyze three-dimensional scenes it has to be calibrated internally and externally. Moreover, Time-of-Flight cameras suffer from systematic depth measurement errors due to their operating principle. This thesis proposes an approach to estimate all necessary parameters in one calibration step. In the following, the reconstruction of rigid environments and objects is investigated and solutions for these tasks are presented. The reconstruction of dynamic scenes and the generation of novel views of dynamic scenes are achieved by the introduction of a volumetric data structure to store and fuse the depth measurements and their change over time. Finally, a Mixed Reality system is presented in which the contributions of this thesis are brought together. This system is able to combine real and artificial scene elements with correct mutual occlusion, mutual shadowing and physical interaction. This thesis shows that Time-of-Flight cameras are a suitable choice for the analysis of rigid as well as non-rigid scenes under certain conditions. It contains important contributions for the necessary steps of calibration, preprocessing of depth data, and reconstruction and analysis of three-dimensional scenes.
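
    The idea of a volumetric structure that stores and fuses depth measurements over time can be illustrated with a minimal sketch; this is an assumption about what such a structure might look like (a simple weighted running average per voxel), not the data structure actually used in the thesis.

        import numpy as np

        class VoxelGrid(object):
            """Toy volumetric store that fuses per-frame samples by a weighted running average."""
            def __init__(self, shape, voxel_size, origin=(0.0, 0.0, 0.0)):
                self.sums = np.zeros(shape, dtype=np.float32)     # weighted sum of samples
                self.weights = np.zeros(shape, dtype=np.float32)  # accumulated weights
                self.voxel_size = float(voxel_size)
                self.origin = np.asarray(origin, dtype=np.float32)

            def integrate(self, points, values, weight=1.0):
                """points: (N, 3) world coordinates; values: (N,) per-point samples
                (e.g. signed distances to the measured surface)."""
                points = np.asarray(points, dtype=np.float32)
                values = np.asarray(values, dtype=np.float32)
                idx = np.floor((points - self.origin) / self.voxel_size).astype(int)
                inside = np.all((idx >= 0) & (idx < np.array(self.sums.shape)), axis=1)
                i, j, k = idx[inside].T
                np.add.at(self.sums, (i, j, k), weight * values[inside])
                np.add.at(self.weights, (i, j, k), weight)

            def fused(self):
                """Per-voxel fused value; voxels without measurements are NaN."""
                with np.errstate(invalid='ignore', divide='ignore'):
                    return np.where(self.weights > 0, self.sums / self.weights, np.nan)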

    Multi-scale metrology for automated non-destructive testing systems

    This thesis was previously held under moratorium from 5/05/2020 to 5/05/2022. The use of lightweight composite structures in the aerospace industry is now commonplace. Unlike conventional materials, these parts can be moulded into complex aerodynamic shapes, which are difficult to inspect rapidly using conventional Non-Destructive Testing (NDT) techniques. Industrial robots provide a means of automating the inspection process due to their high dexterity and improved path planning methods. This thesis concerns using industrial robots as a method for assessing the quality of components with complex geometries. The focus of the investigations in this thesis is on improving the overall system performance through the use of concepts from the field of metrology, specifically calibration and traceability. The use of computer vision is investigated as a way to increase automation levels by identifying a component's type and approximate position through comparison with CAD models. The challenges identified through this research include developing novel calibration techniques for optimising sensor integration, verifying system performance using laser trackers, and improving automation levels through optical sensing. The developed calibration techniques are evaluated experimentally using standard reference samples. A 70% increase in absolute accuracy was achieved in comparison to manual calibration techniques. Inspections were improved, as verified by a 30% improvement in ultrasonic signal response. A new approach to automatically identify and estimate the pose of a component was developed specifically for automated NDT applications. The method uses 2D and 3D camera measurements along with CAD models to extract and match shape information. It was found that optical large-volume measurements could provide sufficiently high accuracy measurements to allow ultrasonic alignment methods to work, establishing a multi-scale metrology approach to increasing automation levels. A classification framework based on shape outlines extracted from images was shown to provide over 88% accuracy on a limited number of samples.
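
    As a hedged illustration of outline-based classification of a component against reference silhouettes (for example rendered from CAD models), the sketch below uses OpenCV 4.x contour matching; the actual features and classifier used in the thesis are not specified in the abstract, so the approach and names here are assumptions.

        import cv2
        import numpy as np

        def largest_contour(gray):
            """Binarise an image (Otsu threshold) and return its largest external contour."""
            _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            return max(contours, key=cv2.contourArea)

        def classify_outline(gray, reference_outlines):
            """Match the extracted outline against reference outlines keyed by component
            type; cv2.matchShapes gives a scale- and rotation-invariant dissimilarity."""
            c = largest_contour(gray)
            scores = {name: cv2.matchShapes(c, ref, cv2.CONTOURS_MATCH_I1, 0.0)
                      for name, ref in reference_outlines.items()}
            best = min(scores, key=scores.get)
            return best, scores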

    Automated Visual Database Creation For A Ground Vehicle Simulator

    This research focuses on extracting road models from stereo video sequences taken from a moving vehicle. The proposed method combines color-histogram-based segmentation, active contours (snakes) and morphological processing to extract road boundary coordinates for conversion into Matlab or Multigen OpenFlight compatible polygonal representations. Color segmentation uses an initial truth frame to develop a color probability density function (PDF) of the road versus the terrain. Subsequent frames are segmented using a maximum a posteriori (MAP) criterion, and the resulting templates are used to update the PDFs. Color segmentation worked well where there was minimal shadowing and occlusion by other cars. A snake algorithm was used to find the road edges, which were converted to 3D coordinates using stereo disparity and vehicle position information. The resulting 3D road models were accurate to within 1 meter.
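
    The histogram-based MAP labelling step can be sketched roughly as follows; this is a simplified illustration under an assumed RGB quantisation and a fixed prior, not the original implementation, which additionally applies snakes and morphological post-processing and updates the PDFs frame by frame.

        import numpy as np

        def build_color_pdf(pixels, bins=32):
            """Normalised 3D RGB histogram used as a class-conditional colour PDF.
            pixels: (N, 3) uint8 samples taken from the truth frame for one class."""
            hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
            return hist / hist.sum()

        def map_segment(image, road_pdf, terrain_pdf, prior_road=0.5, bins=32):
            """Per-pixel MAP road/terrain labelling from colour likelihoods.
            image: (H, W, 3) uint8; returns a boolean road mask."""
            idx = np.clip(image.reshape(-1, 3).astype(int) * bins // 256, 0, bins - 1)
            p_road = road_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * prior_road
            p_terrain = terrain_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * (1.0 - prior_road)
            return (p_road > p_terrain).reshape(image.shape[:2])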

    Target Tracking Using Optical Markers for Remote Handling in ITER

    The thesis focuses on the development of a vision system to be used in the remote handling systems of the International Thermonuclear Experimental Reactor (ITER). It presents and discusses a realistic solution to estimate the pose of key operational targets, while taking into account the specific needs and restrictions of the application. The contributions to the state of the art are on two main fronts: 1) the development of optical markers that can withstand the extreme conditions in the environment; 2) the development of a robust marker detection and identification framework that can be effectively applied to different use cases. The markers’ locations and labels are used in computing the pose. In the first part of the work, a retro-reflective marker made of ITER-compliant materials, namely fused silica and stainless steel, is designed. A methodology is proposed to optimize the markers’ performance. Highly distinguishable markers are manufactured and tested. In the second part, a hybrid pipeline is proposed that detects uncoded markers in low-resolution images using classical methods and identifies them using a machine learning approach. It is demonstrated that the proposed methodology effectively generalizes to different marker constellations and can successfully detect both retro-reflective markers and laser engravings. Lastly, a methodology is developed to evaluate the end-to-end accuracy of the proposed solution using the feedback provided by an industrial robotic arm. Results are evaluated in a realistic test setup for two significantly different use cases. Results show that marker-based tracking is a viable solution for the problem at hand and can provide superior performance to earlier stereo-matching-based approaches. The developed solutions could be applied to other use cases and applications.
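
    Given detected and identified marker centroids and the known 3D positions of the markers on the target, the pose computation can be illustrated with a standard perspective-n-point solve; the sketch below uses OpenCV and is an assumption about the general approach, not the thesis's actual pose estimation pipeline.

        import cv2
        import numpy as np

        def estimate_pose(object_points, image_points, camera_matrix, dist_coeffs=None):
            """Pose of the marker constellation relative to the camera.
            object_points: (N, 3) marker positions in the target frame;
            image_points:  (N, 2) detected, identified marker centroids in the image;
            camera_matrix: 3x3 intrinsic matrix from camera calibration."""
            dist = np.zeros(5) if dist_coeffs is None else dist_coeffs
            ok, rvec, tvec = cv2.solvePnP(
                np.asarray(object_points, dtype=np.float32),
                np.asarray(image_points, dtype=np.float32),
                camera_matrix, dist, flags=cv2.SOLVEPNP_ITERATIVE)
            R, _ = cv2.Rodrigues(rvec)  # rotation of the target in the camera frame
            return ok, R, tvec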