38 research outputs found

    Development of a calibration pipeline for a monocular-view structured illumination 3D sensor utilizing an array projector

    Commercial off-the-shelf digital projection systems are commonly used in active structured illumination photogrammetry of macro-scale surfaces due to their relatively low cost, accessibility, and ease of use. They can be modelled as inverse pinhole devices, and the calibration pipeline for a 3D sensor using pinhole devices in a projector-camera configuration is already well established. Recently, there have been advances in creating projection systems offering projection speeds greater than those available from conventional off-the-shelf digital projectors. However, these systems cannot be calibrated using well-established techniques based on the pinhole assumption, as they are chip-less and have no projection lens. This work is based on the utilization of unconventional projection systems known as array projectors, which contain not one but multiple projection channels that project a temporal sequence of illumination patterns. None of the channels implements a digital projection chip or a projection lens. To work around the calibration problem, previous realizations of a 3D sensor based on an array projector required a stereo-camera setup, with triangulation taking place between the two pinhole-modelled cameras instead. However, a monocular setup is desired, as a single-camera configuration results in decreased cost, weight, and form factor. This study presents a novel calibration pipeline that realizes a single-camera setup. A generalized intrinsic calibration process without model assumptions was developed that directly samples the illumination frustum of each array projection channel. An extrinsic calibration process was then created that determines the pose of the single camera through a downhill simplex optimization initialized by particle swarm. Lastly, a method to store the intrinsic calibration with the aid of an easily realizable calibration jig was developed for re-use in arbitrary measurement camera positions, so that the intrinsic calibration does not have to be repeated.
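The extrinsic step described above can be illustrated in miniature. The sketch below is ours, not the authors' pipeline: a plain global-best particle swarm seeds a Nelder-Mead (downhill simplex) refinement of a pose-like parameter vector, with a synthetic quadratic cost standing in for the real reprojection error. `TRUE_POSE`, the cost function, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch: particle swarm seeding a downhill-simplex refinement.
# The cost is a synthetic stand-in for a real camera-pose reprojection error.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

TRUE_POSE = np.array([0.4, -0.2, 1.5])  # hypothetical 3-parameter "pose"

def cost(pose):
    # stand-in for reprojection error against sampled frustum rays
    return float(np.sum((pose - TRUE_POSE) ** 2))

def particle_swarm(cost, dim, n_particles=30, iters=80, lo=-3.0, hi=3.0):
    """Plain global-best PSO returning the best position found."""
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

seed = particle_swarm(cost, dim=3)                    # global search
result = minimize(cost, seed, method="Nelder-Mead")   # simplex refinement
```

The swarm provides a basin of attraction; the simplex then polishes the estimate locally, mirroring the initialization-then-refinement split described in the abstract.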

    Geometric model and calibration method for a solid-state LiDAR

    This paper presents a novel calibration method for solid-state LiDAR devices based on a geometrical description of their scanning system, which has variable angular resolution. Determining this distortion across the entire field of view of the system yields accurate and precise measurements, enabling the device to be combined with other sensors. On the one hand, the geometrical model is formulated using the well-known Snell's law and the intrinsic optical assembly of the system; on the other hand, the proposed method describes the scanned scenario with an intuitive camera-like approach relating pixel locations to scanning directions. Simulations and experimental results show that the model fits real devices and that the calibration procedure accurately maps their varying resolution, so undistorted representations of the observed scenario can be provided. Thus, the calibration method proposed in this work is applicable and valid for existing scanning systems, improving their precision and accuracy by an order of magnitude.
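The geometric model rests on the vector form of Snell's law, which is standard and can be sketched directly; the function below is our own illustration (the paper's actual optical assembly model is more involved).

```python
# Vector form of Snell's law (standard formula; names are ours).
import numpy as np

def refract(d, n, n1, n2):
    """Bend unit direction d at a surface with unit normal n (pointing
    against d), going from refractive index n1 into n2.
    Returns None on total internal reflection."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    n = np.asarray(n, float) / np.linalg.norm(n)
    r = n1 / n2
    cos_i = -float(np.dot(n, d))
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t2)
    return r * d + (r * cos_i - cos_t) * n

# A 45-degree ray entering glass (n2 = 1.5) from air (n1 = 1.0):
t45 = refract([np.sin(np.radians(45)), 0.0, -np.cos(np.radians(45))],
              [0.0, 0.0, 1.0], 1.0, 1.5)
```

Tracing such refractions through the scanner's optical assembly is what maps pixel locations to scanning directions in the camera-like description.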

    A novel electrical conductive resin for stereolithographic 3D printing

    The abstract is in the attachment.

    Development of Machine Vision Based Workstation for Laser Micromachining

    Today, laser-based micromachining technologies enable the most advanced material manufacturing. Since the process has a wide range of applications in microelectronics, medical devices, aerospace and other fields, its accuracy is of utmost significance. The current project proposes a machine-vision-assisted workstation for laser micromachining. The machine vision system not only controls the laser path but also locates the starting point of machining. The system was designed and developed from basic components, while MATLAB was used to control the laser direction and to image the specimen. To analyse the limitations of the developed system, a rectangular shape was machined. Subsequently, known magnitudes of translational and rotational movement were applied to the specimen. Images of the machined area were captured before and after the transformation. A MATLAB algorithm was used to process the images to find the initial point of the machined area on the transformed specimen. The laser beam was then guided to that point and the machining repeated. The specimen was measured under a microscope to find the error between the former and latter machined paths. Translational and angular errors were measured for various transformations. In this study, the challenges encountered in machining complex geometries, and corresponding possible solutions, are addressed. The study proposes mathematical-function-based and image-processing-based algorithms to find the machining coordinates; the function-based approach was found to be more efficient for complex geometries. Furthermore, the effect of process parameters on the overall quality of the manufacturing is discussed. COMSOL software was used to model the effects of laser parameters on the roughness, depth and thickness of the machined path. To validate the numerical model, experiments were conducted for different process parameters; the results are in good agreement with the simulation results.
The simulated model can be used to estimate the effect of the process parameters before machining. Since the laser beam can be controlled based on the geometry of the specimen and the study demonstrates the minimum possible error, this system can be applied to manufacture and repair a wide range of microstructures.
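Relocating the machining start point after the specimen is moved amounts to estimating a 2D rigid transform from matched image points. The sketch below is our own illustration (not the authors' MATLAB code), using the standard Kabsch/Procrustes solution on synthetic corner points of a machined rectangle.

```python
# Hedged sketch: recover rotation R and translation t with dst ~ R @ src + t,
# then re-locate the machining start point on the moved specimen.
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points to dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rectangle corners before/after moving the specimen.
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
before = np.array([[0, 0], [4, 0], [4, 2], [0, 2]], float)
after = before @ R_true.T + t_true

R, t = rigid_transform_2d(before, after)
start_point = R @ before[0] + t   # machining start point on the moved part
```

With the transform in hand, the laser can be guided to `start_point` so machining resumes exactly where the previous pass began.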

    Kaleidoscopic imaging

    Kaleidoscopes have great potential in computational photography as a tool for redistributing light rays. In time-of-flight imaging, the concept of the kaleidoscope is also useful when dealing with the reconstruction of the geometry that causes multiple reflections. This work is a step towards opening new possibilities for the use of mirror systems, as well as towards making their use more practical. The focus of this work is the analysis of planar kaleidoscope systems to enable their practical applicability in 3D imaging tasks. We analyse important practical properties of mirror systems and develop a theoretical toolbox for dealing with planar kaleidoscopes. Based on this toolbox, we explore the use of planar kaleidoscopes for multi-view imaging and for the acquisition of 3D objects. Knowledge of the mirror positions is crucial for these multi-view applications and requires suitable calibration methods. A related problem arises in time-of-flight applications from the capture of often unwanted multiple reflections: reconstructing the geometry of a mirror room from time-of-flight measurements. We employ the developed tools to solve this problem, in a more general way than before, using multiple observations of a single scene point.
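The basic building block of any planar-kaleidoscope analysis is the virtual viewpoint created by mirroring across a plane. The snippet below is a minimal illustration of ours, not code from the thesis.

```python
# Hedged sketch: reflecting a point across a planar mirror, the elementary
# operation behind multi-view imaging with planar kaleidoscopes.
import numpy as np

def reflect_point(p, n, d):
    """Mirror a 3D point p across the plane of points x with n.x + d = 0."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    return np.asarray(p, float) - 2.0 * (np.dot(n, p) + d) * n

# A camera at z = 3 observed through a mirror in the plane z = 0 behaves
# like a virtual camera at z = -3:
virtual = reflect_point([0.0, 0.0, 3.0], [0.0, 0.0, 1.0], 0.0)
```

Composing such reflections across several mirrors yields the family of virtual cameras that a kaleidoscope provides for multi-view acquisition, which is why accurate knowledge of the mirror planes is crucial.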

    Développements en spectroscopie et microscopie non linéaire pour l'étude morphologique et fonctionnelle de la peau humaine

    Skin is an organ that envelops the entire body and acts as a pivotal, efficient natural barrier towards various invaders. For the treatment of major dermatological diseases and in the cosmetic industry, topical applications on skin are widely used; thus, many efforts in skin research have been aimed at understanding detailed molecular absorption and efficient penetration mechanisms. However, it remains difficult to obtain high-resolution visualization in 3D together with chemical selectivity and quantification in skin research. Nonlinear spectroscopy and microscopy, including two-photon excited fluorescence (TPEF), spontaneous Raman scattering, coherent anti-Stokes Raman scattering (CARS) and stimulated Raman scattering (SRS), are introduced in this work for unambiguous skin morphological identification and the detection of topically applied molecules. Several quantitative methods based on nonlinear spectroscopy and microscopy are designed for 3D chemical analysis in reconstructed skin, ex vivo and in vivo on human skin. Furthermore, to adapt to forthcoming clinical applications, an endoscopic design is investigated to bring nonlinear imaging into flexible endoscopes.

    Machine learning for the automation and optimisation of optical coordinate measurement

    Camera-based methods for optical coordinate metrology are growing in popularity due to their non-contact probing technique, fast data acquisition time, high point density and high surface coverage. However, these optical approaches are often highly user dependent, depend heavily on accurate system characterisation, and can be slow in processing the raw data acquired during measurement. Machine learning approaches have the potential to remedy the shortcomings of such optical coordinate measurement systems. The aim of this thesis is to remove dependence on the user entirely by enabling full automation and optimisation of optical coordinate measurements for the first time. A novel software pipeline is proposed, built, and evaluated which enables automated and optimised measurements to be conducted; no such automated and optimised system for performing optical coordinate measurements currently exists. The pipeline can be roughly summarised as follows: intelligent characterisation -> view planning -> object pose estimation -> automated data acquisition -> optimised reconstruction. Several novel methods were developed to enable the embodiment of this pipeline. Chapter 4 presents an intelligent camera characterisation (the process of determining a mathematical model of the optical system) using a hybrid approach wherein an EfficientNet convolutional neural network provides sub-pixel corrections to feature locations provided by the popular OpenCV library. The proposed characterisation scheme is shown to robustly refine the characterisation result, as quantified by a 50 % reduction in the mean residual magnitude. The camera characterisation is performed before measurements are taken and the results are fed as an input to the pipeline. Chapter 5 presents a novel genetic optimisation approach to create an imaging strategy, i.e. the positions from which data should be captured relative to the part's specific geometry.
This approach exploits the computer-aided design (CAD) data of a given part, ensuring any measurement is optimal for a specific target geometry. The view planning approach is shown to give reconstructions with closer agreement to tactile coordinate measurement machine (CMM) results from 18 images than unoptimised measurements using 60 images. This view planning algorithm assumes the part is perfectly placed in the centre of the measurement volume, so it is first adjusted for an arbitrary placement of the part before being used for data acquisition. Chapter 6 presents a generative model for the creation of surface texture data, allowing the generation of synthetic but realistic datasets for the training of statistical models. The surface texture generated by the proposed model is shown to be quantitatively representative of real focus variation microscope measurements. The model developed in this chapter is used to produce large synthetic but realistic datasets for the training of further statistical models. Chapter 7 presents an autonomous background removal approach which removes superfluous data from images captured during a measurement. Using images processed by this algorithm to reconstruct a 3D measurement of an object is shown to be effective in reducing data processing times and improving measurement results. Applying the proposed background removal to images before reconstruction is shown to yield up to a 41 % reduction in data processing times, a reduction in superfluous background points of up to 98 %, an increase in point density on the object surface of up to 10 %, and improved agreement with the CMM, as measured by both a reduction in outliers and a reduction of up to 51 microns in the standard deviation of point-to-mesh distances. The background removal algorithm is used both to improve the final reconstruction and within stereo pose estimation.
Finally, in Chapter 8, two methods (one monocular and one stereo) for establishing the initial pose of the part to be measured relative to the measurement volume are presented. This is an important step towards enabling automation, as it allows the user to place the object at an arbitrary location in the measurement volume and the pipeline to adjust the imaging strategy to account for this placement, enabling the optimised view plan to be carried out without the need for special part fixturing. It is shown that the monocular method can locate a part to within an average of 13 mm and the stereo method to within an average of 0.44 mm, as evaluated on 240 test images. Pose estimation is used to provide a correction to the view plan for an arbitrary part placement without the need for specialised fixturing or fiducial marking. This pipeline enables an inexperienced user to place a part anywhere in the measurement volume of a system and, from the part's associated CAD data, the system will perform an optimal measurement without the need for any user input. Each new method developed as part of this pipeline has been validated against real experimental data from current measurement systems and shown to be effective. In the future work given in Section 9.1, a possible hardware integration of the methods developed in this thesis is presented, although the creation of this hardware is beyond the scope of the thesis.
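The "mean residual magnitude" used above to quantify the characterisation refinement is the average distance between detected and reprojected feature points. The snippet below is our own illustration of that metric with made-up numbers, not data or code from the thesis; a correction stage that halves each residual vector halves the metric exactly.

```python
# Hedged illustration: mean residual magnitude of a camera characterisation,
# i.e. mean Euclidean distance between detected and reprojected features.
import numpy as np

def mean_residual_magnitude(detected, reprojected):
    """Mean Euclidean distance between point sets, in pixels."""
    return float(np.mean(np.linalg.norm(detected - reprojected, axis=1)))

# Made-up feature locations and two candidate reprojections:
detected = np.array([[100.2, 50.1], [200.0, 80.4], [150.5, 120.0]])
coarse = detected + np.array([[0.4, -0.3], [0.2, 0.5], [-0.4, 0.2]])
refined = detected + np.array([[0.2, -0.15], [0.1, 0.25], [-0.2, 0.1]])

m_coarse = mean_residual_magnitude(detected, coarse)
m_refined = mean_residual_magnitude(detected, refined)
```

Here each refined residual vector is exactly half its coarse counterpart, so the metric drops by 50 %, the same figure the thesis reports for its CNN-based sub-pixel correction stage.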

    Subsurface ablation of tissue by ultrafast laser

    Laser-induced optical breakdown (LIOB) is a multiphoton process which can be used for the selective removal of material. It revolves around the creation of a plasma in the focal volume of a beam, and requires very high peak intensities, on the order of GW·cm⁻². For this reason, ultrafast lasers delivering high-energy pulses with very short durations, below 1 ps, are the tools of choice for triggering LIOB. The local creation of the plasma can induce a sharp rise in temperature and pressure over a few micrometers, which produces a cavitation bubble. The combined mechanical effects of the bubble creation and chemical effects of the free electrons in the plasma can induce dramatic changes in and around the focal volume. This is particularly true in sensitive samples such as biological tissues, where cells can be selectively destroyed by LIOB. The axial and lateral confinement of the plasma creation, due to the multiphoton nature of LIOB, opens interesting perspectives in the field of microsurgery. In this regard, the work presented in this thesis concerns the analysis of the effects of LIOB in soft biological tissues. More specifically, we investigate the case of arterial tissues and the opportunities this technique could offer in the treatment of atherosclerosis. First, we present the current knowledge on the mechanism and impact of LIOB on the surrounding medium, particularly in biological samples. We discuss their modeling, both via simulation and via replication in organic and inorganic phantoms. We consider the theory of the linear and non-linear mechanisms driving the evolution of the plasma density in the focal volume, the minimum requirements for the creation of a cavitation bubble, and optical effects which can modify the shape of a plasma. We then observe the behavior described by this theory in transparent and scattering phantoms mimicking biological tissues, and investigate scanning approaches to remove volumes of material.
The following section of this thesis is devoted to investigating the effect of LIOB at the cellular level. We discuss an approach according to which LIOB may be of interest in the treatment of atherosclerosis or other pathologies which could benefit from controlling the population of cells undergoing controlled cell death (apoptosis). We then investigate the effect of LIOB on populations of epithelial cells in 2D and 3D cultures, monitoring the increase in the number of necrotic and apoptotic cells in different ablation regimes. We then present the methods and results of subsurface ablation in arterial tissue, both healthy and atherosclerotic. In ex vivo experiments, we focus on the observation of a bubble produced by LIOB and the structural damage generated. In in vivo experiments, we investigate the effect on necrosis and apoptosis of cells around the target area, and compare our findings with the results obtained in cell cultures and phantoms. Finally, delivering the high-intensity pulses to the target area in a minimally invasive way is essential for biomedical applications of LIOB, and we investigate this question in the final part of this thesis. We present two different approaches to this challenge: first, the transmission of pulses via a hollow-core photonic crystal fiber, and second, wavefront shaping of a pulse through a multicore fiber. Through both methods, we demonstrate subsurface ablation of biological tissue.
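The GW·cm⁻² regime quoted above follows from simple arithmetic: peak intensity is pulse energy divided by pulse duration and focal spot area. The numbers below are illustrative assumptions of ours, not values from the thesis.

```python
# Back-of-the-envelope check of the peak-intensity regime for LIOB.
import math

def peak_intensity_gw_cm2(energy_j, duration_s, spot_radius_cm):
    """Peak intensity in GW/cm^2: energy / duration / focal spot area."""
    area_cm2 = math.pi * spot_radius_cm ** 2
    return energy_j / duration_s / area_cm2 / 1e9

# e.g. a 100 nJ, 1 ps pulse focused to a 10 micrometre spot radius
# (10 um = 10e-4 cm); all three values are assumed for illustration:
intensity = peak_intensity_gw_cm2(100e-9, 1e-12, 10e-4)
```

Even this modest pulse energy yields tens of GW·cm⁻² at the focus, which is why sub-picosecond pulses reach breakdown thresholds that longer pulses of the same energy cannot.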
