17 research outputs found

    An improved photometric stereo through distance estimation and light vector optimization from diffused maxima region

    Get PDF
    Although photometric stereo offers an attractive technique for acquiring 3D data using low-cost equipment, inherent limitations in the methodology have limited its practical application, particularly in measurement or metrology tasks. Here we address this issue. Traditional photometric stereo assumes that the lighting direction at every pixel is the same, which is not usually the case in real applications, especially where the size of the object being observed is comparable to the working distance. Such imperfections of the illumination can make the subsequent reconstruction procedures used to obtain the 3D shape of the scene prone to low-frequency geometric distortion and systematic error (bias). Moreover, the 3D reconstruction of the object yields a geometric shape with an unknown scale. To overcome these problems, a novel method of estimating the distance of the object from the camera is developed, which employs only the photometric stereo images without any additional imaging modality. The method first identifies the Lambertian diffuse maxima region to calculate the object's distance from the camera, from which the corrected per-pixel light vectors can be derived and the absolute dimensions of the object subsequently estimated. We also propose a new calibration process that allows a dynamic (as an object moves in the field of view) calculation of the light vectors for each pixel at little additional computational cost. Experiments performed on synthetic as well as real data demonstrate that the proposed approach offers improved performance, achieving a reduction in the estimated surface normal error of up to 45% and in the mean height error of the reconstructed surface of up to 6 mm. In addition, compared with traditional photometric stereo, the proposed method reduces the mean angular and height errors so that they remain low, constant and independent of the object's position within the normal working range.
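
    For context, the sketch below shows the classical Lambertian photometric stereo step that this work builds on: surface normals are recovered by least squares under the distant-light assumption (the same light direction at every pixel) that the abstract identifies as the source of bias. The function and variable names are illustrative, not taken from the paper, and the paper's per-pixel light-vector correction is not reproduced here.

```python
# Minimal sketch of classical Lambertian photometric stereo (the baseline the
# paper improves on), assuming the same light direction at every pixel.
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel unit normals and albedo from k images.

    images:     (k, h, w) grayscale images, one per light source.
    light_dirs: (k, 3) unit light directions, assumed identical for every
                pixel -- the distant-light approximation the paper corrects.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # stack pixels: (k, h*w)
    # Lambertian model: I = L @ G, where G = albedo * normal at each pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)           # normalise to unit length
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```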

    3D SEM Surface Reconstruction: An Optimized, Adaptive, and Intelligent Approach

    Get PDF
    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, including the biological, mechanical, and material sciences. The scanning electron microscope (SEM) is a promising imaging instrument for determining the surface properties (e.g., composition or geometry) of specimens, offering high magnification, high contrast, and resolution finer than one nanometer. However, SEM micrographs remain two-dimensional (2D), whereas many research and educational questions require knowledge of the specimens' three-dimensional (3D) surface structure. Reconstructing 3D surfaces from SEM images would provide the true anatomic shapes of micro-scale samples, allowing quantitative measurement and informative visualization of the systems being investigated. In this research project, we design and develop a novel optimized, adaptive, and intelligent multi-view approach, named 3DSEM++, for 3D surface reconstruction from SEM images, and make a 3D SEM dataset publicly and freely available to the research community. The work is expected to stimulate interest and draw attention from the computer vision and multimedia communities to the fast-growing SEM application area.

    Exploring space situational awareness using neuromorphic event-based cameras

    Get PDF
    The orbits around Earth are a limited natural resource, one that hosts a vast range of vital space-based systems supporting commercial industry, civil organisations, and national defence. The availability of this space resource is rapidly depleting due to the ever-growing presence of space debris and rampant overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic. Essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance and improve space situational awareness tasks. Solving this issue is critical to ensuring that we can continue to utilise the space environment in a sustainable way. However, these tasks pose significant engineering challenges involving the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or ‘Resident Space Objects’, at low data rates, at high temporal resolution, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to build event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and wide dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction in challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes of this thesis demonstrate the viability of event-based cameras for tracking and space imaging tasks and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.
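
    For context, the sketch below illustrates the asynchronous event data model referred to above: each pixel emits timestamped (t, x, y, polarity) events, which can be binned into short time windows for downstream detection. It is a generic illustration under assumed field names and sensor dimensions, not a reproduction of the thesis's tracking algorithms.

```python
# Generic sketch of binning an event-camera stream into count images; field
# names ('t', 'x', 'y', 'p'), sensor size, and window length are assumptions.
import numpy as np

def accumulate_events(events, width=640, height=480, window_us=10_000):
    """Bin a chronologically sorted event stream into per-window count images.

    events: structured array with fields 't' (microseconds), 'x', 'y', 'p'.
    Yields one (height, width) count image per time window; a bright or
    fast-moving target appears as a compact cluster of counts.
    """
    if len(events) == 0:
        return
    t0 = events['t'][0]
    edges = np.arange(t0, events['t'][-1] + window_us + 1, window_us)
    for start, stop in zip(edges[:-1], edges[1:]):
        sel = events[(events['t'] >= start) & (events['t'] < stop)]
        frame = np.zeros((height, width), dtype=np.int32)
        np.add.at(frame, (sel['y'], sel['x']), 1)   # count events per pixel
        yield frame
```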

    Innovative optical non-contact measurement of respiratory function using photometric stereo

    Get PDF
    Pulmonary function testing is common and widely used in today's clinical environment to assess lung function. The contact-based nature of a spirometer can cause breathing awareness that alters the breathing pattern, affects the amount of air inhaled and exhaled, and has hygiene implications. Spirometry also requires a high degree of compliance from the patient, who has to breathe through a hand-held mouthpiece. To address these issues, a non-contact, computer-vision-based system was developed for pulmonary function testing. It employs an improved photometric stereo method developed to recover local 3D surface orientation and so enable the calculation of breathing volumes. Although photometric stereo offers an attractive technique for acquiring 3D data using low-cost equipment, inherent limitations in the methodology have limited its practical application, particularly in measurement or metrology tasks. Traditional photometric stereo assumes that the lighting direction at every pixel is the same, which is not usually the case in real applications, especially where the size of the object being observed is comparable to the working distance. Such imperfections of the illumination can make the subsequent reconstruction procedures used to obtain the 3D shape of the scene prone to low-frequency geometric distortion and systematic error (bias). Moreover, the 3D reconstruction of the object yields a geometric shape with an unknown scale. To overcome these problems, a novel method of estimating the distance of the object from the camera was developed, which employs only the photometric stereo images without any additional imaging modality. The method first identifies the Lambertian diffuse maxima regions to calculate the object's distance from the camera, from which the corrected per-pixel light vectors are derived and the absolute dimensions of the object can be subsequently estimated. We also propose a new calibration process that allows a dynamic (as an object moves in the field of view) calculation of the light vectors for each pixel at little additional computational cost. Experiments performed on synthetic as well as real data demonstrate that the proposed approach offers improved performance, achieving a reduction in the estimated surface normal error of up to 45% and in the mean height error of the reconstructed surface of up to 6 mm. In addition, compared with traditional photometric stereo, the proposed method reduces the mean angular and height errors so that they remain low, constant and independent of the object's position within the normal working range. A high correlation (0.98) between the breathing volume calculated from photometric stereo and spirometer data was observed. This breathing volume is converted to an absolute amount of air using the distance information obtained from the Lambertian diffuse maxima region. The unique and novel feature of the system is that it views the patient from both the front and the back and creates a 3D structure of the whole torso. By observing the 3D structure of the torso over time, the amount of air inhaled and exhaled can be estimated.
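
    As a simple illustration of the final step described above, the sketch below turns a sequence of reconstructed torso height maps into a breathing-volume trace by summing per-pixel height changes over a torso mask and scaling by the physical pixel footprint (which requires the absolute scale recovered from the distance-estimation step). Names, units, and inputs are assumptions for illustration; this is not the thesis's exact pipeline.

```python
# Illustrative volume-from-surface sketch, assuming calibrated height maps in
# millimetres and a known per-pixel area; not the thesis's exact pipeline.
import numpy as np

def breathing_volume_litres(height_maps_mm, pixel_area_mm2, torso_mask):
    """Return chest-volume change per frame relative to the first frame.

    height_maps_mm: (n_frames, h, w) torso height maps in mm.
    pixel_area_mm2: physical area covered by one pixel (from the absolute
                    scale recovered by the distance-estimation step).
    torso_mask:     boolean (h, w) mask selecting torso pixels.
    """
    baseline = height_maps_mm[0]
    deltas = (height_maps_mm - baseline) * torso_mask        # mm per pixel
    volume_mm3 = deltas.reshape(len(height_maps_mm), -1).sum(axis=1) * pixel_area_mm2
    return volume_mm3 / 1e6                                  # mm^3 -> litres
```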

    Towards Casual Appearance Capture by Reflectance Symmetry

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Speaker comfort and increase of voice level in lecture rooms

    Get PDF

    Fine‐structure processing, frequency selectivity and speech perception in hearing‐impaired listeners

    Get PDF
