1,627 research outputs found

    Optical Micromanipulation Techniques Combined with Microspectroscopic Methods

    Get PDF
    The subject of the presented Ph.D. thesis is the combination of optical micromanipulation with microspectroscopic methods. We used laser tweezers to transport and sort various living microorganisms, such as microalgal and yeast cells. We employed Raman microspectroscopy to analyze the chemical composition of individual cells and used this information to automatically select cells of interest. We combined pulse-amplitude-modulation fluorescence microspectroscopy, optical micromanipulation, and other techniques to map the stress response of optically trapped cells to various trapping-laser wavelengths, intensities, and exposure durations. We fabricated microfluidic chips of various designs and constructed a Raman-tweezers sorter of micro-objects, primarily living cells, on a microfluidic platform.
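    A minimal sketch of how Raman-band-based cell selection of this kind could be automated (Python; the band positions, threshold, and function names are illustrative assumptions, not the thesis's actual pipeline):

```python
import numpy as np

def band_intensity(wavenumbers, spectrum, center_cm1, half_width_cm1=10.0):
    """Integrate Raman intensity in a window around one band (hypothetical helper)."""
    mask = np.abs(wavenumbers - center_cm1) <= half_width_cm1
    return np.trapz(spectrum[mask], wavenumbers[mask])

def select_cell(wavenumbers, spectrum, threshold=2.0):
    """Decide whether a trapped cell should be sorted into the 'selected' outlet.

    Illustrative criterion only: ratio of the ~2850 cm^-1 CH2-stretch band
    (lipids) to the ~1003 cm^-1 phenylalanine band as a crude lipid-content score.
    """
    score = (band_intensity(wavenumbers, spectrum, 2850.0)
             / band_intensity(wavenumbers, spectrum, 1003.0))
    return score > threshold
```

    In a Raman-tweezers sorter, such a decision would then trigger the optical trap to steer the cell into the chosen microfluidic channel.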

    Image based analysis of visibility in smoke laden environments

    Get PDF
    This study investigates visibility in smoke-laden environments. For many years, researchers and engineers in fire safety have criticized the inadequacy of existing theory in describing the effects of factors such as colour, viewing angle, and environmental lighting on the visibility of an emergency sign. In the current study, the author raises the fundamental question of what visibility is and how it should be measured in fire safety engineering, and addresses the problem by redefining visibility based on the perceived image of a target sign. New algorithms were created to exploit modern hardware and software in simulating the human-perceived image of an object, both in experiments and in computer modelling. Unlike the traditional threshold of visual distance, visibility is defined here as a continuous function ranging from clearly discernible to completely invisible. This allows visibility to be compared under various conditions, not just at the threshold. The experiments revealed that different conditions may result in the same visual threshold yet follow very different paths on the way to that threshold. The new definition makes it possible to quantify visibility in pre-threshold conditions, which can help improve fire evacuation performance, since most evacuees experience pre-threshold conditions. With this measure of visibility, all the influential factors, such as colour and viewing angle, can be tested in experiments and simulated in numerical models.
    Based on the newly introduced definition of visibility, a set of experiments was carried out in a purpose-built smoke tunnel. Digital camera images of various illuminated signs were taken under different illumination, colour, and smoke conditions. Using an algorithm developed by the author, the digital camera images were converted into simulated human-perceived images, and the visibility of a target sign was measured against the quality of its acquired image. Conclusions were drawn by comparing visibility under different conditions. One of them is that signs illuminated with red and green light have similar visibility, far better than that with blue light; this is the first time this seemingly obvious conclusion has been quantified.
    For the simulation of visibility in participating media, the author introduces an algorithm that combines irradiance caching in 3D space with Monte Carlo ray tracing. It calculates the distribution of scattered radiation with good accuracy, without the high cost typically associated with the zonal method or the limitations of the discrete ordinates method. The algorithm is combined with a two-pass solution method to produce high-resolution images without requiring an excessive number of rays from the light source. The convergence of the implemented iterative solution procedure is proven theoretically, and the accuracy of the model is demonstrated by comparison with the analytical solution for a point radiant source in 3D space. Further validation was carried out by comparing the model predictions with data from the smoke tunnel experiments.
    The output of the simulation model is presented in the form of an innovative floor map of visibility (FMV). It helps the fire safety designer identify regions of poor visibility at a glance and should prove a very useful tool in performance-based fire safety design.
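    For context, the threshold-style description that the study argues is insufficient can be written in a few lines, next to a simple continuous contrast measure; the sketch below uses Jin's classical relation and plain Beer-Lambert attenuation (Python; coefficients and function names are illustrative and are not the author's image-based model):

```python
import math

def jin_visibility_m(extinction_coeff_per_m, k=8.0):
    """Classical threshold visibility distance S = k / Ks (Jin's relation).

    k is typically quoted as 5-10 for light-emitting signs; the result is a
    single threshold distance, the kind of description the thesis criticizes.
    """
    return k / extinction_coeff_per_m

def remaining_contrast(initial_contrast, extinction_coeff_per_m, path_length_m):
    """Continuous measure: sign contrast decays with optical depth (Beer-Lambert).

    Values near initial_contrast correspond to 'clearly discernible', values
    near zero to 'completely invisible', with the pre-threshold range in between.
    """
    return initial_contrast * math.exp(-extinction_coeff_per_m * path_length_m)

# Smoke with extinction coefficient Ks = 0.5 m^-1:
print(jin_visibility_m(0.5))               # ~16 m threshold estimate
print(remaining_contrast(1.0, 0.5, 10.0))  # fraction of contrast left at 10 m
```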

    Efficient and Fast Implementation of Embedded Time-of-Flight Ranging System Based on FPGAs

    Get PDF

    Resolving Measurement Errors Inherent with Time-of-Flight Range Imaging Cameras

    Get PDF
    Range imaging cameras measure the distance to objects in the camera's field of view (FoV); these cameras enable new machine vision applications in robotics, manufacturing, and human-computer interaction. Time-of-flight (ToF) range cameras operate by illuminating the scene with amplitude modulated continuous wave (AMCW) light and measuring the phase difference between the emitted and reflected modulation envelope. Currently, ToF range cameras suffer from measurement errors that are highly scene dependent, and these errors limit the accuracy of the depth measurement. The major cause of measurement errors is multiple propagation paths from the light source to a pixel, known as multi-path interference. Multi-path interference typically arises from inter-reflections, lens flare, subsurface scattering, volumetric scattering, and translucent objects. This thesis contributes three novel methods for resolving multi-path interference: coding in time, coding in frequency, and coding in space.
    Time coding is implemented by replacing the single-frequency amplitude modulation with a binary sequence. Fundamental to ToF range cameras is the cross-correlation between the reflected light and a reference signal, and the measured cross-correlation depends on the selection of the binary sequence. With an appropriate binary sequence and sparse deconvolution applied to the measured cross-correlation, the multiple return path lengths and their amplitudes can be recovered. However, the minimal resolvable path length depends on the highest frequency in the binary sequence.
    Frequency coding is implemented by taking multiple measurements at different modulation frequencies. A subset of frequency coding is operating the camera in a mode analogous to stepped frequency continuous wave (SFCW). Frequency coding uses techniques from radar to resolve multiple propagation paths. The minimal resolvable path length depends on the camera's modulation bandwidth and the spectrum estimation technique used to recover distance; it is shown that SFCW can measure the depth of objects behind a translucent sheet, while AMCW measurements cannot. Path lengths below a quarter of a wavelength of the highest modulation frequency are difficult to resolve.
    Spatial coding is used to resolve diffuse multi-path interference. The original technique comes from direct and global separation in computer graphics, and it is modified here to operate on the complex data produced by a ToF range camera. By illuminating the scene with a pattern, the illuminated areas contain the direct return plus scattering (the global return), while the non-illuminated regions contain only the scattering return, assuming the global component is spatially smooth. Direct and global separation with sinusoidal patterns is combined with the sinusoidal modulation signal of ToF range cameras to give a closed-form solution to multi-path interference in nine frames; with nine raw frames it is possible to implement direct and global separation at video frame rates. The RMSE of a corner is reduced from 0.0952 m to 0.0112 m. Direct and global separation correctly measures the depth of a diffuse corner and resolves subsurface scattering; however, it fails to resolve specular reflections. Finally, direct and global separation is combined with replacing the illumination and reference signals with a binary sequence. The combination resolves the diffuse multi-path interference present in a corner together with the sparse multi-path interference caused by mixed pixels between the foreground and background. The corner is correctly measured and the number of mixed pixels is reduced by 90%.
    With the development of new methods to resolve multi-path interference, ToF range cameras can measure scenes with more confidence. ToF range cameras can be built into small form factors, as they require only a small number of parts: a pixel array, a light source, and a lens. The small form factor, coupled with accurate range measurements, allows ToF range cameras to be embedded in cellphones and consumer electronic devices, enabling wider adoption and advantages over competing range imaging technologies.
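    A minimal sketch of the standard AMCW four-bucket depth calculation that these coding schemes build on (Python; it assumes an ideal single propagation path, and the exact sign convention of the arctangent varies between sensors):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def amcw_depth_m(a0, a1, a2, a3, mod_freq_hz):
    """Depth from four samples of the correlation waveform taken at reference
    phase offsets of 0, 90, 180 and 270 degrees (four-bucket scheme).

    A single return path is assumed; multi-path interference superimposes
    several such returns onto the same four samples, which is what the time,
    frequency, and spatial coding methods above are designed to separate.
    """
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)  # envelope phase shift
    return C * phase / (4 * math.pi * mod_freq_hz)        # phase -> object distance

# At 30 MHz modulation the unambiguous range is c / (2 f) ≈ 5 m:
print(amcw_depth_m(0.9, 0.4, 0.1, 0.6, 30e6))
```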

    Digital fabrication of custom interactive objects with rich materials

    Get PDF
    As ubiquitous computing is becoming reality, people interact with an increasing number of computer interfaces embedded in physical objects. Today, interaction with those objects largely relies on integrated touchscreens. In contrast, humans are capable of rich interaction with physical objects and their materials through sensory feedback and dexterous manipulation skills. However, developing physical user interfaces that offer versatile interaction and leverage these capabilities is challenging. It requires novel technologies for prototyping interfaces with custom interactivity that support the rich materials of everyday objects. Moreover, such technologies need to be accessible to empower a wide audience of researchers, makers, and users. This thesis investigates digital fabrication as a key technology to address these challenges. It contributes four novel design and fabrication approaches for interactive objects with rich materials. The contributions enable easy, accessible, and versatile design and fabrication of interactive objects with custom stretchability, input and output on complex geometries and diverse materials, tactile output on 3D object geometries, and the ability to change their shape and material properties. Together, the contributions of this thesis advance the fields of digital fabrication, rapid prototyping, and ubiquitous computing towards the bigger goal of exploring interactive objects with rich materials as a new generation of physical interfaces.