24 research outputs found

    Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array

    We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope, a low-cost, smartphone-based point-of-care microscope. We replace the single-LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.
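    The digital refocusing described above can be sketched as a shift-and-add computation: each image captured under a different illumination angle is laterally shifted in proportion to the target depth, then the stack is averaged, so features at that depth align while other depths blur out. A minimal NumPy illustration; the function names, synthetic stack, and pixel-unit geometry are our assumptions, not the paper's implementation:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by (dy, dx) pixels via the Fourier shift theorem."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * phase).real

def refocus(stack, angles_x, angles_y, z):
    """Shift-and-add refocusing of an angle-scanned image stack.

    An object at depth z appears displaced by ~z*tan(angle) pixels under
    tilted illumination; undoing that shift before averaging brings the
    plane at depth z into focus.
    """
    out = np.zeros(stack[0].shape)
    for img, ax, ay in zip(stack, angles_x, angles_y):
        out += fourier_shift(img, z * np.tan(ay), z * np.tan(ax))
    return out / len(stack)
```

Scanning `z` over a range of values yields a software focal stack from a single angle-scanned acquisition, which is the "software-only focus correction" the abstract refers to.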

    Imaging Pressure, Cells and Light Fields

    Imaging systems often make use of macroscopic lenses to manipulate light. Modern microfabrication techniques, however, have opened up a pathway to the development of novel arrayed imaging systems. In such systems, centimeter-scale areas can contain thousands to millions of micro-scale optical elements, presenting exciting opportunities for new imaging applications. We show two such applications in this thesis: pressure sensing in microfluidics and high-throughput fluorescence microscopy for high-content screening. Conversely, we show that arrayed elements are not always needed for three-dimensional light field imaging.

    3D Modelling from Real Data

    The genesis of a 3D model can follow two fundamentally different paths. First, there are CAD-generated models, where the shape is defined by a user's drawing actions, either operating with mathematical “bricks” such as B-splines, NURBS, or subdivision surfaces (mathematical CAD modelling), or directly drawing small planar polygonal facets in space to approximate complex free-form shapes (polygonal CAD modelling). This approach can be used both for ideal elements (a project, a fantasy shape in the mind of a designer, a 3D cartoon, etc.) and for real objects. In the latter case the object has to be surveyed first, in order to generate a drawing coherent with the real artefact. If the surveying process goes beyond a rough acquisition of simple distances with a substantial amount of manual drawing, a scene can be modelled in 3D by capturing many points of its geometrical features with a digital instrument and connecting them by polygons, producing a 3D result similar to a polygonal CAD model, with the difference that the generated shape is in this case an accurate 3D acquisition of a real object (reality-based polygonal modelling). Considering only devices operating on the ground, 3D capturing techniques for the generation of reality-based 3D models include passive sensors and image data (Remondino and El-Hakim, 2006), optical active sensors and range data (Blais, 2004; Shan & Toth, 2008; Vosselman and Maas, 2010), classical surveying (e.g. total stations or Global Navigation Satellite System - GNSS), 2D maps (Yin et al., 2009), or an integration of the aforementioned methods (Stumpfel et al., 2003; Guidi et al., 2003; Beraldin, 2004; Stamos et al., 2008; Guidi et al., 2009a; Remondino et al., 2009; Callieri et al., 2011). 
The choice depends on the required resolution and accuracy, object dimensions, location constraints, instrument portability and usability, surface characteristics, working-team experience, project budget, final goal, etc. Although the image-based approach has great potential, and automated dense image matching has recently advanced considerably, the easy usability and reliability of optical active sensors in acquiring 3D data is generally a good motivation for non-experts to decline image-based approaches. Moreover, the great advantage of active sensors is that they immediately deliver dense and detailed 3D point clouds whose coordinates are metrically defined. Image data, on the other hand, require some processing and a mathematical formulation to transform the two-dimensional image measurements into metric three-dimensional coordinates. Image-based modelling techniques (mainly photogrammetry and computer vision) are generally preferred in cases of monuments or architectures with regular geometric shapes, low-budget projects, good experience of the working team, or time and location constraints on data acquisition and processing. This chapter is intended as an updated review of reality-based 3D modelling in terrestrial applications, covering the different categories of 3D sensing devices and the related data processing pipelines.

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world, about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.

    Next generation Fourier ptychographic microscopy: computational and experimental techniques

    Fourier ptychography is a recently developed computational imaging technique which enables gigapixel image reconstruction from multiple low-resolution measurements. The technique can be implemented on simple, low-quality microscopes to achieve unprecedented image quality by exchanging optical design complexity for computational complexity. While developments have been made, demonstrations typically use well-calibrated, high-performance microscopes. Therefore, the real-world performance and true benefits of (low-cost) Fourier ptychography still need to be demonstrated in out-of-lab environments, where unforeseen problems are not unlikely. In this thesis, I demonstrate how to utilise Fourier ptychography in a fast, robust and cheap manner. Two experimental prototypes are introduced, one of them being an ultra-low-cost 3D-printed microscope capable of wide-field sub-micron resolution imaging. The other prototype was built to demonstrate high-speed gigapixel imaging, capable of 100-megapixel, 1 µm resolution image capture in under 3 seconds. Novel image formation models and their refinements were developed to correct the incomplete conventional model; these account for partial coherence of the illumination, deviation from the plane-wave assumption, and spatially varying aberrations. Lastly, the experimental work was heavily supplemented by novel calibration and reconstruction algorithms. The theoretical work outlined in this thesis enables the use of tilted, off-axis optical components, relaxing the typically assumed parallel-plane optical geometry. Optical precision requirements can also be relaxed thanks to the novel robust calibration algorithms. As a result, low-cost 3D-printed microscopes can be used.
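    At the heart of Fourier ptychography is aperture synthesis: each tilted-LED capture samples a different region of the object's Fourier spectrum through the objective's pupil, and the reconstruction stitches those regions into one wide synthetic aperture, hence the resolution gain. The sketch below illustrates only that stitching step, in an idealized setting where the complex field behind each pupil window is known and the windows tile the spectrum exactly; a real system records intensities only, so the iterative phase-retrieval loop (and the circular pupil, coherence, and aberration models the thesis develops) is deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
obj = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
spectrum = np.fft.fftshift(np.fft.fft2(obj))

# Each "LED angle" selects a different MxM window of the object spectrum,
# mimicking the pupil of a low-NA objective under tilted illumination.
M = 16
windows = [(0, 0), (0, 16), (16, 0), (16, 16)]
captures = [spectrum[r:r + M, c:c + M].copy() for r, c in windows]

# Stitching: paste every low-NA window back into one wide synthetic
# aperture, then invert. Full spectrum coverage -> full-resolution object.
synth = np.zeros_like(spectrum)
for (r, c), patch in zip(windows, captures):
    synth[r:r + M, c:c + M] = patch
recon = np.fft.ifft2(np.fft.ifftshift(synth))
```

In this noiseless, known-phase idealization the stitched reconstruction matches the object exactly; the practical difficulty, and the subject of the thesis, is recovering the missing phase and the calibration parameters from intensity-only, imperfect measurements.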

    Developing Advanced Photogrammetric Methods for Automated Rockfall Monitoring

    In recent years, photogrammetric models have become a widely used tool in the field of geosciences thanks to their ability to reproduce natural surfaces. As an alternative to other systems such as LiDAR (Light Detection and Ranging), photogrammetry makes it possible to obtain 3D point clouds at a lower cost and with a gentler learning curve. This combination has allowed the democratisation of this 3D model creation strategy. Rockfalls, on the other hand, are one of the geological phenomena that pose a risk to society: they are the most common natural phenomenon in mountainous areas and, given their great speed, their hazard is very high. This doctoral thesis deals with the creation of photogrammetric systems and processing algorithms for the automatic monitoring of rockfalls. To this end, three fixed-camera photogrammetric systems were designed and installed in two study areas. In addition, three different workflows have been developed: two aimed at obtaining higher-quality comparisons using photogrammetric models, and a third focused on automating the entire monitoring process with the aim of obtaining automatic monitoring systems of low temporal frequency. The photogrammetric RasPi system has been designed and installed in the study area of Puigcercós (Catalonia). This very low-cost system has been designed using Raspberry Pi cameras. Despite being a very low-cost and low-resolution system, the results obtained demonstrate its ability to identify rockfalls and pre-failure deformation. The HRCam photogrammetric system has also been designed and installed in the Puigcercós study area. This system uses commercial cameras and more complex control systems. With this system, higher-quality models have been obtained that enable better monitoring of rockfalls. Finally, the DSLR system has been designed similarly to the HRCam system but has been installed in a real risk area, the Tajo de San Pedro in the Alhambra (Andalusia). 
This system has been used to constantly monitor the rockfalls affecting this escarpment. In order to obtain 3D comparisons with the highest possible quality, two workflows have been developed. The first, called PCStacking, consists of stacking 3D models and computing the median of the Z coordinates of each point to generate a new averaged point cloud. This thesis shows the application of the algorithm both with ad hoc synthetic point clouds and with real point clouds. In both cases, the 25th and 75th percentile errors of the 3D comparisons were reduced: from 3.2 cm to 1.4 cm in the synthetic tests and from 1.5 cm to 0.5 cm under real conditions. The second workflow that has been developed is called MEMI (Multi-Epoch and Multi-Imagery). This workflow obtains photogrammetric comparisons of higher quality than those obtained with the classical workflow. The redundant use of images from the two periods being compared reduces the error by a factor of 2 compared to the classical approach, yielding a standard deviation of the 3D model comparison of 1.5 cm. Finally, the last workflow presented in this thesis is an update and automation of the RISKNAT research group's method for detecting rockfalls from point clouds. The update has been carried out with two objectives in mind: first, to port the entire working method to free licences (both the language and the software), and second, to include in the processing the new algorithms and improvements that have recently been developed. The automation of the method has been performed to cope with the large amount of data generated by photogrammetric systems. It consists of automating all the processes, meaning that everything from the capture of the image in the field to the detection of the rockfalls is performed automatically. This automation poses important challenges which, although not completely solved, are addressed in this thesis. 
Thanks to the creation of photogrammetric systems, 3D model improvement algorithms and automation of the rockfall identification workflow, this doctoral thesis presents a solid and innovative proposal in the field of low-cost automatic monitoring. The creation of these systems and algorithms constitutes a further step in the unimpeded expansion of monitoring and warning systems, whose ultimate goal is to enable us to live in a safer world and to build societies that are more resilient to geological hazards.
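    The PCStacking idea, as described, reduces per-point noise by stacking co-registered clouds from repeated acquisitions and taking the median of the Z coordinates. A minimal sketch under the simplifying assumption that every epoch's cloud shares the same, already-aligned XY sampling (the function name and data layout are ours):

```python
import numpy as np

def pc_stack(clouds):
    """PCStacking sketch: median-fuse co-registered point clouds.

    clouds: list of (N, 3) arrays with identical, aligned XY coordinates.
    Returns one (N, 3) cloud whose Z is the per-point median over epochs,
    which suppresses acquisition noise and occasional gross outliers.
    """
    stack = np.stack(clouds)                 # (epochs, N, 3)
    fused = stack[0].copy()
    fused[:, 2] = np.median(stack[:, :, 2], axis=0)
    return fused
```

The median, unlike the mean, is insensitive to the sporadic matching blunders that photogrammetric dense clouds contain, which is why stacking lowers the change-detection threshold in the comparisons described above.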

    Smart Nanoscopy: A Review of Computational Approaches to Achieve Super-Resolved Optical Microscopy

    The field of optical nanoscopy, a paradigm referring to the recent cutting-edge developments aimed at surpassing the widely acknowledged 200 nm diffraction limit of traditional optical microscopy, has gained prominence and traction in the 21st century. Numerous optical implementations opening a new frontier in traditional confocal laser scanning fluorescence microscopy (termed super-resolution fluorescence microscopy) have been realized through the development of techniques such as stimulated emission depletion (STED) microscopy, photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), amongst others. Nonetheless, it would be apt to mention at this juncture that optical nanoscopy has been explored since the mid-to-late 20th century through several computational techniques, such as deblurring and deconvolution algorithms. In this review, we take a step back in the field, evaluating the various in silico methods used to achieve optical nanoscopy today, ranging from traditional deconvolution algorithms (such as the nearest-neighbours algorithm) to the latest developments in the field of computational nanoscopy founded on artificial intelligence (AI). An insight is provided into some of the commercial applications of AI-based super-resolution imaging, prior to delving into the potentially promising future implications of computational nanoscopy. This is facilitated by recent advancements in the fields of AI, deep learning (DL) and convolutional neural network (CNN) architectures, coupled with the growing size of data sources and rapid improvements in computing hardware, such as multi-core CPUs and GPUs, low-latency RAM and large hard-drive capacities.
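    The nearest-neighbour deconvolution mentioned above is one of the oldest computational routes to sharper optical sections: each slice of a focal stack has blurred copies of its adjacent planes subtracted, removing much of the out-of-focus haze. A self-contained sketch with a Gaussian standing in for the defocus PSF; the imaging model, constants, and function names are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Circular convolution with a normalized Gaussian kernel (stand-in PSF)."""
    n = img.shape[0]
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    k /= k.sum()
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(k))).real

def nearest_neighbor_deblur(stack, sigma, c=0.5):
    """Nearest-neighbour deconvolution: subtract blurred adjacent planes.

    Each observed slice contains its own in-focus structure plus defocused
    light from neighbouring planes; g_k = f_k - c*(h*f_{k-1} + h*f_{k+1})
    approximately removes that out-of-focus contribution.
    """
    out = []
    for k, f in enumerate(stack):
        est = f.astype(float).copy()
        if k > 0:
            est -= c * gaussian_blur(stack[k - 1], sigma)
        if k < len(stack) - 1:
            est -= c * gaussian_blur(stack[k + 1], sigma)
        out.append(est)
    return out
```

Because the subtracted neighbours are themselves contaminated, the correction is only approximate; full constrained-iterative deconvolution and the learned methods the review surveys go further at higher computational cost.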

    Fast fluorescence lifetime imaging and sensing via deep learning

    Fluorescence lifetime imaging microscopy (FLIM) has become a valuable tool in diverse disciplines. This thesis presents deep learning (DL) approaches to addressing two major challenges in FLIM: slow and complex data analysis and the high photon budget for precisely quantifying the fluorescence lifetimes. DL's ability to extract high-dimensional features from data has revolutionized optical and biomedical imaging analysis. This thesis contributes several novel DL FLIM algorithms that significantly expand FLIM's scope. Firstly, a hardware-friendly pixel-wise DL algorithm is proposed for fast FLIM data analysis. The algorithm has a simple architecture yet can effectively resolve multi-exponential decay models. The calculation speed and accuracy outperform conventional methods significantly. Secondly, a DL algorithm is proposed to improve FLIM image spatial resolution, obtaining high-resolution (HR) fluorescence lifetime images from low-resolution (LR) images. A computational framework is developed to generate large-scale semi-synthetic FLIM datasets to address the challenge of the lack of sufficient high-quality FLIM datasets. This algorithm offers a practical approach to obtaining HR FLIM images quickly for FLIM systems. Thirdly, a DL algorithm is developed to analyze FLIM images with only a few photons per pixel, named the Few-Photon Fluorescence Lifetime Imaging (FPFLI) algorithm. FPFLI uses spatial correlation and intensity information to robustly estimate the fluorescence lifetime images, pushing the photon budget to a record-low level of only a few photons per pixel. Finally, a time-resolved flow cytometry (TRFC) system is developed by integrating an advanced CMOS single-photon avalanche diode (SPAD) array and a DL processor. The SPAD array, using a parallel light detection scheme, shows an excellent photon-counting throughput. A quantized convolutional neural network (QCNN) algorithm is designed and implemented on a field-programmable gate array as an embedded processor. The processor resolves fluorescence lifetimes against disturbing noise, showing unparalleled high accuracy, fast analysis speed, and low power consumption.
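    For context, the decay model underlying all of these analyses is a sum of exponentials, I(t) = sum_i a_i * exp(-t / tau_i), and a classical non-fitting baseline against which learned estimators are typically benchmarked is the centre-of-mass (mean photon arrival time) estimator. A hedged NumPy sketch; the function names and the numerical example are ours, not the thesis's algorithms:

```python
import numpy as np

def decay_histogram(t, amps, taus):
    """Multi-exponential fluorescence decay I(t) = sum_i a_i * exp(-t / tau_i)."""
    return sum(a * np.exp(-t / tau) for a, tau in zip(amps, taus))

def cmm_lifetime(t, counts):
    """Centre-of-mass lifetime: photon-weighted mean arrival time.

    Essentially unbiased for a mono-exponential decay when the time window
    is much longer than tau; multi-exponential decays need model fitting
    (or learned estimators) to separate the components.
    """
    return np.sum(t * counts) / np.sum(counts)
```

The estimator's breakdown on multi-exponential data, and its noisiness at a few photons per pixel, motivate the pixel-wise and few-photon DL approaches described in the abstract.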

    Assessing spring phenology of a temperate woodland : a multiscale comparison of ground, unmanned aerial vehicle and Landsat satellite observations

    Vegetation phenology is the study of the natural life cycle stages of plants. Plant phenological events are related to carbon, energy and water cycles within terrestrial ecosystems, operating from local to global scales. As plant phenological events are highly sensitive to climate fluctuations, the timing of these events has been used as an independent indicator of climate change. Monitoring forest phenology in a cost-effective manner, at a fine spatial scale and over relatively large areas, remains a significant challenge. To address this issue, unmanned aerial vehicles (UAVs) appear to be a potential new platform for forest phenology monitoring. The aim of this research is to assess the potential of UAV data to track the temporal dynamics of spring phenology, from the individual tree to the woodland scale, and to cross-compare UAV results against ground and satellite observations, in order to better understand the characteristics of UAV data and assess their potential for use in validating satellite-derived phenology. A time series of UAV data was acquired in tandem with an intensive ground campaign during the spring season of 2015 over Hanging Leaves Wood, Northumberland, UK. The radiometric quality of the UAV imagery acquired by two consumer-grade cameras was assessed in terms of the ability to retrieve reflectance and the Normalised Difference Vegetation Index (NDVI), and successfully validated against ground (0.84 ≤ R² ≤ 0.96) and Landsat (0.73 ≤ R² ≤ 0.89) measurements, but only NDVI resulted in stable time series. The start (SOS), middle (MOS) and end (EOS) of spring season dates were estimated at the individual-tree level using UAV time series of NDVI and the Green Chromatic Coordinate (GCC), with GCC resulting in a clearer and stronger seasonal signal at the tree crown scale. UAV-derived SOS could be predicted more accurately than MOS and EOS, with an accuracy of less than 1 week for deciduous woodland and within 2 weeks for evergreen. 
The UAV data were used to map phenological events for individual trees across the whole woodland, demonstrating that contrasting canopy phenological events can occur within the extent of a single Landsat pixel. This accounted for the poor relationships found between UAV- and Landsat-derived phenometrics (R² < 0.45) in this study. An opportunity is now available to track very fine-scale land surface changes over contiguous vegetation communities, information which could improve the characterization of vegetation phenology at multiple scales.
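    The two indices used above are simple band ratios, and phenological dates such as SOS and MOS are commonly read off the seasonal curve at fixed fractions of its amplitude. A small sketch of both steps; the 50%-amplitude midpoint convention, the linear interpolation, and all names are illustrative assumptions rather than the thesis's exact method:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def gcc(red, green, blue):
    """Green Chromatic Coordinate, an RGB greenness index for tree crowns."""
    return green / (red + green + blue)

def phenodate(days, series, frac):
    """Day of year where the seasonal curve first crosses
    min + frac * (max - min), by linear interpolation between samples."""
    thr = series.min() + frac * (series.max() - series.min())
    i = np.nonzero(series >= thr)[0][0]
    if i == 0:
        return days[0]
    d0, d1, v0, v1 = days[i - 1], days[i], series[i - 1], series[i]
    return d0 + (thr - v0) / (v1 - v0) * (d1 - d0)
```

Running `phenodate` with fractions such as 0.1, 0.5 and 0.9 on a per-crown NDVI or GCC series yields SOS-, MOS- and EOS-like dates that can then be compared across the ground, UAV and satellite scales.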