
    Object Search Strategy in Tracking Algorithms

    The demand for real-time video surveillance systems is increasing rapidly. These systems are used not only for surveillance but also for monitoring and controlling events. Today there are several real-time computer vision applications, based on image understanding, that emulate human vision and intelligence, and object tracking is one of their primary tasks. Object tracking refers to estimating the trajectory of an object of interest in a video. A tracking system is built on video processing algorithms, and video processing involves a huge amount of data; this constraint shapes how the algorithms can be implemented on any hardware. The problem becomes challenging due to unexpected motion of the object, changes in scene and object appearance, and non-rigid object structures; full and partial occlusions and camera motion pose further challenges. Current tracking algorithms treat tracking as a classification task and use online learning algorithms to update the object model. Here, we exploit the data redundancy in the sampling techniques to develop a highly structured kernel. This kernel acquires a circulant structure that is extremely easy to manipulate. We take this further by combining it with the mean shift density algorithm and Lucas-Kanade optical flow, which yields a substantial improvement in the results.
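    The circulant structure the abstract alludes to is the key trick behind correlation-filter trackers: a circulant data matrix is diagonalized by the DFT, so evaluating the filter against every cyclic shift of a sample costs one FFT pair instead of n dot products. A minimal 1D illustration (toy data, not the paper's implementation):

```python
import numpy as np

def circulant_responses(x, w):
    """Correlate filter w with every cyclic shift of sample x at once.

    The circulant matrix built from x is diagonalized by the DFT, so all
    n shift responses reduce to elementwise products in the Fourier
    domain plus one inverse FFT."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(w)))

# Brute-force check against explicitly shifted copies.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = rng.standard_normal(8)
fast = circulant_responses(x, w)
slow = np.array([np.dot(x, np.roll(w, -s)) for s in range(8)])
assert np.allclose(fast, slow)
```

For an n-sample window this replaces n dot products of length n (O(n^2)) with O(n log n) work, which is what makes dense sampling of shifted windows affordable in real time.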

    SEISMIC EXPRESSION OF IGNEOUS BODIES IN SEDIMENTARY BASINS AND THEIR IMPACT ON HYDROCARBON EXPLORATION: EXAMPLES FROM A COMPRESSIVE TECTONIC SETTING, TARANAKI BASIN, NEW ZEALAND

    The impact of Neogene volcanism on hydrocarbon exploration in the Taranaki Basin, New Zealand, remains under-explored. To better understand these effects, I performed detailed seismic interpretation coupled with examination of data from exploratory wells drilled into andesitic volcanoes. First, I show that igneous bodies can mimic the seismic expression of common sedimentary exploration targets such as bright spots, carbonate mounds and sinuous sand-prone channels, and that understanding the context of volcanic systems helps avoid misinterpreting them as something else. The clues that distinguish volcanoes from carbonate mounds in seismic data lie not in the mound-like reflectors themselves, but in features around and below these ambiguous facies: disrupted reflectors immediately below the volcanoes, and igneous sills forming forced folds nearby and beneath the volcanic edifices. Second, in good-quality seismic surveys, volcanic rocks of intermediate (andesitic) magma composition present distinctive patterns that are easy for machine learning to identify using a combination of seismic attributes that capture the continuity, amplitude and frequency of the reflectors at the same voxels. Clustering these seismic attributes using Self-Organizing Maps (SOM) allowed the identification of different architectural elements, such as lava flows, subaqueous landslides and pyroclastic flows, associated with the andesitic Kora volcano. Finally, 3D mapping of the Eocene, Miocene and Pleistocene strata in the Kora 3D seismic survey reveals that the andesitic volcanoes are capable of large-scale structural trapping (mega forced folds) in strata both predating and postdating the volcanism. These traps are four-way dip closures with the potential to store more than 1.0 billion barrels of oil if filled to spill point.
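    The SOM clustering of co-located seismic attributes can be sketched in a few lines. The sketch below is a minimal self-organizing map over random stand-in data (the attribute values, map size and learning schedule are invented for illustration, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for co-located attributes (continuity, amplitude,
# frequency) sampled at 500 voxels -- hypothetical values.
voxels = rng.standard_normal((500, 3))

# 4x4 map: each node has a grid position and a prototype vector.
grid = np.array([(i, j) for i in range(4) for j in range(4)], float)
weights = rng.standard_normal((16, 3))

for t in range(2000):
    x = voxels[rng.integers(len(voxels))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / 2000)                          # decaying learning rate
    sigma = 2.0 * (1 - t / 2000) + 0.1                 # shrinking neighbourhood
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))              # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)         # pull nodes toward x

# Each voxel is labelled by its nearest prototype -- these labels are
# the clusters interpreted as architectural elements.
labels = np.argmin(((voxels[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
```

The neighbourhood kernel is what distinguishes a SOM from plain k-means: nearby map nodes are updated together, so similar attribute combinations land on adjacent nodes and the map can be read as an ordered facies chart.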

    Service robotics and machine learning for close-range remote sensing

    The abstract is provided in the attachment.

    Collaborative SLAM using a swarm intelligence-inspired exploration method

    Master's thesis in Mechatronics (MAS500). Efficient exploration in multi-robot SLAM is a challenging task. This thesis describes the design of algorithms that would enable Loomo robots to collaboratively explore an unknown environment. A pose-graph-based SLAM algorithm using the on-board sensors of the Loomo was developed from scratch. A YOLOv3-tiny neural network was trained to recognize other Loomos, and an exploration simulation was developed to test exploration methods. The bots in the simulation are controlled using swarm intelligence-inspired rules. The system is not finished, and further work is needed to combine the contributions of the thesis into a collaborative SLAM system that runs on the Loomo robots.
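    A swarm intelligence-inspired exploration rule of the kind the simulation uses could be as simple as a separation behaviour: each bot is pushed away from its neighbours, so the swarm disperses over unexplored space. The following is a hypothetical sketch of such a rule, not the thesis's actual controller:

```python
import numpy as np

def exploration_step(positions, repulsion=1.0, step=0.1):
    """One update of a separation-style swarm rule: each bot moves away
    from the summed inverse-square push of its neighbours."""
    moves = np.zeros_like(positions)
    for i, p in enumerate(positions):
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = p - q
            moves[i] += repulsion * d / (d @ d + 1e-9)  # push away from bot j
    return positions + step * moves

# Three bots starting bunched together gradually spread out.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
spread_before = np.linalg.norm(pts[:, None] - pts[None], axis=-1).sum()
for _ in range(20):
    pts = exploration_step(pts)
spread_after = np.linalg.norm(pts[:, None] - pts[None], axis=-1).sum()
assert spread_after > spread_before  # the swarm disperses
```

In a full system each bot would combine such a rule with attraction toward map frontiers; the appeal of the approach is that coverage emerges from purely local interactions, with no central planner.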

    NaRPA: Navigation and Rendering Pipeline for Astronautics

    This paper presents the Navigation and Rendering Pipeline for Astronautics (NaRPA), a novel ray-tracing-based computer graphics engine to model and simulate light transport for space-borne imaging. NaRPA incorporates lighting models with attention to atmospheric and shading effects for the synthesis of space-to-space and ground-to-space virtual observations. In addition to image rendering, the engine also offers point cloud, depth, and contour map generation capabilities to simulate passive and active vision-based sensors and to facilitate the design, testing, and verification of visual navigation algorithms. The physically based rendering capabilities of NaRPA and the efficacy of the proposed rendering algorithm are demonstrated in representative space-based environments. A key demonstration uses NaRPA to generate stereo imagery for 3D coordinate estimation via triangulation. Another prominent application is a novel differentiable rendering approach for image-based attitude estimation, highlighting the efficacy of the NaRPA engine for simulating vision-based navigation and guidance operations. Comment: 49 pages, 22 figures
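    The stereo-triangulation demonstration rests on a standard linear (DLT) formulation: each pixel observation contributes two rows to a homogeneous system A X = 0, and the 3D point is the null-space direction of A. A minimal sketch with two synthetic pinhole cameras (the camera geometry below is invented, not NaRPA's):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: stack two rows per view from x ~ P X and take
    the smallest right singular vector as the homogeneous 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]          # null-space direction of A
    return X[:3] / X[3]                  # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two pinhole cameras a unit baseline apart along x (hypothetical rig).
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
assert np.allclose(X_hat, X_true, atol=1e-6)
```

With noiseless synthetic projections the recovery is exact to numerical precision; with rendered imagery the same least-squares machinery absorbs pixel noise in the smallest-singular-vector sense.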

    PRECONDITIONING AND THE APPLICATION OF CONVOLUTIONAL NEURAL NETWORKS TO CLASSIFY MOVING TARGETS IN SAR IMAGERY

    Synthetic Aperture Radar (SAR) uses transmitted pulses whose scene echoes are stored and combined to build an image representing the scene reflectivity. SAR systems can be found on a wide variety of platforms, including satellites, aircraft and, more recently, unmanned platforms like the Global Hawk unmanned aerial vehicle. The next step is to process, analyze and classify the SAR data. Using a convolutional neural network (CNN) to analyze SAR imagery is a viable method to achieve Automatic Target Recognition (ATR) in military applications. The CNN is an artificial neural network that uses convolutional layers to detect certain features in an image; these features correspond to a target of interest and train the CNN to recognize and classify future images. Moving targets present a major challenge to current SAR ATR methods due to the "smearing" effect in the image. Past research has shown that combining autofocus techniques with proper training on moving targets improves the accuracy of the CNN at target recognition. The current research includes improvement of the CNN algorithm and preconditioning techniques, as well as a deeper analysis of moving targets with complex motion such as changes to roll, pitch or yaw. The CNN algorithm was developed and verified using computer simulation. Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.
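    The core operation of a convolutional layer, sliding a small kernel over the image and responding where a learned feature appears, can be shown without any DL framework. Below is a minimal NumPy sketch on a toy "reflectivity" image with an invented bright scatterer; a real ATR pipeline would of course learn its kernels from labelled SAR chips rather than use a fixed edge filter:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation -- the operation a CNN
    convolutional layer applies with each learned kernel."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy scene: a bright rectangular scatterer on a dark background.
img = np.zeros((8, 8))
img[2:6, 4:8] = 1.0

# A vertical-edge kernel responds strongly at the scatterer boundary
# and not at all over the flat background.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
resp = conv2d(img, sobel_x)
assert resp.max() > 0            # edge of the target is detected
assert abs(resp[0, 0]) < 1e-12   # flat background gives no response
```

Stacking many such learned kernels, nonlinearities and pooling stages is what lets a CNN turn local responses like this into a target-class decision, and it is also why a smeared moving target degrades accuracy: the features the kernels were trained on are blurred out of shape.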

    Fast fluorescence lifetime imaging and sensing via deep learning

    Error on title page – year of award is 2023.Fluorescence lifetime imaging microscopy (FLIM) has become a valuable tool in diverse disciplines. This thesis presents deep learning (DL) approaches to addressing two major challenges in FLIM: slow and complex data analysis and the high photon budget for precisely quantifying the fluorescence lifetimes. DL's ability to extract high-dimensional features from data has revolutionized optical and biomedical imaging analysis. This thesis contributes several novel DL FLIM algorithms that significantly expand FLIM's scope. Firstly, a hardware-friendly pixel-wise DL algorithm is proposed for fast FLIM data analysis. The algorithm has a simple architecture yet can effectively resolve multi-exponential decay models. The calculation speed and accuracy outperform conventional methods significantly. Secondly, a DL algorithm is proposed to improve FLIM image spatial resolution, obtaining high-resolution (HR) fluorescence lifetime images from low-resolution (LR) images. A computational framework is developed to generate large-scale semi-synthetic FLIM datasets to address the challenge of the lack of sufficient high-quality FLIM datasets. This algorithm offers a practical approach to obtaining HR FLIM images quickly for FLIM systems. Thirdly, a DL algorithm is developed to analyze FLIM images with only a few photons per pixel, named Few-Photon Fluorescence Lifetime Imaging (FPFLI) algorithm. FPFLI uses spatial correlation and intensity information to robustly estimate the fluorescence lifetime images, pushing this photon budget to a record-low level of only a few photons per pixel. Finally, a time-resolved flow cytometry (TRFC) system is developed by integrating an advanced CMOS single-photon avalanche diode (SPAD) array and a DL processor. The SPAD array, using a parallel light detection scheme, shows an excellent photon-counting throughput. 
A quantized convolutional neural network (QCNN) algorithm is designed and implemented on a field-programmable gate array as an embedded processor. The processor resolves fluorescence lifetimes against disturbing noise, showing unparalleled high accuracy, fast analysis speed, and low power consumption.
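    For context on what pixel-wise lifetime estimation involves, a classical fast baseline is Rapid Lifetime Determination (RLD): split the decay histogram into two equal gates and take the log-ratio of their counts. The sketch below uses an invented noiseless mono-exponential decay (bin width and lifetime are illustrative); the thesis's DL estimators compete with this kind of baseline under far fewer photons:

```python
import numpy as np

def rld_lifetime(decay, dt):
    """Rapid Lifetime Determination for a mono-exponential decay.

    With two equal gates of counts D0 (early) and D1 (late),
    tau = gate_width / ln(D0 / D1)."""
    half = len(decay) // 2
    D0, D1 = decay[:half].sum(), decay[half:].sum()
    return half * dt / np.log(D0 / D1)

# Noiseless synthetic decay: tau = 2.5 ns, 256 bins of 0.039 ns each
# (hypothetical FLIM histogram for one pixel).
dt, tau = 0.039, 2.5
t = np.arange(256) * dt
decay = np.exp(-t / tau)
tau_hat = rld_lifetime(decay, dt)
assert abs(tau_hat - tau) < 0.01
```

RLD is exact for a clean mono-exponential but degrades quickly with shot noise and multi-exponential decays, which is precisely the regime where learned estimators that pool spatial and intensity information earn their keep.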

    Challenges and opportunities for quantifying roots and rhizosphere interactions through imaging and image analysis

    The morphology of roots and root systems influences the efficiency with which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and, ultimately, crop yield. The challenge for researchers interested in phenotyping root systems is therefore not just to measure roots and link their phenotype to the plant genotype, but also to understand how root growth is influenced by the environment. This review discusses progress in quantifying root system parameters (e.g. size, shape and dynamics) using imaging and image analysis technologies, and their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained, all of which affect downstream image analysis. While there have been significant advances in computational power, limitations still exist in the statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions.
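    The simplest root-quantification pipelines the review surveys reduce to segmentation followed by a measurement. A minimal threshold-and-measure sketch on a synthetic image (all pixel values, dimensions and the assumed root width are invented for illustration):

```python
import numpy as np

# Toy greyscale image: dark roots on a bright soil background.
img = np.full((64, 64), 200, dtype=np.uint8)
img[10:60, 30:33] = 40   # a 50-pixel-long, 3-pixel-wide primary root
img[30:33, 10:30] = 40   # a 20-pixel-long lateral root

# Segmentation by a global intensity threshold.
root_mask = img < 100

# Total root length approximated as segmented area / assumed width.
area = int(root_mask.sum())   # root pixel count: 150 + 60 = 210
width = 3                     # assumed mean root width in pixels
length_est = area / width
```

Real pipelines replace each step with something more robust, adaptive thresholding, skeletonization instead of an area/width proxy, but the structure (segment, then measure) and the trade-offs the review highlights (resolution versus throughput feeding into segmentation quality) are already visible in this toy version.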