
    CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    The ability to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits in a wide field of applications. These CMOS devices are suitable replacements for high sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron-bombarded) at significantly lower cost and with comparable performance in low light or high speed scenarios. For example, with temporal resolution on the order of nanoseconds and picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High frame rate imaging of single photons can yield new capabilities in super-resolution microscopy. Also, the imaging of quantum effects such as the entanglement of photons may be realised. The goal of this research project is the development of such an image sensor by exploiting single photon avalanche diodes (SPAD) in advanced imaging-specific 130 nm front side illuminated (FSI) CMOS technology. SPADs have three key combined advantages over other imaging technologies: single photon sensitivity, picosecond temporal resolution and the facility to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8 μm and an optical efficiency or fill-factor of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global shutter single photon counting images are captured. These exhibit photon shot noise limited statistics with unprecedentedly low input-referred noise at an equivalent of 0.06 electrons. The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout and oversampled image formation are projected towards the formation of binary single photon imagers or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to the properties and performance of future QISs with 20,000 binary frames per second readout at a bit error rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, against exposure performance of this image sensor follows the shape of the famous Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8 cm precision over a 60 cm range.
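    The time-of-flight principle used above converts photon arrival time directly into distance. A minimal sketch of that conversion (the function names are illustrative, not taken from the sensor described):

    ```python
    C = 299_792_458.0  # speed of light in m/s

    def tof_distance(round_trip_seconds):
        """One-way distance from a round-trip photon time of flight."""
        return C * round_trip_seconds / 2.0

    def timing_precision_needed(distance_precision_m):
        """Round-trip timing precision required for a given depth precision."""
        return 2.0 * distance_precision_m / C
    ```

    At the 3.8 cm depth precision reported above, the required round-trip timing precision is roughly 2 × 0.038 / c ≈ 250 ps, which is why picosecond-scale temporal resolution matters for TOF imaging.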

    Machine vision detection of crop diseases

    The production of agricultural crops such as wheat is a multi-billion dollar industry. Each year, diseases such as fungal infections can potentially destroy the entire production in a region if the conditions are right. Traditional mitigation has relied on the grower's own judgement as the first option, followed by trained professionals for confirmation of diagnosis and advice. Often the onset of infection is rapid, and professionals may not always be available to assess the situation before the infection spreads. This reliance on individual judgement and off-site experts highlights a need for a reliable, automated software solution which can provide an accurate and immediate diagnosis at the first sign of infection. This research has produced two solutions: a system to help agronomists using professional camera equipment, and the outline for a mobile solution which can operate on a 'smart' device to provide growers with an on-hand diagnosis tool. By investigating the way the human mind and eye work, a software emulation of the human visual system was constructed, with artificial intelligence approaches used for final interpretation of the optical response. This use of artificial intelligence allows the design of a robust system which can 'self-learn' to recognise new disease samples. The research involved investigation of a number of camera and hardware options. Final system validation was conducted on both 'stock' disease images provided by agronomists and on actual plant samples, which proved that the system could function across a broad range of diseases and crops with accuracy between 95% and 99%. This research indicates that it is possible to develop tools which can give an immediate analysis at all stages of infection and be robust enough to work over a range of diseases and crops. Further development and refinement would provide a useful diagnosis tool for both growers and experts.

    Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    Background: The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomens upwards in a split second, producing Mexican wave-like patterns. Results: Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and the orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and away from the comb) of individual bees over time. Conclusions: The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. It can be applied to distinguish, at the level of individual bees, active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters.
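    For a rectified stereo pair, the triangulation step described above reduces to the classic depth-from-disparity relation z = f·B/d. A minimal sketch under that assumption (parameter names are illustrative, not from the authors' pipeline):

    ```python
    def triangulate_depth(focal_px, baseline_m, disparity_px):
        """Depth of a stereo-matched point from a rectified camera pair:
        z = focal length (pixels) * baseline (metres) / disparity (pixels)."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a visible point")
        return focal_px * baseline_m / disparity_px
    ```

    With a 1000 px focal length and a 10 cm baseline, a 50 px disparity corresponds to a depth of 2 m; tracking that depth across frames is what yields the dz motion component (towards and away from the comb) analysed above.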

    Trade-off analysis of modes of data handling for earth resources (ERS), volume 1

    Data handling requirements are reviewed for earth observation missions along with likely technology advances. Parametric techniques for synthesizing potential systems are developed. Major tasks include: (1) review of the sensors under development and extensions of or improvements in these sensors; (2) development of mission models for missions spanning land, ocean, and atmosphere observations; (3) summary of data handling requirements including the frequency of coverage, timeliness of dissemination, and geographic relationships between points of collection and points of dissemination; (4) review of data routing to establish ways of getting data from the collection point to the user; (5) on-board data processing; (6) communications link; and (7) ground data processing. A detailed synthesis of three specific missions is included.

    Bio-Inspired Motion Vision for Aerial Course Control


    Real-Time Computational Gigapixel Multi-Camera Systems

    Standard cameras are designed to faithfully mimic the human eye and visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system and the diffraction limit. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real-time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance compared to conventional cameras are presented in this thesis. Each camera system design and implementation is analyzed in detail, built, and tested in real-time conditions. Panoptic is a miniaturized low-cost multi-camera system that reconstructs a 360-degree view in real-time. Since it is an easily portable system, it provides the means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone. The second presented system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing in real-time camera systems. This thesis explains in detail how such a concept can be efficiently used in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas. The application scope of these cameras is vast. They can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence experiences. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking, and observation of multiple regions of interest, are extremely useful and desired capabilities of surveillance systems, in the security and defense industries and in the fast-growing industry of autonomous vehicles. On the other hand, high dynamic range imaging is becoming a common option in consumer-market cameras, and the presented method allows instantaneous capture of HDR videos. Finally, this thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and future predictions.
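    One core step in assembling a panorama from a multi-camera rig is deciding which camera best observes each output viewing direction. A minimal, hypothetical sketch of that selection (not the Panoptic implementation; all directions are assumed to be unit vectors):

    ```python
    def nearest_camera(view_dir, camera_axes):
        """Index of the camera whose optical axis (unit vector) is closest
        in angle to the viewing direction of an output panorama pixel."""
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        # With unit vectors, a larger dot product means a smaller angle.
        return max(range(len(camera_axes)),
                   key=lambda i: dot(view_dir, camera_axes[i]))
    ```

    Real systems typically blend contributions from several nearby cameras rather than picking a single one, which hides seams between cameras and enables the HDR combination of differently exposed views mentioned above.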

    3D Motion Analysis via Energy Minimization

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia: the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters elaborate on motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and a binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing them by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing a combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straightforward approach to computing the apparent image motion vector field. Furthermore, it currently yields some of the most accurate motion estimation results in the literature. Much as this is an engineering approach of fine-tuning precision to the last detail, it helps to gain better insight into the problem of motion estimation. This profoundly contributes to state-of-the-art research in motion analysis, in particular facilitating the use of motion estimation in a wide range of applications. In Chapter 5, scene flow is rethought. Scene flow stands for the three-dimensional motion vector field for every image pixel, computed from a stereo image sequence. Again, decoupling the commonly coupled estimation of three-dimensional position and three-dimensional motion yields an approach to scene flow estimation with more accurate results and a considerably lower computational load. It results in a dense scene flow field and enables additional applications based on the dense three-dimensional motion vector field, which are to be investigated in the future. One such application is the segmentation of moving objects in an image sequence. Detecting moving objects within the scene is one of the most important features to extract from image sequences of a dynamic environment. This is presented in Chapter 6. Scene flow and the segmentation of independently moving objects are only first steps towards machine visual kinesthesia. Throughout this work, I present possible future work to improve the estimation of optical flow and scene flow. Chapter 7 additionally presents an outlook on future research for driver assistance applications. But there is much more to the full understanding of the three-dimensional dynamic scene. This work is meant to inspire the reader to think outside the box and contribute to the vision of building perceiving machines.
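    The combined optical-flow energy mentioned above pairs a brightness-constancy data term with a smoothness term on the flow field, in the Horn and Schunck style: E(u, v) = Σ (Ix·u + Iy·v + It)² + λ·(|∇u|² + |∇v|²). A minimal sketch of evaluating this energy on a discrete grid (names illustrative; this is the classic coupled formulation, not the thesis's decoupled scheme):

    ```python
    import numpy as np

    def flow_energy(Ix, Iy, It, u, v, lam=0.1):
        """Combined optical-flow energy: a brightness-constancy data term
        plus a weighted smoothness term on the flow field (u, v)."""
        data = ((Ix * u + Iy * v + It) ** 2).sum()
        smooth = sum((g ** 2).sum()
                     for g in (*np.gradient(u), *np.gradient(v)))
        return data + lam * smooth
    ```

    Minimizing this coupled energy directly is the standard approach; the decoupling proposed in the thesis instead alternates between fitting the data term and enforcing smoothness, which is what yields the Refinement Optical Flow iteration.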

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, the advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Implementation of Sensors and Artificial Intelligence for Environmental Hazards Assessment in Urban, Agriculture and Forestry Systems

    The implementation of artificial intelligence (AI), together with robotics, sensors, sensor networks, the Internet of Things (IoT), and machine/deep learning modeling, has reached the forefront of research activities, moving towards the goal of increased efficiency in a multitude of applications and purposes related to the environmental sciences. The development and deployment of AI tools require specific considerations, approaches, and methodologies for their effective and accurate application. This Special Issue focused on applications of AI to environmental systems related to hazard assessment in urban, agricultural, and forestry areas.