
    Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications

    It is critical for an autonomous vehicle to acquire accurate, real-time information about the objects in its vicinity, so that the safety of the passengers and the vehicle can be guaranteed in various environments. 3D LIDAR can directly obtain the position and geometric structure of objects within its detection range, while a vision camera is well suited to object recognition. Accordingly, this paper presents a novel object detection and identification method that fuses the complementary information of the two kinds of sensors. We first use the 3D LIDAR data to generate accurate object-region proposals efficiently. These candidates are then mapped into the image space, where the regions of interest (ROIs) of the proposals are selected and fed to a convolutional neural network (CNN) for further object recognition. To identify objects of all sizes precisely, we combine the features of the last three layers of the CNN to extract multi-scale features of the ROIs. The evaluation results on the KITTI dataset demonstrate that: (1) unlike sliding windows, which produce thousands of candidate object-region proposals, 3D LIDAR provides an average of 86 real candidates per frame with a minimum recall rate above 95%, which greatly reduces the proposal extraction time; (2) the average processing time of the proposed method is only 66.79 ms per frame, which meets the real-time demand of autonomous vehicles; (3) the average identification accuracies of our method for cars and pedestrians at the moderate difficulty level are 89.04% and 78.18% respectively, which outperform most previous methods.
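
    As an illustration of the proposal-mapping step described in this abstract, the sketch below projects the corners of a LIDAR-derived 3D box into the image to obtain a 2D ROI for the CNN. It is a minimal sketch only: the KITTI-style calibration matrices (Tr_velo_to_cam, P2), the function name, and the axis-aligned clipping are assumptions for illustration, not the authors' published code.

        # Minimal sketch (not the paper's implementation): map a 3D LIDAR proposal
        # into the image plane to obtain the ROI passed to the CNN.
        # Calibration matrices follow the KITTI convention; names are illustrative.
        import numpy as np

        def project_box_to_roi(box_corners_velo, Tr_velo_to_cam, P2, img_w, img_h):
            """box_corners_velo: (8, 3) corners of one 3D proposal in LIDAR coordinates."""
            pts = np.hstack([box_corners_velo, np.ones((8, 1))])   # homogeneous (8, 4)
            cam = Tr_velo_to_cam @ pts.T                           # LIDAR -> camera frame (3, 8)
            cam = np.vstack([cam, np.ones((1, 8))])                # homogeneous (4, 8)
            img = P2 @ cam                                         # camera frame -> pixels (3, 8)
            uv = img[:2] / img[2:3]                                # perspective divide
            x1, y1 = uv.min(axis=1)                                # axis-aligned 2D box
            x2, y2 = uv.max(axis=1)
            x1, x2 = np.clip([x1, x2], 0, img_w - 1)               # clip ROI to the image
            y1, y2 = np.clip([y1, y2], 0, img_h - 1)
            return int(x1), int(y1), int(x2), int(y2)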

    Introduction to computed tomography


    Remote sensing in the coastal and marine environment. Proceedings of the US North Atlantic Regional Workshop

    Presentations were grouped into the following categories: (1) a technical orientation of Earth resources remote sensing, including data sources and processing; (2) a review of the present status of remote sensing technology applicable to the coastal and marine environment; (3) a description of data and information needs of selected coastal and marine activities; and (4) an outline of plans for marine monitoring systems for the east coast and a concept for an east coast remote sensing facility. Also discussed were user needs and remote sensing potentials in the areas of coastal processes and management, commercial and recreational fisheries, and marine physical processes.

    Photogrammetric suite to manage the survey workflow in challenging environments and conditions

    The present work aims to provide new and innovative instruments to support the photogrammetric survey workflow in all of its phases. A suite of tools has been conceived to manage the planning, acquisition, post-processing and restitution steps, with particular attention to the rigour of the approach and to the final precision. The main focus of the research has been the implementation of the tool MAGO, standing for Adaptive Mesh for Orthophoto Generation. Its novelty consists in the possibility of automatically reconstructing “unrolled” orthophotos of adjacent façades of a building using the point cloud, instead of the mesh, as the input source for the orthophoto reconstruction. The second tool is a photogrammetric procedure based on Bundle Block Adjustment. The same issue is analysed from two mirrored perspectives: on the one hand, the use of moving cameras in a static scenario to manage real-time indoor navigation; on the other hand, the use of static cameras in a moving scenario to achieve the simultaneous reconstruction of the 3D model of a changing object. A third tool named U.Ph.O., standing for Unmanned Photogrammetric Office, has been integrated with a new module. Its general aim is, on the one hand, to plan the photogrammetric survey considering the expected precision, computed on the basis of a network simulation, and, on the other hand, to check whether the achieved survey was collected in accordance with the planned conditions. The provided integration concerns the treatment of surfaces with a generic orientation, in addition to those with a planimetric development. After a brief introduction, the first chapter of the dissertation gives a general description of photogrammetric principles; a chapter follows on the parallelism between Photogrammetry and Computer Vision and the contribution of the latter to the development of the described tools. The third chapter covers the implemented software and tools, while the fourth contains the training tests and the validation. Finally, conclusions and future perspectives are reported.
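
    To make the idea of an “unrolled” orthophoto more concrete, the sketch below (a simplification under stated assumptions, not the MAGO code) flattens a coloured façade point cloud along a building footprint polyline: each point receives a horizontal coordinate equal to the distance travelled along the footprint and keeps its height, and the colours are splatted onto a regular grid. Function and parameter names are illustrative, and occlusion handling is omitted.

        # Minimal sketch (illustrative, not the MAGO implementation): unroll a façade
        # point cloud along a footprint polyline and splat colours into a raster.
        import numpy as np

        def unroll_facades(points_xyz, colors_rgb, footprint_xy, gsd=0.02):
            seg_a = footprint_xy[:-1]                  # segment start points (N, 2)
            seg_b = footprint_xy[1:]                   # segment end points (N, 2)
            seg_len = np.linalg.norm(seg_b - seg_a, axis=1)
            seg_off = np.concatenate([[0.0], np.cumsum(seg_len)[:-1]])   # unrolled offsets

            p = points_xyz[:, :2]
            ab = seg_b - seg_a
            # Project every point onto every footprint segment
            t = ((p[:, None, :] - seg_a) * ab).sum(-1) / (seg_len ** 2)
            t = np.clip(t, 0.0, 1.0)
            foot = seg_a + t[..., None] * ab                             # closest points (M, N, 2)
            dist = np.linalg.norm(p[:, None, :] - foot, axis=-1)
            k = dist.argmin(axis=1)                                      # nearest segment index

            u = seg_off[k] + t[np.arange(len(p)), k] * seg_len[k]        # unrolled abscissa
            v = points_xyz[:, 2]                                         # height stays vertical

            # Rasterise by nearest-point splatting at the chosen ground sampling distance
            cols = (u / gsd).astype(int)
            rows = ((v.max() - v) / gsd).astype(int)
            image = np.zeros((rows.max() + 1, cols.max() + 1, 3), dtype=np.uint8)
            image[rows, cols] = colors_rgb
            return image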

    Semi-Automated DIRSIG scene modeling from 3D lidar and passive imagery

    The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible to long wave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (lidar) models have also been incorporated into the software, providing an extremely powerful tool for multi-sensor algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects and background maps have to be created and attributed manually. To shorten the time required for this process, this research developed an approach to reduce the man-in-the-loop requirements for several aspects of synthetic scene construction. Through a fusion of 3D lidar data with passive imagery, we were able to semi-automate several of the required tasks in the DIRSIG scene creation process. Additionally, many of the remaining tasks realized a shortened implementation time through this application of multi-modal imagery. Lidar data is exploited to identify ground and object features as well as to define initial tree location and building parameter estimates. These estimates are then refined by analyzing high-resolution frame array imagery using the concepts of projective geometry in lieu of the more common Euclidean approach found in most traditional photogrammetric references. Spectral imagery is also used to assign material characteristics to the modeled geometric objects. This is achieved through a modified atmospheric compensation applied to raw hyperspectral imagery. These techniques have been successfully applied to imagery collected over the RIT campus and the greater Rochester area. The data used include multiple-return point information provided by an Optech lidar linescanning sensor, multispectral frame array imagery from the Wildfire Airborne Sensor Program (WASP) and WASP-lite sensors, and hyperspectral data from the Modular Imaging Spectrometer Instrument (MISI) and the COMPact Airborne Spectral Sensor (COMPASS). Information from these image sources was fused and processed using the semi-automated approach to provide the DIRSIG input files used to define a synthetic scene. When compared to the standard manual process for creating these files, we achieved approximately a tenfold increase in speed, as well as a significant increase in geometric accuracy.
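
    A large part of the time saving comes from letting the lidar data separate ground returns from above-ground objects before trees and buildings are extracted. The sketch below shows one standard way of doing this (a minimum-elevation grid filter); it is an illustration under assumed parameters, not necessarily the workflow implemented in this research.

        # Minimal sketch (a common approach, not necessarily the one used here):
        # classify lidar returns as ground or object by comparing each point to the
        # minimum elevation of its planimetric grid cell.
        import numpy as np

        def split_ground_objects(points_xyz, cell=1.0, height_thresh=0.5):
            xy = points_xyz[:, :2]
            z = points_xyz[:, 2]
            # Assign each point to a grid cell and build a single integer key per cell
            ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
            keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
            # Per-cell minimum elevation approximates the local terrain height
            order = np.argsort(keys)
            sorted_keys, first = np.unique(keys[order], return_index=True)
            cell_min = np.minimum.reduceat(z[order], first)
            ground_z = cell_min[np.searchsorted(sorted_keys, keys)]
            # Points close to the local minimum are ground; the rest are objects
            is_ground = (z - ground_z) < height_thresh
            return points_xyz[is_ground], points_xyz[~is_ground]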

    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have evolved greatly, providing efficient and effective solutions to cope with the variability and complexity of real-world environments. These achievements have led to the development of Machine Vision systems that go beyond typical industrial applications, where the environments are controlled and the tasks are very specific, towards innovative solutions that address people's everyday needs. Human-Centric Machine Vision can help solve the problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. In such applications it is necessary to handle changing, unpredictable and complex situations, and to take into account the presence of humans.

    Simulation-based Planning of Machine Vision Inspection Systems with an Application to Laser Triangulation

    Nowadays, vision systems play a central role in industrial inspection. Experts typically choose the measurement configuration of such systems empirically. For complex inspections, however, automatic inspection planning is essential. This book proposes a simulation-based approach to inspection planning, contributing to all components of the problem: simulation, evaluation, and optimization. As an application, the inspection of a complex cylinder head by laser triangulation is studied.
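
    For readers unfamiliar with the measurement principle, the sketch below shows the core geometry of laser triangulation as it is commonly formulated: the 3D point is recovered by intersecting the camera ray through the detected laser pixel with the known laser light plane. The camera model, plane parameterisation and numbers are illustrative assumptions, not taken from the book.

        # Minimal sketch (illustrative geometry only): laser triangulation as the
        # intersection of a camera viewing ray with a calibrated laser light plane.
        # Camera at the origin looking along +Z, focal length f in pixels; the laser
        # plane is given in Hesse normal form n . X = d in the camera frame.
        import numpy as np

        def triangulate_laser_point(u, v, cx, cy, f, plane_n, plane_d):
            ray = np.array([(u - cx) / f, (v - cy) / f, 1.0])   # ray through pixel (u, v)
            t = plane_d / np.dot(plane_n, ray)                  # intersect n . (t * ray) = d
            return t * ray                                      # 3D point in camera coordinates

        # Example: laser plane x = 0.2 m (n = [1, 0, 0], d = 0.2), spot detected at u = 1100
        point = triangulate_laser_point(u=1100, v=540, cx=960, cy=540, f=1400.0,
                                        plane_n=np.array([1.0, 0.0, 0.0]), plane_d=0.2)
        # -> approximately [0.2, 0.0, 2.0] m, i.e. a depth of about 2 m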

    Studies into the detection of buried objects (particularly optical fibres) in saturated sediment. Part 2: design and commissioning of test tank

    This report is the second in a series of five designed to investigate the detection of targets buried in saturated sediment, primarily through acoustical or acoustics-related methods. Although steel targets are included for comparison, the major interest is in targets (polyethylene cylinders and optical fibres) which have a poor acoustic impedance mismatch with the host sediment. This particular report details the construction of a laboratory-scale test facility, which consisted of three main components. Budget constraints were an over-riding consideration in the design.

    First, there is the design and production of a tank containing saturated sediment. The intention was that the physical and acoustical properties of the laboratory system should be similar to those found in a real seafloor environment. Particular consideration is given to those features of the test system which might affect the acoustic performance, such as reverberation, the presence of gas bubbles in the sediment, or a suspension of particles above it. Sound speed and attenuation were identified as critical parameters requiring particular attention; hence, these were investigated separately for each component of the acoustic path.

    Second, there is the design and production of a transducer system. The intention was that this would be suitable for an investigation into the non-invasive acoustic detection of buried objects. A focused reflector is considered to be the most cost-effective way of achieving a high acoustic power and narrow beamwidth. A comparison of different reflector sizes suggested that a larger aperture would result in less spherical aberration, thus producing a more uniform sound field. Diffraction effects are reduced by specifying a tolerance of much less than an acoustic wavelength over the reflector surface. The free-field performance of the transducers was found to be in agreement with the model prediction. Several parameters have been determined in this report that pertain to the acoustical characteristics of the water and sediment in the laboratory tank in the 10–100 kHz frequency range.

    Third, there is the design and production of an automated control system, developed to simplify the data acquisition process. This was, primarily, a motor-driven position control system which allowed the transducers to be accurately positioned in the two-dimensional plane above the sediment. Thus, it was possible for the combined signal generation, data acquisition and position control process to be coordinated from a central computer.

    This series of reports is written in support of the article “The detection by sonar of difficult targets (including centimetre-scale plastic objects and optical fibres) buried in saturated sediment” by T G Leighton and R C P Evans, written for a Special Issue of Applied Acoustics which contains articles on the topic of the detection of objects buried in marine sediment. Further support material can be found at http://www.isvr.soton.ac.uk/FDAG/uaua/target_in_sand.HTM
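
    The phrase "poor acoustic impedance mismatch" is the crux of why these targets are difficult: the normal-incidence reflection coefficient R = (Z2 - Z1) / (Z2 + Z1), with Z = rho * c, is small when the target impedance is close to that of the sediment. The short calculation below uses typical handbook values (approximate, chosen for illustration only, not measured in this work) to contrast a steel and a polyethylene target in water-saturated sand.

        # Minimal sketch with approximate, illustrative material values: the plane-wave
        # reflection coefficient shows why polyethylene reflects far less than steel
        # when buried in saturated sand.
        def impedance(rho, c):                 # characteristic acoustic impedance, Z = rho * c
            return rho * c

        Z_sand  = impedance(2000.0, 1700.0)    # water-saturated sand (approx.)
        Z_steel = impedance(7850.0, 5900.0)
        Z_pe    = impedance(950.0, 2400.0)     # polyethylene

        def reflection(Z1, Z2):                # normal-incidence reflection coefficient
            return (Z2 - Z1) / (Z2 + Z1)

        print(f"steel in sand:        R = {reflection(Z_sand, Z_steel):+.2f}")   # ~ +0.86
        print(f"polyethylene in sand: R = {reflection(Z_sand, Z_pe):+.2f}")      # ~ -0.20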

    On the popularization of digital close-range photogrammetry: a handbook for new users.

    National Technical University of Athens--Master's Thesis. Interdisciplinary-Interdepartmental Postgraduate Programme (D.P.M.S.) “Geoinformatics”