
    Leveraging External Sensor Data for Enhanced Space Situational Awareness

    Reliable Space Situational Awareness (SSA) is a recognized requirement in the current congested, contested, and competitive environment of space operations. A shortage of available sensors and reliable data sources is among the current limiting factors for maintaining SSA, and cost constraints prohibit drastically increasing the sensor inventory. Alternative methods are therefore sought to enhance current SSA, including the use of non-traditional data sources (external sensors) to perform basic SSA catalog maintenance functions. Astronomical imagery, for example, routinely captures serendipitous satellite streaks in the course of deep-space observing, but tactics, techniques, and procedures designed to glean useful information from those collects have yet to be rigorously developed. This work examines the feasibility and utility of performing ephemeris positional updates for a Resident Space Object (RSO) catalog using metric data obtained from RSO streaks gathered by astronomical telescopes. The focus is on processing data from three possible streak categories: streaks that only enter, only exit, or cross completely through the astronomical image. Successful use of this data will aid in resolving uncorrelated tracks, space object identification, and threat detection. Incorporating external data sources will also reduce the number of routine collects required by existing SSA sensors, freeing them for more demanding tasks. The results demonstrate that accurate orbital reconstruction can be performed using an RSO streak in a distorted image without applying calibration frames, and that partially bounded streaks provide results similar to traditional data, with a mean degradation of 6.2% in right ascension and 42.69% in declination. The methodology developed can also be applied to dedicated SSA sensors to extract data from serendipitous streaks gathered while observing other RSOs.
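The core metric-data step, converting streak endpoints in a plate-solved frame into right ascension and declination observations, can be sketched as follows. This is a minimal illustration assuming a purely linear WCS (CD matrix, no distortion terms); the function name and frame parameters are hypothetical, and a real pipeline would use a full plate solution with distortion handling:

```python
import numpy as np

def streak_endpoints_to_radec(p0, p1, crpix, crval, cd):
    """Map streak endpoint pixels to (RA, Dec) using a linear WCS
    approximation: sky = crval + CD @ (pixel - crpix).
    Valid only over small fields; distortion terms are ignored."""
    def to_sky(p):
        dra, ddec = cd @ (np.asarray(p, dtype=float) - crpix)
        return crval[0] + dra, crval[1] + ddec
    return to_sky(p0), to_sky(p1)

# Hypothetical 1000x1000 frame, 1 arcsec/pixel, centered at RA=150, Dec=20 deg.
crpix = np.array([500.0, 500.0])
crval = np.array([150.0, 20.0])
cd = np.array([[1 / 3600.0, 0.0], [0.0, 1 / 3600.0]])  # degrees per pixel
sky0, sky1 = streak_endpoints_to_radec((120, 480), (880, 530), crpix, crval, cd)
```

With both endpoints on the sky and the exposure midpoint time, the streak yields the angles-only observation pair that feeds the orbital reconstruction.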

    Generic camera calibration for omnifocus imaging, depth estimation and a train monitoring system

    Calibrating an imaging system for its geometric properties is an important step toward understanding the process of image formation and devising techniques to invert this process to decipher interesting properties of the imaged scene. In this dissertation, we propose new optically and physically motivated models for achieving state-of-the-art geometric and photometric camera calibration. The calibration parameters are then applied as input to new algorithms in omnifocus imaging, 3D scene depth from focus, and machine-vision-based intermodal freight train analysis. In the first part of this dissertation, we present new progress in camera calibration, with application to omnifocus imaging and 3D scene depth from focus, and in point spread function calibration. In camera calibration, we propose five new calibration methods for cameras whose imaging model can be represented by ideal perspective projection with small distortions due to lens shape (radial distortion) or a misaligned lens-sensor configuration (decentering). In the first method, we generalize the pupil-centric imaging model to handle arbitrarily rotated lens-sensor configurations, where the sensor tilt is about the physical optic axis. For such a setting, we derive an analytical solution to linear camera calibration based on the collinearity constraint relating known world points and measured image points, assuming no radial distortion. Our second method considers the much simpler case of a Gaussian thin-lens imaging model with a non-frontal image sensor and proposes an analytical solution to the linear calibration equations derived from the collinearity constraint. In the third method, we generalize the radial alignment constraint to non-frontal sensor configurations and derive an analytical solution to the resulting linear camera calibration equations.
In the fourth method, we propose the use of focal stack images of a known checkerboard scene to calibrate cameras with a non-frontal sensor. In the fifth method, we show that radial distortion results from the entrance pupil location changing as a function of the incident image rays, and we propose a collinearity-based camera calibration method under this imaging model. Based on this model, we propose a new focus measure for omnifocus imaging and apply it to compute 3D scene depth from focus. We then propose a point spread function (PSF) calibration method that computes the PSF of a CMOS image sensor using Hadamard patterns displayed on an LCD screen placed at a fixed distance from the sensor. In the second part of the dissertation, we describe a machine-vision-based train monitoring system, where we propose a motion-based background subtraction method to remove the background visible through the gaps of an intermodal freight train. The background-subtracted image frames are used to compute a panoramic mosaic of the train and to measure gap lengths in pixels. Converting the gap lengths to metric units using the calibration parameters of the video camera allows the fuel efficiency of the loading pattern of the given intermodal freight train to be analyzed.
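The depth-from-focus step can be illustrated with a generic focus measure. The sketch below uses the classic sum-modified-Laplacian as a stand-in (the dissertation proposes its own, different focus measure based on its entrance-pupil model); per pixel, the depth of the focal-stack slice that maximizes the measure is selected:

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure per pixel (a generic stand-in).
    Wrap-around at the image borders is ignored in this sketch."""
    lx = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
    ly = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
    return lx + ly

def depth_from_focus(stack, depths):
    """For each pixel, return the depth of the focal-stack slice with the
    maximal focus measure."""
    fm = np.stack([modified_laplacian(s) for s in stack])
    return np.asarray(depths)[np.argmax(fm, axis=0)]
```

In practice the per-slice focus responses are smoothed and interpolated between slices; the hard argmax here keeps the sketch short.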

    Smart environment monitoring through micro unmanned aerial vehicles

    In recent years, improvements in small-scale Unmanned Aerial Vehicles (UAVs) in terms of flight time, automatic control, and remote transmission have promoted the development of a wide range of practical applications. In aerial video surveillance, monitoring broad areas still poses many challenges, as several tasks must be achieved in real time, including mosaicking, change detection, and object detection. In this thesis work, a small-scale UAV-based vision system to maintain regular surveillance over target areas is proposed. The system works in two modes. The first mode monitors an area of interest over several flights. During the first flight, it creates an incremental geo-referenced mosaic of the area and classifies all known elements (e.g., persons) found on the ground using a previously trained, improved Faster R-CNN architecture. In subsequent reconnaissance flights, the system searches the mosaic for changes (e.g., the disappearance of persons) using a histogram-equalization and RGB-Local Binary Pattern (RGB-LBP) based algorithm; if changes are found, the mosaic is updated. The second mode performs real-time classification using the same improved Faster R-CNN model, which is useful for time-critical operations. Thanks to several design choices, the system works in real time and performs mosaicking and change detection at low altitude, allowing the classification even of small objects. The proposed system was tested on the whole set of challenging video sequences contained in the UAV Mosaicking and Change Detection (UMCD) dataset and on other public datasets. Evaluation with well-known performance metrics has shown remarkable results in mosaic creation and updating, as well as in change detection and object detection.
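The change-detection idea, comparing texture statistics between the stored mosaic and a newly acquired view of the same area, can be sketched with a basic 8-neighbour LBP on a single channel. This is an assumption-laden miniature of the RGB-LBP algorithm: the thesis additionally applies histogram equalization and works per RGB channel.

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbour Local Binary Pattern codes for the interior pixels."""
    c = gray[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)
    return code

def lbp_change_score(patch_a, patch_b):
    """Histogram-intersection dissimilarity between the LBP histograms of
    two co-located patches; values near 0 mean 'unchanged'."""
    ha = np.bincount(lbp_codes(patch_a).ravel(), minlength=256)
    hb = np.bincount(lbp_codes(patch_b).ravel(), minlength=256)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return 1.0 - np.minimum(ha, hb).sum()
```

Patches whose score exceeds a tuned threshold would be flagged as changed and trigger a mosaic update.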

    Developing Advanced Photogrammetric Methods for Automated Rockfall Monitoring

    In recent years, photogrammetric models have become a widely used tool in the geosciences thanks to their ability to reproduce natural surfaces. As an alternative to other systems such as LiDAR (Light Detection and Ranging), photogrammetry makes it possible to obtain 3D point clouds at a lower cost and with a gentler learning curve, a combination that has democratised this strategy for creating 3D models. Rockfalls, meanwhile, are among the geological phenomena that pose a risk to society: they are the most common natural hazard in mountainous areas and, given their great speed, their hazard is very high. This doctoral thesis deals with the creation of photogrammetric systems and processing algorithms for the automatic monitoring of rockfalls. To this end, three fixed-camera photogrammetric systems were designed and installed in two study areas. In addition, three different workflows have been developed: two aimed at obtaining higher-quality comparisons from photogrammetric models, and one focused on automating the entire monitoring process with the aim of obtaining automatic, low-temporal-frequency monitoring systems. The RasPi photogrammetric system was designed and installed in the Puigcercós study area (Catalonia). Built with Raspberry Pi cameras, it is a very low-cost, low-resolution system, yet the results obtained demonstrate its ability to identify rockfalls and pre-failure deformation. The HRCam photogrammetric system was also designed and installed in the Puigcercós study area. This system uses commercial cameras and more complex control systems, and has produced higher-quality models that enable better monitoring of rockfalls. Finally, the DSLR system was designed similarly to the HRCam system but installed in an area of real risk at the Tajo de San Pedro in the Alhambra (Andalusia).
This system has been used to constantly monitor the rockfalls affecting this escarpment. To obtain 3D comparisons of the highest possible quality, two workflows have been developed. The first, called PCStacking, stacks 3D models and computes the median of the Z coordinates of each point to generate a new, averaged point cloud. This thesis shows the application of the algorithm both to synthetic point clouds created ad hoc and to real point clouds: the 25th and 75th percentile errors of the 3D comparisons were reduced from 3.2 cm to 1.4 cm in the synthetic tests and from 1.5 cm to 0.5 cm under real conditions. The second workflow, called MEMI (Multi-Epoch and Multi-Imagery), obtains photogrammetric comparisons of higher quality than the classical workflow: the redundant use of images from the two epochs being compared reduces the error by a factor of 2 relative to the classical approach, yielding a standard deviation of the 3D model comparison of 1.5 cm. Finally, the last workflow presented in this thesis is an update and automation of the RISKNAT research group's method for detecting rockfalls from point clouds. The update pursued two objectives: first, to move the entire working method to free licences (both the language and the software), and second, to include in the processing the new algorithms and improvements developed recently. The automation of the method was carried out to cope with the large amount of data generated by photogrammetric systems: every step, from capturing the image in the field to detecting the rockfalls, is performed automatically. This automation poses important challenges which, although not completely solved, are addressed in this thesis.
Thanks to the creation of photogrammetric systems, 3D model improvement algorithms, and the automation of the rockfall identification workflow, this doctoral thesis presents a solid and innovative proposal in the field of low-cost automatic monitoring. The creation of these systems and algorithms constitutes a further step in the expansion of monitoring and warning systems, whose ultimate goal is to enable us to live in a safer world and to build societies more resilient to geological hazards.
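The PCStacking idea described above, taking the per-point median Z over a stack of models, reduces to a few lines once point correspondence is given. The sketch assumes the N clouds are already co-registered with one-to-one point correspondence, which the real workflow must establish first:

```python
import numpy as np

def pc_stack_median_z(clouds):
    """PCStacking core (sketch): given N co-registered point clouds with
    one-to-one point correspondence, replace each point's Z coordinate by
    the median Z across the stack, suppressing per-epoch photogrammetric
    noise. Registration and correspondence recovery are assumed done."""
    clouds = np.asarray(clouds, dtype=float)   # shape (N, P, 3)
    out = clouds[0].copy()
    out[:, 2] = np.median(clouds[:, :, 2], axis=0)
    return out
```

The median, unlike the mean, discards a single badly reconstructed epoch entirely, which is why stacking narrows the error percentiles of the subsequent 3D comparisons.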

    Efficient generation of occlusion-aware multispectral and thermographic point clouds

    The reconstruction of 3D point clouds from image datasets is a time-consuming task that has frequently been solved by performing photogrammetric techniques on every data source. This work presents an approach to efficiently build large and dense point clouds from co-acquired images. In our case study, the sensors co-acquire visible as well as thermal and multispectral imagery. Hence, RGB point clouds are reconstructed with traditional methods, whereas the remaining data sources, with lower resolution and fewer identifiable features, are projected onto the first one, i.e., the most complete and dense. To this end, the mapping process is accelerated using the Graphics Processing Unit (GPU) and multi-threading on the CPU (Central Processing Unit). Accurate colour aggregation at 3D points is guaranteed by taking into account the occlusion of foreground surfaces. Accordingly, our solution is shown to reconstruct much denser point clouds than notable commercial software such as Pix4Dmapper and Agisoft Metashape (286% more points on average) in much less time (70% less on average with respect to the best alternative). Funding: Spanish Ministry of Science, Innovation and Universities via a doctoral grant to the first author (FPU19/00100), and project TED2021-132120B-I00 funded by MCIN/AEI/10.13039/501100011033/ and ERDF funds "A way of doing Europe".
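The occlusion-aware projection at the heart of this approach can be sketched with a pinhole projection plus a z-buffer: a lower-resolution (e.g. thermal) image then colours only those RGB-cloud points that win the per-pixel depth test. This is a minimal CPU version under simplifying assumptions, ignoring lens distortion and the paper's GPU acceleration:

```python
import numpy as np

def project_with_zbuffer(points, K, width, height):
    """Project 3D points (camera coordinates, z > 0) through a pinhole
    model u = fx*x/z + cx, v = fy*y/z + cy. A z-buffer keeps only the
    nearest point per pixel, so occluded surface points receive no
    colour from this view."""
    z = points[:, 2]
    u = (K[0, 0] * points[:, 0] / z + K[0, 2]).astype(int)
    v = (K[1, 1] * points[:, 1] / z + K[1, 2]).astype(int)
    zbuf = np.full((height, width), np.inf)
    winner = np.full((height, width), -1, dtype=int)
    for i in np.argsort(z):                      # visit near-to-far
        if 0 <= u[i] < width and 0 <= v[i] < height and z[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = z[i]
            winner[v[i], u[i]] = i
    return winner   # per-pixel index of the visible point, or -1
```

The returned index map tells which cloud point each thermal or multispectral pixel may colour; in the paper this mapping is parallelised on the GPU for large clouds.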

    Depth from HDR: Depth Induction or Increased Realism?

    Many people who first see a high dynamic range (HDR) display get the impression that it is a 3D display, even though it does not produce any binocular depth cues. Possible explanations of this effect include contrast-based depth induction and the increased realism due to the high brightness and contrast that makes an HDR display "like looking through a window". In this paper we test both of these hypotheses by comparing the HDR depth illusion to real binocular depth cues using a carefully calibrated HDR stereoscope. We confirm that contrast-based depth induction exists, but it is a vanishingly weak depth cue compared to binocular depth cues. We also demonstrate that for some observers, the increased contrast of HDR displays indeed increases the realism. However, it is highly observer-dependent whether reduced, physically correct, or exaggerated contrast is perceived as most realistic, even in the presence of the real-world reference scene. Similarly, observers differ in whether reduced, physically correct, or exaggerated stereo 3D is perceived as more realistic. To accommodate the binocular depth perception and realism concept of most observers, display technologies must offer both HDR contrast and stereo personalization.

    Design of an Active Multispectral SWIR Camera System for Skin Detection and Face Verification

    Biometric face recognition is becoming more frequently used in different application scenarios. However, spoofing attacks with facial disguises are still a serious problem for state-of-the-art face recognition algorithms. This work proposes an approach to face verification based on spectral signatures of material surfaces in the short wave infrared (SWIR) range, which allow authentic human skin to be distinguished reliably from other materials, independent of skin type. We present the design of an active SWIR imaging system that acquires four-band multispectral image stacks in real time. The system uses pulsed small-band illumination, which allows for fast image acquisition and high spectral resolution and renders it largely independent of ambient light. After extracting spectral signatures from the acquired images, detected faces can be verified or rejected by classifying the material as "skin" or "no-skin". The approach is extensively evaluated with respect to both acquisition and classification performance. In addition, we present a database containing RGB and multispectral SWIR face images, as well as spectrometer measurements of a variety of subjects, which is used to evaluate our approach and will be made available to the research community by the time this work is published.
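As an illustration of deciding "skin" versus "no-skin" from SWIR band intensities, the sketch below thresholds a normalized difference of two bands. The band indices and threshold are placeholder assumptions; the actual system classifies complete four-band signatures, exploiting that skin appears comparatively dark beyond roughly 1400 nm due to water absorption:

```python
import numpy as np

def skin_mask(stack, band_lo=0, band_hi=2, thresh=0.2):
    """Per-pixel skin decision from a multi-band SWIR stack (H, W, bands)
    via a normalized difference of a short and a long SWIR band. Skin's
    water absorption makes the longer band darker, driving the index up.
    Band choice and threshold here are illustrative only."""
    lo = stack[..., band_lo].astype(float)
    hi = stack[..., band_hi].astype(float)
    nd = (lo - hi) / np.maximum(lo + hi, 1e-9)
    return nd > thresh
```

A full verifier would feed the whole four-band signature of each face pixel into a trained classifier rather than a single hand-set ratio.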

    DragonflEYE: a passive approach to aerial collision sensing

    This dissertation describes the design, development, and test of a passive wide-field optical aircraft collision sensing instrument titled "DragonflEYE". Such a "sense-and-avoid" instrument is desired for autonomous unmanned aerial systems operating in civilian airspace. The instrument was configured as a network of smart camera nodes and implemented using commercial, off-the-shelf components. An end-to-end imaging train model was developed and important figures of merit were derived. Transfer functions arising from intervening media were discussed and their impact assessed. Multiple prototypes were developed, and the expected performance of the instrument was iteratively evaluated on them, beginning with modeling activities followed by laboratory tests, ground tests, and flight tests. A prototype was mounted on a Bell 205 helicopter for flight tests, with a Bell 206 helicopter acting as the target. Raw imagery was recorded alongside ancillary aircraft data and stored for offline assessment of performance. The "range at first detection" (R0) is presented as a robust measure of sensor performance, based on a suitably defined signal-to-noise ratio. The analysis treats target radiance fluctuations, ground clutter, atmospheric effects, platform motion, and random noise elements. Under the measurement conditions, R0 exceeded flight crew acquisition ranges. Secondary figures of merit are also discussed, including time to impact, target size and growth, and the impact of resolution on detection range. The hardware was structured to facilitate a real-time hierarchical image-processing pipeline, and selected image processing techniques are introduced. In particular, the height of an observed event above the horizon compensates for angular motion of the helicopter platform.
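The "range at first detection" concept can be sketched as the largest range at which a per-frame signal-to-noise ratio clears a detection threshold. The signal model, noise figure, and threshold below are hypothetical stand-ins; the dissertation's SNR definition also folds in target radiance fluctuations, ground clutter, atmospheric effects, and platform motion:

```python
import numpy as np

def range_at_first_detection(ranges, signal, clutter_sigma, snr_min=6.0):
    """R0 sketch: the largest range whose per-frame SNR = signal / noise
    still clears the detection threshold. Here a fixed clutter standard
    deviation stands in for the full noise budget."""
    snr = np.asarray(signal, dtype=float) / clutter_sigma
    detectable = np.asarray(ranges, dtype=float)[snr >= snr_min]
    return detectable.max() if detectable.size else 0.0

# Hypothetical target whose signal falls off as 1/R^2 from 1e6 counts at 100 m.
ranges = np.linspace(100.0, 5000.0, 50)
signal = 1e6 * (100.0 / ranges) ** 2
r0 = range_at_first_detection(ranges, signal, clutter_sigma=2500.0, snr_min=6.0)
```

Sweeping the threshold or the noise budget in such a model shows how sensor resolution and clutter suppression trade off against detection range.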