26,825 research outputs found

    Monocular Vision as a Range Sensor

    Get PDF
    One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions; building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, so the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in every direction in the robot's field of view; the resulting range map is referred to as the Radial Obstacle Profile (ROP). The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
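
    The rotation estimate mentioned above can be sketched as a circular alignment of two profiles: find the shift that best overlays one ROP on the other. This is an illustrative reconstruction, not the authors' code; `rotation_between` and the sample profile are invented for the sketch.

```python
def rotation_between(rop_a, rop_b):
    """Estimate the rotation (in profile bins) between two Radial Obstacle
    Profiles by exhaustively testing circular shifts and keeping the one
    that minimizes the summed absolute range difference."""
    n = len(rop_a)
    best_shift, best_cost = 0, float("inf")
    for s in range(n):
        cost = sum(abs(rop_a[i] - rop_b[(i + s) % n]) for i in range(n))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A toy 8-bin profile rotated by 3 bins should be recovered exactly.
rop = [1.0, 2.0, 4.0, 3.0, 2.5, 1.5, 1.2, 0.8]
rotated = rop[-3:] + rop[:-3]
shift = rotation_between(rop, rotated)  # 3 bins -> 3 * (360 / 8) = 135 degrees
```

    A real ROP would have many more bins, and a robust implementation would interpolate between bins for sub-degree rotation estimates.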

    Fourier domain optical coherence tomography system with balance detection

    Get PDF
    A Fourier domain optical coherence tomography system with two spectrometers in balanced detection is assembled, each spectrometer using an InGaAs linear camera. Conditions and adjustments of spectrometer parameters are presented to ensure anti-phase channelled spectrum modulation across the two cameras for a majority of wavelengths within the optical source spectrum. By blocking the signal to one of the spectrometers, the setup was used to compare single-camera operation with the balanced configuration. Using multiple-layer samples, the balanced detection technique is compared with techniques applied to conventional single-camera setups, based on sequential subtraction of averaged spectra collected with different on/off settings for the sample or reference beams. In terms of reducing the autocorrelation terms and fixed-pattern noise, it is concluded that balanced detection performs better than single-camera techniques, is more tolerant to movement, exhibits longer-term stability and can operate dynamically in real time. The cameras used exhibit a larger saturation power than the power threshold above which excess photon noise exceeds shot noise; therefore, conditions to adjust the two cameras to reduce the noise in a balanced configuration are presented. It is shown that balanced detection can reduce the noise in real-time operation in comparison with single-camera configurations. However, simple subtraction of an averaged spectrum in single-camera configurations delivers less noise than balanced detection.
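
    The core of balanced detection can be illustrated with a toy model, assuming an ideal anti-phase adjustment (the names and numbers below are invented, not the paper's data): subtracting the two camera readouts cancels the common background while doubling the interference term.

```python
import math

def balanced_spectrum(spec_a, spec_b):
    """Balanced detection sketch: with the channelled-spectrum modulation in
    anti-phase on the two cameras, the difference cancels common-mode terms
    (DC background, autocorrelation) and doubles the interference signal."""
    return [a - b for a, b in zip(spec_a, spec_b)]

n = 256
background = [100.0] * n                                    # common-mode terms
modulation = [10.0 * math.cos(0.3 * k) for k in range(n)]   # interference term
cam_a = [b + m for b, m in zip(background, modulation)]     # in-phase camera
cam_b = [b - m for b, m in zip(background, modulation)]     # anti-phase camera
balanced = balanced_spectrum(cam_a, cam_b)  # 2x modulation, background removed
```

    In the real instrument the anti-phase condition only holds approximately across the source spectrum, which is why the abstract stresses the spectrometer adjustment procedure.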

    leave a trace - A People Tracking System Meets Anomaly Detection

    Full text link
    Video surveillance has always had a negative connotation, among other reasons because of the loss of privacy and because it does not automatically increase public safety. If it were able to detect atypical (i.e. dangerous) situations in real time, autonomously and anonymously, this could change. A prerequisite for this is reliable automatic detection of potentially dangerous situations from video data. This is done classically by object extraction and tracking. From the derived trajectories, we then want to identify dangerous situations by detecting atypical trajectories. However, for ethical reasons it is better to develop such a system on data in which no people are threatened or harmed, and in which they know that such a tracking system is installed. Another important point is that these situations do not occur very often in real, public CCTV areas, and are captured properly even less often. In the artistic project leave a trace, the tracked objects, people in the atrium of an institutional building, become actors and thus part of the installation. Real-time visualisation allows interaction by these actors, which in turn creates many atypical interaction situations on which we can develop our situation detection. The data set has evolved over three years and is hence huge. In this article we describe the tracking system and several approaches for the detection of atypical trajectories.
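
    As one hypothetical illustration of detecting atypical trajectories (the article describes several approaches; this particular score is our invention, not theirs), a trajectory can be flagged by its distance to the closest known-typical trajectory after resampling all trajectories to equal length:

```python
def anomaly_score(traj, normal_trajs):
    """Score a trajectory by its mean point-to-point distance to the
    closest 'normal' trajectory; larger scores mean more atypical.
    Assumes all trajectories are resampled to the same length."""
    def dist(a, b):
        return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(a, b)) / len(a)
    return min(dist(traj, t) for t in normal_trajs)

# Toy data: two typical straight walks, one far-off trajectory.
normal = [[(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]]
typical = anomaly_score([(0, 0), (1, 0), (2, 0)], normal)   # 0.0
atypical = anomaly_score([(0, 5), (1, 5), (2, 5)], normal)  # 4.0
```

    A threshold on this score would then separate typical from atypical behaviour; real systems use far richer trajectory models.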

    A Feasibility Study on the Use of a Structured Light Depth-Camera for Three-Dimensional Body Measurements of Dairy Cows in Free-Stall Barns

    Get PDF
    Frequent checks on livestock's body growth can help reduce problems related to cow infertility and other welfare implications, and help recognise health anomalies. In the last ten years, optical methods have been proposed to extract information on various parameters while avoiding direct contact with the animals' bodies, which generally causes stress. This research aims to evaluate a new monitoring system suitable for frequently checking calf and cow growth through a three-dimensional analysis of portions of their bodies. The innovative system is based on multiple acquisitions from a low-cost Structured Light Depth-Camera (Microsoft Kinect™ v1). The metrological performance of the instrument is proved through an uncertainty analysis and a proper calibration procedure. The paper reports the application of the depth camera for the extraction of different body parameters. An expanded uncertainty ranging between 3 and 15 mm is reported in the case of ten repeated measurements. Coefficients of determination R² > 0.84 and deviations lower than 6% from manual measurements were in general detected in the case of head size, hips distance, withers-to-tail length, chest girth, and hips and withers height. Conversely, lower performances were recognised in the case of animal depth (R² = 0.74) and back slope (R² = 0.12).
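
    The reported R² values compare depth-camera estimates against manual reference measurements. The coefficient of determination can be computed as follows; the sample data below are invented placeholders, not the study's measurements.

```python
def r_squared(measured, reference):
    """Coefficient of determination R^2 of camera estimates against manual
    reference values: 1 minus the ratio of residual to total sum of squares."""
    mean_ref = sum(reference) / len(reference)
    ss_res = sum((r - m) ** 2 for m, r in zip(measured, reference))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

# Hypothetical body-measurement pairs in mm (manual tape vs. depth camera).
manual = [100.0, 120.0, 140.0, 160.0]
camera = [101.0, 119.0, 142.0, 158.0]
r2 = r_squared(camera, manual)  # 1 - 10/2000 = 0.995
```

    Per-parameter percent deviation from the manual value would be computed analogously, matching the under-6% figure quoted for the well-performing parameters.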

    Performance of ePix10K, a high dynamic range, gain auto-ranging pixel detector for FELs

    Full text link
    ePix10K is a hybrid pixel detector developed at SLAC for demanding free-electron laser (FEL) applications, providing an ultrahigh dynamic range (245 eV to 88 MeV) through gain auto-ranging. It has three gain modes (high, medium and low) and two auto-ranging modes (high-to-low and medium-to-low). The first ePix10K cameras are built around modules consisting of a sensor flip-chip bonded to 4 ASICs, resulting in 352×384 pixels of 100 ÎŒm × 100 ÎŒm each. We present results from extensive testing of three ePix10K cameras with FEL beams at LCLS, yielding a measured noise floor of 245 eV rms, or 67 e⁻ equivalent noise charge (ENC), and a range of 11000 photons at 8 keV. We demonstrate the linearity of the response in various gain combinations: fixed high, fixed medium, fixed low, auto-ranging high-to-low, and auto-ranging medium-to-low, while maintaining low noise (well within the counting statistics), very low cross-talk, perfect saturation response at fluxes up to 900 times the maximum range, and acquisition rates of up to 480 Hz. Finally, we present examples of high dynamic range X-ray imaging spanning more than 4 orders of magnitude (from a single photon to 11000 photons/pixel/pulse at 8 keV). Achieving this high performance with only one auto-ranging switch leads to relatively simple calibration and reconstruction procedures. The low noise levels allow use with long integration times at non-FEL sources. ePix10K cameras leverage the advantages of hybrid pixel detectors with high production yield and good availability, minimize development complexity by sharing hardware, software and DAQ development with all other versions of ePix cameras, and provide an upgrade path to 5 kHz, 25 kHz and 100 kHz in three steps over the next few years, matching the LCLS-II requirements. Comment: 9 pages, 5 figures.
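
    The "relatively simple calibration and reconstruction" enabled by a single auto-ranging switch can be sketched as follows: each pixel readout carries a gain flag, and the energy is recovered using the pedestal and conversion gain of the range actually used. The calibration constants below are invented placeholders, not ePix10K values.

```python
# Illustrative per-range calibration constants (NOT real ePix10K numbers).
PEDESTAL = {"high": 100.0, "low": 120.0}     # ADU offset of each gain range
EV_PER_ADU = {"high": 50.0, "low": 5000.0}   # conversion gain of each range

def reconstruct(adc, gain_flag):
    """Recover deposited energy (eV) from one raw readout: the gain flag
    recorded with the sample selects which pedestal and conversion gain
    to apply. With one auto-ranging switch there are only two cases."""
    return (adc - PEDESTAL[gain_flag]) * EV_PER_ADU[gain_flag]

small = reconstruct(260.0, "high")   # (260 - 100) * 50   = 8000 eV (one 8 keV photon)
large = reconstruct(1720.0, "low")   # (1720 - 120) * 5000 = 8 MeV (1000 such photons)
```

    The design choice the abstract highlights is that only one switch point must be cross-calibrated, instead of two as in a three-range auto-switching scheme.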

    PCA-RECT: An Energy-efficient Object Detection Approach for Event Cameras

    Full text link
    We present the first purely event-based, energy-efficient approach for object detection and categorization using an event camera. Compared to traditional frame-based cameras, event cameras offer high temporal resolution (of the order of microseconds), low power consumption (a few hundred mW) and a wide dynamic range (120 dB). However, event-based object recognition systems are far behind their frame-based counterparts in terms of accuracy. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching that takes advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional dictionary representation when hardware resources are too limited to implement dimensionality reduction. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device with a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing classification performance superior to state-of-the-art algorithms. Additionally, we verified the object detection method and real-time FPGA performance in lab settings under non-controlled illumination conditions with limited training data and ground truth annotations. Comment: Accepted in ACCV 2018 Workshops, to appear.
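
    The "backtracking-free" k-d tree idea can be sketched in a few lines: the query descends greedily to a single leaf and never backtracks, trading exactness for a bounded, hardware-friendly lookup cost. This toy code is our illustration of the general technique, not the authors' FPGA implementation.

```python
def build_kdtree(points, depth=0):
    """Standard k-d tree construction: split on alternating axes
    at the median point. Points are tuples of equal dimension."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def greedy_match(node, query):
    """Backtracking-free lookup: walk one root-to-leaf path, keeping the
    closest point seen. Cost is bounded by tree depth, but the result is
    only approximate - the trade-off exploited for constant-cost matching."""
    best = None
    while node is not None:
        p, axis = node["point"], node["axis"]
        d = sum((a - b) ** 2 for a, b in zip(p, query))
        if best is None or d < best[0]:
            best = (d, p)
        node = node["left"] if query[axis] < p[axis] else node["right"]
    return best[1]

# Toy 2-D "descriptors"; a real dictionary would hold PCA feature vectors.
points = [(0.0, 0.0), (5.0, 5.0), (10.0, 10.0), (0.0, 10.0), (10.0, 0.0)]
match = greedy_match(build_kdtree(points), (4.9, 5.1))  # nearest is (5.0, 5.0)
```

    Because no branches are revisited, the same fixed traversal depth applies to every query, which is what makes the scheme attractive for an FPGA pipeline.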