
    STRUCTURE-FROM-MOTION FOR CALIBRATION OF A VEHICLE CAMERA SYSTEM WITH NON-OVERLAPPING FIELDS-OF-VIEW IN AN URBAN ENVIRONMENT

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment, taken with an external camera, are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. The two point clouds are tied to each other using images from both image sets that show the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range are achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate by one to ten centimeters from tachymeter reference measurements.
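
    As a rough illustration of the pose-estimation step, the sketch below recovers one vehicle camera pose from ground control points with OpenCV's RANSAC PnP solver. The intrinsics, the synthetic point cloud, and the noise level are assumed placeholder values, and the full pipeline (tying the two point clouds together, per-frame poses, bundle adjustment) is only indicated in comments.

```python
# Minimal sketch: estimate one vehicle camera pose from ground control
# points (GCPs) taken from a structure-from-motion point cloud.
# All numbers here are illustrative; the paper's pipeline also ties the
# interior and environment clouds together and jointly refines all
# per-frame poses in a bundle adjustment.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Assumed pinhole intrinsics of the (pre-calibrated) vehicle camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume lens distortion already corrected

# Synthetic "ground truth" camera pose in the point-cloud frame.
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([0.3, -0.1, 1.5])

# Ground control points from the SfM cloud (synthetic, 2-6 m ahead).
gcps = rng.uniform([-2.0, -1.0, 2.0], [2.0, 1.0, 6.0], size=(12, 3))

# Their projections in one video frame, with ~0.5 px observation noise.
proj, _ = cv2.projectPoints(gcps, rvec_true, tvec_true, K, dist)
image_points = proj.reshape(-1, 2) + rng.normal(0.0, 0.5, (12, 2))

# RANSAC PnP recovers this frame's camera pose from the GCPs; doing
# this per frame and refining all poses jointly corresponds to the
# bundle-adjustment step described in the abstract.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(gcps, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec   # camera centre in the cloud's frame
print("recovered camera position [m]:", camera_position.ravel())
```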

    Security event recognition for visual surveillance

    With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by many practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time. If anyone moves an object, the system verifies whether that person is its owner; if not, the event is further analyzed and distinguished between two different scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for these more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases for the related task of abandoned luggage detection. The experimental results show that the proposed approach outperforms the state-of-the-art methods and is effective in recognizing complex security events. © 2017 Copernicus GmbH. All rights reserved.
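
    The decision logic of such a framework can be pictured with a small sketch. The code below is a hedged toy version: the CNN detector is stubbed out, and the nearest-person ownership rule and the exit-zone test are assumptions for illustration, not the authors' implementation.

```python
# Toy sketch of the ownership-verification logic described above.
# Object detection and person identification are stubbed out; in the
# paper a CNN detector fills these roles. The nearest-person ownership
# rule and the exit-zone threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str       # "person" or "object"
    track_id: int
    x: float        # image-plane position of the track
    y: float

def nearest_person(obj, people):
    """Associate an object with the closest detected person."""
    return min(people, key=lambda p: (p.x - obj.x) ** 2 + (p.y - obj.y) ** 2)

owners = {}  # object track_id -> owner's track_id

def update(frame_detections, exit_x=600.0):
    """Process one frame's tracks and emit security events."""
    people = [d for d in frame_detections if d.kind == "person"]
    objects = [d for d in frame_detections if d.kind == "object"]
    events = []
    if not people:
        return events
    for obj in objects:
        if obj.track_id not in owners:
            # First appearance: the nearest person becomes the owner.
            owners[obj.track_id] = nearest_person(obj, people).track_id
            continue
        carrier = nearest_person(obj, people)
        if carrier.track_id != owners[obj.track_id]:
            # A non-owner is moving the object: distinguish the two
            # scenes by whether the object heads towards an exit.
            scene = "stealing" if obj.x > exit_x else "moving_away"
            events.append((scene, obj.track_id, carrier.track_id))
    return events

# Example: person 1 drops a bag, person 2 later carries it towards the exit.
print(update([Detection("person", 1, 100, 50), Detection("object", 7, 105, 55)]))
print(update([Detection("person", 2, 640, 60), Detection("object", 7, 650, 65)]))
```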

    Simulation Tools for Interpretation of High Resolution SAR Images of Urban Areas

    New powerful spaceborne sensors for monitoring urban areas have been designed and are ready for launch. More detailed SAR images will soon be available and, consequently, new tools for their interpretation are needed, above all when urban scenes are illuminated. In this paper, the authors propose some tools for the study and analysis of high resolution SAR images based on a SAR raw signal simulator for urban areas. By comparing simulated SAR images with real ones, the interpretation of SAR data is improved and the fundamental support provided by the employed tools is further assessed.
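
    One simple way to quantify the comparison between a simulated and a real SAR image is a zero-mean normalized cross-correlation on the amplitude data. The sketch below shows only this generic similarity measure; it is not the authors' simulator or their interpretation tools, and the arrays are synthetic speckle-like placeholders.

```python
# Hedged sketch: score the agreement between a simulated SAR amplitude
# image and a real one with zero-mean normalized cross-correlation.
# This is a generic similarity measure, not the paper's specific tools;
# both arrays below are synthetic stand-ins.
import numpy as np

def normalized_correlation(sim, real):
    """Zero-mean normalized cross-correlation between two images."""
    s = (sim - sim.mean()) / sim.std()
    r = (real - real.mean()) / real.std()
    return float((s * r).mean())

rng = np.random.default_rng(1)
simulated = rng.rayleigh(1.0, (256, 256))            # speckle-like amplitudes
real = simulated + rng.normal(0.0, 0.3, (256, 256))  # stand-in "real" image

print(f"similarity: {normalized_correlation(simulated, real):.3f}")
```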

    Building feature extraction via a deterministic approach: application to real high resolution SAR images

    Interpretation of high resolution SAR (synthetic aperture radar) images is still a hard task, especially when man-made objects crowd the observed scene. This paper contributes to the analysis of this kind of data by adopting a scattering-model-based approach for the retrieval of building heights from real SAR images and presenting first numerical results.
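
    For intuition about height retrieval from SAR geometry, the sketch below uses the classic shadow-length relation, a simpler alternative to the deterministic scattering-model inversion the paper adopts. The incidence angle and shadow length are assumed example values.

```python
# Illustrative sketch: building height from radar shadow length, the
# textbook geometric relation. This is a simpler alternative to the
# scattering-model inversion used in the paper; values are assumptions.
import math

def building_height(shadow_len_ground_m: float, incidence_deg: float) -> float:
    """Height from ground-range shadow length and incidence angle.

    A ray grazing the rooftop descends h metres while travelling
    h * tan(incidence) metres in ground range, so h = s / tan(theta).
    """
    return shadow_len_ground_m / math.tan(math.radians(incidence_deg))

# Example: a 20 m shadow at 35 degrees incidence (from nadir).
print(f"{building_height(20.0, 35.0):.1f} m")  # ~28.6 m
```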

    Differential modulation of corticospinal excitability during haptic sensing of 2-D patterns vs. textures

    Background: Recently, we showed a selective enhancement in corticospinal excitability when participants actively discriminated raised 2-D symbols with the index finger. This extra facilitation likely reflected activation in the premotor and dorsal prefrontal cortices modulating motor cortical activity during attention to haptic sensing. However, this parieto-frontal network appears to be finely modulated depending upon whether haptic sensing is directed towards material or geometric properties. To examine this issue, we contrasted changes in corticospinal excitability when young adults (n = 18) were engaged in either a roughness discrimination on two gratings with different spatial periods, or a 2-D pattern discrimination of the relative offset in the alignment of a row of small circles in the upward or downward direction.

    Results: A significant effect of task conditions was detected on motor evoked potential amplitudes, reflecting the observation that corticospinal facilitation was, on average, ~18% greater in the pattern discrimination than in the roughness discrimination.

    Conclusions: This differential modulation of corticospinal excitability during haptic sensing of 2-D patterns vs. roughness is consistent with the existence of preferred activation of a visuo-haptic cortical dorsal stream network, including frontal motor areas, during spatial vs. intensive processing of surface properties in the haptic system.
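
    The reported contrast can be pictured with a short sketch: peak-to-peak motor evoked potential (MEP) amplitudes are computed per condition, and facilitation is expressed as a percentage difference. The EMG sweeps below are synthetic placeholders, not the study's recordings.

```python
# Sketch of the comparison reported above: mean peak-to-peak MEP
# amplitude per task condition, then percent facilitation of pattern
# vs. roughness discrimination. All data are synthetic placeholders.
import numpy as np

def peak_to_peak(emg_trace):
    """Peak-to-peak MEP amplitude of one EMG sweep."""
    return emg_trace.max() - emg_trace.min()

rng = np.random.default_rng(2)
# Fake EMG sweeps (trials x samples) for the two task conditions; the
# pattern condition is scaled up to mimic greater facilitation.
pattern = rng.normal(0.0, 1.18, (18, 500))
roughness = rng.normal(0.0, 1.00, (18, 500))

mep_pattern = np.mean([peak_to_peak(t) for t in pattern])
mep_roughness = np.mean([peak_to_peak(t) for t in roughness])
facilitation = 100.0 * (mep_pattern - mep_roughness) / mep_roughness
print(f"pattern vs roughness facilitation: {facilitation:.1f}%")
```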

    Spatial Language Processing in the Blind: Evidence for a Supramodal Representation and Cortical Reorganization

    Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of representing spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. abstract codes representing spatial relations that would yield no activation differences between blind and sighted. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations that does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is the amount of functional reorganization during language processing in our blind participants. Therefore, the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task, functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial content in the first task, and no reorganization was observed in the SMG, these findings further support the notion that the left SMG is the main node of a supramodal representation of verbal spatial relations.

    Multisensory visual–tactile object related network in humans: insights gained using a novel crossmodal adaptation approach

    Neuroimaging techniques have provided ample evidence for multisensory integration in humans. However, it is not clear whether this integration occurs at the neuronal level or whether it reflects areal convergence without such integration. To examine this issue as regards visuo-tactile object integration, we used the repetition suppression effect, also known as the fMRI-based adaptation paradigm (fMR-A). Under some assumptions, fMR-A can tag specific neuronal populations within an area and investigate their characteristics. This technique has been used extensively in unisensory studies. Here we applied it for the first time to study multisensory integration and identified a network of occipital (LOtv and calcarine sulcus), parietal (aIPS), and prefrontal (precentral sulcus and the insula) areas, all showing a clear crossmodal repetition suppression effect. These results provide a crucial first insight into the neuronal basis of visuo-haptic integration of objects in humans and highlight the power of using fMR-A to study multisensory integration with non-invasive neuroimaging techniques.
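
    The logic of the fMR-A measure can be summarized in a few lines: if the same neuronal population responds across modalities, the response to a cross-modally repeated stimulus is suppressed relative to a novel one. The sketch below computes a simple adaptation index from assumed, synthetic per-trial response estimates.

```python
# Sketch of the repetition-suppression (fMR-A) logic: if one neuronal
# population encodes an object across vision and touch, the response to
# the second, cross-modally repeated presentation is suppressed.
# The beta values are synthetic placeholders, not the study's data.
import numpy as np

def adaptation_index(novel, repeated):
    """Fractional response suppression for repeated vs. novel stimuli."""
    return (np.mean(novel) - np.mean(repeated)) / np.mean(novel)

# Assumed per-trial GLM response estimates in one region (e.g. LOtv).
novel_betas = np.array([1.10, 0.95, 1.20, 1.05, 1.15])
repeated_betas = np.array([0.80, 0.70, 0.85, 0.75, 0.78])

ai = adaptation_index(novel_betas, repeated_betas)
print(f"crossmodal adaptation index: {ai:.2f}")  # > 0 implies suppression
```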