
    FUSION OF 3D POINT CLOUDS WITH TIR IMAGES FOR INDOOR SCENE RECONSTRUCTION

    Obtaining accurate 3D descriptions in the thermal infrared (TIR) is a challenging task due to the low geometric resolution of TIR cameras and the small number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of the limitations in 3D geometric accuracy. In the case of dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras and profile laserscanners is suitable. As a laserscanner is an active sensor in the visible red or near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This contribution focuses on the fusion of point clouds from terrestrial laserscanners and RGB cameras with images from a thermal infrared camera, all mounted together on a robot for indoor 3D reconstruction. The system is geometrically calibrated, including the lever arm between the different sensors. As the sensors have different fields of view, they do not record the same scene points at exactly the same time. Thus, the 3D scene points of the laserscanner and the photogrammetric point cloud from the RGB camera have to be synchronized before the point clouds are fused and the thermal channel is added to the 3D points.
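    The last step, adding the thermal channel to the 3D points, can be illustrated with a minimal sketch: assuming a simple pinhole model for the TIR camera, the intrinsics K and extrinsics R, t below stand in for the calibrated lever arm/boresight and are placeholders, not values or code from the paper.

```python
# Illustrative sketch (not the authors' implementation): project fused 3D points
# into a calibrated TIR image and append the thermal value as a fourth channel.
import numpy as np

def add_thermal_channel(points_xyz, tir_image, K, R, t):
    """points_xyz: (N, 3) scene points; tir_image: (H, W) thermal image;
    K, R, t: assumed pinhole intrinsics and extrinsics of the TIR camera."""
    cam = (R @ points_xyz.T + t.reshape(3, 1)).T     # (N, 3) points in the TIR camera frame
    thermal = np.full(len(points_xyz), np.nan)       # NaN where no thermal value is available
    front = cam[:, 2] > 0                            # keep points in front of the camera
    uvw = K @ cam[front].T                           # homogeneous pixel coordinates
    uv = (uvw[:2] / uvw[2]).T                        # (M, 2) pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = tir_image.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(front)[inside]
    thermal[idx] = tir_image[v[inside], u[inside]]
    return np.column_stack([points_xyz, thermal])    # (N, 4): XYZ plus thermal channel
```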

    Recursive Cluster Elimination Based Support Vector Machine for Disease State Prediction Using Resting State Functional and Effective Brain Connectivity

    Brain state classification has been accomplished using features such as voxel intensities, derived from functional magnetic resonance imaging (fMRI) data, as inputs to efficient classifiers such as support vector machines (SVM), and is based on the spatial localization model of brain function. With the advent of the connectionist model of brain function, features derived from brain networks may provide increased discriminatory power for brain state classification. In this study, we introduce a novel framework wherein both functional connectivity (FC), based on instantaneous temporal correlation, and effective connectivity (EC), based on causal influence in brain networks, are used as features in an SVM classifier. To derive these features, we adopt our recently introduced approach, correlation-purged Granger causality (CPGC), which obtains both FC and EC from fMRI data simultaneously without the instantaneous correlation contaminating Granger causality. In addition, statistical learning is accelerated and classification accuracy is enhanced by combining the recursive cluster elimination (RCE) algorithm with the SVM classifier. We demonstrate the efficacy of the CPGC-based RCE-SVM approach using a specific instance of brain state classification exemplified by disease state prediction. Accordingly, we show that this approach is capable of predicting with 90.3% accuracy whether a given human subject was prenatally exposed to cocaine, even when no significant behavioral differences were found between exposed and healthy subjects. The framework adopted in this work is quite general in nature, with prenatal cocaine exposure being only an illustrative example of the power of this approach. In any brain state classification approach using neuroimaging data, including directional connectivity information may prove to be a performance enhancer. When brain state classification is used for disease state prediction, our approach may aid clinicians in performing more accurate diagnoses in situations in which non-neuroimaging biomarkers are unable to perform differential diagnosis with certainty.
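    As a rough illustration of how recursive cluster elimination can be wrapped around an SVM, the sketch below clusters connectivity features, scores each cluster by cross-validated SVM accuracy, and discards the weakest clusters before iterating. It is a simplified stand-in, not the authors' CPGC/RCE implementation; the function name, cluster counts and thresholds are assumptions.

```python
# Simplified RCE-around-SVM sketch; scoring details differ from the paper's method.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

def rce_svm(X, y, n_clusters=20, drop_fraction=0.3, min_features=10, seed=0):
    """X: (subjects, features) FC/EC features; y: class labels per subject."""
    keep = np.arange(X.shape[1])                     # indices of surviving features
    while keep.size > min_features:
        k = min(n_clusters, keep.size)
        # Group the surviving features into clusters of similar features.
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X[:, keep].T)
        # Score each cluster by the cross-validated accuracy of an SVM trained on it alone.
        scores = [cross_val_score(SVC(kernel="linear"), X[:, keep[labels == c]], y, cv=5).mean()
                  for c in range(k)]
        # Eliminate the weakest clusters and iterate on the remaining features.
        weakest = set(np.argsort(scores)[: max(1, int(drop_fraction * k))])
        keep = keep[~np.isin(labels, list(weakest))]
    accuracy = cross_val_score(SVC(kernel="linear"), X[:, keep], y, cv=5).mean()
    return keep, accuracy
```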

    Differential modulation of corticospinal excitability during haptic sensing of 2-D patterns vs. textures

    Background: Recently, we showed a selective enhancement in corticospinal excitability when participants actively discriminated raised 2-D symbols with the index finger. This extra facilitation likely reflected activation in the premotor and dorsal prefrontal cortices modulating motor cortical activity during attention to haptic sensing. However, this parieto-frontal network appears to be finely modulated depending upon whether haptic sensing is directed towards material or geometric properties. To examine this issue, we contrasted changes in corticospinal excitability when young adults (n = 18) were engaged in either a roughness discrimination on two gratings with different spatial periods, or a 2-D pattern discrimination of the relative offset in the alignment of a row of small circles in the upward or downward direction. Results: A significant effect of task conditions was detected on motor evoked potential amplitudes, reflecting the observation that corticospinal facilitation was, on average, ~18% greater in the pattern discrimination than in the roughness discrimination. Conclusions: This differential modulation of corticospinal excitability during haptic sensing of 2-D patterns vs. roughness is consistent with the existence of preferred activation of a visuo-haptic cortical dorsal stream network, including frontal motor areas, during spatial vs. intensive processing of surface properties in the haptic system.

    Spatial Language Processing in the Blind: Evidence for a Supramodal Representation and Cortical Reorganization

    Neuropsychological and imaging studies have shown that the left supramarginal gyrus (SMG) is specifically involved in processing spatial terms (e.g. above, left of), which locate places and objects in the world. The current fMRI study focused on the nature and specificity of the representation of spatial language in the left SMG by combining behavioral and neuronal activation data in blind and sighted individuals. Data from the blind provide an elegant way to test the supramodal representation hypothesis, i.e. that abstract codes represent spatial relations and should therefore yield no activation differences between blind and sighted participants. Indeed, the left SMG was activated during spatial language processing in both blind and sighted individuals, implying a supramodal representation of spatial and other dimensional relations that does not require visual experience to develop. However, in the absence of vision, functional reorganization of the visual cortex is known to take place. An important consideration with respect to our finding is the amount of functional reorganization during language processing in our blind participants. Therefore, the participants also performed a verb generation task. We observed that occipital areas were activated during covert language generation only in the blind. Additionally, in the first task, functional reorganization was observed for processing language with a high linguistic load. As the visual cortex was not specifically active for spatial content in the first task, and no reorganization was observed in the SMG, the latter finding further supports the notion that the left SMG is the main node for a supramodal representation of verbal spatial relations.

    DISCRIMINATION OF URBAN SETTLEMENT TYPES BASED ON SPACE-BORNE SAR DATASETS AND A CONDITIONAL RANDOM FIELDS MODEL

    In this work we focused on the classification of Urban Settlement Types (USTs) based on two datasets from the TerraSAR-X satellite, acquired in ascending and descending look directions. These datasets comprise the intensity, amplitude and coherence images of the ascending and descending acquisitions. In accordance with most official UST maps, the urban blocks of our study site were considered as the elements to be classified. The UST classes considered in this paper are: Vegetated Areas, Single-Family Houses, and Commercial and Residential Buildings. Three different groups of image attributes were utilized, namely: Relative Areas, Histogram of Oriented Gradients, and geometrical and contextual attributes extracted from the nodes of a Max-Tree Morphological Profile. These image attributes were fed to three soft multi-class classification algorithms, so that each classifier output a membership value for each of the classes. These membership values were then treated as the potentials of the unary factors of a Conditional Random Field (CRF) model, whose pairwise factors were parameterised with a Potts function. The reclassification performed with the CRF model enabled a slight increase of the classification accuracy, from 76% to 79%, over the 1926 urban blocks.
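    A minimal sketch of the reclassification step may help: the soft membership values act as unary potentials, adjacent urban blocks are coupled by a Potts pairwise term, and a simple iterated conditional modes (ICM) pass relaxes the labels. The paper's exact CRF parameterisation and inference are not given here, so the weight beta, the adjacency structure and the use of ICM are assumptions.

```python
# Sketch of unary-plus-Potts relabelling of urban blocks; not the paper's exact model.
import numpy as np

def crf_potts_icm(membership, adjacency, beta=0.5, n_iter=10):
    """membership: (n_blocks, n_classes) soft classifier outputs in [0, 1].
    adjacency:  dict mapping a block index to the indices of its neighbouring blocks.
    beta:       weight of the Potts smoothness term (assumed value)."""
    unary = -np.log(np.clip(membership, 1e-6, 1.0))  # lower energy = more likely class
    labels = unary.argmin(axis=1)                    # start from the independent decision
    for _ in range(n_iter):
        for i in range(len(labels)):
            # Potts penalty: beta for every neighbour carrying a different label.
            pairwise = np.array([
                beta * sum(lab != labels[j] for j in adjacency.get(i, []))
                for lab in range(membership.shape[1])
            ])
            labels[i] = np.argmin(unary[i] + pairwise)
    return labels
```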

    Towards airborne single pass decimeter resolution SAR interferometry over urban areas

    Airborne cross-track Synthetic Aperture Radar interferometers have the capability of deriving three-dimensional topographic information with just a single pass over the area of interest. In order to obtain a highly accurate height estimate, either a large interferometric baseline or a high radar frequency has to be used. The utilization of a millimeter wave SAR allows precise height estimation even for short baselines. Combined with a spatial resolution in the decimeter range, this enables the mapping of urban areas from airborne platforms. The side-looking SAR imaging geometry, however, leads to disturbing effects like layover and shadowing, which are further intensified by the shallow looking angles caused by the relatively low altitudes of airborne SAR systems. To overcome this deficiency, enhanced InSAR processing strategies relying on multi-aspect and multi-baseline data are shown to be necessary.
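    The baseline/frequency trade-off mentioned above follows from the textbook height-of-ambiguity relation for a cross-track interferometer (a standard formula, not quoted from this abstract):

```latex
% Height of ambiguity of a cross-track interferometer (standard relation):
%   \lambda = radar wavelength, r = slant range, \theta = look angle,
%   B_\perp = perpendicular baseline, p = 1 (single transmitter) or 2 (ping-pong).
\[
  h_{2\pi} \;=\; \frac{\lambda \, r \, \sin\theta}{p \, B_\perp}
\]
% A shorter wavelength (higher frequency, e.g. millimeter waves) or a larger
% baseline reduces h_{2\pi} and thus increases the height sensitivity.
```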

    Reconstruction of building models from maps and laser altimeter data

    In this paper we describe a procedure for generating building models from large-scale vector maps and laser altimeter data. First, the vector map is analyzed to group the outlines of buildings and to obtain a hierarchical description of buildings or building complexes. The base area is used to mask the elevation data of single buildings and to derive a coarse 3D description by prismatic models. Afterwards, details of the roof are analyzed. Based on the histogram of heights, flat roofs and sloped roofs are discriminated. For reconstructing flat roofs with superstructures, peaks are searched for in the histogram and used to segment the height data. Compact segments are examined for a regular shape and approximated by additional prismatic objects. For reconstructing sloped roofs, the gradient field of the elevation data is calculated and a histogram of orientations is determined. Major orientations in the histogram are detected and used to segment the elevation image. For each segment containing homogeneous orientations and slopes, a spatial plane is fitted and a 3D contour is constructed. In order to obtain a polygonal description, adjacent planes are intersected and common vertices are calculated.
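    Two of the steps above, estimating the dominant roof orientations from a gradient-orientation histogram and fitting a plane to a segment, are simple enough to sketch. The code below is only an illustration under assumed conventions (regular-grid elevation data, least-squares plane z = ax + by + c), not the authors' implementation.

```python
# Illustrative sketch of roof-orientation detection and per-segment plane fitting.
import numpy as np

def major_orientations(dsm, n_bins=36, n_peaks=2):
    """dsm: (H, W) masked elevation data of a single building."""
    gy, gx = np.gradient(dsm)                            # gradient field of the elevation data
    theta = np.degrees(np.arctan2(gy, gx)) % 360.0       # gradient orientation per pixel
    hist, edges = np.histogram(theta, bins=n_bins, range=(0.0, 360.0))
    peaks = np.argsort(hist)[-n_peaks:]                  # bins of the dominant orientations
    return 0.5 * (edges[peaks] + edges[peaks + 1])       # bin centres in degrees

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through (N, 3) points of one segment."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return a, b, c
```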

    SIMULATION OF CLOSE-RANGE PHOTOGRAMMETRIC SYSTEMS FOR INDUSTRIAL SURFACE INSPECTION

    Close-range photogrammetric measurement systems are increasingly used for high-precision surface inspection of car body parts. These measurement systems are based on an active light source, the projector, and one or more cameras. Many systems project a sequence of fringe patterns, mostly a combination of the gray code and phase shift techniques. The quality of the measurement result depends largely on well-chosen sensor positions, which so far requires human expert knowledge and experience. But is it possible to use computer-based algorithms to find optimal measuring positions? Simulation processes are investigated as part of a research project aimed at evaluating the quality of measuring positions with respect to visibility, attainable accuracy and realizable feature extraction. One approach is the simulation of the photogrammetric sensor using ray tracing techniques to create photorealistic pictures from the view of the sensor cameras. This image sequence can then be processed with the evaluation software of the system manufacturer in order to calculate a three-dimensional point cloud. A subsequent actual/target comparison should indicate deviations that trace back to insufficient measuring positions. In this paper we show how to build up a virtual close-range photogrammetric sensor using POV-Ray, a free ray tracing software package. After introducing the simulation concept, the design of a virtual close-range photogrammetric sensor is presented. Based on practical examples of sampled scenes, the potential of photorealistic ray tracing is demonstrated. Finally, the usability of this simulation approach is discussed.
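    As an illustration of the kind of scene description involved, the sketch below writes a POV-Ray camera block for one candidate sensor position from a focal length, sensor width and pose. The mapping into POV-Ray's left-handed coordinate frame and any lens-distortion handling are deliberately omitted, and all parameter values are assumptions rather than settings from the paper.

```python
# Minimal sketch: emit a POV-Ray camera block for one simulated sensor position.
import math

def povray_camera(location, look_at, focal_mm, sensor_width_mm, width_px, height_px):
    """location / look_at: 3-tuples in scene units; focal length and sensor width in mm."""
    hfov = 2.0 * math.degrees(math.atan(0.5 * sensor_width_mm / focal_mm))  # horizontal FOV
    aspect = width_px / height_px
    return (
        "camera {\n"
        "  perspective\n"
        f"  location <{location[0]}, {location[1]}, {location[2]}>\n"
        f"  look_at  <{look_at[0]}, {look_at[1]}, {look_at[2]}>\n"
        f"  right    x*{aspect:.4f}\n"
        "  up       y\n"
        f"  angle    {hfov:.2f}\n"
        "}\n"
    )

# Example (assumed values): a camera 1.2 m in front of the part, looking at the origin.
print(povray_camera((0.0, 0.0, -1.2), (0.0, 0.0, 0.0), 16.0, 8.8, 1600, 1200))
```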