
    Multimodal Functional Network Connectivity: An EEG-fMRI Fusion in Network Space

    EEG and fMRI recordings measure the functional activity of multiple coherent networks distributed in the cerebral cortex. Identifying network interactions from the complementary neuroelectric and hemodynamic signals may help to explain the complex relationships between different brain regions. In this paper, multimodal functional network connectivity (mFNC) is proposed for the fusion of EEG and fMRI in network space. First, functional networks (FNs) are extracted using spatial independent component analysis (ICA) in each modality separately. Then the interactions among FNs in each modality are explored by Granger causality analysis (GCA). Finally, fMRI FNs are matched to EEG FNs in the spatial domain using network-based source imaging (NESOI). Investigations of both synthetic and real data demonstrate that mFNC has the potential to reveal the underlying neural networks of each modality separately and in their combination. With mFNC, comprehensive relationships among FNs might be unveiled for the deep exploration of neural activities and metabolic responses in a specific task or neurological state.
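
The pairwise test at the core of GCA can be sketched in a few lines: a restricted autoregressive model of one FN time course is compared against a full model that also includes lagged values of another FN. This is a minimal illustration, not the authors' pipeline; the AR order, function name, and synthetic series are assumptions.

```python
import numpy as np

def granger_stat(x, y, p=2):
    """Test whether past values of y help predict x beyond x's own past.

    Returns the log ratio of residual sums of squares of the restricted
    (x's own past only) vs. full (x's and y's past) AR(p) models;
    larger values suggest y Granger-causes x.
    """
    n = len(x)
    # Lag-k columns aligned with the prediction target x[p:].
    X_own = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    X_cross = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]
    ones = np.ones((n - p, 1))
    X_r = np.hstack([ones, X_own])            # restricted model
    X_f = np.hstack([ones, X_own, X_cross])   # full model
    rss_r = np.sum((target - X_r @ np.linalg.lstsq(X_r, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - X_f @ np.linalg.lstsq(X_f, target, rcond=None)[0]) ** 2)
    return np.log(rss_r / rss_f)

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.roll(y, 1) + 0.1 * rng.standard_normal(500)  # x is driven by lagged y
print(granger_stat(x, y) > granger_stat(y, x))  # True: y -> x is the stronger direction
```

In practice one would convert the log ratio into an F-statistic and assess significance; the asymmetry of the statistic is what defines the directed edges among FNs.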

    A Neuroimaging Web Interface for Data Acquisition, Processing and Visualization of Multimodal Brain Images

    Structural and functional brain images are generated as essential modalities for medical experts to learn about the different functions of the brain. These images are typically visually inspected by experts. Many software packages are available to process medical images, but they are complex and difficult to use. The software packages are also hardware intensive. As a consequence, this dissertation proposes a novel Neuroimaging Web Services Interface (NWSI) as a series of processing pipelines for a common platform to store, process, visualize and share data. The NWSI system is made up of password-protected interconnected servers accessible through a web interface. The web interface driving the NWSI is based on Drupal, a popular open source content management system. Drupal provides a user-based platform in which the core code for the security and design tools is updated and patched frequently. New features can be added via modules while keeping the core software secure and intact. The webserver architecture allows for the visualization of results and the downloading of tabulated data. Several forms are available to capture clinical data. The processing pipeline starts with a FreeSurfer (FS) reconstruction of T1-weighted MRI images. Subsequently, PET, DTI, and fMRI images can be uploaded. The webserver captures uploaded images and performs essential functionalities, while processing occurs in supporting servers. The computational platform is responsive and scalable. The current pipeline for PET processing calculates all regional Standardized Uptake Value ratios (SUVRs). The FS and SUVR calculations have been validated using Alzheimer's Disease Neuroimaging Initiative (ADNI) results posted at the Laboratory of Neuro Imaging (LONI). The NWSI system provides access to a calibration process through the centiloid scale, consolidating Florbetapir and Florbetaben tracers in amyloid PET images.
The interface also offers onsite access to machine learning algorithms, and introduces new heat maps that augment expert visual rating of PET images. NWSI has been piloted using data and expertise from Mount Sinai Medical Center, the 1Florida Alzheimer's Disease Research Center (ADRC), Baptist Health South Florida, Nicklaus Children's Hospital, and the University of Miami. All results were obtained using our processing servers in order to maintain data validity, consistency, and minimal processing bias.
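
The regional SUVR computation the pipeline performs can be illustrated with a minimal sketch: each region's mean uptake is divided by the mean uptake of a reference region. The function name, toy atlas, and reference-region choice here are illustrative assumptions, not the NWSI code.

```python
import numpy as np

def regional_suvr(pet, labels, region_ids, reference_id):
    """Compute Standardized Uptake Value ratios per region.

    pet     : 3-D array of PET intensities
    labels  : integer atlas of the same shape (e.g., a FreeSurfer parcellation)
    Returns : {region_id: mean(region) / mean(reference region)}
    """
    ref = pet[labels == reference_id].mean()
    return {r: pet[labels == r].mean() / ref for r in region_ids}

# Toy volume: region 1 has twice the uptake of reference region 9.
pet = np.ones((4, 4, 4))
labels = np.full((4, 4, 4), 9)
labels[0] = 1
pet[0] = 2.0
print(regional_suvr(pet, labels, [1], 9)[1])  # 2.0
```

For amyloid PET, the reference is commonly the whole cerebellum; the centiloid calibration mentioned above then maps tracer-specific SUVRs onto a common 0–100 scale.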

    Color balance in LASER scanner point clouds

    Color balancing is an important domain in the field of photography and imaging. It is needed because of color inconsistencies that arise from a number of factors before and after capturing an image. Any deviation from the original color of a scene is an irregularity which is dealt with by color balancing techniques. Images may deviate from their accurate representation because of different illuminant ambient conditions, non-linear behavior of the camera sensors, the conversion from the wider color gamut of a raw camera format to a file format with a narrower color gamut, and so on. Many approaches exist to correct these color inconsistencies. One of the basic techniques is histogram equalization, which increases the contrast in an image by utilizing the whole dynamic range of the brightness values. Many white balancing techniques exist to remove color casts introduced by incorrect illuminant selection at the time of image capture. White balancing can be applied before image capture, right in the camera, using hardware filters with dials to set the illuminant conditions of the scene. A lot of research has been done regarding the effectiveness of white balancing after image capture. The choice of color space and file format is important to consider before white balancing. Another side to color balancing is color transfer, whereby the image statistics of one image are transferred to another image. Histogram matching is widely used to match the histogram of a source image to that of a target. Another approach matches the mean and standard deviation of a source image to those of a target image. These two approaches for color transfer are analyzed and tested in this thesis on images displaying the same scene but with different color casts. Color transfer matching the means and standard deviations is selected because of its superior color balancing and ease of implementation.
While a lot of color balancing work has been done on 2D images, little has been done in the 3D domain. There exist 3D scanners which scan a scene to build its 3D model. The 3D equivalent of the 2D pixel is a scan point, which is obtained by reflecting a laser beam from a point in a scene. Hundreds of thousands of such points make up a single scan, which displays the scene that was in the view of the 3D scanner. Because a single scan cannot capture scenes behind obstructions or beyond the scanner's range, multiple scans are taken from different positions and at different times. Multiple scans grouped together make up a data structure called a point cloud. Due to these changes in position and time, luminance conditions alter. As a result, the scan points from different scans representing the same scene show considerably different color casts. Color balancing by matching the means and standard deviations is applied to the point cloud. Color inconsistencies such as sharp color gradients between points of different scans and stray color streaks bleeding from one scan into another are greatly reduced. The results are quite appealing, as the resulting point clouds show a smooth gradient between different scans.
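
The selected color-transfer rule, matching per-channel means and standard deviations, can be sketched as follows. This is a minimal illustration; the array layout, epsilon guard, and synthetic scan colors are assumptions, not the thesis implementation.

```python
import numpy as np

def transfer_color(source, target):
    """Match per-channel mean and standard deviation of `source`
    to those of `target` (Reinhard-style statistics transfer).

    source, target: (N, 3) arrays of RGB values (scan points or pixels).
    """
    s_mean, s_std = source.mean(axis=0), source.std(axis=0)
    t_mean, t_std = target.mean(axis=0), target.std(axis=0)
    # Normalize source to zero mean / unit std, then rescale to target stats.
    return (source - s_mean) / (s_std + 1e-8) * t_std + t_mean

rng = np.random.default_rng(1)
scan_a = rng.uniform(0, 255, (1000, 3))        # reference scan colors
scan_b = rng.uniform(50, 200, (1000, 3)) + 30  # scan with a color cast
balanced = transfer_color(scan_b, scan_a)
print(np.allclose(balanced.mean(axis=0), scan_a.mean(axis=0)))  # True
```

Applied per scan within a point cloud, this pulls each scan's color statistics toward a common reference, which is what smooths the inter-scan gradients described above.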

    Evaluation of Skylab (EREP) data for forest and rangeland surveys

    The author has identified the following significant results. Four widely separated sites (near Augusta, Georgia; Lead, South Dakota; Manitou, Colorado; and Redding, California) were selected as typical sites for forest inventory, forest stress, rangeland inventory, and atmospheric and solar measurements, respectively. Results indicated that Skylab S190B color photography is good for classification of Level 1 forest and nonforest land (90 to 95 percent correct) and could be used as a data base for sampling by small and medium scale photography using regression techniques. The accuracy of Level 2 forest and nonforest classes, however, varied from fair to poor. Results of plant community classification tests indicate that both visual and microdensitometric techniques can separate deciduous, coniferous, and grassland classes to the region level in the Ecoclass hierarchical classification system. There was no consistency in classifying tree categories at the series level by visual photointerpretation. The relationship between ground measurements and large scale photo measurements of foliar cover had a correlation coefficient of greater than 0.75. Some of the relationships, however, were site dependent.
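
The regression-based sampling idea, calibrating large-scale photo measurements against ground measurements of foliar cover, can be sketched with synthetic numbers. All values below are illustrative; the study's actual plot data are not reproduced here.

```python
import numpy as np

# Illustrative (synthetic) data: large-scale photo estimates of foliar
# cover (%) and the corresponding ground measurements on sample plots.
photo = np.array([12., 25., 33., 41., 48., 57., 63., 72.])
ground = np.array([15., 22., 37., 39., 52., 55., 66., 70.])

r = np.corrcoef(photo, ground)[0, 1]             # correlation coefficient
slope, intercept = np.polyfit(photo, ground, 1)  # calibration line

# With r above ~0.75, the fitted line can predict ground cover
# from cheaper photo measurements on plots that were never visited.
print(round(r, 2), round(slope * 50 + intercept, 1))
```

A regression estimator like this is what lets a few ground plots calibrate a much larger photo sample, the double-sampling design the abstract alludes to.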

    Analysis of Hot Springs in Yellowstone National Park Using ASTER and AVIRIS Remote Sensing

    Data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Airborne Visible/IR Image Spectrometer (AVIRIS) were used to characterize hot spring deposits in the Lower, Midway, and Upper Geyser Basins of Yellowstone National Park from the visible/near infrared (VNIR) to thermal infrared (TIR) wavelengths. Field observations of these basins provided the critical ground truth for comparison to the remote sensing results. Fourteen study sites were selected based on diversity in size, deposit type, and thermal activity. Field work included detailed site surveys such as land cover analysis, photography, Global Positioning System (GPS) data collection, radiometric analysis, and VNIR spectroscopy. Samples of hot spring deposits, geyser deposits, and soil were also collected. Analysis of ASTER provided broad scale characteristics of the hot springs and their deposits, including the identification of thermal anomalies. AVIRIS high spectral resolution short-wave infrared (SWIR) spectroscopy provided the ability to detect hydrothermally altered minerals as well as a calibration for the multispectral SWIR ASTER data. From the image analysis, differences in these basins were identified including the extent of thermal alteration, the location and abundance of alteration minerals, and a comparison of active, near-extinct, and extinct geysers. The activity level of each region was determined using a combination of the VNIR-SWIR-TIR spectral differences as well as the presence of elevated temperatures, detected by the TIR subsystem of ASTER. The results of this study can be applied to the exploration of extinct mineralized hydrothermal deposits on both Earth and Mars.
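
A simple form of TIR thermal-anomaly detection can be sketched as a mean-plus-k-sigma threshold on brightness temperature. This is a hedged sketch: the thresholding rule, function name, and toy temperatures are assumptions, not the study's actual detection procedure.

```python
import numpy as np

def thermal_anomalies(tir, k=2.0):
    """Flag pixels whose TIR brightness temperature exceeds the scene
    mean by more than k standard deviations -- a simple anomaly mask.

    tir: 2-D array of brightness temperatures (e.g., from an ASTER TIR band).
    Returns a boolean mask of anomalous (hot) pixels.
    """
    return tir > tir.mean() + k * tir.std()

# Toy scene: cool background with one small hot-spring pixel cluster.
scene = np.full((50, 50), 280.0)                       # ~280 K background
scene += np.random.default_rng(2).normal(0, 0.5, scene.shape)
scene[10:12, 10:12] = 330.0                            # hot spring ~330 K
mask = thermal_anomalies(scene)
print(mask.sum())  # 4
```

Real scenes need atmospheric and emissivity corrections before such a threshold is meaningful, but the statistical idea of flagging pixels elevated above the background is the same.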

    Evaluation of ERTS-1 data for inventory of forest and rangeland and detection of forest stress

    The author has identified the following significant results. Results of photointerpretation indicated that ERTS is a good classifier of forest and nonforest lands (90 to 95 percent accurate). Photointerpreters could make this separation as accurately as signature analysis of the computer compatible tapes. Further breakdowns of cover types at each site could not be accurately classified by interpreters (60 percent) or computer analysts (74 percent). Exceptions were water, wet meadow, and coniferous stands. At no time could the large bark beetle infestations (many over 300 meters in size) be detected on ERTS images. The ERTS wavebands are too broad to distinguish the yellow, yellow-red, and red colors of the dying pine foliage from healthy green-yellow foliage. Forest disturbances could be detected on ERTS color composites about 90 percent of the time when compared with six-year-old photo index mosaics. ERTS enlargements (1:125,000 scale, preferably color prints) would be useful to forest managers of large ownerships over 5,000 hectares (12,500 acres) for broad area planning. Black-and-white enlargements can be used effectively as aerial navigation aids for precision aerial photography where maps are old or not available.

    The Default-Mode Network Represents Aesthetic Appeal that Generalizes Across Visual Domains

    Visual aesthetic evaluations, which impact decision-making and well-being, recruit the ventral visual pathway, subcortical reward circuitry, and parts of the medial prefrontal cortex overlapping with the default-mode network (DMN). However, it is unknown whether these networks represent aesthetic appeal in a domain-general fashion, independent of domain-specific representations of stimulus content (artworks versus architecture or natural landscapes). Using a classification approach, we tested whether the DMN or ventral occipitotemporal cortex (VOT) contains a domain-general representation of aesthetic appeal. Classifiers were trained on multivoxel functional MRI response patterns collected while observers made aesthetic judgments about images from one aesthetic domain. Classifier performance (high vs. low aesthetic appeal) was then tested on response patterns from held-out trials from the same domain to derive a measure of domain-specific coding, or from a different domain to derive a measure of domain-general coding. Activity patterns in category-selective VOT contained a degree of domain-specific information about aesthetic appeal, but did not generalize across domains. Activity patterns from the DMN, however, were predictive of aesthetic appeal across domains. Importantly, the ability to predict aesthetic appeal varied systematically; predictions were better for observers who gave more extreme ratings to images subsequently labeled as high or low. These findings support a model of aesthetic appreciation whereby domain-specific representations of the content of visual experiences in VOT feed into a core domain-general representation of visual aesthetic appeal in the DMN. Whole-brain searchlight analyses identified additional prefrontal regions containing information relevant for appreciation of cultural artifacts (artwork and architecture) but not landscapes.
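
The train-on-one-domain, test-on-another scheme can be sketched with a toy nearest-centroid classifier on simulated response patterns. The study used its own classifiers on real fMRI data; the simulated "shared appeal axis", the domain offsets, and all names below are purely illustrative assumptions.

```python
import numpy as np

def nearest_centroid_fit(patterns, labels):
    """Return one centroid per class from multivoxel response patterns."""
    return {c: patterns[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_centroid_predict(centroids, patterns):
    """Assign each pattern to the class with the nearest centroid."""
    classes = list(centroids)
    d = np.stack([np.linalg.norm(patterns - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

rng = np.random.default_rng(3)
axis = rng.standard_normal(100)  # hypothetical appeal axis shared across domains

def domain(n, offset):
    """Simulate one domain: baseline offset + shared appeal coding + noise."""
    y = rng.integers(0, 2, n)    # 0 = low, 1 = high aesthetic appeal
    X = offset + np.outer(y * 2 - 1, axis) + 0.5 * rng.standard_normal((n, 100))
    return X, y

X_art, y_art = domain(60, 0.0)    # training domain (e.g., artworks)
X_land, y_land = domain(60, 0.3)  # held-out domain (e.g., landscapes)
cents = nearest_centroid_fit(X_art, y_art)
acc = (nearest_centroid_predict(cents, X_land) == y_land).mean()
print(acc)  # high cross-domain accuracy when the appeal coding is shared
```

If the appeal axis were domain-specific rather than shared, cross-domain accuracy would fall to chance, which is exactly the contrast the study uses to separate DMN from VOT coding.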

    A Data-Driven Investigation of Gray Matter–Function Correlations in Schizophrenia during a Working Memory Task

    The brain is a vastly interconnected organ, and methods are needed to investigate its long range structure(S)–function(F) associations to better understand disorders such as schizophrenia that are hypothesized to be due to distributed disconnected brain regions. In previous work we introduced a methodology to reduce the whole brain S–F correlations to a histogram, and here we reduce the correlations to brain clusters. The application of our approach to sMRI [gray matter (GM) concentration maps] and functional magnetic resonance imaging data (general linear model activation maps during Encode and Probe epochs of a working memory task) from patients with schizophrenia (SZ, n = 100) and healthy controls (HC, n = 100) presented the following results. In HC the whole brain correlation histograms for GM–Encode and GM–Probe overlap for Low and Medium loads, and at High load the histograms separate, but in SZ the histograms do not overlap for any of the load levels, and Medium load shows the maximum difference. We computed GM–F differential correlation clusters using activation for Probe Medium, and they included regions in the left and right superior temporal gyri, anterior cingulate, cuneus, middle temporal gyrus, and the cerebellum. Inter-cluster GM–Probe correlations for Medium load were positive in HC but negative in SZ. Within-group inter-cluster GM–Encode and GM–Probe correlation comparisons show no differences in HC, but in SZ differences are evident in the same clusters where HC vs. SZ differences occurred for Probe Medium, indicating that the S–F integrity during Probe is aberrant in SZ. Through a data-driven whole brain analysis approach we find novel brain clusters and show how the S–F differential correlation changes during Probe and Encode at three memory load levels.
Structural and functional anomalies have been extensively reported in schizophrenia, and here we provide evidence suggesting that evaluating S–F associations can provide important additional information.
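
Reducing whole-brain S–F correlations to a histogram can be sketched as follows: across subjects, every GM region is correlated with every activation region, and the resulting correlation matrix is summarized as a histogram. This is a minimal illustration with synthetic subjects and regions, not the authors' exact preprocessing.

```python
import numpy as np

def sf_correlation_histogram(gm, act, bins=20):
    """Cross-correlate structure and function across subjects.

    gm, act : (subjects, regions) arrays of gray-matter concentration
              and task-activation values.
    Returns the (regions x regions) matrix of correlations r(GM_i, F_j)
    and its histogram -- the whole-brain summary used before clustering.
    """
    gm_z = (gm - gm.mean(0)) / gm.std(0)     # z-score each region across subjects
    act_z = (act - act.mean(0)) / act.std(0)
    n = gm.shape[0]
    corr = gm_z.T @ act_z / n                # all region-pair Pearson correlations
    hist, _ = np.histogram(corr.ravel(), bins=bins, range=(-1, 1))
    return corr, hist

rng = np.random.default_rng(4)
gm = rng.standard_normal((100, 30))                    # 100 subjects, 30 regions
act = 0.6 * gm + 0.8 * rng.standard_normal((100, 30))  # coupled S and F
corr, hist = sf_correlation_histogram(gm, act)
print(np.all(np.diag(corr) > 0.25))  # matched-region pairs are positively coupled
```

Comparing such histograms between groups (HC vs. SZ) or task epochs (Encode vs. Probe) is the reduction step the abstract describes; the clustering then localizes where the histograms differ.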

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real-time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families: - Time-of-flight (terrestrial and aerial LiDAR). - Photogrammetry (street-level, satellite, and aerial imagery). - Human-edited vector data (cadastre and other map sources). Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented plain regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions.
One other advantage of this approach is capturing high-quality color data, whereas the geometric information is usually lacking. In this thesis, we analyze in-depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings. These are terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we will work with multiple data sources and combine them when possible to produce models that can be inspected in real-time. Our research has focused on the following contributions: - Effective and feature-preserving simplification of massive point clouds. - Developing normal estimation algorithms explicitly designed for LiDAR data. - Low-stretch panoramic representation for point clouds. - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction. - Color improvement through heuristic techniques and the registration of LiDAR and imagery data. - Efficient and faithful visualization of massive point clouds using image-based techniques.
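
One building block mentioned in the contributions, normal estimation for LiDAR point clouds, is commonly done by local PCA: the eigenvector of the neighborhood covariance with the smallest eigenvalue approximates the surface normal. The brute-force neighbor search, k value, and toy planar scan below are assumptions for illustration, not the thesis algorithms.

```python
import numpy as np

def estimate_normals(points, k=12):
    """Estimate per-point normals by PCA over the k nearest neighbours.

    points: (N, 3) array of scan points. Returns (N, 3) unit normals
    (sign is ambiguous without a viewpoint to orient them).
    """
    # Brute-force pairwise distances; real pipelines use a k-d tree or grid.
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    knn = np.argsort(d, axis=1)[:, :k]
    normals = np.empty_like(points)
    for i, idx in enumerate(knn):
        nbrs = points[idx] - points[idx].mean(axis=0)   # center the neighbourhood
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]                              # smallest-variance direction
    return normals

# Toy scan: noisy points on the z = 0 plane; normals should be ~(0, 0, ±1).
rng = np.random.default_rng(5)
pts = np.column_stack([rng.uniform(0, 1, (200, 2)),
                       0.001 * rng.standard_normal(200)])
n = estimate_normals(pts)
print(np.all(np.abs(n[:, 2]) > 0.99))  # normals align with the z axis
```

LiDAR-specific estimators refine this basic scheme to cope with the anisotropic, over-represented plain regions that terrestrial scanners produce, which is what motivates the dedicated algorithms in the thesis.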