140 research outputs found

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Final Report of the ModSysC2020 Working Group - Data, Models and Theories for Complex Systems: new challenges and opportunities

    At University Montpellier 2, the modeling and simulation of complex systems has been identified as a major scientific challenge and one of the priority axes of interdisciplinary research, with major potential impact on training, the economy and society. Many research groups and laboratories in Montpellier already work in this direction, but typically in isolation within their own scientific discipline. Several local actions have been initiated to structure the scientific community around interdisciplinary projects, but with little coordination among them. The goal of the ModSysC2020 (modeling and simulation of complex systems in 2020) working group was to analyze the local situation (strengths and weaknesses, current projects), identify critical research directions and propose concrete actions in terms of research projects, equipment facilities, human resources and training. To guide this perspective, we decomposed the scientific challenge into four main themes for which there is a strong background in Montpellier: (1) modeling and simulation of complex systems; (2) algorithms and computing; (3) scientific data management; (4) production, storage and archiving of data from the observation of natural and biological media. In this report, for each theme, we introduce the context and motivations, analyze the situation in Montpellier, identify research directions and propose specific actions in terms of interdisciplinary research projects and training. We also provide an analysis of the socio-economic aspects of modeling and simulation through use cases in domains such as life science and healthcare, environmental science and energy. Finally, we discuss the importance of revisiting student training in fundamental domains such as modeling, computer programming and databases, which are typically taught too late, in specialized master's programmes.

    High Performance Implementation of Support Vector Machines Using OpenCL

    Support Vector Machines (SVMs) are a machine learning approach that is well studied, thoroughly vetted and effective in a large number of applications. The objective of this thesis is to accelerate an implementation of Support Vector Machines using a heterogeneous computing system programmed in C/C++ with OpenCL. LIBSVM, a widely available, popular and open-source implementation of SVMs, is chosen, allowing the presented work to be integrated seamlessly into existing systems. The proposed framework is evaluated in terms of speed and accuracy when performing training and classification on a number of standard data sets. Testing was based on two workstation GPUs, the NVIDIA GTX 480 and Tesla K20, and a modern workstation CPU (quad-core Intel i5, 3 GHz). We find that, for large data sets, training is accelerated by a factor ranging from 9 to 22. In general, speedup increases with the total number of training samples in the data set until the GPU device is fully utilized. While these gains in speedup are significant, they do not match the ideal parallel speedup, that is, the total number of cores in the parallel system. Our findings indicate that performance is hampered by the portions of the SVM training algorithm that are sequential. In addition, we find that the classification phase of the SVM system is accelerated by a factor of up to 12. During classification, only a relatively small number of samples are classified compared to the typical number of training samples, and the computational complexity of classification grows only linearly with the number of samples processed, whereas in the training phase it grows quadratically. The contributions of this thesis include the use of OpenCL for accelerating SVM training and testing on heterogeneous systems, and the performance analysis of the resulting acceleration.
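    The quadratic-versus-linear cost contrast the abstract describes comes from the kernel evaluations: training repeatedly needs rows of the n-by-n kernel matrix, while classifying a query needs only one row against the support vectors. The following minimal CPU sketch of an RBF kernel-row computation illustrates the hot loop that such a system would offload to the GPU; the data, gamma value and function names are illustrative assumptions, not LIBSVM's internals or the thesis's actual OpenCL kernels.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// RBF kernel: K(a, b) = exp(-gamma * ||a - b||^2)
double rbf(const std::vector<double>& a, const std::vector<double>& b,
           double gamma) {
    double d2 = 0.0;
    for (std::size_t k = 0; k < a.size(); ++k) {
        double diff = a[k] - b[k];
        d2 += diff * diff;
    }
    return std::exp(-gamma * d2);
}

int main() {
    // Toy data set of n samples. Training touches O(n^2) kernel entries,
    // which is why training speedup keeps growing with n until the GPU
    // saturates; classification needs only one row per query sample.
    const std::size_t n = 4;
    std::vector<std::vector<double>> X = {
        {0, 0, 1}, {1, 0, 0}, {0, 1, 0}, {1, 1, 1}};
    double gamma = 0.5;  // illustrative RBF width parameter

    // One kernel row: the natural unit of data-parallel GPU work.
    for (std::size_t j = 0; j < n; ++j)
        std::printf("K(0,%zu) = %f\n", j, rbf(X[0], X[j], gamma));
    return 0;
}
```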

    A Statistical Approach to the Inverse Problem in Magnetoencephalography

    Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electric sources from the magnetic signal measurements, is ill-posed; that is, there is an infinite number of mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. We reformulate the MEG inverse problem by considering time-varying sources and model their time evolution using a state space model. Based on this model, we investigate the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time point, rather than fitting fixed source estimates. A computational challenge arises because the data likelihood is nonlinear, so Markov chain Monte Carlo (MCMC) methods, including conventional Gibbs sampling, are difficult to implement. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, our new methods work in this situation without needing to tune a high-dimensional transition kernel, which is very costly. We have created a set of C programs under Linux and use the Parallel Virtual Machine (PVM) software to speed up the computation. Common methods used to estimate the number of sources in MEG data include principal component analysis and factor analysis, both of which make use of the eigenvalue distribution of the data. Other methods involve information criteria and minimum description length. Unfortunately, all these methods are very sensitive to the signal-to-noise ratio (SNR). First, we consider a wavelet approach, a residual analysis approach and a Fourier approach to estimate the noise variance. Second, a Neyman-Pearson detection-theory-based eigenthresholding method is used to decide the number of signal sources. We apply our methods to simulated data where the truth is known. A real MEG dataset recorded without a human subject is also tested. Our methods allow us to estimate the noise more accurately and are robust in deciding the number of signal sources.
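    To make the sequential importance sampling idea concrete, here is a textbook particle sketch for a generic state space model x_t = x_{t-1} + w_t, y_t = h(x_t) + v_t with Gaussian noise: particles are propagated from the state prior and reweighted by the observation likelihood, with no transition kernel to tune. This is a minimal illustration, not the thesis's two samplers; the random-walk prior, the forward map h, and the noise scales below are all assumptions.

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> proc(0.0, 0.1);  // process noise w_t
    const double obs_sd = 0.2;                        // observation noise sd
    auto h = [](double x) { return std::sin(x); };    // nonlinear forward map

    const int N = 1000;                               // number of particles
    std::vector<double> x(N, 0.0), w(N, 1.0 / N);
    std::vector<double> y = {0.1, 0.3, 0.5};          // toy observations

    for (double yt : y) {
        double wsum = 0.0;
        for (int i = 0; i < N; ++i) {
            x[i] += proc(rng);                        // propagate from the prior
            double r = yt - h(x[i]);
            // Importance weight update: w_t proportional to w_{t-1} * p(y_t | x_t).
            w[i] *= std::exp(-0.5 * r * r / (obs_sd * obs_sd));
            wsum += w[i];
        }
        double mean = 0.0;
        for (int i = 0; i < N; ++i) { w[i] /= wsum; mean += w[i] * x[i]; }
        std::printf("posterior mean estimate: %f\n", mean);
    }
    return 0;
}
```

    Because each particle's propagation and reweighting is independent, the loop over particles parallelizes naturally, which is what motivates distributing it across machines with PVM.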

    Visual Techniques for Geological Fieldwork Using Mobile Devices

    Visual techniques in general, and 3D visualisation in particular, have seen considerable adoption within the last 30 years in the geosciences and geology. Techniques such as volume visualisation, for analysing subsurface processes, and photo-coloured LiDAR point-based rendering, for digitally exploring rock exposures at the earth's surface, were applied within geology as one of the first adopting branches of science. A large amount of digital geological surface and volume data is nowadays available to desktop-based workflows for geological applications such as hydrocarbon reservoir exploration, groundwater modelling, CO2 sequestration and, in the future, geothermal energy planning. On the other hand, analysis and data collection during fieldwork have yet to embrace this "digital revolution": sedimentary logs, geological maps and stratigraphic sketches are still captured in each geologist's individual fieldbook, and physical rock samples are still transported to the lab for subsequent analysis. Is this still necessary, or are there extended digital means of data collection and exploration in the field? Are modern digital interpretation techniques accurate and intuitive enough to meaningfully support fieldwork in geology and other geoscience disciplines? This dissertation aims to address these questions and, by doing so, close the technological gap between geological fieldwork and office workflows in geology. The emergence of mobile devices and their vast array of physical sensors, combined with touch-based user interfaces, high-resolution screens and digital cameras, provides a possible digital platform for field geologists. Their ubiquitous availability increases the chances of adopting digital workflows in the field without additional, expensive equipment. The use of 3D data on mobile devices in the field is furthered by the availability of 3D digital outcrop models and the increasing ease of their acquisition. This dissertation assesses the prospects of adopting 3D visual techniques and mobile devices within field geology. The research in this dissertation uses previously acquired and processed digital outcrop models in the form of textured surfaces from optical remote sensing and photogrammetry. The scientific papers in this thesis present visual techniques and algorithms to map outcrop photographs taken in the field directly onto the surface models. Automatic mapping allows the projection of photo interpretations of stratigraphy and sedimentary facies onto the 3D textured surface, while providing the domain expert with simple-to-use, intuitive tools for the photo interpretation itself. The developed visual approach, combining insights from across the computer sciences dealing with visual information, culminates in the Geological Registration and Interpretation Toolset (GRIT), a mobile-device app that is assessed in an outcrop analogue study of the Saltwick Formation exposed at Whitby, North Yorkshire, UK. Although applicable to a diversity of study scenarios within petroleum geology and the geosciences, the particular target application of the visual techniques is to easily provide field-based outcrop interpretations for the subsequent construction of training images for multiple-point statistics reservoir modelling, as envisaged within the VOM2MPS project. Despite the success and applicability of the visual approach, numerous drawbacks and possible future extensions are discussed in the thesis based on the conducted studies.
Apart from elaborating on the more obvious limitations originating from the use of mobile devices and their limited computing capabilities and sensor accuracies, a major contribution of this thesis is the careful analysis of conceptual drawbacks in established procedures for modelling, representing, constructing and disseminating the available surface geometry. A more mathematically accurate geometric description of the underlying algebraic surfaces yields improvements and future applications not previously addressed in the literature of geology and the computational geosciences. Further extensions to the visual techniques proposed in this thesis would allow for expanded analysis, 3D exploration and improved geological subsurface modelling in general.
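    At the heart of mapping a field photograph onto a 3D outcrop surface lies a standard operation: projecting surface vertices into photo pixel coordinates given the camera pose, so that interpretations drawn on the photo can be transferred to the model. The sketch below shows only this pinhole-camera projection under simplifying assumptions (identity rotation, made-up intrinsics); it is an illustration of the general technique, not GRIT's actual registration pipeline.

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

// Project world-space point P into the image: p = K [R | t] P.
// For brevity, R is the identity and t a pure translation.
bool project(const Vec3& P, double fx, double fy, double cx, double cy,
             const Vec3& t, double& u, double& v) {
    Vec3 c{P.x + t.x, P.y + t.y, P.z + t.z};  // camera-space point
    if (c.z <= 0.0) return false;             // behind the camera: not visible
    u = fx * c.x / c.z + cx;                  // perspective divide plus
    v = fy * c.y / c.z + cy;                  // principal-point offset
    return true;
}

int main() {
    // Illustrative intrinsics for a mobile-device camera, in pixels.
    double fx = 3000, fy = 3000, cx = 2000, cy = 1500;
    Vec3 t{0.0, 0.0, 5.0};                    // camera 5 m from the outcrop

    Vec3 vertex{0.4, -0.2, 0.0};              // a surface-model vertex
    double u, v;
    if (project(vertex, fx, fy, cx, cy, t, u, v))
        std::printf("vertex maps to pixel (%.1f, %.1f)\n", u, v);
    return 0;
}
```

    In a field app, the pose (R, t) would come from the device's sensors refined by image-based registration, and the projection would run per vertex to map each photo interpretation onto the textured surface.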