
    Graph Signal Processing: Overview, Challenges and Applications

    Research in Graph Signal Processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing. We then summarize recent progress on basic GSP tools, including methods for sampling, filtering and graph learning. Next, we review progress in several application areas of GSP, including the processing and analysis of sensor network data, biological data, and applications to image processing and machine learning. We finish by providing a brief historical perspective to highlight how concepts recently developed in GSP build on prior research in other areas.
    Comment: To appear, Proceedings of the IEEE
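    As a hedged illustration of the kind of filtering tool this abstract mentions, the sketch below low-pass filters a signal on a toy path graph via the eigendecomposition of its Laplacian. The graph, signal and cutoff are invented here for illustration, not taken from the paper.

        # Illustrative sketch of a basic GSP operation: low-pass filtering a
        # graph signal in the spectral domain of the graph Laplacian.
        import numpy as np

        # Adjacency matrix of a small undirected path graph (4 nodes).
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        D = np.diag(A.sum(axis=1))          # degree matrix
        L = D - A                           # combinatorial graph Laplacian

        # Graph Fourier basis: eigenvectors of L; eigenvalues act as
        # frequencies (small eigenvalue = smooth variation over the graph).
        eigvals, U = np.linalg.eigh(L)

        x = np.array([1.0, 4.0, 2.0, 5.0])  # a signal, one value per node
        x_hat = U.T @ x                     # graph Fourier transform

        # Ideal low-pass filter: keep only the two lowest graph frequencies.
        h = (np.arange(len(eigvals)) < 2).astype(float)
        x_smooth = U @ (h * x_hat)          # filter and transform back
        print(x_smooth)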

    Multi-modal dictionary learning for image separation with application in art investigation

    In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from double-sided paintings. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken of the front and back side of the panel to drive the separation process. The crux of our approach is the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component models features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored to both a single-scale and a multi-scale setting, with the latter leading to a significant performance improvement. Moreover, to further improve the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data, taken from a digital acquisition of the Ghent Altarpiece (1432), confirms the superiority of our method over the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.
    Comment: Submitted to IEEE Transactions on Image Processing
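    To make the sparse-separation idea concrete, here is a minimal sketch assuming two pre-learned dictionaries: the observed scan is modeled as the sum of two components, each sparse in its own dictionary, and recovered with a generic ISTA solver. The random dictionaries and the plain l1 model are stand-ins for the paper's coupled multi-scale framework, not its actual method.

        # Hypothetical sketch: separate a mixture assumed to be the sum of two
        # components, each sparse in its own (pre-learned) dictionary.
        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 64, 128
        D1 = rng.standard_normal((n, k)); D1 /= np.linalg.norm(D1, axis=0)
        D2 = rng.standard_normal((n, k)); D2 /= np.linalg.norm(D2, axis=0)

        # Ground-truth sparse codes and the observed mixture (e.g., one X-ray
        # scan containing contributions from both sides of the panel).
        a1 = np.zeros(k); a1[rng.choice(k, 5, replace=False)] = rng.standard_normal(5)
        a2 = np.zeros(k); a2[rng.choice(k, 5, replace=False)] = rng.standard_normal(5)
        y = D1 @ a1 + D2 @ a2

        # ISTA on the stacked problem:
        #   min_a 0.5 * ||y - [D1 D2] a||^2 + lam * ||a||_1
        D = np.hstack([D1, D2])
        lam, step = 0.05, 1.0 / np.linalg.norm(D, 2) ** 2
        a = np.zeros(2 * k)
        for _ in range(500):
            g = D.T @ (D @ a - y)                  # gradient of the data term
            a = a - step * g
            a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold

        x1, x2 = D1 @ a[:k], D2 @ a[k:]            # separated components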

    Dimension Reduction in Big Data Environment-A Survey

    A relational database management system can handle a data set that is structured in some way; by querying the system, the user obtains a definite answer. But when the data set has no such structure, it is generally a very tedious job for the user to get an answer to a given query. This is the challenge that has confronted scientists, researchers and industry over the last decade, and this new form of data is termed big data. Parallel computation, not only at the hardware level but also through application-specific software, is now being developed to handle this new kind of data set and to address the challenges generally attached to large data sets, such as data curation, search, querying and storage. Information-sensing devices, RFID readers and cloud storage nowadays make data sets grow at an ever-increasing rate. The goal of big data analytics is to help industry and organizations make intelligent decisions by analyzing the huge number of transactions that remain untouched to this day by conventional business intelligence systems. As a data set grows large, and with redundancy, software and people need to analyze only the information useful for a particular application, and this reduced data set is more useful than the noisy, large original.
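    Surveys of this kind typically cover projection methods such as principal component analysis (PCA). As a neutral illustration of the reduction discussed above, the sketch below compresses a wide, redundant data set to a handful of components; the random data and component count are invented for illustration.

        # A minimal example of one standard dimension-reduction technique.
        import numpy as np
        from sklearn.decomposition import PCA

        X = np.random.default_rng(1).standard_normal((10_000, 200))  # wide data
        pca = PCA(n_components=10)         # keep 10 directions of largest variance
        X_reduced = pca.fit_transform(X)   # (10000, 200) -> (10000, 10)
        print(X_reduced.shape, pca.explained_variance_ratio_.sum())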

    Nonintrusive parametric NVH study of a vehicle body structure

    This is an Accepted Manuscript of an article published by Taylor & Francis Group in Mechanics Based Design of Structures and Machines on 27/06/22, available online at: http://www.tandfonline.com/10.1080/15397734.2022.2098140
    A reduced order model technique is presented to perform the parametric Noise, Vibration and Harshness (NVH) study of a vehicle body-in-white (BIW) structure characterized by material and shape design variables. The ultimate goal is to develop a methodology that efficiently explores how the static and dynamic global stiffnesses of the BIW vary across the design space, so that NVH performance can be evaluated already in the preliminary phase of the development process. The proposed technique is based on the proper generalized decomposition (PGD) method. The resulting PGD solution depends explicitly on the introduced design variables, which allows solutions to be obtained in 0.1 milliseconds and therefore opens the door to fast optimization studies and real-time visualization of results over a pre-defined range of parameters. The method is nonintrusive, so interaction with commercial software is possible. A parametrized finite element (FE) model of the BIW is built with the ANSA CAE preprocessor, which accounts for material and geometric parameters. A comparison between the parametric NVH solutions and full-order FE simulations is performed with the MSC Nastran software to validate the accuracy of the proposed method. In addition, an optimization study is presented to find the optimal material and shape properties with respect to NVH performance. Finally, to support designers in the decision-making process, a graphical interface app is developed that visualizes in real time how changes in the design variables affect pre-defined quantities of interest.
    This project is part of the Marie Skłodowska-Curie ITN-EJD ProTechTion funded by the European Union Horizon 2020 research and innovation program with Grant Number 764636. The work of Fabiola Cavaliere, Sergio Zlotnik and Pedro Díez is partially supported by the MCIN/AEI/10.13039/501100011033, Spain (Grant Numbers: PID2020-113463RB-C32, PID2020-113463RB-C33 and CEX2018-000797-S). Ruben Sevilla also acknowledges the support of the Engineering and Physical Sciences Research Council (Grant Number: EP/T009071/1).
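    The sub-millisecond evaluation times become plausible once one sees the shape of a PGD solution: an offline stage produces separated modes, and the online stage for new parameter values is just a weighted sum of those modes. A minimal sketch follows, with random stand-in modes and made-up parameter functions rather than the paper's BIW model.

        # Sketch of online PGD evaluation for a separated representation
        #   u(x, m1, m2) ~ sum_i F_i(x) * g_i(m1) * h_i(m2)
        import numpy as np

        n_dof, n_modes = 50_000, 8
        F = np.random.default_rng(2).standard_normal((n_dof, n_modes))  # spatial modes

        # Hypothetical parametric modes: cheap 1-D functions of each design
        # variable, e.g. a material parameter m1 and a shape parameter m2.
        g = lambda m1: np.array([m1 ** i for i in range(n_modes)])
        h = lambda m2: np.cos(np.arange(n_modes) * m2)

        def evaluate_pgd(m1, m2):
            """Online stage: weight each spatial mode and sum (one matvec)."""
            w = g(m1) * h(m2)              # (n_modes,) parametric weights
            return F @ w                   # full field for these parameters

        u = evaluate_pgd(1.2, 0.3)         # near-instant for any (m1, m2) in range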

    Volumetric Medical Images Visualization on Mobile Devices

    Visualization of volumetric medical images is an important tool in the diagnosis and treatment of diseases. Throughout history, one of the most difficult tasks for medical specialists has been the accurate location of broken bones and of damaged tissue during chemotherapy treatment, among other applications such as techniques used in neurological studies. These situations underline the need for visualization in medicine. New technologies, the improvement and development of new hardware and software, and the updating of older graphics applications have resulted in specialized systems for medical visualization. However, the use of these techniques on mobile devices has been limited by their low performance. In our work, we propose a client-server scheme in which the model is compressed on the server side and reconstructed on a final thin-client device. The technique restricts the natural density values to achieve good bone visualization in medical models, setting the rest of the data to zero. Our proposal applies a three-dimensional Haar wavelet function locally inside unit blocks of 16x16x16 voxels, similar to the Wavelet Based 3D Compression Scheme for Interactive Visualization of Very Large Volume Data approach. We also implement a quantization algorithm that handles error coefficients according to their frequency distributions. Finally, we evaluate the volume visualization on current mobile devices and present the specifications for implementing our technique on the Nokia N900 mobile phone.
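    For concreteness, a one-level separable 3D Haar transform on a 16x16x16 block, the building block described above, can be sketched as follows. The thresholding at the end is a crude stand-in for the paper's frequency-aware quantization, and the random block is invented for illustration.

        # One-level separable 3-D Haar transform on a 16x16x16 volume block.
        import numpy as np

        def haar1d(a, axis):
            """One Haar level along one axis: pairwise averages, then details."""
            even = np.take(a, np.arange(0, a.shape[axis], 2), axis=axis)
            odd  = np.take(a, np.arange(1, a.shape[axis], 2), axis=axis)
            return np.concatenate([(even + odd) / 2.0, (even - odd) / 2.0], axis=axis)

        block = np.random.default_rng(3).random((16, 16, 16))  # one volume block
        coeffs = block
        for ax in range(3):                 # separable: filter each axis in turn
            coeffs = haar1d(coeffs, ax)

        # Crude compression: zero out small detail coefficients before coding.
        coeffs[np.abs(coeffs) < 0.05] = 0.0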

    On-The-Fly Processing of continuous high-dimensional data streams

    A novel method and software system for rational handling of time series of multi-channel measurements is presented. This quantitative learning tool, the On-The-Fly Processing (OTFP), develops reduced-rank bilinear subspace models that summarise massive streams of multivariate responses, capturing the evolving covariation patterns among the many input variables over time and space. Thereby, considerable data compression can be achieved without significant loss of useful systematic information. The underlying proprietary OTFP methodology is relatively fast and simple: it is linear/bilinear and does not require a lot of raw data or huge cross-correlation matrices to be kept in memory. Unlike conventional compression methods, the approach allows the high-dimensional data stream to be graphically interpreted and quantitatively utilised in its compressed state. Unlike adaptive moving-window methods, it allows all past and recent time points to be reconstructed and displayed simultaneously. This new approach is applied to four different case studies: (i) multi-channel Vis-NIR spectroscopy of the Belousov-Zhabotinsky reaction, a complex, ill-understood chemical process; (ii) quality control of oranges by hyperspectral imaging; (iii) environmental monitoring by airborne hyperspectral imaging; (iv) multi-sensor process analysis in the petrochemical industry. These examples demonstrate that the OTFP can automatically develop high-fidelity subspace data models, which simplify the storage/transmission and the interpretation of more or less continuous time series of high-dimensional measurements, to the extent that there are covariations among the measured variables.
    This research work was partially supported by the Spanish Ministry of Economy and Competitiveness under the project DPI2014-55276-C5-1R, Shell Global Solutions International B.V. (Amsterdam, The Netherlands), Idletechs AS (Trondheim, Norway), the Norwegian Research Council (Grant 223254) through the Centre of Autonomous Marine Operations and Systems (AMOS) at the Norwegian University of Science and Technology (Trondheim, Norway) and the Ministry of Education, Youth and Sports of the Czech Republic (CENAKVA project CZ.1.05/2.1.00/01.0024 and CENAKVA II project L01205 under the NPU I program). The authors want to acknowledge Prof. Bjorn Alsberg for providing the Vis-NIR equipment, and the Laboratorio de Sistemas e Tecnologia Subaquatica of the University of Porto, the Hydrographic Institute of the Portuguese Navy and the University of the Azores for carrying out the REP15 exercise, during which the hyperspectral push broom image was collected.
    Vitale, R.; Zhyrova, A.; Fortuna, J.F.; De Noord, O.E.; Ferrer, A.; Martens, H. (2017). On-The-Fly Processing of continuous high-dimensional data streams. Chemometrics and Intelligent Laboratory Systems, 161:118-129. doi:10.1016/j.chemolab.2016.11.003
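    The OTFP methodology itself is proprietary, but a generic analogue of a reduced-rank bilinear (scores x loadings) model of a stream can be sketched with scikit-learn's IncrementalPCA, which updates the loadings batch by batch without keeping the raw past in memory. The channel count, rank and random data below are invented; this is not the authors' algorithm.

        # Generic streaming-subspace analogue of the bilinear modelling idea.
        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        n_channels, k = 512, 5             # e.g. spectral channels, model rank
        ipca = IncrementalPCA(n_components=k)

        rng = np.random.default_rng(4)
        for _ in range(20):                # stream arrives batch by batch
            batch = rng.standard_normal((100, n_channels))
            ipca.partial_fit(batch)        # update loadings, discard raw batch

        scores = ipca.transform(batch)     # (100, 512) -> (100, 5) compressed state
        reconstruction = ipca.inverse_transform(scores)  # time points recoverable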

    Doctor of Philosophy

    With modern computational resources rapidly advancing towards exascale, large-scale simulations useful for understanding natural and man-made phenomena are becoming increasingly accessible. As a result, the size and complexity of data representing such phenomena are also increasing, making the role of data analysis in propelling science even more integral. This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single "correct" reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, with serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of "correctness" of features for certain goals by revealing the phenomena of importance in the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residual numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency the most fundamental characteristic of computational analysis, one that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis, along with uncertainty visualization of the unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research by exploring more general reference frames and more sophisticated domain discretizations.
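    A toy illustration of the reference-frame issue described above: a steady vortex superposed on a uniform ambient flow has no critical point within the domain in the lab frame, but subtracting the ambient velocity, a Galilean frame change, makes the core detectable. This is only the simplest frame change; the dissertation's local frames are far more general, and the flow below is invented.

        # Why reference frames matter: a vortex hidden by a uniform drift.
        import numpy as np

        y, x = np.mgrid[-1:1:64j, -1:1:64j]
        vortex_u, vortex_v = -y, x          # circular flow around the origin
        u = vortex_u + 2.0                  # add a strong uniform drift in x
        v = vortex_v

        # In the lab frame the critical point (u = v = 0) lies outside the
        # domain; in the co-moving frame it reappears at the vortex core.
        u_frame = u - 2.0                   # subtract the ambient velocity
        speed = np.hypot(u_frame, v)
        core = np.unravel_index(np.argmin(speed), speed.shape)
        print("detected core (grid indices):", core)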