
    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that they become difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we exploit the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications that were previously intractable and open the door to new research questions.
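For lattice-structured inputs of the kind targeted in Chapters 4-5, the Gaussian process covariance matrix over the full grid factorizes as a Kronecker product of small per-axis kernel matrices, so expensive quantities such as the log-determinant can be computed from per-axis eigenvalues instead of the full matrix. A minimal sketch of this idea (the RBF kernel, grid sizes, and noise level here are illustrative assumptions, not details taken from the thesis):

```python
import numpy as np

def rbf(x, lengthscale=1.0):
    # Squared-exponential kernel matrix on 1-D inputs
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def kron_logdet(K1, K2, noise=1e-2):
    # log|K1 (x) K2 + noise*I| from per-axis eigenvalues:
    # the eigenvalues of a Kronecker product are all pairwise products,
    # so the cost is O(n1^3 + n2^3) instead of O((n1*n2)^3).
    e1 = np.linalg.eigvalsh(K1)
    e2 = np.linalg.eigvalsh(K2)
    lam = np.outer(e1, e2).ravel() + noise
    return np.sum(np.log(lam))

# A tiny 5 x 4 lattice of inputs
x1 = np.linspace(0, 1, 5)
x2 = np.linspace(0, 1, 4)
K1, K2 = rbf(x1), rbf(x2)

fast = kron_logdet(K1, K2)
# Reference: form the full 20 x 20 covariance and take slogdet directly
full = np.linalg.slogdet(np.kron(K1, K2) + 1e-2 * np.eye(20))[1]
```

The same eigendecomposition trick extends to solving linear systems with the covariance, which is what makes GP inference on large multidimensional lattices tractable.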

    Psnr Based Optimization Applied to Algebraic Reconstruction Technique for Image Reconstruction on a Multi-core System

    The present work presents a parallel Algebraic Reconstruction Technique (pART) to reduce the computation time of reconstructing artifact-free images from projections. ART is an iterative algorithm well known for reconstructing artifact-free images from a limited number of projections, but it suffers from poor computation speed. In this work, a novel idea is proposed to optimize the number of iterations required, based on the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image. Additionally, an attempt is made to reduce the computation time by running the iterative algorithm in a multi-core parallel environment. Execution times are computed for both serial and parallel implementations of ART using different projection data and tabulated for comparison. The experimental results demonstrate that the parallel computing environment provides a source of high computational power, allowing the reconstructed image to be obtained almost instantaneously.
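The PSNR criterion used to decide when enough ART iterations have run reduces to a simple formula over the pixel-wise mean squared error. A minimal sketch of the metric (the peak value of 255 and the synthetic test image are illustrative assumptions):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB; higher means img is closer to ref.
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stand-in for a reconstruction: reference image plus mild noise
rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(64, 64))
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
value = psnr(ref, noisy)
```

In an iterative scheme like ART, one would compute the PSNR between successive iterates and stop once the improvement per iteration falls below a threshold, fixing the iteration count without over-computing.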

    Solitary magnetic perturbations at the ELM onset

    Edge localised modes (ELMs) make it possible to maintain sufficient purity of tokamak H-mode plasmas and thus enable stationary H-mode operation. On the other hand, in a future device ELMs may cause divertor power flux densities far in excess of tolerable material limits. The size of the energy loss per ELM is determined by saturation effects in the non-linear phase of the ELM, which is at present hardly understood. Solitary magnetic perturbations (SMPs) are identified as dominant features in the radial magnetic fluctuations below 100 kHz. They are typically observed close (±0.1 ms) to the onset of pedestal erosion. SMPs are field-aligned structures rotating in the electron diamagnetic drift direction with perpendicular velocities of about 10 km/s. A comparison of perpendicular velocities suggests that the perturbation evoking SMPs is located at or inside the separatrix. Analysis of very pronounced examples showed that the number of peaks per toroidal turn is 1 or 2, which is clearly lower than corresponding numbers in linear stability calculations. In combination with strong peaking of the magnetic signals, this results in a solitary appearance resembling modes such as palm tree modes, edge snakes, or outer modes. This behaviour has been quantified as solitariness and correlated to main plasma parameters. SMPs may be considered a signature of the non-linear ELM phase originating at the separatrix or further inside; thus they provide a handle to investigate the transition from the linear to the non-linear ELM phase. By comparison with data from gas puff imaging, processes in the non-linear phase at or inside the separatrix and in the scrape-off layer (SOL) can be correlated. A connection between the passing of an SMP and the onset of radial filament propagation has been found. Eventually the findings related to SMPs may contribute to a future quantitative understanding of the non-linear ELM evolution. Comment: submitted to Nuclear Fusion

    Image processing techniques for high-speed atomic force microscopy

    Atomic force microscopy (AFM) is a powerful tool for imaging the topography or other characteristics of sample surfaces at nanometer-scale spatial resolution by recording the interaction of a sharp probe with the surface. Despite its excellent spatial resolution, one of the enduring challenges in AFM imaging is its poor temporal resolution relative to the rate of dynamics in many systems of interest. This has led to a large research effort on the development of high-speed AFM (HS-AFM). Most of these efforts focus on mechanical improvements and control algorithm design. This dissertation investigates a complementary HS-AFM approach based on the idea of undersampling, which aims at increasing the imaging rate of the instrument by reducing the number of pixels on the sample surface that need to be acquired to create a high-quality image. The first part of this work focuses on the reconstruction of images sub-sampled according to a scheme known as μ-path patterns. These patterns consist of randomly placed short and disjoint scans and are designed specifically for fast, efficient, and consistent data acquisition in AFM. We compare compressive sensing (CS) reconstruction methods with inpainting methods for recovering μ-path undersampled images. The results illustrate that the reconstruction quality depends on the choice of reconstruction method and the sample under study, with CS generally producing a superior result for samples with sparse frequency content and inpainting performing better for samples with information limited to low frequencies. Motivated by this comparison, a basis pursuit vertical variation (BPVV) method, combining CS and inpainting, is proposed. Based on single-image reconstruction results, we also extend our analysis to the problem of multiple AFM frames, in which higher overall video reconstruction quality is achieved by sharing pixels among different frames.
    The second part of the thesis considers patterns for sub-sampling in AFM. The allocation of measurements plays an important role in producing accurate reconstructions of the sample surface. We analyze the expected image reconstruction error using a greedy CS algorithm of our design, termed simplified matching pursuit (SMP), and propose a Monte Carlo-based strategy to create μ-path patterns that minimize the expected error. Because these μ-path patterns involve a collection of disjoint scan paths, they require the tip of the instrument to be repeatedly lifted from and re-engaged with the surface. In many cases, the re-engagements make up a significant portion of the total data acquisition time. We therefore extend our Monte Carlo design strategy to find continuous scan patterns that minimize the reconstruction error without requiring the tip to be lifted from the surface. For the final part of the work, we provide a hardware demonstration on a commercial AFM. We describe hardware implementation details and image a calibration grating using the proposed μ-path and continuous scan patterns. The sample surface is reconstructed from the acquired data using CS and inpainting methods. The recovered image quality and achievable imaging rate are compared to full raster scans of the sample. The experimental results show that the proposed scan patterns, combined with reconstruction methods, can produce higher image quality with less imaging time.
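The greedy CS reconstruction underlying this line of work can be illustrated with a toy version: recover a signal that is sparse in a cosine basis from a random subset of its samples by matching pursuit. This is a generic orthogonal matching pursuit sketch, not the SMP or BPVV algorithms from the thesis; the DCT basis, sparsity level, and sampling pattern are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: D @ x gives coefficients, D.T @ c a signal
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.cos(np.pi * (m + 0.5) * k / n) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def omp(A, y, n_atoms):
    # Orthogonal matching pursuit: greedily pick the dictionary column most
    # correlated with the residual, then re-fit the support by least squares.
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

n = 128
D = dct_matrix(n)
true_coef = np.zeros(n)
true_coef[[3, 17]] = [2.0, -1.5]     # 2-sparse in the DCT basis
signal = D.T @ true_coef

rng = np.random.default_rng(1)
keep = np.sort(rng.choice(n, size=40, replace=False))  # sampled "pixels"
A = D.T[keep]                        # rows of the basis actually measured
recovered = omp(A, signal[keep], n_atoms=2)
```

The sampled rows of the basis play the role of a μ-path measurement pattern: where the samples fall determines how well the greedy solver can separate the active atoms, which is exactly why measurement allocation matters.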

    End-to-end learning of brain tissue segmentation from imperfect labeling

    Segmenting a structural magnetic resonance imaging (MRI) scan is an important pre-processing step for analytic procedures and subsequent inferences about longitudinal tissue changes. Manual segmentation defines the current gold standard in quality but is prohibitively expensive. Automatic approaches are computationally intensive, incredibly slow at scale, and error prone because they usually involve many potentially faulty intermediate steps. In order to streamline the segmentation, we introduce a deep learning model based on volumetric dilated convolutions, which reduces both processing time and errors. Compared to its competitors, the model has a reduced set of parameters and thus is easier to train and much faster to execute. The contrast in performance between the dilated network and its competitors becomes obvious when both are tested on a large dataset of unprocessed human brain volumes. The dilated network consistently outperforms not only another state-of-the-art deep learning approach, the up-convolutional network, but also the ground truth on which it was trained. Not only can the incredible speed of our model make large-scale analyses much easier, but we also believe it has great potential in a clinical setting where, with little to no substantial delay, a patient and provider can go over test results. Comment: Published as a conference paper at IJCNN 2017. Preprint version
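Dilated convolutions enlarge a layer's receptive field by inserting gaps between filter taps rather than adding parameters, which is what keeps the model above small and fast. A minimal 1-D sketch of the operation (the paper's network applies the same idea volumetrically in 3-D; the filter values here are illustrative):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    # 'Valid' 1-D correlation with (dilation - 1) gaps between filter taps.
    # A k-tap filter then covers (k - 1) * dilation + 1 input samples, so
    # stacking layers with growing dilation expands the receptive field
    # exponentially with a fixed parameter count.
    k = len(w)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
y1 = dilated_conv1d(x, np.array([1.0, -1.0]), dilation=1)  # adjacent difference
y2 = dilated_conv1d(x, np.array([1.0, -1.0]), dilation=4)  # difference 4 apart
```

With the same two parameters, the dilated filter responds to structure four samples apart, mirroring how the volumetric network sees large brain regions without large kernels.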