
    A distributed Quadtree Dictionary approach to multi-resolution visualization of scattered neutron data

    Grid computing is described as dependable, seamless, pervasive access to resources and services, whereas mobile computing allows people to move from place to place while staying connected to resources at each location. Mobile grid computing is a new computing paradigm that joins these two technologies, enabling access to the collection of resources within a user's virtual organization while preserving the freedom of mobile computing through a service paradigm. Because virtual organizations are necessarily heterogeneous collections of resources, a major problem within them is needs mismatch, in which one resource requests a service that another resource is unable to fulfill. In this dissertation we propose a solution to the needs mismatch problem for high energy physics data. Specifically, we propose a Quadtree Dictionary (QTD) algorithm to provide lossless, multi-resolution compression of datasets and enable their visualization on devices of all capabilities. As a prototype application, we extend the Integrated Spectral Analysis Workbench (ISAW), developed at the Intense Pulsed Neutron Source Division of Argonne National Laboratory, into a mobile Grid application, Mobile ISAW. We compare our QTD algorithm with several existing compression techniques on ISAW's Single-Crystal Diffractometer (SCD) datasets, then extend the algorithm to a distributed setting and examine its effectiveness on the next generation of SCD datasets. In both serial and distributed settings, our QTD algorithm performs no worse than existing techniques such as the square wavelet transform in terms of energy conservation, while providing worst-case savings of 8:1.
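
    The abstract gives no implementation detail, but the general idea of a quadtree-based multi-resolution representation can be sketched briefly. The Python fragment below is a minimal, hypothetical illustration of that technique, not the dissertation's QTD algorithm: it recursively subdivides a 2-D intensity array, storing one mean per node, so a resource-constrained device can render a coarse preview from the top levels alone. The tolerance `tol` and the node layout are our assumptions.

```python
import numpy as np

def build_quadtree(block, tol=0.05):
    """Recursively subdivide a square 2-D array into a quadtree.

    Leaves store a single mean value; internal nodes keep four
    children (NW, NE, SW, SE) plus their own mean so that coarse
    resolutions can be rendered without descending further.
    Illustrative sketch only; not the dissertation's QTD algorithm.
    """
    mean = float(block.mean())
    if block.shape[0] == 1 or float(block.std()) <= tol:
        return {"mean": mean}                       # uniform enough: stop
    h, w = block.shape[0] // 2, block.shape[1] // 2
    return {"mean": mean,
            "children": [build_quadtree(block[:h, :w], tol),   # NW
                         build_quadtree(block[:h, w:], tol),   # NE
                         build_quadtree(block[h:, :w], tol),   # SW
                         build_quadtree(block[h:, w:], tol)]}  # SE

def render(tree, size, depth):
    """Reconstruct at a chosen resolution: larger depth = finer detail."""
    out = np.full((size, size), tree["mean"])
    if depth == 0 or "children" not in tree:
        return out
    h = size // 2
    for child, (r, c) in zip(tree["children"], [(0, 0), (0, h), (h, 0), (h, h)]):
        out[r:r + h, c:c + h] = render(child, h, depth - 1)
    return out

rng = np.random.default_rng(0)
data = rng.random((64, 64))
data[16:32, 16:32] = 0.0              # a flat region collapses to one leaf
tree = build_quadtree(data)           # becomes lossless as tol -> 0
preview = render(tree, 64, depth=2)   # coarse preview for a small device
```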

    Efficient Scalable Video Coding Based on Matching Pursuits


    Zoom: A multi-resolution tasking framework for crowdsourced geo-spatial sensing

    As sensor networking technologies continue to develop, adding large-scale mobility to sensor networks is becoming feasible by crowd-sourcing data collection to personal mobile devices. However, tasking such networks at fine granularity is problematic because the sensors are heterogeneous and owned by the crowd, not by the network operators. In this paper, we present Zoom, a multi-resolution tasking framework for crowdsourced geo-spatial sensor networks. Zoom allows users to define arbitrary sensor groupings over heterogeneous, unstructured and mobile networks and assign different sensing tasks to each group. The key idea is the separation of the task information (what task a particular sensor should perform) from the task implementation (code). Zoom consists of (i) a map, an overlay on top of a geographic region, that represents both the sensor groups and the task information, and (ii) adaptive encoding of the map at multiple resolutions together with region-of-interest cropping for resource-constrained devices, allowing sensors to zoom in quickly to a specific region to determine their task. Simulation of a realistic traffic application over an area of 1 sq. km with a task map of size 1.5 KB shows that more than 90% of nodes are tasked correctly. Zoom also outperforms Logical Neighborhoods, the state-of-the-art tasking protocol, in task information size for similar tasks: its encoded map size is always less than 50% of Logical Neighborhoods' predicate size.
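
    Zoom's separation of task information from task code can be pictured as a small raster of task IDs laid over the region, which each device crops and decodes at whatever resolution it can afford. The sketch below is a hypothetical illustration of that idea only; the grid layout, majority-vote downsampling and cropping API are our assumptions, not Zoom's actual encoding.

```python
import numpy as np
from collections import Counter

def crop_roi(task_map, row0, col0, rows, cols):
    """Cut out the region of interest so a device only decodes what it needs."""
    return task_map[row0:row0 + rows, col0:col0 + cols]

def downsample(task_map, factor):
    """Reduce resolution by majority vote over factor x factor cells."""
    r, c = task_map.shape
    out = np.empty((r // factor, c // factor), dtype=task_map.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            cell = task_map[i*factor:(i+1)*factor, j*factor:(j+1)*factor]
            out[i, j] = Counter(cell.ravel().tolist()).most_common(1)[0][0]
    return out

# A 1 km x 1 km region at 10 m resolution: 100 x 100 cells of task IDs.
task_map = np.zeros((100, 100), dtype=np.int8)
task_map[:50, :] = 1          # task 1 (e.g. traffic sensing) in the north half
task_map[60:80, 20:40] = 2    # task 2 in a smaller district

roi = crop_roi(task_map, 40, 0, 40, 40)   # the device's neighbourhood
coarse = downsample(roi, 4)               # constrained device: 10x10 summary
```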

    Quadtree Structured Approximation Algorithms

    The success of many image restoration algorithms is often due to their ability to sparsely describe the original signal. Many sparsity-promoting transforms exist, including wavelets, the so-called ‘lets’ family of transforms and more recent non-local learned transforms. The first part of this thesis reviews sparse approximation theory, particularly in relation to 2-D piecewise polynomial signals. We also show the connection between this theory and current state-of-the-art algorithms covering the following image restoration and enhancement applications: denoising, deconvolution, interpolation and multi-view super-resolution. In [63], Shukla et al. proposed a compression algorithm, based on a sparse quadtree decomposition model, which could optimally represent piecewise polynomial images. In the second part of this thesis we adapt this model to image restoration by changing the rate-distortion penalty to a description-length penalty. Moreover, one of the major drawbacks of this type of approximation is the computational complexity required to find a suitable subspace for each node of the quadtree. We address this issue by searching for a suitable subspace much more efficiently using the mathematics of updating matrix factorisations. Novel algorithms are developed to tackle the four problems previously mentioned. Simulation results indicate that we beat state-of-the-art results when the original signal is in the model (e.g. depth images) and are competitive for natural images when the degradation is high.
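
    The description-length idea behind the thesis's restoration model can be sketched as a penalized split test: a quadtree node is subdivided only when the four children describe the data enough better to pay for their extra parameters. The fragment below is a simplified, hypothetical version, where constant fits and an SSE-plus-penalty cost stand in for the thesis's polynomial subspaces and its true description-length criterion.

```python
import numpy as np

LAMBDA = 50.0  # penalty per leaf: the "description length" of its parameters

def fit_cost(block):
    """Cost of describing the block with one constant: SSE + parameter penalty."""
    return float(((block - block.mean()) ** 2).sum()) + LAMBDA

def prune_quadtree(block):
    """Return (cost, tree): split only if four children beat one leaf."""
    leaf_cost = fit_cost(block)
    if min(block.shape) < 2:
        return leaf_cost, {"mean": float(block.mean())}
    h, w = block.shape[0] // 2, block.shape[1] // 2
    kids = [prune_quadtree(b) for b in
            (block[:h, :w], block[:h, w:], block[h:, :w], block[h:, w:])]
    split_cost = sum(c for c, _ in kids)
    if split_cost < leaf_cost:
        return split_cost, {"children": [t for _, t in kids]}
    return leaf_cost, {"mean": float(block.mean())}

# Piecewise-constant test image: the flat quadrant stays a single leaf.
img = np.zeros((32, 32))
img[:16, :16] = 1.0
cost, tree = prune_quadtree(img)
```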

    DeepMatching: Hierarchical Deformable Dense Matching

    We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. 2005), MPI-Sintel (Butler et al. 2012) and KITTI (Geiger et al. 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results, in particular for repetitive textures. We also propose a method for estimating optical flow, called DeepFlow, by integrating DeepMatching into the large displacement optical flow (LDOF) approach of Brox and Malik (2011). Compared to existing matching algorithms, additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.
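
    At its lowest level, a DeepMatching-style pipeline correlates small patches of the first image with every location in the second, then aggregates those correlation maps hierarchically. The sketch below shows only that bottom correlation step in plain NumPy, as a hypothetical illustration; the patch size, the normalization and the omission of the multi-level max-pool aggregation are our simplifications, not the paper's architecture.

```python
import numpy as np

def patch_correlation(im1, im2, patch=4, stride=4):
    """Correlation map of each im1 patch against every im2 position.

    Returns shape (n_patches_y, n_patches_x, H2-patch+1, W2-patch+1)
    of normalized dot products: the bottom level of a matching pyramid.
    Brute force, O(N^2) in image size; real systems use convolutions.
    """
    H2, W2 = im2.shape
    ny = (im1.shape[0] - patch) // stride + 1
    nx = (im1.shape[1] - patch) // stride + 1
    corr = np.zeros((ny, nx, H2 - patch + 1, W2 - patch + 1))
    for i in range(ny):
        for j in range(nx):
            p = im1[i*stride:i*stride + patch, j*stride:j*stride + patch]
            p = p / (np.linalg.norm(p) + 1e-8)       # unit-norm patch
            for y in range(H2 - patch + 1):
                for x in range(W2 - patch + 1):
                    q = im2[y:y + patch, x:x + patch]
                    q = q / (np.linalg.norm(q) + 1e-8)
                    corr[i, j, y, x] = float((p * q).sum())
    return corr

rng = np.random.default_rng(0)
a, b = rng.random((16, 16)), rng.random((16, 16))
scores = patch_correlation(a, b)   # argmax over (y, x) gives each patch's match
```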

    An investigation into Quadtree fractal image and video compression

    Digital imaging is the representation of drawings, photographs and pictures in a format that can be displayed and manipulated using a conventional computer. Digital imaging has enjoyed increasing popularity over recent years, with the explosion of digital photography, the Internet and graphics-intensive applications and games. Digitised images, like other digital media, require a relatively large amount of storage space. These storage requirements can become problematic as demand for higher-resolution images increases and the resolution capabilities of digital cameras improve. It is not uncommon for a personal computer user to have a collection of thousands of digital images, mainly photographs, whilst the Internet's Web pages present a practically infinite source. These two factors, image size and abundance, inevitably lead to a storage problem. As with other large files, data compression can help reduce these storage requirements. Data compression aims to reduce the overall storage requirements for a file by minimising redundancy. The most popular image compression method, JPEG, can reduce the storage requirements for a photographic image by a factor of ten whilst maintaining the appearance of the original image, or it can deliver much greater levels of compression with a slight loss of quality as a trade-off. Whilst JPEG's efficiency has made it the definitive image compression algorithm, there is always a demand for even greater levels of compression, and as a result new image compression techniques are constantly being explored. One such technique utilises the unique properties of fractals. Fractals are relatively small mathematical formulae that can be used to generate abstract and often colourful images with infinite levels of detail. This property is of interest in the area of image compression because a detailed, high-resolution image can be represented by a few thousand bytes of formulae and coefficients rather than the more typical multi-megabyte file sizes. The real challenge associated with fractal image compression is to determine the correct set of formulae and coefficients to represent the image a user is trying to compress; it is trivial to produce an image from a given formula, but much harder to produce a formula from a given image. In theory, fractal compression can outperform JPEG for a given image and quality level, if the appropriate formulae can be determined. Fractal image compression can also be applied to digital video sequences, which are typically represented by a long series of digital images, or 'frames'.
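
    The "much harder" inverse problem the abstract describes, finding formulae and coefficients for a given image, is in practice a search: each small "range" block is matched against downsampled "domain" blocks under a contrast/brightness transform. The following is a minimal, hypothetical encoder search in that spirit (fixed block sizes, no rotations, no quadtree splitting), not the thesis's actual scheme.

```python
import numpy as np

def encode_block(rng_blk, domains):
    """Find the domain block and affine map s*d + o that best match rng_blk."""
    best, r = None, rng_blk.ravel()
    for idx, dom in enumerate(domains):
        d = dom.ravel()
        A = np.stack([d, np.ones_like(d)], axis=1)
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)   # least-squares s, o
        err = float(((s * d + o - r) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, idx, float(s), float(o))
    return best  # (error, domain index, contrast s, brightness o)

def fractal_encode(img, rsize=4):
    """One (domain, s, o) triple per range block; assumes divisible sizes."""
    dsize = 2 * rsize
    assert img.shape[0] % dsize == 0 and img.shape[1] % dsize == 0
    # Domain pool: blocks twice the range size, averaged down 2:1.
    domains = [img[y:y + dsize, x:x + dsize]
               .reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))
               for y in range(0, img.shape[0], dsize)
               for x in range(0, img.shape[1], dsize)]
    return [encode_block(img[y:y + rsize, x:x + rsize], domains)
            for y in range(0, img.shape[0], rsize)
            for x in range(0, img.shape[1], rsize)]

codes = fractal_encode(np.random.default_rng(1).random((32, 32)))
print(len(codes), "blocks encoded as a few coefficients each")
```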

    DCMS: A data analytics and management system for molecular simulation

    Molecular Simulation (MS) is a powerful tool for studying the physical/chemical features of large systems and has seen applications in many scientific and engineering domains. During the simulation process, experiments generate data for very large numbers of atoms, whose spatial and temporal relationships must be observed for scientific analysis. The sheer data volumes and their intensive interactions impose significant challenges for data access, management, and analysis. To date, existing MS software systems fall short on the storage and handling of MS data, mainly because they lack a platform to support applications that involve intensive data access and analytical processing. In this paper, we present the database-centric molecular simulation (DCMS) system our team has developed over the past few years. The main idea behind DCMS is to store MS data in a relational database management system (DBMS) to take advantage of the declarative query interface (i.e., SQL), data access methods, query processing, and optimization mechanisms of modern DBMSs. A unique challenge is to handle analytical queries that are often compute-intensive. For that, we developed novel indexing and query processing strategies (including algorithms running on modern co-processors) as integrated components of the DBMS. As a result, researchers can upload and analyze their data using efficient functions implemented inside the DBMS. Index structures are generated to store analysis results that may be of interest to other users, so that the results are readily available without duplicating the analysis. We have developed a prototype of DCMS based on the PostgreSQL system, and experiments using real MS data and workloads show that DCMS significantly outperforms existing MS software systems. We also used it as a platform to test other data management issues such as security and compression.
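
    DCMS's central move, putting trajectory data behind a declarative SQL interface, can be pictured with a tiny schema and a spatial range query. The sketch below uses SQLite from Python's standard library as a stand-in for the PostgreSQL backend described in the paper; the table layout and query are our illustrative assumptions, not DCMS's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE atoms (
        frame INTEGER,   -- simulation time step
        atom  INTEGER,   -- atom identifier
        x REAL, y REAL, z REAL
    )
""")
# Index so frame + spatial predicates don't scan the whole trajectory.
conn.execute("CREATE INDEX idx_frame_pos ON atoms(frame, x, y, z)")

# Load a toy trajectory: 3 frames x 100 atoms.
rows = [(f, a, a * 0.1, a * 0.2, f * 0.5)
        for f in range(3) for a in range(100)]
conn.executemany("INSERT INTO atoms VALUES (?, ?, ?, ?, ?)", rows)

# Declarative analysis: all atoms inside a box at time step 2.
hits = conn.execute("""
    SELECT atom, x, y, z FROM atoms
    WHERE frame = 2 AND x BETWEEN 0 AND 5
                    AND y BETWEEN 0 AND 5
                    AND z BETWEEN 0 AND 5
""").fetchall()
print(len(hits), "atoms in the query box")
```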

    A Deep Learning Approach for the Computation of Curvature in the Level-Set Method

    We propose a deep learning strategy to estimate the mean curvature of two-dimensional implicit interfaces in the level-set method. Our approach is based on fitting feed-forward neural networks to synthetic data sets constructed from circular interfaces immersed in uniform grids of various resolutions. These multilayer perceptrons process the level-set values from mesh points next to the free boundary and output the dimensionless curvature at their closest locations on the interface. Accuracy analyses involving irregular interfaces, both in uniform and adaptive grids, show that our models are competitive with traditional numerical schemes in the L^1 and L^2 norms. In particular, our neural networks approximate curvature with comparable precision in coarse resolutions, when the interface features steep curvature regions, and when the number of iterations to reinitialize the level-set function is small. Although the conventional numerical approach is more robust than our framework, our results have unveiled the potential of machine learning for dealing with computational tasks where the level-set method is known to experience difficulties. We also establish that an application-dependent map of local resolutions to neural models can be devised to estimate mean curvature more effectively than a universal neural network.
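
    The baseline the networks are measured against is the standard finite-difference curvature of the level-set field, kappa = div(grad(phi)/|grad(phi)|). As a concrete check of that formula (our own worked example, independent of the paper's networks), the sketch below evaluates it on the signed-distance function of a circle of radius R, where the exact answer near the interface is 1/R.

```python
import numpy as np

# Signed-distance level-set of a circle of radius R on a uniform grid.
h, R = 1.0 / 64, 0.25
xs = np.arange(-0.5, 0.5 + h / 2, h)
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - R

# Central differences for kappa = div(grad(phi)/|grad(phi)|):
#   kappa = (phi_xx*phi_y^2 - 2*phi_x*phi_y*phi_xy + phi_yy*phi_x^2)
#           / (phi_x^2 + phi_y^2)^(3/2)
px  = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
py  = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
pxx = (np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0)) / h**2
pyy = (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1)) / h**2
pxy = (np.roll(np.roll(phi, -1, 0), -1, 1) - np.roll(np.roll(phi, -1, 0), 1, 1)
       - np.roll(np.roll(phi, 1, 0), -1, 1)
       + np.roll(np.roll(phi, 1, 0), 1, 1)) / (4 * h**2)
kappa = (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) \
        / (px**2 + py**2 + 1e-12) ** 1.5

# np.roll wraps at the border, but cells near the interface are interior,
# so the wrap-around error never enters the sample below.
near = np.abs(phi) < h
print("mean numerical curvature:", float(kappa[near].mean()), "exact:", 1.0 / R)
```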