
    Streaming visualisation of quantitative mass spectrometry data based on a novel raw signal decomposition method

    As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data is stored as contiguous spectra. Recall of individual spectra is quick, but panoramas, zooming, and panning across whole datasets necessitate processing/memory overheads impractical for interactive use. Moreover, visualisation is challenging if significant quantification data is missing due to data-dependent acquisition of MS/MS spectra. To tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data is modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and the overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/
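
    The abstract describes the decomposition only at a high level; as a rough sketch of the streaming idea (not the seaMass implementation), the example below indexes hypothetical B-spline coefficient weights in an R-tree via the `rtree` package and pulls back only those whose support intersects the viewport, heaviest weights first for progressive display. The coordinates, weights, and support width are all invented for illustration.

```python
# A minimal sketch, not the seaMass implementation: sparse B-spline
# coefficient weights indexed in an R-tree so a viewer streams only the
# coefficients whose support overlaps the current m/z x retention-time
# viewport. All numbers below are illustrative.
from rtree import index  # pip install rtree (libspatialindex bindings)

# Hypothetical decomposition output: (m/z centre, RT centre, weight).
coeffs = [(400.2, 12.5, 3.1e4), (400.3, 12.6, 2.7e4), (873.9, 44.0, 9.8e3)]
support = 0.5  # assumed half-width of each basis function's support

idx = index.Index()
for i, (mz, rt, w) in enumerate(coeffs):
    # Key each coefficient by the bounding box of its basis support.
    idx.insert(i, (mz - support, rt - support, mz + support, rt + support), obj=w)

def stream_viewport(mz_min, rt_min, mz_max, rt_max):
    """Yield visible coefficients, heaviest first, for progressive display."""
    hits = idx.intersection((mz_min, rt_min, mz_max, rt_max), objects=True)
    for hit in sorted(hits, key=lambda h: h.object, reverse=True):
        yield hit.bbox, hit.object

for bbox, weight in stream_viewport(399.0, 10.0, 402.0, 15.0):
    print(bbox, weight)
```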

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with highly complex data such as remote sensing images with metric resolution over large areas, an innovative, fast, and robust image processing system is presented. Increasing levels of information are modeled and used to extract, represent, and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application that enhances and regularizes digital elevation models based on information collected from remote sensing images.

    Subdivision Surface based One-Piece Representation

    Subdivision surfaces are capable of modeling and representing complex shapes of arbitrary topology. However, methods for building the control mesh of a complex surface have not been studied much. Currently, most meshes of complicated objects come from triangulation and simplification of raster-scanned data points, as in the Stanford 3D Scanning Repository. This approach is costly and leads to very dense meshes.

    Subdivision surface based one-piece representation means representing the final object in a design process with only one subdivision surface, no matter how complicated the object's topology or shape. Hence the number of parts in the final representation is always one.

    In this dissertation we present the mathematical theories and geometric algorithms needed to support subdivision surface based one-piece representation. First, an explicit parametrization method is presented for exact evaluation of Catmull-Clark subdivision surfaces. Based on it, two approaches are proposed for constructing the one-piece representation of a given object with arbitrary topology. The first approach builds the one-piece representation by interpolation. Interpolation is a natural way to build models, but the fairness of the interpolating surface is a big concern in previous methods; with a similarity-based interpolation technique, we obtain better modeling results with fewer undesired artifacts and undulations. The second approach performs Boolean operations. Accurate Boolean operations over subdivision surfaces have not previously been attempted in the literature; we present a robust, error-controllable Boolean operation method that results in a one-piece representation. Because the one-piece representations produced by these two methods are usually dense, error-controllable simplification is needed. Two methods are presented for this purpose: adaptive tessellation and multiresolution analysis. Both can significantly reduce the complexity of a one-piece representation while providing accurate error estimation.

    A system performing subdivision surface based one-piece representation was implemented and tested on many examples, all of which show that our approaches produce very good results. Even though our methods are based on the Catmull-Clark subdivision scheme, we believe they can be adapted to other subdivision schemes with small modifications.
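
    The exact-evaluation, interpolation, and Boolean machinery is specific to the dissertation, but the underlying Catmull-Clark refinement step is standard. Below is a minimal sketch of one refinement iteration on a closed quad mesh; the function and variable names, and the closed-mesh assumption (every edge shared by exactly two faces), are ours.

```python
# One Catmull-Clark refinement step on a closed quad mesh (a sketch,
# not the dissertation's system). Faces are lists of vertex indices.
import numpy as np

def catmull_clark(verts, faces):
    verts = np.asarray(verts, dtype=float)
    face_pts = np.array([verts[f].mean(axis=0) for f in faces])

    # Map each undirected edge to the faces adjacent to it.
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    # Edge point: average of the edge's endpoints and its two face points.
    edge_pts, edge_idx = [], {}
    for e, adj in edge_faces.items():
        a, b = tuple(e)
        edge_idx[e] = len(edge_pts)
        edge_pts.append((verts[a] + verts[b] + face_pts[adj].sum(axis=0)) / 4.0)
    edge_pts = np.array(edge_pts)

    # Move each old vertex to (F + 2R + (n-3)P) / n, where F averages the
    # adjacent face points, R the adjacent edge midpoints, n is the valence.
    new_verts = np.empty_like(verts)
    for v in range(len(verts)):
        adj_edges = [e for e in edge_faces if v in e]
        adj_faces = sorted({fi for e in adj_edges for fi in edge_faces[e]})
        n = len(adj_edges)
        F = face_pts[adj_faces].mean(axis=0)
        R = np.mean([verts[list(e)].mean(axis=0) for e in adj_edges], axis=0)
        new_verts[v] = (F + 2 * R + (n - 3) * verts[v]) / n

    # Assemble: moved old verts, then edge points, then face points.
    nv, ne = len(verts), len(edge_pts)
    all_verts = np.vstack([new_verts, edge_pts, face_pts])
    new_faces = []
    for fi, f in enumerate(faces):
        for i, v in enumerate(f):
            prev, nxt = f[i - 1], f[(i + 1) % len(f)]
            new_faces.append([v,
                              nv + edge_idx[frozenset((v, nxt))],
                              nv + ne + fi,
                              nv + edge_idx[frozenset((v, prev))]])
    return all_verts, new_faces

# Sanity check on a unit cube: 8 + 12 + 6 = 26 vertices, 24 quads.
cube_v = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
cube_f = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
          [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
v2, f2 = catmull_clark(cube_v, cube_f)
print(len(v2), len(f2))  # 26 24
```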

    A hybrid representation for modeling, interactive editing, and real-time visualization of terrains with volumetric features

    Terrain rendering is a crucial part of many real-time applications. The easiest way to process and visualize terrain data in real time is to constrain the terrain model in several ways. This decreases the amount of data to be processed and the amount of processing power needed, but at the cost of expressivity and the ability to create complex terrains. The most popular terrain representation is a regular 2D grid, where the vertices are displaced in a third dimension by a displacement map, called a heightmap. This is the simplest way to represent terrain, and although it allows fast processing, it cannot model terrains with volumetric features. Volumetric approaches sample the 3D space by subdividing it into a 3D grid and represent the terrain as occupied voxels. They can represent volumetric features, but they require computationally intensive algorithms for rendering, and their memory requirements are high. We propose a novel representation that combines the voxel and heightmap approaches, and is expressive enough to allow creating terrains with caves, overhangs, cliffs, and arches, and efficient enough to allow terrain editing, deformations, and rendering in real time.
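
    One plausible reading of such a hybrid, sketched below under our own assumptions (the paper's actual representation is more elaborate): keep the cheap heightmap everywhere, and switch an individual column to voxels only when a volumetric feature such as a cave is carved into it.

```python
# A minimal sketch, not the paper's data structure: a regular heightmap
# with sparse per-column voxel overrides, so only columns containing
# caves or overhangs pay the volumetric storage cost.
import numpy as np

class HybridTerrain:
    def __init__(self, heights, cell=1.0):
        self.heights = np.asarray(heights, dtype=float)  # 2D height grid
        self.cell = cell
        self.voxel_columns = {}  # (i, j) -> set of solid voxel z-indices

    def carve_cave(self, i, j, z_lo, z_hi):
        """Convert column (i, j) to voxels and remove a vertical span."""
        if (i, j) not in self.voxel_columns:
            top = int(self.heights[i, j] / self.cell)
            self.voxel_columns[(i, j)] = set(range(top + 1))
        self.voxel_columns[(i, j)] -= set(range(z_lo, z_hi + 1))

    def is_solid(self, i, j, z):
        """Point query: voxel columns override the heightmap."""
        if (i, j) in self.voxel_columns:
            return z in self.voxel_columns[(i, j)]
        return z * self.cell <= self.heights[i, j]

terrain = HybridTerrain(np.full((64, 64), 10.0))
terrain.carve_cave(32, 32, z_lo=3, z_hi=6)  # hollow out a cave
print(terrain.is_solid(32, 32, 5), terrain.is_solid(31, 32, 5))  # False True
```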

    Actas do 10º Encontro Português de Computação Gráfica

    Proceedings of the 10th Portuguese Meeting on Computer Graphics (Encontro Português de Computação Gráfica), Lisbon, 1-3 October 2001. Research, development, and teaching in the field of Computer Graphics are, in Portugal, a positive reality with long traditions. The Portuguese Meeting on Computer Graphics (EPCG), held within the activities of the Portuguese Computer Graphics Group (GPCG), has regularly brought together, ever since the 1st EPCG (also held in Lisbon, back in July 1988), everyone working in this broad field and its countless applications. For the first time in the history of these meetings, the 10th EPCG was organized in close connection with the Image Processing and Computer Vision communities, through the Portuguese Association for Pattern Recognition (APRP), underlining the growing collaboration and convergence between those two areas and Computer Graphics. This is the book of proceedings of the 10th EPCG.

    Continuous and Adaptive Cartographic Generalization of River Networks

    The focus of our research is a new automated smoothing method and its applications. Traditionally, applying a smoothing method to a collection of polylines produces a new smoothed dataset; although derived from the original, it is stored independently. Since many smoothing methods are slow to execute, this is a valid trade-off, but it greatly increases the data storage requirements for each new smoothing. A consequence of this approach is that interactive map systems can only offer maps at a discrete set of scales. It is desirable to have a method fast enough to support reuse of a single base dataset for on-the-fly smoothing, producing maps at any scale.

    We were able to create a framework for the automated smoothing of river networks based on the following major contributions:
    – A wavelet-based method for polyline smoothing and endpoint preservation
    – An Inverse Mirror Periodic (IMP) representation of functions and signals, and dimensional wavelets
    – Smoothing of features that does not change abruptly between scales
    – Pruning of features in a continuous manner with respect to scale
    – River network connectedness maintained at all scales
    – Reuse of a base geographic dataset for all scales
    – Design and implementation of an interactive map viewer for linear hydrographic features that renders in subsecond time

    We have created an interactive map that can smoothly zoom to any region. Numerical experiments show that our wavelet-based method produces cartographically appropriate smoothing for tributaries. The system is implemented to view hydrographic data, such as the USGS National Hydrography Dataset (NHD). The map demonstrates that a wavelet-based approach is well suited for basic generalization operations, providing smoothing and pruning that depend continuously on map scale.
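
    As a rough sketch of the first contribution above, the example below smooths a polyline by zeroing its fine-scale wavelet detail bands and then pinning the endpoints. It uses off-the-shelf PyWavelets with standard symmetric extension rather than the paper's IMP representation, and the mapping from map scale to the number of dropped levels is an assumption.

```python
# Scale-dependent polyline smoothing with endpoint preservation -- a
# sketch using standard wavelets, not the paper's IMP representation.
import numpy as np
import pywt  # pip install PyWavelets

def smooth_polyline(points, drop_levels):
    """Zero the `drop_levels` finest detail bands of each coordinate."""
    points = np.asarray(points, dtype=float)
    out = np.empty_like(points)
    for dim in range(points.shape[1]):
        coeffs = pywt.wavedec(points[:, dim], 'db4', mode='symmetric')
        for lvl in range(1, drop_levels + 1):
            coeffs[-lvl] = np.zeros_like(coeffs[-lvl])
        rec = pywt.waverec(coeffs, 'db4', mode='symmetric')
        out[:, dim] = rec[: len(points)]
    # Pin endpoints so river confluences stay connected across scales.
    out[0], out[-1] = points[0], points[-1]
    return out

t = np.linspace(0, 4 * np.pi, 256)
river = np.column_stack([t, np.sin(t) + 0.2 * np.sin(15 * t)])  # wiggly line
coarse = smooth_polyline(river, drop_levels=3)  # for a smaller map scale
```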

    Wavelet-based multiresolution data representations for scalable distributed GIS services

    Ph.D. thesis by Jingsong Wu, Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002. Includes bibliographical references (p. 155-160).

    Demand for scalable distributed GIS services has grown greatly as the Internet continues to boom. However, currently available data representations for these services are limited by a lack of scalability in their data formats. In this research, four types of multiresolution data representations based on wavelet theory are put forward. The Wavelet Image (WImg) data format achieves dynamic zooming and panning of compressed image maps in a prototype GIS viewer. The Wavelet Digital Elevation Model (WDEM) format handles cell-based surface data; a WDEM is better than a raster pyramid in that it provides a non-redundant multiresolution representation. The Wavelet Arc (WArc) format decomposes curves into a multiresolution format through the lifting scheme. The Wavelet Triangulated Irregular Network (WTIN) format processes general terrain surfaces based on second-generation wavelet theory. By resampling a terrain surface at subdivision points through the modified Butterfly scheme, only one wavelet coefficient needs to be stored per point in the final representation, in contrast to the three coefficients per point needed in a general wavelet-based representation of a 3D object. Our scheme is an interpolating scheme and performs much better than the Hat wavelet filter on a surface. Boundary filters are designed to make the representation consistent with the rectangular boundary constraint. We use a multi-linked list and a quadtree array as the data structures for computation, and a method to convert a high-resolution DEM to a WTIN is also provided. These four wavelet-based representations provide consistent and efficient multiresolution formats for online GIS, making scalable distributed GIS services more efficient and implementable.
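
    As a minimal sketch of the non-redundant multiresolution idea behind a WDEM (the thesis's filters, boundary handling, and file format are not reproduced here), the example below wavelet-transforms a stand-in DEM once and reconstructs views at several resolutions from the single coefficient set, which is essentially the same total size as the DEM itself, unlike a raster pyramid's extra overhead of roughly one third.

```python
# Non-redundant multiresolution DEM via a standard 2D wavelet transform;
# a sketch only, not the thesis's WDEM format.
import numpy as np
import pywt

dem = np.random.default_rng(0).random((512, 512))  # stand-in elevation grid

# One coefficient set, essentially the same total size as the DEM.
coeffs = pywt.wavedec2(dem, 'bior2.2', level=4)

def reconstruct(coeffs, keep_levels):
    """Rebuild the DEM using only the coarsest `keep_levels` detail bands."""
    kept = [coeffs[0]] + [
        details if i < keep_levels else tuple(np.zeros_like(d) for d in details)
        for i, details in enumerate(coeffs[1:])
    ]
    return pywt.waverec2(kept, 'bior2.2')

overview = reconstruct(coeffs, keep_levels=1)  # coarse zoomed-out view
full = reconstruct(coeffs, keep_levels=4)      # full-resolution view
```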

    Granite: A scientific database model and implementation

    The principal goal of this research was to develop a formal, comprehensive model for representing highly complex scientific data. An effective model should provide a conceptually uniform way to represent data, and it should serve as a framework for an efficient and easy-to-use software environment that implements it. The dissertation work presented here describes such a model and its contributions to the field of scientific databases. In particular, the Granite model encompasses a wide variety of datatypes used across many disciplines of science and engineering today. It is unique in that it defines dataset geometry and topology as separate conceptual components of a scientific dataset. We provide a novel classification of geometries and topologies that has important practical implications for a scientific database implementation. The Granite model also offers integrated support for multiresolution and adaptive-resolution data. Many of these ideas have been addressed by others, but no one has tried to bring them all together in a single comprehensive model. The datasource portion of the Granite model offers several further contributions. In addition to providing a convenient conceptual view of rectilinear data, it also supports multisource data: data can be taken from various sources and combined into a unified view. The rod storage model is an abstraction for file storage that has proven an effective platform for developing efficient access to storage. Our spatial prefetching technique is built upon the rod storage model; it demonstrates very significant improvements in access to scientific datasets and allows machines to access data far too large to fit in main memory. These improvements bring the extremely large datasets now being generated in many scientific fields into the realm of tractability for the ordinary researcher. We validated the feasibility and viability of the model by implementing a significant portion of it in the Granite system. Extensive performance evaluations indicate that the features of the model can be provided in a user-friendly manner with an efficiency that is competitive with more ad hoc systems and more specialized application-specific solutions.
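
    The abstract does not spell out Granite's interfaces, so the following is a purely hypothetical illustration of rod-style storage with spatial prefetching: an out-of-core 3D array is read one rod (a short 1D run of cells) at a time, and the rod ahead along the traversal axis is fetched speculatively. All names and parameters are invented.

```python
# Hypothetical illustration only, not Granite's API: rod-style chunked
# reads over an out-of-core 3D volume, with a simple spatial prefetch of
# the next rod along the z (traversal) axis.
import numpy as np

class RodStore:
    def __init__(self, path, shape, dtype=np.float32, rod_len=64):
        self.data = np.memmap(path, dtype=dtype, mode='r', shape=shape)
        self.rod_len = rod_len
        self.cache = {}  # (x, y, rod_index) -> in-memory rod

    def _fetch(self, x, y, r):
        # Load a whole rod (contiguous run of cells along z) at once.
        key = (x, y, r)
        if key not in self.cache:
            z0 = r * self.rod_len
            self.cache[key] = np.array(self.data[x, y, z0:z0 + self.rod_len])
        return self.cache[key]

    def get(self, x, y, z):
        r = z // self.rod_len
        rod = self._fetch(x, y, r)
        if (r + 1) * self.rod_len < self.data.shape[2]:
            self._fetch(x, y, r + 1)  # prefetch the rod ahead of this one
        return rod[z % self.rod_len]

# Stand-in volume on disk so the sketch runs end to end.
shape = (32, 32, 256)
np.random.default_rng(1).random(shape).astype(np.float32).tofile('vol.raw')
store = RodStore('vol.raw', shape)
print(store.get(3, 4, 100))
```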

    A Flexible Kernel for Adaptive Mesh Refinement on GPU

    We present a flexible GPU kernel for adaptive on-the-fly refinement of meshes with arbitrary topology. By simply reserving a small amount of GPU memory to store a set of adaptive refinement patterns, on-the-fly refinement is performed by the GPU, without any preprocessing or additional topology data structure. The level of adaptive refinement can be controlled by specifying a per-vertex depth tag, in addition to the usual position, normal, color, and texture coordinates. This depth tag is used by the kernel to instantiate the correct refinement pattern. Finally, the refined patch produced for each triangle can be displaced by the vertex shader using any kind of geometric refinement, such as Bézier patch smoothing, scalar-valued displacement, procedural geometry synthesis, or subdivision surfaces. This refinement engine requires neither multi-pass rendering, nor fragment processing, nor special preprocessing of the input mesh structure, and it can be implemented on any GPU with vertex shading capabilities.
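
    The kernel itself is vertex-shader code; as a CPU-side sketch in Python of the pattern-instancing idea, the example below precomputes barycentric refinement patterns once per depth and instances them per triangle according to its depth tag. The pattern layout and the reserved depth range are our assumptions.

```python
# CPU-side sketch of pattern instancing (the paper's kernel runs on the
# GPU): refinement patterns are precomputed once in barycentric
# coordinates, then instanced per triangle according to its depth tag.
import numpy as np

def make_pattern(depth):
    """Barycentric grid for a uniformly refined triangle at `depth`."""
    n = 2 ** depth
    pts = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1 - i)]
    return np.array([(u, v, 1.0 - u - v) for u, v in pts])

PATTERNS = {d: make_pattern(d) for d in range(4)}  # small reserved table

def refine(tri_verts, depth_tag):
    """Instance the pattern for this triangle; a vertex shader would then
    displace the result (Bezier smoothing, displacement mapping, ...)."""
    return PATTERNS[depth_tag] @ np.asarray(tri_verts, dtype=float)

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(refine(tri, depth_tag=2).shape)  # (15, 3): 15 refined vertices
```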