    A Visual Approach to Analysis of Stress Tensor Fields

    We present a visual approach for the exploration of stress tensor fields. In contrast to common tensor visualization methods that provide only a single view of the tensor field, we pursue the idea of offering multiple perspectives on the data in attribute and object space. Advanced tensor visualization methods have a comparatively short tradition in the context of stress tensors, so we propose combining visualization techniques that domain experts are accustomed to with statistical views of tensor attributes. We apply this concept to tensor fields by extending the notion of shape space, which provides an intuitive way of finding tensor invariants that represent relevant physical properties. Using brushing techniques, the user can select features in attribute space, which are mapped to displayable entities in a three-dimensional hybrid visualization in object space. Volume rendering serves as context, while glyphs encode the full tensor information in focus regions. Tensorlines can be included to emphasize directionally coherent features in the tensor field. We show that the benefit of such a multi-perspective approach is manifold. Foremost, it provides easy access to the complexity of tensor data. Moreover, by including well-known analysis tools such as Mohr diagrams, users can familiarize themselves gradually with novel visualization methods. Finally, by employing focus-driven hybrid rendering, we significantly reduce clutter, which has been a major problem of other three-dimensional tensor visualization methods.
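
    The kinds of quantities that shape space, Mohr diagrams, and glyph encodings are built from can be illustrated with a short sketch: computing the principal stresses and classical invariants of a single symmetric stress tensor with NumPy. This is a minimal illustration of the underlying attribute computation, not the authors' shape-space implementation; the example stress state is made up.

        # Minimal sketch: principal stresses and basic invariants of a symmetric
        # 3x3 Cauchy stress tensor, the kind of quantities a Mohr diagram or a
        # shape-space plot is built from. Illustrative only; not the paper's code.
        import numpy as np

        sigma = np.array([[50.0, 30.0,  0.0],
                          [30.0, -20.0, 0.0],
                          [ 0.0,  0.0, 10.0]])   # example stress state in MPa

        # Principal stresses are the eigenvalues of the symmetric tensor.
        principal = np.sort(np.linalg.eigvalsh(sigma))[::-1]   # s1 >= s2 >= s3

        # Classical invariants often used as tensor attributes.
        I1 = np.trace(sigma)                               # first invariant
        I2 = 0.5 * (I1**2 - np.trace(sigma @ sigma))       # second invariant
        I3 = np.linalg.det(sigma)                          # third invariant
        tau_max = 0.5 * (principal[0] - principal[-1])     # maximum shear (Mohr radius)

        print("principal stresses:", principal)
        print("I1, I2, I3:", I1, I2, I3)
        print("maximum shear stress:", tau_max)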

    The Use of Computer Visualization in the Analysis of Breathing Curves

    Researchers in diverse domains use advanced computer techniques to describe complex entities and processes and to visualize phenomena that are often unavailable for direct observation. However, the concept of computer visualization implies more than a convenient, impressive, or high-rate transfer of information. It also addresses the problems of perception and processing, and the further cultivation of personal qualities such as intuition, professional "talent", and figurative thinking, which are of value to experts in any domain. Computer visualization traditionally comprises sophisticated techniques such as computer graphics, animation, and virtual reality. Computer graphics has long delivered strong instruments for creating, processing, and interacting with data representations, and the interactive paradigm has led to the emergence of a new area within artificial intelligence called cognitive computer graphics. Cognitive graphics allows physicians to draw significant conclusions from a modest volume of information and, as a whole, forms a separate subfield of medical research. Visualization provides experts with data on the current state of patients so that their conditions can be monitored continuously. In this article, we show how computer visualization can be used to study characteristics (including network imprints) of a disease as common as bronchial asthma. The patients were grouped according to the degree to which psychological factors influenced the occurrence, progression, and course of the disease. The study focuses on the comparison and analysis of the patients' spirograms and demonstrates the presence of physiological and psycho-physiological features among patients diagnosed with bronchial asthma. In this respect, computer visualization provides a solid platform for thorough research and deep analysis in spirometry.

    Multiple dataset visualization (MDV) framework for scalar volume data

    Many applications require comparative analysis of multiple datasets representing different samples, conditions, time instants, or views in order to develop a better understanding of the scientific problem or system under consideration. One effective approach to such analysis is visualization of the data. In this PhD thesis, we propose an innovative multiple dataset visualization (MDV) approach in which two or more datasets of a given type are rendered concurrently in the same visualization. MDV is an important concept for cases where it is not possible to make an inference based on a single dataset, and comparisons between many datasets are required to reveal cross-correlations among them. The proposed MDV framework, which deals with fundamental issues that arise when several datasets are visualized together, follows a multithreaded architecture consisting of three core components: data preparation/loading, visualization, and rendering. The visualization module, the major focus of this study, currently deals with isosurface extraction and texture-based rendering techniques. For isosurface extraction, our all-in-memory approach keeps the datasets under consideration and the corresponding geometric data in memory, whereas the alternative only-polygons-or-points-in-memory approach keeps only the geometric data in memory. To address the issues related to storage and computation, we develop adaptive data coherency and multiresolution schemes. The inter-dataset coherency scheme exploits the similarities among datasets to approximate portions of the isosurfaces of some datasets using the isosurface of one or more reference datasets, whereas the intra/inter-dataset multiresolution scheme processes the selected portions of each data volume at varying levels of resolution. The graphics hardware-accelerated approaches adopted for MDV include volume clipping, isosurface extraction, and volume rendering, which use 3D textures and advanced per-fragment operations. With appropriate user-defined threshold criteria, we find that the various MDV techniques maintain a linear relationship between processing time and N, improve the geometry generation and rendering time, and increase the maximum N that can be handled (N: number of datasets). Finally, we demonstrate the effectiveness and usefulness of the proposed MDV by visualizing 3D scalar data (representing electron density distributions in magnesium oxide and magnesium silicate) from a parallel quantum mechanical simulation.
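
    As a rough illustration of the all-in-memory idea, the sketch below extracts isosurfaces from several scalar volumes concurrently and keeps both the volumes and the resulting geometry in memory, using scikit-image's marching cubes. The dataset names, sizes, and isovalue are placeholders, and the code is not taken from the thesis's framework.

        # Minimal sketch of the "all-in-memory" idea: extract isosurfaces from N
        # scalar volumes concurrently so they can be rendered in one scene.
        # Uses scikit-image marching cubes; dataset names are placeholders.
        import numpy as np
        from concurrent.futures import ThreadPoolExecutor
        from skimage.measure import marching_cubes

        def make_volume(seed, shape=(64, 64, 64)):
            rng = np.random.default_rng(seed)
            return rng.random(shape)          # stand-in for a real scalar volume

        volumes = {f"dataset_{i}": make_volume(i) for i in range(4)}   # N = 4

        def extract(item, level=0.5):
            name, vol = item
            verts, faces, normals, values = marching_cubes(vol, level=level)
            return name, verts, faces

        # Keep all volumes and the resulting geometry in memory, one surface per dataset.
        with ThreadPoolExecutor() as pool:
            surfaces = {name: (verts, faces)
                        for name, verts, faces in pool.map(extract, volumes.items())}

        for name, (verts, faces) in surfaces.items():
            print(name, "vertices:", len(verts), "triangles:", len(faces))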

    A Business Intelligence Solution, based on a Big Data Architecture, for processing and analyzing the World Bank data

    The rapid growth in data volume and complexity has necessitated the adoption of advanced technologies to extract valuable insights for decision-making. This project aims to address this need by developing a comprehensive framework that combines Big Data processing, analytics, and visualization techniques to enable effective analysis of World Bank data. The problem addressed in this study is the need for a scalable and efficient Business Intelligence solution that can handle the vast amounts of data generated by the World Bank. Therefore, a Big Data architecture is implemented on a real use case for the International Bank for Reconstruction and Development. The findings of this project demonstrate the effectiveness of the proposed solution. Through the integration of Apache Spark and Apache Hive, data is processed using Extract, Transform and Load (ETL) techniques, allowing for efficient data preparation. The use of Apache Kylin enables the construction of a multidimensional model, facilitating fast and interactive queries on the data. Moreover, data visualization techniques are employed to create intuitive and informative visual representations of the analysed data. The key conclusions drawn from this project highlight the advantages of a Big Data-driven Business Intelligence solution for processing and analysing World Bank data. The implemented framework shows improved scalability, performance, and flexibility compared to traditional approaches. In conclusion, this bachelor thesis presents a Business Intelligence solution based on a Big Data architecture for processing and analysing World Bank data. The project findings emphasize the importance of scalable and efficient data processing techniques, multidimensional modelling, and data visualization for deriving valuable insights. The application of these techniques contributes to the field by demonstrating the potential of Big Data Business Intelligence solutions for addressing the challenges associated with large-scale data analysis.
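
    A minimal sketch of the Extract, Transform and Load step described above, assuming a PySpark session with Hive support and a World Bank indicator CSV with columns such as country, indicator, year, and value; the path, schema, and table name are illustrative assumptions rather than the project's actual pipeline.

        # Minimal ETL sketch with PySpark and Hive. The input file, its columns,
        # and the output table are assumed for illustration only.
        from pyspark.sql import SparkSession, functions as F

        spark = (SparkSession.builder
                 .appName("worldbank-etl")
                 .enableHiveSupport()
                 .getOrCreate())

        # Extract: load the raw indicator data.
        raw = spark.read.csv("data/world_bank_indicators.csv",
                             header=True, inferSchema=True)

        # Transform: keep valid rows and aggregate to one value per country/indicator/year.
        clean = (raw.dropna(subset=["value"])
                    .withColumn("year", F.col("year").cast("int"))
                    .groupBy("country", "indicator", "year")
                    .agg(F.avg("value").alias("value")))

        # Load: persist as a Hive table that a cube engine such as Kylin could build on.
        clean.write.mode("overwrite").saveAsTable("worldbank.indicators_clean")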

    14-08 Big Data Analytics to Aid Developing Livable Communities

    In transportation, the ubiquitous deployment of low-cost sensors combined with powerful computer hardware and high-speed networks makes big data available. USDOT defines big data research in transportation as a set of advanced techniques applied to the capture, management, and analysis of very large and diverse volumes of data. Data in transportation are usually well organized into tables and are characterized by relatively low dimensionality yet huge numbers of records. Big data research in transportation therefore faces unique challenges in processing huge numbers of data records and data streams effectively. The purpose of this study is to investigate the problems caused by large data volumes and data streams and to develop applications for data analysis in transportation. To process large numbers of records efficiently, we propose aggregating the data at multiple resolutions and exploring the data at the resolution that balances accuracy and speed. Techniques and algorithms for statistical analysis and data visualization have been developed for efficient data analytics using multiresolution data aggregation. The results provide a first step towards a rigorous framework for general analytical processing of big data in transportation.
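
    The multiresolution aggregation idea can be sketched as follows: pre-aggregate a stream of detector records at several temporal resolutions so that coarse summaries are cheap to scan and finer detail is fetched only where needed. The column names and resolutions below are assumptions for illustration, not the study's implementation.

        # Minimal sketch of multiresolution aggregation for a transportation data
        # stream, using pandas. Column names and resolutions are illustrative.
        import numpy as np
        import pandas as pd

        # Stand-in for raw detector records: one row per observation.
        n = 100_000
        records = pd.DataFrame({
            "timestamp": pd.date_range("2014-08-01", periods=n, freq="s"),
            "speed_mph": np.random.default_rng(0).normal(55, 8, n),
        })

        # Build a pyramid of aggregates at 1-minute, 15-minute, and 1-hour resolution.
        pyramid = {
            rule: records.resample(rule, on="timestamp")["speed_mph"].agg(["mean", "count"])
            for rule in ("1min", "15min", "1h")
        }

        # Scan the coarse level first, then drill into a finer level for a window.
        print(pyramid["1h"].head())
        window = pyramid["1min"].loc["2014-08-01 06:00":"2014-08-01 07:00"]
        print("rows at 1-minute resolution in the chosen hour:", len(window))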

    An interactive ImageJ plugin for semi-automated image denoising in electron microscopy

    The recent advent of 3D electron microscopy (EM) has allowed for the detection of nanometer-resolution structures. This has caused an explosion in dataset size, necessitating the development of automated workflows. Moreover, large 3D EM datasets typically require hours to days to be acquired, and accelerated imaging typically results in noisy data. Advanced denoising techniques can alleviate this, but they tend to be less accessible to the community due to low-level programming environments, complex parameter tuning, or computational bottlenecks. We present DenoisEM: an interactive and GPU-accelerated denoising plugin for ImageJ that ensures fast parameter tuning and processing through parallel computing. Experimental results show that DenoisEM is an order of magnitude faster than related software and can accelerate data acquisition by a factor of 4 without significantly affecting data quality. Lastly, we show that image denoising benefits visualization and (semi-)automated segmentation and analysis of ultrastructure in various volume EM datasets.
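
    DenoisEM itself is an ImageJ plugin, but the workflow it supports, tuning a denoising filter on a small crop before applying it to a large volume, can be sketched with scikit-image's non-local means filter. The test image, noise level, and parameter values below are illustrative assumptions, not the plugin's algorithm or defaults.

        # Illustrative sketch: tune a denoising filter on a small crop, then apply
        # the chosen parameters to the full image (or each slice of a volume).
        import numpy as np
        from skimage import data, util
        from skimage.restoration import denoise_nl_means, estimate_sigma

        # Stand-in for a noisy EM slice: a test image with synthetic noise.
        clean = util.img_as_float(data.camera())
        noisy = util.random_noise(clean, mode="gaussian", var=0.01)

        # Tune on a small crop first (fast), as one would do interactively.
        crop = noisy[100:228, 100:228]
        sigma_est = np.mean(estimate_sigma(crop))
        for h in (0.6, 0.8, 1.0):
            denoised_crop = denoise_nl_means(crop, h=h * sigma_est, sigma=sigma_est,
                                             fast_mode=True, patch_size=5, patch_distance=6)
            print(f"h factor {h}: crop MSE vs noisy input "
                  f"{np.mean((denoised_crop - crop) ** 2):.5f}")

        # Apply the chosen parameters to the full slice.
        denoised = denoise_nl_means(noisy, h=0.8 * sigma_est, sigma=sigma_est,
                                    fast_mode=True, patch_size=5, patch_distance=6)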

    A review of data visualization: opportunities in manufacturing sequence management.

    Data visualization now benefits from developments in technologies that offer innovative ways of presenting complex data. These potentially have widespread application in communicating the complex information domains typical of manufacturing sequence management environments in global enterprises. In this paper the authors review the visualization functionalities, techniques, and applications reported in the literature, map these to manufacturing sequence information presentation requirements, and identify the opportunities available and likely development paths. Current leading-edge practice in dynamic updating and communication with suppliers is not being exploited in manufacturing sequence management; it could provide significant benefits to manufacturing businesses. In the context of global manufacturing operations and broad-based user communities with differing needs served by common data sets, tool functionality is generally ahead of user application.

    Object-based representation and analysis of light and electron microscopic volume data using Blender

    BACKGROUND: Rapid improvements in light and electron microscopy imaging techniques and the development of 3D anatomical atlases necessitate new approaches for the visualization and analysis of image data. Pixel-based representations of raw light microscopy data suffer from limitations in the number of channels that can be visualized simultaneously. Complex electron microscopic reconstructions from large tissue volumes are also challenging to visualize and analyze. RESULTS: Here we exploit the advanced visualization capabilities and flexibility of the open-source platform Blender to visualize and analyze anatomical atlases. We use light-microscopy-based gene expression atlases and electron microscopy connectome volume data from larval stages of the marine annelid Platynereis dumerilii. We build object-based larval gene expression atlases in Blender and develop tools for annotation and coexpression analysis. We also represent and analyze connectome data, including neuronal reconstructions and the underlying synaptic connectivity. CONCLUSIONS: We demonstrate the power and flexibility of Blender for visualizing and exploring complex anatomical atlases. The resources we have developed for Platynereis will facilitate data sharing and the standardization of anatomical atlases for this species. The flexibility of Blender, particularly its embedded Python application programming interface, means that our methods can easily be extended to other organisms. The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/European Research Council Grant Agreement 260821.
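
    A minimal sketch of the object-based representation through Blender's embedded Python API (to be run inside Blender): each anatomical structure becomes a named mesh object carrying a custom property for annotation. The structure names, geometry, and property key are placeholders, not the Platynereis atlas data or the authors' tools.

        # Minimal sketch using Blender's embedded Python API (bpy), run inside Blender.
        # Each "structure" becomes a named mesh object with a custom property.
        import bpy

        def add_structure(name, verts, faces):
            """Create a mesh object for one anatomical structure and link it to the scene."""
            mesh = bpy.data.meshes.new(name)
            mesh.from_pydata(verts, [], faces)
            mesh.update()
            obj = bpy.data.objects.new(name, mesh)
            bpy.context.collection.objects.link(obj)
            return obj

        # Two placeholder "expression domains" represented as simple tetrahedra.
        tetra_verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        tetra_faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

        for i, gene in enumerate(["geneA_domain", "geneB_domain"]):
            shifted = [(x + 2 * i, y, z) for x, y, z in tetra_verts]
            obj = add_structure(gene, shifted, tetra_faces)
            obj["gene"] = gene   # custom property usable for annotation or coexpression queries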