
    Designing Improved Sediment Transport Visualizations

    Monitoring, or more commonly, modeling of sediment transport in the coastal environment is a critical task with relevance to coastline stability, beach erosion, tracking environmental contaminants, and safety of navigation. Increased intensity and regularity of storms such as Superstorm Sandy heighten the importance of our understanding of sediment transport processes. A weakness of current modeling capabilities is the difficulty of visualizing the results in an intuitive manner. Many of the available visualization software packages display only a single variable at a time, usually as a two-dimensional, plan-view cross-section. With such limited display capabilities, sophisticated 3D models are undermined in both the interpretation of results and the dissemination of information to the public. Here we explore a subset of existing modeling capabilities (specifically, modeling scour around man-made structures) and visualization solutions, examine their shortcomings, and present a design for a 4D visualization for sediment transport studies that is based on perceptually-focused data visualization research and recent and ongoing developments in multivariate displays. Vector and scalar fields are co-displayed, yet kept independently identifiable by exploiting human perception's separation of color, texture, and motion. Bathymetry, sediment grain-size distribution, and forcing hydrodynamics are a subset of the variables investigated for simultaneous representation. Direct interaction with field data is tested to support rapid validation of sediment transport model results. Our goal is a tight integration of both simulated data and real-world observations to support analysis and simulation of the impact of major sediment transport events such as hurricanes. We unite modeled results and field observations within a geodatabase designed as an application schema of the Arc Marine Data Model. Our real-world focus is on the Redbird Artificial Reef Site, roughly 18 nautical miles offshore of Delaware Bay, Delaware, where repeated surveys have identified active scour and bedform migration in 27 m water depth amongst the more than 900 deliberately sunken subway cars and vessels. Coincidentally collected high-resolution multibeam bathymetry, backscatter, and side-scan sonar data from surface and autonomous underwater vehicle (AUV) systems, along with complementary sub-bottom, grab sample, bottom imagery, and wave and current (via ADCP) datasets, provide the basis for analysis. This site is particularly attractive due to overlap with the Delaware Bay Operational Forecast System (DBOFS), a model that provides historical and forecast oceanographic data that can be tested in hindcast against significant changes observed at the site during Superstorm Sandy and in predicting future changes through small-scale modeling around the individual reef objects.
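    As a rough illustration of the co-display idea described above (not the authors' implementation), the following Python sketch overlays a scalar field encoded as color with a vector field encoded as arrow glyphs, so the two channels remain separately readable; the grid, bathymetry, and current values are synthetic placeholders.

    # Minimal sketch (not the authors' implementation): co-display a scalar field
    # (bathymetry, as color) and a vector field (depth-averaged current, as glyphs)
    # so each perceptual channel stays independently readable.
    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic 2D grid standing in for a surveyed patch of seafloor
    x, y = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
    bathymetry = -27 + 0.5 * np.sin(x / 150) * np.cos(y / 200)   # scalar field (m)
    u = 0.3 * np.cos(y / 300)                                    # current, east (m/s)
    v = 0.1 * np.sin(x / 300)                                    # current, north (m/s)

    fig, ax = plt.subplots(figsize=(7, 6))
    # Scalar field mapped to color
    mesh = ax.pcolormesh(x, y, bathymetry, cmap="viridis", shading="auto")
    fig.colorbar(mesh, ax=ax, label="Depth (m)")
    # Vector field overlaid as arrows, subsampled so the glyphs stay legible
    step = 4
    ax.quiver(x[::step, ::step], y[::step, ::step],
              u[::step, ::step], v[::step, ::step], color="white")
    ax.set_xlabel("Easting (m)")
    ax.set_ylabel("Northing (m)")
    ax.set_title("Scalar (color) and vector (glyph) fields co-displayed")
    plt.show()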

    Visual Viper: a portable visualization library for streamlined scientific communications.

    As the healthcare sector undergoes digital transformation, the influx of data for health professionals and researchers has surged. The increased need for data visualizations to comprehend this information led to the development of Visual Viper, a Python library aimed at automating data visualization and streamlining the often labor-intensive process of generating visualizations. Visual Viper uses Vega-Lite, a high-level grammar of interactive graphics, to create visualizations from various research data sources via a convenient application programming interface (API). This automation saves time and facilitates consistency in science communication. The library's functionality comprises interconnected components: it begins by retrieving data from a selected source, then transforms the data to suit visualization requirements; subsequently, Visual Viper renders the charts using Vega-Lite and deploys them for use in scientific communication. Implemented within a modular and extensible plugin architecture, it accommodates different data sources and visualization types. Each stage can be modified independently, enabling extensive customization for specific use cases without affecting the library's overall functionality. Important paradigms in Visual Viper's development include object-oriented programming (OOP) and test-driven development (TDD). By using OOP, the library adopts a structured codebase that is easier to manage and maintain. The principles of encapsulation, inheritance, and polymorphism provide efficiency and flexibility, while the use of classes facilitates code reuse. The 'DatasetBuilder' class fetches and preprocesses data from various sources, the 'ChartNotationBuilder' class creates the chart layout and visual aesthetics based on the preprocessed data, and the 'ChartDeployer' class handles the deployment of the finished visualizations. These classes encapsulate related functions and data, reducing complexity and aiding code maintenance and extension. The TDD approach, which involves writing tests before the actual code, ensures all functions operate as intended, leading to improved code quality, simplified debugging, and a faster development cycle. The implementation is environment-agnostic: the library can run in various environments without significant changes, operating independently on local machines, on AWS Lambda, or as a Web API (serverless deployment). Future steps for Visual Viper include the development of plugins such as a Google Sheets Dataset Builder and a Figma Chart Deployer, and the creation of additional Vega-Lite Chart Notation Builders such as Bar Chart, Survival Chart, and Forest Plot. Once these steps are complete, Visual Viper will be packaged as an importable Python package, with an efficiency evaluation to follow. In conclusion, Visual Viper provides a robust and flexible tool for data visualization, bolstering the efficiency of scientific communication in the healthcare sector.
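    The abstract names three cooperating classes ('DatasetBuilder', 'ChartNotationBuilder', 'ChartDeployer') and a Vega-Lite output. The following is a minimal Python sketch of how such a pipeline could fit together; the method names, the in-memory data source, and the file-based deployer are assumptions for illustration, not Visual Viper's actual API.

    # Minimal sketch (assumptions, not Visual Viper's actual API): the three classes
    # named in the abstract composed into a fetch -> notate -> deploy pipeline that
    # emits a Vega-Lite specification as a plain dict.
    import json
    from typing import Any, Dict, List

    class DatasetBuilder:
        """Fetches and lightly preprocesses records; here the source is an in-memory list."""
        def __init__(self, records: List[Dict[str, Any]]):
            self.records = records

        def build(self) -> List[Dict[str, Any]]:
            # Placeholder preprocessing: drop rows containing missing values.
            return [r for r in self.records if all(v is not None for v in r.values())]

    class ChartNotationBuilder:
        """Turns preprocessed records into a Vega-Lite specification (a JSON-serializable dict)."""
        def __init__(self, x_field: str, y_field: str):
            self.x_field, self.y_field = x_field, y_field

        def build(self, records: List[Dict[str, Any]]) -> Dict[str, Any]:
            return {
                "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
                "data": {"values": records},
                "mark": "bar",
                "encoding": {
                    "x": {"field": self.x_field, "type": "nominal"},
                    "y": {"field": self.y_field, "type": "quantitative"},
                },
            }

    class ChartDeployer:
        """Deploys the finished chart; writing to a local file stands in for any real target."""
        def deploy(self, spec: Dict[str, Any], path: str) -> None:
            with open(path, "w") as fh:
                json.dump(spec, fh, indent=2)

    if __name__ == "__main__":
        rows = [{"group": "A", "count": 4}, {"group": "B", "count": 7}]
        data = DatasetBuilder(rows).build()
        spec = ChartNotationBuilder("group", "count").build(data)
        ChartDeployer().deploy(spec, "chart.vl.json")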

    Knowledge-based systems and geological survey

    This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, the field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as: field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; storage, preservation, and dissemination of digital records
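    As a loose illustration of the object-hierarchy idea above (class names, attributes, and relationships here are assumptions, not a published geoscience ontology), a knowledge system could model spatial objects and their parent/child relationships directly, with any map acting as just one view over those objects:

    # Illustrative sketch only: hierarchies of spatial objects and their
    # relationships, with map sheets demoted to one visualization of the data.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SpatialObject:
        name: str
        geometry_wkt: str                        # geometry kept as WKT for simplicity
        parent: Optional["SpatialObject"] = None
        children: List["SpatialObject"] = field(default_factory=list)

        def add_child(self, child: "SpatialObject") -> None:
            child.parent = self
            self.children.append(child)

    @dataclass
    class GeologicalUnit(SpatialObject):
        lithology: str = "unknown"
        age: str = "unknown"

    @dataclass
    class Fault(SpatialObject):
        displacement_m: Optional[float] = None

    # The survey area holds units and structures as related objects rather than
    # as symbols fixed to a map sheet of set scale and content.
    region = SpatialObject("Survey area", "POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))")
    unit = GeologicalUnit("Sandstone unit", "POLYGON((1 1, 4 1, 4 4, 1 4, 1 1))",
                          lithology="sandstone", age="Triassic")
    fault = Fault("Boundary fault", "LINESTRING(4 0, 4 10)", displacement_m=120.0)
    region.add_child(unit)
    region.add_child(fault)

    for obj in region.children:
        print(type(obj).__name__, obj.name)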

    Instrumenting the Acquisition Design Process: Developing Methods for Engineering Process Metrics Capture and Analysis

    Excerpt from the Proceedings of the Nineteenth Annual Acquisition Research Symposium. There is a deficit of data on the detailed execution of design acquisition processes, data which is needed to truly understand and improve them. Simultaneously, the movement to digital engineering, and specifically model-based engineering, offers a key opportunity to gather the continual data needed to move acquisition processes forward. To address this issue, methods must be developed and implemented to capture key process metrics on the full product life cycle, which includes conception, design, development, and test. The engineering acquisition process should be instrumented, capturing engineering metrics at a level of granularity sufficient to provide actionable information to other acquisition programs. These methods would be implemented on a set of diverse engineering programs, utilizing internal engineering design tools, product data and life-cycle management tools, and manpower reporting systems to capture data. This paper first discusses a number of specific examples of process instrumentation undertaken by the authors, then concludes with recommended lines of research for fully instrumenting acquisition processes. Approved for public release; distribution is unlimited.
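    As one hedged illustration of what instrumented process metrics might look like (the record fields and event taxonomy below are assumptions, not the authors' instrumentation), captured events could be stored as timestamped, role-level records tied to a life-cycle phase and aggregated later:

    # Illustrative sketch only: recording engineering-process metrics as events
    # keyed to a life-cycle phase, so effort can be aggregated across programs.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    from typing import Dict, List

    @dataclass
    class ProcessMetricEvent:
        program: str            # acquisition program identifier
        phase: str              # e.g. "conception", "design", "development", "test"
        activity: str           # e.g. "design_review", "model_checkin"
        actor_role: str         # role rather than individual, to keep data shareable
        duration_hours: float
        timestamp: str

    def summarize_hours_by_phase(events: List[ProcessMetricEvent]) -> Dict[str, float]:
        """Aggregate captured effort by life-cycle phase."""
        totals: Dict[str, float] = {}
        for e in events:
            totals[e.phase] = totals.get(e.phase, 0.0) + e.duration_hours
        return totals

    events = [
        ProcessMetricEvent("Program-A", "design", "model_checkin", "systems_engineer",
                           1.5, datetime.now(timezone.utc).isoformat()),
        ProcessMetricEvent("Program-A", "test", "test_run", "test_engineer",
                           3.0, datetime.now(timezone.utc).isoformat()),
    ]
    print(summarize_hours_by_phase(events))
    print(asdict(events[0]))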

    Beyond XSPEC: Towards Highly Configurable Analysis

    We present a quantitative comparison between software features of the de facto standard X-ray spectral analysis tool, XSPEC, and ISIS, the Interactive Spectral Interpretation System. Our emphasis is on customized analysis, with ISIS offered as a strong example of configurable software. While noting that XSPEC has been of immense value to astronomers, and that its scientific core is moderately extensible--most commonly via the inclusion of user-contributed "local models"--we identify a series of limitations with its use beyond conventional spectral modeling. We argue that from the viewpoint of the astronomical user, the XSPEC internal structure presents a Black Box Problem, with many of its important features hidden from the top-level interface, thus discouraging user customization. Drawing from examples in custom modeling, numerical analysis, parallel computation, visualization, data management, and automated code generation, we show how a numerically scriptable, modular, and extensible analysis platform such as ISIS facilitates many forms of advanced astrophysical inquiry. Comment: Accepted by PASP, July 2008 (15 pages)
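    To make the contrast concrete, the sketch below shows, in generic Python rather than the ISIS or XSPEC APIs, the style of extensibility being argued for: a user-contributed "local model" registered as an ordinary function on the same footing as built-in models, with nothing hidden behind the top-level interface. All names here are illustrative assumptions.

    # Conceptual sketch only (not the ISIS or XSPEC API): user-defined models are
    # registered in the same table the built-ins use, so the scripting layer can
    # evaluate, combine, or fit them without any privileged internal interface.
    from typing import Callable, Dict, Sequence
    import math

    MODEL_REGISTRY: Dict[str, Callable[[Sequence[float], Sequence[float]], list]] = {}

    def register_model(name: str):
        """Decorator adding a model function to the shared registry."""
        def wrap(fn):
            MODEL_REGISTRY[name] = fn
            return fn
        return wrap

    @register_model("powerlaw")
    def powerlaw(energies, params):
        norm, index = params
        return [norm * e ** (-index) for e in energies]

    @register_model("my_local_model")          # user-contributed model, same footing
    def my_local_model(energies, params):
        norm, line_center, width = params
        return [norm * math.exp(-((e - line_center) / width) ** 2) for e in energies]

    def evaluate(name: str, energies, params):
        return MODEL_REGISTRY[name](energies, params)

    grid = [0.5, 1.0, 2.0, 5.0]
    print(evaluate("powerlaw", grid, [1.0, 1.7]))
    print(evaluate("my_local_model", grid, [1.0, 2.0, 0.3]))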

    Adaptive model-driven user interface development systems

    Adaptive user interfaces (UIs) were introduced to address some of the usability problems that plague many software applications. Model-driven engineering formed the basis for most of the systems targeting the development of such UIs. An overview of these systems is presented, and a set of criteria is established to evaluate the strengths and shortcomings of the state of the art, which is categorized under architectures, techniques, and tools. A summary of the evaluation is presented in tables that visually illustrate the fulfillment of each criterion by each system. The evaluation identified several gaps in the existing art and highlighted promising areas for improvement.

    Bioconductor: open software development for computational biology and bioinformatics.

    The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples