
    Research directions in data wrangling: Visualizations and transformations for usable and credible data

    In spite of advances in technologies for working with data, analysts still spend an inordinate amount of time diagnosing data quality issues and manipulating data into a usable form. This process of ‘data wrangling’ often constitutes the most tedious and time-consuming aspect of analysis. Though data cleaning and integration are longstanding issues in the database community, relatively little research has explored how interactive visualization can advance the state of the art. In this article, we review the challenges and opportunities associated with addressing data quality issues. We argue that analysts might more effectively wrangle data through new interactive systems that integrate data verification, transformation, and visualization. We identify a number of outstanding research questions, including how appropriate visual encodings can facilitate apprehension of missing data, discrepant values, and uncertainty; how interactive visualizations might facilitate data transform specification; and how recorded provenance and social interaction might enable wider reuse, verification, and modification of data transformations.

    Extended Focus & Context for Visualizing Abstract Data on Maps

    Traditionally, the subject of cartography has been geographic data, while information visualization has dealt with abstract, non-spatial information. With the ever-increasing amount of data harvested worldwide, more and more of this data includes both geographic and abstract information at the same time. Efficient visualization of such data calls for combining maps with techniques for complex, structured information spaces, a key concept among these being focus & context techniques. In this paper, an extended focus & context concept is proposed that efficiently supports the combination of visualization techniques with maps.

    Performance analysis of wind turbines with leading-edge erosion and erosion-safe mode operation

    For offshore wind turbines, Leading-Edge Erosion (LEE) due to rain poses a serious risk to structural integrity and can lead to a performance loss on the order of a few percent of the Annual Energy Production (AEP). A proposed mitigation strategy is the so-called Erosion-Safe Mode (ESM). In this work, the AEP losses caused by LEE or by operating in the ESM are compared for two reference turbines, the IEA 15 MW and the NREL 5 MW. For both turbines, performance is evaluated in uniform and sheared inflow conditions. The effects of erosion are modeled by creating clean and rough airfoil polars in XFOIL, under the assumption that erosion occurs once a critical blade-element section speed is exceeded. Power curves for LEE and ESM operation are calculated using the free-wake vortex method CACTUS. Results show that LEE mainly reduces power production below rated capacity, while operating in the ESM predominantly sheds performance at the turbine's rated power. This study therefore shows that a break-even point for the ESM exists. The AEP loss due to erosion can be successfully mitigated with the ESM at sites with low mean wind speed; at sites with higher mean wind speed, however, operation with erosion leads to a lower AEP loss. The break-even point shows little sensitivity to blade design and to mean shear variations, but depends strongly on how frequently the ESM needs to be applied, which is driven by the predicted amount of damaging rain events. In conclusion, erosion-optimal operation is governed strongly by site characteristics and much less by turbine design, and the viability of an ESM strategy can be significantly expanded by a better understanding of blade damage mechanisms and improved forecasting of the related weather events. Wind Energy.
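    The break-even comparison described in this abstract can be sketched numerically: integrate a power curve against a Weibull wind-speed distribution to obtain AEP, then compare the loss from an eroded curve (reduced below-rated power) with the loss from ESM curtailment (reduced rated power) at sites with different mean wind speeds. The power-curve shapes, the 3% erosion penalty, and the 92% curtailment level below are illustrative assumptions for the sketch, not the paper's XFOIL/CACTUS results.

    ```python
    import numpy as np
    from math import gamma

    def weibull_pdf(v, k, scale):
        # Weibull wind-speed density; shape k=2 is a common site assumption
        return (k / scale) * (v / scale) ** (k - 1) * np.exp(-((v / scale) ** k))

    def aep_mwh(power_curve, mean_speed, k=2.0, hours=8760.0):
        # Annual energy production (MWh/yr) for a power curve (MW) at a given mean wind speed
        scale = mean_speed / gamma(1.0 + 1.0 / k)  # scale parameter from the mean via the Gamma relation
        v = np.linspace(0.0, 30.0, 601)
        p = power_curve(v)
        return hours * float(np.sum(p * weibull_pdf(v, k, scale)) * (v[1] - v[0]))

    # Toy power curves (illustrative shapes only, NOT the paper's computed results):
    RATED = 15.0  # MW, loosely modeled on the IEA 15 MW reference class

    def clean(v):
        # Simple cubic ramp from cut-in (3 m/s) to rated, cut-out at 25 m/s
        return np.clip(0.08 * np.clip(v - 3.0, 0.0, None) ** 3, 0.0, RATED) * (v < 25.0)

    def eroded(v):
        # LEE: ~3% aerodynamic loss below rated; rated power is still reachable
        return np.clip(0.97 * 0.08 * np.clip(v - 3.0, 0.0, None) ** 3, 0.0, RATED) * (v < 25.0)

    def esm(v):
        # ESM: curtail near rated power to keep blade section speeds below the damage threshold
        return np.minimum(clean(v), 0.92 * RATED)

    for mean in (6.0, 10.0):
        lee_loss = aep_mwh(clean, mean) - aep_mwh(eroded, mean)
        esm_loss = aep_mwh(clean, mean) - aep_mwh(esm, mean)
        print(f"mean {mean:>4} m/s: LEE loss {lee_loss:7.0f} MWh/yr, ESM loss {esm_loss:7.0f} MWh/yr")
    ```

    Even with these toy curves, the qualitative break-even behavior appears: at low mean wind speed the turbine rarely operates at rated power, so ESM curtailment costs little, while at high mean wind speed the curtailment loss grows past the erosion loss.
    
    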

    Visualizing and clustering high throughput sub-cellular localization imaging

    The expansion of automatic imaging technologies has created a need to efficiently compare and review large sets of image data. To enable comparisons of image data between samples, we need to define the normal variation within distinct images of the same sample.