
    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details and makes it difficult to directly access the underlying computing platform, which would be important for achieving optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
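
    As a hypothetical illustration of usage abstraction levels (not Inviwo's actual API), the Python sketch below assembles a pipeline at the most abstract level, a data-flow network of processors, while a developer working at a lower level overrides one processor that would talk to the computing platform directly:

        class Processor:
            """Highest usage level: a black-box node wired into a data-flow network."""
            def __init__(self, name):
                self.name, self.upstream = name, []

            def connect(self, other):
                self.upstream.append(other)
                return self

            def process(self, data):
                return data                      # implementation details hidden from the network user

        class CustomRaycaster(Processor):
            """Lower usage level: the developer overrides process() and may reach
            the underlying compute platform (e.g. dispatch a GPU kernel) here."""
            def process(self, data):
                return f"raycast({data})"        # placeholder for platform-level code

        # Assemble the network without caring how each processor is implemented.
        source = Processor("VolumeSource")
        raycaster = CustomRaycaster("Raycaster").connect(source)
        canvas = Processor("Canvas").connect(raycaster)
        print(canvas.process(raycaster.process(source.process("volume"))))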

    The life of a New York City noise sensor network

    Noise pollution is one of the topmost quality-of-life issues for urban residents in the United States. Continued exposure to high levels of noise has proven effects on health, including acute effects such as sleep disruption, and long-term effects such as hypertension, heart disease, and hearing loss. To investigate and ultimately aid in the mitigation of urban noise, a network of 55 sensor nodes has been deployed across New York City for over two years, collecting sound pressure level (SPL) and audio data. This network has cumulatively amassed over 75 years of calibrated, high-resolution SPL measurements and 35 years of audio data. In addition, high-frequency telemetry data has been collected that provides an indication of a sensor's health. This telemetry data was analyzed over an 18-month period across 31 of the sensors and used to develop a prototype model for pre-failure detection that is able to identify sensors in a pre-fail state 69.1% of the time. The entire network infrastructure is outlined, including the operation of the sensors, followed by an analysis of its data yield, the development of the fault detection approach, and the future plans for system integration. Comment: This article belongs to the Section Intelligent Sensors; 24 pages, 15 figures, 3 tables, 45 references.
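
    For illustration, a minimal Python sketch of a telemetry-based pre-failure detector; the feature set, labels, and random-forest model are assumptions made for the sketch, not the paper's actual prototype:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        rng = np.random.default_rng(0)

        # Synthetic stand-in for telemetry (one row per sensor-day), e.g. CPU
        # temperature, uptime, dropped-packet rate, SPL self-test deviation.
        X = rng.normal(size=(5000, 4))
        y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        # Fraction of true pre-fail states the detector identifies (recall),
        # the quantity analogous to the 69.1% detection rate quoted above.
        print("pre-fail recall:", recall_score(y_te, clf.predict(X_te)))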

    On-the-fly Map Generator for OpenStreetMap Data Using WebGL

    This project describes an approach for creating an on-the-fly map generator for OpenStreetMap data using WebGL. The most common methods for generating online maps produce PNG overlay tile images from a wide range of data sources, such as GeoJSON, GeoTIFF, PostGIS, CSV, and SQLite, based on the coordinates and zoom level. This project instead sends vector data for the map to the browser and renders maps on-the-fly using WebGL, pushing all of the vector computation to the GPU. This means that less data needs to be sent to the browser. We have compared existing approaches to our method of generating maps and have shown that our method provides a faster and better solution.
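
    A rough Python sketch of the server-side idea: instead of rasterizing a PNG tile, pack the tile's line geometry into a flat float32 buffer that the browser can upload directly to WebGL as a vertex buffer. The web-mercator tiling math is the standard convention; the packing format is an assumption, not the project's exact wire format:

        import math
        import numpy as np

        def lonlat_to_tile_xy(lon, lat, z):
            """Project WGS84 coordinates into continuous tile coordinates at zoom z."""
            n = 2 ** z
            x = (lon + 180.0) / 360.0 * n
            y = (1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n
            return x, y

        def pack_tile(geojson, z, tx, ty):
            """Return LineString vertices inside tile (tx, ty) as flat float32 bytes."""
            verts = []
            for feature in geojson["features"]:
                if feature["geometry"]["type"] != "LineString":
                    continue
                for lon, lat in feature["geometry"]["coordinates"]:
                    x, y = lonlat_to_tile_xy(lon, lat, z)
                    if tx <= x < tx + 1 and ty <= y < ty + 1:
                        verts.extend((x - tx, y - ty))   # tile-local coordinates in [0, 1)
            return np.asarray(verts, dtype=np.float32).tobytes()

        roads = {"features": [{"geometry": {"type": "LineString",
                                            "coordinates": [[11.97, 57.70], [11.98, 57.71]]}}]}
        tx, ty = map(int, lonlat_to_tile_xy(11.97, 57.70, 12))
        print(len(pack_tile(roads, 12, tx, ty)), "bytes of vertex data")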

    Automated Detection Of Herbarium Specimens Via Transfer Learning In Convolutional Neural Networks

    There are thousands of herbaria (collections of dried and mounted plants) all over the world, containing millions of specimens which have yet to be digitized and made available to online research communities. Recent global transcription efforts have utilized crowd-sourced volunteers to perform data entry, especially in areas where optical character recognition continues to fail. The relatively new process of transfer learning in artificial neural networks has shown promise in reducing training complexity in difficult image classification problems, despite notable differences in target tasks and domains. Within this work, the technique of transfer learning is applied to the digital specimen collection of the I.W. Carpenter Jr. Herbarium housed at Appalachian State University, in an effort to assess its feasibility. It is shown that within the confines of the ASU herbarium, the technique of transfer learning combined with modern neural networks can effectively classify specimen images to the point where volunteer-based transcriptions of certain fields may no longer be necessary.
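
    A minimal Python/PyTorch sketch of this kind of transfer-learning setup: reuse an ImageNet-pretrained CNN as a frozen feature extractor and train only a new classification head. The ResNet-50 backbone, image size, and class count are illustrative assumptions, not the study's exact configuration:

        import torch
        import torch.nn as nn
        from torchvision import models

        num_classes = 20                      # e.g. target taxa or label fields
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

        for param in backbone.parameters():   # freeze the pretrained layers
            param.requires_grad = False

        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable head

        optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One illustrative training step on a dummy batch of 224x224 RGB specimen scans.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, num_classes, (8,))
        optimizer.zero_grad()
        loss = criterion(backbone(images), labels)
        loss.backward()
        optimizer.step()
        print("step loss:", loss.item())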

    A Citizen Science Approach for Analyzing Social Media With Crowdsourcing

    Social media have the potential to provide timely information about emergency situations and sudden events. However, finding relevant information among the millions of posts being added every day can be difficult, and with current approaches, developing an automatic data analysis project requires time and technical skills. This work presents a new approach for the analysis of social media posts, based on configurable automatic classification combined with Citizen Science methodologies. The process is facilitated by a set of flexible, automatic, and open-source data processing tools called the Citizen Science Solution Kit. The kit provides a comprehensive set of tools that can be used and personalized in different situations, particularly during natural emergencies, starting from the images and text contained in the posts. The tools can be employed by citizen scientists for filtering, classifying, and geolocating content with a human-in-the-loop approach that supports the data analyst, including feedback and suggestions on how to configure the automated tools, and techniques to gather input from citizens. Using a flooding scenario as a guiding example, this paper illustrates the structure and functioning of the different tools proposed to support citizen scientists in their projects, and a methodological approach to their use. The process is then validated by discussing three case studies based on the Albania earthquake of 2019, the COVID-19 pandemic, and the Thailand floods of 2021. The results suggest that a flexible approach to tool composition and configuration can support the timely setup of an analysis project by citizen scientists, especially in the case of emergencies in unexpected locations.
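
    As a sketch of the human-in-the-loop idea (the keyword scorer and threshold below stand in for the kit's configurable classifiers and are not its actual components), posts can be auto-classified while low-confidence cases are queued for citizen-scientist review:

        from dataclasses import dataclass, field

        FLOOD_TERMS = {"flood", "flooding", "overflow", "evacuate", "water", "water level"}

        @dataclass
        class Triage:
            relevant: list = field(default_factory=list)
            review_queue: list = field(default_factory=list)   # sent to volunteers

        def score(text):
            hits = sum(term in text.lower() for term in FLOOD_TERMS)
            return min(hits / 2.0, 1.0)             # crude confidence in [0, 1]

        def triage(posts, threshold=0.75):
            out = Triage()
            for post in posts:
                conf = score(post)
                if conf >= threshold:
                    out.relevant.append(post)       # confident enough to accept automatically
                elif conf > 0:
                    out.review_queue.append(post)   # uncertain: ask the crowd
            return out

        posts = ["Water level rising fast, streets flooding near the river",
                 "Great concert tonight!",
                 "Is the bridge road closed? Lots of water"]
        result = triage(posts)
        print(len(result.relevant), "auto-accepted,", len(result.review_queue), "for review")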

    Scalable, Data-intensive Network Computation

    To enable groups of collaborating researchers at different locations to effectively share large datasets and investigate their spontaneous hypotheses on the fly, we are interested in developing a distributed system that can be easily leveraged by a variety of data-intensive applications. The system is composed of (i) a number of best-effort logistical depots to enable large-scale data sharing and in-network data processing, (ii) a set of end-to-end tools to effectively aggregate, manage, and schedule a large number of network computations with attendant data movements, and (iii) a Distributed Hash Table (DHT) on top of the generic depot services for scalable data management. The logistical depot is extended following the end-to-end principles and is modeled with a closed queuing network model. Its performance characteristics are studied by solving the steady-state distributions of the model using local balance equations. The modeling results confirm that the wide area network is the performance bottleneck and that running concurrent jobs can increase resource utilization and system throughput. As a novel contribution, techniques to effectively support resource-demanding data-intensive applications using the fine-grained depot services are developed. These techniques include instruction-level scheduling of operations, dynamic co-scheduling of computation and replication, and adaptive workload control. Experiments in volume visualization have demonstrated the effectiveness of these techniques. Owing to the unique characteristics of data-intensive applications and our co-scheduling algorithm, a DHT is implemented on top of the basic storage and computation services. It demonstrates the potential of the Logistical Networking infrastructure to serve as a service creation platform.
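
    To make the DHT layer concrete, here is a Python sketch of consistent hashing over depot services: each depot owns regions of a hash ring, and a data block's key determines which depots store it. The depot names, virtual-node count, and replication factor are illustrative assumptions, not the system's actual configuration:

        import bisect
        import hashlib

        def h(value):
            return int(hashlib.sha1(value.encode()).hexdigest(), 16)

        class DepotRing:
            def __init__(self, depots, vnodes=64):
                # Many virtual nodes per depot smooth out load across the ring.
                self.ring = sorted((h(f"{d}#{i}"), d) for d in depots for i in range(vnodes))
                self.keys = [k for k, _ in self.ring]

            def locate(self, block_key, replicas=2):
                """Return the depots responsible for a block, walking the ring clockwise."""
                idx = bisect.bisect(self.keys, h(block_key)) % len(self.ring)
                chosen, i = [], idx
                while len(chosen) < replicas:
                    depot = self.ring[i % len(self.ring)][1]
                    if depot not in chosen:
                        chosen.append(depot)
                    i += 1
                return chosen

        ring = DepotRing(["depot-a.example.org", "depot-b.example.org", "depot-c.example.org"])
        print(ring.locate("volume-042/block-0007"))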

    Software supply chain monitoring in containerised open-source digital forensics and incident response tools

    Abstract. The legal context makes software development challenging for the tool-oriented Digital Forensics and Incident Response (DFIR) field. Digital evidence must be complete, accurate, reliable, and acquirable through reproducible methods in order to be used in court. However, the lack of sufficient software quality is a well-known problem in this context. The popularity of Open-source Software (OSS) based development has increased tool availability through different channels, highlighting their varying quality. The lengthened software supply chain has introduced additional factors affecting tool quality and control over the use of the exact software version. Prior research on quality has primarily targeted the fundamental codebase of the tool, not the underlying dependencies, and there is no research on the role of the software supply chain in quality factors in the DFIR context. The research in this work focuses on a container-based package ecosystem, where the case study covers 51 maintained open-source DFIR tools published as Open Container Initiative (OCI) containers. The package ecosystem was improved, and an experimental system was implemented to monitor upstream release version information and provide it to both package maintainers and end users. The system guarantees that the described tool version matches the actual version of the tool package, and that all information about tool versions is available. The primary purpose is to bring more control over the packages and to support the reproducibility and documentation requirements of investigations, while also helping with maintenance work. The tools were also monitored and maintained for six months to observe software dependency-related factors affecting tool functionality between different versions. After that period, maintenance was halted for an additional six months, and the current package version of each tool was rebuilt to limit the gathered information to the changed dependencies. A significant number of different build-time and runtime failures were discovered, which either prevented or hindered tool installation or significantly affected the tool as used in the investigation process. Undocumented, changed, or too-new environment-related dependencies were the major factors leading to tool failures. These findings support known software dependency-related problems. The nature of the failures suggests that tool package maintainers need a broad range of skills to produce working tool packages, and that maintenance is an effort-intensive job. If the investigator does not have similar skills and a dependency-related failure is present in the software, the software may not be usable.
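
    For illustration, a small Python sketch of upstream release monitoring for containerised tools: compare the version recorded in a package's metadata against the latest upstream release tag. The tool list and packaged versions are illustrative assumptions; the GitHub releases endpoint is a real public API:

        import json
        import urllib.request

        PACKAGES = {
            # tool name -> (upstream GitHub repository, version currently packaged)
            "volatility3": ("volatilityfoundation/volatility3", "2.5.0"),
            "sleuthkit":   ("sleuthkit/sleuthkit", "4.12.0"),
        }

        def latest_release_tag(repo):
            url = f"https://api.github.com/repos/{repo}/releases/latest"
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp)["tag_name"]

        def check_packages(packages):
            for tool, (repo, packaged) in packages.items():
                try:
                    upstream = latest_release_tag(repo).lstrip("v")   # normalise tags like v2.5.0
                except Exception as err:                              # network or API failure
                    print(f"{tool}: check failed ({err})")
                    continue
                status = "up to date" if upstream == packaged else f"outdated (upstream {upstream})"
                print(f"{tool}: packaged {packaged}, {status}")

        check_packages(PACKAGES)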