2,832 research outputs found

    Towards rapid modeling and prototyping of indoor and outdoor monitoring applications

    Nowadays, the capability to remotely monitor indoor and outdoor environments makes it possible to reduce energy consumption and improve the overall management and user experience of networked application systems. The best-known solutions adopting remote control are related to domotics (e.g., smart homes and Industry 4.0 applications). An important stimulus for the development of such smart approaches is the growth of Internet of Things (IoT) technologies and the increasing investment in the development of green houses, buildings, and heterogeneous environments in general. While the benefits for humans and the environment are evident, the pervasive adoption and distribution of remote monitoring solutions are hindered by the following issue: modeling, designing, prototyping, and further developing the remote applications and the underlying architecture take a considerable amount of time. Moreover, such systems must often be customized to the needs of the specific domain and the entities involved. For these reasons, in this paper we report on our experience in addressing several relevant indoor and outdoor case studies through IoT-targeted tools, technologies, and protocols, highlighting the advantages and disadvantages of the considered solutions as well as insights that can be useful for future practitioners.

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code. Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques to apply critical code methods.

    Linking provenance and its metadata in multi-organizational environments

    Reproducibility issues are widely reported in life sciences. In response, scientific communities have called for enhanced provenance information documenting the complete research life cycle, starting from biological or environmental material acquisition and ending with the translation of research results into practice. The integrity and trustworthiness of such provenance can be achieved by applying versioning mechanisms and cryptographic techniques, such as hashes or digital signatures, which are provenance metadata. However, the available provenance literature lacks an analysis of mechanisms for the exchange of provenance and its metadata between organizations, as well as a grounded proposal for linking provenance and its metadata. In this work, we provide an in-depth analysis of the approaches for coupling provenance information and its metadata with documented research objects in the context of multi-organizational processes, leading to a categorization of possible approaches, a description of their key properties, and a derivation of requirements for underlying provenance models. We address the requirements by proposing a mechanism for linking provenance and its metadata that extends the Common Provenance Model, the open conceptual foundation for the ISO 23494 provenance standard series, currently under development. The concepts are demonstrated and validated on two complex use cases. This work is intended as a harmonized source of information on provenance coupling in the context of provenance exchange between organizations, which can be used when designing or choosing a provenance solution. This type of usage is exemplified in the extension of the Common Provenance Model as another step toward a provenance standard for life sciences.
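The integrity mechanism the abstract describes, hashes attached to provenance as metadata, can be sketched minimally. The record fields and helper name below are hypothetical illustrations, not taken from the Common Provenance Model; the point is that a canonical serialization lets two organizations compute the same fingerprint independently.

```python
import hashlib
import json

def provenance_fingerprint(record: dict) -> str:
    """Hash a provenance record over a canonical JSON serialization,
    so independent parties computing the hash agree byte-for-byte."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical provenance record for an acquired biological sample.
record = {
    "material": "blood-sample-042",
    "acquired_by": "org-A",
    "derived_from": None,
}
fp = provenance_fingerprint(record)

# The fingerprint is stored as provenance *metadata* linked to the record;
# any later modification of the record changes the hash, exposing tampering.
assert fp == provenance_fingerprint(dict(record))  # deterministic
```

A digital signature over such a fingerprint would additionally bind it to the issuing organization, which is the kind of cross-organizational trust anchor the paper's linking mechanism is concerned with.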

    Automatic generation of business process models from user stories

    In this paper, we propose an automated approach to extracting business process models from requirements expressed as user stories. In agile software development, a user story is a short natural-language description of a software feature, written from the user's point of view. Acceptance criteria are a list of specifications of how a new software feature is expected to operate. Our approach analyzes the set of acceptance criteria accompanying a user story in order to automatically generate the components of the business model, and then to produce the business model as an activity diagram, a Unified Modeling Language (UML) behavioral diagram. We first use natural language processing (NLP) techniques to extract the elements needed to define rules for retrieving artifacts of the business model. These rules are then implemented in Prolog and imported into Python code. The proposed approach was evaluated on a set of use cases using different performance measures. The results indicate that our method is capable of generating correct and accurate process models.
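The first step, extracting candidate activities from acceptance criteria, can be illustrated with a deliberately simple sketch. The regex split on Given/When/Then clauses below is an assumed stand-in for the paper's NLP-plus-Prolog pipeline, not a reproduction of it; the example criterion is invented.

```python
import re

def extract_activities(criterion: str) -> list[str]:
    """Split a Given/When/Then acceptance criterion into candidate
    process activities, one per clause."""
    parts = re.split(r"\b(?:given|when|then)\b", criterion, flags=re.IGNORECASE)
    return [p.strip(" ,.") for p in parts if p.strip(" ,.")]

# Hypothetical acceptance criterion for an ordering feature.
criterion = ("Given the user is logged in, when the user submits an order, "
             "then the system confirms the order")
activities = extract_activities(criterion)
# Each clause becomes a node of the activity diagram, connected in order:
# precondition -> user action -> system response.
```

In a full pipeline, each extracted clause would be further parsed for its actor and verb phrase before being mapped to a UML activity node.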

    Chatbots for Modelling, Modelling of Chatbots

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 28-03-202

    Politiken des (digitalen) Spiels: Transdisziplinäre Perspektiven

    Through their production, distribution, and consumption, games are embedded in political structures. They not only mirror their environment but are also substantially shaped by it. The contributors take a transdisciplinary approach to analyzing such "politics of play": Within which legal, social, and political rules does play take place? In which power relations do the actors involved in play stand? And how does the industry deal with current political discourses? They consider numerous forms of play and games from both diachronic and synchronic perspectives, making clear that playing is a highly political act.

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project is a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm's performance reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model correctly predicted almost all Old Saxon metrical patterns but misclassified the majority of the Old English input verses.
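The evaluation setup can be illustrated independently of the BiLSTM itself: scansion is framed as per-syllable sequence labeling, and the reported accuracy is the fraction of syllables labeled correctly. The verse labels below are invented for illustration, not drawn from the annotated Heliand corpus.

```python
# Scansion as supervised sequence labeling: each syllable receives a
# metrical label, e.g. "L" for a lift (stressed) and "D" for a dip
# (unstressed). Gold labels come from manual annotation; predicted
# labels would come from the trained model.
gold = ["L", "D", "D", "L", "D", "L", "D", "D"]
pred = ["L", "D", "D", "L", "D", "D", "D", "D"]

def token_accuracy(gold: list[str], pred: list[str]) -> float:
    """Per-syllable accuracy: the share of positions where the
    predicted metrical label matches the annotation."""
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

acc = token_accuracy(gold, pred)  # 7 of 8 syllables match -> 0.875
```

Weighted-average precision, recall, and F1, the other metrics reported, additionally account for how often each label occurs, which matters when dips heavily outnumber lifts.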

    Konzeption und Realisierung eines Multiagentensystems zur Unterstützung von Entscheidungsträgern bei der Bewältigung von Erdbebenkatastrophen

    Worldwide, large-scale damage events caused by natural disasters confront societies with problems that are difficult to manage. Even in industrialized nations with sufficient resources at the national level, crisis management in an affected region is often a challenge, as Hurricane Katrina in the USA in 2005 and the Oder flood in Germany in 1997 showed. In earthquake disasters, timely crisis management is decisive for minimizing damage. The locations that are potentially at risk can usually be narrowed down well; however, there is currently no way to predict strong earthquakes of a correspondingly damaging magnitude in advance. Optimizing the coordination of emergency forces has the potential to significantly improve the management of such large-scale damage events. Building on the results of previous research on earthquake disaster management at the Institut für Technologie und Management im Baubetrieb, this thesis develops a decision support system for the staff of an emergency control center. A theoretical part examines and evaluates possible forms of assistance, whose practical benefit is then evaluated through their implementation in a program, the Disaster Management Tool (DMT). A model of the decision-making process of civil-protection personnel serves as a reference point for possible forms of assistance and for their presentation in the system's user interface. The decision aids are based on the evaluation of a fact base by algorithms and rules stored in a knowledge base. The rules derive from literature research and, in particular, from the expert knowledge of civil-protection staff gathered through interviews. The fact and knowledge base used in the system is distinguished above all by its ability to process imprecise (fuzzy) information.
The implementation of the theoretical decision-support models in the DMT is based on the concept of a multi-agent system. Thanks to its standards-based platform and its use of open data formats, the system also serves as a feasibility study for the design of a flexible and interoperable system architecture. The insights gained are not limited to disaster management after strong earthquakes but can also be transferred to damage events with other causes.
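The handling of imprecise information mentioned above can be sketched as a single fuzzy rule. The membership functions, thresholds, and rule below are illustrative assumptions, not the DMT's actual knowledge base.

```python
# Illustrative fuzzy rule (hypothetical, not the DMT's rule engine):
# grading the urgency of deploying rescue teams from imprecise reports.

def membership_high_damage(collapse_ratio: float) -> float:
    """Degree to which 'damage is high' holds, as a ramp from 0.2 to 0.6."""
    if collapse_ratio <= 0.2:
        return 0.0
    if collapse_ratio >= 0.6:
        return 1.0
    return (collapse_ratio - 0.2) / 0.4

def membership_many_trapped(estimated_trapped: int) -> float:
    """Degree to which 'many people are trapped' holds, saturating at 50."""
    return min(estimated_trapped / 50.0, 1.0)

def rule_urgency(collapse_ratio: float, trapped: int) -> float:
    """IF damage is high AND many people are trapped THEN urgency is high;
    the fuzzy AND is taken as the minimum of the memberships."""
    return min(membership_high_damage(collapse_ratio),
               membership_many_trapped(trapped))

u = rule_urgency(0.4, 30)  # min(0.5, 0.6) -> urgency of about 0.5
```

Graded conclusions like this, rather than hard yes/no triggers, are what let a decision support system rank deployment options from vague early reports.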