2,010 research outputs found

    Toward Inclusive Design for Visual Law

    The explosion in visual representations of legal concepts and processes is a thrilling innovation that can expand open access to law. By and large, however, visual representations of the law have not adequately fulfilled the promise of access. However unintentionally, implementations of visual access to law frequently overlook people with visual disabilities. This neglect is not necessary, and inclusion is not futile. The synthesis, summarization, simplification, and interpretation required to produce visual representations of law can support understanding for everyone by making legal information more discoverable and reusable. This paper distinguishes between features of visual law that require vision and features that can be made accessible to all. It argues that inclusive design deserves greater attention in order to avoid increasing inequality in access to law.

    The Cognitive Interaction Toolkit – Improving Reproducibility of Robotic Systems Experiments

    Lier F, Wienke J, Nordmann A, Wachsmuth S, Wrede S. The Cognitive Interaction Toolkit – Improving Reproducibility of Robotic Systems Experiments. In: Brugali D, Broenink JF, Kroeger T, MacDonald BA, eds. SIMPAR: International Conference on Simulation, Modeling, and Programming for Autonomous Robots. Lecture Notes in Computer Science. Vol 8810. Cham: Springer; 2014: 400-411.
    Research on robot systems that either integrate a large number of capabilities in a single architecture or display outstanding performance in a single domain has achieved considerable progress in recent years. Results are typically validated through experimental evaluation or demonstrated live, e.g., at robotics competitions. While common robot hardware, simulation, and programming platforms yield an improved basis, many of the described experiments still cannot be reproduced easily by interested researchers seeking to confirm the reported findings. We consider this a critical challenge for experimental robotics. Hence, we address this problem with a novel process that facilitates the reproduction of robotics experiments. We identify major obstacles to experiment replication and introduce an integrated approach that allows (i) aggregation and discovery of required research artifacts, (ii) automated software build and deployment, as well as (iii) experiment description, repeatable execution, and evaluation. We explain the usage of the introduced process using an exemplary robotics experiment and discuss our approach in the context of current ecosystems for robot programming and simulation.
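    As a loose illustration of the third ingredient above (experiment description and repeatable execution), the following Python sketch builds and runs an experiment from a small declarative recipe and logs the outcome. The recipe fields, repository URLs, and helper function are hypothetical and do not reflect the actual Cognitive Interaction Toolkit format.

```python
import json
import subprocess
from pathlib import Path

# Hypothetical experiment recipe: artifacts to fetch, build steps, and a run
# command. This is NOT the CITK description format, only an illustration of
# the idea of a machine-readable, repeatable experiment description.
RECIPE = {
    "name": "tabletop-grasping-demo",
    "artifacts": ["https://example.org/repo/perception.git",
                  "https://example.org/repo/grasp-planner.git"],
    "build": ["cmake -S . -B build", "cmake --build build"],
    "run": "build/bin/run_experiment --trials 10",
}

def reproduce(recipe: dict, workdir: str = "experiment") -> None:
    """Fetch artifacts, build, and execute the experiment, logging each step."""
    work = Path(workdir)
    work.mkdir(exist_ok=True)
    for url in recipe["artifacts"]:
        subprocess.run(["git", "clone", url], cwd=work, check=True)
    for step in recipe["build"]:
        subprocess.run(step, shell=True, cwd=work, check=True)
    result = subprocess.run(recipe["run"], shell=True, cwd=work,
                            capture_output=True, text=True)
    # Persist the output next to the recipe so the evaluation can be repeated.
    (work / "run.log").write_text(result.stdout)
    (work / "recipe.json").write_text(json.dumps(recipe, indent=2))

if __name__ == "__main__":
    reproduce(RECIPE)
```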

    A Simple Standard for Sharing Ontological Mappings (SSSOM).

    Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM), which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use, simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec. Database URL: http://w3id.org/sssom/spec
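    As a toy illustration of the table-based idea, the following Python snippet loads a two-row mapping set in an SSSOM-like TSV layout and filters it with ordinary data-science tooling. The column names and CURIE values are illustrative; the normative slot names and metadata header conventions are defined in the specification linked above.

```python
import io
import pandas as pd

# A toy mapping table in the spirit of SSSOM's simple TSV format. The columns
# and values here are illustrative, not normative.
tsv = """subject_id\tpredicate_id\tobject_id\tmapping_justification\tconfidence
HP:0000118\tskos:broadMatch\tMP:0000001\tsemapv:ManualMappingCuration\t0.95
HP:0001250\tskos:exactMatch\tMP:0002064\tsemapv:LexicalMatching\t0.80
"""

mappings = pd.read_csv(io.StringIO(tsv), sep="\t")

# Because the format is a plain table, it drops straight into standard data
# science pipelines: e.g., keep only high-confidence exact matches.
exact = mappings[(mappings["predicate_id"] == "skos:exactMatch")
                 & (mappings["confidence"] >= 0.8)]
print(exact[["subject_id", "object_id"]])
```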

    The semantic drift of quotations in blogspace: a case study in short-term cultural evolution

    First revision (major) for Cognitive Science.
    We present an empirical case study which connects psycholinguistics with the field of cultural evolution, in order to test for the existence of cultural attractors in the evolution of quotations. Such attractors have been proposed as a useful concept for understanding cultural evolution in relation to individual cognition, but their existence has been hard to test. We focus on the transformation of quotations when they are copied from blog to blog or media website: by coding words with a number of well-studied lexical features, we show that the way words are substituted in quotations is consistent (1) with the hypothesis of cultural attractors and (2) with known effects of the word features. In particular, words known to be harder to recall in lists have a higher tendency to be substituted, and words easier to recall are produced instead. Our results support the hypothesis that cultural attractors can result from the combination of individual cognitive biases in the interpretation and reproduction of representations.
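    The core measurement is simple: for each word in a copied quotation, record whether it was substituted and what its lexical features are, then compare substitution rates across feature values. The Python sketch below illustrates that comparison on invented data; the feature name and scores are placeholders, not the features or dataset used in the study.

```python
import pandas as pd

# Invented word-level records from copied quotations: whether each word was
# substituted when the quotation was re-posted, plus one lexical feature
# (a placeholder "list recall" score; the study codes several well-studied
# features, this is only an illustration of the comparison).
words = pd.DataFrame({
    "word":         ["liberty", "freedom", "nation", "country", "speech", "talk"],
    "recall_score": [0.35, 0.62, 0.41, 0.70, 0.38, 0.66],
    "substituted":  [1, 0, 1, 0, 1, 0],
})

# Cultural-attractor-style prediction: words that are harder to recall
# (lower score) should be substituted more often.
words["hard_to_recall"] = words["recall_score"] < 0.5
print(words.groupby("hard_to_recall")["substituted"].mean())
```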

    HybridMDSD: Multi-Domain Engineering with Model-Driven Software Development using Ontological Foundations

    Software development is a complex task. Executable applications comprise a multitude of diverse components that are developed with various frameworks, libraries, or communication platforms. The technical complexity of development ties up resources, hampers efficient problem solving, and thus increases the overall cost of software production. Another significant challenge in market-driven software engineering is the variety of customer needs. It necessitates a maximum of flexibility in software implementations to facilitate the deployment of different products that are based on one single core. To reduce technical complexity, the paradigm of Model-Driven Software Development (MDSD) facilitates the abstract specification of software based on modeling languages. Corresponding models are used to generate actual programming code without the need for creating manually written, error-prone assets. Modeling languages that are tailored towards a particular domain are called domain-specific languages (DSLs). Domain-specific modeling (DSM) narrows the gap between technical solutions and the problems they are intended to solve and fosters the development of specialized expertise. To cope with feature diversity in applications, the Software Product Line Engineering (SPLE) community provides means for the management of variability in software products, such as feature models and appropriate tools for mapping features to implementation assets. Model-driven development, domain-specific modeling, and the dedicated management of variability in SPLE are vital for the success of software enterprises. Yet these paradigms exist in isolation and need to be integrated in order to exhaust the advantages of every single approach. In this thesis, we propose a way to do so. We introduce the paradigm of Multi-Domain Engineering (MDE), which means model-driven development with multiple domain-specific languages in variability-intensive scenarios. MDE strongly emphasizes the advantages of MDSD with multiple DSLs as a necessity for efficiency in software development and treats the SPLE paradigm as an indispensable means to achieve a maximum degree of reuse and flexibility. We present HybridMDSD as our solution approach to implement the MDE paradigm. The core idea of HybridMDSD is to capture the semantics of particular DSLs based on properly defined semantics for software models contained in a central upper ontology. The resulting semantic foundation can then be used to establish references between arbitrary domain-specific models (DSMs), and sophisticated instance-level reasoning ensures integrity and allows particular change adaptation scenarios to be handled. Moreover, we present an approach to automatically generate composition code that integrates generated assets from separate DSLs. All necessary development tasks are arranged in a comprehensive development process. Finally, we validate the introduced approach with a thorough prototypical implementation and an industrial-scale case study.
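    The central mechanism, anchoring DSL elements to shared upper-ontology concepts so that cross-DSL references can be checked, can be sketched in a few lines of Python. This is a toy model for illustration only, not the HybridMDSD implementation; all class, concept, and element names are invented.

```python
from dataclasses import dataclass

# Toy stand-ins for the pieces named in the abstract: upper-ontology concepts,
# model elements from two different DSLs annotated with those concepts, and a
# naive integrity check over cross-DSL references.
@dataclass(frozen=True)
class Concept:
    name: str

@dataclass
class ModelElement:
    dsl: str
    name: str
    concept: Concept  # semantic anchor in the shared upper ontology

ENTITY = Concept("BusinessEntity")
VIEW = Concept("UserInterfaceView")

order_element = ModelElement("data-dsl", "Order", ENTITY)
form_element = ModelElement("ui-dsl", "OrderForm", VIEW)

# A cross-DSL reference counts as well-formed only if the pair of concepts is
# allowed; in the real approach a reasoner would derive this from the ontology.
ALLOWED_REFERENCES = {(VIEW, ENTITY)}  # a UI view may display a business entity

def reference_is_valid(source: ModelElement, target: ModelElement) -> bool:
    return (source.concept, target.concept) in ALLOWED_REFERENCES

print(reference_is_valid(form_element, order_element))   # True
print(reference_is_valid(order_element, form_element))   # False
```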

    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, which have become the lifeblood of the global economy. This data may come from heterogeneous equipment, components, sensors, systems, and applications in many varieties (diversity of sources), velocities (high rate of change), and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage, and filter data, the real value lies in the analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. Two distinct paradigms can be discerned in analytics practice: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge, encoded in rules or ontologies that are often carefully curated and maintained. However, these models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employ machine learning (ML) to learn a model directly from the data with minimal human intervention. However, these models are tuned to their training data and context, which makes them difficult to adapt. Industries that want to create value from data must master both paradigms in combination. There is therefore a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows actionable insights to be extracted from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism for capturing the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    • A new ontology-based language, the Semantically defined Analytical Language (SAL), built on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method for generating semantic workflows using the SAL language. It helps in authoring, reusing, and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, the work in this thesis is one of the first to introduce and investigate semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics over semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights. Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches for most application scenarios.
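    The thesis defines the concrete SAL syntax; purely as a schematic illustration of what it means to treat an analytical function as a first-class citizen inside a DatalogMTL-style rule, one might write something like the following (the predicates, threshold, and notation are invented for this example; the box-minus operator requires amssymb):

```latex
% Schematic rule, not the normative SAL syntax: a turbine is flagged as
% overheating if the average of its temperature readings over the last
% ten minutes exceeds a threshold.
\[
\mathit{Overheating}(x) \leftarrow \mathit{Turbine}(x) \wedge
  \mathrm{avg}\bigl(\boxminus_{[0,\,10\,\mathrm{min}]}\ \mathit{temperature}(x)\bigr) > 90
\]
```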

    Using Machine Learning and Graph Mining Approaches to Improve Software Requirements Quality: An Empirical Investigation

    Software development is prone to software faults due to the involvement of multiple stakeholders, especially during the fuzzy phases (requirements and design). Software inspections are commonly used in industry to detect and fix problems in requirements and design artifacts, thereby mitigating the propagation of faults to later phases where the same faults are harder to find and fix. The output of an inspection process is a list of faults present in the software requirements specification (SRS) document. The artifact author must manually read through the reviews and differentiate between true faults and false positives before fixing the faults. The first goal of this research is to automate the detection of useful vs. non-useful reviews. Next, post-inspection, the requirements author has to manually extract key problematic topics from useful reviews and map them to individual requirements in the SRS to identify fault-prone requirements. The second goal of this research is to automate this mapping by employing keyphrase extraction (KPE) algorithms and semantic analysis (SA) approaches to identify fault-prone requirements. During fault fixation, the author has to manually verify the requirements that could have been impacted by a fix. The third goal of this research is to assist authors post-inspection with change impact analysis (CIA) during fault fixation, using natural language processing with semantic analysis and mining solutions from graph theory. The selection of capable inspectors is also pertinent to carrying out post-inspection tasks accurately. The fourth goal of this research is to identify skilled inspectors using various classification and feature selection approaches. The dissertation has led to the development of an automated solution that can identify useful reviews, help identify skilled inspectors, extract the most prominent topics/keyphrases from fault logs, and help the requirements author with fault fixation post-inspection.
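    The first goal, separating useful from non-useful reviews, is essentially a text classification task. The sketch below shows one conventional way to set it up in Python with scikit-learn; the example reviews, labels, and model choice are illustrative and are not the classifiers or data used in the dissertation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up inspection reviews labelled useful (1) / non-useful (0).
reviews = [
    "Requirement R12 contradicts R7 on the maximum response time.",
    "Looks fine to me.",
    "The term 'user' is ambiguous in R3; specify operator vs. administrator.",
    "Great work overall.",
]
labels = [1, 0, 1, 0]

# TF-IDF features over word unigrams/bigrams feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)

print(clf.predict(["R5 omits the unit of the timeout value."]))
```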

    Design Tools

    This book aims at encompassing the panorama of design tools being developed, tested, and adopted by researchers and professors at the Department of Design of Politecnico di Milano. The tools are organized in a taxonomy that reflects the path a prospective user would follow in choosing the right tool for a task to be performed. The taxonomy is based on a formalization of the design process proposed by the authors, which characterizes the Design System at Politecnico di Milano. The book essentially offers two main contributions: an original taxonomy that guides the organization of design tools and their usage with different actors, and a representative collection of design tools developed within the Department of Design of Politecnico di Milano, with specific instructions on how to use them. Design Tools is addressed both to practitioners and academics in the field of design who are interested in learning more about the discourse around design tools in general, and in particular how this discourse takes shape within Politecnico di Milano and resolves into usable and shareable tools.

    Notes on the Music: A social data infrastructure for music annotation

    Besides transmitting musical meaning from composer to reader, symbolic music notation affords the dynamic addition of layers of information by annotation. This allows music scores to serve as rudimentary communication frameworks. Music encodings bring these affordances into the digital realm; though annotations may be represented as digital pen-strokes upon a score image, they must be captured using machine-interpretable semantics to fully benefit from this transformation. This is challenging, as annotators' requirements are heterogeneous, varying both across different types of user (e.g., musician, scholar) and within these groups, depending on the specific use case. A hypothetical all-encompassing tool catering to every conceivable annotation type, even if it were possible to build, would vastly complicate user interaction. This additional complexity would significantly increase cognitive load and impair usability, particularly in dynamic real-time usage contexts, e.g., live annotation during music rehearsal or performance. To address this challenge, we present a social data infrastructure that facilitates the creation of use-case-specific annotation toolkits. Its components include a selectable-score module that supports customisable click-and-drag selection of score elements (e.g., notes, measures, directives); the Web Annotations data model, extended to support the creation of custom, Web-addressable annotation types supporting the specification and (re-)use of annotation palettes; and the Music Encoding and Linked Data (MELD) JavaScript client library, used to build interfaces that map annotation types to rendering and interaction handlers. We have extended MELD to support the Solid platform for social Linked Data, allowing annotations to be privately stored in user-controlled Personal Online Datastores (Pods), or selectively shared or published. To demonstrate the feasibility of our proposed approach, we present annotation interfaces employing the outlined infrastructure in three distinct use cases: scholarly communication; music rehearsal; and rating during music listening.
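    As a rough sketch of what a machine-interpretable annotation on a score element might look like under the W3C Web Annotation data model, consider the Python snippet below. The MEI fragment URI, the WebID, and the comment text are invented for illustration and are not drawn from the MELD codebase.

```python
import json

# A sketch of a Web Annotation on a notated-music element, following the
# W3C Web Annotation data model. The target URI, creator WebID, and body
# are hypothetical placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        "type": "TextualBody",
        "value": "Take the crescendo earlier, from the second beat.",
    },
    "target": "https://example.org/scores/op27.mei#measure-42",
    "creator": "https://alice.example.org/profile/card#me",  # e.g. a Solid WebID
}

print(json.dumps(annotation, indent=2))
```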
