
    A Semantic Framework for Enabling Radio Spectrum Policy Management and Evaluation

    Because radio spectrum is a finite resource, its usage and sharing are regulated by government agencies. These agencies define policies to manage spectrum allocation and assignment across multiple organizations, systems, and devices. With more portions of the radio spectrum being licensed for commercial use, providing an increased level of automation when evaluating such policies becomes crucial for the efficiency and efficacy of spectrum management. We introduce our Dynamic Spectrum Access (DSA) Policy Framework for supporting the United States government's mission to enable both federal and non-federal entities to compatibly utilize available spectrum. The DSA Policy Framework acts as a machine-readable policy repository providing policy management features and spectrum access request evaluation. The framework utilizes a novel policy representation using OWL and PROV-O along with a domain-specific reasoning implementation that mixes GeoSPARQL, OWL reasoning, and knowledge graph traversal to evaluate incoming spectrum access requests and explain how applicable policies were used. The framework is currently being used to support live, over-the-air field exercises involving a diverse set of federal and commercial radios, as a component of a prototype spectrum management system.
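    As a rough illustration of the request-evaluation idea, the toy Python sketch below matches a spectrum access request against a set of policies and reports which ones applied. The Policy and Request structures are invented for this example; the actual framework encodes policies in OWL/PROV-O and evaluates them with GeoSPARQL, OWL reasoning, and knowledge graph traversal.

```python
# Minimal sketch of policy-based spectrum-request evaluation.
# Policy/Request fields here are hypothetical stand-ins for the
# framework's OWL/PROV-O policy representation and GeoSPARQL geometries.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    freq_mhz: tuple   # (low, high) band the policy covers
    region: str       # region id (stand-in for a GeoSPARQL geometry)
    permit: bool      # whether matching requests are permitted

@dataclass
class Request:
    freq_mhz: float
    region: str

def evaluate(request, policies):
    """Return (decision, explanation) listing the policies that applied."""
    applicable = [p for p in policies
                  if p.freq_mhz[0] <= request.freq_mhz <= p.freq_mhz[1]
                  and p.region == request.region]
    if not applicable:
        return "deny", "no applicable policy"
    decision = "permit" if all(p.permit for p in applicable) else "deny"
    return decision, "applied: " + ", ".join(p.name for p in applicable)

policies = [Policy("federal-radar-protect", (3550, 3650), "zone-A", False),
            Policy("commercial-cbrs", (3550, 3700), "zone-B", True)]
print(evaluate(Request(3600, "zone-B"), policies))
# -> ('permit', 'applied: commercial-cbrs')
```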

    KnowText: Auto-generated Knowledge Graphs for custom domain applications

    While industrial Knowledge Graphs enable information extraction from massive data volumes, creating the backbone of the Semantic Web, specialised, custom-designed knowledge graphs focused on enterprise-specific information are an emerging trend. We present "KnowText", an application that automatically generates custom Knowledge Graphs from unstructured text and enables fast information extraction based on graph visualisation and free-text query methods designed for non-specialist users. An OWL ontology automatically extracted from text is linked to the knowledge graph and used as a knowledge base. A basic ontological schema is provided, including 16 classes and datatype properties. The extracted facts and the OWL ontology can be downloaded and further refined. KnowText is designed for applications in business (CRM, HR, banking). A custom KG can serve for locally managing existing data, often stored as "sensitive" information or proprietary accounts that are not openly accessible on the web. KnowText deploys a custom KG from a collection of text documents and enables fast information extraction based on its graph-based visualisation and text-based query methods.
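    One common way to seed such a custom KG from unstructured text is dependency-based triple extraction; the hedged Python sketch below uses spaCy (assuming the en_core_web_sm model is installed) to pull simple subject-verb-object triples. KnowText's actual pipeline and its 16-class schema are not reproduced here.

```python
# Minimal sketch of text-to-triple extraction for seeding a custom KG.
# Assumes spaCy and its en_core_web_sm model are installed; this is not
# KnowText's actual extraction pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    """Yield (subject, relation, object) triples from simple SVO sentences."""
    doc = nlp(text)
    for tok in doc:
        if tok.dep_ == "nsubj" and tok.head.pos_ == "VERB":
            verb = tok.head
            for child in verb.children:
                if child.dep_ == "dobj":
                    yield (tok.text, verb.lemma_, child.text)

for triple in extract_triples("Acme acquired BetaCorp. The bank approved the loan."):
    print(triple)   # e.g. ('Acme', 'acquire', 'BetaCorp')
```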

    Contributions of formal language theory to the study of dialogues

    For more than 30 years, the problem of providing a formal framework for modeling dialogues has been a topic of great interest for the scientific areas of Linguistics, Philosophy, Cognitive Science, Formal Languages, Software Engineering and Artificial Intelligence. In the beginning the goal was to develop a "conversational computer", an automated system that could engage in a conversation in the same way as humans do. After studies showed the difficulties of achieving this goal, Formal Language Theory and Artificial Intelligence have contributed to Dialogue Theory with the study and simulation of machine-to-machine and human-to-machine dialogues inspired by linguistic studies of human interactions. The aim of our thesis is to propose a formal approach for the study of dialogues. Our work is interdisciplinary, connecting theories and results in Dialogue Theory mainly from Formal Language Theory, but also from other areas such as Artificial Intelligence, Linguistics and Multiprogramming. We contribute to Dialogue Theory by introducing a hierarchy of formal frameworks for the definition of protocols for dialogue interaction. Each framework defines a transition system in which dialogue protocols can be uniformly expressed and compared. The frameworks we propose are based on finite-state transition systems and Grammar systems from Formal Language Theory, and on a multi-agent language for the specification of dialogue protocols from Artificial Intelligence. Grammar System Theory is a subfield of Formal Language Theory that studies how several (a finite number of) language-defining devices (language processors or grammars) jointly develop a common symbolic environment (a string or a finite set of strings) by the application of language operations (for instance, rewriting rules). For the frameworks we propose, we study some of their formal properties, compare their expressiveness, investigate their practical application in Dialogue Theory, and analyze their connection with theories of human-like conversation from Linguistics. In addition, we contribute to Grammar System Theory by proposing a new approach for the verification and derivation of Grammar systems. We analyze possible advantages of interpreting grammars as multiprograms that are amenable to verification and derivation using the Owicki-Gries logic, a Hoare-based logic from the Multiprogramming field.
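    To make the transition-system view concrete, the toy Python sketch below encodes a small request-response dialogue protocol as a finite-state transition system and checks whether a sequence of dialogue moves conforms to it. The states and moves are illustrative, not the thesis's actual definitions.

```python
# Toy finite-state transition system for a request-response dialogue
# protocol; states, moves, and transitions are invented for illustration.
PROTOCOL = {
    ("start", "request"): "asked",
    ("asked", "accept"):  "done",
    ("asked", "reject"):  "done",
    ("asked", "clarify"): "start",   # ask again after clarification
}

def run_dialogue(moves, state="start"):
    """Check whether a sequence of dialogue moves is licensed by the protocol."""
    for move in moves:
        if (state, move) not in PROTOCOL:
            return False, state      # move not licensed in this state
        state = PROTOCOL[(state, move)]
    return state == "done", state

print(run_dialogue(["request", "clarify", "request", "accept"]))  # (True, 'done')
print(run_dialogue(["accept"]))                                   # (False, 'start')
```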

    Improving achievement in mathematics in primary and secondary schools


    Sensecape: Enabling Multilevel Exploration and Sensemaking with Large Language Models

    People are increasingly turning to large language models (LLMs) for complex information tasks like academic research or planning a move to another city. However, while these tasks often require working in a nonlinear manner (e.g., arranging information spatially to organize and make sense of it), current interfaces for interacting with LLMs are generally linear, built to support conversational interaction. To address this limitation and explore how we can support LLM-powered exploration and sensemaking, we developed Sensecape, an interactive system designed to support complex information tasks with an LLM by enabling users to (1) manage the complexity of information through multilevel abstraction and (2) seamlessly switch between foraging and sensemaking. Our within-subject user study reveals that Sensecape empowers users to explore more topics and structure their knowledge hierarchically. We contribute implications for LLM-based workflows and interfaces for information tasks.
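    The multilevel-abstraction idea can be sketched as repeatedly summarizing groups of items into higher-level nodes, as in the Python sketch below; summarize() is a stub standing in for an LLM call, and nothing here reflects Sensecape's actual prompts or interface code.

```python
# Sketch of multilevel abstraction over collected notes: each level holds
# summaries of the level below, so a user can move between detail and
# overview. summarize() is a hypothetical stand-in for an LLM call.
def summarize(texts):
    return "summary(" + "; ".join(texts) + ")"   # stub, not a real LLM

def build_levels(items, fanout=2):
    """Group items and summarize each group, repeating until one node remains."""
    levels = [items]
    while len(levels[-1]) > 1:
        below = levels[-1]
        groups = [below[i:i + fanout] for i in range(0, len(below), fanout)]
        levels.append([summarize(g) for g in groups])
    return levels   # levels[0] = raw notes, levels[-1] = single top summary

for depth, level in enumerate(build_levels(["note A", "note B", "note C"])):
    print(depth, level)
```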

    Intention Recognition With ProbLog

    In many scenarios where robots or autonomous systems may be deployed, the capacity to infer and reason about the intentions of other agents can improve the performance or utility of the system. For example, a smart home or assisted living facility is better able to select assistive services to deploy if it understands the goals of the occupants in advance. In this article, we present a framework for reasoning about intentions using probabilistic logic programming. We employ ProbLog, a probabilistic extension to Prolog, to infer the most probable intention given observations of the actions of the agent and sensor readings of important aspects of the environment. We evaluated our model on a domain modeling a smart home. The model achieved 0.75 accuracy at full observability and remained robust to reduced observability.
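    As a minimal sketch of this style of inference, the ProbLog model below (run through the problog Python package, assuming it is installed) computes a posterior over two candidate intentions given one sensor observation. The toy smart-home model is invented for illustration and is not the article's domain model.

```python
# Minimal sketch of ProbLog-style intention inference: a prior over
# intentions, observation rules conditioned on the intention, and a
# sensor reading as evidence. Toy model, assuming the problog package.
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.6::intends(make_tea); 0.4::intends(watch_tv).

% Observations are more likely under the matching intention.
0.9::in_kitchen :- intends(make_tea).
0.2::in_kitchen :- intends(watch_tv).

evidence(in_kitchen, true).
query(intends(make_tea)).
query(intends(watch_tv)).
""")

for term, prob in get_evaluatable().create_from(model).evaluate().items():
    print(term, round(prob, 3))   # posterior over intentions given evidence
```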

    An extensible framework for provenance in human terrain visual analytics

    We describe and demonstrate an extensible framework that supports data exploration and provenance in the context of Human Terrain Analysis (HTA). Working closely with defence analysts, we extract requirements and a list of features that characterise data analysed at the end of the HTA chain. From these, we select an appropriate non-classified data source with analogous features, and model it as a set of facets. We develop ProveML, an XML-based extension of the Open Provenance Model, using these facets and augment it with the structures necessary to record the provenance of data, analytical processes and interpretations. Through an iterative process, we develop and refine a prototype system for Human Terrain Visual Analytics (HTVA), and demonstrate means of storing, browsing and recalling analytical provenance and process through analytic bookmarks in ProveML. We show how these bookmarks can be combined to form narratives that link back to the live data. Throughout the process, we demonstrate that through structured workshops, rapid prototyping and structured communication with intelligence analysts we are able to establish requirements and design schemas, techniques and tools that meet the requirements of the intelligence community. We use the needs and reactions of defence analysts in defining and steering the methods to validate the framework.
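    To give a flavour of what an analytic bookmark might look like when serialized, the Python sketch below emits a small XML fragment; the element and attribute names are illustrative guesses, not ProveML's published schema.

```python
# Sketch of recording an analytic bookmark as XML, in the spirit of
# ProveML. Element and attribute names are hypothetical, chosen only
# to show provenance of process and interpretation side by side.
import xml.etree.ElementTree as ET

def make_bookmark(analyst, query, interpretation):
    bookmark = ET.Element("bookmark", analyst=analyst)
    ET.SubElement(bookmark, "process").text = query          # analytical step
    ET.SubElement(bookmark, "interpretation").text = interpretation
    return bookmark

bm = make_bookmark("analyst-1",
                   "filter: region = X AND year = 2011",
                   "activity clusters near major roads")
print(ET.tostring(bm, encoding="unicode"))
```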

    Continuous Rationale Management

    Continuous Software Engineering (CSE) is a software life cycle model open to frequent changes in requirements or technology. During CSE, software developers continuously make decisions on the requirements and design of the software or the development process. They establish essential decision knowledge, which they need to document and share so that it supports the evolution and changes of the software. The management of decision knowledge is called rationale management. Rationale management provides an opportunity to support the change process during CSE. However, rationale management is not well integrated into CSE. The overall goal of this dissertation is to provide workflows and tool support for continuous rationale management. The dissertation contributes an interview study with practitioners from industry, which investigates rationale management problems, current practices, and features for supporting continuous rationale management that practitioners would find beneficial. Problems of rationale management in practice are threefold: First, documenting decision knowledge is intrusive in the development process and an additional effort. Second, the high amount of distributed decision knowledge documentation is difficult to access and use. Third, the documented knowledge can be of low quality, e.g., outdated, which impedes its use. The dissertation contributes a systematic mapping study on recommendation and classification approaches to treat the rationale management problems. The major contribution of this dissertation is a validated approach for continuous rationale management consisting of the ConRat life cycle model extension and the comprehensive ConDec tool support. To reduce intrusiveness and additional effort, ConRat integrates rationale management activities into existing workflows, such as requirements elicitation, development, and meetings. ConDec integrates into standard development tools instead of providing a separate tool. ConDec enables lightweight capturing and use of decision knowledge from various artifacts and reduces the developers' effort through automatic text classification, recommendation, and nudging mechanisms for rationale management. To enable access and use of distributed decision knowledge documentation, ConRat defines a knowledge model of decision knowledge and other artifacts. ConDec instantiates the model as a knowledge graph and offers interactive knowledge views with useful tailoring, e.g., transitive linking. To operationalize high quality, ConRat introduces the rationale backlog, the definition of done for knowledge documentation, and metrics for intra-rationale completeness and decision coverage of requirements and code. ConDec implements these agile concepts for rationale management and a knowledge dashboard. ConDec also supports consistent changes through change impact analysis. The dissertation shows the feasibility, effectiveness, and user acceptance of ConRat and ConDec in six case study projects in an industrial setting. In addition, it comprehensively analyses the rationale documentation created in the projects. The validation indicates that ConRat and ConDec benefit CSE projects. Based on the dissertation, continuous rationale management should become a standard part of CSE, like automated testing or continuous integration.
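    A metric such as decision coverage can be sketched as a reachability check over the knowledge graph: the Python sketch below counts the share of requirements that reach a documented decision within k hops. The node names and hop limit are illustrative, not ConDec's actual defaults.

```python
# Sketch of a decision-coverage metric: the share of requirements that
# reach a documented decision within k hops of the knowledge graph.
# Graph contents and the hop limit are invented for illustration.
from collections import deque

def within_hops(graph, start, targets, k):
    """Breadth-first search: is any target reachable from start in <= k hops?"""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node in targets:
            return True
        if dist < k:
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, dist + 1))
    return False

graph = {"REQ-1": ["WORKITEM-7"], "WORKITEM-7": ["DECISION-3"], "REQ-2": []}
requirements, decisions = ["REQ-1", "REQ-2"], {"DECISION-3"}
covered = sum(within_hops(graph, r, decisions, k=2) for r in requirements)
print(f"decision coverage: {covered / len(requirements):.0%}")  # 50%
```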

    Systemic classification of concern-based design methods in the context of enterprise architecture

    Enterprise Architecture (EA) is a relatively new domain that is rapidly developing. "The primary reason for developing EA is to support business by providing the fundamental technology and process structure for an IT strategy" [TOGAF]. EA models have to model enterprise facets that span from marketing to IT. As a result, EA models tend to become large. Large EA models create a problem for model management. Concern-based design methods (CBDMs) aim to solve this problem by considering EA models as a composition of smaller, manageable parts: concerns. There are dozens of different CBDMs that can be used in the context of EA: from very generic methods to specific methods for business modeling or IT implementations. This variety of methods can cause two problems for those who develop and use innovative CBDMs in the field of EA. The first problem is to choose specific CBDMs that can be used in a given EA methodology: this is a problem for researchers who develop their own EA methodology. The second problem is to find similar methods (with the same problem domain or with similar frameworks) in order to make a comparative analysis with these methods: this is a problem for researchers who develop their own CBDMs related to a specific problem domain in EA (such as business process modeling or aspect-oriented programming). We aim to address both of these problems by means of a definition of generic requirements for CBDMs based on systems inquiry. We use these requirements to classify twenty CBDMs in the context of EA. We conclude with a short discussion about trends that we have observed in the field of concern-based design and modeling.

    ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries

    This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries, studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions; other WGs study normal variation due to sex and gender differences, or due to development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of "big data" (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA's activities and ongoing projects, and describe the successes and challenges encountered along the way. We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors.