
    BiTRDF: Extending RDF for BiTemporal Data

    The Internet is not only a platform for communication, transactions, and cloud storage; it is also a large knowledge store where people as well as machines can create, manipulate, infer, and make use of data and knowledge. The Semantic Web was developed for this purpose: it aims to help machines understand the meaning of data and knowledge so that machines can use them in decision making. The Resource Description Framework (RDF) forms the foundation of the Semantic Web, which is organized as the Semantic Web Layer Cake. RDF is limited in that it can only express binary relationships in the form of <subject, predicate, object> triples; expressing higher-order relationships requires reification, which is very cumbersome. Time-varying data is very common and cannot be represented by binary relationships alone. We first surveyed approaches that use reification or extend RDF to express higher-order relationships. Then we proposed a new data model, BiTemporal RDF (BiTRDF), that incorporates both valid time and transaction time explicitly into standard RDF resources. We defined the BiTRDF model with its elements, vocabulary, semantics, and entailment, as well as the BiTemporal SPARQL (BiT-SPARQL) query language. We discussed the foundation for implementing BiTRDF and also explored different approaches to implementing the BiTRDF model. We concluded this thesis with potential research directions. This thesis lays the foundation for a new approach to easily embedding one or more additional dimensions, such as temporal data, spatial data, probabilistic data, and confidence levels.
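    For illustration only, the sketch below uses rdflib and standard RDF reification (the workaround the abstract describes as cumbersome) to attach valid-time and transaction-time intervals to a single triple; the `bit:` namespace and its property names are invented for this example and are not the BiTRDF vocabulary defined in the thesis.

```python
# Hypothetical sketch: annotating one RDF statement with valid time and
# transaction time via standard reification. The "bit:" vocabulary below is
# invented for illustration; it is NOT the BiTRDF vocabulary from the thesis.
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
BIT = Namespace("http://example.org/bitemporal#")  # assumed namespace

g = Graph()

# The base binary fact: ex:alice ex:worksFor ex:acme
s, p, o = EX.alice, EX.worksFor, EX.acme
g.add((s, p, o))

# Reify the statement so the temporal annotations have something to attach to.
stmt = BNode()
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, s))
g.add((stmt, RDF.predicate, p))
g.add((stmt, RDF.object, o))

# Valid time: when the fact holds in the modeled world.
g.add((stmt, BIT.validFrom, Literal("2020-01-01", datatype=XSD.date)))
g.add((stmt, BIT.validTo, Literal("2022-06-30", datatype=XSD.date)))

# Transaction time: when the fact was recorded in (and logically deleted from) the store.
g.add((stmt, BIT.transactionFrom, Literal("2020-01-05", datatype=XSD.date)))
g.add((stmt, BIT.transactionTo, Literal("9999-12-31", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```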

    Creating and Capturing Artificial Emotions in Autonomous Robots and Software Agents

    This paper presents ARTEMIS, a control system for autonomous robots or software agents. ARTEMIS is able to create and capture artificial emotions during interactions with its environment, and we describe the underlying mechanisms for this. The control system also captures knowledge about its past artificial emotions. A specific interpretation of a knowledge graph, called an Agent Knowledge Graph, represents these artificial emotions. For this, we devise a formalism which enriches the traditional factual knowledge in knowledge graphs with the representation of artificial emotions. As proof of concept, we realize a concrete software agent based on the ARTEMIS control system. This software agent acts as a user assistant and executes the user's orders. The environment of this user assistant consists of autonomous service agents, and executing the user's orders requires interaction with them. These interactions lead to artificial emotions within the assistant. The first experiments show that with ARTEMIS it is possible to realize an autonomous agent with plausible artificial emotions and to record these artificial emotions in its Agent Knowledge Graph. In this way, autonomous agents based on ARTEMIS can capture essential knowledge that supports successful planning and decision making in complex dynamic environments and surpass emotionless agents.
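    As a loose illustration of the general idea (not the paper's actual Agent Knowledge Graph formalism), the following Python sketch attaches a simple, assumed emotion record to factual edges and queries past facts by emotion label; all class and field names are assumptions.

```python
# Illustrative sketch only: one way to attach an artificial-emotion record to a
# factual edge in an agent's knowledge graph. Names are assumptions, not the
# ARTEMIS / Agent Knowledge Graph formalism from the paper.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class EmotionAnnotation:
    label: str        # e.g. "frustration", "satisfaction"
    intensity: float  # assumed range [0.0, 1.0]
    timestamp: float  # when the interaction triggered the emotion


@dataclass
class AnnotatedEdge:
    subject: str
    predicate: str
    obj: str
    emotions: list[EmotionAnnotation] = field(default_factory=list)


class AgentKnowledgeGraph:
    """Stores factual edges together with the emotions felt when acquiring them."""

    def __init__(self) -> None:
        self.edges: list[AnnotatedEdge] = []

    def record(self, subject: str, predicate: str, obj: str,
               emotion: EmotionAnnotation) -> None:
        self.edges.append(AnnotatedEdge(subject, predicate, obj, [emotion]))

    def recall(self, label: str) -> list[AnnotatedEdge]:
        """Return past facts associated with a given emotion label."""
        return [e for e in self.edges
                if any(a.label == label for a in e.emotions)]


# Example: the assistant remembers that a service agent failed an order.
akg = AgentKnowledgeGraph()
akg.record("serviceAgent42", "failedToDeliver", "order17",
           EmotionAnnotation("frustration", 0.8, timestamp=1712.5))
print(akg.recall("frustration"))
```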

    A systematic literature review on Wikidata

    To review the current status of research on Wikidata and, in particular, of articles that either describe applications of Wikidata or provide empirical evidence, in order to uncover the topics of interest, the fields that are benefiting from its applications, and the researchers and institutions that are leading the work.

    Temporal Data Modeling and Reasoning for Information Systems

    Temporal knowledge representation and reasoning is a major research field in Artificial Intelligence, in Database Systems, and in Web and Semantic Web research. The ability to model and process time and calendar data is essential for many applications such as appointment scheduling, planning, Web services, temporal and active database systems, adaptive Web applications, and mobile computing applications. This article aims at three complementary goals: first, to provide a general background in temporal data modeling and reasoning approaches; second, to serve as an orientation guide for further specific reading; and third, to point to new application fields and research perspectives on temporal knowledge representation and reasoning in the Web and Semantic Web.

    Tracing the Compositional Process. Sound art that rewrites its own past: formation, praxis and a computer framework

    The domain of this thesis is electroacoustic computer-based music and sound art. It investigates a facet of composition which is often neglected or ill-defined: the process of composing itself and its embedding in time. Previous research mostly focused on instrumental composition or, when electronic music was included, the computer was treated as a tool which would eventually be subtracted from the equation. The aim was either to explain a resultant piece of music by reconstructing the intention of the composer, or to explain human creativity by building a model of the mind. Our aim instead is to understand composition as an irreducible unfolding of material traces which takes place in its own temporality. This understanding is formalised as a software framework that traces creation time as a version graph of transactions. The instantiation and manipulation of any musical structure implemented within this framework is thereby automatically stored in a database. Not only can it be queried ex post by an external researcher—providing a new quality for the empirical analysis of the activity of composing—but it is an integral part of the composition environment. Therefore it can recursively become a source for the ongoing composition and introduce new ways of aesthetic expression. The framework aims to unify creation and performance time, fixed and generative composition, human and algorithmic “writing”, a writing that includes indeterminate elements which condense as concurrent vertices in the version graph. The second major contribution is a critical epistemological discourse on the question of observability and the function of observation. Our goal is to explore a new direction of artistic research which is characterised by a mixed methodology of theoretical writing, technological development and artistic practice. The form of the thesis is an exercise in becoming process-like itself, wherein the epistemic thing is generated by translating the gaps between these three levels. This is my idea of the new aesthetics: that through the operation of a re-entry one may establish a sort of process “form”, yielding works which go beyond a categorical either “sound-in-itself” or “conceptualism”. Exemplary processes are revealed by deconstructing a series of existing pieces, as well as through the successful application of the new framework in the creation of new pieces.
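    The following Python sketch illustrates, in a deliberately simplified form, the notion of tracing composition as a version graph of transactions in which concurrent edits appear as separate vertices and can later be merged; all class and method names are assumptions and do not reproduce the framework described in the thesis.

```python
# Simplified illustration of a version graph of transactions: every edit to a
# musical structure becomes a node that references its parent version(s).
# This is a sketch of the general concept, not the thesis's framework API.
from dataclasses import dataclass
import itertools

_counter = itertools.count()


@dataclass(frozen=True)
class Transaction:
    version_id: int
    parents: tuple[int, ...]   # merges have several parents; concurrent edits share one parent
    edits: tuple[str, ...]     # opaque description of the applied edits


class VersionGraph:
    def __init__(self) -> None:
        self.nodes: dict[int, Transaction] = {}

    def commit(self, edits: list[str], parents: tuple[int, ...] = ()) -> int:
        vid = next(_counter)
        self.nodes[vid] = Transaction(vid, parents, tuple(edits))
        return vid

    def history(self, version_id: int) -> list[Transaction]:
        """Walk back through all ancestors of a version (an ex-post query)."""
        seen, stack, out = set(), [version_id], []
        while stack:
            vid = stack.pop()
            if vid in seen:
                continue
            seen.add(vid)
            tx = self.nodes[vid]
            out.append(tx)
            stack.extend(tx.parents)
        return out


g = VersionGraph()
root = g.commit(["create timeline"])
a = g.commit(["insert sound object A"], parents=(root,))
b = g.commit(["insert sound object B"], parents=(root,))   # concurrent vertex
merged = g.commit(["merge A and B"], parents=(a, b))
print([tx.edits for tx in g.history(merged)])
```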

    Advanced Analysis on Temporal Data

    Due to the increase in CPU power and ever-increasing data storage capabilities, more and more data of all kinds is recorded, including temporal data. Time series, the most prevalent type of temporal data, are derived in a broad range of application domains. Prominent examples include stock price data in economics, gene expression data in biology, the course of environmental parameters in meteorology, or data of moving objects recorded by traffic sensors. This large amount of raw data can only be analyzed by automated data mining algorithms in order to generate new knowledge. One of the most basic data mining operations is the similarity query, which computes a similarity or distance value for two objects. Two aspects of such a similarity function are of special interest: first, the semantics of the similarity function, and second, the computational cost of calculating a similarity value. The semantics is the actual similarity notion and is highly dependent on the analysis task at hand. This thesis addresses both aspects. We introduce a number of new similarity measures for time series data and show how they can be calculated efficiently by means of index structures and query algorithms. The first of the new similarity measures is threshold-based. Two time series are considered similar if they exceed a user-given threshold during similar time intervals. Aside from formally defining this similarity measure, we show how to represent time series in such a way that threshold-based queries can be calculated efficiently. Our representation allows for the specification of the threshold value at query time. This is useful, for example, for data mining tasks that try to determine crucial thresholds. The next similarity measure considers a relevant amplitude range. This range is scanned with a certain resolution, and for each considered amplitude value, features are extracted. We consider the change in the feature values over the amplitude values and thus generate so-called feature sequences. Different features can finally be combined to answer amplitude-level-based similarity queries. In contrast to traditional approaches, which aggregate global feature values along the time dimension, we capture local characteristics and monitor their change for different amplitude values. Furthermore, our method enables the user to specify a relevant range of amplitude values to be considered, so the similarity notion can be adapted to the current requirements. Next, we introduce so-called interval-focused similarity queries. A user can specify one or several time intervals that should be considered for the calculation of the similarity value. Our main focus for this similarity measure was the efficient support of the corresponding query. In particular, we try to avoid loading complete time series objects into main memory if only a relatively small portion of a time series is of interest. We propose a time series representation which can be used to calculate upper and lower distance bounds, so that only a few time series objects have to be completely loaded and refined. Again, the relevant time intervals do not have to be known in advance. Finally, we define a similarity measure for so-called uncertain time series, where several amplitude values are given for each point in time. This can be due to multiple recordings or to errors in measurements, so that no exact value can be specified. We show how to efficiently support queries on uncertain time series.
    The last part of this thesis shows how data mining methods can be used to discover crucial threshold parameters for the threshold-based similarity measure. Furthermore, we present a data mining tool for time series.
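    As a rough sketch of the threshold-based notion (not the thesis's exact distance definition or its index support), the following Python example treats two time series as similar when the time regions in which they exceed a user-given threshold overlap strongly; the threshold can be varied at query time.

```python
# Sketch of a threshold-based time series similarity: two series are compared
# by the time regions in which they exceed a user-given threshold. This is an
# illustrative Jaccard-style overlap, not the thesis's exact distance measure.
import numpy as np


def exceed_mask(series: np.ndarray, threshold: float) -> np.ndarray:
    """Boolean mask of the time points at which the series exceeds the threshold."""
    return series > threshold


def threshold_similarity(a: np.ndarray, b: np.ndarray, threshold: float) -> float:
    """Overlap of above-threshold regions (1.0 = identical regions, 0.0 = disjoint)."""
    ma, mb = exceed_mask(a, threshold), exceed_mask(b, threshold)
    union = np.logical_or(ma, mb).sum()
    if union == 0:                       # neither series ever exceeds the threshold
        return 1.0
    return np.logical_and(ma, mb).sum() / union


t = np.linspace(0, 2 * np.pi, 200)
s1 = np.sin(t)
s2 = np.sin(t + 0.2)                     # slightly shifted copy of s1

# The threshold is a query-time parameter, e.g. when searching for a crucial value:
for th in (0.0, 0.5, 0.9):
    print(th, threshold_similarity(s1, s2, th))
```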

    Remote Sensing Approaches and Related Techniques to Map and Study Landslides

    Landslides are among the costliest and most fatal geological hazards, threatening and influencing socioeconomic conditions in many countries globally. Remote sensing approaches are widely used in landslide studies. Landslide threats can also be investigated through slope stability modeling, susceptibility mapping, hazard assessment, risk analysis, and other methods. Although it is possible to conduct landslide studies using in-situ observation, this is time-consuming, expensive, and sometimes challenging where data must be collected in inaccessible terrain. Remote sensing data can be used in landslide monitoring, mapping, hazard prediction and assessment, and other investigations. The primary goal of this chapter is to review the existing remote sensing approaches and techniques used to study landslides and to explore potential remote sensing tools that can be used effectively in landslide studies in the future. This chapter also provides critical and comprehensive reviews of landslide studies focusing on the role played by remote sensing data and approaches in landslide hazard assessment. Further, the reviews discuss the application of remotely sensed products for landslide detection, mapping, prediction, and evaluation around the world. This systematic review may contribute to a better understanding of the extensive use of remotely sensed data and spatial analysis techniques in landslide studies at a range of scales.

    Towards a data-driven treatment of epilepsy: computational methods to overcome low-data regimes in clinical settings

    Epilepsy is the most common neurological disorder, affecting around 1% of the population. One third of patients with epilepsy are drug-resistant. If the epileptogenic zone (EZ) can be localized precisely, curative resective surgery may be performed. However, only 40 to 70% of patients remain seizure-free after surgery. Presurgical evaluation, which in part aims to localize the EZ, is a complex multimodal process that requires subjective clinical decisions, often relying on a multidisciplinary team's experience. Thus, the clinical pathway could benefit from data-driven methods for clinical decision support. In the last decade, deep learning has seen great advances due to the improvement of graphics processing units (GPUs), the development of new algorithms, and the large amounts of data that have become available for training. However, using deep learning in clinical settings is challenging, as large datasets are rare due to privacy concerns and expensive annotation processes. Methods to overcome the lack of data are especially important in the context of presurgical evaluation of epilepsy, as only a small proportion of patients with epilepsy end up undergoing surgery, which limits the availability of data to learn from. This thesis introduces computational methods that pave the way towards integrating data-driven methods into the clinical pathway for the treatment of epilepsy, overcoming the challenge presented by the relatively small datasets available. We used transfer learning from general-domain human action recognition to characterize epileptic seizures from video–telemetry data. We developed a software framework to predict the location of the epileptogenic zone given seizure semiologies, based on retrospective information from the literature. We trained deep learning models using self-supervised and semi-supervised learning to perform quantitative analysis of resective surgery by segmenting resection cavities on brain magnetic resonance images (MRIs). Throughout our work, we shared datasets and software tools that will accelerate research in medical image computing, particularly in the field of epilepsy.
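    As an illustrative sketch of the transfer-learning step described above, the following Python example adapts a pretrained 3D-ResNet action-recognition model from torchvision by freezing its backbone and replacing its classification head; the model choice, number of target classes, and clip shape are assumptions, not the thesis's actual pipeline.

```python
# Illustrative sketch: adapting a video action-recognition backbone to a small
# seizure-classification dataset by replacing its classification head.
# The specific model, class count, and input shape are assumptions; this is
# not the thesis's actual training pipeline.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

num_semiology_classes = 4                      # assumed number of target labels

model = r3d_18(weights=R3D_18_Weights.DEFAULT)  # 3D ResNet pretrained on Kinetics
for param in model.parameters():                # freeze the general-domain features
    param.requires_grad = False

# Replace the action-recognition head with a small, trainable classification head.
model.fc = nn.Linear(model.fc.in_features, num_semiology_classes)

# Dummy forward pass: batch of 2 clips, 3 channels, 16 frames, 112x112 pixels.
clips = torch.randn(2, 3, 16, 112, 112)
logits = model(clips)
print(logits.shape)                             # torch.Size([2, 4])
```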

    Quantitative Multimodal Mapping Of Seizure Networks In Drug-Resistant Epilepsy

    Over 15 million people worldwide suffer from localization-related drug-resistant epilepsy. These patients are candidates for targeted surgical therapies such as surgical resection, laser thermal ablation, and neurostimulation. While seizure localization is needed prior to surgical intervention, this process is challenging, invasive, and often inconclusive. In this work, I aim to exploit the power of multimodal high-resolution imaging and intracranial electroencephalography (iEEG) data to map seizure networks in drug-resistant epilepsy patients, with a focus on minimizing invasiveness. Given compelling evidence that epilepsy is a disease of distorted brain networks as opposed to well-defined focal lesions, I employ a graph-theoretical approach to map structural and functional brain networks and identify putative targets for removal. The first section focuses on mesial temporal lobe epilepsy (TLE), the most common type of localization-related epilepsy. Using high-resolution structural and functional 7T MRI, I demonstrate that noninvasive neuroimaging-based network properties within the medial temporal lobe can serve as useful biomarkers for TLE cases in which conventional imaging and volumetric analysis are insufficient. The second section expands to all forms of localization-related epilepsy. Using iEEG recordings, I provide a framework for the utility of interictal network synchrony in identifying candidate resection zones, with the goal of reducing the need for prolonged invasive implants. In the third section, I generate a pipeline for integrated analysis of iEEG and MRI networks, paving the way for future large-scale studies that can effectively harness the synergy between different modalities. This multimodal approach has the potential to provide fundamental insights into the pathology of an epileptic brain, robustly identify areas of seizure onset and spread, and ultimately inform clinical decision making.
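    A minimal sketch of the graph-theoretical flavor of this approach: build a functional network from channel-wise correlations of an iEEG window, threshold it into a weighted graph, and rank channels by node strength. The synchrony measure, threshold value, and node metric below are assumptions rather than the dissertation's actual methods.

```python
# Illustrative sketch: build a functional network from an iEEG window by
# correlating channels, threshold the correlations into weighted edges, and
# compute a simple node-strength metric. Values are assumptions, not the
# dissertation's actual methods.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 1000
ieeg = rng.standard_normal((n_channels, n_samples))   # stand-in for a recording window

# Functional connectivity: absolute Pearson correlation between channels.
fc = np.abs(np.corrcoef(ieeg))
np.fill_diagonal(fc, 0.0)

# Keep only the strongest connections (assumed threshold of 0.1 for this toy data).
adjacency = np.where(fc > 0.1, fc, 0.0)
G = nx.from_numpy_array(adjacency)

# Rank channels by weighted degree ("node strength") as candidate network hubs.
strength = dict(G.degree(weight="weight"))
ranked = sorted(strength.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```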