
    CAREER: Data Management for Ad-Hoc Geosensor Networks

    This project explores data management methods for geosensor networks, i.e., large collections of very small, battery-driven sensor nodes deployed in the geographic environment that measure the temporal and spatial variations of physical quantities such as temperature or ozone levels. An important task of such geosensor networks is to collect, analyze, and estimate information about continuous phenomena under observation, such as a toxic cloud near a chemical plant, in real time and in an energy-efficient way. The main thrust of this project is the integration of spatial data analysis techniques with in-network query execution in sensor networks. The project investigates novel algorithms such as incremental, in-network kriging, which recasts a traditional, highly compute-intensive spatial estimation method for distributed, collaborative, and incremental processing among tiny, energy- and bandwidth-constrained sensor nodes. This work includes modeling the location and sensing characteristics of sensor devices with regard to observed phenomena, supporting spatio-temporal estimation queries, and developing in-network data aggregation algorithms for complex spatial estimation queries. Combining high-level data query interfaces with advanced spatial analysis methods will allow domain scientists to use sensor networks effectively for environmental observation. The project has a broad impact on the community, involving undergraduate and graduate students in spatial database research at the University of Maine, and it is a key component of a current IGERT program in the areas of sensor materials, sensor devices and sensors. More information about this project, including publications, simulation software, and empirical studies, is available on the project's web site (http://www.spatial.maine.edu/~nittel/career/).
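
    To make the kriging step concrete, the following is a minimal sketch of classical ordinary kriging at a single query point, computed centrally; the exponential variogram and its sill/range parameters are illustrative assumptions, not the project's in-network formulation, which distributes this computation across resource-constrained nodes.

        # Minimal sketch of ordinary kriging at one query point (centralized,
        # for illustration only; the project's contribution is an in-network,
        # incremental variant of this computation).
        import numpy as np

        def variogram(h, sill=1.0, rng=50.0):
            # Exponential variogram model; sill and range are assumed values.
            return sill * (1.0 - np.exp(-h / rng))

        def ordinary_kriging(coords, values, query):
            """coords: (n, 2) sensor locations, values: (n,) readings, query: (2,) point."""
            n = len(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = variogram(d)
            A[n, n] = 0.0                      # Lagrange multiplier row/column
            b = np.ones(n + 1)
            b[:n] = variogram(np.linalg.norm(coords - query, axis=1))
            w = np.linalg.solve(A, b)[:n]      # kriging weights (sum to 1)
            return float(w @ values)           # estimated value at the query point

        # Example: estimate a reading at (25, 25) from four nearby sensors.
        coords = np.array([[0.0, 0.0], [0.0, 50.0], [50.0, 0.0], [50.0, 50.0]])
        values = np.array([20.1, 21.4, 19.8, 22.0])
        print(ordinary_kriging(coords, values, np.array([25.0, 25.0])))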

    A markov-model-based framework for supporting real-time generation of synthetic memory references effectively and efficiently

    Driven by several real-life case studies and in-lab developments, synthetic memory reference generation has a long tradition in computer science research. The goal is to reproduce the memory behavior of an arbitrary program so that the generated traces can later be used for simulations and experiments. In this paper we investigate this research context and provide the principles and algorithms of a Markov-model-based framework for supporting real-time generation of synthetic memory references effectively and efficiently. Specifically, our approach is based on a novel machine learning algorithm we call the Hierarchical Hidden/non-Hidden Markov Model (HHnHMM). Experimental results conclude the paper.
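
    As a rough illustration of the general approach, not the paper's HHnHMM, the sketch below trains a plain first-order Markov chain on an observed page-reference trace and replays it to emit a synthetic trace; the toy trace and page numbering are assumptions made only for the example.

        # First-order Markov chain over memory pages: learn transition
        # statistics from an observed trace, then generate a synthetic one.
        # This is a simplified stand-in for hierarchical models like HHnHMM.
        import random
        from collections import defaultdict

        def train(trace):
            # Count page-to-page transitions in the observed reference trace.
            counts = defaultdict(lambda: defaultdict(int))
            for cur, nxt in zip(trace, trace[1:]):
                counts[cur][nxt] += 1
            # Normalize each row of counts into transition probabilities.
            model = {}
            for cur, nxts in counts.items():
                total = sum(nxts.values())
                model[cur] = {nxt: c / total for nxt, c in nxts.items()}
            return model

        def generate(model, start, length, rng=random.Random(0)):
            # Walk the chain to emit a synthetic reference stream.
            out, cur = [start], start
            for _ in range(length - 1):
                nxts = model.get(cur)
                if not nxts:                     # dead end: restart from the seed page
                    cur = start
                    nxts = model[cur]
                cur = rng.choices(list(nxts), weights=list(nxts.values()))[0]
                out.append(cur)
            return out

        observed = [0, 1, 2, 1, 2, 3, 0, 1, 2, 3, 3, 0]   # toy page-reference trace
        model = train(observed)
        print(generate(model, start=0, length=20))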

    On the edges of clustering


    The Translational Medicine Ontology and Knowledge Base: driving personalized medicine by bridging the gap between bench and bedside

    Background: Translational medicine requires the integration of knowledge using heterogeneous data from health care to the life sciences. Here, we describe a collaborative effort to produce a prototype Translational Medicine Knowledge Base (TMKB) capable of answering questions relating to clinical practice and pharmaceutical drug discovery.

    Results: We developed the Translational Medicine Ontology (TMO) as a unifying ontology to integrate chemical, genomic and proteomic data with disease, treatment, and electronic health records. We demonstrate the use of Semantic Web technologies in the integration of patient and biomedical data, and reveal how such a knowledge base can aid physicians in providing tailored patient care and facilitate the recruitment of patients into active clinical trials. Thus, patients, physicians and researchers may explore the knowledge base to better understand therapeutic options, efficacy, and mechanisms of action.

    Conclusions: This work takes an important step in using Semantic Web technologies to facilitate integration of relevant, distributed, external sources and progress towards a computational platform to support personalized medicine.

    Availability: TMO can be downloaded from http://code.google.com/p/translationalmedicineontology and TMKB can be accessed at http://tm.semanticscience.org/sparql.
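
    As a usage note, a SPARQL endpoint such as the one listed above can be queried programmatically; the sketch below assumes the Python SPARQLWrapper library and a reachable endpoint, and issues only a generic triple-pattern query rather than one tied to specific TMO classes or predicates.

        # Hedged sketch: query the TMKB SPARQL endpoint with SPARQLWrapper.
        # The generic SELECT pattern is illustrative, not TMO-specific.
        from SPARQLWrapper import SPARQLWrapper, JSON

        endpoint = SPARQLWrapper("http://tm.semanticscience.org/sparql")
        endpoint.setQuery("""
            SELECT ?s ?p ?o
            WHERE { ?s ?p ?o }
            LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)

        results = endpoint.query().convert()
        for row in results["results"]["bindings"]:
            print(row["s"]["value"], row["p"]["value"], row["o"]["value"])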

    Annual Report 2020-2021

    LETTER FROM THE DEAN

    As I write this letter during the beginning of the 2021–22 academic year, we have started to welcome the majority of our students to campus—many for the very first time, and some for the first time in a year and a half. It has been wonderful to be together, in-person, again. Four quarters of learning and working remotely was challenging, to be sure, but I have been consistently amazed by the resilience, innovation, and hard work of our students, faculty, and staff, even in the most difficult of circumstances. This annual report, covering the 2020–21 academic year—one that was entirely virtual—highlights many of those examples: from a second place national ranking by our Security Daemons team to hosting a blockbuster virtual screenwriting conference with top talent; from gaming grants helping us reach historically excluded youth to alumni successes across our three schools.

    Recently, I announced that, after 40 years at DePaul and 15 years as the Dean of CDM, I will be stepping down from the deanship at the end of the 2021–22 academic year. I began my tenure at DePaul in 1981 as an assistant professor, with the founding of the Department of Computer Science, joining seven faculty members who were leaving the mathematics department for this new venture. It has been amazing to watch our college grow during that time. We now have more than 40 undergraduate and graduate degree programs, over 22,000 college alumni, and a catalog of nationally ranked programs. And we plan to keep going. If there is anything I’ve learned at CDM, it’s that a lot can be accomplished in a year (as this report shows), and I’m committed to working hard and continuing the progress we’ve made together in 2021–22.

    David Miller
    Dean

    An Overview of Moving Object Trajectory Compression Algorithms

    Compression technology is an efficient way to retain useful and valuable data while removing redundant and inessential data from datasets. With the development of RFID and GPS devices, more and more moving objects can be traced and their trajectories recorded. However, the exponential increase in the amount of such trajectory data has caused a series of problems in the storage, processing, and analysis of data. Moving object trajectory compression has therefore become one of the hotspots in moving object data mining. To provide an overview, we survey the development and trends of moving object trajectory compression and analyze typical compression algorithms presented in recent years. In this paper, we first summarize the strategies and implementation processes of classical moving object compression algorithms. Second, we discuss the related definitions of moving objects and their trajectories. Third, we introduce validation criteria for evaluating the performance and efficiency of compression algorithms. Finally, we summarize some application scenarios to point out potential applications in the future. It is hoped that this research will serve as a stepping stone for those interested in advancing moving object data mining.
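
    For illustration, the sketch below implements Douglas-Peucker line simplification, one of the classical algorithms commonly discussed in this context; treating coordinates as planar and the chosen tolerance are simplifying assumptions made only for the example.

        # Douglas-Peucker simplification: keep only points whose perpendicular
        # deviation from the current chord exceeds a tolerance epsilon.
        def perpendicular_distance(p, a, b):
            # Distance from point p to the line through segment endpoints a and b.
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            if dx == dy == 0:
                return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
            return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

        def douglas_peucker(points, epsilon):
            if len(points) < 3:
                return list(points)
            # Find the point farthest from the chord joining the endpoints.
            dists = [perpendicular_distance(p, points[0], points[-1]) for p in points[1:-1]]
            idx, dmax = max(enumerate(dists, start=1), key=lambda t: t[1])
            if dmax <= epsilon:
                return [points[0], points[-1]]      # whole run is within tolerance
            left = douglas_peucker(points[:idx + 1], epsilon)
            right = douglas_peucker(points[idx:], epsilon)
            return left[:-1] + right                # drop the duplicated split point

        trajectory = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
        print(douglas_peucker(trajectory, epsilon=1.0))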

    A framework to support the annotation, discovery and evaluation of data in ecology, for a better visibility and reuse of data and an increased societal value gained from environmental projects

    This dissertation is centrally concerned with the use of metadata in the everyday, data-related workflows of ecologists. The work addresses the creation of a framework to support the annotation of ecological data, the efficient discovery of ecological data in databases, and the integration of metadata during data analysis. It further covers the documentation of analyses as well as the evaluation of metadata to develop tools that prepare information about ecological projects. This information can be used to evaluate and maximize the societal value gained from such projects. The thesis is written in English as a cumulative dissertation and is based on two first-author publications and one manuscript prepared for submission.

    Measuring Expressive Music Performances: a Performance Science Model using Symbolic Approximation

    Music Performance Science (MPS), sometimes termed systematic musicology in Northern Europe, is concerned with designing, testing, and applying quantitative measurements to music performances. It has applications in art music, jazz, and other genres. It is least concerned with aesthetic judgements or with ontological considerations of artworks that stand apart from their instantiations in performances. Musicians deliver expressive performances by manipulating multiple, simultaneous variables including, but not limited to, tempo, acceleration and deceleration, dynamics, rates of change of dynamic levels, intonation, and articulation. There are significant complexities in handling multivariate music datasets at scale. A critical issue in analyzing any type of large dataset is that the likelihood of detecting meaningless relationships grows as more dimensions are included. One possible choice is to create algorithms that address both volume and complexity. Another, and the approach chosen here, is to apply techniques that reduce both the dimensionality and the numerosity of the music datasets while assuring the statistical significance of the results. This dissertation describes a flexible computational model, based on symbolic approximation of time series, that can extract time-related characteristics of music performances and generate performance fingerprints (dissimilarities from an ‘average performance’) to be used for comparative purposes. The model is applied to recordings of Arnold Schoenberg’s Phantasy for Violin with Piano Accompaniment, Opus 47 (1949), having initially been validated on Chopin Mazurkas. The results are subsequently used to test hypotheses about the evolution of performance styles of the Phantasy since its composition. It is hoped that further research will examine other works and types of music in order to improve this model and make it useful to other music researchers. In addition to its benefits for performance analysis, it is suggested that the model has clear applications at least in music fraud detection, Music Information Retrieval (MIR), and pedagogical applications for music education.
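
    To illustrate the symbolic-approximation idea in general terms, the sketch below applies SAX (z-normalization, piecewise aggregate approximation, Gaussian breakpoints) to a toy tempo curve; the segment count, alphabet size, and input data are illustrative assumptions, not the dissertation's actual settings.

        # Symbolic Aggregate approXimation (SAX) of a short performance curve:
        # z-normalize, average over segments (PAA), then map segment means to
        # letters using equiprobable breakpoints of the standard normal.
        import numpy as np
        from scipy.stats import norm

        def sax(series, n_segments=8, alphabet_size=4):
            x = np.asarray(series, dtype=float)
            x = (x - x.mean()) / (x.std() + 1e-12)            # z-normalize
            # Piecewise Aggregate Approximation: mean of each chunk.
            paa = np.array([chunk.mean() for chunk in np.array_split(x, n_segments)])
            # Breakpoints splitting N(0, 1) into equiprobable regions.
            breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
            symbols = np.searchsorted(breakpoints, paa)       # 0..alphabet_size-1
            return "".join(chr(ord("a") + s) for s in symbols)

        # Example: a synthetic tempo curve that slows down and speeds back up.
        tempo = [120, 118, 112, 100, 92, 90, 95, 108, 116, 121, 119, 117]
        print(sax(tempo))                                      # prints an 8-letter word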