
    Research issues in real-time database systems

    Today's real-time systems are characterized by managing large volumes of data. Efficient database management algorithms for accessing and manipulating data are required to satisfy the timing constraints of supported applications. Real-time database systems constitute a new research area investigating possible ways of applying database systems technology to real-time systems. Management of real-time information through a database system requires the integration of concepts from both real-time systems and database systems. New criteria need to be developed to incorporate the timing constraints of real-time applications into many database system design issues, such as transaction/query processing, data buffering, and CPU and I/O scheduling. In this paper, a basic understanding of the issues in real-time database systems is provided and the research efforts in this area are introduced. Different approaches to various problems of real-time database systems are briefly described, and possible future research directions are discussed.

    Processing real-time transactions in a replicated database system

    A database system supporting a real-time application has to provide real-time information to the executing transactions. Each real-time transaction is associated with a timing constraint, typically in the form of a deadline. It is difficult to satisfy all timing constraints due to the consistency requirements of the underlying database. Transaction scheduling aims to complete as many transactions as possible within their deadlines. Replicated database systems possess desirable features for real-time applications, such as a high level of data availability and potentially improved response time for queries. On the other hand, multiple-copy updates lead to considerable overhead due to the communication required among the data sites holding the copies. In this paper, we investigate the impact of storing multiple copies of data on satisfying the timing constraints of real-time transactions. A detailed performance model of a distributed database system is employed in evaluating the effects of various workload parameters and design alternatives on system performance. The performance is expressed in terms of the fraction of satisfied transaction deadlines. A comparison of several real-time concurrency control protocols, which take different approaches to incorporating transaction timing constraints into scheduling, is also provided in performance experiments. © 1994 Kluwer Academic Publishers
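    As a hedged illustration of deadline-based transaction scheduling of the kind this abstract evaluates (a minimal sketch, not any of the protocols compared in the paper), an earliest-deadline-first scheduler that reports the fraction of satisfied deadlines — the performance metric used above:

```python
import heapq

def edf_schedule(transactions):
    """Non-preemptive earliest-deadline-first scheduling.

    transactions: list of (release_time, exec_time, deadline) tuples.
    Returns the fraction of transactions that finish by their deadlines.
    """
    txns = sorted(transactions)          # order by release time
    ready, met, t, i, n = [], 0, 0.0, 0, len(transactions)
    while i < n or ready:
        if not ready:                    # idle until the next arrival
            t = max(t, txns[i][0])
        while i < n and txns[i][0] <= t:  # admit everything released so far
            release, exec_time, deadline = txns[i]
            heapq.heappush(ready, (deadline, exec_time))
            i += 1
        deadline, exec_time = heapq.heappop(ready)  # earliest deadline first
        t += exec_time                   # run the transaction to completion
        if t <= deadline:
            met += 1
    return met / n
```

    Under light load both transactions below meet their deadlines; an infeasible transaction (execution time longer than its slack) misses.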

    Real-time classification technique for early detection and prevention of myocardial infarction on wearable devices

    Continuous monitoring of patients suffering from cardiovascular diseases and, in particular, myocardial infarction (MI) places a considerable burden on health-care systems and government budgets. The rise of wearable devices alleviates this burden, allowing for long-term patient monitoring in ambulatory settings. One of the major challenges in this area is to design ultra-low-energy wearable devices for long-term monitoring of patients' vital signs. In this work, we present a real-time event-driven classification technique based on support vector machines (SVM) and statistical outlier detection. The main goal of this technique is to maintain high classification accuracy while reducing the complexity of the classification algorithm, leading to lower energy consumption and thus extended battery lifetime. We validate our approach on a well-established and complete myocardial infarction (MI) database (Physiobank, PTB Diagnostic ECG database [1]). Our experimental evaluation demonstrates that our real-time classification scheme outperforms the existing approaches in terms of energy consumption and battery lifetime by a factor of 3, while maintaining the classification accuracy at a medically acceptable level of 90%.
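    The event-driven idea — run a cheap statistical outlier check on every sample and invoke the more expensive classifier only when a sample deviates from the baseline — can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the linear-SVM weights, baseline statistics, and threshold `k` are all assumed inputs.

```python
def linear_svm_score(x, w, b):
    """Decision value of an assumed pre-trained linear SVM: w.x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def event_driven_classify(x, mean, std, w, b, k=3.0):
    """Cheap path first: only evaluate the SVM when some feature lies
    more than k standard deviations from its running baseline
    (statistical outlier detection); otherwise report 'normal' without
    invoking the classifier, saving energy on the common case."""
    is_outlier = any(abs(xi - m) > k * s for xi, m, s in zip(x, mean, std))
    if not is_outlier:
        return "normal"          # no SVM evaluation on this event
    return "abnormal" if linear_svm_score(x, w, b) > 0 else "normal"
```

    Because most ambulatory samples are unremarkable, the SVM runs only on the rare outlier events, which is where the claimed energy savings would come from.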

    Data based predictive control: Application to water distribution networks

    In this thesis, the main goal is to propose novel data-based predictive controllers to cope with complex industrial infrastructures such as water distribution networks. Such systems have several inputs and outputs, complicated nonlinear dynamics, and binary actuators; they are usually perturbed by disturbances and noise and require real-time control implementation. The proposed controllers have to deal successfully with these issues while using the available information, such as past operation data of the process, or system properties such as fading dynamics. To this end, the control strategies presented in this work follow a predictive control approach. The control actions computed by the proposed data-driven strategies are obtained as the solution of an optimization problem that is similar in essence to those used in model predictive control (MPC), based on a cost function that determines the performance to be optimized. In the proposed approach, however, the prediction model is substituted by an inference data-based strategy, used either to identify a model, to learn an unknown control law, or to estimate the future cost of a given decision. As in MPC, the proposed strategies are based on a receding-horizon implementation, which implies that the optimization problems considered have to be solved online. In order to obtain problems that can be solved efficiently, most of the strategies proposed in this thesis are based on direct weight optimization, for reasons of ease of implementation and computational complexity. Linear convex combination is a simple and powerful tool in the continuous domain, and the computational load associated with the constrained optimization problems it generates is relatively light. This makes the proposed data-based predictive approaches suitable for real-time applications.
    The proposed approaches select the most adequate information (data similar to the current situation in terms of output, state, input, disturbances, etc.), in particular data close to the current state of the system. Using local data can be interpreted as an implicit local linearisation of the system each time the model-free data-driven optimization problem is solved. This implies that even though the model-free data-driven approaches presented in this thesis are based on linear theory, they can successfully deal with nonlinear systems because of the implicit information available in the database. Finally, a learning-based approach to robust predictive control design for multi-input multi-output (MIMO) linear systems is also presented, in which the effect of estimation and measurement errors, and of unknown perturbations in large-scale complex systems, is considered.
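    The local-data idea — predict from a convex combination of past observations close to the current state — can be sketched as below. This is a simplification under assumed inverse-distance weights; the thesis solves a constrained direct weight optimization rather than fixing the weights this way.

```python
def convex_predict(history, x, k=3):
    """Predict the next output as a convex combination of the outputs
    recorded after the k past states closest to the current state x.

    history: list of (state_vector, next_output) pairs.
    Selecting only nearby data acts as an implicit local linearisation.
    """
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda p: dist(p[0], x))[:k]
    # Inverse-distance weights, normalised to sum to 1 (convexity);
    # the small epsilon avoids division by zero on an exact match.
    raw = [1.0 / (dist(s, x) + 1e-9) for s, _ in nearest]
    total = sum(raw)
    weights = [r / total for r in raw]
    return sum(w * y for w, (_, y) in zip(weights, nearest))
```

    Because the weights are non-negative and sum to one, the prediction always stays inside the convex hull of the selected past outputs, which keeps the online optimization well behaved.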

    Conceptual Modelling and the Quality of Ontologies: Endurantism vs. Perdurantism

    Ontologies are key enablers for sharing precise and machine-understandable semantics among different applications and parties. Yet, for ontologies to meet these expectations, their quality must be of a good standard. The quality of an ontology is strongly based on the design method employed. This paper addresses the design problems related to the modelling of ontologies, with specific concentration on the issues related to the quality of the conceptualisations produced. The paper aims to demonstrate the impact of the modelling paradigm adopted on the quality of ontological models and, consequently, the potential impact that such a decision can have on the development of software applications. To this aim, an ontology conceptualised with the Object-Role Modelling (ORM) approach (a representative of endurantism) is re-engineered into one modelled on the basis of the Object Paradigm (OP) (a representative of perdurantism). Next, the two ontologies are analytically compared using the specified criteria. The comparison highlights that using the OP for ontology conceptualisation can provide more expressive, reusable, objective, and temporal ontologies than those conceptualised on the basis of the ORM approach.

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet complex requirements of next generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus must have a clear understanding of the foundations of data-intensive flows and the challenges of moving towards next generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that still are to be addressed, and how the current solutions can be applied for addressing these challenges.

    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have identified a set of situations encountered in real-world projects in which we believe the use of ontologies would improve several aspects of the design of data warehouses. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefit of using ontologies to overcome them. This work is a starting point for discussing the convenience of using ontologies in data warehouse design.

    EHNQ: Subjective and objective quality evaluation of enhanced night-time images

    Vision-based practical applications, such as consumer photography and automated driving systems, greatly rely on enhancing the visibility of images captured in night-time environments. For this reason, various image enhancement algorithms (EHAs) have been proposed. However, little attention has been given to the quality evaluation of enhanced night-time images. In this paper, we conduct the first dedicated exploration of the subjective and objective quality evaluation of enhanced night-time images. First, we build an enhanced night-time image quality (EHNQ) database, which is the largest of its kind so far. It includes 1,500 enhanced images generated from 100 real night-time images using 15 different EHAs. Subsequently, we perform a subjective quality evaluation and obtain subjective quality scores on the EHNQ database. Thereafter, we present an objective blind quality index for enhanced night-time images (BEHN). Enhanced night-time images usually suffer from inappropriate brightness and contrast, deformed structure, and unnatural colorfulness. In BEHN, we capture perceptual features that are highly relevant to these three types of corruption, and we design an ensemble training strategy to map the extracted features into a quality score. Finally, we conduct extensive experiments on the EHNQ and EAQA databases. The experimental results and analysis validate the performance of the proposed BEHN compared with state-of-the-art approaches. Our EHNQ database is publicly available for download at https://sites.google.com/site/xiangtaooo/
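    As a toy illustration of the kind of brightness, contrast, and colorfulness statistics such a blind quality index might compute (this is not the actual BEHN feature set, which is not specified in the abstract):

```python
def quality_features(pixels):
    """Toy perceptual features for an enhanced night-time image.

    pixels: list of (r, g, b) tuples with channel values in [0, 255].
    Returns (mean brightness, RMS contrast, colorfulness), where
    colorfulness uses the spread of simple opponent-colour channels.
    """
    n = len(pixels)
    # Luminance via the standard Rec. 601 weighting.
    lum = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    brightness = sum(lum) / n
    contrast = (sum((l - brightness) ** 2 for l in lum) / n) ** 0.5
    # Opponent channels: red-green and yellow-blue.
    rg = [r - g for r, g, _ in pixels]
    yb = [(r + g) / 2 - b for r, g, b in pixels]
    std = lambda v: (sum((x - sum(v) / n) ** 2 for x in v) / n) ** 0.5
    colorfulness = (std(rg) ** 2 + std(yb) ** 2) ** 0.5
    return brightness, contrast, colorfulness
```

    A learned model (e.g. the ensemble training strategy mentioned above) would then map such features to a predicted quality score; a pure black-and-white image yields zero colorfulness, as expected.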