
    Adaptive Energy Aware Cooperation Strategy in Heterogeneous Multi-domain Sensor Networks

    In some sensor network applications multiple domains coexist, and cooperation among domains can lead to longer network lifetimes. In this paper we consider heterogeneous multi-domain sensor networks: different networks belong to different domains, their sensors are deployed at the same physical location, and their topologies are heterogeneous. Domain lifetime can clearly be increased through cooperation in packet forwarding; however, selfishness is inevitable from a rational perspective. We investigate how cooperation among authorities can be achieved while their sensors are energy aware. When sensors are energy aware, spontaneous cooperation does not take place. We therefore present the Adaptive Energy Aware strategy, a novel algorithm based on TIT-FOR-TAT that starts with generosity and ends with conservative behaviour. Our simulation results show that this algorithm can prolong a network's lifetime in competition with other networks.
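
    As a rough illustration of the kind of strategy described above, the sketch below shows a per-round, energy-aware forwarding decision that is generous while energy is plentiful and falls back to tit-for-tat reciprocity as energy runs low. The function name, threshold and decision rule are our own illustrative assumptions, not the paper's actual algorithm.

        # Hypothetical energy-aware, TIT-FOR-TAT-style forwarding decision (illustrative only).
        def decide_forwarding(residual_energy: float,
                              initial_energy: float,
                              neighbour_cooperated_last_round: bool,
                              generosity_threshold: float = 0.5) -> bool:
            """Return True if this domain should forward the other domain's packet."""
            energy_ratio = residual_energy / initial_energy
            # Generous phase: while energy is plentiful, always cooperate to
            # encourage reciprocation from the other domain.
            if energy_ratio > generosity_threshold:
                return True
            # Conservative phase: when energy runs low, mirror the neighbour's
            # last observed behaviour (classic tit-for-tat reciprocity).
            return neighbour_cooperated_last_round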

    A rule-based semantic approach for data integration, standardization and dimensionality reduction utilizing the UMLS: Application to predicting bariatric surgery outcomes

    Utilization of existing clinical data for improving patient outcomes poses a number of challenging and complex problems, involving lack of data integration, the absence of standardization across inhomogeneous data sources, and computationally demanding, time-consuming exploration of very large datasets. In this paper we present a robust semantic data integration, standardization and dimensionality reduction method to tackle these problems. Our approach enables the integration of clinical data from diverse sources by resolving canonical inconsistencies and semantic heterogeneity using the National Library of Medicine's Unified Medical Language System (UMLS), producing standardized medical data. Through a combined application of rule-based semantic networks and machine learning, our approach achieves a large reduction in the dimensionality of the data and thus allows fast and efficient application of data mining techniques to large clinical datasets. An example application of the techniques developed in our study is presented for the prediction of bariatric surgery outcomes.
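
    The toy sketch below illustrates the standardization idea in the simplest possible terms: differently worded source terms are collapsed onto a single shared concept identifier, so many raw columns reduce to a few standardized features. The concept codes and term mappings are invented for illustration and are not real UMLS content.

        # Illustrative only: collapse synonymous clinical terms onto one
        # UMLS-style concept identifier to standardize and reduce dimensionality.
        TERM_TO_CONCEPT = {
            "type 2 diabetes": "C_DIABETES_T2",
            "t2dm": "C_DIABETES_T2",
            "non-insulin-dependent diabetes": "C_DIABETES_T2",
            "hypertension": "C_HYPERTENSION",
            "high blood pressure": "C_HYPERTENSION",
        }

        def standardize(record_terms):
            """Map raw source terms to standardized concepts, dropping duplicates."""
            concepts = {TERM_TO_CONCEPT[t.lower()]
                        for t in record_terms if t.lower() in TERM_TO_CONCEPT}
            return sorted(concepts)

        # Two differently worded source records collapse to the same two features.
        print(standardize(["T2DM", "High blood pressure"]))
        print(standardize(["Type 2 diabetes", "Hypertension"]))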

    Feature reduction improves classification accuracy in healthcare

    Our work focuses on inductive transfer learning, a setting in which one assumes that the source and target tasks share the same feature and label spaces. We demonstrate that transfer learning can be successfully used for feature reduction and hence for more efficient classification. Further, our experiments show that this approach also increases the precision of the classification task.
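
    A minimal sketch of the general idea, under our own assumptions about the setup (synthetic data, a random forest to rank features on the source task, and a plain logistic regression on the target): features ranked as important on the source task are kept, and the target classifier is trained only on that reduced set. This is not the authors' exact method, only an illustration of transfer-based feature reduction.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins for the shared feature space of source and target tasks.
        X_src, y_src = make_classification(n_samples=500, n_features=50, n_informative=8, random_state=0)
        X_tgt, y_tgt = make_classification(n_samples=300, n_features=50, n_informative=8, random_state=1)

        # Rank features on the source task and keep the ten strongest ones.
        source_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_src, y_src)
        top = np.argsort(source_model.feature_importances_)[-10:]

        # Train and evaluate the target classifier on the reduced feature set.
        Xtr, Xte, ytr, yte = train_test_split(X_tgt[:, top], y_tgt, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
        print("target accuracy on reduced features:", clf.score(Xte, yte))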

    Modeling Uncertainty In Deductive Databases

    The Information Source Tracking (IST) method has recently been developed for the modeling and manipulation of uncertain and inaccurate data in relational databases. In this paper we extend the IST method to deductive databases. We show that positive uncertain databases, i.e. IST-based deductive databases with only positive literals in the heads and bodies of the rules, enjoy a least model/least fixpoint semantics. Query processing in this model is studied next: we extend the top-down and bottom-up evaluation techniques of logic programming and deductive databases to our model. Finally, we study negation for uncertain databases, concentrating on stratified uncertain databases. 1 Introduction: Database systems are evolving into knowledge-base systems, and are increasingly used in applications where handling inaccurate data is essential. In a recent study, uncertainty management was listed as one of the important future challenges in database research. "Further research [in un..
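
    The sketch below gives a very simplified picture of bottom-up evaluation in this spirit: each derived fact carries the set of information sources it depends on, and applying a rule unions the sources of its premises. The rule encoding and source-set bookkeeping are our own simplifications, not the IST semantics itself.

        # Facts: (predicate, args) -> set of frozensets of supporting sources.
        facts = {
            ("edge", ("a", "b")): {frozenset({"s1"})},
            ("edge", ("b", "c")): {frozenset({"s2"})},
        }

        def add(store, key, supports):
            """Record new source annotations for a fact; return True if anything changed."""
            known = store.setdefault(key, set())
            new = supports - known
            known |= new
            return bool(new)

        def fixpoint(store):
            """Naive bottom-up derivation of path/2 from edge/2 until nothing new appears."""
            changed = True
            while changed:
                changed = False
                snapshot = list(store.items())
                # Rule 1: path(X, Y) <- edge(X, Y).
                for (pred, (x, y)), supports in snapshot:
                    if pred == "edge":
                        changed |= add(store, ("path", (x, y)), supports)
                # Rule 2: path(X, Z) <- path(X, Y), edge(Y, Z); source sets are unioned.
                for (p1, (x, y)), s1 in snapshot:
                    for (p2, (y2, z)), s2 in snapshot:
                        if p1 == "path" and p2 == "edge" and y == y2:
                            combined = {a | b for a in s1 for b in s2}
                            changed |= add(store, ("path", (x, z)), combined)
            return store

        # path(a, c) ends up supported by the union of the sources of both edges.
        print(fixpoint(facts)[("path", ("a", "c"))])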

    On A Theory of Probabilistic Deductive Databases

    We propose a framework for modeling uncertainty where both belief and doubt can be given independent, first-class status. We adopt probability theory as the mathematical formalism for manipulating uncertainty. An agent can express the uncertainty in her knowledge about a piece of information in the form of a confidence level, consisting of a pair of intervals of probability, one for each of her belief and doubt. The space of confidence levels naturally leads to the notion of a trilattice, similar in spirit to Fitting's bilattices. Intuitively, the points in such a trilattice can be ordered according to truth, information, or precision. We develop a framework for probabilistic deductive databases by associating confidence levels with the facts and rules of a classical deductive database. While the trilattice structure offers a variety of choices for defining the semantics of probabilistic deductive databases, our choice of semantics is based on the truth-ordering, which we find to be closest to the classical framework for deductive databases. In addition to proposing a declarative semantics based on valuations and an equivalent semantics based on fixpoint theory, we also propose a proof procedure and prove it sound and complete. We show that while classical Datalog query programs have a polynomial time data complexity, certain query programs in the probabilistic deductive database framework do not even terminate on some input databases. We identify a large natural class of query programs of practical interest in our framework, and show that programs in this class possess polynomial time data complexity, i.e. not only do they terminate on every input database, they are guaranteed to do so in a number of steps polynomial in the input database size
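
    The small sketch below shows one way to represent such a confidence level as a pair of probability intervals, one for belief and one for doubt, together with a truth-ordering comparison in which more belief and less doubt counts as "truer". The exact lattice definitions are in the paper; this is only an illustrative reading.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class ConfidenceLevel:
            """A confidence level: an interval for belief and an interval for doubt."""
            belief_low: float
            belief_high: float
            doubt_low: float
            doubt_high: float

        def truth_leq(c1: ConfidenceLevel, c2: ConfidenceLevel) -> bool:
            """c1 <=_t c2: c2 has at least as much belief and at most as much doubt."""
            return (c1.belief_low <= c2.belief_low and c1.belief_high <= c2.belief_high
                    and c1.doubt_low >= c2.doubt_low and c1.doubt_high >= c2.doubt_high)

        mostly_true = ConfidenceLevel(0.7, 0.9, 0.0, 0.2)
        mostly_false = ConfidenceLevel(0.1, 0.3, 0.6, 0.8)
        print(truth_leq(mostly_false, mostly_true))   # True: mostly_true is "truer"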

    Recognizing Credible Experts in Inaccurate Databases

    While the problem of incomplete data in databases has been extensively studied, a relatively unexplored form of uncertainty in databases, called inaccurate data, demands due attention. Inaccurate data results when data are contributed by various information agents with associated credibility. Though the data itself is total or complete, its reliability now depends on the agents' credibility. Several issues of this form of data reliability have been reported recently, where the credibility of agents was assumed to be known, static and uniform throughout the database. In this paper we address the issue of credibility maintenance for information agents and take the view that agent credibility is dynamic and is a function of the database knowledge, the agent's performance relative to other agents, and the agent's expertise. We present a method to identify agents' fields of expertise (called contexts) and use agents' context-dependent credibility to calculate ..
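
    A hedged sketch of the dynamic, context-dependent view described above: each agent's credibility is tracked per context (field of expertise) as the fraction of its contributions in that context that the database later confirms, smoothed by a neutral prior. The bookkeeping and the smoothing rule are illustrative assumptions, not the paper's method.

        from collections import defaultdict

        # (agent, context) -> [confirmed_count, total_count]
        history = defaultdict(lambda: [0, 0])

        def record(agent, context, confirmed):
            """Record whether a contribution in this context was later confirmed."""
            stats = history[(agent, context)]
            stats[0] += int(confirmed)
            stats[1] += 1

        def credibility(agent, context, prior=0.5):
            """Per-context credibility; falls back to a neutral prior with no history."""
            confirmed, total = history[(agent, context)]
            return (confirmed + prior) / (total + 1)

        record("agent_a", "cardiology", confirmed=True)
        record("agent_a", "cardiology", confirmed=True)
        record("agent_a", "radiology", confirmed=False)
        print(credibility("agent_a", "cardiology"))   # high within its field of expertise
        print(credibility("agent_a", "radiology"))    # lower outside it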

    Trusting an Information Agent

    While the common kinds of uncertainty in databases (e.g., null values, disjunction, corrupt/missing data, domain mismatch, etc.) have been extensively studied, a relatively unexplored form of uncertainty in databases, called inaccurate data, demands due attention. Inaccurate data results when data are contributed by various information agents with some known reliability. Though the data itself is total or complete, its reliability now depends on the agent's reliability. Several issues of this form of data reliability have been reported recently, where the reliability of agents was assumed to be known and static. In this paper we address the issue of reliability maintenance for information agents and take the view that agent reliability is dynamic and is a function of the database knowledge and the agent's evidences (facts that are observed to be true or false). We propose a method of quantifying the level of trust (or the agent reliability) that the datab..
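
    Once per-agent trust values are available, they can in principle be combined when a fact is supported by several agents. The sketch below uses a simple independence assumption (the fact is wrong only if every supporting agent is wrong); this combination rule is our own illustration rather than the paper's proposal.

        from math import prod

        def combined_reliability(agent_trust: dict, supporting_agents: list) -> float:
            """Reliability of a fact supported independently by several agents."""
            return 1 - prod(1 - agent_trust[a] for a in supporting_agents)

        trust = {"alice": 0.9, "bob": 0.6}
        print(combined_reliability(trust, ["alice"]))          # 0.9
        print(combined_reliability(trust, ["alice", "bob"]))   # 0.96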