
    A Bibliography and Index of Our Works on Belief Data: Concept of Error and Multilevel Security

    In 1988 we initiated our work on belief data. The work proceeded in two phases: in the first phase we formalized the concept of error in everyday record keeping, and in the second phase we considered multilevel security. The purpose of this report is to raise awareness of our work on belief data and to serve as a guide to the following manuscripts. The first two manuscripts are on the concept of error, and the latter three are on multilevel security. Except for [TR97-17], all manuscripts are in their original form.

    A formal treatment of updates and errors in a relational database

    The concept of an update is external to the classical relational model. When an update is made, the old information is lost at the logical level; at best, such information may be stored in an update log. As a result, the classical relational model cannot support a query language for updates. We consider two orthogonal concepts of time: real-world time, which captures changes in the real world, and transaction time, which is the time when some knowledge of the history of the real world is added to the database. We give a temporal relational model that timestamps attribute values with two-dimensional timestamps. In this model a formal semantics of updates arises naturally, and the model may be used to query the nature of updates and errors. We introduce the concept of a user domain, a subset of the universe of time. User domains support a hierarchy of users, giving each user an appropriate interface. A user domain may be two-dimensional, one-dimensional, or zero-dimensional. The user with the zero-dimensional user domain sees the classical snapshot database, with the classical relational algebra as the user interface. Thus, our framework is a consistent extension of the classical relational model. One use of our model is to impose a logical structure upon the update log: we show that the update log can essentially be recovered from our model, so no essential information is lost if the update log is discarded. This work is a promising application of temporal databases to mainstream databases.
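    The two-dimensional timestamping described in this abstract can be sketched in Python. This is a minimal illustration under assumptions, not the paper's model: the names (`StampedValue`, `BitemporalAttribute`, `record`, `snapshot`) and the per-attribute representation are invented for the sketch, and open-ended bounds are encoded as infinity.

```python
from dataclasses import dataclass, field

INF = float("inf")  # open-ended "until changed" bound

@dataclass
class StampedValue:
    value: object
    valid_from: int    # real-world time the value became true
    valid_to: float    # real-world time it ceased to hold (INF = still true)
    tx_from: int       # transaction time the fact was recorded
    tx_to: float       # transaction time it was superseded (INF = current)

@dataclass
class BitemporalAttribute:
    history: list = field(default_factory=list)

    def record(self, value, valid_from, tx_time):
        """Record at transaction time tx_time that `value` holds
        from real-world time valid_from onward."""
        for sv in list(self.history):
            if sv.tx_to == INF and sv.valid_to == INF:
                sv.tx_to = tx_time  # logically supersede the open version
                # re-assert it with its valid interval closed at valid_from
                self.history.append(StampedValue(
                    sv.value, sv.valid_from, valid_from, tx_time, INF))
        self.history.append(StampedValue(value, valid_from, INF, tx_time, INF))

    def snapshot(self, tx_time, valid_time):
        """The zero-dimensional user's view: one value per point of
        (what we believed at tx_time) x (about real-world valid_time)."""
        for sv in self.history:
            if (sv.tx_from <= tx_time < sv.tx_to
                    and sv.valid_from <= valid_time < sv.valid_to):
                return sv.value
        return None
```

    Because superseded versions are closed in transaction time rather than deleted, sorting `history` by `tx_from` essentially reconstructs the sequence of updates, which is the log-recoverability property the abstract claims.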

    The Concept of Error in a Database: An Application of Temporal Databases

    Existing database models do not capture the difference between updates intended to make changes and updates intended to make corrections. Information about errors is external to the database, and such information cannot be queried. We give a model to capture the concept of error in a database. The model, consisting of 2-dimensional temporal relations, is a consistent extension of the classical relational model as well as of our 1-dimensional temporal relational model. To prevent the identity of an object from becoming corrupted by the presence of errors, we make a copy of the correct identity and permanently glue (anchor) it to the object. The transition from the 1-dimensional case to the 2-dimensional case is complex, but most of this complexity is absorbed by the system and not passed on to the user. This paper is a promising application of temporal databases to mainstream databases.
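    The change-versus-correction distinction at the heart of this abstract can be illustrated with a small sketch. All names here (`Rec`, `change`, `correct`, `as_of`) are hypothetical; the point is which time dimension each kind of update rewrites: a change closes the old valid interval, while a correction retroactively replaces it, and in both cases the superseded record survives in transaction time, so the error itself remains queryable.

```python
from collections import namedtuple

INF = float("inf")
Rec = namedtuple("Rec", "value valid_from valid_to tx_from tx_to")

def change(history, new_value, now, tx):
    """A genuine real-world change: the old value really held until `now`,
    so its valid interval is closed at `now` (via a new transaction-time
    version) and the new value starts there."""
    old = history[-1]
    history[-1] = old._replace(tx_to=tx)
    history.append(old._replace(valid_to=now, tx_from=tx, tx_to=INF))
    history.append(Rec(new_value, now, INF, tx, INF))

def correct(history, new_value, tx):
    """An error correction: the old value never held, so the new record
    inherits the entire valid interval; the erroneous record is closed
    only in transaction time, keeping the error queryable."""
    old = history[-1]
    history[-1] = old._replace(tx_to=tx)
    history.append(Rec(new_value, old.valid_from, old.valid_to, tx, INF))

def as_of(history, tx, valid):
    """What the database believed at transaction time tx about real-world
    time valid."""
    for r in history:
        if r.tx_from <= tx < r.tx_to and r.valid_from <= valid < r.valid_to:
            return r.value
    return None
```

    For example, correcting a misspelled name changes what the database reports for every real-world instant, whereas recording a genuine move changes only the instants after the move.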

    Efficient Self-Join Algorithm in Interval-based Temporal Data Models

    The interval-based temporal data model is popular in temporal databases. It uses time intervals to represent the period of validity of a tuple, which leads to unavoidable self-joins when combining tuples for the same object: a query with k conjunctive conditions requires a (k+1)-way self-join. Join operations are among the most expensive operations in databases, and they are even more costly in temporal databases because of ever-growing data. Many join algorithms exist for temporal databases, but they focus on joining different inputs rather than an identical input, leading to multiple scans of the same relation. Advanced 2-way join algorithms avoid quadratic disk I/O complexity, but their performance is affected by the number of self-joins and by partition sizes. In this paper, we address the problem of self-joins in the interval-based temporal data model and introduce a stream-based self-join algorithm. The proposed algorithm achieves a single scan of the relation for a k-way self-join, and its performance is not affected by partition sizes.
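    A minimal sketch of the single-scan idea, assuming the relation is sorted by object id so that each object's tuples arrive together in the stream. The function name and tuple layout are illustrative, and the paper's actual streaming algorithm is not reproduced here; the sketch only shows how k conditions can be combined per object in one pass, where a conventional evaluation would perform a (k+1)-way self-join with repeated scans.

```python
from itertools import groupby, product
from operator import itemgetter

def single_scan_self_join(tuples, conds):
    """Combine tuples of each object that satisfy k conditions, in a
    single scan of a relation sorted by object id.  Each tuple is
    (oid, attr, value, start, end); intervals are half-open [start, end)."""
    results = []
    for oid, group in groupby(tuples, key=itemgetter(0)):
        group = list(group)
        # candidate intervals per condition, gathered from this object only
        lists = [[(s, e) for (_, a, v, s, e) in group if cond(a, v)]
                 for cond in conds]
        if any(not lst for lst in lists):
            continue  # some condition is never satisfied for this object
        for combo in product(*lists):
            start = max(s for s, _ in combo)
            end = min(e for _, e in combo)
            if start < end:  # all k intervals overlap
                results.append((oid, start, end))
    return results
```

    Because `groupby` consumes the input as a stream, each tuple is read exactly once regardless of k, which is the single-scan property the abstract claims.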

    RAST: Requirement Analysis Support Tool based on Linguistic Information

    Requirement specifications written in natural language can cause miscommunication among developers, depending on how they are understood. Unless such ambiguities are removed, we cannot construct systems that satisfy software-safety requirements, and removing them should happen early in development. In this paper, we introduce the Requirement Analysis Support Tool (RAST), which analyzes requirement specifications based on linguistic information. With RAST, requirement engineers can communicate in common notations and use logical expressions rather than requirement specifications written in natural language.

    Algebraic Identities and Query Optimization In a Parametric Model For Relational Temporal Databases

    This paper presents algebraic identities and algebraic query optimization for a parametric model for temporal databases. The parametric model has several features not present in the classical model: a key is explicitly designated with each relation, and an operator is available to change the key. The algebra for the parametric model is three-sorted; it includes relational expressions that evaluate to relations, domain expressions that evaluate to time domains, and Boolean expressions that evaluate to TRUE or FALSE. The identities in the parametric model are classified as weak identities and strong identities. Weak identities are largely counterparts of the identities in classical relational databases; rather than establishing them from scratch, a meta-inference mechanism, introduced in the paper, allows weak identities to be induced from their classical counterparts. Strong identities, on the other hand, are established from scratch. An algorithm for algebraic optimization that transforms a query into an equivalent query that executes more efficiently is presented.
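    The weak/strong distinction can be illustrated with a toy snapshot semantics. The representation (a tuple paired with a set of time instants) and the function names are assumptions made for this sketch, not the paper's formalism: two expressions are weakly equal when they agree snapshot-by-snapshot, even if their timestamp bookkeeping differs.

```python
def snapshot(rel, t):
    """Project a parametric relation -- here a list of (tuple, time-domain)
    pairs, with the time domain a set of instants -- onto the classical
    snapshot relation at instant t."""
    return {tup for tup, dom in rel if t in dom}

def weakly_equal(rel1, rel2, instants):
    """Weak equality: the relations agree at every snapshot, even when
    their timestamp representations differ."""
    return all(snapshot(rel1, t) == snapshot(rel2, t) for t in instants)
```

    For instance, a relation whose tuple's time domain is fragmented into {1, 2} and {3} is weakly equal to one carrying the coalesced domain {1, 2, 3}, although the two representations differ; a strong identity, by contrast, must preserve the timestamps themselves.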

    Cloud and IoT-based emerging services systems

    Emerging services and analytics advocate service delivery in a polymorphic view that serves a variety of audiences. The amalgamation of several modern technologies, such as cloud computing, the Internet of Things (IoT), and Big Data, is the potential support behind emerging services systems. Today IoT, also dubbed ubiquitous sensing, is taking center stage over the traditional paradigm, and its evolution necessitates expanding the cloud horizon to deal with emerging challenges. In this paper, we study cloud-based emerging services, useful in the IoT paradigm, that support effective data analytics. We also conceive a new classification, called CNNC {Clouda, NNClouda}, for cloud data models, and discuss important case studies that further strengthen the classification. An emerging service, data analytics in autonomous vehicles, is then described in detail. Challenges and recommendations related to privacy, security, and ethical concerns are also discussed.