Distribution design in object oriented databases : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Science in Information Systems
The advanced development of object oriented database systems has attracted much research. However, very little of it contributes to the distribution design of object oriented databases. The main tasks of distribution design are fragmenting the database schema and allocating the fragments to different sites of a network. The aim of fragmentation and allocation is to improve the performance and increase the availability of a database system. Even though much research has been done on distributed databases, it almost always refers to the relational data model (RDM). Very few efforts provide distribution design techniques for distributed object oriented databases. The aim of this work is to generalise distribution design techniques from relational databases to object oriented databases. First, the characteristics of distributed databases in general and the techniques used for fragmentation and allocation in the RDM are reviewed. Then, fragmentation operations for a rather generic object oriented data model (OODM) are developed. As with the RDM, these operations include horizontal and vertical fragmentation. A third operation named splitting is also introduced for the OODM. Finally, normal predicates are introduced for the OODM. A heuristic procedure for horizontal fragmentation of OODBs is also presented. The adaptation of horizontal fragmentation techniques from relational databases to object oriented databases is the main result of this work.
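As a rough illustration of the horizontal fragmentation the abstract describes, the sketch below splits a class extent into disjoint fragments using simple selection predicates, in the spirit of relational fragmentation techniques. The class, attribute names, and predicates are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: horizontal fragmentation of a class extent by
# selection predicates. Predicates are assumed mutually exclusive; any
# object matching none of them falls into a remainder fragment, so the
# fragmentation stays complete.

def horizontal_fragment(extent, predicates):
    """Split a class extent into one fragment per predicate, plus a
    remainder fragment, preserving completeness."""
    fragments = {name: [] for name in predicates}
    fragments["rest"] = []
    for obj in extent:
        for name, pred in predicates.items():
            if pred(obj):
                fragments[name].append(obj)
                break
        else:
            fragments["rest"].append(obj)
    return fragments

# Illustrative extent: account objects to be allocated by branch site.
accounts = [
    {"id": 1, "branch": "north", "balance": 900},
    {"id": 2, "branch": "south", "balance": 150},
    {"id": 3, "branch": "north", "balance": 40},
]
frags = horizontal_fragment(accounts, {
    "north": lambda o: o["branch"] == "north",
    "south": lambda o: o["branch"] == "south",
})
# completeness: every object lands in exactly one fragment
assert sum(len(f) for f in frags.values()) == len(accounts)
```

Each resulting fragment could then be allocated to the network site whose applications most often evaluate the corresponding predicate.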
Evaluation of Functional Data Models for Database Design and Use
The problems of design, operation, and maintenance of databases using the three most
popular database management systems (Hierarchical, CODASYL/DBTG, and Relational) are
well known. Users wishing to use these systems have to make conscious and often complex
mappings between the real-world structures and the data structuring options (data models)
provided by these systems. In addition, much of the semantics associated with the data
either does not get expressed at all or gets embedded procedurally in application programs in
an ad-hoc way.
In recent years, a large number of data models (called semantic data models) have been
proposed with the aim of simplifying database design and use. However, the lack of usable
implementations of these proposals has so far inhibited the widespread use of these concepts.
The present work reports on an effort to evaluate and extend one such semantic model by
means of an implementation. It is based on the functional data model proposed earlier by
Shipman [SHIP81]. We call it the 'Extended Functional Data Model' (EFDM).
EFDM, like Shipman's proposals, is a marriage of three of the advanced modelling concepts
found in both database and artificial intelligence research: the concept of entity to represent
an object in the real world, the concept of type hierarchy among entity types, and the
concept of derived data for modelling procedural knowledge. The functional notation of the
model lends itself to high level data manipulation languages. The data selection in these
languages is expressed simply as function application. Further, the functional approach makes
it possible to incorporate general purpose computation facilities in the data languages without
having to embed them in procedural languages. In addition to providing the usual database
facilities, the implementation also provides a mechanism to specify multiple user views of the
database.
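The abstract's central idea, entities with attributes and derived data modelled as functions, and data selection expressed as function application, can be pictured roughly as follows. This is an illustrative Python sketch, not Shipman's DAPLEX syntax or the EFDM implementation; the entity type and function names are assumptions.

```python
# Hypothetical sketch of the functional-data-model style: entities are
# objects, stored attributes and derived data are both just functions,
# and selection is plain function application.

class Entity:
    def __init__(self, **attrs):
        self.__dict__.update(attrs)

# Base (stored) data: an extent of Employee entities.
employees = [
    Entity(name="Ada", salary=5000, dept="R&D"),
    Entity(name="Ben", salary=3000, dept="Sales"),
]

# Derived function: procedural knowledge modelled declaratively as a
# function over entities, indistinguishable in use from a stored attribute.
def annual_salary(e):
    return 12 * e.salary

# Data selection expressed as function application over the extent.
rd_names = [e.name for e in employees if e.dept == "R&D"]
```

Because derived and stored data share the same functional interface, general-purpose computation can live inside the data language rather than in an embedded procedural host language.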
Short Duration Traffic Flow Prediction Using Kalman Filtering
The research examined predicting short-duration traffic flow counts with the
Kalman filtering technique (KFT), a computational filtering method. Short-term
traffic prediction is an important tool for operation in traffic management and
transportation system. The short-term traffic flow value results can be used
for travel time estimation by route guidance and advanced traveler information
systems. Though the KFT has been tested for homogeneous traffic, its efficiency
in heterogeneous traffic has yet to be investigated. The research was conducted
on Mirpur Road in Dhaka, near the Sobhanbagh Mosque. The stream contains a
heterogeneous mix of traffic, which implies uncertainty in prediction. The
proposed method is implemented in Python using the pykalman library, a
widely used library for Kalman-filter state-space modelling, which
accounts for such uncertainty. The data were derived from a three-hour
count of vehicles. According to the Geometric Design Standards Manual
published by Roads and Highways Division (RHD), Bangladesh in 2005, the
heterogeneous traffic flow value was translated into an equivalent passenger
car unit (PCU). The PCU obtained from five-minute aggregation was then utilized
as the suggested model's dataset. The proposed model has a mean absolute
percent error (MAPE) of 14.62%, indicating that the KFT model can forecast
reasonably well. The root mean square percent error (RMSPE) is 18.73%,
below the commonly accepted 25% threshold; hence the model is acceptable.
The developed model has an R2 value of 0.879, indicating that it can
explain 87.9 percent of the variability in the dataset. If the data were
collected over a more extended period of time, the R2 value could be
closer to 1.0.
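To make the prediction scheme concrete, the sketch below runs a minimal scalar Kalman filter over a short synthetic PCU series and scores it with MAPE. This is not the paper's pipeline: the paper uses pykalman on field data from Dhaka, whereas the PCU values, noise variances, and random-walk state model here are all illustrative assumptions.

```python
# Minimal sketch: one-step-ahead Kalman prediction of 5-minute PCU counts
# under an assumed random-walk model x_k = x_{k-1} + w (var q), observed
# as z_k = x_k + v (var r). All numbers are synthetic.

def kalman_predict(pcu, q=25.0, r=100.0):
    """Return one-step-ahead predictions for each observation after the first."""
    x, p = pcu[0], 1.0              # initial state estimate and variance
    preds = []
    for z in pcu[1:]:
        p = p + q                   # predict step (random walk: mean unchanged)
        preds.append(x)             # prediction for this interval
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with the new observation
        p = (1 - k) * p
    return preds

def mape(actual, pred):
    """Mean absolute percent error, in percent."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, pred)) / len(pred)

pcu = [120, 135, 128, 150, 160, 155, 170, 165]   # synthetic 5-min PCU counts
preds = kalman_predict(pcu)
err = mape(pcu[1:], preds)
```

The variance ratio q/r controls how aggressively the filter tracks new observations versus smoothing noise; tuning it (or estimating it, as pykalman's EM routine does) is what adapts the filter to heterogeneous traffic.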
DFM synthesis approach based on product-process interface modelling. Application to the peen forming process.
Engineering design approaches are currently CAD-centred. Manufacturing information is selected and assessed very late in the design process, and above all as a reactive task rather than proactively guiding design choices. DFM approaches are therefore assessment methods that compare several design alternatives, not true design approaches. The main added value of this research work is the use of a product-process interface model to jointly manage both the product and the manufacturing data in a proactive DFM way. The DFM synthesis approach and the interface model are presented via the description of the DFM software platform.
Compensation methods to support generic graph editing: A case study in automated verification of schema requirements for an advanced transaction model
Compensation plays an important role in advanced transaction models, cooperative work, and workflow systems. However, compensation operations are often simply written as a^−1 in
transaction model literature. This notation ignores any operation parameters, results, and side effects. A schema designer intending to use an advanced transaction model is expected (required) to write correct method code. However, in the days of cut-and-paste, this is much easier said than done. In this paper, we demonstrate the feasibility of using an off-the-shelf theorem prover (also called a proof assistant) to perform automated verification of compensation requirements for an OODB schema. We report on the results of a case study in verification for a particular advanced transaction model that supports cooperative applications. The case study is based on an OODB schema that provides generic graph editing functionality for the creation, insertion, and manipulation of nodes and links.
Compensation methods to support cooperative applications: A case study in automated verification of schema requirements for an advanced transaction model
Compensation plays an important role in advanced transaction models, cooperative work and workflow systems. A schema designer is typically required to supply, for each transaction, a compensating transaction that semantically undoes its effects. Little attention has been paid to the verification of the desirable properties of such operations, however. This paper demonstrates the use of a higher-order logic theorem prover for verifying that compensating transactions return a database to its original state. It is shown how an OODB schema is translated to the language of the theorem prover so that proofs can be performed on the compensating transactions.
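The property both papers verify, that a compensating transaction restores the original database state, can be illustrated with a toy graph-editing schema. The papers prove this in a higher-order-logic theorem prover; the sketch below is only a runtime check on hypothetical operations, and the graph representation is an assumption.

```python
# Toy illustration of the verified property: for a graph-editing schema,
# delete_node must compensate insert_node, returning the database to its
# original state. A database here is a (nodes, links) pair.

def insert_node(graph, node):
    nodes, links = graph
    return (nodes | {node}, links)

def delete_node(graph, node):
    nodes, links = graph
    # A side effect the naive inverse notation a^-1 glosses over: links
    # touching the deleted node must be removed too, or they dangle.
    return (nodes - {node}, {l for l in links if node not in l})

db = (frozenset({"a", "b"}), frozenset({("a", "b")}))
after = insert_node(db, "c")
compensated = delete_node(after, "c")
assert compensated == db   # compensation restores the original state
```

A theorem prover establishes this equality for all databases and all nodes, not just one test case, which is precisely what the schema designer's hand-written method code cannot guarantee on its own.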
Functionally Specified Distributed Transactions in Co-operative Scenarios
Addresses the problem of specifying co-operative, distributed transactions in a manner that can be subjected to verification and testing. Our approach combines the process-algebraic language LOTOS and the object-oriented database modelling language TM to obtain a clear and formal protocol for distributed database transactions meant to describe co-operation scenarios. We argue that a separation of concerns, namely the interaction of database applications on the one hand and data modelling on the other, results in a practical, modular approach that is formally well-founded. An advantage of this is that we may vary over transaction models to support the language combination.
Moa and the multi-model architecture: a new perspective on XNF2
Advanced non-traditional application domains such as geographic information systems and digital library systems demand advanced data management support. In an effort to cope with this demand, we present the concept of a novel multi-model DBMS architecture which provides evaluation of queries on complexly structured data without sacrificing efficiency. A vital role in this architecture is played by the Moa language, featuring a nested relational data model based on XNF2, in which we placed renewed interest. Furthermore, extensibility in Moa avoids optimization obstacles due to black-box treatment of ADTs. The combination of a mapping of queries on complexly structured data to an efficient physical algebra expression via a nested relational algebra, extensibility open to optimization, and the consequently better integration of domain-specific algorithms, means that the Moa system can efficiently and effectively handle complex queries from non-traditional application domains.
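One way to picture the nested relational (XNF2-style) data the abstract refers to is as relations whose attributes may themselves be set-valued, with an unnest operator flattening them back to ordinary 1NF tuples. The sketch below is an illustrative assumption, not Moa's actual algebra or syntax.

```python
# Hedged sketch: a nested relation as a list of dicts where one attribute
# holds a collection, plus an `unnest` operator producing one flat tuple
# per element. Attribute and relation names are illustrative.

def unnest(relation, attr):
    """Flatten the nested attribute `attr` into one tuple per element."""
    flat = []
    for row in relation:
        for item in row[attr]:
            new = {k: v for k, v in row.items() if k != attr}
            new[attr] = item
            flat.append(new)
    return flat

# A digital-library style nested relation: documents with keyword sets.
docs = [
    {"doc": "d1", "keywords": ["xml", "db"]},
    {"doc": "d2", "keywords": ["gis"]},
]
flat = unnest(docs, "keywords")
# flat now has one (doc, keyword) tuple per keyword
```

Mapping queries over such nested structures to a flat physical algebra, while keeping ADT internals visible to the optimizer, is the efficiency argument the abstract makes.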
Two Case Studies of Subsystem Design for General-Purpose CSCW Software Architectures
This paper discusses subsystem design guidelines for the software architecture of general-purpose computer supported cooperative work systems, i.e., systems that are designed to be applicable in various application areas requiring explicit collaboration support. In our opinion, guidelines for subsystem-level design are rarely given; most guidelines currently given apply to the programming-language level. We extract guidelines from a case study of the redesign and extension of an advanced commercial workflow management system and place them into the context of existing software engineering research. The guidelines are then validated against the design decisions made in the construction of a widely used web-based groupware system. Our approach is based on the well-known distinction between essential (logical) and physical architectures. We show how essential architecture design can be based on a direct mapping of abstract functional concepts as found in general-purpose systems to modules in the essential architecture. The essential architecture is next mapped to a physical architecture by applying software clustering and replication to achieve the required distribution and performance characteristics.