
    Living with inconsistencies in a multidatabase system

    Integration of autonomous information sources is one of the most important problems in the implementation of global information systems. This paper considers multidatabase systems as a typical architecture for global information services and addresses the problem of storing and processing inconsistent information in such systems. A new data model proposed in the paper separates sure from inconsistent information and introduces a system of elementary operations on containers holding sure and inconsistent information. A review of implementation aspects in the environment of a typical relational database management system concludes the paper.
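    The separation of sure from inconsistent information can be illustrated with a minimal sketch (this is an illustration of the idea, not the paper's formal model; the class and method names are invented for the example):

```python
# Minimal sketch: a container that keeps "sure" values separate from
# conflicting values reported by different autonomous sources.
class Container:
    def __init__(self):
        self.sure = {}            # key -> single agreed value
        self.inconsistent = {}    # key -> set of conflicting values

    def put(self, key, value):
        # Elementary insert: a value agreeing with the sure one stays sure;
        # a disagreement demotes the key into the inconsistent container.
        if key in self.inconsistent:
            self.inconsistent[key].add(value)
        elif key not in self.sure or self.sure[key] == value:
            self.sure[key] = value
        else:
            self.inconsistent[key] = {self.sure.pop(key), value}

c = Container()
c.put("salary", 100)
c.put("salary", 100)   # agrees -> still sure
c.put("salary", 200)   # conflict -> key moves to the inconsistent container
```

    Queries can then treat the two containers differently, e.g. answer from the sure container by default and surface the inconsistent one only on request.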

    The HERA-B database services: for detector configuration, calibration, alignment, slow control and data classification

    The database services for the distributed application environment of the HERA-B experiment are presented. Achieving the required 10^6 trigger reduction implies that all reconstruction, including calibration and alignment procedures, must run online, making extensive use of the database systems. The associations from events to database objects are carefully introduced with efficiency and flexibility in mind. The challenges of managing the slow control information were addressed by introducing data and update objects used in special processing on dedicated servers. The system integrates the DAQ client/server protocols with customized active database servers and relies on a high-performance database support toolkit. For applications that require complex selection mechanisms, as in the data-quality databases, the relevant data are replicated using a relational database management system.

    Performance issues in mid-sized relational database machines

    Relational database systems have provided end users and application programmers with an improved working environment over older hierarchical and network database systems. End users now use interactive query languages to inspect and manage their data, and application programs are easier to write and maintain because physical data storage information is separated from the application program itself. These and other benefits do not come without a price, however. System resource consumption has long been the perceived problem with relational systems. The additional resource demands usually force computing sites to upgrade existing systems or add additional facilities. One method of protecting the current investment in systems is to use specialized hardware designed specifically for relational database processing. 'Database machines' provide that alternative. Since the commercial introduction of database machines in the early 1980's, both software and hardware vendors of relational database systems have claimed superior performance over competing products. Without a standard performance measurement technique, the database user community has been flooded with benchmarks and claims from vendors that are immediately dismissed by some competitors as biased towards a particular system design. This thesis discusses the issues of relational database performance measurement with an emphasis on database machines; however, these performance issues are applicable to both hardware and software systems. A discussion of hardware design, performance metrics, software and database design is included. Also provided are recommended guidelines for evaluating relational database systems in lieu of a standard benchmark methodology.
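    The measurement problem the thesis describes can be made concrete with a hypothetical micro-benchmark harness. The metric choices below (median elapsed time, queries per second) and the use of SQLite as a stand-in system are illustrative assumptions, not the thesis's methodology:

```python
import sqlite3
import statistics
import time

# Hypothetical harness: run a query several times against a database
# connection and report simple latency and throughput metrics.
def benchmark(conn, query, runs=5):
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(query).fetchall()   # materialize the full result
        times.append(time.perf_counter() - start)
    return {"median_s": statistics.median(times),
            "qps": runs / sum(times)}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
result = benchmark(conn, "SELECT COUNT(*) FROM t")
```

    Even this toy harness shows why vendor claims diverge: changing the workload, the warm-up policy, or the summary statistic changes the ranking, which is exactly the gap a standard benchmark methodology is meant to close.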

    Compound key word generation from document databases using a hierarchical clustering ART model

    The growing availability of databases on the information highways motivates the development of new processing tools able to deal with a heterogeneous and changing information environment. A highly desirable feature of data processing systems handling this type of information is the ability to automatically extract their own key words. In this paper we address the specific problem of creating semantic term associations from a text database. The proposed method uses a hierarchical model made up of Fuzzy Adaptive Resonance Theory (ART) neural networks. First, the system uses several Fuzzy ART modules to cluster isolated words into semantic classes, starting from the database raw text. Next, this knowledge is used together with co-occurrence information to extract semantically meaningful term associations. These associations are asymmetric and one-to-many due to the polysemy phenomenon. The strength of the associations between words can be measured numerically. Besides this, they implicitly define a hierarchy between descriptors. The underlying algorithm is suitable for use on large databases. The operation of the system is illustrated on several real databases.
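    The co-occurrence step alone can be sketched as follows (the Fuzzy ART clustering stage is omitted, and the toy corpus and the conditional-frequency strength measure are assumptions of this example, not the paper's exact formula). Note how the measure is asymmetric because it normalizes by the frequency of the source term:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each document is a bag of (already clustered) terms.
docs = [["database", "query", "index"],
        ["database", "index"],
        ["query", "plan"]]

pair_counts = Counter()   # (a, b) -> number of docs where a and b co-occur
term_counts = Counter()   # a -> number of docs containing a
for doc in docs:
    terms = set(doc)
    term_counts.update(terms)
    for a, b in combinations(sorted(terms), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def strength(a, b):
    # Estimated P(b | a): asymmetric association strength from a to b.
    return pair_counts[(a, b)] / term_counts[a]
```

    Here strength("plan", "query") is 1.0 while strength("query", "plan") is only 0.5, since "query" also appears without "plan", giving the asymmetric, one-to-many associations the abstract describes.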

    An Associative Semantic Network for Machine-Aided Indexing, Classification and Searching

    Capturing and exploiting textual database associations has played a pivotal role in the evolution of automated information systems. A variety of statistical, linguistic and artificial intelligence approaches have been described in the literature. Many of these R and D concepts and techniques are now being incorporated into commercially available search systems and services. This paper discusses prior work and reports on research in progress aimed at creating and utilizing a global semantic associative database, AURA (Associative User Retrieval Aid), to facilitate machine-assisted indexing, classification and searching in the large-scale information processing environment of NLM's core bibliographic databases, MEDLINE and CATLINE. AURA is a semantic network of over two million natural language phrases derived from more than a million MEDLINE titles. These natural language phrases are associatively linked to NLM's MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) Metathesaurus controlled vocabulary and classification resources.

    Dynamic Web-Based Business Processing Systems Using Active Server Pages

    Virtually every business organization is considering how to re-engineer appropriate business processes to utilize the Web to increase efficiency. The event-driven application development model in a relational database environment has become the primary model applied in business and web commerce related activities. By providing students with a fundamental understanding of how these information systems can be re-engineered to work efficiently in a web-based environment, we can help prepare the next generation of business professionals with a solid working knowledge that enables them to achieve rapid productivity in real world business environments. This paper presents a case study that describes a practical and manageable methodology for teaching students the concepts and skills necessary to design, develop, and implement dynamic web-based business processing systems. The case provides the reader with the conceptual, practical, and technical knowledge necessary to understand the fundamentals of these web-based processing systems, and explicitly describes how to develop such systems with a minimal amount of hardware and software resources.

    The role of expert systems in federated distributed multi-database systems / Ince, Levent

    A shared information system is a series of computer systems interconnected by some kind of communication network. There are data repositories residing on each computer, and these data repositories must somehow be integrated. The purpose of using distributed and multi-database systems is to allow users to view collections of data repositories as if they were a single entity. Multidatabase systems, better known as heterogeneous multidatabase systems, are characterized by dissimilar data models, concurrency and optimization strategies, and access methods. Unlike homogeneous systems, the databases that compose the global database can be based on different types of data models; it is not necessary that all participant databases use the same data model. Federated distributed database systems are a special case of multidatabase systems. They are completely autonomous and do not rely on a global data dictionary to process distributed queries. Processing distributed query requests in federated databases is very difficult, since there are multiple independent databases, each with its own rules for query optimization, deadlock detection, and concurrency. Expert systems can play a role in this type of environment by supplying a knowledge base that contains rules for data object conversion, rules for resolving naming conflicts, and rules for exchanging data.
    http://archive.org/details/theroleofexperts109459362
    Turkish Navy author. Approved for public release; distribution is unlimited.
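    The naming-conflict rules such a knowledge base might contain can be sketched as a simple lookup (the rule base below, the source database names, and the `resolve` helper are all hypothetical examples, not the thesis's design):

```python
# Hypothetical rule base for resolving naming conflicts across federated
# databases: each rule maps a (source database, local column name) pair
# onto the global schema's canonical name.
RULES = {
    ("payroll_db", "emp_no"):   "employee_id",
    ("hr_db",      "staff_id"): "employee_id",
    ("hr_db",      "dept"):     "department",
}

def resolve(source, local_name):
    # Fall back to the local name when no rule fires.
    return RULES.get((source, local_name), local_name)
```

    A global query planner could apply such rules before dispatching sub-queries, so that each autonomous database keeps its own schema while users see one consistent vocabulary.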


    Design and Implementation of an Automated Hospital Management System with MERN Stack

    A hospital is a place that needs more effective and efficient management of information, people, and assets. This paper demonstrates the design and implementation of an automated system with the MERN stack that can manage doctor information, patient information, inventory information, and administrative functionalities in a hospital environment. It was written with the intention of eliminating the problems of manual hospital management systems such as data redundancy, data inaccuracy, poor accessibility, and lack of data security. The paper addresses the problems of time consumption in storing, retrieving, updating, and processing hospital data, generating ambiguous and inaccurate reports, and poor access control to sensitive information in manual paper-based hospital management systems, and provides an intuitive and modern approach to solving those problems using an automated web application to improve the overall efficiency of a hospital environment. The tools used to implement the proposed system are the React library with Redux, Express, Node.js, and MongoDB as the online cloud database. Google OAuth 2.0 is used as the authorization protocol. The proposed solution provides an excellent way of handling authentication and authorization, generating reports for statistical and information gathering purposes, managing administrative tasks, and improving the information storage, manipulation, and retrieval infrastructure.

    Resource-efficient processing of large data volumes

    The complex system environment of data processing applications makes it very challenging to achieve high resource efficiency. In this thesis, we develop solutions that improve resource efficiency at multiple system levels by focusing on three scenarios that are relevant, but not limited, to database management systems. First, we address the challenge of understanding complex systems by analyzing memory access characteristics via efficient memory tracing. Second, we leverage information about memory access characteristics to optimize the cache usage of algorithms and to avoid cache pollution by applying hardware-based cache partitioning. Third, after optimizing resource usage within a multicore processor, we optimize resource usage across multiple computer systems by addressing the problem of resource contention for bulk loading, i.e., ingesting large volumes of data into the system. We develop a distributed bulk loading mechanism, which utilizes network bandwidth and compute power more efficiently and improves both bulk loading throughput and query processing performance.
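    The partitioning idea behind distributed bulk loading can be sketched in a few lines (the round-robin policy, node names, and chunk size below are illustrative assumptions, not the thesis's actual mechanism):

```python
# Simplified sketch: split the rows to be ingested into fixed-size chunks
# and assign the chunks round-robin across loader nodes, so that no single
# server's network bandwidth or compute becomes the bottleneck.
def partition(rows, nodes, chunk_size=2):
    assignments = {n: [] for n in nodes}
    for i in range(0, len(rows), chunk_size):
        node = nodes[(i // chunk_size) % len(nodes)]
        assignments[node].extend(rows[i:i + chunk_size])
    return assignments

rows = list(range(7))
out = partition(rows, ["node-a", "node-b"])
# node-a receives chunks [0, 1] and [4, 5]; node-b receives [2, 3] and [6]
```

    A real mechanism would additionally account for per-node load, data locality, and contention with concurrent query processing, which is where the efficiency gains described above come from.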