6,026 research outputs found

    When Things Matter: A Data-Centric View of the Internet of Things

    Full text link
    With the recent advances in radio-frequency identification (RFID), low-cost wireless sensor devices, and Web technologies, the Internet of Things (IoT) approach has gained momentum in connecting everyday objects to the Internet and facilitating machine-to-human and machine-to-machine communication with the physical world. While IoT offers the capability to connect and integrate both digital and physical entities, enabling a whole new class of applications and services, several significant challenges need to be addressed before these applications and services can be fully realized. A fundamental challenge centers around managing IoT data, which is typically produced in dynamic and volatile environments and is not only extremely large in scale and volume but also noisy and continuous. This article surveys the main techniques and state-of-the-art research efforts in IoT from a data-centric perspective, including data stream processing, data storage models, complex event processing, and searching in IoT. Open research issues for IoT data management are also discussed.
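    The survey groups the techniques rather than prescribing any particular one; as a purely illustrative, hypothetical sketch of the data-stream-processing category it mentions, the Python fragment below smooths a noisy, continuous stream of sensor readings with a fixed-size sliding window. The window size, the reading values, and the function name are assumptions made for the example, not anything taken from the article.

```python
from collections import deque

def smooth_stream(readings, window_size=5):
    """Yield a moving average over a continuous stream of noisy sensor values.

    A minimal sketch of window-based stream processing: only the last
    `window_size` readings are kept in memory, so the stream can be unbounded.
    """
    window = deque(maxlen=window_size)
    for value in readings:
        window.append(value)
        yield sum(window) / len(window)

# Example: smoothing a short, noisy temperature stream (values are made up).
if __name__ == "__main__":
    stream = [21.0, 21.4, 35.0, 21.2, 20.9, 21.1]   # 35.0 is a noise spike
    for avg in smooth_stream(stream, window_size=3):
        print(round(avg, 2))
```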

    Representing and analysing molecular and cellular function in the computer

    Get PDF
    Determining the biological function of a myriad of genes, and understanding how they interact to yield a living cell, is the major challenge of the post-genome-sequencing era. The complexity of biological systems is such that this cannot be envisaged without the help of powerful computer systems capable of representing and analysing the intricate networks of physical and functional interactions between the different cellular components. In this review we try to provide the reader with an appreciation of where we stand in this regard. We discuss some of the inherent problems in describing the different facets of biological function, give an overview of how information on function is currently represented in the major biological databases, and describe different systems for organising and categorising the functions of gene products. In the second part, we present a new general data model, currently under development, which describes information on molecular function and cellular processes in a rigorous manner. The model is capable of representing a large variety of biochemical processes, including metabolic pathways, regulation of gene expression and signal transduction. It also incorporates taxonomies for categorising molecular entities, interactions and processes, and it offers means of viewing the information at different levels of resolution, and dealing with incomplete knowledge. The data model has been implemented in the database on protein function and cellular processes 'aMAZE' (http://www.ebi.ac.uk/research/pfbp/), which presently covers metabolic pathways and their regulation. Several tools for querying, displaying, and performing analyses on such pathways are briefly described in order to illustrate the practical applications enabled by the model.
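    The abstract describes the data model only in general terms; the following is a hypothetical, much-simplified sketch of the kind of structure it outlines: molecular entities, interactions between them, and processes that can be nested and viewed at different levels of resolution. All class names, fields, and the glycolysis fragment are invented for illustration and are not the aMAZE schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MolecularEntity:
    name: str
    category: str                    # e.g. "protein", "metabolite", "gene"

@dataclass
class Interaction:
    kind: str                        # e.g. "catalysis", "inhibition", "binding"
    participants: List[MolecularEntity]

@dataclass
class Process:
    """A cellular process such as a metabolic pathway or a signalling cascade."""
    name: str
    interactions: List[Interaction] = field(default_factory=list)
    subprocesses: List["Process"] = field(default_factory=list)

    def flatten(self) -> List[Interaction]:
        """View the process at full resolution by expanding all subprocesses."""
        result = list(self.interactions)
        for sub in self.subprocesses:
            result.extend(sub.flatten())
        return result

# Example: a one-step fragment of glycolysis viewed after expansion.
hexokinase = MolecularEntity("hexokinase", "protein")
glucose = MolecularEntity("glucose", "metabolite")
step = Interaction("catalysis", [hexokinase, glucose])
glycolysis = Process("glycolysis (fragment)", subprocesses=[Process("step 1", [step])])
print(len(glycolysis.flatten()))     # 1 interaction after expansion
```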

    A benchmarking methodology for the centralized-database computer with expandable and parallel database processors and stores

    Get PDF
    In this paper a benchmarking methodology for a new kind of database computer is introduced. This kind of database computer (known as the multiple-backend database computer), where each system is configured with two or more identical processors and their associated stores for concurrent execution of transactions and for parallel processing of a centralized database spread over the separate stores, has emerged in both the research community and the commercial world. The motivation and characterization of the multiple-backend database computer are first given. There is a clear need for, and a lack of, a methodology for benchmarking the new computer with a variable number of backends for the same database or with a fixed number of backends for different database capacities. The measures (benchmarks) of the new computer are articulated and established, and the design of the methodology for conducting the measurements is then given. Because of the novelty of the database computer architecture, the benchmarking methodology is rather elaborate and somewhat complicated. To aid understanding of the methodology, a concrete sample is given herein; this sample also illustrates the use of the methodology. Meanwhile, a CAD system which computerizes the benchmarking methodology for systematically assisting the design of test databases and test-transaction mixes, for automatically tallying the design data and workloads, and for completely generating the test databases and test-transaction mixes is being implemented. Prepared for: Chief of Naval Research, Arlington, VA. http://archive.org/details/benchmarkingmeth00demu 61153N; RRO14-08-01; N0001485WR24046NA. Approved for public release; distribution is unlimited.
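    The measures themselves are only named in the abstract, not defined; as a hypothetical illustration of one measurement such a methodology might tabulate, the sketch below compares mean transaction response times as the number of backends grows and reports the speedup relative to a single backend. All timings and names are invented for the example.

```python
from statistics import mean

def speedup_table(response_times_by_backends):
    """Given {backend_count: [response times in s]}, return speedup vs. one backend.

    A sketch of one benchmark measure: how response time scales when the same
    database and transaction mix are run with more backends.
    """
    baseline = mean(response_times_by_backends[1])
    return {
        n: round(baseline / mean(times), 2)
        for n, times in sorted(response_times_by_backends.items())
    }

# Example with made-up timings for 1, 2 and 4 backends.
measurements = {
    1: [4.1, 3.9, 4.0],
    2: [2.2, 2.1, 2.0],
    4: [1.2, 1.1, 1.3],
}
print(speedup_table(measurements))   # {1: 1.0, 2: 1.9, 4: 3.33}
```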

    Engineering Practices of Determining Transmission Capacity and Delay of Interconnecting Line Taking into Account its Configuration and Cost

    Get PDF
    This article contains information on the engineering practice of determining the transmission capacity of a computer network line. The article presents a variant of engineering synthesis of a computer network, which is a combined process joining mathematical and heuristic methods. The engineering synthesis is treated as vector (multi-objective) and global, because it must result in a network design that is optimal in terms of its practical use. All the significant network quality indicators, including economic and practical ones, are taken into consideration. In engineering synthesis it is not possible for only one quality indicator to be significant: there are always at least two significant indicators, a cost and an indicator that characterizes the main effect achieved by using the network (efficacy). If at least one of the quality indicators significant for practical use is not taken into account, such a network cannot be considered optimal. Computer network synthesis usually consists of structure synthesis, parameter optimization, and discrete network selection. If the network topology is kept unchanged, an optimization task can be formulated for line transmission capacity. The continuously varying solution of the transmission capacity task may be chosen as a starting point for the selection of discrete transmission capacity values.
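    The abstract does not give the optimization model; as one common way to illustrate the two-stage approach it describes (a continuously varying capacity solution used as the starting point for choosing discrete capacities), the sketch below applies a square-root style continuous capacity assignment under a linear cost budget and then rounds each line up to the nearest available discrete speed. The formulation, flows, unit costs, and speed catalogue are assumptions for illustration only.

```python
from math import sqrt
import bisect

def continuous_capacities(flows, unit_costs, budget):
    """Square-root capacity assignment: a classic continuous formulation, used
    here only to illustrate solving a continuous optimum before discrete selection.

    flows[i]      -- average traffic on line i (same units as capacity)
    unit_costs[i] -- cost per unit of capacity on line i
    budget        -- total cost allowed for all lines
    """
    excess = budget - sum(c * f for c, f in zip(unit_costs, flows))
    if excess <= 0:
        raise ValueError("budget cannot even carry the offered traffic")
    norm = sum(sqrt(c * f) for c, f in zip(unit_costs, flows))
    return [f + (excess / c) * sqrt(c * f) / norm
            for f, c in zip(flows, unit_costs)]

def discretize(capacities, available):
    """Round each continuous capacity up to the nearest available discrete value.

    Assumes the catalogue contains a speed at least as large as every capacity.
    """
    available = sorted(available)
    return [available[bisect.bisect_left(available, c)] for c in capacities]

# Example with invented flows (Mbit/s), unit costs and a catalogue of line speeds.
flows = [3.0, 1.0, 6.0]
unit_costs = [1.0, 2.0, 1.0]
cont = continuous_capacities(flows, unit_costs, budget=20.0)
print([round(c, 2) for c in cont])                  # continuous optimum
print(discretize(cont, available=[2, 4, 8, 16, 34]))  # discrete selection
```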

    Factors shaping the evolution of electronic documentation systems

    Get PDF
    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when the problem is framed as how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information-intensive environments.
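    The knowledge-base features are described only conceptually; the fragment below is a hypothetical, minimal sketch of one of them, deductive reasoning, applied to documentation metadata by forward chaining simple if-then rules over facts. The facts, rules, and function name are invented for illustration.

```python
def forward_chain(facts, rules):
    """Apply simple if-then rules to a set of facts until nothing new is derived.

    A minimal sketch of deductive reasoning over documentation metadata:
    each rule maps a tuple of premise facts to a single conclusion fact.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Example: linking a document to the subsystem it concerns.
facts = {("doc42", "describes", "power-bus"),
         ("power-bus", "part-of", "electrical-subsystem")}
rules = [
    ((("doc42", "describes", "power-bus"),
      ("power-bus", "part-of", "electrical-subsystem")),
     ("doc42", "relevant-to", "electrical-subsystem")),
]
print(forward_chain(facts, rules))
```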
    • …