
    Front-End electronics configuration system for CMS

    The four LHC experiments at CERN have decided to use a commercial SCADA (Supervisory Control And Data Acquisition) product for the supervision of their DCS (Detector Control System). The selected SCADA, which is therefore used for the CMS DCS, is PVSS II from the company ETM. This SCADA has its own database, which is suitable for storing conventional controls data such as voltages, temperatures and pressures. In addition, calibration data and FE (Front-End) electronics configuration need to be stored. The amount of these data is too large to be stored in the SCADA database [1]. Therefore an external database will be used for managing such data. However, this database should be completely integrated into the SCADA framework: it should be accessible from the SCADA, and SCADA features such as alarming and logging should be able to operate on it. For prototyping, Oracle 8i was selected as the external database manager. The development of the control system for calibration constants and FE electronics configuration has been done in close collaboration with the CMS tracker group and JCOP (Joint COntrols Project)(1). (1) The four LHC experiments and the CERN IT/CO group have merged their efforts to build the experiments' control systems and set up JCOP at the end of December 1997 for this purpose.
    Comment: 3 pages, 4 figures, ICALEPCS'01 conference, PSN WEDT00
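    As a rough, hedged sketch of the external-database approach described above (not the actual CMS/JCOP implementation), the following Java snippet reads front-end configuration records from an Oracle table over JDBC; the connection URL, table and column names are illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: fetch front-end electronics configuration from an external
// Oracle database, as the abstract describes keeping bulky configuration and
// calibration data outside the SCADA's own archive. The connection URL and the
// fe_config table/columns are illustrative assumptions, not the CMS schema.
public class FeConfigReader {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@dbhost:1521:cmscfg"; // hypothetical host/SID
        try (Connection con = DriverManager.getConnection(url, "cms_reader", "secret");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT channel_id, register, value FROM fe_config WHERE run_id = ?")) {
            ps.setInt(1, 1234); // hypothetical run number
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("channel=%d register=%s value=%d%n",
                        rs.getInt("channel_id"), rs.getString("register"), rs.getLong("value"));
                }
            }
        }
    }
}
```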

    BioWorkbench: A High-Performance Framework for Managing and Analyzing Bioinformatics Experiments

    Advances in sequencing techniques have led to exponential growth in biological data, demanding the development of large-scale bioinformatics experiments. Because these experiments are computation- and data-intensive, they require high-performance computing (HPC) techniques and can benefit from specialized technologies such as Scientific Workflow Management Systems (SWfMS) and databases. In this work, we present BioWorkbench, a framework for managing and analyzing bioinformatics experiments. This framework automatically collects provenance data, including both performance data from workflow execution and data from the scientific domain of the workflow application. Provenance data can be analyzed through a web application that abstracts a set of queries to the provenance database, simplifying access to provenance information. We evaluate BioWorkbench using three case studies: SwiftPhylo, a phylogenetic tree assembly workflow; SwiftGECKO, a comparative genomics workflow; and RASflow, a RASopathy analysis workflow. We analyze each workflow from both computational and scientific domain perspectives, using queries to a provenance and annotation database. Some of these queries are available as a pre-built feature of the BioWorkbench web application. Through the provenance data, we show that the framework is scalable and achieves high performance, reducing the execution time of the case studies by up to 98%. We also show how the application of machine learning techniques can enrich the analysis process.
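    The kind of query the BioWorkbench web application is said to abstract could look like the hedged sketch below, which aggregates task runtimes per workflow stage over JDBC; the database URL, table and column names are assumptions for illustration only.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative provenance query: total runtime and task count per workflow
// stage. The JDBC URL, credentials, and the task_execution table are assumed
// names, not BioWorkbench's actual schema.
public class ProvenanceQuery {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/provenance", "analyst", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT stage, COUNT(*) AS tasks, SUM(duration_s) AS total_s " +
                 "FROM task_execution GROUP BY stage ORDER BY total_s DESC")) {
            while (rs.next()) {
                System.out.printf("%-20s %5d tasks %10.1f s%n",
                    rs.getString("stage"), rs.getInt("tasks"), rs.getDouble("total_s"));
            }
        }
    }
}
```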

    APPLICATION OF INFORMATION AND COMMUNICATION TECHNOLOGY IN MANAGEMENT OF INFORMATION RESOURCES AND SERVICES IN KADUNA STATE TERTIARY INSTITUTIONS’ LIBRARIES KADUNA-NIGERIA

    The application and diffusion of information and communication technology cannot be viewed in isolation from developments in telecommunication technology. Innovations in computer and telecommunication technology have resulted in major changes in basic library operations, such as circulation and reference services, cataloguing and classification, and collection development (ordering and acquisition), as well as in managing information in different offices and organizations. These innovations have prompted many organizations to employ ICT devices to further manage the information and records of the organization. On this note, many organizations now adopt computer systems, database management systems, and network systems to create, store, preserve, secure and use information for effective decision making in the organization. This paper highlights the prospects and problems of ICT in Kaduna State tertiary institutions' libraries. Recommendations for functional ICT in Kaduna State tertiary institutions' libraries are also given.

    Potential applications of geospatial information systems for planning and managing aged care services in Australia

    This paper discusses the potential applications of Geospatial Information Technologies (GITs) to assist in planning and managing aged care programs in Australia. Aged care is complex due to the number of participants at all levels, including planning of services, investing in capacity, funding, providing services, auditing, monitoring quality, and accessing and using facilities and services. There is a vast array of data spread across the entities involved in aged care. The decision-making process for investment in capacity and service provision might be aided by technology, including GITs. This is also expected to assist in managing and analysing the vast amount of demographic, geographic, socio-economic and behavioural data that might indicate current and future demand for services from the aged and frail-aged population. Mapping spatio-temporal changes in near real time can assist in the successful planning and management of aged care programs. Accurate information on the location of aged care service centres, together with mapping of clients' special needs and service needs, may assist in monitoring access to services and in identifying areas where there are logistic challenges in accessing services to meet needs. GITs can also identify migrations of aged people and of the cohorts of the population who are likely to be the next wave of clients for aged care services. GITs include remote sensing, geographic information systems (GIS) and global positioning systems (GPS) technologies, which can be used to develop a user-friendly digital system for monitoring, evaluating and planning aged care and community care in Australia. Whilst remote sensing data can provide a current spatio-temporal inventory of features, such as the locations of carer services and infrastructure, on a consistent and continuous coordinate system, a GIS can assist in storing, cross-analysing, modelling and mapping spatial data pertaining to the needs of older people. GITs can assist in the development of a single one-stop digital database, which would provide a better model for managing aged care in Australia. GITs will also be a component of technologies such as activity monitors to provide tracking functionality. This will assist in tracking dementia sufferers who may be prone to wandering and be exposed to risk.
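    As a toy illustration (not taken from the paper) of the kind of proximity analysis a GIS can support, the sketch below finds the aged care service centre nearest to a client's coordinates using the haversine distance; the centre names and coordinates are made up.

```java
// Toy proximity query: which service centre is nearest to a client's home?
// All names and coordinates below are hypothetical examples.
public class NearestCentre {
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius in km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double clientLat = -27.47, clientLon = 153.02; // hypothetical client location
        String[] names = {"Centre A", "Centre B", "Centre C"};
        double[][] centres = {{-27.50, 153.00}, {-27.40, 153.10}, {-27.60, 152.95}};
        int best = 0;
        double bestKm = Double.MAX_VALUE;
        for (int i = 0; i < centres.length; i++) {
            double d = haversineKm(clientLat, clientLon, centres[i][0], centres[i][1]);
            if (d < bestKm) { bestKm = d; best = i; }
        }
        System.out.printf("Nearest: %s (%.1f km)%n", names[best], bestKm);
    }
}
```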

    WFIRST Coronagraph Technology Requirements: Status Update and Systems Engineering Approach

    The coronagraphic instrument (CGI) on the Wide-Field Infrared Survey Telescope (WFIRST) will demonstrate technologies and methods for high-contrast direct imaging and spectroscopy of exoplanet systems in reflected light, including polarimetry of circumstellar disks. The WFIRST management and CGI engineering and science investigation teams have developed requirements for the instrument, motivated by the objectives and technology development needs of potential future flagship exoplanet characterization missions such as the NASA Habitable Exoplanet Imaging Mission (HabEx) and the Large UV/Optical/IR Surveyor (LUVOIR). The requirements have been refined to support recommendations from the WFIRST Independent External Technical/Management/Cost Review (WIETR) that the WFIRST CGI be classified as a technology demonstration instrument instead of a science instrument. This paper describes how the CGI requirements flow from the top of the overall WFIRST mission structure through the Level 2 requirements, with the focus here on capturing the detailed context and rationales for the CGI Level 2 requirements. The WFIRST requirements flow starts with the top Program Level Requirements Appendix (PLRA), which contains both high-level mission objectives as well as the CGI-specific baseline technical and data requirements (BTR and BDR, respectively)... We also present the process and collaborative tools used in the L2 requirements development and management, including the collection and organization of science inputs, an open-source approach to managing the requirements database, and automating documentation. The tools created for the CGI L2 requirements have the potential to improve the design and planning of other projects, streamlining requirement management and maintenance. [Abstract Abbreviated]
    Comment: 16 pages, 4 figures
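    A minimal sketch of the "requirements database to generated documentation" idea mentioned above might look like the following; the delimited file layout (id|title|rationale) and file names are made-up stand-ins, not the actual WFIRST CGI tooling.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Rough sketch of automating documentation from a requirements database:
// read Level-2 requirements from a simple delimited text file and emit a
// Markdown table. The file layout and names are illustrative assumptions.
public class ReqDocGen {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of("l2_requirements.txt"));
        StringBuilder md = new StringBuilder("| ID | Requirement | Rationale |\n|---|---|---|\n");
        for (String line : lines) {
            String[] f = line.split("\\|", 3);
            if (f.length == 3) {
                md.append(String.format("| %s | %s | %s |%n",
                    f[0].trim(), f[1].trim(), f[2].trim()));
            }
        }
        Files.writeString(Path.of("l2_requirements.md"), md.toString());
    }
}
```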

    Adaptive Management of Multimodel Data and Heterogeneous Workloads

    Data management systems are facing a growing demand for a tighter integration of heterogeneous data from different applications and sources for both operational and analytical purposes in real-time. However, the vast diversification of the data management landscape has led to a situation where there is a trade-off between high operational performance and a tight integration of data. The gap between the growth of data volume and the growth of computational power demands a new approach for managing multimodel data and handling heterogeneous workloads. With PolyDBMS we present a novel class of database management systems, bridging the gap between multimodel databases and polystore systems. This new kind of database system combines the operational capabilities of traditional database systems with the flexibility of polystore systems. This includes support for data modifications, transactions, and schema changes at runtime. With native support for multiple data models and query languages, a PolyDBMS presents a holistic solution for the management of heterogeneous data. This not only enables a tight integration of data across different applications, it also allows more efficient usage of resources. By leveraging and combining highly optimized database systems as storage and execution engines, this novel class of database systems takes advantage of decades of database systems research and development. In this thesis, we present the conceptual foundations and models for building a PolyDBMS. This includes a holistic model for maintaining and querying multiple data models in one logical schema that enables cross-model queries. With the PolyAlgebra, we present a solution for representing queries based on one or multiple data models while preserving their semantics. Furthermore, we introduce a concept for the adaptive planning and decomposition of queries across heterogeneous database systems with different capabilities and features. The conceptual contributions presented in this thesis materialize in Polypheny-DB, the first implementation of a PolyDBMS. Supporting the relational, document, and labeled property graph data models, Polypheny-DB is a suitable solution for structured, semi-structured, and unstructured data. This is complemented by an extensive type system that includes support for binary large objects. With support for multiple query languages, industry-standard query interfaces, and a rich set of domain-specific data stores and data sources, Polypheny-DB offers a flexibility unmatched by existing data management solutions.
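    As a hedged sketch of how a PolyDBMS can be queried like a single logical database through an industry-standard interface, the snippet below issues plain SQL over JDBC against a Polypheny-DB instance; the connection URL, port, credentials and table name are assumptions and should be checked against the Polypheny-DB documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative only: query a PolyDBMS (here Polypheny-DB) through JDBC as if it
// were a single logical database, even though the data may live in different
// underlying engines. URL, port, credentials and the "customers" table are
// assumptions, not taken from the thesis.
public class PolyQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:polypheny://localhost:20591"; // assumed default endpoint
        try (Connection con = DriverManager.getConnection(url, "pa", "");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM customers LIMIT 10")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}
```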

    A Platform Independent Web-Based Data Management System for Random Access Files

    With the advent of the Web, the Internet has evolved into a user-friendly medium capable of high-speed, on-demand information delivery. Putting data onto the World Wide Web in the form of a fully accessible, searchable database can open up a wide variety of possibilities for teaching, learning and research. There are many different types of web-based database management system (DBMS), e.g., Oracle, Informix/Illustra, IBM DB2, Object Design, ranging from small systems that run on personal computers to huge systems that run on mainframes. However, these systems have limitations such as being platform dependent, not portable and expensive. This thesis describes the development of WebDB, a platform independent web-based data management system that uses Java servlets and random access files to address these problems. It is developed to provide management functions for WebEd2000's database. WebEd2000 is a working prototype of a Web-based distance learning system developed at the Broadband and ATM Research Group, Universiti Putra Malaysia (UPM). It enables delivering conventional lecture notes over the Web and provides various tools to help in managing and maintaining course materials on a server. The WebDB approach eases the database administrator's centralized management of WebEd2000 users and maintenance of the database. It also enables instructors to access their database and update it when necessary. In addition to the WebEd2000 database, the system allows its users to put other databases on the server. WebDB is mainly developed using a combination of Java servlet and JavaScript technologies. The server-side servlets are used to handle the requests from clients and the responses from the server. A random access file serves as the database repository on the database server where all the data is stored. Client-side JavaScript is used to enable DHTML features and perform less security-sensitive processing in order to reduce the workload of the web server. Lastly, WebDB can be easily set up and deployed on any platform and web server with minimal modifications. Portability is achieved by utilizing Java technology for the system applications and a random access file as the data repository.
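    A minimal sketch, assuming a fixed-length record layout, of how a random access file can serve as a simple record store in the spirit of WebDB's repository (the layout and file name are illustrative, not WebDB's actual format):

```java
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Minimal sketch of a random access file used as a record store: fixed-length
// records addressed by index, so any record can be read or updated in place
// without scanning the whole file. The 32-byte name + 8-byte id layout and the
// file name are illustrative assumptions.
public class RecordStore {
    static final int NAME_LEN = 32;
    static final int RECORD_LEN = NAME_LEN + Long.BYTES;

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("webdb.dat", "rw")) {
            write(raf, 0, "alice", 1001L);
            write(raf, 1, "bob", 1002L);

            raf.seek(1L * RECORD_LEN);          // jump straight to record #1
            byte[] name = new byte[NAME_LEN];
            raf.readFully(name);
            long id = raf.readLong();
            System.out.println(new String(name, StandardCharsets.UTF_8).trim() + " -> " + id);
        }
    }

    static void write(RandomAccessFile raf, int index, String name, long id) throws Exception {
        raf.seek((long) index * RECORD_LEN);
        byte[] padded = Arrays.copyOf(name.getBytes(StandardCharsets.UTF_8), NAME_LEN);
        raf.write(padded);
        raf.writeLong(id);
    }
}
```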