    Enroller: an experiment in aggregating resources

    This chapter describes a collaborative project between e-scientists and humanists working to create an online repository of linguistic data sets and tools. Corpora, dictionaries, and a thesaurus are brought together to enable a new method of research, one that combines our most advanced knowledge in both computing and linguistic research techniques.
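    To make the aggregation idea concrete, here is a minimal Python sketch of a single query fanned out across corpora, dictionaries, and a thesaurus. The data structures, names, and toy data are hypothetical illustrations, not the Enroller project's actual interfaces.

        # All names and in-memory data here are hypothetical stand-ins.
        from dataclasses import dataclass

        @dataclass
        class Hit:
            resource: str  # which data set the match came from
            term: str      # matched or related term
            context: str   # concordance line, dictionary sense, or relation

        def search(term, corpora, dictionaries, thesaurus):
            """Fan one query out across heterogeneous linguistic resources."""
            hits = []
            for name, lines in corpora.items():         # corpus: name -> lines of text
                hits += [Hit(name, term, ln) for ln in lines if term in ln]
            for name, entries in dictionaries.items():  # dictionary: word -> senses
                hits += [Hit(name, term, s) for s in entries.get(term, [])]
            for related in thesaurus.get(term, []):     # thesaurus widens the query
                hits.append(Hit("thesaurus", related, f"related to '{term}'"))
            return hits

        # Toy usage with in-memory stand-ins for the real data sets.
        corpora = {"letters": ["a dreich day on the brae", "the auld kirk"]}
        dictionaries = {"dict": {"dreich": ["dull, bleak (of weather)"]}}
        thesaurus = {"dreich": ["dull", "bleak"]}
        for hit in search("dreich", corpora, dictionaries, thesaurus):
            print(hit)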

    Workflows, processes and technical solutions for seeding the research data commons

    Queensland University of Technology (QUT) completed an Australian National Data Service (ANDS) funded “Seeding the Commons Project” to contribute metadata to Research Data Australia. The project employed two Research Data Librarians from October 2009 through to July 2010. Technical support for the project was provided by QUT’s High Performance Computing and Research Support Specialists.

    The project identified and described QUT’s category 1 (ARC/NHMRC) research datasets. Metadata for the research datasets was stored in QUT’s Research Data Repository (Arcitecta Mediaflux). Metadata suitable for inclusion in Research Data Australia was made available to the Australian Research Data Commons (ARDC) in RIF-CS format.

    Several workflows and processes were developed during the project: 195 data interviews took place in connection with 424 separate research activities, resulting in the identification of 492 datasets.

    The project had a high level of technical support from QUT’s High Performance Computing and Research Support Specialists, who developed the Research Data Librarian interface to the data repository. This interface enabled manual entry of interview data and dataset metadata, and the creation of relationships between repository objects. The Research Data Librarians mapped the QUT metadata repository fields to RIF-CS, and the HPC and Research Support Specialists created an application to generate RIF-CS files for harvest by the Australian Research Data Commons (ARDC), as sketched after the list below.

    This poster will focus on the workflows and processes established for the project, including:

    • Interview processes and instruments
    • Data ingest from existing systems (including mapping to RIF-CS)
    • Data entry and the Data Librarian interface to Mediaflux
    • Verification processes
    • Mapping and creation of RIF-CS for the ARDC
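    As an illustration of that final step, the following Python sketch turns a simple metadata record into a RIF-CS registry object ready for harvest. The element names follow the public RIF-CS schema, but the input dict, the example values, and the field mapping are simplified assumptions; the project's real mapping from Mediaflux fields was richer.

        # Element names follow the public RIF-CS schema; the input dict and
        # the field mapping are simplified, hypothetical illustrations.
        import xml.etree.ElementTree as ET

        RIFCS_NS = "http://ands.org.au/standards/rif-cs/registryObjects"

        def dataset_to_rifcs(record: dict) -> bytes:
            ET.register_namespace("", RIFCS_NS)
            root = ET.Element(f"{{{RIFCS_NS}}}registryObjects")
            obj = ET.SubElement(root, f"{{{RIFCS_NS}}}registryObject",
                                group="Queensland University of Technology")
            ET.SubElement(obj, f"{{{RIFCS_NS}}}key").text = record["key"]
            ET.SubElement(obj, f"{{{RIFCS_NS}}}originatingSource").text = record["source"]
            coll = ET.SubElement(obj, f"{{{RIFCS_NS}}}collection", type="dataset")
            name = ET.SubElement(coll, f"{{{RIFCS_NS}}}name", type="primary")
            ET.SubElement(name, f"{{{RIFCS_NS}}}namePart").text = record["title"]
            desc = ET.SubElement(coll, f"{{{RIFCS_NS}}}description", type="brief")
            desc.text = record["description"]
            return ET.tostring(root, encoding="utf-8", xml_declaration=True)

        # Example record of the kind captured at a data interview (invented values).
        print(dataset_to_rifcs({
            "key": "qut.edu.au/dataset/example-001",
            "source": "https://repository.example.qut.edu.au",
            "title": "Example dataset title",
            "description": "Brief description recorded by the Research Data Librarian.",
        }).decode())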

    Developing research data management services at QUT

    QUT Library and the High Performance Computing and Research Support (HPC) Team have been collaborating on developing and delivering a range of research support services, including services designed to help researchers manage their data. QUT’s Management of Research Data policy has been in place since 2010 and is complemented by the Data Management Guidelines and Checklist. QUT has partnered with the Australian National Data Service (ANDS) on a number of projects, including Seeding the Commons, Metadata Hub (with Griffith University) and the Data Capture program. The HPC Team has also been developing the QUT Research Data Repository, based on the Arcitecta Mediaflux system, and has run several pilots with faculties. Library and HPC staff have been trained in the principles of research data management and provide a range of research data management seminars and workshops for researchers and HDR students.

    Two research contributions in 64-bit computing: Testing and Applications

    Following the release of the Windows 64-bit and Red Hat Linux 64-bit operating systems (OS) in late April 2005, this is one of the first 64-bit OS research projects completed in a British university. The objectives are to investigate (1) the increase or decrease in performance compared to 32-bit computing; (2) the techniques used to develop 64-bit applications; and (3) how 64-bit computing should be used in IT and research organizations to improve their work. This paper summarizes the findings of this investigation, including two major research contributions in (1) testing and (2) application development. The first contribution covers performance, stress, application, multiplatform, JDK and compatibility testing for AMD and Intel models. Comprehensive testing results reveal that 64-bit computing performs better than traditional 32-bit computing in application performance, system performance and stress testing, but worse in compatibility testing. A 64-bit dual-core processor was also tested; the results show that it outperforms a 64-bit single-core processor, but only in applications that place very high demands on CPU and memory. The second contribution is a set of .NET 1.1 64-bit implementations. Without additional troubleshooting, .NET 1.1 does not run stably on 64-bit Windows operating systems. After stabilizing the .NET environment, the next step was application development: a dynamic repository with functions such as registration, download, login-logout, product submission, database storage and statistical reports. The technology is based on Visual Studio .NET 2003, the .NET 1.1 Framework with Service Pack 1, SQL Server 2000 with Service Pack 4 and IIS Server 6.0 on the Windows Server 2003 Enterprise x64 platform with Service Pack 1.
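    The kind of measurement behind these comparisons can be sketched in a few lines. The Python harness below reports the platform word size and times a CPU- and memory-bound kernel; it is an illustrative stand-in, not the paper's actual test suite.

        # An illustrative harness, not the paper's actual test suite.
        import struct
        import sys
        import time

        def word_size_bits() -> int:
            # Pointer size in bits: 64 on a 64-bit interpreter/OS, 32 otherwise.
            return 8 * struct.calcsize("P")

        def cpu_memory_kernel(n: int = 2_000_000) -> float:
            # Allocate and reduce a large list of ints: a CPU- and memory-heavy
            # workload, the class where the dual-core 64-bit results stood out.
            start = time.perf_counter()
            data = list(range(n))
            total = sum(x * x for x in data)
            elapsed = time.perf_counter() - start
            assert total > 0  # keep the work from being optimised away
            return elapsed

        if __name__ == "__main__":
            print(f"{word_size_bits()}-bit build, Python {sys.version.split()[0]}")
            print(f"kernel time: {cpu_memory_kernel():.3f} s")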

    Workflow Repository Integration with the P-GRADE Portal

    Grid computing is a field of research focused on providing large-scale infrastructures with shared resources. These infrastructures allow researchers around the world to execute their computationally intensive applications. However, the current state of Grid computing lacks a widespread, easy-to-access solution for sharing these complex applications. The goal of this project was to integrate a repository service with the P-GRADE Portal, a gateway to multiple Grids, in an effort to promote collaboration between individual Grid users as well as research communities.
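    The integration pattern described here, publishing packaged workflows to a shared repository and fetching them back into the portal, can be sketched as a small HTTP client. The endpoint URL, JSON fields, and authentication scheme below are hypothetical assumptions; the actual P-GRADE repository API is not reproduced.

        # The endpoint, fields, and auth scheme are hypothetical assumptions.
        import json
        import urllib.parse
        import urllib.request

        BASE = "https://repo.example.org/workflows"  # hypothetical repository

        def publish_workflow(name: str, archive: bytes, token: str) -> str:
            """Upload a packaged workflow; return the repository identifier."""
            req = urllib.request.Request(
                f"{BASE}?name={urllib.parse.quote(name)}",
                data=archive,
                method="POST",
                headers={"Authorization": f"Bearer {token}",
                         "Content-Type": "application/zip"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["id"]

        def fetch_workflow(workflow_id: str) -> bytes:
            """Download a shared workflow archive for import into the portal."""
            with urllib.request.urlopen(f"{BASE}/{workflow_id}") as resp:
                return resp.read()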

    Toward a framework for data quality in cloud-based health information system

    Cloud computing is a promising platform for health information systems as a way to reduce costs and improve accessibility. It represents a shift away from computing purchased as a product toward computing delivered as a service over the Internet. The cloud computing paradigm is becoming one of the popular IT infrastructures for facilitating Electronic Health Record (EHR) integration and sharing. An EHR is a repository of patient data in digital form; the record is stored and exchanged securely and is accessible to different levels of authorized users. Its key purpose is to support the continuity of care and to allow the exchange and integration of medical information for a patient. None of this can be achieved, however, without ensuring the quality of the data populated in healthcare clouds, since data quality can have a great impact on the overall effectiveness of any system. Assuring the quality of data used in healthcare systems is a pressing need that supports the continuity and quality of care. Identifying data quality dimensions in healthcare clouds is challenging because cloud-based health information systems raise issues such as appropriateness of use and provenance. Some research has proposed frameworks of data quality dimensions without taking the nature of cloud-based healthcare systems into consideration. In this paper, we propose an initial framework of data quality attributes that reflects the main elements of cloud-based healthcare systems and the functionality of the EHR.
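    One way such a framework might be operationalised is to score each EHR record against a few quality dimensions. In the Python sketch below, the dimensions, field names, and thresholds are illustrative assumptions rather than the paper's finalised framework.

        # Dimensions, field names, and thresholds are illustrative assumptions.
        from datetime import datetime, timezone

        REQUIRED_FIELDS = ("patient_id", "diagnosis", "recorded_at", "source_system")

        def assess_quality(record: dict) -> dict:
            scores = {}
            # Completeness: share of required fields present and non-empty.
            present = [f for f in REQUIRED_FIELDS if record.get(f)]
            scores["completeness"] = len(present) / len(REQUIRED_FIELDS)
            # Provenance: does the record name the system that authored it?
            scores["provenance"] = 1.0 if record.get("source_system") else 0.0
            # Timeliness: recorded within the last year (arbitrary threshold).
            ts = record.get("recorded_at")
            age_days = (datetime.now(timezone.utc) - ts).days if ts else None
            scores["timeliness"] = 1.0 if age_days is not None and age_days <= 365 else 0.0
            return scores

        # Toy record with invented values.
        record = {"patient_id": "p-001", "diagnosis": "J45.9",
                  "recorded_at": datetime(2024, 3, 1, tzinfo=timezone.utc),
                  "source_system": "clinic-ehr"}
        print(assess_quality(record))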