
    A Provenance Methodology And Architecture For Scientific Projects Containing Automated And Manual Processes

    The management of provenance metadata is a pressing issue for high-profile, complex science projects needing to trace their data products’ lineage in order to withstand scrutiny. To represent, capture, transfer, store and deliver provenance data from a project’s processes, specialized metadata, new IT system components, and human and automated procedures are necessary. This collection of metadata, components and procedures can be termed a provenance methodology and architecture. Through our involvement with several large Australian science projects ([4], [5], [6], [7], [11]), we have developed a methodology that provides: Use Case assessments of project clients’ requirements for provenance; team structures and project processes to facilitate provenance requirements; systems’ behaviour to capture provenance from automated processes; behavioural patterns for project staff to capture provenance from manual processes; and procedures for compiling, storing and using provenance records. Semantic web provenance ontologies have been created ([1], [2], [3]) that allow generic, abstracted provenance representation, and we have extended the PROV ontology through our provenance data management ontology (PROMS-O) [8] in order to address provenance Use Cases required by our projects that PROV-O does not address. Based on our project experience, we have developed a provenance architecture that specifies: a single provenance representation format for all project processes; the use of a persistent ID system to alias other systems’ URIs; archival systems to store data and provide access to versions of their data via URIs; provenance management systems to store and provide access to provenance data; provenance exporters to capture and transmit provenance data from automated systems; provenance procedures to collect provenance data from human processes; and an overarching integration architecture. In this paper, we briefly describe our work on each of the points above, which together provide a range of pointers for projects wanting to embark on provenance management.
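    To make the architecture concrete, here is a minimal Python sketch, using rdflib, of the kind of W3C PROV record such a system might capture for one automated process. The URIs and dataset names are invented placeholders, and this is an illustration of the general technique, not the PROMS-O implementation itself.

```python
# Sketch: record one automated process step as W3C PROV RDF.
# All URIs below are hypothetical stand-ins for a persistent-ID system.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://pid.example.org/")  # hypothetical persistent-ID base

g = Graph()
g.bind("prov", PROV)

run = EX["activity/model-run-42"]   # an automated process execution
src = EX["dataset/rainfall-v3"]     # the input dataset version it used
out = EX["dataset/runoff-v1"]       # the data product it generated

g.add((run, RDF.type, PROV.Activity))
g.add((src, RDF.type, PROV.Entity))
g.add((out, RDF.type, PROV.Entity))
g.add((run, PROV.used, src))             # lineage: activity consumed the input
g.add((out, PROV.wasGeneratedBy, run))   # lineage: output came from the activity
g.add((run, PROV.endedAtTime,
       Literal("2014-01-01T00:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```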

    A Services Framework And Support Services For Environmental Information Communities

    For environmental datasets to be used effectively via the Internet, they must present standardized data and metadata services and link the two. The Open Geospatial Consortium’s (OGC) web services (WFS, WMS, CSW, etc.) have seen widespread use over many years; however, few organizations have deployed information architectures based solely on OGC standards for all their datasets, and collections of organizations within a thematically based community certainly cannot realistically be expected to do so. To enable flexibility in service use, we present a services framework: a Data Brokering Layer (DBL). A DBL presents access to data and metadata services for datasets, and links between them, in a standardized manner based on Linked Data and Semantic Web principles. By specifying regular access methods for any data or metadata service relevant to a dataset, community organizers allow a wide range of services to be used within their community. Additionally, a community service profile testing service – a Conformance Service – may be run that reveals the day-to-day status of all of a community’s services, enabling better end-user experiences and ensuring that data providers’ data is acceptable to the community and remains available for use. We present DBL and Conformance Service designs, as well as a whole-of-community architecture that facilitates the use of the two. We describe implementations of them within two Australian environmental information communities, eReefs and Bioregional Assessments, and plans for wider deployment.
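    As an illustration of the Conformance Service idea, the following Python sketch polls a set of registered OGC service endpoints and reports their day-to-day status. The endpoint URLs and the pass/fail check are hypothetical simplifications, not the eReefs or Bioregional Assessments implementation.

```python
# Sketch: a day-to-day conformance poll over a community's registered services.
import requests

# Hypothetical registry of a community's service endpoints.
services = {
    "water-obs-wfs": "http://example.org/wfs?service=WFS&request=GetCapabilities",
    "water-obs-csw": "http://example.org/csw?service=CSW&request=GetCapabilities",
}

def check(url: str) -> bool:
    """Return True if the service answers with an XML capabilities document."""
    try:
        r = requests.get(url, timeout=10)
        return r.ok and "xml" in r.headers.get("Content-Type", "")
    except requests.RequestException:
        return False

for name, url in services.items():
    print(f"{name}: {'OK' if check(url) else 'FAILED'}")
```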

    Challenges In The Simultaneous Development And Deployment Of A Large Integrated Modelling System

    Many of our natural resource management issues cannot be adequately informed by a single discipline or sub-discipline, and require an integration of information from multiple natural and human systems. As we are unable to observe and monitor more than a few important indicators, there is a strong reliance on supplementing observed information with modelled information. Following a period of record drought in the 1990s, the Australian government recognised the need for better quality, more integrated, and nationally consistent water information. The Australian Water Resources Assessment system (AWRA) is an integrated hydrological modelling system developed by CSIRO and the Australian Bureau of Meteorology (the Bureau) as part of the Water Information Research and Development Alliance (WIRADA) to support the development of two new water information products produced by the Bureau. This paper outlines the informatics, systems implementation and integration challenges in the development and deployment of the proto-operational AWRA system. Key challenges of model integration are how to access and repurpose data, how to reconcile semantic differences between models and disparate input data sources, how to translate terms when passing between often conceptually different modelling components, and how to ensure consistent identity for real-world objects. The rapid development of AWRA and its simultaneous transfer to an operational environment also raised many additional challenges, such as supporting multiple technologies and differing development rates of each model component while still maintaining a working system. Additionally, the continental-scale model extent, combined with techniques relatively new to the hydrologic domain, such as data assimilation and continental calibration, has introduced significant computational overheads. While an in-house, fit-for-purpose operational build of AWRA is currently under development within the Bureau, the research challenges tackled early in AWRA’s development still hold many valuable lessons. We have found that the use of file standards such as NetCDF, services-based modelling, and scientific workflow technologies such as ‘The WorkBench’, combined with strong model governance, has largely reduced the burden of system development and deployment and exposes some important lessons for future integrated modelling and systems integration efforts.
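    As a small illustration of why a self-describing file standard such as NetCDF eases this kind of integration, the Python sketch below writes and re-reads a toy gridded output with xarray. The variable, grid and values are invented, not AWRA’s actual schema.

```python
# Sketch: exchange gridded model output via NetCDF, with units carried as metadata.
import numpy as np
import xarray as xr

# A tiny continental-style grid of daily runoff, one time step.
ds = xr.Dataset(
    {"runoff": (("time", "lat", "lon"), np.random.rand(1, 3, 4))},
    coords={
        "time": np.array(["2010-01-01"], dtype="datetime64[ns]"),
        "lat": [-10.0, -25.0, -40.0],
        "lon": [115.0, 130.0, 145.0, 160.0],
    },
)
# Self-describing metadata travels with the data, not in a side channel.
ds["runoff"].attrs["units"] = "mm/day"

ds.to_netcdf("runoff.nc")                # another model component can now read it
print(xr.open_dataset("runoff.nc"))
```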

    An Application Of A Services-Based Modelling Paradigm To The Hydrologic Domain Using eWater Source

    The traditional paradigm for the deployment of hydrological models involves capturing and testing model concepts and numerical consistency for robustness and accuracy; the model is then distributed as binary files, with or without source code. The model software is then populated with data and parameters and run locally within the modeller’s organisation, often on their own desktop. This modelling workflow is used by many organisations; however, it has several limitations and potential issues. Once the software is outside the developer’s organisation, the developer relies on the modeller to apply updates and bug fixes in a timely manner and to correctly describe the model version used for reporting. The developer also loses control of the quality and suitability of the input data for a particular application of the model. With more prevalent access to high-bandwidth internet and flexible computing infrastructure, there is an increased opportunity to better control model access by exposing modelling functionality through web services. As well as giving the developer tighter control over model versioning and IP, this approach allows closer coupling of the model to both data sources and computational resources, which is especially beneficial for multi-run use cases such as uncertainty analysis and calibration, where the ability to easily scale to many model instances is of most value. The eWater Source modelling system is an important use case for Australia’s hydrologic community and provides a rich array of functionality. Source is especially suited to the services modelling paradigm because it has project load times much greater than simulation runtimes; the services-based approach hides these load times by keeping the project in memory for each instance of a Source Server. This paper investigates the use of a Source service interface for providing hydrological modelling web services.
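    The load-time argument can be illustrated with a minimal Python sketch of the services pattern: pay the project load cost once at startup, keep the project in memory, and serve many cheap simulation requests. The Project class is a hypothetical stand-in, not eWater Source’s real server interface.

```python
# Sketch: a long-lived model service that amortises a slow project load.
import time
from flask import Flask, jsonify, request

class Project:                       # hypothetical stand-in for a Source project
    def __init__(self, path):
        time.sleep(5)                # simulate a load time >> simulation runtime
        self.path = path
    def run(self, params):
        return {"flow": [1.2, 3.4, 2.1], "params": params}

app = Flask(__name__)
project = Project("example.rsproj")  # loaded once, reused by every request

@app.route("/run", methods=["POST"])
def run_model():
    # Each request reuses the in-memory project, so callers never see the load cost.
    return jsonify(project.run(request.get_json(force=True)))

if __name__ == "__main__":
    app.run(port=8080)
```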

    Insights into hominid evolution from the gorilla genome sequence.

    Gorillas are humans' closest living relatives after chimpanzees, and are of comparable importance for the study of human origins and evolution. Here we present the assembly and analysis of a genome sequence for the western lowland gorilla, and compare the whole genomes of all extant great ape genera. We propose a synthesis of genetic and fossil evidence consistent with placing the human-chimpanzee and human-chimpanzee-gorilla speciation events at approximately 6 and 10 million years ago. In 30% of the genome, gorilla is closer to human or chimpanzee than the latter are to each other; this is rarer around coding genes, indicating pervasive selection throughout great ape evolution, and has functional consequences in gene expression. A comparison of protein coding genes reveals approximately 500 genes showing accelerated evolution on each of the gorilla, human and chimpanzee lineages, and evidence for parallel acceleration, particularly of genes involved in hearing. We also compare the western and eastern gorilla species, estimating an average sequence divergence time of 1.75 million years ago, but with evidence for more recent genetic exchange and a population bottleneck in the eastern species. The use of the genome sequence in these and future analyses will promote a deeper understanding of great ape biology and evolution.

    Comprehensive Rare Variant Analysis via Whole-Genome Sequencing to Determine the Molecular Pathology of Inherited Retinal Disease

    Inherited retinal disease is a common cause of visual impairment and represents a highly heterogeneous group of conditions. Here, we present findings from a cohort of 722 individuals with inherited retinal disease, who have had whole-genome sequencing (n = 605), whole-exome sequencing (n = 72), or both (n = 45) performed, as part of the NIHR-BioResource Rare Diseases research study. We identified pathogenic variants (single-nucleotide variants, indels, or structural variants) for 404/722 (56%) individuals. Whole-genome sequencing gives unprecedented power to detect three categories of pathogenic variants in particular: structural variants, variants in GC-rich regions, which have significantly improved coverage compared to whole-exome sequencing, and variants in non-coding regulatory regions. In addition to previously reported pathogenic regulatory variants, we have identified a previously unreported pathogenic intronic variant in CHM in two males with choroideremia. We have also identified 19 genes not previously known to be associated with inherited retinal disease, which harbor biallelic predicted protein-truncating variants in unsolved cases. Whole-genome sequencing is an increasingly important comprehensive method with which to investigate the genetic causes of inherited retinal disease.

    This work was supported by The National Institute for Health Research England (NIHR) for the NIHR BioResource – Rare Diseases project (grant number RG65966). The Moorfields Eye Hospital cohort of patients and clinical and imaging data were ascertained and collected with the support of grants from the National Institute for Health Research Biomedical Research Centre at Moorfields Eye Hospital, National Health Service Foundation Trust, and UCL Institute of Ophthalmology, Moorfields Eye Hospital Special Trustees, Moorfields Eye Charity, the Foundation Fighting Blindness (USA), and Retinitis Pigmentosa Fighting Blindness. M.M. is a recipient of an FFB Career Development Award. E.M. is supported by UCLH/UCL NIHR Biomedical Research Centre. F.L.R. and D.G. are supported by Cambridge NIHR Biomedical Research Centre.
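    As an illustration of the “biallelic predicted protein-truncating variants” filter mentioned in the abstract above, here is a deliberately simplified Python sketch over toy records; the study’s actual annotation and filtering pipeline is far richer, and the gene names and genotypes here are invented.

```python
# Sketch: flag genes where one individual carries two predicted
# protein-truncating alleles (compound heterozygous or homozygous).
TRUNCATING = {"stop_gained", "frameshift_variant",
              "splice_donor_variant", "splice_acceptor_variant"}

# (gene, consequence, genotype) for one individual; invented example data.
variants = [
    ("GENE_A", "stop_gained", "0/1"),
    ("GENE_A", "frameshift_variant", "0/1"),  # second truncating allele -> biallelic
    ("GENE_B", "missense_variant", "0/1"),    # not protein-truncating
    ("GENE_C", "stop_gained", "1/1"),         # homozygous counts as biallelic
]

def truncating_allele_count(gene):
    n = 0
    for g, csq, gt in variants:
        if g == gene and csq in TRUNCATING:
            n += gt.count("1")                # alleles carrying the variant
    return n

candidates = {g for g, _, _ in variants if truncating_allele_count(g) >= 2}
print(candidates)   # {'GENE_A', 'GENE_C'} (in some order)
```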

    Financial Performance Assessment Of Cooperatives In Pelalawan Regency

    This paper describes the development and financial performance of cooperatives in Pelalawan Regency during 2007-2008. The study covers primary and secondary cooperatives in 12 sub-districts. The method measures cooperative performance in terms of productivity, efficiency, growth, liquidity, and solvency. The productivity of cooperatives in Pelalawan was high but their efficiency was still low. Profit and income were high, liquidity was very high, and solvency was good.
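    For readers unfamiliar with the measures named above, here is a minimal Python sketch of two standard textbook ratios behind liquidity and solvency assessments; the figures are invented for illustration, not data from the Pelalawan study.

```python
# Sketch: textbook liquidity and solvency ratios on invented figures.
current_assets = 500.0       # illustrative values, in millions of rupiah
current_liabilities = 100.0
total_assets = 900.0
total_liabilities = 300.0

liquidity = current_assets / current_liabilities  # current ratio
solvency = total_assets / total_liabilities       # assets per unit of debt

print(f"liquidity (current ratio): {liquidity:.1f}")  # 5.0 -> "very high"
print(f"solvency: {solvency:.1f}")                    # 3.0 -> "good"
```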

    Differential cross section measurements for the production of a W boson in association with jets in proton–proton collisions at √s = 7 TeV

    Measurements are reported of differential cross sections for the production of a W boson, which decays into a muon and a neutrino, in association with jets, as a function of several variables, including the transverse momenta (pT) and pseudorapidities of the four leading jets, the scalar sum of jet transverse momenta (HT), and the difference in azimuthal angle between the directions of each jet and the muon. The data sample of pp collisions at a centre-of-mass energy of 7 TeV was collected with the CMS detector at the LHC and corresponds to an integrated luminosity of 5.0 fb⁻¹. The measured cross sections are compared to predictions from Monte Carlo generators, MadGraph + pythia and sherpa, and to next-to-leading-order calculations from BlackHat + sherpa. The differential cross sections are found to be in agreement with the predictions, apart from the pT distributions of the leading jets at high pT values, the distributions of HT at high HT and low jet multiplicity, and the distribution of the difference in azimuthal angle between the leading jet and the muon at low values.

    Funding: United States Department of Energy; National Science Foundation (U.S.); Alfred P. Sloan Foundation.
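    Two of the observables above can be made concrete with a short Python sketch using invented jet kinematics: HT is the scalar sum of jet pT, and the jet-muon separation is the azimuthal difference folded into [0, π].

```python
# Sketch: compute HT and the jet-muon azimuthal separation for invented jets.
import math

jet_pts = [85.0, 52.0, 33.0, 21.0]   # illustrative jet pT values, GeV
jet_phis = [0.3, 2.9, -1.4, 1.1]     # azimuthal angles, radians
muon_phi = -2.8

ht = sum(jet_pts)                    # HT = scalar sum of jet pT = 191.0 GeV

def delta_phi(phi1, phi2):
    """Azimuthal separation, wrapped so the result lies in [0, pi]."""
    d = abs(phi1 - phi2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

print(f"HT = {ht} GeV")
print(f"dphi(leading jet, muon) = {delta_phi(jet_phis[0], muon_phi):.2f}")
```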

    Juxtaposing BTE and ATE – on the role of the European insurance industry in funding civil litigation

    One of the ways in which legal services are financed, and indeed shaped, is through private insurance arrangements. Two contrasting types of legal expenses insurance (LEI) contracts seem to dominate in Europe: before-the-event (BTE) and after-the-event (ATE) legal expenses insurance. Notwithstanding institutional differences between legal systems, BTE and ATE insurance arrangements may be instrumental if government policy is geared towards strengthening a market-oriented system of financing access to justice for individuals and businesses. At the same time, emphasizing the role of a private industry as a keeper of the gates to justice raises issues of accountability and transparency that are not readily reconcilable with the demands of competition. Moreover, multiple actors (clients, lawyers, courts, insurers) are involved, causing behavioural dynamics which are not easily predicted or influenced. Against this background, this paper looks into BTE and ATE arrangements by analysing the particularities of the BTE and ATE arrangements currently available in some European jurisdictions and by painting a picture of their respective markets and legal contexts. This allows for some reflection on the performance of BTE and ATE providers as both financiers and keepers. Two issues emerge from the analysis that are worthy of further reflection. Firstly, there is the problematic long-term sustainability of some ATE products. Secondly, there are the challenges faced by policymakers who would like to nudge consumers into voluntarily taking out BTE LEI.

    Severe early onset preeclampsia: short and long term clinical, psychosocial and biochemical aspects

    Preeclampsia is a pregnancy-specific disorder commonly defined as de novo hypertension and proteinuria after 20 weeks’ gestational age. It occurs in approximately 3-5% of pregnancies and is still a major cause of both foetal and maternal morbidity and mortality worldwide [1]. As extensive research has not yet elucidated the aetiology of preeclampsia, there are no rational preventive or therapeutic interventions available. The only rational treatment is delivery, which benefits the mother but is not in the interest of the foetus if remote from term. Early onset preeclampsia (<32 weeks’ gestational age) occurs in less than 1% of pregnancies. It is, however, often associated with maternal morbidity, as the risk of progression to severe maternal disease is inversely related to gestational age at onset [2]. Resulting prematurity is therefore the main cause of neonatal mortality and morbidity in patients with severe preeclampsia [3]. Although the discussion is ongoing, perinatal survival is suggested to be increased in patients with preterm preeclampsia by expectant, non-interventional management. This temporising treatment option to lengthen pregnancy includes the use of antihypertensive medication to control hypertension, magnesium sulphate to prevent eclampsia, and corticosteroids to enhance foetal lung maturity [4]. With optimal maternal haemodynamic status and a reassuring foetal condition, this results on average in an extension of 2 weeks. Prolongation of these pregnancies presents clinicians with a great challenge: balancing potential maternal risks on the one hand against possible foetal benefits on the other. Clinical controversies regarding prolongation of preterm preeclamptic pregnancies still exist – also taking into account that preeclampsia is the leading cause of maternal mortality in the Netherlands [5] – a debate which is even more pronounced in very preterm pregnancies with questionable foetal viability [6-9]. Do the maternal risks of prolonging these very early pregnancies outweigh the chances of neonatal survival? Counselling of women with very early onset preeclampsia comprises not only knowledge of the outcome of those particular pregnancies, but also knowledge of the outcomes of these women’s future pregnancies, which is of major clinical importance. This thesis opens with a review of the literature on identifiable risk factors for preeclampsia.