5,287 research outputs found

    No. 1: Towards the Harmonization of Immigration and Refugee Law in SADC

    The MIDSA project on legal harmonization of immigration and refugee law in the Southern African Development Community had four main objectives: (a) to collect and collate information on national legislation in a single publication as a resource for policy-makers; (b) to identify points of similarity and difference in national immigration law between SADC member states; (c) to investigate the possibilities for harmonization of national immigration policy and law; and (d) in the interests of good governance and regional cooperation and integration, to make specific recommendations for harmonization. A second, parallel SAMP study is investigating the issue of harmonization of migration data collection systems within SADC. For ease of inter-country comparison, the report contains a series of comparative tables covering all facets of the immigration regime of the SADC states. The tables can be used as a resource in themselves but are also used to supplement the analysis in the text proper. This executive summary focuses on the main findings and recommendations of the narrative report.

    The states of the SADC have committed themselves to increased regional cooperation and integration. This commitment is reflected in a series of Protocols to which the various states are signatory. The Protocol dealing with the cross-border migration of people within SADC (the so-called “Draft Free Movement Protocol”) owed too much to European (Schengen) precedent and too little to the political and economic realities of the region. As a result, the Protocol (and a modified version called the “Facilitation of Movement Protocol”) was rejected by certain states in the region (primarily the migrant-receiving states). The level of opposition was such that the Protocol was shelved by SADC in 2000.

    While this publication is not designed to promote or contest the idea of free movement, it is the belief of the MIDSA partners that good migration governance is a general aim to which all can subscribe. To that end, it makes perfect sense for the individual states of SADC to re-examine their current legislation. Migration has changed dramatically in the last decade, and a review of the adequacy of existing legal and policy instruments would be a positive development for all states. Beyond the issue of updating legislation and making it more relevant to current management challenges, it is clear that regional cooperation in migration management would be facilitated by a set of basic principles and laws that applied more or less across the region. Obviously each country has certain unique features, and each state reserves the right to pursue its own immigration policy. However, there are many features of migration governance that are common to all, and there is nothing to be lost, and a great deal to be gained, by simplification and standardization. A regional review of this nature also allows for an analysis of the degree to which individual states have been influenced by or subscribe to international conventions and norms in the migration and refugee protection areas. A secondary purpose of this publication is therefore to stimulate a regional debate on the extent to which individual SADC states do or should adhere to the principles of international conventions and guidelines on the movement of peoples and the protection of the persecuted.

    OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    Background: The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing.

    Results: The following related ontologies have been developed for OpenTox: a) the Toxicological ontology, listing the toxicological endpoints; b) the Organs system and Effects ontology, addressing organs, targets/examinations and effects observed in in vivo studies; c) the ToxML ontology, a semi-automatic conversion of the ToxML schema; d) the OpenTox ontology, a representation of OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) the ToxLink-ToxCast assays ontology; and f) OpenToxipedia, a community knowledge resource on toxicology terminology. OpenTox components are made available through standardized REST web services, where every compound, data set and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation or to initiate the associated calculations and generate new RDF-based resources. The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and the seamless integration of new algorithms and scientifically sound validation routines, and they provide a flexible framework that allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists).

    Availability: The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1.1/opentox.owl; the ToxML-OWL conversion utility is an open-source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/.
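
    A short sketch may help illustrate the URI-based REST access pattern described above. It is not taken from the OpenTox documentation: the compound URI is a hypothetical placeholder, and the snippet only assumes that a resource returns an RDF/XML representation when requested over HTTP, as the abstract states (Python, using the requests and rdflib libraries).

    # Sketch: dereference an OpenTox-style resource URI and read its RDF
    # representation. The URI below is a hypothetical placeholder; only the
    # general REST + RDF pattern described in the abstract is assumed.
    import requests
    from rdflib import Graph

    compound_uri = "http://example.org/opentox/compound/123"  # hypothetical URI

    response = requests.get(compound_uri, headers={"Accept": "application/rdf+xml"})
    response.raise_for_status()

    graph = Graph()
    graph.parse(data=response.text, format="xml")

    # Print every triple describing the compound resource.
    for subject, predicate, obj in graph:
        print(subject, predicate, obj)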

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered on the World Heritage List since 1997. The area was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    Quality Assurance for the POCT Systems


    Doctor of Philosophy

    Clinical decision support (CDS) and electronic clinical quality measurement (eCQM) are two important computerized strategies aimed at improving the quality of healthcare. Unfortunately, computer-facilitated quality improvement faces many barriers. One problem area is the lack of integration of CDS and eCQM, which leads to duplicative efforts, inefficiencies, misalignment of CDS and eCQM implementations, and a lack of appropriate automated feedback on clinicians' performance. Another obstacle to the acceptance of electronic interventions can be the inadequate accuracy of electronic phenotyping, which leads to alert fatigue and clinicians' mistrust of eCQM results. To address these two problems, the research pursued three primary aims. Aim 1: Explore beliefs and perceptions regarding the integration of CDS and eCQM functionality and activities within a healthcare organization. Aim 2: Evaluate and demonstrate the feasibility of implementing quality measures using a CDS infrastructure. Aim 3: Assess and improve strategies for human validation of electronic phenotype evaluation results. To address Aim 1, a qualitative study based on interviews with domain experts was performed. Through semistructured in-depth and critical incident interviews, stakeholders' insights about CDS and eCQM integration were obtained. The experts identified multiple barriers to the integration of CDS and eCQM and offered advice for addressing those barriers, which the research team synthesized into 10 recommendations. To address Aim 2, the feasibility of using a standards-based CDS framework, aligned with anticipated electronic health record (EHR) certification criteria, to implement electronic quality measurement (QM) was evaluated. The CDS-QM framework was used to automate a complex national quality measure at an academic healthcare system that had previously relied on time-consuming manual chart abstractions. To address Aim 3, a randomized controlled study was conducted to evaluate whether electronic phenotyping results should be used to support manual chart review during single-reviewer electronic phenotyping validation. The accuracy, duration, and cost of manual chart review were evaluated with and without the availability of electronic phenotyping results, including relevant patient-specific details. Providing electronic phenotyping results was associated with improved overall accuracy of manual chart review and decreased review duration per test case. Overall, the findings informed new strategies for enhancing the efficiency and accuracy of computer-facilitated quality improvement.
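
    The Aim 3 study compares single-reviewer chart review with and without electronic phenotyping results. The sketch below is a minimal, hypothetical illustration of that comparison; the arm labels, field names and numbers are invented and are not taken from the dissertation.

    # Hypothetical comparison of two chart-review arms: reviews performed with
    # electronic phenotyping results shown versus without. All values invented.
    from statistics import mean

    reviews = [
        {"arm": "with_ephenotyping", "correct": True, "minutes": 4.5},
        {"arm": "with_ephenotyping", "correct": True, "minutes": 5.0},
        {"arm": "without_ephenotyping", "correct": False, "minutes": 7.5},
        {"arm": "without_ephenotyping", "correct": True, "minutes": 8.0},
    ]

    for arm in ("with_ephenotyping", "without_ephenotyping"):
        cases = [r for r in reviews if r["arm"] == arm]
        accuracy = mean(1.0 if r["correct"] else 0.0 for r in cases)
        duration = mean(r["minutes"] for r in cases)
        print(f"{arm}: accuracy={accuracy:.2f}, mean review time={duration:.1f} min")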

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed. Full text: https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf; https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pdf

    Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data

    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analysis of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data, we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realizing the huge potential of big data, reaping the expected information benefits and building lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy, and its hallmark will be ‘team science’. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/134522/1/13742_2016_Article_117.pdf
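
    As a loose illustration of the automated classification of heterogeneous data discussed above, the sketch below fits a standard scikit-learn pipeline to a small mixed-type table. The column names and rows are synthetic, and the pipeline is a generic example rather than the article's actual workflow.

    # Classify a heterogeneous (mixed numeric/categorical) table with a generic
    # preprocessing + random forest pipeline. All data here is synthetic.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    data = pd.DataFrame({
        "age": [54, 61, 47, 70],
        "biomarker": [1.2, 3.4, 0.8, 2.9],
        "site": ["A", "B", "A", "C"],
        "outcome": [0, 1, 0, 1],
    })

    preprocess = ColumnTransformer([
        ("numeric", StandardScaler(), ["age", "biomarker"]),
        ("categorical", OneHotEncoder(handle_unknown="ignore"), ["site"]),
    ])

    model = Pipeline([
        ("prep", preprocess),
        ("clf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ])
    model.fit(data[["age", "biomarker", "site"]], data["outcome"])
    print(model.predict(data[["age", "biomarker", "site"]]))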

    Automation of the Harmonization, Analysis and Evaluation of Information Security Requirements

    The growing use of information technology (IT) in the daily operations of enterprises requires an ever-increasing level of protection of an organization's assets and information from unauthorised access, data leakage and other types of information security breach. Because of that, it becomes vital to ensure the necessary level of protection, and one of the best ways to achieve this goal is to implement the controls defined in information security documents. The problems faced by different organizations are related to the fact that organizations are often required to comply with multiple information security documents and their requirements. Currently, the protection of an organization's assets and information is based on the information security specialist's knowledge, skills and experience. The lack of automated tools for harmonizing, analysing and visualizing multiple information security documents and their requirements leads to information security being implemented in ineffective ways, causing duplication of controls or an increased cost of security implementation. An automated approach for the analysis, mapping and visualization of information security documents would contribute to solving this issue. The dissertation consists of an introduction, three main chapters and general conclusions. The first chapter introduces existing information security regulatory documents, current harmonization techniques, methods for evaluating the cost of information security implementation, and ways to analyse information security requirements by applying graph theory optimization algorithms (vertex cover and graph isomorphism). The second chapter proposes ways to evaluate information security implementation and its costs through a controls-based approach; the effectiveness of this method could be improved by automating the initial gathering of data from business process diagrams. In the third chapter, adaptive mapping on the basis of a security ontology is introduced for the harmonization of different security documents; this approach also makes it possible to apply visualization techniques to present the harmonization results. Graph optimization algorithms (the vertex cover and graph isomorphism algorithms) were proposed for identifying a Minimum Security Baseline and for verifying the achieved results against controls implemented in small and medium-sized enterprises. It was concluded that the proposed methods provide sufficient data for the adjustment and verification of security controls required by multiple information security documents.
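
    The vertex cover idea mentioned above can be sketched as follows: controls are vertices, an edge joins two controls that address the same requirement in different documents, and a small vertex cover then suggests a compact Minimum Security Baseline. The snippet uses the classic greedy take-both-endpoints heuristic and invented control names; it illustrates the general technique, not the dissertation's exact algorithm.

    # Greedy 2-approximation of vertex cover over hypothetical control overlaps.
    def greedy_vertex_cover(edges):
        """Return a vertex cover using the take-both-endpoints heuristic."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # Invented overlaps between controls from two security documents.
    overlaps = [
        ("ISO_A.9.2", "NIST_AC-2"),
        ("ISO_A.9.4", "NIST_AC-6"),
        ("ISO_A.12.4", "NIST_AU-2"),
        ("NIST_AC-2", "ISO_A.9.4"),
    ]

    print(sorted(greedy_vertex_cover(overlaps)))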

    Data-Driven Network Neuroscience: On Data Collection and Benchmark

    This paper presents a comprehensive, high-quality collection of functional human brain network data for research at the intersection of neuroscience, machine learning, and graph analytics. Anatomical and functional MRI images of the brain have been used to understand the functional connectivity of the human brain and are particularly important in identifying underlying neurodegenerative conditions such as Alzheimer's, Parkinson's, and autism. Recently, the study of the brain in the form of brain networks using machine learning and graph analytics has become increasingly popular, especially for predicting the early onset of these conditions. A brain network, represented as a graph, retains rich structural and positional information that traditional examination methods are unable to capture. However, the lack of brain network data derived from functional MRI images prevents researchers from carrying out data-driven explorations. One of the main difficulties lies in the complicated domain-specific preprocessing steps and the exhaustive computation required to convert MRI images into brain networks. We bridge this gap by collecting a large number of available MRI images from existing studies, working with domain experts to make sensible design choices, and preprocessing the MRI images to produce a collection of brain network datasets. The datasets originate from 5 different sources, cover 3 neurodegenerative conditions, and consist of a total of 2,642 subjects. We test our graph datasets on 5 machine learning models commonly used in neuroscience and on a recent graph-based analysis model to validate the data quality and to provide domain baselines. To lower the barrier to entry and promote research in this interdisciplinary field, we release our brain network data at https://doi.org/10.17608/k6.auckland.21397377, together with complete preprocessing details, including code.
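
    The kind of analysis these datasets enable can be sketched by thresholding a functional connectivity matrix into a graph and computing simple network measures. The connectivity matrix below is random toy data, and no assumption is made about the released datasets' actual file format (Python, using numpy and networkx).

    # Build a toy brain network from a random connectivity matrix and compute
    # basic graph features. Real datasets would replace the random matrix.
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    n_regions = 8
    connectivity = np.corrcoef(rng.normal(size=(n_regions, 40)))  # toy correlations

    threshold = 0.3
    adjacency = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)

    graph = nx.from_numpy_array(adjacency)
    print("edges:", graph.number_of_edges())
    print("mean clustering:", nx.average_clustering(graph))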