
    Characterization of neurological disorders using evolutionary algorithms

    The increase in life expectancy over the last few decades has led to a wide diffusion of age-related neurodegenerative diseases such as Parkinson’s disease. Neurodegenerative diseases belong to the broad category of neurological disorders, which comprises all disorders affecting the central nervous system. These conditions have a severe impact on the quality of life of both patients and their families, as well as on the costs to society of their diagnosis and management. To reduce this impact on individuals and society, better strategies for the diagnosis and monitoring of neurological disorders need to be considered. The main aim of this study is to investigate the use of artificial intelligence techniques as a tool to help doctors in the diagnosis and monitoring of two specific neurological disorders (Parkinson’s disease and dystonia), for which no objective clinical assessments exist. Evolutionary algorithms are chosen as the artificial intelligence technique used to evolve the best classifiers. The classifiers evolved by this technique are then compared with those produced by two popular, well-known techniques: artificial neural networks and support vector machines. All the evolved classifiers are able to distinguish not only between patients and healthy subjects but also between different subgroups of patients. For Parkinson’s disease, two cognitive impairment subgroups of patients are considered, with the aim of earlier diagnosis and better monitoring. For dystonia, two kinds of dystonia patients are considered (organic and functional) to gain better insight into the division between the two groups. The results obtained for Parkinson’s disease are encouraging and show some differences between the cognitive impairment subgroups. The dystonia results are not satisfactory at this stage, but the study has some limitations that could be overcome in future work.
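    The abstract does not specify the evolutionary algorithm used. As a minimal illustration of the general idea of evolving a classifier, the sketch below runs a simple (1+1) evolution strategy over the weights of a linear classifier on synthetic data; the dataset, mutation scale, and iteration budget are all hypothetical stand-ins, not details from the study.

```python
import random

random.seed(0)

# Synthetic two-feature dataset: label 1 roughly when x0 + x1 > 1.
# (A stand-in for clinical measurements; not the study's data.)
data = []
for _ in range(100):
    x0, x1 = random.random(), random.random()
    data.append(((x0, x1), 1 if x0 + x1 > 1.0 else 0))

def accuracy(w):
    """Fraction of samples the linear classifier w gets right."""
    correct = 0
    for (x0, x1), y in data:
        pred = 1 if w[0] * x0 + w[1] * x1 + w[2] > 0 else 0
        correct += (pred == y)
    return correct / len(data)

# (1+1)-ES: mutate the parent with Gaussian noise; keep the child
# only if it is at least as fit as the parent.
parent = [0.0, 0.0, 0.0]
best = accuracy(parent)
for _ in range(300):
    child = [w + random.gauss(0, 0.3) for w in parent]
    fit = accuracy(child)
    if fit >= best:
        parent, best = child, fit

print(round(best, 2))
```

    On this separable toy problem the evolved classifier's accuracy climbs well above chance; comparing such evolved classifiers against neural networks and SVMs, as the study does, would use the same fitness measure as the common yardstick.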

    Enhancing Covid-19 Decision-Making by Creating an Assurance Case for Simulation Models

    Simulation models have been informing the COVID-19 policy-making process. These models, therefore, have significant influence on the risk of societal harms. But how clearly are the underlying modelling assumptions and limitations communicated so that decision-makers can readily understand them? When making claims about risk in safety-critical systems, it is common practice to produce an assurance case, which is a structured argument supported by evidence, with the aim of assessing how confident we should be in our risk-based decisions. We argue that any COVID-19 simulation model that is used to guide critical policy decisions would benefit from being supported with such a case to explain how, and to what extent, the evidence from the simulation can be relied on to substantiate policy conclusions. This would enable a critical review of the implicit assumptions and inherent uncertainty in modelling, and would give the overall decision-making process greater transparency and accountability.

    Creating a Safety Assurance Case for an ML Satellite-Based Wildfire Detection and Alert System

    Wildfires are a common problem in many areas of the world, with often catastrophic consequences. A number of systems have been created to provide early warnings of wildfires, including those that use satellite data to detect fires. The increased availability of small satellites, such as CubeSats, allows the wildfire detection response time to be reduced by deploying constellations of multiple satellites over regions of interest. By using machine learned components on board the satellites, constraints that limit the amount of data that can be processed and sent back to ground stations can be overcome. There are hazards associated with wildfire alert systems, such as failing to detect the presence of a wildfire, or detecting a wildfire in the incorrect location. It is therefore necessary to be able to create a safety assurance case for the wildfire alert ML component that demonstrates it is sufficiently safe for use. This paper describes in detail how a safety assurance case for an ML wildfire alert system is created. This represents the first fully developed safety case for an ML component containing explicit argument and evidence as to the safety of the machine learning.

    Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems

    Machine Learnt (ML) components are now widely accepted for use in a range of applications, with results that are reported to exceed, under certain conditions, human performance. The adoption of ML components in safety-related domains is restricted, however, unless sufficient assurance can be demonstrated that the use of these components does not compromise safety. In this paper, we present patterns that can be used to develop assurance arguments for demonstrating the safety of ML components. The argument patterns provide reusable templates for the types of claims that must be made in a compelling argument. On their own, the patterns neither detail the assurance artefacts that must be generated to support the safety claims for a particular system, nor provide guidance on the activities that are required to generate these artefacts. We have therefore also developed a process for the engineering of ML components in which the assurance evidence can be generated at each stage in the ML lifecycle in order to instantiate the argument patterns and create the assurance case for ML components. The patterns and the process could help provide a practical and clear basis for a justifiable deployment of ML components in safety-related systems.
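    Assurance argument patterns are normally expressed graphically (e.g. in Goal Structuring Notation), not as code. As a hypothetical sketch of the underlying idea, the snippet below treats a pattern as a reusable claim template that is instantiated with system-specific details; the component, requirement, and evidence named are invented examples, not ones from the paper.

```python
from string import Template

# A pattern is a claim template with placeholders for the
# system-specific artefacts that instantiate it.
pattern = Template(
    "The ML component $component satisfies safety requirement "
    "'$requirement', supported by evidence: $evidence."
)

# Instantiating the pattern for one (hypothetical) system:
claim = pattern.substitute(
    component="object detector",
    requirement="detect pedestrians within 50 ms",
    evidence="test results on the verification dataset",
)
print(claim)
```

    The process the paper describes is what supplies real values for these placeholders: each lifecycle stage generates the artefacts that the instantiated pattern then cites as evidence.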

    Enhancing COVID-19 decision making by creating an assurance case for epidemiological models

    When the UK government was first confronted with the very real threat of a COVID-19 pandemic, policy-makers turned quickly, and initially almost exclusively, to scientific data provided by epidemiological models. These models have had a direct and significant influence on the policies and decisions, such as social distancing and the closure of schools, which aim to reduce the risk posed by COVID-19 to public health. The models suggested that, depending on the strategies chosen, the number of deaths could vary by hundreds of thousands. From a safety engineering perspective, it is clear that the data generated by epidemiological models are safety critical, and that, therefore, the models themselves should be regarded as safety-critical systems.

    Transfer Assurance for Machine Learning in Autonomous Systems

    This paper introduces the concept of transfer assurance for Machine Learning (ML) components used as part of an autonomous system (AS). In previous work we developed the first approach for assuring the safety of ML components such that a compelling safety case can be created for their safe deployment. During operation it may be necessary to update an ML component by re-training the model using new or updated development data. If model re-training is required post-deployment, the safety case that was created for the ML component may no longer be valid, since a new model has been created that can no longer be assured to meet its safety requirements. In particular, the nature of machine learnt components means that one may not be able to predict how even small changes in the development data may affect the model and its performance. As a result, current practice would require that a full assurance assessment be undertaken for the re-trained model, and that a new safety case be created. Given the desirability of updating ML components during operation, we see it as imperative that the assurance process becomes more proportionate to the size of the change that is made to the model, whilst ensuring that assurance can still be demonstrated. Re-training ML components is known to be a costly and complex process, and as such, techniques such as transfer learning have been developed which aim to reduce this burden through incremental development. Approaches such as transfer learning provide an inspiration for how the challenge of efficiently assuring updated models could be addressed, through understanding which aspects of a model may have been affected by changes to the development data. We refer to this as transfer assurance, where parts of the assurance case for an ML component can remain fixed whilst other parts are re-assessed.
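    The core idea, keeping assurance artefacts fixed unless their dependencies changed, can be illustrated with a hypothetical sketch (not the paper's actual process): each artefact records which development-data or model attributes it depends on, and after re-training only the artefacts whose dependencies intersect the changed items are flagged for re-assessment. All artefact and attribute names below are invented for illustration.

```python
# Map each assurance artefact to the attributes it depends on.
# (Hypothetical names, not taken from the paper.)
artefact_deps = {
    "data_coverage_report":   {"training_images"},
    "robustness_tests":       {"training_images", "model_weights"},
    "requirements_trace":     set(),          # independent of data/model
    "performance_evaluation": {"model_weights"},
}

def artefacts_to_reassess(changed):
    """Return artefacts whose dependencies intersect the changed items."""
    return sorted(name for name, deps in artefact_deps.items()
                  if deps & changed)

# Re-training changed only the model weights, so only the artefacts
# depending on them need re-assessment; the rest of the case stands.
stale = artefacts_to_reassess({"model_weights"})
print(stale)
```

    This mirrors the proportionality goal: the smaller the change set, the smaller the portion of the assurance case that must be re-built.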

    The show must go on: a snapshot of Italian academic working life during mandatory work from home

    During the COVID-19 pandemic, universities worldwide have provided continuity to research and teaching through mandatory work from home. Taking into account the specificities of the Italian academic environment and using the Job Demand-Resource-Recovery model, the present study uses an online survey to provide, for the first time, a description of the experiences of a large sample of academics (N = 2365) and technical and administrative staff (N = 4086) working in Italian universities. The study analyzes the main differences between genders, roles, and work areas in terms of job demands, recovery experiences, and outcomes, all important dimensions for achieving goals 3, 4, and 5 of the 2030 Agenda for Sustainable Development. The results support reflections on gender equality measures in universities and provide a general framework useful for further in-depth analysis and for developing measures to improve well-being (SDG 3), quality of education (SDG 4), and gender equality (SDG 5).