    Exiting the risk assessment maze: A meta-survey

    Organizations are exposed to threats that increase the risk to their ICT systems. Ensuring the protection of these systems is crucial, as reliance on information technology is a continuing challenge for both security experts and chief executives. Since risk assessment is a necessary process in an organization, its deliverables can be used to address threats and thus facilitate the development of a security strategy. Given the large number of heterogeneous risk assessment methods and tools that exist, comparison criteria can provide a better understanding of their options and characteristics and facilitate the selection of the method that best fits an organization’s needs. This paper addresses the problem of selecting an appropriate risk assessment method for assessing and managing information security risks by proposing a set of comparison criteria, grouped into four categories. Based on these criteria, it compares ten popular risk assessment methods so that organizations can determine which method is most suitable for their needs. Finally, a case study demonstrates the selection of a method based on the proposed criteria.
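    The criteria-based selection the abstract describes can be viewed as a weighted scoring problem. A minimal sketch, with method names, criterion categories, weights and scores all invented for illustration (the paper's actual criteria and methods are not reproduced here):

```python
# Hypothetical sketch: ranking risk assessment methods against weighted
# comparison criteria. Names, weights and scores are illustrative only.

CRITERIA_WEIGHTS = {          # one weight per criteria category, summing to 1
    "scope": 0.3,
    "tool_support": 0.2,
    "cost": 0.2,
    "usability": 0.3,
}

# Scores (0-5) a hypothetical organization assigned to three methods.
SCORES = {
    "Method A": {"scope": 4, "tool_support": 3, "cost": 2, "usability": 5},
    "Method B": {"scope": 3, "tool_support": 5, "cost": 4, "usability": 3},
    "Method C": {"scope": 5, "tool_support": 2, "cost": 3, "usability": 2},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Weighted sum of a method's per-criterion scores."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

def best_method(all_scores: dict[str, dict[str, int]]) -> str:
    """Return the method whose weighted score is highest."""
    return max(all_scores, key=lambda m: weighted_score(all_scores[m]))
```

    An organization would adjust the weights to reflect its own priorities (e.g. a cost-constrained organization would raise the weight of "cost"), which is exactly why a fixed ranking of methods cannot replace the comparison criteria themselves.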

    Animal dietary exposure : overview of current approaches used at EFSA

    At EFSA, animal dietary exposure estimates are undertaken by several Panels/Units to assess the risk of feed contaminants, pesticide residues, genetically modified feed and feed additives. Guidance documents describing methodologies for animal dietary exposure assessment are available at both EFSA and international levels. Although appropriate within their pertinent regulatory frameworks, the methodologies used to assess animal dietary exposure vary across risk assessment areas. The approaches range from quick worst-case estimations to more refined methods assessing actual exposure, and draw on a heterogeneous selection of animal populations and default values to estimate feed intake. Furthermore, the current feed classification systems in place at international and national levels contain a large and heterogeneous number of feed materials, which may benefit from further harmonisation efforts. This technical report presents an overview of the current approaches in place at EFSA to assess exposure to chemicals in feed. The possibility of greater harmonisation of feed classification and terminology is also addressed by comparing the structure of the EU catalogue of feed materials and the Harmonised OECD tables of feedstuffs derived from field crops with the EFSA FoodEx2 system.
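    The quick worst-case estimation mentioned above typically reduces to a single ratio: contaminant concentration in feed times daily feed intake, divided by body weight. A minimal sketch of that calculation; the numeric default values below are assumptions for illustration, not EFSA's official defaults:

```python
def dietary_exposure(conc_mg_per_kg_feed: float,
                     feed_intake_kg_day: float,
                     body_weight_kg: float) -> float:
    """Worst-case dietary exposure in mg per kg body weight per day."""
    return conc_mg_per_kg_feed * feed_intake_kg_day / body_weight_kg

# Illustrative (NOT official) defaults: a dairy cow eating 20 kg feed/day
# at 650 kg body weight, with a contaminant present at 2 mg/kg feed.
exposure = dietary_exposure(2.0, 20.0, 650.0)   # mg/kg bw per day
```

    Refined methods replace the single worst-case concentration and intake values with distributions or measured data, but keep the same underlying ratio.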

    Mining health knowledge graph for health risk prediction

    Classification models have been widely adopted in healthcare to support practitioners in disease diagnosis and to reduce human error. The challenge is to find effective methods for mining real-world data in the medical domain, as many different models have been proposed with varying results. A large number of researchers focus on the diversity problem of real-time data sets in classification models. Some previous works developed methods comprising homogeneous graphs for knowledge representation and subsequent knowledge discovery. However, such approaches are weak at discovering the different relationships among elements. In this paper, we propose an innovative classification model for knowledge discovery from patients’ personal health repositories. The model discovers medical domain knowledge from the massive data in the National Health and Nutrition Examination Survey (NHANES) and conceptualises this knowledge in a heterogeneous knowledge graph. On the basis of the model, an innovative method is developed to help uncover potential diseases suffered by people and, furthermore, to classify patients’ health risk. The proposed model is evaluated in an empirical experiment against a baseline model also built on the NHANES data set, and its performance is promising. The paper makes significant contributions to the advancement of knowledge in data mining with an innovative classification model specifically crafted for domain-based data. In addition, by providing access to the patterns in various observations, the research contributes to the work of practitioners with a multifaceted understanding of individual and public health.
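    The advantage of a heterogeneous graph over a homogeneous one is that nodes of different types (patients, observations, diseases) can be linked by typed edges, so risk can be inferred by following paths across types. A toy sketch of that idea, with all identifiers and edges fabricated for illustration (this is not the paper's actual model):

```python
# Illustrative heterogeneous graph stored as typed edge lists, used to
# flag diseases that share abnormal observations with a patient.

from collections import defaultdict

# Edge lists: (patient, observation) and (observation, disease).
patient_obs = [("p1", "high_glucose"), ("p1", "high_bmi"), ("p2", "high_bp")]
obs_disease = [("high_glucose", "diabetes"), ("high_bmi", "diabetes"),
               ("high_bp", "hypertension")]

def candidate_diseases(patient: str) -> dict[str, int]:
    """Count observation-mediated links from a patient to each disease."""
    obs_to_dis = defaultdict(set)
    for obs, dis in obs_disease:
        obs_to_dis[obs].add(dis)
    counts: dict[str, int] = defaultdict(int)
    for pat, obs in patient_obs:
        if pat == patient:
            for dis in obs_to_dis[obs]:
                counts[dis] += 1
    return dict(counts)
```

    A homogeneous graph would have to flatten patients, observations and diseases into one node type, losing exactly the cross-type relationships that the counting above exploits.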

    Models of everywhere revisited: a technological perspective

    The concept ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place, changing the nature of the underlying modelling process from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of each place. At one level this is a straightforward concept, but at another it is a rich, multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything and models at all times, constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has not yet been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. It argues that, when the concept was first proposed, technology was a limiting factor, but that advances in areas such as the Internet of Things, cloud computing and data analytics have since alleviated many of the barriers. Consequently, it is timely to revisit the concept of models of everywhere in practical conditions as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.

    Assessing the joint impact of DNAPL source-zone behavior and degradation products on the probabilistic characterization of human health risk

    The release of industrial contaminants into the subsurface has led to a rapid degradation of groundwater resources. Contamination caused by Dense Non-Aqueous Phase Liquids (DNAPLs) is particularly severe owing to their limited solubility, slow dissolution and, in many cases, high toxicity. A greater insight into how the DNAPL source zone behavior and the contaminant release towards the aquifer impact human health risk is crucial for appropriate risk management. Risk analysis is further complicated by the uncertainty in aquifer properties and contaminant conditions. This study focuses on the impact of the DNAPL release mode on the propagation of human health risk along the aquifer under uncertain conditions. Contaminant concentrations released from the source zone are described using a screening approach with a set of parameters representing several scenarios of DNAPL architecture. The uncertainty in the hydraulic properties is systematically accounted for by high-resolution Monte Carlo simulations. We simulate the release and transport of the chlorinated solvent perchloroethylene and its carcinogenic degradation products in randomly heterogeneous porous media. The human health risk posed by the chemical mixture of these contaminants is characterized by the low-order statistics and the probability density function of common risk metrics. We show that the zone of high risk (hot spot) is independent of the DNAPL mass release mode, and that the risk amplitude is mostly controlled by heterogeneities and by the source zone architecture. The risk is lower and less uncertain when the source zone is formed mostly by ganglia than by pools. We also illustrate how the source zone efficiency (the intensity of the water flux crossing the source zone) affects the risk posed by exposure to the chemical mixture. The results show that high source zone efficiencies are, counter-intuitively, beneficial: they decrease the risk by reducing the time available for the production of the highly toxic subspecies.
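    The probabilistic risk characterization described above can be illustrated with a heavily simplified Monte Carlo loop: sample an uncertain groundwater concentration, convert it to a chronic daily intake, multiply by a cancer slope factor, and tabulate the resulting risk distribution. All parameter values and the lognormal concentration model below are assumptions for the sketch, not the study's actual setup:

```python
# Minimal Monte Carlo sketch of a carcinogenic risk metric under an
# uncertain groundwater concentration. Parameter values are illustrative.

import math
import random

random.seed(0)

IR, EF, ED = 2.0, 350.0, 30.0        # intake L/day, days/yr, exposure years
BW, AT = 70.0, 70.0 * 365.0          # body weight kg, averaging time days
SLOPE_FACTOR = 2.1e-3                # (mg/kg-day)^-1, assumed value for PCE

def cancer_risk(conc_mg_L: float) -> float:
    """Risk = chronic daily intake x slope factor."""
    cdi = conc_mg_L * IR * EF * ED / (BW * AT)
    return cdi * SLOPE_FACTOR

# Lognormal concentration: median 0.05 mg/L, sigma(ln C) = 1.0 (assumed).
samples = [cancer_risk(math.exp(math.log(0.05) + random.gauss(0.0, 1.0)))
           for _ in range(10_000)]

# Probability of exceeding a 1-in-a-million risk threshold.
p_exceed = sum(r > 1e-6 for r in samples) / len(samples)
```

    The actual study resolves the concentration field with high-resolution transport simulations in heterogeneous media rather than a fitted distribution, and tracks the full chemical mixture of degradation products; the loop above only conveys the shape of the probabilistic post-processing.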

    Mining heterogeneous information graph for health status classification

    In the medical domain there exists a large volume of data from multiple sources, such as electronic health records, general health examination results, and surveys. These data contain useful information reflecting people’s health and provide great opportunities for studies to improve the quality of healthcare. However, how to mine these data effectively and efficiently remains a critical challenge. In this paper, we propose an innovative classification model for knowledge discovery from patients’ personal health repositories. Based on analytics of the massive data in the National Health and Nutrition Examination Survey, the study builds a classification model to classify patients’ health status and reveal the specific disease potentially suffered by a patient. This paper makes significant contributions to the advancement of knowledge in data mining with an innovative classification model specifically crafted for domain-based data. Moreover, this research contributes to the healthcare community by providing a deep understanding of people’s health through access to the patterns in various observations.
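    Health status classification over survey-style numeric features can be sketched with the simplest possible baseline, a nearest-centroid classifier. The features (glucose, BMI), data points and labels below are fabricated for illustration; the paper's actual model is graph-based, not this classifier:

```python
# Toy nearest-centroid classifier over NHANES-style numeric features.
# Training data and labels are invented for the sketch.

def centroid(rows: list[tuple[float, float]]) -> tuple[float, ...]:
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(x, centroids: dict[str, tuple[float, ...]]) -> str:
    """Assign x to the class with the nearest centroid (squared distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

# Hypothetical (glucose mg/dL, BMI) records per class.
train = {
    "low_risk":  [(90.0, 22.0), (95.0, 24.0), (85.0, 21.0)],
    "high_risk": [(160.0, 31.0), (150.0, 33.0), (170.0, 35.0)],
}
centroids = {label: centroid(rows) for label, rows in train.items()}
```

    A baseline of this kind is what a graph-based model would be compared against in an empirical evaluation: it uses only the raw feature values and ignores any relationships among observations.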

    Landslide risk management through spatial analysis and stochastic prediction for territorial resilience evaluation

    Natural materials such as soils are influenced by many factors acting during their formative and evolutionary processes: atmospheric agents, erosion and transport phenomena, and sedimentation conditions that give soil properties a randomness that cannot be eliminated even by sophisticated survey techniques and technologies. This character is reflected not only in the spatial variability of properties, which differ from point to point, but also in multivariate correlation as a function of reciprocal distance. The cognitive enrichment offered by the response of soils, together with their intrinsic spatial variability, increases the capacity to evaluate the contributing causes and potential effects in failure phenomena. Stability analysis of natural slopes is well suited to stochastic treatment of the uncertainty that characterizes landslide risk. In this study, such an approach has been applied through a back-analysis procedure to a slope located in Southern Italy that has been subject to repeated phenomena of hydrogeological instability (extending over several kilometres in recent years). The back-analysis was carried out by applying spatial analysis to the controlling factors and quantifying the hydrogeological hazard through unbiased estimators. Treating the natural phenomenon as a stochastic process characterized by mutually interacting spatial variables has made it possible to identify the most critical areas, lending reliability to the scenarios and improving their forecasting content. Moreover, the phenomenological characterization allows risk levels to be optimized across the wide territory involved, supporting the decision-making process for intervention priorities as well as the effective allocation of available resources in social, environmental and economic contexts.
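    A common way to make slope stability stochastic is to propagate random soil parameters through a deterministic stability model; the textbook infinite-slope factor of safety serves as a minimal example. The slope geometry, soil parameter distributions and the dry-slope simplification below are assumptions for the sketch and do not come from the study:

```python
# Illustrative Monte Carlo treatment of slope-stability uncertainty using
# the infinite-slope factor of safety. All parameter values are assumed.

import math
import random

random.seed(1)

BETA = math.radians(30.0)   # slope angle
GAMMA, Z = 19.0, 3.0        # soil unit weight kN/m^3, failure-surface depth m

def factor_of_safety(c_kPa: float, phi_deg: float) -> float:
    """Infinite-slope FS for a dry slope (pore pressure neglected)."""
    phi = math.radians(phi_deg)
    driving = GAMMA * Z * math.sin(BETA) * math.cos(BETA)           # shear stress
    resisting = c_kPa + GAMMA * Z * math.cos(BETA) ** 2 * math.tan(phi)
    return resisting / driving

# Sample cohesion and friction angle, estimate probability of FS < 1.
fs = [factor_of_safety(random.gauss(5.0, 2.0), random.gauss(25.0, 3.0))
      for _ in range(10_000)]
p_failure = sum(f < 1.0 for f in fs) / len(fs)
```

    With zero cohesion the FS collapses to tan(phi)/tan(beta), a useful sanity check. A spatially explicit analysis like the one in the study would additionally correlate the sampled parameters over distance rather than draw them independently at each location.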

    Design Challenges for GDPR RegTech

    The Accountability Principle of the GDPR requires that an organisation be able to demonstrate compliance with the regulation. A survey of GDPR compliance software solutions shows significant gaps in their ability to demonstrate compliance. In contrast, RegTech has recently brought great success to financial compliance, resulting in reduced risk, cost savings and enhanced financial regulatory compliance. The survey shows that many GDPR solutions lack interoperability features such as standard APIs, metadata or reports, and that they are not supported by published methodologies or evidence of their validity or even utility. A proof-of-concept prototype built around a regulator-based self-assessment checklist was explored to establish whether RegTech best practice could improve the demonstration of GDPR compliance. The application of a RegTech approach provides opportunities for demonstrable and validated GDPR compliance, in addition to the risk reductions and cost savings that RegTech can deliver. This paper demonstrates that a RegTech approach to GDPR compliance can facilitate an organisation meeting its accountability obligations.
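    A regulator-style self-assessment checklist lends itself to a machine-readable representation whose answers can be scored and whose gaps can be reported, which is the interoperability step the abstract says many solutions lack. The checklist items and scoring rule below are invented for illustration and are not the paper's prototype:

```python
# Hypothetical machine-readable GDPR self-assessment checklist with a
# simple scoring/reporting function. Items and rule are illustrative.

CHECKLIST = {
    "records_of_processing_maintained": True,
    "dpo_appointed": True,
    "breach_procedure_documented": False,
    "dpia_for_high_risk_processing": False,
}

def compliance_report(answers: dict[str, bool]) -> dict:
    """Summarise checklist answers as a percentage score plus open gaps."""
    gaps = [item for item, satisfied in answers.items() if not satisfied]
    return {
        "score_pct": 100.0 * (len(answers) - len(gaps)) / len(answers),
        "gaps": gaps,
    }
```

    Emitting the report as a standard artifact (e.g. JSON over an agreed schema) is what would let different compliance tools exchange and audit the same evidence.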