
    BCAS: A Web-enabled and GIS-based Decision Support System for the Diagnosis and Treatment of Breast Cancer

    For decades, geographical variations in cancer rates have been observed, but the precise determinants of such geographic differences in breast cancer development remain unclear. Various statistical models have been proposed. Applying these models, however, requires that data be assembled from a variety of sources, converted into the models’ parameters, and delivered effectively to researchers and policy makers. A web-enabled and GIS-based system can provide the needed functionality. This article gives an overview of the conceptual web-enabled and GIS-based system (BCAS), illustrates the system’s use in diagnosing and treating breast cancer, and examines the potential benefits and implications for breast cancer research and practice.

    Promoting Public Health and Safety: A Predictive Modeling Software Analysis on Perceived Road Fatality Contributory Factors

    An extensive literature search was conducted to computationally analyze the relationship between key perceived road-fatality factors and public health impacts, in terms of mortality and morbidity. Heterogeneous sources of data on road fatalities from 1970-2005, together with interview-questionnaire data on European road drivers’ perceptions, were used. Computational analysis was performed on these data using the Multilayer Perceptron model within the DTREG predictive modeling software. Driver factors had the highest relative significance: drivers played a significant role as causative agents of road accidents. A good degree of correlation was also observed when the results were compared with those obtained by previous researchers. Sweden, the UK, Finland, Denmark, Germany, France, the Netherlands, and Austria, where road safety targets were set and EU targets adopted, experienced a faster and sharper reduction in road fatalities. Belgium, Ireland, Italy, Greece and Portugal, however, experienced slower and smaller reductions in road fatalities. Spain experienced an increase in road fatalities, possibly due to fatality-enhancing factors. Estonia, Slovenia, Cyprus, Hungary, the Czech Republic, Slovakia and Poland experienced a fluctuating but decreasing trend. Enforcement of road safety principles and regulations is needed to decrease the incidence of fatal accidents. Adoption of the EU target of a 50% reduction in fatalities in all countries will help promote public health and safety.
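The analysis pipeline the abstract describes — factor scores in, a multilayer perceptron fitted to a fatality outcome, factors then ranked by significance — can be sketched as follows. This is an illustrative stand-in, not the authors' model: DTREG is a commercial package, so a tiny NumPy MLP is trained on synthetic data, and the factor names and target are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "perceived contributory factor" scores per record (hypothetical
# factors, e.g. driver, vehicle, road, weather, enforcement) and a synthetic
# fatality-risk target -- NOT the paper's 1970-2005 data.
X = rng.normal(size=(200, 5))
y = (X @ np.array([0.8, 0.3, -0.2, 0.5, 0.1]))[:, None]

def train_mlp(X, y, hidden=8, lr=0.05, epochs=500):
    """Train a one-hidden-layer perceptron by full-batch gradient descent on MSE."""
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        pred = H @ W2 + b2
        err = pred - y
        # Backpropagate the mean-squared-error gradients
        gW2 = H.T @ err / n; gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)    # tanh derivative
        gW1 = X.T @ dH / n; gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

model = train_mlp(X, y)
mse = float(np.mean((model(X) - y) ** 2))  # fit should beat a mean predictor
```

On such a fitted model, relative factor significance could then be probed by, for example, perturbing one input column at a time and measuring the change in prediction error.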

    CFBM - A Framework for Data Driven Approach in Agent-Based Modeling and Simulation

    Recently, there has been a shift from a modeling-driven approach to a data-driven approach in Agent-Based Modeling and Simulation (ABMS). This trend aims at feeding more of the data available from observation systems into simulation models [1, 2]. In a data-driven approach, the empirical data collected from the target system are used not only in the design of the simulation models but also in the initialization and the evaluation of the simulation output. That raises the question of how to manage empirical data and simulation data, and how to compare them, in such an agent-based simulation platform. In this paper, we first introduce a logical framework for the data-driven approach in agent-based modeling and simulation, called CFBM (Combination Framework of Business intelligence and Multi-agent based platform), which combines a Business Intelligence solution with a multi-agent-based platform. Secondly, we demonstrate the application of CFBM via the development of Brown Plant Hopper Surveillance Models (BSMs), where CFBM is used not only to manage and integrate all the empirical data collected from the target system and the data produced by the simulation model, but also to initialize and validate the models. The successful development of CFBM both remedies the limitations of agent-based modeling and simulation with regard to data management and supports the development of complex simulation systems with large amounts of input and output data in a data-driven approach.
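The core data-management idea — empirical and simulated series stored under a common scheme so they can be compared for model validation — can be sketched minimally. This is an assumption-laden simplification: the real CFBM couples a BI data warehouse with an agent-based platform, whereas here a dict stands in for the warehouse, RMSE for the validation step, and the indicator name and values are invented.

```python
import math

class SimulationStore:
    """Hold empirical and simulated series under a common (source, indicator) key."""
    def __init__(self):
        self.series = {}

    def load(self, source, indicator, values):
        self.series[(source, indicator)] = list(values)

    def rmse(self, indicator):
        """Root-mean-square error between empirical and simulated values."""
        emp = self.series[("empirical", indicator)]
        sim = self.series[("simulated", indicator)]
        return math.sqrt(sum((e - s) ** 2 for e, s in zip(emp, sim)) / len(emp))

store = SimulationStore()
# Hypothetical weekly Brown Plant Hopper density observations vs. model output
store.load("empirical", "bph_density", [10, 14, 21, 30, 26])
store.load("simulated", "bph_density", [11, 15, 19, 28, 27])
error = store.rmse("bph_density")  # small error supports model validity
```

In a data-driven workflow the same store would also serve initialization, e.g. seeding agent populations from the first empirical observation.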

    Common polygenic risk for autism spectrum disorder (ASD) is associated with cognitive ability in the general population

    Acknowledgements Generation Scotland has received core funding from the Chief Scientist Office of the Scottish Government Health Directorates CZD/16/6 and the Scottish Funding Council HR03006. We are grateful to all the families who took part, the general practitioners and the Scottish School of Primary Care for their help in recruiting them and the whole Generation Scotland team, which includes interviewers, computer and laboratory technicians, clerical workers, research scientists, volunteers, managers, receptionists, health-care assistants and nurses. We acknowledge with gratitude the financial support received for this work from the Dr Mortimer and Theresa Sackler Foundation. For the Lothian Birth Cohorts (LBC1921 and LBC1936), we thank Paul Redmond for database management assistance; Alan Gow, Martha Whiteman, Alison Pattie, Michelle Taylor, Janie Corley, Caroline Brett and Caroline Cameron for data collection and data entry; nurses and staff at the Wellcome Trust Clinical Research Facility, where blood extraction and genotyping was performed; staff at the Lothian Health Board; and the staff at the SCRE Centre, University of Glasgow. The research was supported by a program grant from Age UK (Disconnected Mind) and by grants from the Biotechnology and Biological Sciences Research Council (BBSRC). The work was undertaken by The University of Edinburgh Centre for Cognitive Ageing and Cognitive Epidemiology, part of the cross council Lifelong Health and Wellbeing Initiative (MR/K026992/1). Funding from the Medical Research Council (MRC) and BBSRC is gratefully acknowledged. DJM is an NRS Career Research Fellow funded by the CSO. BATS were funded by the Australian Research Council (A79600334, A79906588, A79801419, DP0212016, DP0664638, and DP1093900) and the National Health and Medical Research Council (389875) Australia. MKL is supported by a Perpetual Foundation Wilson Fellowship. 
SEM is supported by a Future Fellowship (FT110100548) from the Australian Research Council. GWM is supported by a National Health and Medical Research Council (NHMRC), Australia, Fellowship (619667). We thank the twins and siblings for their participation, Marlene Grace, Ann Eldridge and Natalie Garden for cognitive assessments, Kerrie McAloney, Daniel Park, David Smyth and Harry Beeby for research support, Anjali Henders and staff in the Molecular Epidemiology Laboratory for DNA sample processing and preparation, and Scott Gordon for quality control and management of the genotypes. This work is supported by a Strategic Award from the Wellcome Trust, reference 104036/Z/14/Z.

    Integrated smoothed location model and data reduction approaches for multi variables classification

    The Smoothed Location Model is a classification rule that deals with a mixture of continuous and binary variables simultaneously. This rule discriminates between groups in a parametric form using the conditional distribution of the continuous variables given each pattern of the binary variables. To conduct a practical classification analysis, the objects must first be sorted into the cells of a multinomial table generated from the binary variables; the parameters in each cell are then estimated from the sorted objects. In many situations, however, the estimated parameters are poor when the number of binary variables is large relative to the sample size. Many binary variables create a multinomial table with many empty cells, leading to a severe sparsity problem and, finally, exceedingly poor performance of the constructed rule; in the worst case, the rule cannot be constructed at all. To overcome these shortcomings, this study proposes new strategies to extract adequate variables that contribute to optimum performance of the rule. Combinations of two extraction techniques are introduced, namely 2PCA and PCA+MCA with new cutpoints for the eigenvalue and the total variance explained, to determine adequate extracted variables that lead to a minimum misclassification rate. The outcomes of these extraction techniques are used to construct smoothed location models, producing two new classification approaches called 2PCALM and 2DLM. Numerical evidence from simulation studies demonstrates no significant difference in misclassification rate between the extraction techniques for normal and non-normal data. Nevertheless, both proposed approaches are slightly affected by non-normal data and severely affected by highly overlapping groups. Investigations on some real data sets show that the two approaches are competitive with, and better than, other existing classification methods. The overall findings reveal that both proposed approaches can be considered improvements to the location model, and alternatives to other classification methods, particularly in handling mixed variables with many binary variables.
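The cell-sorting idea at the heart of the location model — group objects by their binary pattern, estimate continuous-variable parameters per cell, then classify within the matching cell — can be sketched in a deliberately stripped-down form. This is not the smoothed 2PCALM/2DLM rule: it uses plain per-cell means and nearest-mean assignment, all names are ours, and the data are toy values.

```python
import numpy as np

def fit_location_model(Xc, Xb, y):
    """Estimate the per (group, binary-pattern) cell mean of the continuous part."""
    cells = {}
    for xc, xb, g in zip(Xc, Xb, y):
        cells.setdefault((g, tuple(xb)), []).append(xc)
    return {k: np.mean(v, axis=0) for k, v in cells.items()}

def classify(means, xc, xb, groups=(0, 1)):
    """Assign to the group whose cell mean (for this binary pattern) is closest."""
    best, best_d = None, float("inf")
    for g in groups:
        m = means.get((g, tuple(xb)))
        if m is None:        # empty cell: the sparsity problem in the abstract
            continue
        d = float(np.sum((xc - m) ** 2))
        if d < best_d:
            best, best_d = g, d
    return best

# Toy data: 2 continuous variables, 1 binary variable, 2 groups
Xc = np.array([[0.0, 0.1], [0.2, -0.1], [2.0, 2.1], [2.2, 1.9]])
Xb = [[0], [0], [0], [0]]
y  = [0, 0, 1, 1]
means = fit_location_model(Xc, Xb, y)
pred = classify(means, np.array([2.1, 2.0]), [0])
```

With b binary variables the table has 2^b cells, which makes the sparsity problem concrete: the paper's extraction step (2PCA, PCA+MCA) exists precisely to keep that cell count manageable.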

    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery is focused on mining a set of process models where each model describes the behavior represented in the event log only partially, i.e., subsets of possible events are taken into account to create so-called local process models. Often such smaller models provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that is able to describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues in the case of logs with many activities (problems may already occur with more than 17 unique activities). In this paper, we explore three heuristics to discover subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets results in the largest improvement in execution time, with the discovered LPMs still being better than those obtained from randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log-entropy-based method, while for other data sets there is no speedup at all. Comment: paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI).
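The basic projection step all three heuristics share — restrict every trace to a candidate activity subset, then score the projected log — can be sketched roughly. The actual entropy heuristic in the paper is more involved; this sketch only shows projecting traces onto an activity subset and scoring the result with Shannon entropy, and the example log is invented.

```python
import math
from collections import Counter

def project(log, activities):
    """Keep only events whose activity is in the chosen subset."""
    keep = set(activities)
    return [[a for a in trace if a in keep] for trace in log]

def activity_entropy(log):
    """Shannon entropy (bits) of the activity frequency distribution."""
    counts = Counter(a for trace in log for a in trace)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy event log: each trace is a sequence of activity labels
log = [["a", "b", "c", "a"], ["a", "c", "b"], ["d", "a", "b"]]
proj = project(log, {"a", "b"})
h = activity_entropy(proj)  # a candidate score for ranking projection sets
```

LPM discovery would then run only on the most promising projected logs instead of on all activity subsets, which is where the reported speedup comes from.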