    Longitudinal Study on Sustained Attention to Response Task (SART): Clustering Approach for Mobility and Cognitive Decline

    The Sustained Attention to Response Task (SART) is a computer-based go/no-go task used to measure neurocognitive function in older adults. However, simplified average features of this complex dataset lose primary information and fail to express associations between test performance and clinically meaningful outcomes. Here, we combine a novel method for visualising individual-trial (raw) information obtained from the SART test in a large population-based study of ageing in Ireland with an automatic clustering technique. We employed a thresholding method, based on the number of mistakes per individual trial, to identify poorer SART performances, and a fuzzy clustering algorithm to partition the dataset into 3 subgroups based on the evolution of SART performance over 4 years. Raw SART data were available for 3468 participants aged 50 years and over at baseline. The previously reported SART visualisation-derived feature 'bad performance', indicating the number of SART trials with at least 4 mistakes, and its evolution over time, combined with the fuzzy c-means (FCM) algorithm, identified 3 clusters corresponding to 3 degrees of physiological dysregulation. The largest cluster (94% of the cohort) comprised healthy participants; a smaller cluster (5% of the cohort) comprised participants who showed improvement in cognitive and psychological status; and the smallest cluster (1% of the cohort) comprised participants whose mobility and cognitive functions declined dramatically over 4 years. In a cohort of relatively high-functioning community-dwelling adults, we were thus able to identify a very small group of participants who showed clinically significant decline. This smallest subset manifested not only mobility deterioration but also cognitive decline, the latter usually being hard to detect in population-based studies. The employed techniques could identify at-risk participants with more specificity than current methods, and help clinicians better identify and manage the small proportion of community-dwelling older adults who are at significant risk of functional decline and loss of independence.
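The fuzzy c-means step described above can be sketched in a few lines. This is a minimal, generic FCM implementation, not the authors' code; the 1-D feature (change in 'bad performance' count per participant) and all data values are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns membership matrix U and centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)               # each row sums to 1
    for _ in range(n_iter):
        W = U ** m                                  # fuzzified memberships
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)                 # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centroids

# Hypothetical 1-D feature per participant: change in the 'bad performance'
# count (trials with >= 4 mistakes) between baseline and 4-year follow-up.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.2], [-3.0]])
U, centroids = fuzzy_c_means(X, n_clusters=3)
labels = U.argmax(axis=1)                           # hard assignment per person
```

Unlike hard k-means, each participant receives a degree of membership in every cluster, which suits borderline SART performances; the `argmax` at the end is only one way to read off a final grouping.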

    Industry 4.0 And Short-Term Outlook for AEC Industry Workforce

    Technology is transforming our society to a significant degree. This transformation has been described as Industry 4.0 and encompasses machine learning, computerization, automation, artificial intelligence, and robotics. Industry 4.0 is currently impacting the United States' workplace and is projected to continue reshaping our society over the next twenty years or so. Looking specifically at the AEC industry, this paper examines how the AEC industry workplace could be impacted by Industry 4.0 over the next several years. The hypothesis that jobs more at risk of automation should see low or negative growth and lower wages over the next several years was tested by using U.S. Bureau of Labor Statistics (BLS) occupational wage data and growth projections to create an opportunity value for each occupation, and then evaluating the relationship between the opportunity value and the probability of automation. A statistically significant relationship was found between the two variables. The hypothesis that certain skills are particularly associated with high-growth/high-wage jobs versus low-growth/low-wage jobs was tested by scraping important skills/qualities from the individual occupational webpages hosted by the BLS, and then comparing approximately the top 80% of skills scraped between the two groups. Certain skills/qualities were found to be particularly associated with each group. Finally, the occupations associated with the AEC industry were compared with the findings from the first two hypotheses. The findings suggest that the AEC industry is potentially more susceptible to Industry 4.0 than other industries. This research is significant because no prior research into how the AEC industry workplace will be impacted by Industry 4.0 over the next several years was found in the background review, and it has implications for potential career choices, skill requirements, and areas of research and development. Recommendations for future work include utilizing new data sources, Monte Carlo simulations, cohort analysis, and cluster analysis to make more specific forecasts of Industry 4.0's impact on the AEC industry.

    Evaluation of Intelligent Medical Systems

    This thesis presents novel, robust, analytic and algorithmic methods for calculating Bayesian posterior intervals of receiver operating characteristic (ROC) curves and confusion matrices used for the evaluation of intelligent medical systems tested with small amounts of data. Intelligent medical systems are potentially important in encapsulating rare and valuable medical expertise and making it more widely available. The evaluation of intelligent medical systems must ensure that such systems are safe and cost-effective. To ensure systems are safe and perform at expert level, they must be tested against human experts. Human experts are rare and busy, which often severely restricts the number of test cases available for comparison. The performance of a human or machine expert can be represented objectively by ROC curves or confusion matrices. ROC curves and confusion matrices are complex representations, and it is sometimes convenient to summarise them as a single value: in the case of ROC curves, the Area Under the Curve (AUC); for confusion matrices, the kappa or weighted kappa statistic. While there is extensive literature on the statistics of ROC curves and confusion matrices, it is not applicable to the measurement of intelligent systems tested with small data samples, particularly when the AUC or kappa statistic is high. A fundamental Bayesian study has been carried out, and new methods devised, to provide better statistical measures for ROC curves and confusion matrices at low sample sizes. They enable exact Bayesian posterior intervals to be produced for: (1) the individual points on a ROC curve; (2) comparison between matching points on two uncorrelated curves; (3) the AUC of a ROC curve, using both parametric and nonparametric assumptions; (4) the parameters of a parametric ROC curve; and (5) the weight of a weighted confusion matrix. These new methods have been implemented in software to provide a powerful and accurate tool for developers and evaluators of intelligent medical systems in particular, and for the much wider audience using ROC curves and confusion matrices in general. This should enhance the ability to prove intelligent medical systems safe and effective, and should lead to their widespread deployment. The mathematical and computational methods developed in this thesis should also provide the basis for future research into the determination of posterior intervals for other statistics at small sample sizes.
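Item (1), a posterior interval for an individual ROC point, can be illustrated with the textbook Bayesian treatment of a binomial proportion (sensitivity at a fixed threshold). This is a generic sketch under a uniform Beta(1,1) prior, not the thesis's exact method, and the 18-of-20 example is invented:

```python
import numpy as np

def beta_posterior_interval(successes, trials, cred=0.95, grid=100001):
    """Equal-tailed credible interval for a binomial proportion under a
    uniform Beta(1,1) prior, computed on a numerical grid (no SciPy needed)."""
    a, b = successes + 1, trials - successes + 1     # Beta posterior parameters
    p = np.linspace(0.0, 1.0, grid)
    # Unnormalised log Beta(a, b) density; clip to avoid log(0) at the ends.
    logpdf = ((a - 1) * np.log(np.clip(p, 1e-300, None))
              + (b - 1) * np.log(np.clip(1 - p, 1e-300, None)))
    pdf = np.exp(logpdf - logpdf.max())
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                                   # normalise to a CDF
    lo = p[np.searchsorted(cdf, (1 - cred) / 2)]
    hi = p[np.searchsorted(cdf, 1 - (1 - cred) / 2)]
    return lo, hi

# Hypothetical: the system flags 18 of 20 diseased test cases correctly.
lo, hi = beta_posterior_interval(18, 20)
```

Even at an observed sensitivity of 0.9, twenty cases leave an interval spanning roughly a quarter of the unit range, which is exactly the small-sample regime the thesis targets.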

    Advanced forecasting methods for renewable generation and loads in modern power systems

    This PhD thesis deals with the problem of forecasting in power systems, a wide topic that today covers many needs and is universally acknowledged to require further deep research effort. After a brief discussion of the classification of forecasting systems and of the methods currently available in the literature for forecasting electrical variables, stressing the pros and cons of each approach, the thesis provides four contributions to the state of the art on forecasting in power systems where the literature is somewhat weak. The first contribution is a Bayesian probabilistic method to forecast photovoltaic (PV) power in short-term scenarios. Parameters of the predictive distributions are estimated by means of an exogenous linear regression model and through Bayesian inference on past observations. The second contribution is a probabilistic competitive ensemble method, again to forecast PV power in short-term scenarios. The idea is to improve the quality of forecasts obtained by individual probabilistic predictors by combining them in a probabilistic competitive approach based on a linear pooling of predictive cumulative distribution functions. A multi-objective optimization method is proposed in order to guarantee high sharpness and reliability of the predictive distribution. The third contribution is the development of a deterministic industrial load forecasting method suitable for short-term scenarios, at both aggregated and single-load levels, and for both active and reactive power. The method is based on multiple linear regression and support vector regression models, selected by means of 10-fold cross-validation or lasso analysis. The fourth contribution provides advanced PDFs for the statistical characterization of Extreme Wind Speeds (EWS). In particular, the proposed PDFs are an Inverse Burr distribution and a mixture of Inverse Burr and Inverse Weibull distributions. The mixture increases the versatility of the tool, although it also increases the number of parameters to be estimated. This complicates the parameter estimation process, since traditional techniques such as maximum likelihood estimation suffer from convergence problems; therefore, an expectation-maximization procedure is specifically developed for parameter estimation. All contributions presented in the thesis are tested on actual data and compared to state-of-the-art benchmarks to assess the suitability of each proposal.
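The linear pooling of predictive CDFs in the second contribution can be sketched directly. The two Gaussian predictors, their parameters, and the fixed weights below are illustrative assumptions; the thesis selects the weights via multi-objective optimization rather than fixing them by hand:

```python
import math
import numpy as np

# Hypothetical: two probabilistic PV-power predictors issue Gaussian predictive
# CDFs on a common power grid; combine them by linear pooling.
power = np.linspace(0.0, 5.0, 501)            # kW grid (illustrative)

def normal_cdf(x, mu, sigma):
    """Gaussian CDF evaluated pointwise via the error function."""
    return np.array([0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
                     for v in x])

F1 = normal_cdf(power, mu=2.0, sigma=0.4)     # predictor 1 (assumed parameters)
F2 = normal_cdf(power, mu=2.6, sigma=0.6)     # predictor 2 (assumed parameters)

weights = np.array([0.7, 0.3])                # pooling weights, must sum to 1
F_pool = weights[0] * F1 + weights[1] * F2    # pooled predictive CDF
```

Because the weights are non-negative and sum to one, the pooled curve is itself a valid CDF; sharpness and reliability are then assessed on `F_pool` rather than on either component alone.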

    User-controlled cyber-security using automated key generation

    Traditionally, several different methods are fully capable of providing an adequate degree of security against the threats and attacks that exist for revealing keys. Though almost all traditional methods give a good level of immunity to any possible breach of security keys, the biggest issue with these methods is the dependency on third-party applications, which makes them unacceptable for high-security applications. For high-security applications, it is more secure for the key generation process to be in the hands of the end users rather than a third party. Giving access to third parties can also make high-security applications more vulnerable to data theft, security breaches, or even a loss of integrity. In this research, the evolutionary computing tool Eureqa is used to generate encryption keys by modelling pseudo-random input data. Previous approaches using this tool have required a calculation time too long for practical use, and addressing this drawback is the main focus of the research. The work proposes a number of new approaches to the generation of secret keys for the encryption and decryption of data files, and they are compared in their ability to operate securely, using a range of statistical tests, and in their ability to reduce calculation time, using realistic practical assessments. Common performance tests include throughput, chi-square, histogram, encryption and decryption time, key sensitivity, and entropy analysis. From the results of the statistical tests, it can be concluded that the proposed data encryption and decryption algorithms are both reliable and secure, which eliminates the dependency on third-party applications for security keys. It also takes users less time to generate highly secure keys compared to previously known techniques. The keys generated via Eureqa also have great potential to be adapted to data communication applications requiring high security.
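Two of the statistical tests named above, entropy and chi-square analysis, are standard and easy to sketch. The snippet below applies them to a stand-in random byte string, since the Eureqa-derived keys themselves are not available here:

```python
import math
import secrets
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 is the ideal for a random key."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def chi_square_uniform(data: bytes) -> float:
    """Chi-square statistic against a uniform byte distribution.
    For truly uniform data the expected value is ~255 (degrees of freedom)."""
    expected = len(data) / 256
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

key = secrets.token_bytes(4096)   # stand-in for an Eureqa-derived key stream
h = byte_entropy(key)             # should approach 8 bits/byte
chi2 = chi_square_uniform(key)    # should be near 255 for key-like data
```

A key whose entropy falls well below 8 bits/byte, or whose chi-square statistic is far from 255, would be rejected as insufficiently random; key-sensitivity and throughput tests require the cipher itself and are not shown.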

    Determining the Points of Change in Time Series of Polarimetric SAR Data
