865 research outputs found

    Augmenting Adaptation with Retrospective Model Correction for Non-Stationary Regression Problems

    Existing adaptive predictive methods often use multiple adaptive mechanisms as part of their coping strategy in non-stationary environments. We address a scenario in which selective deployment of these adaptive mechanisms is possible. In this case, deploying each adaptive mechanism results in a different candidate model, and only one of these candidates is chosen to make predictions on the subsequent data. After observing the error of each candidate, it is possible to revert the current model to the one which had the least error. We call this strategy retrospective model correction. In this work we aim to investigate the benefits of such an approach. As a vehicle for the investigation we use an adaptive ensemble method for regression in batch learning mode which employs several adaptive mechanisms to react to changes in the data. Using real-world data from the process industry, we show empirically that retrospective model correction is indeed beneficial for predictive accuracy, especially for the weaker adaptive mechanisms.
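    A minimal sketch of the retrospective correction loop described above, assuming scikit-learn-style models with fit/predict and treating each adaptive mechanism as a callable that returns an adapted candidate model; the interface and batch protocol are illustrative assumptions, not the paper's actual API:

```python
from copy import deepcopy

from sklearn.metrics import mean_squared_error


def retrospective_correction(base_model, mechanisms, batches):
    """Batch-learning loop with retrospective model correction.

    mechanisms: list of callables (model, X, y) -> adapted model copy
    batches:    iterable of (X, y) batches arriving over time
    Assumes base_model is already fitted on some initial data.
    """
    model = base_model
    candidates = [base_model]        # candidates produced at the previous step
    for X, y in batches:
        # Retrospection: score every candidate on the newly observed batch
        # and revert to the one that would have predicted it best.
        errors = [mean_squared_error(y, c.predict(X)) for c in candidates]
        model = candidates[errors.index(min(errors))]
        # Deploy each adaptive mechanism on the corrected model to produce
        # the candidate set for the next batch.
        candidates = [am(deepcopy(model), X, y) for am in mechanisms]
    return model
```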

    Automated Adaptation Strategies for Stream Learning

    Automation of machine learning model development is increasingly becoming an established research area. While automated model selection and automated data pre-processing have been studied in depth, there is, however, a gap concerning automated model adaptation strategies when multiple strategies are available. Manually developing an adaptation strategy can be time-consuming and costly. In this paper we address this issue by proposing flexible adaptive mechanism deployment for the automated development of adaptation strategies. Experimental results from using the proposed strategies with five adaptive algorithms on 36 datasets confirm their viability: they achieve performance better than or comparable to custom adaptation strategies and to the repeated deployment of any single adaptive mechanism.

    Multiple adaptive mechanisms for predictive models on streaming data.

    Making predictions on non-stationary streaming data remains a challenge in many application areas. Changes in the data may cause a decrease in predictive accuracy, which in a streaming setting requires a prompt response. In recent years many adaptive predictive models have been proposed to deal with these issues. Most of these methods use more than one adaptive mechanism, deploying all of them at the same time, at regular intervals, or in some other fixed manner. However, this manner is often determined in an ad-hoc way, as the effects of adaptive mechanisms are largely unexplored. This thesis therefore investigates different aspects of adaptation with multiple adaptive mechanisms, with the aim of increasing knowledge in the area and proposing heuristic approaches for more accurate adaptive predictive models. This is done by systematising and formalising the “adaptive mechanism” notion, proposing a categorisation of adaptive mechanisms and a metric to measure their usefulness, comparing the results of deploying different orders of adaptive mechanisms during the run of the predictive method, and suggesting techniques for selecting the most appropriate adaptive mechanisms. The literature review suggests that during the prediction process, adaptive mechanisms are selected for deployment in a certain order which is usually fixed beforehand, at the design time of the algorithm. For this reason, it was investigated whether changing the selection method for the adaptive mechanisms significantly affects predictive accuracy and whether there are certain deployment orders which provide better results than others. Commonly used adaptive mechanism selection methods are then examined and new methods are proposed. A novel regression ensemble method which uses several common adaptive mechanisms was developed as a vehicle for the experimentation. The predictive accuracy and behaviour of the adaptive mechanisms while predicting on different real-world datasets from the process industry were analysed. Empirical results suggest that different selections of adaptive mechanisms result in significantly different performance. It was found that while some adaptive mechanisms adapt the predictive model better than others, none is the best at all times. Finally, flexible orders of adaptive mechanisms generated using the proposed selection techniques often result in significantly more accurate models than the fixed orders commonly used in the literature.

    Multiple Adaptive Mechanisms for Data-driven Soft Sensors.

    Recent data-driven soft sensors often use multiple adaptive mechanisms to cope with non-stationary environments. These mechanisms are usually deployed in a prescribed order which does not change. In this work we use real-world data from the process industry to compare deploying adaptive mechanisms in a fixed manner with deploying them in a flexible way, which results in varying adaptation sequences. We demonstrate that flexible deployment of the available adaptive mechanisms, coupled with techniques such as cross-validatory selection and retrospective model correction, can benefit predictive accuracy over time. As a vehicle for this study, we use a soft sensor for batch processes based on an adaptive ensemble method which employs several adaptive mechanisms to react to changes in the data.
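    A hedged sketch of the cross-validatory selection step mentioned above: each available adaptive mechanism is tried on one part of the newly arrived batch and scored on the held-out remainder, and the best-scoring candidate is deployed. The 70/30 split and the mechanism interface are illustrative choices, not the paper's actual protocol:

```python
from copy import deepcopy

from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split


def select_mechanism(model, mechanisms, X_new, y_new, seed=0):
    """Cross-validatory selection: adapt on part of the new batch,
    validate on the held-out rest, and deploy the best candidate."""
    X_fit, X_val, y_fit, y_val = train_test_split(
        X_new, y_new, test_size=0.3, random_state=seed)
    scored = []
    for am in mechanisms:
        candidate = am(deepcopy(model), X_fit, y_fit)   # adapt on one part
        err = mean_squared_error(y_val, candidate.predict(X_val))
        scored.append((err, candidate))
    return min(scored, key=lambda t: t[0])[1]           # least validation error
```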

    Microeconomic impact of remittances on household welfare: Evidences from Bangladesh


    Uncertainty quantification for probabilistic machine learning in earth observation using conformal prediction

    Unreliable predictions can occur when using artificial intelligence (AI) systems, with negative consequences for downstream applications, particularly when they are employed for decision-making. Conformal prediction provides a model-agnostic framework for uncertainty quantification that can be applied to any dataset, irrespective of its distribution, post hoc. In contrast to other pixel-level uncertainty quantification methods, conformal prediction operates without requiring access to the underlying model and training dataset, while offering statistically valid and informative prediction regions and maintaining computational efficiency. In response to the increased need to report uncertainty alongside point predictions, we bring attention to the promise of conformal prediction within the domain of Earth Observation (EO) applications. To accomplish this, we assess the current state of uncertainty quantification in the EO domain and find that only 20% of the reviewed Google Earth Engine (GEE) datasets incorporate a degree of uncertainty information, with unreliable methods prevalent. Next, we introduce modules that seamlessly integrate into existing GEE predictive modelling workflows and demonstrate the application of these tools on datasets spanning local to global scales, including the Dynamic World and Global Ecosystem Dynamics Investigation (GEDI) datasets. These case studies encompass regression and classification tasks, featuring both traditional and deep learning-based workflows. Subsequently, we discuss the opportunities arising from the use of conformal prediction in EO. We anticipate that the increased availability of easy-to-use implementations of conformal predictors, such as those provided here, will drive wider adoption of rigorous uncertainty quantification in EO, thereby enhancing the reliability of uses such as operational monitoring and decision making.
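    A minimal sketch of split (inductive) conformal prediction for regression, the model-agnostic, post-hoc construction described above. The absolute-residual nonconformity score is the textbook choice, and the function names here are our own, not the paper's modules:

```python
import numpy as np


def conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals for any regression model.

    predict: the fitted model's prediction function (model-agnostic,
    applied post hoc). Returns lower/upper bounds with ~(1 - alpha)
    marginal coverage, assuming calibration and test data are exchangeable.
    """
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = np.abs(y_cal - predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected empirical quantile of the scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    y_hat = predict(X_test)
    return y_hat - q, y_hat + q
```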

    Long-term-robust adaptation strategies for reservoir operation considering magnitude and timing of climate change: application to Diyala River Basin in Iraq

    Vulnerability assessment of climate change impacts is of paramount importance for reservoir operation to achieve the goals of water resources management. This requires accurate forcing and basin data to build a valid hydrology model, and assessment of the sensitivity of model results to the forcing data and the uncertainty of model parameters. The first objective of this study is to construct the model and identify its sensitivity to the model parameters and the uncertainty of the forcing data. The second objective is to develop a Parametric Regional Weather Generator (PR-WG) for use in areas with limited data availability that mimics observed characteristics. The third objective is to propose and assess a decision-making framework to evaluate pre-specified reservoir operation plans, determine the theoretical optimal plan, and identify the anticipated best timeframe for implementation by considering all possible climate scenarios. To construct the model, the Variable Infiltration Capacity (VIC) platform was selected to simulate the characteristics of the Diyala River Basin (DRB) in Iraq. Several methods were used to obtain the forcing data, and they were validated using the Kling–Gupta efficiency (KGE) metric. Variables considered include precipitation, temperature, and wind speed. Model sensitivity and uncertainty were examined with the Generalized Likelihood Uncertainty Estimation (GLUE) and Differential Evolution Adaptive Metropolis (DREAM) techniques. The proposed PR-WG was based on (1) a first-order, two-state Markov chain to simulate precipitation occurrences; (2) Wilks' technique to produce correlated weather variables at multiple sites with conservation of spatial, temporal, and cross correlations; and (3) the capability to produce a wide range of synthetic climate scenarios. A probabilistic decision-making framework under non-stationary hydroclimatic conditions was proposed with four stages: (1) climate exposure generation, (2) supply scenario calculations, (3) demand scenario calculations, and (4) multi-objective performance assessment. The framework incorporates a new metric, Maximum Allowable Time, to examine the timeframe for robust adaptations. Three pre-specified synthetic plans were examined to avoid undesirable long-term climate change impacts, while the theoretical optimal plan was identified by the Non-dominated Sorting Genetic Algorithm II. The multiplicative random cascade and Schaake Shuffle techniques were used to determine daily precipitation data, while a set of correction equations was developed to adjust the daily temperature and wind speed. The VIC model was most sensitive to the depth of the second soil layer, and the uncertainty intervals demonstrated the validity of the VIC model for generating reasonable forecasts. The daily VIC outputs were calibrated with an average KGE of 0.743, and they were free from non-normality, heteroscedasticity, and auto-correlation. Results of the PR-WG evaluation show that it exhibited high KGE values, preserved the statistical properties of the observed variables, and conserved the spatial, temporal, and cross correlations among the weather variables at all sites. Finally, risk assessment results show that the current operational rules are robust for flood protection but vulnerable in drought periods, implying that project managers should pay special attention to drought and adopt new technologies to counteract it. Precipitation changes were dominant in flood and drought management, while the effects of temperature and wind speed changes were significant during drought. The results demonstrated the framework's effectiveness in quantifying detrimental climate change effects in magnitude and timing, and its ability to provide a long-term guide (and timeframe) for averting negative impacts.
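    The precipitation-occurrence component of the weather generator lends itself to a short sketch: a first-order, two-state Markov chain needs only two transition probabilities. The parameter values below are illustrative, not calibrated to the Diyala River Basin:

```python
import numpy as np


def simulate_occurrence(p_wd, p_ww, n_days, rng=None):
    """First-order, two-state Markov chain for daily precipitation occurrence.

    p_wd: P(wet today | dry yesterday); p_ww: P(wet today | wet yesterday).
    """
    rng = rng or np.random.default_rng()
    wet = np.zeros(n_days, dtype=bool)          # day 0 starts dry
    for t in range(1, n_days):
        p = p_ww if wet[t - 1] else p_wd
        wet[t] = rng.random() < p
    return wet


# Hypothetical parameters for a basin with persistent wet spells.
occurrence = simulate_occurrence(p_wd=0.15, p_ww=0.65, n_days=365)
```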

    Potential Output: Measurement Methods, "New" Economy Influences and Scenarios for 2001-2010 - A comparison of the EU-15 and the US.

    This paper presents an overview of the various methodologies for estimating potential output at the macroeconomic level. Emphasis is placed on the production function approach, which is used together with the univariate statistical HP filter method to produce potential output estimates for the US and the EU15 economies (as well as the individual EU Member States) for 1966-2002. The paper also assesses the role of "new" economy influences on potential growth and provides estimates of the likely contribution to past and future output capacity from this source. Finally, potential growth scenarios are given for the EU15 as a whole and the US for the period 2001-2010, with the central scenario pointing to annual average growth rates of 2 ¾% for the EU15 and 3% for the US over the next ten years.

    Keywords: macroeconomic analysis, potential growth, potential output
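    The HP-filter leg of such an estimation can be sketched in a few lines with statsmodels. The series and the smoothing parameter lamb=100 (a common convention for annual data) are illustrative assumptions, as the abstract does not state the paper's exact settings:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

# Hypothetical annual log-GDP series standing in for the paper's data.
rng = np.random.default_rng(0)
log_gdp = np.log(np.linspace(100.0, 250.0, 37)) + 0.02 * rng.standard_normal(37)

# lamb=100 is a common convention for annual data (1600 for quarterly);
# the paper's actual smoothing choice is not given in the abstract.
cycle, trend = hpfilter(log_gdp, lamb=100)
potential = trend      # HP-filter estimate of (log) potential output
output_gap = cycle     # deviation of actual output from its potential path
```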

    Financialisation, Profitability and the Rise of Household Debt in the United Kingdom, 1971 - 2015

    Over the last fifty years, the trajectory of household debt has undergone several transformations in some jurisdictions, especially in advanced capitalist countries such as the United States and the United Kingdom, where household debt has risen to historically high levels. This phenomenon appeared in tandem with the end of Fordist salaries and the unresolved crisis of over-accumulation, which led to massive losses of profitability in manufacturing. The central argument put forth in this thesis is that the rise of household debt has not been a spontaneous balancing act of the markets, but a concrete political strategy of capital to restore the collapsing rate of profit in the real economy by way of financial speculation, especially speculation on household debt. In this context, the study examines, in a historically and theoretically informed manner, the unequal capital-labour relationship and how it has intensified since the collapse of the Bretton Woods system in 1971. This period saw the liberalisation of exchange rates and the unleashing of credit, which empowered financial capital at the expense of labour, thereby placing financial services and banks at the centre of national and global political economies. The analysis provides both a historical and a contemporary perspective on the underlying factors, causes and consequences surrounding the growth of household debt in the United Kingdom and the fictitious levels of debt-led economic growth experienced, not just in the United Kingdom, but in many other countries. Lastly, the empirical analysis of the rise of household debt notes that the housing market and the financial market have often been identified as the causal agents at the root of most financial crises. The ARDL method was employed to investigate the presence of cointegrating relationships between household debt and the individual regressors. The evidence confirms that the declining rate of profit, the end of high Fordist wages, and house price movements contributed significantly to the rise of household debt in the United Kingdom. The operation of financial institutions to extract rent and profits from over-indebted households, especially in the 1990s and 2000s, eventually blew up during the global financial crisis of 2007-08, the consequences of which are felt to the present day.
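    A hedged sketch of an ARDL cointegration check in the spirit described, using statsmodels; the file name, column names, and lag limits are hypothetical stand-ins for the thesis's actual 1971-2015 series:

```python
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order

# Hypothetical annual series; the column and file names are stand-ins.
df = pd.read_csv("uk_household_debt_1971_2015.csv")

# Pick lag orders by information criterion, then fit the ARDL model; the
# long-run (cointegrating) relationship between household debt and the
# regressors can be read off the fitted error-correction form.
sel = ardl_select_order(
    df["household_debt"],
    maxlag=4,
    exog=df[["profit_rate", "real_wages", "house_prices"]],
    maxorder=4,
    ic="aic",
)
res = sel.model.fit()
print(res.summary())
```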

    An Examination into the Putative Mechanisms Underlying Human Sensorimotor Learning and Decision Making

    Sensorimotor learning can be defined as a process by which an organism benefits from its experience, such that its future behaviour is better adapted to its environment. Humans are sensorimotor learners par excellence, and neurologically intact adults possess an incredible repertoire of skilled behaviours. Nevertheless, despite the topic having fascinated scientists for centuries, there remains a lack of understanding about how humans truly learn. There is a need to better understand sensorimotor learning mechanisms in order to develop treatments for individuals with movement problems, improve training regimes (e.g. surgery) and accelerate motor learning in tasks such as handwriting in children and stroke rehabilitation. This thesis set out to improve our understanding of sensorimotor learning processes and to develop methodologies and tools that enable other scientists to tackle these research questions using the power of recent developments in computer science (particularly immersive technologies). Errors in sensorimotor learning are the specific focus of the experimental chapters of this thesis, where the goal is to address our understanding of error perception and correction in motor learning and to provide a computational understanding of how we process different types of error to inform subsequent behaviour. A brief summary of the approaches employed and the tools developed over the course of this thesis is presented below. Chapter 1 provides a concise overview of the literature on human sensorimotor learning. It introduces the concept of internal models of human interactions with the environment, constructed and refined by the brain in the learning process. Highlighted in this chapter are potential mechanisms for promoting learning (e.g. error augmentation, motor variability) and outstanding challenges for the field (e.g. redundancy, credit assignment). In Chapter 2 a computational model based on information acquisition is developed. The model suggests that disruptive forces applied to human movements during training could improve learning because they allow the learner to sample more information from their environment. Chapter 3 investigates whether sensorimotor learning can be accelerated by forcing participants to explore a novel workspace (and thus acquire more information). The results imply that exploration may be a necessary component of learning, but that manipulating it in this way is not sufficient to accelerate learning. This work serves to highlight the critical role of error correction in learning. The process of conducting the experimental work in Chapters 2 and 3 highlighted the need for an application programming interface that would allow researchers to rapidly deploy experiments examining learning in a controlled but ecologically relevant manner. Virtual reality systems (which measure human interactions with computer-generated worlds) provide a powerful tool for exploring sensorimotor learning, and their use in the study of human behaviour is now more feasible due to recent technological advances. To this end, Chapter 4 reports the development of the Unity Experiment Framework, a new tool to assist in the development of virtual reality experiments in the Unity game engine. Chapter 5 builds on the findings of Chapters 2 and 3 by addressing the specific contributions of visual error. It uses the Unity Experiment Framework to explore whether visually increasing the error signal in a novel aiming task can accelerate motor learning. A novel aiming task is developed which requires participants to learn the mapping between rotations of the handheld virtual reality controllers and the movement of a cursor in Cartesian space. The results show that the visual disturbance does not accelerate the learning of skilled movements, implying a crucial role for mechanical forces, or physical error correction, consistent with the findings reported in Chapter 2. Uncontrolled manifold analysis provides insight into how the variability in selected solutions relates to learning and performance, as the task deliberately allowed a variety of solutions from a redundant parameter space. Chapter 6 extends the scope of this thesis by examining how error information from the sensorimotor system influences higher-order action selection processes. Chapter 5 highlighted the loose definition of “error” in sensorimotor learning; here, the goal was to advance our understanding of error learning by discriminating between different sources of error to better understand their contributions to future behaviour. This issue is illustrated through the example of a tennis player who, on a given point, has the option of selecting a backhand or forehand shot. If the shot is ineffective (and produces an error signal), to optimise future behaviour the brain needs to rapidly determine whether the error was due to poor shot selection, or whether the correct shot was selected but poorly executed. To examine these questions, a novel ‘action bandit’ task was developed in which participants made reaching movements towards targets, with each target having distinct probabilities of execution and selection error. The results revealed a significant selection bias towards the target that produced a higher frequency of execution errors (rather than the target associated with more selection error) despite no difference in expected value. This behaviour may be explained by a gating mechanism, whereby learning from the lack of reward is discounted following sensorimotor errors. However, execution errors also increase uncertainty about the appropriateness of a selected choice, and the need to reduce uncertainty could equally account for these results. Subsequent experiments test these competing hypotheses and show that this putative gating mechanism can be dynamically regulated through the coupling of selection and execution errors. Developing models of these processes highlighted the dynamics of the mechanisms that drive the behaviour. In Chapter 7, the motor component of the task was removed to examine whether this effect is unique to execution errors or a feature of any two-stage decision-making process with multiple error types that are presumed to be dissociated. These observations highlight the complex role error plays in learning and suggest that the credit assignment process is guided and modulated by internal models of the task at hand. Finally, Chapter 8 closes this thesis with a summary of the key findings arising from this work, set in the context of the literature on motor learning and decision making. It is noted that this thesis sought to cover two broad research topics, motor learning and decision making, that have until recently been studied by separate groups of researchers, with very little overlap in the literature. A key goal of this programme of research was to contribute towards bringing together these hitherto disparate fields by focussing on breadth to establish common ground. As the experimental work developed, it became clear that the processing of error required a multi-pronged approach. Within each experimental chapter, the focus on error was accordingly narrowed and the definitions refined. This culminated in developing and testing how individuals discriminate between errors in the sensorimotor and cognitive domains, thus presenting a framework for understanding how motor learning and decision making interact.
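    The gating account from Chapter 6 can be illustrated with a toy value-learning model in which updates after execution errors are discounted, reproducing the reported bias towards the target that fails through execution rather than selection. All parameter values are illustrative, not fitted to the thesis data:

```python
import numpy as np


def gated_bandit(n_trials=2000, alpha=0.2, gate=0.2, beta=5.0, rng=None):
    """Two-target 'action bandit' with a gated credit-assignment rule.

    Target 0 fails through execution errors, target 1 through selection
    errors, at equal rates, so expected value is matched. Value updates
    on execution-error trials are discounted by `gate`.
    """
    rng = rng or np.random.default_rng(0)
    p_exec_err = (0.4, 0.0)      # P(execution error | chosen target)
    p_sel_err = (0.0, 0.4)       # P(selection error | chosen target)
    q = np.zeros(2)              # learned value of each target
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()   # softmax choice
        a = rng.choice(2, p=p)
        exec_err = rng.random() < p_exec_err[a]
        sel_err = rng.random() < p_sel_err[a]
        reward = 0.0 if (exec_err or sel_err) else 1.0
        # Gating: learning from the lack of reward is discounted when the
        # failure was a sensorimotor (execution) error.
        lr = alpha * gate if exec_err else alpha
        q[a] += lr * (reward - q[a])
    return q


print(gated_bandit())   # q[0] > q[1]: bias towards the execution-error target
```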