ELM regime classification by conformal prediction on an information manifold
Characterization and control of the plasma instabilities known as edge-localized modes (ELMs) are crucial for the operation of fusion reactors. Recently, machine learning methods have demonstrated good potential for making useful inferences from stochastic fusion data sets. However, traditional classification methods do not offer an inherent estimate of the goodness of their predictions. In this paper, a distance-based conformal predictor classifier integrated with a geometric-probabilistic framework is presented. The first benefit of the approach lies in its comprehensive treatment of highly stochastic fusion data sets, achieved by modeling the measurements with probability distributions in a metric space. This enables calculation of a natural distance measure between probability distributions: the Rao geodesic distance. Second, the predictions are accompanied by estimates of their accuracy and reliability. The method is applied to the classification of regimes characterized by different types of ELMs, based on measurements of global parameters and their error bars. This yields promising success rates and outperforms state-of-the-art automatic techniques for recognizing ELM signatures. The estimates of the goodness of the predictions increase the confidence of classification by ELM experts, while allowing more reliable decisions regarding plasma control and increasing the robustness of the control system.
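To make the idea concrete, here is a minimal sketch of a distance-based conformal classifier of the kind described above, assuming each measurement is summarized as a univariate Gaussian (mean, error bar). The closed-form Fisher-Rao distance for univariate Gaussians and all data values are illustrative, not taken from the paper:

```python
import math

def rao_distance(p, q):
    """Fisher-Rao geodesic distance between two univariate Gaussians
    p = (mu, sigma) and q = (mu, sigma), via the standard closed form."""
    (m1, s1), (m2, s2) = p, q
    a = (m1 - m2) ** 2 / 2.0
    delta = math.sqrt((a + (s1 - s2) ** 2) / (a + (s1 + s2) ** 2))
    return 2.0 * math.sqrt(2.0) * math.atanh(delta)

def conformal_classify(train, test_point):
    """Distance-based conformal classifier.
    train: list of ((mu, sigma), label), at least two examples per label.
    Returns (predicted label, confidence, credibility)."""
    labels = sorted({lab for _, lab in train})
    p_values = {}
    for y in labels:
        same = [x for x, lab in train if lab == y]
        # Nonconformity score: distance to the nearest same-class example,
        # computed leave-one-out for the calibration set.
        calib = [min(rao_distance(a, b) for j, b in enumerate(same) if j != i)
                 for i, a in enumerate(same)]
        test_score = min(rao_distance(test_point, s) for s in same)
        p_values[y] = (1 + sum(c >= test_score for c in calib)) / (len(calib) + 1)
    ranked = sorted(labels, key=lambda y: p_values[y], reverse=True)
    credibility = p_values[ranked[0]]
    confidence = 1.0 - (p_values[ranked[1]] if len(ranked) > 1 else 0.0)
    return ranked[0], confidence, credibility
```

For each candidate label, the p-value compares the test point's nonconformity against the calibration scores of that class; credibility is the largest p-value and confidence is one minus the second largest, giving the per-prediction goodness estimates the abstract refers to.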
Meta-RaPS Hybridization with Machine Learning Algorithms
This dissertation focuses on advancing the Metaheuristic for Randomized Priority Search algorithm, known as Meta-RaPS, by integrating it with machine learning algorithms. Introducing a new metaheuristic algorithm starts with demonstrating its performance. This is accomplished by using the new algorithm to solve various combinatorial optimization problems in their basic form. The next stage focuses on advancing the new algorithm by strengthening its relatively weaker characteristics. In the third traditional stage, the algorithms are exercised in solving more complex optimization problems. In the case of effective algorithms, the second and third stages can occur in parallel as researchers are eager to employ good algorithms to solve complex problems. The third stage can inadvertently strengthen the original algorithm. The simplicity and effectiveness Meta-RaPS enjoys place it in both the second and third research stages concurrently. This dissertation explores strengthening Meta-RaPS by incorporating memory and learning features. The major conceptual frameworks that guided this work are the Adaptive Memory Programming framework (or AMP) and the metaheuristic hybridization taxonomy. The concepts from both frameworks are followed when identifying useful information that Meta-RaPS can collect during execution. Hybridizing Meta-RaPS with machine learning algorithms helped transform the collected information into knowledge. The learning concepts selected are supervised and unsupervised learning. The algorithms selected to achieve both types of learning are the Inductive Decision Tree (supervised learning) and Association Rules (unsupervised learning). The objective behind hybridizing Meta-RaPS with an Inductive Decision Tree algorithm is to perform online control of Meta-RaPS' parameters. This Inductive Decision Tree algorithm is used to find favorable parameter values using knowledge gained from previous Meta-RaPS iterations.
The values selected are used in future Meta-RaPS iterations. The objective behind hybridizing Meta-RaPS with an Association Rules algorithm is to identify patterns associated with good solutions. These patterns are considered knowledge and are inherited as starting points for future Meta-RaPS iterations. The performance of the hybrid Meta-RaPS algorithms is demonstrated by solving the capacitated Vehicle Routing Problem with and without time windows.
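As a toy illustration of the parameter-control idea, the sketch below logs (parameter, objective) pairs from past iterations and learns a one-level decision tree (a stump standing in for a full Inductive Decision Tree) that identifies a favorable parameter region to sample from in future iterations. Meta-RaPS itself, the parameter range, and the data are all stand-ins:

```python
import random

def learn_stump(history):
    """history: list of (parameter_value, objective); lower objective is better.
    Labels each run good/bad by the median objective, then finds the
    one-split rule (threshold + side) whose region is purest in good runs."""
    objs = sorted(obj for _, obj in history)
    median = objs[len(objs) // 2]
    labeled = [(p, obj <= median) for p, obj in history]
    best = None
    for p, _ in labeled:
        for side in ("<=", ">"):
            region = [good for q, good in labeled if (q <= p) == (side == "<=")]
            if not region:
                continue
            key = (sum(region) / len(region), len(region))  # purity, then coverage
            if best is None or key > best[0]:
                best = (key, p, side)
    _, thr, side = best
    return thr, side

def sample_param(thr, side, lo=0.0, hi=1.0, rng=random):
    """Draw the next parameter value from the favorable region
    (the [lo, hi] range is a hypothetical Meta-RaPS parameter range)."""
    if side == "<=":
        return rng.uniform(lo, min(thr, hi))
    return rng.uniform(max(thr, lo), hi)
```

Each Meta-RaPS iteration would append its (parameter, objective) outcome to the history, re-learn the rule, and draw the next parameter value from the favorable region, which is the online-control loop the dissertation describes.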
Fabrics: A Foundationally Stable Medium for Encoding Prior Experience
Most dynamics functions are not well-aligned to task requirements.
Controllers, therefore, often invert the dynamics and reshape it into something
more useful. The learning community has found that these controllers, such as
Operational Space Control (OSC), can offer important inductive biases for
training. However, OSC only captures straight line end-effector motion. There's
a lot more behavior we could and should be packing into these systems. Earlier
work [15][16][19] developed a theory that generalized these ideas and
constructed a broad and flexible class of second-order dynamical systems which
was simultaneously expressive enough to capture substantial behavior well
beyond straight-line motion, and maintained the types of stability properties that make
OSC and controllers like it a good foundation for policy design and learning.
This paper, motivated by the empirical success of the types of fabrics used in
[20], reformulates the theory of fabrics into a form that's more general and
easier to apply to policy learning problems. We focus on the stability
properties that make fabrics a good foundation for policy synthesis. Fabrics
create a fundamentally stable medium within which a policy can operate; they
influence the system's behavior without preventing it from achieving tasks
within its constraints. When a fabric is geometric (path consistent), we can
interpret the fabric as forming a road network of paths that the system wants
to follow at constant speed absent a forcing policy, giving geometric intuition
to its role as a prior. The policy operating over the geometric fabric acts to
modulate speed and steers the system from one road to the next as it
accomplishes its task. We reformulate the theory of fabrics here rigorously and
develop theoretical results characterizing system behavior and illuminating how
to design these systems, while also emphasizing intuition throughout.
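The road-network intuition can be sketched numerically. Below, a toy 2-D fabric term that is homogeneous of degree 2 in velocity (hence path consistent: rescaling the speed rescales the acceleration by its square, leaving the paths unchanged) bends motion away from a point obstacle, while a separate forcing policy, here a damped attractor, modulates speed and steers the system to a goal. All functional forms and gains are invented for this sketch, not the paper's construction:

```python
import math

def fabric_geometry(x, v, obstacle=(0.5, 0.4)):
    """Toy geometric fabric term, homogeneous of degree 2 in velocity:
    h(x, k*v) = k^2 * h(x, v), so its integral curves trace the same
    paths at any speed (path consistency). It pushes radially away
    from a point obstacle."""
    dx = [x[0] - obstacle[0], x[1] - obstacle[1]]
    r = math.hypot(dx[0], dx[1]) + 1e-9
    speed2 = v[0] ** 2 + v[1] ** 2
    gain = 0.05 / r ** 3
    return [speed2 * gain * dx[0] / r, speed2 * gain * dx[1] / r]

def forcing_policy(x, v, goal=(1.0, 0.0), k=4.0, b=3.0):
    """The policy layered on top of the fabric: a damped attractor that
    drives the system toward its task goal."""
    return [-k * (x[0] - goal[0]) - b * v[0], -k * (x[1] - goal[1]) - b * v[1]]

def rollout(x0, v0, dt=0.01, steps=2000):
    """Euler-integrate: acceleration = fabric term + forcing policy."""
    x, v = list(x0), list(v0)
    for _ in range(2000 if steps is None else steps):
        h = fabric_geometry(x, v)
        f = forcing_policy(x, v)
        v = [v[i] + dt * (h[i] + f[i]) for i in range(2)]
        x = [x[i] + dt * v[i] for i in range(2)]
    return x
```

Because the fabric term vanishes with the velocity, it shapes how the system travels without blocking the policy from reaching the goal, which is the "stable medium" role described above.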
Community development, higher education institutions and the Big Society: opportunities or opportunism?
In his Prison Notebooks, written between 1929 and 1935, Gramsci claimed that 'all men are intellectuals: but not all men have in society the function of intellectuals.'
He used the term 'organic intellectuals' to illustrate that those working at grassroots level, who have significant knowledge(s) about the way communities of all types work, are as important to the development of society as academic intellectuals. This article explores the current idea of a 'Big Society' as a hegemonic idea. This exploration is undertaken in relation to the current economic, social and political situation, and with reference to the practice of community development, lifelong learning and the role of Higher Education Institutions (HEIs) in supporting this field of activity. In this article we use the term 'community development' as Tett defines it in Morgan-Klein and Osborne (2007: 104). She claims it means to 'increase the capacity of particular communities through targeted resources for particular areas'.
We specifically explore the following areas:
• challenging the hegemonic ideas and policies
• practising within the restrictions of cuts and limited resources
• setting up supportive networks which will sustain workers
• making meaningful international links and using international examples of good practice
• turning the ideology of the Big Society into an opportunity
We will pose the critical questions that we think need to be addressed, and which we hope will help us find direction and a deeper understanding of the way forward. We hope to create useful and innovative knowledge that will be a valid contribution to the field of community development.
MetaBags: Bagged Meta-Decision Trees for Regression
Ensembles are popular methods for solving practical supervised learning
problems. They reduce the risk of having underperforming models in
production-grade software. Although critical, methods for learning
heterogeneous regression ensembles have not been proposed at large scale,
whereas in classical ML literature, stacking, cascading and voting are mostly
restricted to classification problems. Regression poses distinct learning
challenges that may result in poor performance, even when using well
established homogeneous ensemble schemas such as bagging or boosting.
In this paper, we introduce MetaBags, a novel, practically useful stacking
framework for regression. MetaBags is a meta-learning algorithm that learns a
set of meta-decision trees designed to select one base model (i.e. expert) for
each query, and focuses on inductive bias reduction. A set of meta-decision
trees is learned using different types of meta-features specially created for
this purpose, and then bagged at the meta-level. This procedure is designed to
learn a model with a fair bias-variance trade-off, and its improvement over
base model performance is correlated with the prediction diversity of different
experts on specific input space subregions. The proposed method and
meta-features are designed in such a way that they enable good predictive
performance even in subregions of space which are not adequately represented in
the available training data.
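A minimal sketch of this stacking scheme, with two hypothetical base experts, meta-decision stumps standing in for full meta-decision trees, and the raw input standing in for the purpose-built meta-features:

```python
import random

def fit_meta_stump(X, y, experts, rng):
    """Learn one meta-decision stump on a bootstrap sample: choose the
    threshold t and expert assignment minimizing absolute error when the
    selected expert answers each query. Returns a selector x -> expert index."""
    idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap (bagging)
    best = None
    for t in [X[i] for i in idx]:
        for left_expert in (0, 1):  # expert used when x <= t
            err = 0.0
            for i in idx:
                e = left_expert if X[i] <= t else 1 - left_expert
                err += abs(experts[e](X[i]) - y[i])
            if best is None or err < best[0]:
                best = (err, t, left_expert)
    _, t, left_expert = best
    return lambda x: left_expert if x <= t else 1 - left_expert

def metabags_predict(x, stumps, experts):
    """Each bagged meta-stump selects one expert for the query;
    the selected experts' predictions are averaged."""
    return sum(experts[s(x)](x) for s in stumps) / len(stumps)
```

Each stump learns which expert dominates in which subregion of the input space, and bagging the stumps smooths the selection, mirroring the bias-variance trade-off argument in the abstract.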
Exhaustive empirical testing of the method was performed, evaluating both
the generalization error and scalability of the approach on synthetic, open and
real-world application datasets. The obtained results show that our method
significantly outperforms existing state-of-the-art approaches.
Learning problem solving strategies using refinement and macro generation
In this paper we propose a technique for learning efficient strategies for solving a certain class of problems. The method, RWM, combines two components: refinement and macro generation. The former is a method for partitioning a given problem into a sequence of easier subproblems. The latter efficiently learns composite moves which are useful in solving the problem. These methods, and a system that incorporates them, are described in detail. The strategies learned by RWM are based on the GPS problem-solving method. Examples of strategies learned for different types of problems are given. RWM has learned good strategies for some problems which are difficult by human standards. © 1990
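The macro-generation half of the idea can be illustrated on a toy search domain (the operators and goal below are invented for the example): a solved operator sequence is composed into a single macro, which then shortens later searches:

```python
from collections import deque

# Toy problem space invented for this example: states are integers,
# primitive moves increment or double the state.
PRIMITIVES = {"inc": lambda s: s + 1, "dbl": lambda s: s * 2}

def bfs_solve(start, goal, ops, limit=20):
    """Breadth-first search for a shortest operator sequence start -> goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        s, path = frontier.popleft()
        if s == goal:
            return path
        if len(path) >= limit:
            continue
        for name, f in ops.items():
            t = f(s)
            if t not in seen:
                seen.add(t)
                frontier.append((t, path + [name]))
    return None

def make_macro(ops, names):
    """Macro generation: compose a learned move sequence into one operator."""
    def macro(s):
        for n in names:
            s = ops[n](s)
        return s
    return macro
```

Once the macro for a solved subproblem is stored alongside the primitives, a later search needing the same composite move finds it in one step instead of re-deriving the full sequence, which is the efficiency gain macro generation provides.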
Model-based prediction of progression-free survival for combination therapies in oncology
Progression-free survival (PFS) is an important clinical metric for comparing and evaluating similar treatments for the same disease within oncology. After the completion of a clinical trial, a descriptive analysis of the patients' PFS is often performed post hoc using the Kaplan–Meier estimator. However, to perform predictions, more sophisticated quantitative methods are needed. Tumor growth inhibition models are commonly used to describe and predict the dynamics of preclinical and clinical tumor size data. Frameworks also exist for describing the probability of different types of events, such as tumor metastasis or patient dropout. Combining these two types of models into a so-called joint model enables model-based prediction of PFS. In this paper, we have constructed a joint model from clinical data comparing the efficacy of FOLFOX against FOLFOX + panitumumab in patients with metastatic colorectal cancer. The nonlinear mixed-effects framework was used to quantify interindividual variability (IIV). The model describes the tumor size and PFS data well, and showed good predictive capabilities using truncated as well as external data. A machine-learning-guided analysis was performed to reduce unexplained IIV by incorporating patient covariates. The model-based approach illustrated in this paper could be useful in designing clinical trials or determining new promising drug candidates for combination therapy trials.
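As a stylized illustration of the joint-model idea, the sketch below pairs a Claret-type tumor growth inhibition model (the closed-form solution of dS/dt = kg*S - kd*exp(-lam*t)*S) with a RECIST-like progression rule (tumor size exceeding its nadir by 20%). The parameter values are invented, not estimated from the trial data, and the mixed-effects layer is omitted:

```python
import math

def tumor_size(t, s0=60.0, kg=0.02, kd=0.06, lam=0.03):
    """Claret-type TGI model dS/dt = kg*S - kd*exp(-lam*t)*S, whose closed
    form is S(t) = s0 * exp(kg*t - (kd/lam)*(1 - exp(-lam*t))).
    Units (days, mm) and all parameter values are invented."""
    return s0 * math.exp(kg * t - (kd / lam) * (1.0 - math.exp(-lam * t)))

def pfs_time(kd, horizon=400.0, dt=0.5, **kw):
    """Progression event: tumor size first exceeds its running nadir by 20%
    (a RECIST-like rule); returns the horizon if no progression (censored)."""
    nadir = float("inf")
    t = 0.0
    while t < horizon:
        s = tumor_size(t, kd=kd, **kw)
        nadir = min(nadir, s)
        if s > 1.2 * nadir:
            return t
        t += dt
    return horizon
```

A stronger drug-effect parameter (standing in for the combination arm) deepens the tumor nadir and pushes the progression event later; translating tumor dynamics into event times in this way is the mechanism by which a joint model yields model-based PFS predictions.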