Sparsity in Machine Learning: An Information Selecting Perspective
Today we are living in a world awash with data. Large volumes of data are acquired, analyzed, and applied to tasks through machine learning algorithms in nearly every area of science, business, and industry. For example, medical scientists analyze gene expression data from a single specimen to learn the underlying causes of a disease (e.g., cancer) and choose the best treatment; retailers learn more about customers' shopping habits from retail data and adjust their business strategies to better appeal to customers; suppliers enhance supply chain success through systems built on knowledge sharing. However, it is reasonable to doubt whether all genes contribute to a disease, whether all data obtained from existing customers apply to a new customer, or whether all knowledge shared in a supply network is useful to a specific supply scenario. It is therefore crucial to sort through the massive information provided by data and keep only what we really need. This process is referred to as information selection: it keeps the information that improves the performance of the corresponding machine learning task and discards information that is useless or even harmful to task performance. Sparse learning is a powerful tool for information selection. In this thesis, we apply sparse learning to two major areas of machine learning: feature selection and transfer learning.
Feature selection is a dimensionality reduction technique that selects a subset of representative features. Feature selection combined with sparse learning has recently attracted significant attention because it outperforms traditional feature selection methods that ignore correlations between features. However, such methods are restricted by design to linear data transformations, a potential drawback given that the underlying correlation structures of data are often non-linear. To leverage a more sophisticated embedding than the linear model assumed by sparse learning, we propose an autoencoder-based unsupervised feature selection approach that uses a single-layer autoencoder in a joint framework of feature selection and manifold learning. Additionally, we incorporate spectral graph analysis of the projected data into the learning process, preserving local data geometry from the original data space to the low-dimensional feature space.
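As a rough illustration of the idea, the sketch below trains a toy single-layer autoencoder with a row-sparsity penalty on the encoder weights and ranks features by the norms of their weight rows. The function name, penalty form, and hyperparameters are illustrative assumptions, not the method proposed in the thesis.

```python
import numpy as np

def autoencoder_feature_select(X, n_hidden=4, n_select=3, lam=0.1,
                               lr=0.01, epochs=500, seed=0):
    """Toy single-layer autoencoder with a row-sparsity penalty on the
    encoder weights W; features whose rows of W keep a large norm are
    selected. Illustrative sketch only (assumed hyperparameters)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, n_hidden))   # encoder weights
    V = rng.normal(scale=0.1, size=(n_hidden, d))   # decoder weights
    for _ in range(epochs):
        H = np.tanh(X @ W)                          # hidden representation
        E = H @ V - X                               # reconstruction error
        dH = (E @ V.T) * (1.0 - H**2)               # backprop through tanh
        # Gradient of the row-wise (L2,1-style) sparsity penalty on W:
        row_norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
        W -= lr * (X.T @ dH / n + lam * W / row_norms)
        V -= lr * (H.T @ E / n)
    scores = np.linalg.norm(W, axis=1)              # per-feature importance
    return np.argsort(scores)[::-1][:n_select]
```

On synthetic data where only a few features carry variance, the penalty shrinks the weight rows of uninformative features toward zero while reconstruction pressure keeps the informative rows large.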
Transfer learning describes a set of methods that transfer knowledge from related domains to alleviate the problems caused by limited or no labeled training data in machine learning tasks. Many transfer learning techniques have been proposed for different application scenarios. However, because the source and target domains may differ in data distribution, feature space, label space, and so on, it is necessary to select and transfer only relevant information from the source domain to improve the performance of the target learner. Otherwise, the target learner can be negatively impacted by weakly related knowledge from the source domain, a phenomenon referred to as negative transfer. In this thesis, we focus on two transfer learning scenarios in which limited labeled training data are available in the target domain. In the first scenario, no label information is available in the source data. In the second scenario, large amounts of labeled source data are available, but the source and target label spaces do not overlap. The transfer learning technique for the former case is called self-taught learning, while that for the latter is called few-shot learning. We apply self-taught learning to visual, textual, and audio data, and few-shot learning to wearable-sensor-based human activity data. In both cases, we propose a metric of relevance between a target sample/class and a source sample/class, and extract information only from the related samples/classes for knowledge transfer, so that negative transfer caused by weakly related source information is alleviated. Experimental results show that transfer learning provides better performance with information selection.
Low-speed bearing fault diagnosis based on ArSSAE model using acoustic emission and vibration signals
The development of rolling element bearing fault diagnosis systems has attracted a great deal of attention because bearing components have a high tendency toward unexpected failures. However, under low-speed operating conditions, the diagnosis of bearing components remains a problem. In this paper, the adaptive resilient stacked sparse autoencoder (ArSSAE) is proposed to compensate for the shortcomings of conventional fault diagnosis systems at low speed. The efficiency of the proposed ArSSAE model is initially assessed using the CWRU database. Then, the proposed model is evaluated on actual vibration analysis (VA) and acoustic emission (AE) signals measured on a bearing test rig at low operating speeds (48-480 rpm). Overall, the analysis demonstrates that the ArSSAE model is able to perform an accurate diagnosis of bearing components under low-speed conditions.
Unsupervised space-time learning in primary visual cortex
The mammalian visual system is an incredibly complex computational device, capable of performing the various tasks of seeing: navigation, pattern and object recognition, motor coordination, and trajectory extrapolation, among others. Decades of research have shown that experience-dependent plasticity of cortical circuitry underlies the impressive ability to rapidly learn many of these tasks and to adjust as required. One particular thread of investigation has focused on unsupervised learning, wherein changes to the visual environment lead to corresponding changes in cortical circuits. The most prominent example of unsupervised learning is ocular dominance plasticity, caused by visual deprivation of one eye and leading to a dramatic re-wiring of cortex. Other examples make more subtle changes to the visual environment through passive exposure to novel visual stimuli. Here, we use one such unsupervised paradigm, sequence learning, to study experience-dependent plasticity in the mouse visual system. Through a combination of theory and experiment, we argue that the mammalian visual system is an unsupervised learning device.
Beginning with a mathematical exploration of unsupervised learning in biology, engineering, and machine learning, we seek a more precise expression of our fundamental hypothesis. We draw connections between information theory, efficient coding, and common unsupervised learning algorithms such as Hebbian plasticity and principal component analysis. Efficient coding suggests a simple rule for transmitting information in the nervous system: use more spikes to encode unexpected information, and fewer spikes to encode expected information. Therefore, expectation violations ought to produce prediction errors, or brief periods of heightened firing when an unexpected event occurs. Meanwhile, modern unsupervised learning algorithms show how such expectations can be learned.
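The connection between Hebbian plasticity and principal component analysis can be made concrete with Oja's rule, which adds a normalizing decay term to the plain Hebbian update. The minimal sketch below, with assumed synthetic data and learning rate, shows a single linear unit converging to the top principal direction of anisotropic input.

```python
import numpy as np

rng = np.random.default_rng(0)
# Anisotropic 2-D data: most variance lies along the (1, 1) direction
basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = (rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])) @ basis

w = rng.normal(size=2)            # synaptic weight vector
eta = 0.005                       # learning rate (assumed)
for x in X:
    y = w @ x                     # postsynaptic activity
    # Oja's rule: Hebbian term (y * x) plus decay (-y^2 * w),
    # which keeps |w| bounded and drives w to the top eigenvector
    w += eta * y * (x - y * w)

# Compare with the leading eigenvector of the data covariance (PCA)
top = np.linalg.eigh(np.cov(X.T))[1][:, -1]
alignment = abs(w @ top) / np.linalg.norm(w)
```

After a single pass over the data, `alignment` is close to 1, i.e. the purely local Hebbian update has recovered the same direction as batch PCA.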
Next, we review data from decades of visual neuroscience research, highlighting the computational principles and synaptic plasticity processes that support biological learning and seeing. By tracking the flow of visual information from the retina to thalamus and primary visual cortex, we discuss how the principle of efficient coding is evident in neural activity. One common example is predictive coding in the retina, where ganglion cells with canonical center-surround receptive fields compute a prediction error, sending spikes to the central nervous system only in response to locally-unpredictable visual stimuli. This behavior can be learned through simple Hebbian plasticity mechanisms. Similar models explain much of the activity of neurons in primary visual cortex, but we also discuss ways in which the theory fails to capture the rich biological complexity.
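The retinal prediction-error computation described above is often modeled with a difference-of-Gaussians (center minus surround) receptive field. The sketch below uses this common textbook model, not the specific circuit data reviewed here; the kernel widths and stimulus sizes are arbitrary assumptions. A spatially uniform (fully predictable) patch yields essentially no response, while a locally unpredictable edge drives a strong one.

```python
import numpy as np

def dog_response(patch, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians model of a retinal ganglion cell: the
    surround acts as a local prediction of the center, so the output
    approximates a prediction error. Illustrative sketch only."""
    size = patch.shape[0]
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gauss = lambda s: np.exp(-(xx**2 + yy**2) / (2.0 * s**2))
    center = gauss(sigma_c); center /= center.sum()      # narrow excitation
    surround = gauss(sigma_s); surround /= surround.sum()  # broad inhibition
    return float(((center - surround) * patch).sum())

uniform = np.ones((9, 9))                  # fully predictable stimulus
edge = np.ones((9, 9)); edge[:, :4] = 0.0  # locally unpredictable edge
```

Because both kernels are normalized to sum to one, the uniform patch cancels exactly, mirroring the claim that ganglion cells spike only for locally unpredictable stimuli.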
Finally, we present novel experimental results from physiological investigations of the mouse primary visual cortex. We trained mice by passively exposing them to complex spatiotemporal patterns of light: rapidly flashed sequences of images. We find evidence that visual cortex learns these sequences in a manner consistent with efficient coding, such that unexpected stimuli tend to elicit more firing than expected ones. Overall, we observe dramatic changes in evoked neural activity across days of passive exposure. Neural responses to the first, unexpected sequence element increase with days of training, while responses at other, expected time points either decrease or stay the same. Furthermore, substituting an unexpected element for an expected one or omitting an expected element both cause brief bursts of increased firing. Our results therefore provide evidence for unsupervised learning and efficient coding in the mouse visual system, especially because unexpected events drive prediction errors. Overall, our analysis suggests novel experiments, which could be performed in the near future, and provides a useful framework for understanding visual perception and learning.
Self-Taught Learning Based on Sparse Autoencoder for E-Nose in Wound Infection Detection
For an electronic nose (E-nose) used to distinguish wound infections, traditional learning methods have always required large quantities of labeled wound infection samples, which are both limited and expensive; thus, we introduce self-taught learning combined with a sparse autoencoder and radial basis function (RBF) networks into the field. Self-taught learning is a kind of transfer learning that can transfer knowledge from other fields to target fields, and it can handle problems in which the labeled data (target fields) and unlabeled data (other fields) do not share the same class labels, even if they come from entirely different distributions. In our paper, we obtain numerous cheap unlabeled pollutant gas samples (benzene, formaldehyde, acetone, and ethyl alcohol), whereas labeled wound infection samples are hard to gain. Thus, we apply self-taught learning to utilize these gas samples, obtaining a basis vector θ. Then, using the basis vector θ, we reconstruct the new representation of the wound infection samples under a sparsity constraint, which serves as the input to the classifiers. We compare RBF with partial least squares discriminant analysis (PLSDA) and conclude that the performance of RBF is superior. We also vary the dimension of our data set and the quantity of unlabeled data to search for the input representation that produces the highest accuracy.
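A minimal sketch of the self-taught pipeline follows, under two simplifying assumptions: the basis θ is taken as the top singular vectors of the unlabeled data rather than a trained sparse autoencoder, and sparsity is imposed by soft-thresholding the projections. Function names and the threshold value are illustrative, not from the paper.

```python
import numpy as np

def learn_basis(X_unlabeled, k):
    """Stand-in for the sparse-autoencoder basis theta: the top-k right
    singular vectors of the unlabeled data (a deliberate simplification)."""
    centered = X_unlabeled - X_unlabeled.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:k]                              # shape (k, d), orthonormal rows

def sparse_code(x, theta, thresh=0.1):
    """New representation of a labeled sample under a sparsity constraint:
    project onto the basis, then soft-threshold the coefficients."""
    a = theta @ x
    return np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0)
```

In the full method, `learn_basis` would run on the cheap unlabeled gas samples and `sparse_code` would re-encode the scarce labeled wound samples before they reach the RBF or PLSDA classifier.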
MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications
Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.
Generalized averaged Gaussian quadrature and applications
A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and they can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
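The generalized averaged formulas themselves require a dedicated construction, but the underlying idea of estimating a Gaussian rule's error by comparing it against a related higher-order rule can be sketched generically with Gauss-Legendre nodes. The comparison rule below is a plain higher-order Gauss rule standing in for the averaged or Kronrod extension, so this is an illustration of the error-estimation principle only.

```python
import numpy as np

def gauss_legendre(f, n):
    """n-point Gauss-Legendre approximation of the integral of f over [-1, 1];
    exact for polynomials of degree up to 2n - 1."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(weights @ f(nodes))

f = lambda x: np.exp(x)
exact = np.e - 1.0 / np.e          # integral of exp(x) over [-1, 1]
q5 = gauss_legendre(f, 5)
# Stand-in error estimate: compare with a higher-order rule, in the same
# spirit as averaged / Gauss-Kronrod extensions of a Gaussian rule.
q6 = gauss_legendre(f, 6)
err_est = abs(q6 - q5)
```

For smooth integrands like this one, the 5-point rule is already accurate to roughly ten digits, and the difference between the two rules tracks the true error closely.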
Safety and Reliability - Safe Societies in a Changing World
The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include:
- foundations of risk and reliability assessment and management
- mathematical methods in reliability and safety
- risk assessment
- risk management
- system reliability
- uncertainty analysis
- digitalization and big data
- prognostics and system health management
- occupational safety
- accident and incident modeling
- maintenance modeling and applications
- simulation for safety and reliability analysis
- dynamic risk and barrier management
- organizational factors and safety culture
- human factors and human reliability
- resilience engineering
- structural reliability
- natural hazards
- security
- economic analysis in risk management