Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-timed artificial units (as in Hopfield
networks) or continuous-timed leaky integrate & fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
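As an illustration of the core idea (not the authors' implementation), the "random mask over the connections" can be sketched as a DropConnect-style Hopfield update, where each synapse is independently blanked out before the state is recomputed; all names below are illustrative:

```python
import random

def masked_update(weights, state, p_keep=0.5, rng=random):
    """One Hopfield-style state update in which each synapse is
    independently dropped with probability 1 - p_keep, so repeated
    updates sample different states (DropConnect-like stochasticity)."""
    new_state = []
    for row in weights:
        # Randomly mask each connection before summing the input.
        h = sum(w * s for w, s in zip(row, state) if rng.random() < p_keep)
        new_state.append(1 if h >= 0 else -1)
    return new_state
```

With `p_keep=1.0` the update reduces to the deterministic Hopfield rule; for `p_keep < 1` repeated calls yield a Monte Carlo sample over states.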
Measurement, estimation, and prediction of software reliability
Quantitative indices of software reliability are defined, and the application of three important procedures is indicated: (1) reliability measurement, (2) reliability estimation, and (3) reliability prediction. State-of-the-art techniques for each of these procedures are presented together with considerations of data acquisition. Failure classifications and other documentation for comprehensive software reliability evaluation are described.
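As a minimal sketch of reliability estimation and prediction, assuming a constant-failure-rate (exponential) model rather than whichever indices the paper itself defines:

```python
import math

def failure_rate_mle(failure_times):
    """Maximum-likelihood estimate of a constant failure rate:
    lambda = number of failures / total observed operating time."""
    return len(failure_times) / sum(failure_times)

def reliability(t, lam):
    """Predicted probability of surviving to time t: R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)
```

Given observed inter-failure times, `failure_rate_mle` is the estimation step and `reliability` the prediction step.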
Outlier Detection Techniques For Wireless Sensor Networks: A Survey
In the field of wireless sensor networks, measurements that
significantly deviate from the normal pattern of sensed data are
considered as outliers. The potential sources of outliers include
noise and errors, events, and malicious attacks on the network.
Traditional outlier detection techniques are not directly
applicable to wireless sensor networks due to the multivariate
nature of sensor data and specific requirements and limitations of
the wireless sensor networks. This survey provides a comprehensive
overview of existing outlier detection techniques specifically
developed for the wireless sensor networks. Additionally, it
presents a technique-based taxonomy and a decision tree to be used
as a guideline to select a technique suitable for the application
at hand based on characteristics such as data type, outlier type,
and outlier degree.
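The simplest family of techniques the survey covers is statistical: flag readings that deviate too far from the observed distribution. A minimal sketch (illustrative, not from the survey):

```python
import statistics

def zscore_outliers(readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the
    mean -- the simplest statistical outlier test for a sensor stream."""
    mu = statistics.mean(readings)
    sigma = statistics.stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]
```

Real deployments must also handle the multivariate and resource-constrained setting the survey emphasises, which this one-feature test does not.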
Improving data driven decision making through integration of environmental sensing technologies
Coastal and estuarine zones contain vital and increasingly exploited resources. Traditional uses in these areas (transport, fishing, tourism) now sit alongside more recent activities (mineral extraction, wind farms). However, protecting the resource base upon which these marine-related economic and social activities depend requires access to reliable and timely data.
This requires both acquisition of background (baseline) data and monitoring impacts of resource exploitation on aquatic processes and the environment. Management decisions must be based on analysis of collected data to reduce negative impacts while supporting resource-efficient, environmentally sustainable uses. Multi-modal sensing and data fusion offer attractive possibilities for providing such data in a resource efficient and robust manner.
In this paper, we report the results of integrating multiple sensing technologies, including autonomous multi-parameter aquatic sensors and visual sensing systems. By focussing on salinity, water level, and freshwater influx into an estuarine system, we demonstrate the potential of modelling and data mining techniques to allow deployment of fewer sensors with greater network robustness. Using the estuary of the River Liffey in Dublin, Ireland, as an example, we present the outputs and benefits resulting from fusion of multi-modal sensing technologies to predict and understand freshwater input into estuarine systems, and discuss the potential of multi-modal datasets for informed management decisions.
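The "fewer sensors" idea can be sketched as learning to predict one measurement from another, so a physical sensor can be replaced by a model once the relationship is calibrated. A minimal least-squares sketch (variable roles, e.g. salinity vs. water level, are illustrative, not the paper's model):

```python
def fit_line(xs, ys):
    """Ordinary least squares y ~ a*x + b: e.g. predict salinity (ys)
    from water level (xs) so one physical sensor can stand in for two."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx
```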
Using Unsupervised Learning to Improve the Naive Bayes Classifier for Wireless Sensor Networks
Online processing is essential for many sensor network applications. Sensor nodes can sample far more data than can practically be transmitted using state-of-the-art sensor network radios. Online processing, however, is complicated by the limited resources of individual nodes. The naive Bayes classifier is an algorithm proven to be suitable for online classification on Wireless Sensor Networks. In this paper, we investigate a new technique to improve the naive Bayes classifier while maintaining sensor network compatibility. We propose the application of unsupervised learning techniques to enhance the probability density estimation needed for naive Bayes, thereby achieving the benefits of binning histogram probability density estimation without the related memory requirements. Using an offline experimental dataset, we demonstrate the possibility of matching the performance of the binning histogram approach within the constraints provided by Wireless Sensor Network hardware. We validate the feasibility of our approach using an implementation based on Arduino Nano hardware combined with NRF24L01+ radios.
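For reference, the binning-histogram baseline the paper improves on can be sketched as follows: estimate p(x | class) per class from bin counts, then classify by the highest estimated density (a one-feature sketch with illustrative parameters, not the paper's code):

```python
from collections import defaultdict

def train_histogram_nb(xs, ys, n_bins=8, lo=0.0, hi=1.0):
    """Per-class binned-histogram estimate of p(x | class) for a single
    feature, with add-one smoothing. Memory grows with n_bins, which is
    exactly the cost the paper's unsupervised approach avoids."""
    width = (hi - lo) / n_bins
    counts = defaultdict(lambda: [1] * n_bins)  # Laplace smoothing
    totals = defaultdict(int)
    for x, y in zip(xs, ys):
        b = min(n_bins - 1, max(0, int((x - lo) / width)))
        counts[y][b] += 1
        totals[y] += 1
    dists = {y: [c / (totals[y] + n_bins) for c in counts[y]] for y in counts}
    return dists, (lo, width, n_bins)

def classify(x, model):
    """Pick the class with the highest estimated density at x."""
    dists, (lo, width, n_bins) = model
    b = min(n_bins - 1, max(0, int((x - lo) / width)))
    return max(dists, key=lambda y: dists[y][b])
```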
Statistical Analysis And Machine Learning For Coal Classification For Rare Earth Elements + Y (REY)
Due to their exceptional properties, rare earth elements (REEs) are critical to technological innovation in renewable energy production, electronics, health care, and national defense. They make up key components for many applications in the above areas. Many countries rely upon rare earth element imports. The high demand for rare earth elements has led to the development of alternative methods for exploration and capture. Coal has been labeled a viable potential source of rare earth elements and yttrium (REY). Statistical evaluation of REY concentrations and the properties of various coal samples is critical for successful characterization.
The USGS COALQUAL database Version 3.0 is an industry standard database for coal research that contains 7658 non-weathered, full-bed coal samples from the United States. 5485 of these samples contain a full spectrum of REY concentrations. The data quality in the COALQUAL database will be analyzed to ensure that the data is reliable, and characteristics will be analyzed using conventional statistical methodology. This methodology includes accounting for samples with REY concentrations below the lowest limits of detection. Mean concentrations for each REY will be adjusted to fit a distribution of mean REY concentrations from the National Coal Resources Data System (NCRDS) normalized by the Upper Continental Crust standard dataset of REY mean concentrations. All samples are classified as unpromising or promising using total rare earth oxide concentration and the ratio of critical REYs to excess REYs called the outlook coefficient.
Machine learning is a powerful tool that can classify new data points added to a database based on their attributes. A machine learning model was developed to use existing data from the COALQUAL database to train and test algorithms that classify coal samples as unpromising or promising based on the samples' ASTM ash percentage. The 5485 adjusted coal samples from the COALQUAL database were subjected to the synthetic minority over-sampling technique (SMOTE) to eliminate label bias, and imputation methods were used to format the data for computational purposes. The adjusted coal samples were evaluated across various machine learning algorithms, using accuracy and the number of false positives as the key performance indicators. The k-nearest neighbors (KNN) algorithm emerged as the best performer with 92% accuracy and 2% false positives. A brief economic analysis is included to justify using the model to save costs associated with obtaining trace element concentrations from laboratory analysis. Recommendations are given with details on how to utilize this research for future endeavors.
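The winning KNN classifier can be sketched in a few lines: label a query sample by majority vote among its k nearest training samples. The data and label names below are synthetic illustrations, not COALQUAL values:

```python
import math
from collections import Counter

def knn_classify(train_x, train_y, query, k=3):
    """Label `query` by majority vote among its k nearest neighbours.
    The single feature here stands in for, e.g., ash percentage; the
    labels mirror the paper's promising/unpromising split."""
    nearest = sorted(zip(train_x, train_y),
                     key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```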
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical Spike-Timing-Dependent
Plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation with
memristor-based hardware.
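For context, the classical pair-based STDP rule that the multifactor rules outperform can be sketched as an exponentially decaying weight update in the pre/post spike-time difference (constants are illustrative):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise, with the magnitude decaying
    exponentially in the spike-time difference."""
    dt = t_post - t_pre
    if dt >= 0:
        return w + a_plus * math.exp(-dt / tau)
    return w - a_minus * math.exp(dt / tau)
```

Multifactor rules extend this by conditioning the update on additional signals (e.g. neural state or device dynamics), which is where memristor stochasticity can be exploited.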
Fleet Prognosis with Physics-informed Recurrent Neural Networks
Servicing and providing warranties for large fleets of engineering assets is a
very profitable business. The success of companies in that area is often related to
predictive maintenance driven by advanced analytics. Therefore, accurate
modeling, as a way to understand how the complex interactions between operating
conditions and component capability define useful life, is key for services
profitability. Unfortunately, building prognosis models for large fleets is a
daunting task as factors such as duty cycle variation, harsh environments,
inadequate maintenance, and problems with mass production can lead to large
discrepancies between designed and observed useful lives. This paper introduces
a novel physics-informed neural network approach to prognosis by extending
recurrent neural networks to cumulative damage models. We propose a new
recurrent neural network cell designed to merge physics-informed and
data-driven layers. With that, engineers and scientists have the chance to use
physics-informed layers to model parts that are well understood (e.g., fatigue
crack growth) and use data-driven layers to model parts that are poorly
characterized (e.g., internal loads). A simple numerical experiment is used to
present the main features of the proposed physics-informed recurrent neural
network for damage accumulation. The test problem consists of predicting fatigue
crack length for a synthetic fleet of airplanes subject to different mission
mixes. The model is trained using full observation inputs (far-field loads) and
very limited observation of outputs (crack length at inspection for only a
portion of the fleet). The results demonstrate that our proposed hybrid
physics-informed recurrent neural network is able to accurately model fatigue
crack growth even when the observed distribution of crack length does not match
with the (unobservable) fleet distribution.
Comment: Data and code (including our implementation of the multi-layer
perceptron, the stress-intensity and Paris-law layers, the cumulative damage
cell, as well as Python driver scripts) used in this manuscript are publicly
available on GitHub at https://github.com/PML-UCF/pinn. The data and code are
released under the MIT License.
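A minimal sketch of the cell's idea, not the repository's implementation: a stress-intensity step feeding a Paris-law step, unrolled over load cycles like an RNN over time (constants C and m are illustrative, not fitted values):

```python
import math

def cumulative_damage_cell(a, delta_sigma, C=1e-10, m=3.0):
    """One recurrent step: a stress-intensity layer dK = d_sigma*sqrt(pi*a)
    feeding a Paris-law layer da/dN = C * dK**m; returns the updated
    crack length a."""
    delta_k = delta_sigma * math.sqrt(math.pi * a)
    return a + C * delta_k ** m

def propagate(a0, load_history, C=1e-10, m=3.0):
    """Unroll the cell over a far-field load history, as an RNN would
    unroll over time steps."""
    a = a0
    for delta_sigma in load_history:
        a = cumulative_damage_cell(a, delta_sigma, C, m)
    return a
```

In the hybrid model, layers like the stress-intensity computation stay physics-based while poorly characterized quantities (e.g. internal loads) are replaced by data-driven layers trained end to end.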