Near-Surface Interface Detection for Coal Mining Applications Using Bispectral Features and GPR
The use of ground penetrating radar (GPR) for detecting the presence of near-surface interfaces is a scenario of special interest to the underground coal mining industry. The problem is difficult to solve in practice because the radar echo from the near-surface interface is often dominated by unwanted components such as antenna crosstalk and ringing, ground-bounce effects, clutter, and severe attenuation. These nuisance components are also highly sensitive to subtle variations in ground conditions, rendering standard signal pre-processing techniques such as background subtraction largely ineffective in the unsupervised case. As a solution to this detection problem, we develop a novel pattern-recognition algorithm that uses a neural network to classify features derived from the bispectrum of 1D early-time radar data. The binary classifier decides between two key cases, namely whether an interface is within, for example, 5 cm of the surface or not. This go/no-go detection capability is highly valuable for underground coal mining operations, such as longwall mining, where leaving a remnant coal section is essential for geological stability. The classifier was trained and tested using real GPR data with ground-truth measurements. The real data was acquired from a testbed with coal-clay, coal-shale and shale-clay interfaces, which represents a test mine site. We show that, unlike traditional second-order correlation-based methods such as matched filtering, which can fail even in known conditions, the new method reliably enables GPR interface detection in the near-surface region. In this work, we do not address the problem of depth estimation; rather, we confine ourselves to detecting an interface within a particular depth range.
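As a rough illustration of the feature-extraction stage described above, the sketch below estimates a direct (FFT-based) bispectrum of a 1D trace and collapses it to a few summary statistics. The function name `bispectrum_features` and the choice of summary statistics are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def bispectrum_features(trace, n_fft=64):
    """Estimate the bispectrum of a 1D radar trace and reduce it to a
    small feature vector (a simplified stand-in for the paper's features)."""
    X = np.fft.fft(trace, n_fft)
    # Direct bispectrum estimate: B(f1, f2) = X(f1) * X(f2) * conj(X(f1 + f2))
    f = np.arange(n_fft // 2)
    B = X[f, None] * X[None, f] * np.conj(X[(f[:, None] + f[None, :]) % n_fft])
    mag = np.abs(B)
    # Coarse summary statistics over the bispectral magnitude plane
    return np.array([mag.mean(), mag.max(), mag.std()])

rng = np.random.default_rng(0)
trace = rng.standard_normal(128)   # stand-in for a 1D early-time radar trace
feats = bispectrum_features(trace)
print(feats.shape)  # (3,)
```

In the paper's pipeline, vectors like these would then be fed to a binary neural-network classifier for the interface / no-interface decision.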
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential for supporting a broad
range of complex, compelling applications in both military and civilian
fields, where users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting
big data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to help readers clarify the motivation
and methodology of the various ML algorithms, so that they can be invoked
for hitherto unexplored services and scenarios of future wireless networks.

Comment: 46 pages, 22 figures
Structured Dropout for Weak Label and Multi-Instance Learning and Its Application to Score-Informed Source Separation
Many success stories involving deep neural networks are instances of
supervised learning, where available labels power gradient-based learning
methods. Creating such labels, however, can be expensive and thus there is
increasing interest in weak labels which only provide coarse information, with
uncertainty regarding time, location or value. Using such labels often leads to
considerable challenges for the learning process. Current methods for
weak-label training often employ standard supervised approaches that
additionally reassign or prune labels during the learning process. The
information gain, however, is often limited as only the importance of labels
where the network already yields reasonable results is boosted. We propose
treating weak-label training as an unsupervised problem and use the labels to
guide the representation learning to induce structure. To this end, we propose
two autoencoder extensions: class activity penalties and structured dropout. We
demonstrate the capabilities of our approach in the context of score-informed
source separation of music.
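The structured-dropout idea described above can be sketched as follows: groups of hidden units are associated with classes, and the groups whose classes a weak label marks as inactive are zeroed during training, so the labels shape the learned representation rather than supervise it directly. The grouping scheme and the names `structured_dropout` and `units_per_class` are hypothetical simplifications, not the paper's exact formulation.

```python
import numpy as np

def structured_dropout(hidden, weak_labels, units_per_class):
    """Zero out the groups of hidden units assigned to classes that the
    weak label marks inactive for each example (hypothetical grouping)."""
    # Expand the per-class activity labels to a per-unit mask
    mask = np.repeat(weak_labels, units_per_class, axis=1)  # (batch, hidden)
    return hidden * mask

batch, n_classes, units_per_class = 2, 3, 4
hidden = np.ones((batch, n_classes * units_per_class))  # autoencoder code layer
weak_labels = np.array([[1.0, 0.0, 1.0],   # example 1: classes 0 and 2 active
                        [0.0, 1.0, 0.0]])  # example 2: only class 1 active
out = structured_dropout(hidden, weak_labels, units_per_class)
print(out[0])  # class-1 unit group is zeroed for the first example
```

Unlike standard random dropout, the mask here is deterministic given the weak label, which is what induces the class-wise structure in the code layer.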
Pyramid diffractive optical networks for unidirectional magnification and demagnification
Diffractive deep neural networks (D2NNs) are composed of successive
transmissive layers optimized using supervised deep learning to all-optically
implement various computational tasks between an input and output field-of-view
(FOV). Here, we present a pyramid-structured diffractive optical network design
(which we term P-D2NN), optimized specifically for unidirectional image
magnification and demagnification. In this P-D2NN design, the diffractive
layers are pyramidally scaled in alignment with the direction of the image
magnification or demagnification. Our analyses revealed the efficacy of this
P-D2NN design in unidirectional image magnification and demagnification tasks,
producing high-fidelity magnified or demagnified images in only one direction,
while inhibiting the image formation in the opposite direction - confirming the
desired unidirectional imaging operation. Compared to the conventional D2NN
designs with uniform-sized successive diffractive layers, P-D2NN design
achieves similar performance in unidirectional magnification tasks using only
half of the diffractive degrees of freedom within the optical processor volume.
Furthermore, it maintains its unidirectional image
magnification/demagnification functionality across a large band of illumination
wavelengths despite being trained with a single illumination wavelength. With
this pyramidal architecture, we also designed a wavelength-multiplexed
diffractive network, where a unidirectional magnifier and a unidirectional
demagnifier operate simultaneously in opposite directions, at two distinct
illumination wavelengths. The efficacy of the P-D2NN architecture was also
validated experimentally using monochromatic terahertz illumination,
successfully matching our numerical simulations. P-D2NN offers a
physics-inspired strategy for designing task-specific visual processors.Comment: 26 Pages, 7 Figure
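The pyramidal layer arrangement described above can be illustrated with a toy helper that linearly interpolates layer aperture sizes between the input and output fields-of-view; `pyramid_layer_sizes` is an illustrative name and the linear schedule is an assumption, not the paper's exact design.

```python
def pyramid_layer_sizes(n_layers, in_size, out_size):
    """Linearly scale diffractive-layer apertures from the input FOV size to
    the output FOV size, mirroring a pyramidal layer arrangement (toy sketch)."""
    step = (out_size - in_size) / (n_layers - 1)
    return [round(in_size + i * step) for i in range(n_layers)]

# A magnifier grows layer apertures toward the output; a demagnifier shrinks them
print(pyramid_layer_sizes(5, 100, 300))  # [100, 150, 200, 250, 300]
```

Scaling the layers with the image, rather than keeping them uniform, is what lets the design spend roughly half the diffractive degrees of freedom for similar magnification performance.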
High Accuracy Distributed Target Detection and Classification in Sensor Networks Based on Mobile Agent Framework
High-accuracy distributed information exploitation plays an important role in sensor networks. This dissertation describes a mobile-agent-based framework for target detection and classification in sensor networks. Specifically, we tackle the challenging problems of multiple-target detection, high-fidelity target classification, and unknown-target identification.
In this dissertation, we present a progressive multiple-target detection approach to estimate the number of targets sequentially and implement it using a mobile-agent framework. To further improve performance, we present a cluster-based distributed approach in which the estimated results from different clusters are fused. Experimental results show that the distributed scheme with the Bayesian fusion method has the best performance, in the sense that it achieves the highest detection probability and the most stable behavior. In addition, the progressive intra-cluster estimation can reduce data transmission by 83.22% and conserve energy by 81.64% compared to the centralized scheme.
For collaborative target classification, we develop a general-purpose multi-modality, multi-sensor fusion hierarchy for information integration in sensor networks. The hierarchy is composed of four levels of enabling algorithms: local signal processing, temporal fusion, multi-modality fusion, and multi-sensor fusion using a mobile-agent-based framework. The fusion hierarchy ensures fault tolerance and thus generates robust results, while also taking energy efficiency into account. Experimental results based on two field demos show consistent improvement of classification accuracy across the levels of the hierarchy.
Unknown-target identification in sensor networks corresponds to the capability of detecting targets without any a priori information and of modifying the knowledge base dynamically. In this dissertation, we present a collaborative method to solve this problem among multiple sensors. When applied to the military vehicles data set collected in a field demo, about 80% of unknown-target samples can be recognized correctly, while the known-target classification accuracy stays above 95%.
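A minimal sketch of the Bayesian fusion step mentioned above, assuming each cluster reports a posterior distribution over the number of targets and that the cluster reports are conditionally independent (the probabilities below are illustrative, not from the dissertation's experiments):

```python
import numpy as np

def bayes_fuse(posteriors):
    """Fuse per-cluster posteriors over the target count by multiplying them
    under a conditional-independence assumption, then renormalizing."""
    fused = np.prod(np.asarray(posteriors), axis=0)
    return fused / fused.sum()

# Two clusters each report P(k targets) for k = 0..3 (illustrative numbers)
p1 = np.array([0.10, 0.60, 0.20, 0.10])
p2 = np.array([0.05, 0.70, 0.20, 0.05])
fused = bayes_fuse([p1, p2])
print(fused.argmax())  # 1 -> both clusters agree on a single target
```

Multiplying posteriors sharpens the fused estimate when clusters agree, which is consistent with the reported gain in detection probability and stability over a single-cluster estimate.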
Low-Power Circuits for Brain–Machine Interfaces
This paper presents work on ultra-low-power circuits for brain–machine interfaces with applications for paralysis prosthetics, stroke, Parkinson’s disease, epilepsy, prosthetics for the blind, and experimental neuroscience systems. The circuits include a micropower neural amplifier with adaptive power biasing for use
in multi-electrode arrays; an analog linear decoding and learning
architecture for data compression; low-power radio-frequency
(RF) impedance-modulation circuits for data telemetry that
minimize power consumption of implanted systems in the body;
a wireless link for efficient power transfer; mixed-signal system
integration for efficiency, robustness, and programmability; and
circuits for wireless stimulation of neurons with power-conserving
sleep modes and awake modes. Experimental results from chips
that have stimulated and recorded from neurons in the zebra
finch brain and results from RF power-link, RF data-link,
electrode-recording and electrode-stimulating systems are presented.
Simulations of analog learning circuits that have successfully
decoded prerecorded neural signals from a monkey brain are also
presented.
PerformanceNet: Score-to-Audio Music Generation with Multi-Band Convolutional Residual Network
Music creation is typically composed of two parts: composing the musical
score, and then performing the score with instruments to make sounds. While
recent work has made much progress in automatic music generation in the
symbolic domain, few attempts have been made to build an AI model that can
render realistic music audio from musical scores. Directly synthesizing audio
with sound sample libraries often leads to mechanical and deadpan results,
since musical scores do not contain performance-level information, such as
subtle changes in timing and dynamics. Moreover, while the task may sound like
a text-to-speech synthesis problem, there are fundamental differences since
music audio has rich polyphonic sounds. To build such an AI performer, we
propose in this paper a deep convolutional model that learns in an end-to-end
manner the score-to-audio mapping between a symbolic representation of music
called the piano rolls and an audio representation of music called the
spectrograms. The model consists of two subnets: the ContourNet, which uses a
U-Net structure to learn the correspondence between piano rolls and
spectrograms and to give an initial result; and the TextureNet, which further
uses a multi-band residual network to refine the result by adding the spectral
texture of overtones and timbre. We train the model to generate music clips of
the violin, cello, and flute, with a dataset of moderate size. We also present
the result of a user study that shows our model achieves higher mean opinion
score (MOS) in naturalness and emotional expressivity than a WaveNet-based
model and two commercial sound libraries. We open our source code at
https://github.com/bwang514/PerformanceNet

Comment: 8 pages, 6 figures, AAAI 2019 camera-ready version
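The piano-roll input representation mentioned above can be sketched as a binary pitch-by-time matrix; the helper name `to_piano_roll` and its note-tuple format are illustrative assumptions, not PerformanceNet's actual preprocessing code.

```python
import numpy as np

def to_piano_roll(notes, n_pitches=128, n_frames=16):
    """Build a binary piano-roll matrix (pitch x time) from
    (pitch, start_frame, end_frame) note tuples."""
    roll = np.zeros((n_pitches, n_frames))
    for pitch, start, end in notes:
        roll[pitch, start:end] = 1.0
    return roll

# Middle C for 4 frames, then E for 4 frames (illustrative melody)
roll = to_piano_roll([(60, 0, 4), (64, 4, 8)])
print(roll.shape)  # (128, 16)
```

A matrix like this is the symbolic input side of the score-to-audio mapping; the model's ContourNet/TextureNet subnets then map it to a spectrogram.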