Applying Deep Learning to Fast Radio Burst Classification
Upcoming Fast Radio Burst (FRB) surveys will search 10^3 beams on
sky with very high duty cycle, generating large numbers of single-pulse
candidates. The abundance of false positives presents an intractable problem if
candidates are to be inspected by eye, making it a good application for
artificial intelligence (AI). We apply deep learning to single pulse
classification and develop a hierarchical framework for ranking events by their
probability of being true astrophysical transients. We construct a tree-like
deep neural network (DNN) that takes multiple or individual data products as
input (e.g. dynamic spectra and multi-beam detection information) and trains on
them simultaneously. We have built training and test sets using false-positive
triggers from real telescopes, along with simulated FRBs, and single pulses
from pulsars. Training of the DNN was independently done for two radio
telescopes: the CHIME Pathfinder, and Apertif on Westerbork. High accuracy and
recall can be achieved with a labelled training set of a few thousand events.
Even with high triggering rates, classification can be done very quickly on
Graphical Processing Units (GPUs). That speed is essential for selective
voltage dumps or issuing real-time VOEvents. Next, we investigate whether
dedispersion back-ends could be completely replaced by a real-time DNN
classifier. It is shown that a single forward propagation through a moderate
convolutional network could be faster than brute-force dedispersion; but the
low signal-to-noise per pixel makes such a classifier sub-optimal for this
problem. Real-time automated classification may prove useful for bright,
unexpected signals, both now and in the era of radio astronomy when data
volumes and the searchable parameter spaces further outgrow our ability to
manually inspect the data, such as for the SKA and ngVLA.
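For context, the brute-force incoherent dedispersion that the abstract compares a single CNN forward pass against can be sketched as below. The dispersion constant is the standard value; the function names and toy setup are illustrative, not from the paper.

```python
# Minimal sketch of brute-force incoherent dedispersion (illustrative,
# not the authors' pipeline): shift each frequency channel by its
# dispersive delay at a trial DM and sum over channels.

K_DM = 4.148808e3  # dispersion constant, MHz^2 s per (pc cm^-3)

def delay_s(dm, f_mhz, f_ref_mhz):
    """Dispersive delay of a channel relative to the reference frequency."""
    return K_DM * dm * (1.0 / f_mhz**2 - 1.0 / f_ref_mhz**2)

def dedisperse(spectra, freqs_mhz, dm, dt_s):
    """Return the DM-corrected time series.

    spectra: one list of samples per channel (all equal length),
    freqs_mhz: center frequency of each channel,
    dt_s: sample time in seconds.
    """
    f_ref = max(freqs_mhz)
    n_t = len(spectra[0])
    out = [0.0] * n_t
    for chan, f in zip(spectra, freqs_mhz):
        shift = int(round(delay_s(dm, f, f_ref) / dt_s))
        for t in range(n_t - shift):
            out[t] += chan[t + shift]
    return out
```

The brute-force search the abstract refers to repeats this over a grid of trial DMs and keeps the series with the highest signal-to-noise, which is the cost a single network forward pass would replace.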
Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond
In this and a set of companion whitepapers, the USQCD Collaboration lays out
a program of science and computing for lattice gauge theory. These whitepapers
describe how calculations using lattice QCD (and other gauge theories) can aid
the interpretation of ongoing and upcoming experiments in particle and nuclear
physics, as well as inspire new ones. Comment: 44 pages; 1 of the USQCD whitepapers
Efficient learning of neighbor representations for boundary trees and forests
We introduce a semiparametric approach to neighbor-based classification. We
build on the recently proposed Boundary Trees algorithm by Mathy et al. (2015),
which enables fast neighbor-based classification, regression and retrieval in
large datasets. While boundary trees use a Euclidean measure of similarity,
the Differentiable Boundary Tree algorithm by Zoran et al. (2017) was introduced
to learn low-dimensional representations of complex input data, on which
semantic similarity can be calculated to train boundary trees. As is pointed
out by its authors, the differentiable boundary tree approach has a few
limitations that prevent it from scaling to large datasets. In this paper, we
introduce Differentiable Boundary Sets, an algorithm that overcomes the
computational issues of the differentiable boundary tree scheme and also
improves its classification accuracy and data representability. Our algorithm
is efficiently implementable with existing tools and offers a significant
reduction in training time. We test and compare the algorithms on the
well-known MNIST handwritten digits dataset and the newer Fashion-MNIST dataset
by Xiao et al. (2017). Comment: 9 pages, 2 figures
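The boundary-tree idea this abstract builds on can be sketched as below: a query greedily descends toward its nearest stored example, and training adds a misclassified example as a child of the node where the descent stops. This is a simplified Euclidean-distance sketch of the Mathy et al. (2015) scheme, not the differentiable variant the paper improves; the class and method names are illustrative.

```python
# Minimal boundary-tree sketch (illustrative names, Euclidean similarity).

def sqdist(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class BoundaryTree:
    def __init__(self, x, label):
        self.x, self.label, self.children = x, label, []

    def _descend(self, q):
        # Greedily move to whichever of {current node, its children}
        # is closest to the query; stop when the current node wins.
        node = self
        while True:
            best = min(node.children + [node], key=lambda n: sqdist(n.x, q))
            if best is node:
                return node
            node = best

    def query(self, q):
        return self._descend(q).label

    def train(self, x, label):
        # Only store an example if the tree currently mislabels it,
        # so the tree grows near class boundaries.
        node = self._descend(x)
        if node.label != label:
            node.children.append(BoundaryTree(x, label))
```

The differentiable versions discussed in the abstract replace the raw inputs with learned low-dimensional representations on which this same traversal is run.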
A computationally efficient framework for large-scale distributed fingerprint matching
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science, School of Computer Science and Applied Mathematics. May 2017. Biometric features have been widely adopted for forensic and civil applications. Among the many kinds of biometric characteristics, the fingerprint is globally accepted and remains the most widely used by commercial and industrial societies, owing to its easy acquisition, uniqueness, stability and reliability.
Various effective solutions are currently available; however, fingerprint identification is still not considered a fully solved problem, mainly because of accuracy and computational-time requirements. Although many minutiae-based fingerprint recognition systems provide good accuracy, systems with very large databases require fast, real-time comparison of fingerprints, and they often either fail to meet the high-speed performance requirements or compromise on accuracy.
For fingerprint matching against databases containing millions of fingerprints, real-time identification can only be obtained through optimal algorithms that use the available hardware as robustly and efficiently as possible. There is currently no known distributed database and computing framework that provides a real-time solution to the fingerprint recognition problem for databases containing as many as sixty million fingerprints, a size close to that of the South African population.
This research proposal serves two main purposes: 1) exploit and scale the best known minutiae matching algorithm to a minimum of sixty million fingerprints; and 2) design a framework for a distributed database that handles large fingerprint databases, based on the results obtained in the former item.
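The distributed identification the proposal describes follows a shard-and-merge pattern: the gallery is partitioned across workers, each scores the probe against its own shard, and the per-shard winners are merged into a global best match. The sketch below illustrates only that pattern; the toy `score` function (counting shared minutiae tuples) is a stand-in for a real elastic minutiae matcher, and all names are hypothetical.

```python
# Shard-and-merge identification sketch (illustrative, not the
# dissertation's actual framework).

def score(probe, template):
    # Toy matcher: count of exactly shared minutiae, each encoded as
    # an (x, y, angle) tuple. A real system would use an elastic
    # minutiae-pairing algorithm tolerant to rotation and distortion.
    return len(probe & template)

def match_shard(probe, shard):
    """Best (score, fingerprint_id) pair within one gallery shard.

    shard: list of (fingerprint_id, template) pairs held by one worker.
    """
    return max((score(probe, tmpl), fid) for fid, tmpl in shard)

def identify(probe, shards):
    """Merge the per-shard winners into a global best match."""
    return max(match_shard(probe, s) for s in shards if s)
```

Because each shard is scored independently, the `match_shard` calls can be distributed across machines, and only one (score, id) pair per shard travels back for the final merge, which is what makes the pattern scale to a sixty-million-record gallery.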