Bias Compensation for UWB Ranging for Pedestrian Geolocation Applications
We present an effective bias compensation method to process non-line-of-sight (NLoS) and long-distance line-of-sight (LD-LoS) ultra-wideband (UWB) range measurements used to aid a pedestrian inertial navigation system (INS). Common UWB bias compensation techniques use machine learning to identify and remove the bias in the measurements; these techniques are computationally expensive and require extensive prior data. Here, we propose an algorithmic compensation technique that accounts for the bias by estimating it with the Schmidt-Kalman filter (SKF). We then exploit the positivity of the error in the UWB range measurements to propose a novel constrained sigma-point-based correction filter that can be used atop the SKF to further improve the positioning accuracy of UWB-aided pedestrian inertial navigation. Experiments demonstrate the effectiveness of our methods.
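The consider-state idea behind the SKF can be illustrated with a minimal one-dimensional update: the range bias appears in the state vector and inflates the innovation covariance, but the gain rows for the bias block are zeroed so the bias estimate itself is never corrected. This is a generic textbook sketch, not the paper's implementation; the single-position-state setup, the measurement model, and all numbers are hypothetical.

```python
import numpy as np

def schmidt_kalman_update(x, P, z, H, R, n_nav):
    """One Schmidt-Kalman ('consider') measurement update.

    State x = [navigation states; bias states]. The bias states are
    'considered': their uncertainty enters the innovation covariance,
    but the corresponding gain rows are zeroed so the bias estimate
    is never updated.
    """
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # full Kalman gain
    K[n_nav:] = 0.0                          # Schmidt step: freeze bias block
    x = x + K @ (z - H @ x)                  # corrected state
    A = np.eye(len(x)) - K @ H
    P = A @ P @ A.T + K @ R @ K.T            # Joseph-form covariance update
    return x, P

# Toy example: true position ~10 m, UWB range carrying a +1 m NLoS bias.
x = np.array([9.0, 0.0])                     # [position, bias]
P = np.diag([4.0, 1.0])                      # bias variance is 'considered'
H = np.array([[1.0, 1.0]])                   # measurement = position + bias
z = np.array([11.0])                         # biased range measurement
x, P = schmidt_kalman_update(x, P, z, H, np.array([[0.25]]), n_nav=1)
```

The position estimate moves toward the measurement while the bias estimate and its variance stay untouched, which is exactly the consider behavior the abstract relies on.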
Software Defect Association Mining and Defect Correction Effort Prediction
Much current software defect prediction work concentrates on the number of defects remaining in a software system. In this paper, we present association rule mining based methods to predict defect associations and defect-correction effort, helping developers detect software defects and assisting project managers in allocating testing resources more effectively. We applied the proposed methods to the SEL defect data, consisting of more than 200 projects spanning more than 15 years. The results show that for defect association prediction, the accuracy is very high and the false negative rate is very low. Likewise, for defect-correction effort prediction, the accuracy of both defect isolation effort prediction and defect correction effort prediction is also high. We compared the defect-correction effort prediction method with other methods: PART, C4.5, and Naïve Bayes, and show that accuracy improved by at least 23%. We also evaluated the impact of support and confidence levels on prediction accuracy, false negative rate, false positive rate, and the number of rules. We found that higher support and confidence levels do not necessarily yield higher prediction accuracy, and that a sufficient number of rules is a precondition for high prediction accuracy.
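To give a flavor of the rule-mining step, here is a toy sketch that mines one-to-one association rules (A → B) over hypothetical defect records, with the support and confidence thresholds whose sensitivity the abstract discusses. The defect types and data are invented; the paper's method runs on the SEL dataset and is not limited to single-item antecedents.

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.3, min_confidence=0.7):
    """Mine one-to-one association rules (A -> B) from defect records.

    Each transaction is the set of defect types observed together in one
    module. Higher thresholds prune rules, which is why the paper studies
    their effect on accuracy and rule count.
    """
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            both = sum(1 for t in transactions if ante in t and cons in t)
            ante_count = sum(1 for t in transactions if ante in t)
            support = both / n
            if ante_count and support >= min_support:
                confidence = both / ante_count
                if confidence >= min_confidence:
                    rules.append((ante, cons, support, confidence))
    return rules

# Hypothetical records: defect types co-occurring in the same module.
records = [
    {"interface", "logic"},
    {"interface", "logic", "data"},
    {"interface", "logic"},
    {"data"},
    {"interface"},
]
rules = mine_rules(records)
```

On this toy data the miner keeps "interface → logic" and "logic → interface" and discards the low-support pairs involving "data", mirroring how threshold choices control the rule set.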
Analytical/ML Mixed Approach for Concurrency Regulation in Software Transactional Memory
In this article we exploit a combination of analytical and Machine Learning (ML) techniques to build a performance model that dynamically tunes the level of concurrency of applications based on Software Transactional Memory (STM). Our mixed approach has the advantage of reducing the training time of pure machine learning methods while avoiding the approximation errors that typically affect pure analytical approaches. It therefore allows very fast construction of highly reliable performance models, which can be promptly and effectively exploited for optimizing actual application runs. We also present a real implementation of a concurrency regulation architecture, based on the mixed modeling approach, integrated with the open-source TinySTM package, together with experimental data from runs of applications in the STAMP benchmark suite demonstrating the effectiveness of our proposal. © 2014 IEEE
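The analytical/ML mix can be caricatured in a few lines: a closed-form throughput curve supplies the baseline, and a learned correction (here just a lookup of hypothetical measured residuals standing in for a trained regressor) repairs its approximation error before the concurrency level is chosen. Everything below is illustrative, including the conflict model and the numbers; it is not the article's actual performance model.

```python
def analytical_throughput(k, p_conflict=0.05):
    """Closed-form baseline: k threads, each committing with probability
    (1 - p_conflict)**(k - 1), assuming independent pairwise conflicts."""
    return k * (1 - p_conflict) ** (k - 1)

def corrected_throughput(k, residuals):
    """Mixed model: analytical baseline plus an ML-learned correction.
    Here the 'learned model' is a hypothetical dict of residuals that a
    regressor might produce from a short profiling run."""
    return analytical_throughput(k) + residuals.get(k, 0.0)

# Hypothetical residuals; larger thread counts overshoot the analytical
# curve, so the correction pulls them down.
residuals = {1: 0.0, 2: -0.1, 4: -0.4, 8: -1.5, 16: -4.0}

# Concurrency regulation step: pick the level maximizing predicted throughput.
best_k = max(residuals, key=lambda k: corrected_throughput(k, residuals))
```

The regulator would re-fit the residuals as the workload changes, which is where the reduced training time of the mixed approach pays off.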
Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond
In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.
Comment: 44 pages. 1 of the USQCD whitepapers
A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based, and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting, and representing features such that a source classifier performs well on the target domain. Inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions under which bounds on the cross-domain generalization error can be formulated. Our categorization highlights recurring ideas and raises questions important to further research.
Comment: 20 pages, 5 figures
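The sample-based category described above can be sketched concretely: weight each source observation by an estimated density ratio p_target(x)/p_source(x), so that training on the reweighted source set mimics training on the target domain. The estimator below deliberately uses naive 1-D Gaussian fits per domain; real methods (e.g. kernel mean matching or discriminative density-ratio estimation) are more robust, and all data here is synthetic.

```python
import numpy as np

def importance_weights(source, target):
    """Sample-based adaptation sketch: weight each source point by a
    density ratio p_target(x) / p_source(x), here estimated from 1-D
    Gaussian fits to each domain (a deliberately simple estimator)."""
    mu_s, sd_s = source.mean(), source.std()
    mu_t, sd_t = target.mean(), target.std()

    def pdf(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    return pdf(source, mu_t, sd_t) / pdf(source, mu_s, sd_s)

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, 500)   # source-domain samples
target = rng.normal(1.0, 1.0, 500)   # covariate-shifted target domain
w = importance_weights(source, target)
```

Source points that fall where the target density is high receive large weights, so a weighted empirical risk on the source data approximates the target-domain risk.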
Designing High-Fidelity Single-Shot Three-Qubit Gates: A Machine Learning Approach
Three-qubit quantum gates are key ingredients for quantum error correction and quantum information processing. We generate quantum-control procedures to design three types of three-qubit gates, namely the Toffoli, Controlled-Not-Not, and Fredkin gates. The design procedures are applicable to a system comprising three nearest-neighbor-coupled superconducting artificial atoms. For each three-qubit gate, numerical simulation of the proposed scheme achieves 99.9% fidelity, an accepted threshold for fault-tolerant quantum computing. We test our procedure in the presence of decoherence-induced noise and show its robustness against random external noise generated by the control electronics. The three-qubit gates are designed via the machine learning algorithm called Subspace-Selective Self-Adaptive Differential Evolution (SuSSADE).
Comment: 18 pages, 13 figures. Accepted for publication in Phys. Rev. Applied
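SuSSADE builds on standard differential evolution, which can be sketched in its classic DE/rand/1/bin form: mutate with scaled difference vectors, crossover per dimension, and keep the trial if it improves the objective. The subspace selection and self-adaptation that give SuSSADE its name are omitted, and the quadratic "infidelity" stand-in below replaces the actual gate-fidelity objective; pulse amplitudes and bounds are hypothetical.

```python
import numpy as np

def differential_evolution(objective, bounds, pop=20, gens=60,
                           F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer: the base scheme that SuSSADE
    extends with subspace selection and self-adaptive F and CR."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, dim))           # initial population
    fit = np.array([objective(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            f = objective(trial)
            if f < fit[i]:                              # greedy selection
                X[i], fit[i] = trial, f
    return X[fit.argmin()], fit.min()

# Toy control problem: recover hypothetical pulse amplitudes by
# minimizing a quadratic stand-in for gate infidelity.
target = np.array([0.3, -0.7, 0.5])
best, err = differential_evolution(lambda x: np.sum((x - target) ** 2),
                                   bounds=[(-1.0, 1.0)] * 3)
```

In the gate-design setting the objective would evaluate the simulated three-qubit unitary against the target gate, which is far more expensive per call; this is why the paper's subspace-selective variant matters.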