Formal Analysis and Redesign of a Neural Network-Based Aircraft Taxiing System with VerifAI
We demonstrate a unified approach to rigorous design of safety-critical
autonomous systems using the VerifAI toolkit for formal analysis of AI-based
systems. VerifAI provides an integrated toolchain for tasks spanning the design
process, including modeling, falsification, debugging, and ML component
retraining. We evaluate all of these applications in an industrial case study
on an experimental autonomous aircraft taxiing system developed by Boeing,
which uses a neural network to track the centerline of a runway. We define
runway scenarios using the Scenic probabilistic programming language, and use
them to drive tests in the X-Plane flight simulator. We first perform
falsification, automatically finding environment conditions causing the system
to violate its specification by deviating significantly from the centerline (or
even leaving the runway entirely). Next, we use counterexample analysis to
identify distinct failure cases, and confirm their root causes with specialized
testing. Finally, we use the results of falsification and debugging to retrain
the network, eliminating several failure cases and improving the overall
performance of the closed-loop system.

Comment: Full version of a CAV 2020 paper.
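The falsification step described above — sampling environment conditions and checking whether the closed-loop system violates its specification — can be sketched in plain Python. This is a simplified illustration only: `simulate_taxi`, the environment parameters, and the deviation threshold are all hypothetical stand-ins, not VerifAI's or Scenic's actual API.

```python
import random

# Illustrative specification: the aircraft must stay within 1.5 m of the
# runway centerline (threshold chosen for this sketch, not from the paper).
DEVIATION_LIMIT_M = 1.5

def simulate_taxi(time_of_day, cloud_cover):
    """Placeholder for a closed-loop simulation; returns the maximum
    centerline deviation in meters. A real run would drive X-Plane with
    the neural-network controller in the loop."""
    base = 0.3 + 2.0 * cloud_cover            # poor visibility hurts tracking
    glare = 1.0 if 6 <= time_of_day <= 8 else 0.0  # low-sun glare at dawn
    return base + glare

def falsify(n_samples=100, seed=0):
    """Randomly sample environment conditions and collect every sample
    for which the simulated system violates the specification."""
    rng = random.Random(seed)
    counterexamples = []
    for _ in range(n_samples):
        env = {"time_of_day": rng.uniform(0.0, 24.0),
               "cloud_cover": rng.random()}
        deviation = simulate_taxi(**env)
        if deviation > DEVIATION_LIMIT_M:     # specification violated
            counterexamples.append((env, deviation))
    return counterexamples

cex = falsify()
```

The collected counterexamples are exactly what the subsequent debugging and retraining stages consume: each one records the environment conditions under which the system failed.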
On-the-fly laser machining: a case study for in situ balancing of rotative parts
On-the-fly laser machining is defined as a process that generates pockets/patches on target components that are rotated or moved at a constant velocity. Since it is a non-integrated process (i.e., the linear/rotary stage system moving the part is independent of that of the laser), it can be deployed in large industrial installations to perform in situ machining, i.e., without the need for disassembly. This allows a high degree of flexibility in its applications (e.g., balancing) and can result in significant cost savings for the user (e.g., no (dis)assembly cost). This paper introduces the concept of on-the-fly laser machining, encompassing models for generating user-defined ablated features as well as error budgeting to understand the sources of error in this highly dynamic process. Additionally, the paper presents laser pulse placement strategies for on-the-fly laser machining aimed at improving the surface finish of the targeted component by reducing its areal surface roughness. The overall concept was validated by balancing a rotor system through ablation of different pocket shapes using a Yb:YAG pulsed fiber laser. First, two laser pulse placement strategies (square and hexagonal) were introduced and validated on Inconel 718 target material; the hexagonal pulse placement was found to reduce surface roughness by up to 17% compared to the traditional square placement. The concept was then validated by ablating two different features (4 × 60 mm and 12 × 4 mm) on a rotating target part at constant speed (100 rpm and 86 rpm) for the purpose of balancing. The mass removal of the ablated features to enable online balancing was achieved to within 4 mg of the predicted value.
Additionally, the error modeling revealed that most of the uncertainties in the dimensions of the feature/pocket originate from the stability of the rotor speed, which led to the conclusion that, for the same mass of material to be removed, it is advisable to ablate features (pockets) with longer circumferential dimensions, i.e., stretched and shallower pockets rather than compact and deep ones.
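The geometric difference between the two pulse placement strategies can be illustrated with a short sketch. This is a simplified model for intuition only; the paper's actual scan parameters, pitch values, and overlap model are not reproduced here.

```python
import math

def square_grid(pitch, nx, ny):
    """Pulse centres on a square lattice with spacing `pitch` in both axes."""
    return [(i * pitch, j * pitch) for j in range(ny) for i in range(nx)]

def hex_grid(pitch, nx, ny):
    """Pulse centres on a hexagonal lattice: odd rows are shifted by half a
    pitch, and rows are spaced pitch * sqrt(3)/2 apart, so each pulse has
    six equidistant nearest neighbours."""
    dy = pitch * math.sqrt(3) / 2
    return [((i + 0.5 * (j % 2)) * pitch, j * dy)
            for j in range(ny) for i in range(nx)]
```

At equal pitch the hexagonal rows sit closer together (sqrt(3)/2, about 0.866 of the pitch), so pulse overlap is more uniform in every direction — the geometric intuition behind the reported roughness reduction relative to the square lattice.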
Methods for cost-sensitive learning
Many approaches for achieving intelligent behavior of automated (computer) systems involve components that learn from past experience. This dissertation studies computational methods for learning from examples, for classification and for decision
making, when the decisions have different non-zero costs associated with them. Many practical applications of learning algorithms, including transaction monitoring, fraud detection, intrusion detection, and medical diagnosis, have such non-uniform costs, and there is a great need for new methods that can handle them. This dissertation discusses two approaches to cost-sensitive classification: input data weighting and conditional density estimation. The first method assigns a weight
to each training example in order to force the learning algorithm (which is otherwise unchanged) to pay more attention to examples with higher misclassification costs. The dissertation discusses several different weighting methods and concludes that a method that gives higher weight to examples from rarer classes works quite well. Another algorithm that gave good results was a wrapper method that applies Powell's gradient-free algorithm to optimize the input weights. The second approach to cost-sensitive classification is conditional density estimation. In this approach, the output of the learning algorithm is a classifier that estimates, for a new data point, the probability that it belongs to each of the classes. These probability estimates can be combined with a cost matrix to make decisions that minimize the expected cost. The dissertation presents a new algorithm, bagged lazy option trees (B-LOTs), that gives better probability estimates than any previous method based on decision trees. In order to evaluate cost-sensitive classification methods, appropriate statistical methods are needed. The dissertation presents two new statistical procedures: BCOST provides a confidence interval on the expected cost of a classifier, and
BDELTACOST provides a confidence interval on the difference in expected costs of two classifiers. These methods are applied to a large set of experimental studies to evaluate and compare the cost-sensitive methods presented in this dissertation. Finally, the dissertation describes the application of the B-LOTs to a problem of predicting the stability of river channels. In this study, B-LOTs were shown to be superior to other methods in cases where the classes have very different frequencies, a situation that arises frequently in cost-sensitive classification problems.
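The expected-cost decision rule at the heart of the conditional-density-estimation approach can be sketched in a few lines. The cost values below are illustrative, not taken from the dissertation.

```python
def min_expected_cost_class(probs, cost_matrix):
    """Pick the class whose prediction minimises expected cost.
    probs[i]           -- estimated P(class i | x) from the classifier
    cost_matrix[i][j]  -- cost of predicting class j when the true class is i
    """
    n = len(probs)
    expected = [sum(probs[i] * cost_matrix[i][j] for i in range(n))
                for j in range(n)]
    return min(range(n), key=expected.__getitem__)

# Illustrative fraud-detection costs: missing a fraud (true=1, predicted=0)
# costs 100, a false alarm costs 1, correct decisions cost 0.
cost = [[0, 1],     # true class 0 (legitimate)
        [100, 0]]   # true class 1 (fraud)

# Even at only 5% fraud probability, flagging is the cheaper decision:
# E[cost | predict 0] = 0.05 * 100 = 5;  E[cost | predict 1] = 0.95 * 1 = 0.95
print(min_expected_cost_class([0.95, 0.05], cost))  # → 1
```

This shows why well-calibrated probability estimates matter more than raw accuracy in cost-sensitive settings: the minimum-cost class is often not the most probable one.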
Improved determination of hadron matrix elements using the variational method
The extraction of hadron form factors in lattice QCD using the standard two-
and three-point correlator functions has its limitations. One of the most
commonly studied sources of systematic error is excited state contamination,
which occurs when correlators are contaminated with results from higher energy
excitations. We apply the variational method to calculate the axial vector
current gA and compare the results to the more commonly used summation and
two-exponential fit methods. The results demonstrate that the variational
approach offers a more efficient and robust method for the determination of
nucleon matrix elements.

Comment: 7 pages, 6 figures, talk presented at Lattice 2015, PoS (LATTICE2015)
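The variational (correlation-matrix) method referred to above is conventionally formulated as a generalized eigenvalue problem. A standard textbook form, not necessarily in the paper's own notation, is:

```latex
C_{ij}(t) = \langle \chi_i(t)\, \bar{\chi}_j(0) \rangle, \qquad
C(t)\, v^{\alpha}(t, t_0) = \lambda^{\alpha}(t, t_0)\, C(t_0)\, v^{\alpha}(t, t_0),
\qquad
\lambda^{\alpha}(t, t_0) \simeq e^{-E_{\alpha}(t - t_0)},
```

where the $\chi_i$ are a basis of interpolating operators. Projecting two- and three-point correlators with the eigenvectors $v^{\alpha}$ isolates individual energy eigenstates, which is how the method suppresses the excited-state contamination discussed in the abstract.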
Provenance and Paleogeography of the 25-17 Ma Rainbow Gardens Formation: Evidence for Tectonic Activity at Ca. 19 Ma and Internal Drainage rather than Throughgoing Paleorivers on the Southwestern Colorado Plateau
The paleogeographic evolution of the Lake Mead region of southern Nevada and northwest Arizona is crucial to understanding the geologic history of the U.S. Southwest, including the evolution of the Colorado Plateau and formation of the Grand Canyon. The ca. 25–17 Ma Rainbow Gardens Formation in the Lake Mead region, the informally named, roughly coeval Jean Conglomerate, and the ca. 24–19 Ma Buck and Doe Conglomerate southeast of Lake Mead hold the only stratigraphic evidence for the Cenozoic pre-extensional geology and paleogeography of this area. Building on prior work, we present new sedimentologic and stratigraphic data, including sandstone provenance and detrital zircon data, to create a more detailed paleogeographic picture of the Lake Mead, Grand Wash Trough, and Hualapai Plateau region from 25 to 18 Ma. These data confirm that sediment was sourced primarily from Paleozoic strata exposed in surrounding Sevier and Laramide uplifts and active volcanic fields to the north. In addition, a distinctive signal of coarse sediment derived from Proterozoic crystalline basement first appeared in the southwestern corner of the basin ca. 25 Ma at the beginning of Rainbow Gardens Formation deposition and then prograded north and east ca. 19 Ma across the southern half of the basin. Regional thermochronologic data suggest that Cretaceous deposits likely blanketed the Lake Mead region by the end of Sevier thrusting. Post-Laramide northward cliff retreat off the Kingman/Mogollon uplifts left a stepped erosion surface with progressively younger strata preserved northward, on which Rainbow Gardens Formation strata were deposited. Deposition of the Rainbow Gardens Formation in general and the 19 Ma progradational pulse in particular may reflect tectonic uplift events just prior to onset of rapid extension at 17 Ma, as supported by both thermochronology and sedimentary data. 
Data presented here negate the California and Arizona River hypotheses for an “old” Grand Canyon and also negate models wherein the Rainbow Gardens Formation was the depocenter for a 25–18 Ma Little Colorado paleoriver flowing west through East Kaibab paleocanyons. Instead, provenance and paleocurrent data suggest local to regional sources for deposition of the Rainbow Gardens Formation atop a stripped low-relief western Colorado Plateau surface and preclude any significant input from a regional throughgoing paleoriver entering the basin from the east or northeast.
Universal behaviour of ideal and interacting quantum gases in two dimensions
I discuss ideal and interacting quantum gases obeying general fractional
exclusion statistics. For systems with constant density of single-particle
states, described in the mean field approximation, the entropy depends neither
on the microscopic exclusion statistics, nor on the interaction. Such systems
are called "thermodynamically equivalent", and I show that the microscopic
reason for this equivalence is a one-to-one correspondence between the excited
states of these systems. This provides a method, different from the
bosonisation technique, to transform between systems of different exclusion
statistics. In the last section the macroscopic aspects of this method are
discussed.
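For orientation, the occupation numbers of an ideal gas with exclusion-statistics parameter $g$ obey Wu's distribution (standard form; the paper's notation may differ):

```latex
n(\epsilon) = \frac{1}{w(\epsilon) + g}, \qquad
w(\epsilon)^{\,g}\,\bigl[1 + w(\epsilon)\bigr]^{\,1-g} = e^{(\epsilon - \mu)/k_B T},
```

which reduces to the Bose-Einstein distribution for $g = 0$ and to the Fermi-Dirac distribution for $g = 1$.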
In Appendix A I calculate the fluctuation of the ground-state population of a condensed Bose gas in the grand canonical ensemble and mean-field approximation, while in Appendix B I show a situation where, although the system exhibits fractional exclusion properties on microscopic energy intervals, a rigorous calculation of the population of single-particle states reveals a condensation phenomenon. This also exposes a breakdown of the usual, simplified technique for calculating the most probable statistical distributions.

Comment: About 14 journal pages, with 1 figure. Changes: the body of the paper has the same content, with slight rephrasing; the appendices are new. In the original submission I just mentioned the condensation, which is now detailed in Appendix B; the appendices were intended for a separate paper. Reason for changes: rejection from Phys. Rev. Lett., resubmission to J. Phys. A: Math. Gen.