Generate To Adapt: Aligning Domains using Generative Adversarial Networks
Domain Adaptation is an actively researched problem in Computer Vision. In
this work, we propose an approach that leverages unsupervised data to bring the
source and target distributions closer in a learned joint feature space. We
accomplish this by inducing a symbiotic relationship between the learned
embedding and a generative adversarial network. This is in contrast to methods
which use the adversarial framework for realistic data generation and
retraining deep models with such data. We demonstrate the strength and
generality of our approach by performing experiments on three different tasks
with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and
USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain
adaptation from synthetic to real data. Our method achieves state-of-the-art
performance in most experimental settings and is, to our knowledge, the only
GAN-based method shown to work well across datasets as different as OFFICE and
DIGITS.
Comment: Accepted as spotlight talk at CVPR 2018. Code available here:
https://github.com/yogeshbalaji/Generate_To_Adap
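The adversarial alignment idea can be illustrated with a deliberately simplified sketch, which is not the paper's architecture: here the "discriminator" is just the mean-difference direction between the source cloud and the shifted target cloud, and the "generator" update moves the target embedding along that direction to fool it, pulling the two feature distributions together. All names, values, and the closed-form discriminator are illustrative assumptions.

```python
import numpy as np

# Toy sketch of adversarial feature alignment (NOT the paper's model).
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 2))   # source-domain features
target = rng.normal(5.0, 1.0, size=(200, 2))   # target-domain features
shift = np.zeros(2)                            # learned embedding offset
lr = 0.1
gaps = []
for _ in range(50):
    adapted = target + shift
    # "discriminator": direction separating the two feature clouds
    w = source.mean(axis=0) - adapted.mean(axis=0)
    # "generator": move the target embedding along w to fool it
    shift += lr * w
    gaps.append(float(np.linalg.norm(w)))
# the domain gap shrinks geometrically (factor 1 - lr per step)
```

In the actual method the discriminator is a trained network and the embedding is updated by backpropagating its loss; the closed-form direction above only mimics that interplay.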
Vere-Jones' Self-Similar Branching Model
Motivated by its potential application to earthquake statistics, we study the
exactly self-similar branching process introduced recently by Vere-Jones, which
extends the ETAS class of conditional branching point-processes of triggered
seismicity. One of the main ingredients of Vere-Jones' model is that the power
law distribution of magnitudes m' of first-generation daughters of a mother of
magnitude m has two branches, m' < m with exponent beta - d and m' > m with
exponent beta + d, where beta and d are two positive parameters. We predict that
the distribution of magnitudes of events triggered by a mother of magnitude m
over all generations also has two branches, m' < m with exponent beta - h and
m' > m with exponent beta + h, with h = d \sqrt{1-s}, where s is the fraction of
triggered events. This corresponds to a renormalization of the exponent d into
h by the hierarchy of successive generations of triggered events. The empirical
absence of such two-branched distributions implies, if this model is seriously
considered, that the earth is close to criticality (s close to 1) so that beta
- h \approx \beta + h \approx \beta. We also find that, for a significant part
of the parameter space, the distribution of magnitudes over a full catalog
summed over an average steady flow of spontaneous sources (immigrants)
reproduces the distribution of the spontaneous sources and is blind to the
exponents beta, d of the distribution of triggered events.
Comment: 13 pages + 3 eps figures
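The renormalization of d into h across generations can be checked numerically. A minimal sketch of the stated formula h = d * sqrt(1 - s), showing that both branch exponents beta - h and beta + h collapse onto beta as the system approaches criticality (s -> 1); the parameter values are illustrative:

```python
import math

def renormalized_exponent(d: float, s: float) -> float:
    """h = d * sqrt(1 - s): all-generations branch offset from the
    first-generation offset d and the fraction s of triggered events."""
    return d * math.sqrt(1.0 - s)

beta, d = 1.0, 0.5
for s in (0.0, 0.9, 0.99):
    h = renormalized_exponent(d, s)
    # the branches beta - h and beta + h merge onto beta as s -> 1
    print(s, beta - h, beta + h)
```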
Speeding up the constraint-based method in difference logic
"The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-40970-2_18"Over the years the constraint-based method has been successfully applied to a wide range of problems in program analysis, from invariant generation to termination and non-termination proving. Quite often the semantics of the program under study as well as the properties to be generated belong to difference logic, i.e., the fragment of linear arithmetic where atoms are inequalities of the form u v = k. However, so far constraint-based techniques have not exploited this fact: in general, Farkas’ Lemma is used to produce the constraints over template unknowns, which leads to non-linear SMT problems. Based on classical results of graph theory, in this paper we propose new encodings for generating these constraints when program semantics and templates belong to difference logic. Thanks to this approach, instead of a heavyweight non-linear arithmetic solver, a much cheaper SMT solver for difference logic or linear integer arithmetic can be employed for solving the resulting constraints. We present encouraging experimental results that show the high impact of the proposed techniques on the performance of the VeryMax verification systemPeer ReviewedPostprint (author's final draft
Validation of topology optimization for component design
http://deepblue.lib.umich.edu/bitstream/2027.42/76866/1/AIAA-1994-4265-799.pd
Anomalous Power Law Distribution of Total Lifetimes of Branching Processes Relevant to Earthquakes
We consider a branching model of triggered seismicity, the ETAS
(epidemic-type aftershock sequence) model which assumes that each earthquake
can trigger other earthquakes (``aftershocks''). An aftershock sequence results
in this model from the cascade of aftershocks of each past earthquake. Due to
the large fluctuations of the number of aftershocks triggered directly by any
earthquake (``productivity'' or ``fertility''), there is a large variability of
the total number of aftershocks from one sequence to another, for the same
mainshock magnitude. We study the regime where the distribution of fertilities
mu is characterized by a power law ~ 1/mu^{1+gamma} and the bare
Omori law for the memory of previous triggering mothers decays slowly as
~ 1/t^{1+theta}, with 0 < theta < 1 relevant for earthquakes. Using the tool
of generating probability functions and a quasistatic approximation which is
shown to be exact asymptotically for large durations, we show that the density
distribution of total aftershock lifetimes scales as ~ 1/t^{1+theta/gamma}
when the average branching ratio is critical (n = 1). The coefficient
gamma = b/alpha quantifies the interplay between the exponent b of the
Gutenberg-Richter magnitude distribution ~ 10^{-b m} and the increase
~ 10^{alpha m} of the number of aftershocks with the mainshock magnitude m
(productivity), with alpha < b. More
generally, our results apply to any stochastic branching process with a
power-law distribution of offspring per mother and a long memory.
Comment: 16 pages + 4 figures
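The predicted lifetime exponent can be assembled directly from the stated ingredients: gamma = b/alpha and a tail ~ 1/t^{1 + theta/gamma} at criticality. A small sketch; the parameter values below are illustrative, not taken from the paper:

```python
def lifetime_tail_exponent(b: float, alpha: float, theta: float) -> float:
    """Tail exponent 1 + theta/gamma of the total-lifetime density at
    criticality (n = 1), with gamma = b/alpha combining the
    Gutenberg-Richter exponent b and the productivity exponent alpha."""
    gamma = b / alpha
    return 1.0 + theta / gamma

# illustrative earthquake-like values: b = 1, alpha = 0.8, theta = 0.2
print(lifetime_tail_exponent(1.0, 0.8, 0.2))  # ~1.16
```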
Improving Strategies via SMT Solving
We consider the problem of computing numerical invariants of programs by
abstract interpretation. Our method eschews two traditional sources of
imprecision: (i) the use of widening operators for enforcing convergence within
a finite number of iterations; (ii) the use of merge operations (often, convex
hulls) at the merge points of the control flow graph. It instead computes the
least inductive invariant expressible in the domain at a restricted set of
program points, and analyzes the rest of the code en bloc. We emphasize that we
compute this inductive invariant precisely. For that we extend the strategy
improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method
directly, we would have to solve an exponentially sized system of abstract
semantic equations, resulting in memory exhaustion. Instead, we keep the system
implicit and discover strategy improvements using SAT modulo real linear
arithmetic (SMT). For evaluating strategies we use linear programming. Our
algorithm has low polynomial space complexity and performs, in the worst case
on contrived examples, exponentially many strategy improvement steps; this
is unsurprising, since we show that the associated abstract reachability
problem is Pi-p-2-complete.
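Strategy (policy) improvement itself can be demonstrated on a single max-of-affine equation x = max_i(a_i*x + b_i) with contractive maps (0 <= a_i < 1): commit to one argument of the max (a strategy), solve the resulting affine equation exactly (the role linear programming plays in the paper), and switch arguments whenever another one evaluates higher at the current solution. A toy sketch, not the authors' algorithm:

```python
def strategy_improvement(maps):
    """Fixpoint of f(x) = max_i(a*x + b) over contractive maps
    (0 <= a < 1, so f is a contraction and the fixpoint is unique),
    by strategy iteration: solve the chosen map's fixpoint exactly,
    then improve the choice until no map evaluates higher."""
    i = 0  # current strategy: index of the chosen affine map
    while True:
        a, b = maps[i]
        x = b / (1.0 - a)          # exact fixpoint of x = a*x + b
        j = max(range(len(maps)), key=lambda k: maps[k][0] * x + maps[k][1])
        if j == i or maps[j][0] * x + maps[j][1] <= x:
            return x               # no strategy improves: x solves f(x) = x
        i = j                      # switch to the improving strategy

# f(x) = max(0.5x + 1, 0.9x + 2): strategy 0 gives x = 2, but
# 0.9*2 + 2 = 3.8 > 2, so improve to strategy 1, which gives x = 20 (stable).
print(strategy_improvement([(0.5, 1.0), (0.9, 2.0)]))  # approx 20.0
```

Note how each improvement step solves only one affine equation, mirroring how the paper evaluates each strategy cheaply (there, via LP over the implicit equation system) instead of exploring the exponentially many strategies up front.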