
    Generate To Adapt: Aligning Domains using Generative Adversarial Networks

    Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and for retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) digit classification (MNIST, SVHN and USPS datasets), (2) object recognition using the OFFICE dataset, and (3) domain adaptation from synthetic to real data. Our method achieves state-of-the-art performance in most experimental settings and is, to our knowledge, the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.
    Comment: Accepted as a spotlight talk at CVPR 2018. Code available here: https://github.com/yogeshbalaji/Generate_To_Adap
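    For the flavor of adversarial feature alignment in code, here is a minimal hypothetical sketch in PyTorch, with random tensors standing in for real batches. Note the abstract's method couples the embedding to an image generator; this sketch uses a simpler feature-level discriminator for brevity. The discriminator learns to separate source from target embeddings, while the embedding learns to classify source labels and fool the discriminator, pulling the two distributions together in the shared feature space.

        import torch
        import torch.nn as nn

        feat = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())  # shared embedding
        clf  = nn.Linear(128, 10)                                           # source label classifier
        disc = nn.Linear(128, 1)                                            # domain discriminator

        opt_fc = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-4)
        opt_d  = torch.optim.Adam(disc.parameters(), lr=1e-4)
        bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

        xs, ys = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))    # labeled source batch
        xt = torch.randn(32, 1, 28, 28)                                     # unlabeled target batch

        # (1) the discriminator learns to tell source embeddings from target embeddings
        d_loss = bce(disc(feat(xs)), torch.ones(32, 1)) + bce(disc(feat(xt)), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # (2) the embedding learns to classify source data while fooling the discriminator,
        #     which pulls the source and target distributions together in feature space
        g_loss = ce(clf(feat(xs)), ys) + bce(disc(feat(xt)), torch.ones(32, 1))
        opt_fc.zero_grad(); g_loss.backward(); opt_fc.step()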

    Vere-Jones' Self-Similar Branching Model

    Motivated by its potential application to earthquake statistics, we study the exactly self-similar branching process introduced recently by Vere-Jones, which extends the ETAS class of conditional branching point-processes of triggered seismicity. One of the main ingredients of Vere-Jones' model is that the power law distribution of magnitudes m' of the first-generation daughters of a mother of magnitude m has two branches: m' < m with exponent beta - d and m' > m with exponent beta + d, where beta and d are two positive parameters. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m' < m with exponent beta - h and m' > m with exponent beta + h, with h = d \sqrt{1-s}, where s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. The empirical absence of such two-branched distributions implies, if this model is taken seriously, that the earth is close to criticality (s close to 1), so that beta - h \approx beta + h \approx beta. We also find that, for a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources and is blind to the exponents beta, d of the distribution of triggered events.
    Comment: 13 pages + 3 eps figures
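    As a quick numerical illustration of the renormalization claimed above, the following sketch (plain numpy; beta, d and the grid of s values are arbitrary illustrative choices) evaluates h = d sqrt(1-s) and shows the two branch exponents beta - h and beta + h collapsing onto beta as s approaches 1, i.e., as the system approaches criticality.

        import numpy as np

        beta, d = 1.0, 0.5                        # illustrative values; the model needs beta, d > 0
        for s in np.array([0.0, 0.5, 0.9, 0.99, 0.999]):
            h = d * np.sqrt(1.0 - s)              # renormalized exponent over all generations
            # near criticality (s -> 1) the branches beta - h and beta + h merge into beta
            print(f"s={s:<6}  h={h:.4f}  branches: {beta - h:.4f} / {beta + h:.4f}")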

    Speeding up the constraint-based method in difference logic

    "The final publication is available at http://link.springer.com/chapter/10.1007%2F978-3-319-40970-2_18"Over the years the constraint-based method has been successfully applied to a wide range of problems in program analysis, from invariant generation to termination and non-termination proving. Quite often the semantics of the program under study as well as the properties to be generated belong to difference logic, i.e., the fragment of linear arithmetic where atoms are inequalities of the form u v = k. However, so far constraint-based techniques have not exploited this fact: in general, Farkas’ Lemma is used to produce the constraints over template unknowns, which leads to non-linear SMT problems. Based on classical results of graph theory, in this paper we propose new encodings for generating these constraints when program semantics and templates belong to difference logic. Thanks to this approach, instead of a heavyweight non-linear arithmetic solver, a much cheaper SMT solver for difference logic or linear integer arithmetic can be employed for solving the resulting constraints. We present encouraging experimental results that show the high impact of the proposed techniques on the performance of the VeryMax verification systemPeer ReviewedPostprint (author's final draft

    Validation of topology optimization for component design

    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/76866/1/AIAA-1994-4265-799.pd

    Anomalous Power Law Distribution of Total Lifetimes of Branching Processes Relevant to Earthquakes

    We consider a branching model of triggered seismicity, the ETAS (epidemic-type aftershock sequence) model, which assumes that each earthquake can trigger other earthquakes ("aftershocks"). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. Due to the large fluctuations of the number of aftershocks triggered directly by any earthquake ("productivity" or "fertility"), there is a large variability of the total number of aftershocks from one sequence to another, for the same mainshock magnitude. We study the regime where the distribution of fertilities \mu is characterized by a power law \sim 1/\mu^{1+\gamma} and the bare Omori law for the memory of previous triggering mothers decays slowly as \sim 1/t^{1+\theta}, with 0 < \theta < 1 relevant for earthquakes. Using the tool of generating probability functions and a quasistatic approximation which is shown to be exact asymptotically for large durations, we show that the density distribution of total aftershock lifetimes scales as \sim 1/t^{1+\theta/\gamma} when the average branching ratio is critical (n = 1). The coefficient 1 < \gamma = b/\alpha < 2 quantifies the interplay between the exponent b \approx 1 of the Gutenberg-Richter magnitude distribution \sim 10^{-bm} and the increase \sim 10^{\alpha m} of the number of aftershocks with the mainshock magnitude m (productivity), with \alpha \approx 0.8. More generally, our results apply to any stochastic branching process with a power-law distribution of offspring per mother and a long memory.
    Comment: 16 pages + 4 figures
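    A hedged Monte Carlo sketch of the setup described above (the parameter values, the Pareto form of the fertility law, and the short-time cutoff t0 are illustrative choices, not the paper's): each event draws a fertility mu from a power law with tail exponent 1+gamma and unit mean (so the branching ratio is critical, n = 1), spawns Poisson(mu) direct aftershocks at Omori-distributed delays \sim 1/t^{1+\theta}, and the lifetime of a sequence is the time of its last event. Tail-plotting the sampled lifetimes should exhibit the heavy 1/t^{1+\theta/\gamma}-type decay.

        import numpy as np

        rng = np.random.default_rng(0)
        gamma, theta, t0 = 1.25, 0.3, 1e-3        # illustrative: 1 < gamma < 2, 0 < theta < 1
        mu_min = (gamma - 1.0) / gamma            # Pareto scale giving E[mu] = 1 (critical, n = 1)

        def lifetime(max_events=100_000):
            # Total duration of one cascade started by a single mainshock at t = 0.
            frontier, t_max, total = [0.0], 0.0, 1
            while frontier:
                t = frontier.pop()
                u = 1.0 - rng.uniform()                              # u in (0, 1]
                mu = mu_min * u ** (-1.0 / gamma)                    # fertility tail ~ 1/mu^{1+gamma}
                for _ in range(rng.poisson(mu)):                     # direct aftershocks of this event
                    dt = t0 * (1.0 - rng.uniform()) ** (-1.0 / theta)  # Omori delay ~ 1/dt^{1+theta}
                    frontier.append(t + dt)
                    t_max = max(t_max, t + dt)
                    total += 1
                    if total > max_events:                           # guard against runaway cascades
                        return None                                  # discard truncated samples
            return t_max

        samples = [x for x in (lifetime() for _ in range(2000)) if x is not None]
        print(np.percentile(samples, [50, 90, 99]))                  # heavy-tailed lifetime spread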

    Improving Strategies via SMT Solving

    Full text link
    We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, on contrived worst-case examples, performs exponentially many strategy improvement steps; this is unsurprising, since we show that the associated abstract reachability problem is Pi^p_2-complete.
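    To make the "discover strategy improvements using SMT" step concrete, here is a minimal sketch using the z3 Python bindings. The bound template x <= b, the toy transition, and the greedy loop are invented for illustration; the paper's implicit system of abstract semantic equations is far richer. The solver is asked for a state inside the current candidate invariant whose successor escapes it; a model, if one exists, witnesses a profitable improvement of the bound.

        from z3 import And, Optimize, Real, sat

        x, x1 = Real("x"), Real("x'")
        step = And(x1 == x + 1, x <= 9)          # toy transition: increment x while x <= 9

        b = 0                                    # current candidate invariant: x <= b
        while True:
            opt = Optimize()
            # ask for a state inside the candidate whose successor escapes it
            opt.add(x <= b, step, x1 > b)
            h = opt.maximize(x1)                 # take the largest escape, improving greedily
            if opt.check() != sat:
                break                            # no escape exists: x <= b is inductive
            b = opt.upper(h)                     # new bound covering the witness successor
        print("inductive bound: x <=", b)        # settles at 10 for this toy transition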