Double sampling and semiparametric methods for informatively missing data
Missing data arise almost ubiquitously in applied settings, and can pose a
substantial threat to the validity of statistical analyses. In the context of
comparative effectiveness research, such as in large observational databases
(e.g., those derived from electronic health records), outcomes may be missing
not at random with respect to measured covariates. In this setting, we propose
a double sampling method, in which outcomes are obtained via intensive
follow-up on a subsample of subjects for whom data were initially missing. We
describe assumptions under which the joint distribution of confounders,
treatment, and outcome is identified under this design, and derive efficient
estimators of the average treatment effect under a nonparametric model, as well
as a model assuming outcomes were initially missing at random. We compare these
in simulations to an approach that adaptively selects an estimator based on
evidence of violation of the missing at random assumption. We also show that
the proposed double sampling design can be extended to handle arbitrary
coarsening mechanisms, and derive consistent, asymptotically normal, and
nonparametric efficient estimators of any smooth full data functional of
interest, and prove that these estimators often are multiply robust.Comment: 35 pages, 2 figure
Computer Aided Verification
This open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems and runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.
Programmiersprachen und Rechenkonzepte
The GI special interest group 2.1.4 "Programmiersprachen und Rechenkonzepte" held its annual workshop at the Physikzentrum Bad Honnef from May 3 to 5, 2004. This report collects the contributions. As every year, the meeting served to get to know one another, to deepen existing contacts, to present new work and results, and above all to foster intensive discussion. A broad spectrum of contributions, ranging from theoretical foundations through program development, language design, software engineering, and object orientation to the surprisingly long history of computing machines since antiquity, made for an interesting and varied program. Topics included, among others, imperative, functional, and functional-logic languages, software/hardware co-design, semantics, web programming and software engineering, generative programming, aspects, and formal testing support. Interesting contributions on these and further topics gave rise to exchanges of experience and technical discussions, also with the participants of the "Reengineering" workshop taking place at the Physikzentrum Bad Honnef at the same time. I would like to thank all participants for contributing to the success of the workshop with their talks and constructive discussion contributions. Thanks for the variety and quality of the contributions are due to the authors. A word of thanks is likewise due to the staff and management of the Physikzentrum Bad Honnef for the customary pleasant and stimulating atmosphere and comprehensive support.
Computer graphics, volume 1 Final report, Jun. 29 - Dec. 28, 1967
Computer graphic techniques for numerical control, electrical network analysis, flight mechanics, structural analysis, and engineering drawing retrieval
Converting to Optimization in Machine Learning: Perturb-and-MAP, Differential Privacy, and Program Synthesis
On a mathematical level, most computational problems encountered in machine learning are instances of one of four abstract, fundamental problems: sampling, integration, optimization, and search.
Thanks to the rich history of the respective mathematical fields, disparate methods with different properties have been developed for these four problem classes.
As a result, it can be beneficial to convert a problem from one abstract class into a problem of a different class, because the latter might come with insights, techniques, and algorithms well suited to the particular problem at hand.
In particular, this thesis contributes four new methods and generalizations of existing methods for converting specific non-optimization machine learning tasks into optimization problems with more appealing properties.
The first example is partition function estimation (an integration problem), where an existing algorithm for converting to the MAP optimization problem -- the Gumbel trick -- is extended to a broader family of algorithms, other instances of which have better statistical properties.
Second, this family of algorithms is further generalized to another integration problem, the problem of estimating Rényi entropies.
The third example shows how an intractable sampling problem arising when wishing to publicly release a database containing sensitive data in a safe ("differentially private") manner can be converted into an optimization problem using the theory of Reproducing Kernel Hilbert Spaces.
Finally, the fourth case study casts the challenging discrete search problem of program synthesis from input-output examples as a supervised learning task that can be efficiently tackled using gradient-based optimization.
In all four instances, the conversions result in novel algorithms with desirable properties.
In the first instance, new generalizations of the Gumbel trick can be used to construct statistical estimators of the partition function that achieve the same estimation error while using up to 40% fewer samples.
The second instance shows that unbiased estimators of the Rényi entropy can be constructed in the Perturb-and-MAP framework.
The main contribution of the third instance is theoretical: the conversion shows that it is possible to construct an algorithm for releasing synthetic databases that approximate databases containing sensitive data in a mathematically precise sense, and to prove results about their approximation errors.
Finally, the fourth conversion yields an algorithm for synthesising program source code from input-output examples that solves test problems one to three orders of magnitude faster than a wide range of baselines.