Atomization of viscoelastic fluids Final report, 1 Jan. 1968 - 30 Jun. 1969
Atomization of viscoelastic fluid
Ariadne: Analysis for Machine Learning Program
Machine learning has transformed domains like vision and translation, and is
now increasingly used in science, where the correctness of such code is vital.
Python is popular for machine learning, in part because of its wealth of
machine learning libraries, and is felt to make development faster; however,
this dynamic language has less support for error detection at code creation
time than tools like Eclipse. This is especially problematic for machine
learning: given its statistical nature, code with subtle errors may run and
produce results that look plausible but are meaningless. This can vitiate
scientific results. We report on Ariadne: applying a static framework, WALA, to
machine learning code that uses TensorFlow. We have created static analysis for
Python, a type system for tracking tensors---TensorFlow's core data
structures---and a data flow analysis to track their usage. We report on how it
was built and present some early results.
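The kind of defect the abstract describes---code that runs but silently computes the wrong thing---is typically a tensor shape mismatch. As a minimal sketch (not Ariadne's actual API; `matmul_shape` and its error message are invented for illustration), a static shape tracker can flag such a bug before the program ever runs:

```python
# Hypothetical sketch of tensor-shape tracking in the spirit of Ariadne.
# All names here are illustrative, not part of Ariadne or TensorFlow.

def matmul_shape(a, b):
    """Infer the result shape of a matrix multiply, or report an error."""
    if a[-1] != b[0]:
        raise TypeError(f"shape mismatch: {a} x {b}")
    return a[:-1] + b[1:]

# Well-typed: a (32, 784) batch times a (784, 10) weight matrix.
print(matmul_shape((32, 784), (784, 10)))  # -> (32, 10)

# A plausible bug: the weights were never transposed.
try:
    matmul_shape((32, 784), (10, 784))
except TypeError as e:
    print(e)
```

A dynamic language would only surface the second case at run time, if at all; a static tensor type system reports it at code-creation time.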
A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization (Artifact)
This artifact contains the accompanying code for the ECOOP 2015 paper: "A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization". It contains source files for a full mechanization of the three languages presented in the paper: CAMP (Calculus for Aggregating Matching Patterns), NRA (Nested Relational Algebra) and NNRC (Named Nested Relational Calculus). Translations between all three languages and their attendant proofs of correctness are included. Additionally, a mechanization of a type system for the main languages is provided, along with bidirectional proofs of type preservation and proofs of the time complexity of the various compilers.
The relationship between employee benefit satisfaction and organizational commitment
The purpose of this study was to examine the influence of individual characteristics, benefit satisfaction, and internal services received on employee job satisfaction and organizational commitment. Employees from a Las Vegas casino hotel were surveyed. A total of 201 usable questionnaires were returned, for a response rate of 51 percent. The findings showed that benefit satisfaction and organizational commitment are positively related. Satisfaction with internal services was found to be significantly related to organizational commitment, and communication received was significantly related to benefit satisfaction. Only a few of the sociodemographic variables were found to be significantly related to benefit satisfaction, job satisfaction, and organizational commitment. Based on the research findings, practical implications for industry are discussed and suggestions for future research are offered.
Pseudorandom Selective Excitation in NMR
In this work, average Hamiltonian theory is used to study selective
excitation in a spin-1/2 system evolving under a series of small flip-angle
pulses that are applied either periodically [which
corresponds to the DANTE pulse sequence] or aperiodically. First, an average
Hamiltonian description of the DANTE pulse sequence is developed; such a
description is determined to be valid either at or very far from the DANTE
resonance frequencies, which are simply integer multiples of the inverse of the
interpulse delay. For aperiodic excitation schemes where the interpulse delays
are chosen pseudorandomly, a single resonance can be selectively excited if the
pulses' phases are modulated in concert with the time delays. Such a
selective pulse is termed a pseudorandom-DANTE or p-DANTE sequence, and the
conditions under which an average Hamiltonian description of p-DANTE is valid
are found to be similar to those for the DANTE sequence. It is also shown that
averaging over different p-DANTE sequences that are selective for the same
resonance can help reduce excitations at frequencies away from the resonance
frequency, thereby improving the apparent selectivity of the p-DANTE sequences.
Finally, experimental demonstrations of p-DANTE sequences and comparisons with
theory are presented.

Comment: 23 pages, 8 figures
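The abstract states that the DANTE resonance frequencies are simply integer multiples of the inverse of the interpulse delay. That relationship can be sketched directly (the function name and the delay value are illustrative, chosen so the arithmetic is exact):

```python
# DANTE resonance condition, as stated in the abstract: resonances occur
# at integer multiples of 1/tau, where tau is the interpulse delay.
# The delay below is an illustrative value, not from the paper.

def dante_resonances(tau, n_max):
    """Return the first n_max resonance frequencies (Hz) for delay tau (s)."""
    return [n / tau for n in range(1, n_max + 1)]

print(dante_resonances(0.5, 3))  # -> [2.0, 4.0, 6.0]
```

Pseudorandomizing the interpulse delays destroys this comb of resonances, which is what lets p-DANTE excite a single resonance selectively.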
A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization
This paper introduces a core calculus for pattern-matching in production rule languages: the Calculus for Aggregating Matching Patterns (CAMP). CAMP is expressive enough to capture modern rule languages such as JRules, including extensions for aggregation. We show how CAMP can be compiled into a nested-relational algebra (NRA), with only minimal extension. This paves the way for applying relational techniques to running rules over large stores. Furthermore, we show that NRA can also be compiled back to CAMP, using named nested-relational calculus (NNRC) as an intermediate step. We mechanize proofs of correctness, program size preservation, and type preservation of the translations using modern theorem-proving techniques. A corollary of the type preservation is that polymorphic type inference for both CAMP and NRA is NP-complete. CAMP and its correspondence to NRA provide the foundations for efficient implementations of rule languages using database technologies.
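To give a feel for the compilation target, here is a toy rule in the spirit of CAMP---match customers whose aggregated order value exceeds a threshold---written in NNRC-like comprehension style. The rule, field names, and data are invented for illustration; this is not the paper's syntax:

```python
# Toy "rule with aggregation" over nested data, and its comprehension-style
# (NNRC-like) form: nested iteration plus an aggregate in the condition.
# All names and data here are invented for the example.

customers = [
    {"name": "alice", "orders": [{"value": 120}, {"value": 80}]},
    {"name": "bob",   "orders": [{"value": 30}]},
]

big_spenders = [
    c["name"]
    for c in customers
    if sum(o["value"] for o in c["orders"]) > 100
]
print(big_spenders)  # -> ['alice']
```

Expressing the rule in this calculus-like form is what opens the door to relational optimization: the comprehension corresponds to a select-over-nested-map plan that a database engine can evaluate over a large store.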
Extending Stan for Deep Probabilistic Programming
Stan is a popular declarative probabilistic programming language with a
high-level syntax for expressing graphical models and beyond. Stan differs by
nature from generative probabilistic programming languages like Church,
Anglican, or Pyro. This paper presents a comprehensive compilation scheme to
compile any Stan model to a generative language and proves its correctness.
This sheds a clearer light on the relative expressiveness of different kinds of
probabilistic languages and opens the door to combining their mutual strengths.
Specifically, we use our compilation scheme to build a compiler from Stan to
Pyro and extend Stan with support for explicit variational inference guides and
deep probabilistic models. That way, users familiar with Stan get access to new
features without having to learn a fundamentally new language. Overall, our
paper clarifies the relationship between declarative and generative
probabilistic programming languages and is a step towards making deep
probabilistic programming easier.
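The declarative/generative contrast the abstract draws can be sketched in plain Python for a one-line model, mu ~ Normal(0, 1); y ~ Normal(mu, 1). This is a hedged illustration of the two styles only---the function names are invented, and neither Stan's nor Pyro's actual API is used:

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    """Log density of Normal(mu, sigma) at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Declarative (Stan-like): the model denotes a log joint density
# over parameters and data.
def log_joint(mu, y):
    return normal_logpdf(mu, 0.0, 1.0) + normal_logpdf(y, mu, 1.0)

# Generative (Church/Anglican/Pyro-like): the model denotes a sampler
# that draws each variable in turn.
def generate():
    mu = random.gauss(0.0, 1.0)
    y = random.gauss(mu, 1.0)
    return mu, y

print(log_joint(0.0, 0.0))  # log density of the joint at mu = 0, y = 0
```

Compiling from the first style to the second, as the paper does for Stan and Pyro, is what lets a declarative model inherit generative-language features such as explicit variational guides.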