Automated Refactoring of Nested-IF Formulae in Spreadsheets
Spreadsheets are the most popular end-user programming software, where
formulae act like programs and can also exhibit smells. One well-recognized
smell of spreadsheet formulae is the nested-IF expression, which has low
readability, imposes a high cognitive cost on users, and is error-prone during
reuse or maintenance. However, end users usually lack the programming
knowledge and skills needed to tackle, or even recognize, the problem.
Previous research has made only preliminary attempts in this direction, and no
effective, automated approach is currently available.
This paper proposes an AST-based automated approach to systematically
refactoring nested-IF formulae. The general idea is two-fold. First, we detect
and remove logic redundancy on the AST. Second, we identify higher-level
semantics that have been fragmented and scattered, and reassemble the syntax
using concise built-in functions. A comprehensive evaluation has been conducted
on a real-world spreadsheet corpus collected in a leading IT company for
research purposes. The results, covering over 68,000 spreadsheets with 27
million nested-IF formulae, show that our approach relieves the smell of over
99% of nested-IF formulae, and over 50% of the refactorings reduce the nesting
level by more than half. In addition, a survey involving 49 participants
indicates that in most cases the participants prefer the refactored formulae
and agree that such an automated refactoring approach is necessary and helpful.
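As a rough illustration of the two-fold idea (remove logic redundancy on the
AST, then reassemble the fragments into a concise built-in such as IFS), here
is a minimal Python sketch; the mini-AST, the rewrite rules, and the IFS
target are illustrative assumptions, not the authors' implementation:

from dataclasses import dataclass
from typing import Union

Expr = Union["If", "Call", str]          # leaves are plain formula text

@dataclass(frozen=True)
class If:
    cond: str
    then: Expr
    other: Expr

@dataclass(frozen=True)
class Call:                              # e.g. IFS(c1, v1, c2, v2, ...)
    name: str
    args: tuple

def remove_redundancy(e: Expr) -> Expr:
    """Drop branches that cannot change the result, e.g. IF(c, x, x) -> x."""
    if isinstance(e, If):
        then, other = remove_redundancy(e.then), remove_redundancy(e.other)
        if then == other:                # both branches identical: IF is useless
            return then
        return If(e.cond, then, other)
    return e

def flatten_to_ifs(e: Expr) -> Expr:
    """Reassemble a right-nested IF chain as one IFS call."""
    pairs = []
    while isinstance(e, If):
        pairs += [e.cond, flatten_to_ifs(e.then)]
        e = e.other
    if len(pairs) <= 2:                  # a single IF (or a leaf): keep as is
        return If(pairs[0], pairs[1], e) if pairs else e
    pairs += ["TRUE", e]                 # explicit default branch
    return Call("IFS", tuple(pairs))

# Example: =IF(A1<10,"low",IF(A1<20,"mid",IF(A1>=20,"high","high")))
formula = If("A1<10", "low", If("A1<20", "mid", If("A1>=20", "high", "high")))
print(flatten_to_ifs(remove_redundancy(formula)))

On this example, the redundant innermost IF collapses first, and the remaining
right-nested chain becomes a single IFS call with an explicit default branch.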
Long Text Generation via Adversarial Training with Leaked Information
Automatically generating coherent and semantically meaningful text has many
applications in machine translation, dialogue systems, image captioning, etc.
Recently, Generative Adversarial Nets (GANs) combined with policy gradient,
which use a discriminative model to guide the training of the generative model
as a reinforcement learning policy, have shown promising results in text
generation. However, the scalar guiding signal is only available after the
entire text has been generated and lacks intermediate information about text
structure during the generative process, which limits its success when the
generated text samples are long (more than 20 words). In this paper, we
propose a new framework, called LeakGAN, to address this problem for long text
generation. We allow the discriminative net to leak its own high-level
extracted features to the generative net to further guide generation. The
generator incorporates these informative signals into all generation steps
through an additional Manager module, which takes the extracted features of
the currently generated words and outputs a latent vector to guide the Worker
module in next-word generation. Our extensive experiments on synthetic data
and various real-world tasks with a Turing test demonstrate that LeakGAN is
highly effective in long text generation and also improves performance in
short text generation scenarios. More importantly, without any supervision,
LeakGAN implicitly learns sentence structures solely through the interaction
between the Manager and the Worker.
Comment: 14 pages, AAAI 201
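As a rough illustration of the Manager/Worker interaction described above,
here is a minimal PyTorch sketch; the layer sizes, the goal projection, and
the way the goal modulates the Worker's logits are assumptions for
illustration, and the sketch omits the discriminator and the policy-gradient
training that the paper relies on:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LeakGANStyleGenerator(nn.Module):
    """Manager turns leaked discriminator features into a goal vector;
    Worker turns the goal plus the last generated word into next-word logits."""
    def __init__(self, vocab_size=5000, emb_dim=32, feat_dim=64,
                 hid_dim=32, goal_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.manager = nn.LSTMCell(feat_dim, hid_dim)   # consumes leaked features
        self.to_goal = nn.Linear(hid_dim, goal_dim)
        self.worker = nn.LSTMCell(emb_dim, hid_dim)     # word-level recurrence
        self.to_logit_mat = nn.Linear(hid_dim, vocab_size * goal_dim)
        self.vocab_size, self.goal_dim = vocab_size, goal_dim

    def step(self, word_ids, leaked_feat, m_state, w_state):
        # Manager: read the features leaked by the discriminator on the
        # partial sentence and emit a normalised latent goal for this step.
        m_h, m_c = self.manager(leaked_feat, m_state)
        goal = F.normalize(self.to_goal(m_h), dim=-1)
        # Worker: read the last generated word and score the vocabulary,
        # modulated by the Manager's goal.
        w_h, w_c = self.worker(self.embed(word_ids), w_state)
        mat = self.to_logit_mat(w_h).view(-1, self.vocab_size, self.goal_dim)
        logits = torch.bmm(mat, goal.unsqueeze(-1)).squeeze(-1)
        return logits, (m_h, m_c), (w_h, w_c)

# One generation step (the leaked features would come from the discriminator's
# feature extractor applied to the current partial sentence):
gen = LeakGANStyleGenerator()
word = torch.tensor([7])                    # batch of 1: last generated token id
feat = torch.zeros(1, 64)                   # placeholder for leaked features
zeros = lambda: (torch.zeros(1, 32), torch.zeros(1, 32))  # (h, c) of size hid_dim
logits, m_state, w_state = gen.step(word, feat, zeros(), zeros())
next_word = torch.distributions.Categorical(logits=logits).sample()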
Properties of a coupled two species atom-heteronuclear molecule condensate
We study the coherent association of a two-species atomic condensate into a
condensate of heteronuclear diatomic molecules, using both a semiclassical
treatment and a quantum mechanical approach. The differences and connections
between the two approaches are examined. We show that, in this coupled
nonlinear atom-molecule system, the population difference between the two
atomic species plays a significant role in the ground-state stability
properties as well as in the coherent population oscillation dynamics.
Comment: 7 pages, 4 figures
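To make the role of the atomic population difference concrete, a generic
mean-field (semiclassical) model for two atomic modes a and b coupled to a
heteronuclear molecular mode c can be sketched as follows; this is a standard
single-mode form used as an assumption here, not necessarily the authors'
exact Hamiltonian or notation:

  i\,\dot a = \epsilon_a a + g\, b^{*} c, \qquad
  i\,\dot b = \epsilon_b b + g\, a^{*} c, \qquad
  i\,\dot c = (\epsilon_a + \epsilon_b + \delta)\, c + g\, a b .

Because each molecule is formed from one atom of each species, the numbers
N_a = |a|^2 + |c|^2 and N_b = |b|^2 + |c|^2 are conserved, so the population
difference D = N_a - N_b = |a|^2 - |b|^2 is a constant of motion and caps the
molecular conversion at |c|^2 \le \min(N_a, N_b); this is one simple way to
see why D can matter for both the ground-state stability and the population
oscillation dynamics discussed above.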