Environment Behavior Models for Scenario Generation and Testing Automation
In Proceedings of the First International Workshop on Advances in Model-Based Software Testing (A-MOST'05), at the 27th International Conference on Software Engineering (ICSE'05), May 15-16, 2005, St. Louis, USA.
This paper suggests an approach to automatic scenario generation
from environment models for testing of real-time reactive
systems. The behavior of the system is defined as a set of events
(event trace) with two basic relations: precedence and inclusion.
The attributed event grammar (AEG) specifies possible event
traces and provides a uniform approach for automatically
generating, executing, and analyzing test cases. The environment
model includes a description of hazardous states that the system
may reach and makes it possible to gather statistics for
system safety assessment. The approach is supported by a
generator that creates test cases from the AEG models. We
demonstrate the approach with case studies of prototypes for the
safety-critical computer-assisted resuscitation algorithm (CARA)
software for a casualty intravenous fluid infusion pump and the
Paderborn Shuttle System.
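The core idea of grammar-driven trace generation can be illustrated with a minimal sketch. All rule and event names below are hypothetical (loosely inspired by the infusion-pump case study), not taken from the paper's actual AEG models; each production body encodes inclusion (an event expands into sub-events) and precedence (sub-events occur in rule-body order).

```python
import random

# Hypothetical attributed-event-grammar-style rules: a non-terminal event
# expands into one of several ordered sequences of sub-events (inclusion),
# and order within a body encodes precedence. Names are illustrative only.
GRAMMAR = {
    "infusion_session": [["start_pump", "monitor_loop", "stop_pump"]],
    "monitor_loop": [["read_pressure", "adjust_rate"],
                     ["read_pressure", "alarm", "adjust_rate"]],
}

def generate_trace(event, rng=random):
    """Recursively expand an event into a flat trace of terminal events."""
    if event not in GRAMMAR:
        return [event]                    # terminal event: emit as-is
    body = rng.choice(GRAMMAR[event])     # pick one production alternative
    trace = []
    for sub in body:                      # rule-body order = precedence
        trace.extend(generate_trace(sub, rng))
    return trace

trace = generate_trace("infusion_session")
```

Repeated runs sample different alternatives (with or without the alarm), which is how a generator of this style can produce many distinct test scenarios from one compact environment model.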
Eye-movements in implicit artificial grammar learning
Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference- and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole-trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition and Behavior; Vetenskapsradet; Swedish Dyslexia Foundation
Estimating Performance of Pipelined Spoken Language Translation Systems
Most spoken language translation systems developed to date rely on a
pipelined architecture, in which the main stages are speech recognition,
linguistic analysis, transfer, generation and speech synthesis. When making
projections of error rates for systems of this kind, it is natural to assume
that the error rates for the individual components are independent, making the
system accuracy the product of the component accuracies.
The paper reports experiments carried out using the SRI-SICS-Telia Research
Spoken Language Translator and a 1000-utterance sample of unseen data. The
results suggest that the naive performance model leads to serious overestimates
of system error rates, since there are in fact strong dependencies between the
components. Predicting the system error rate on the independence assumption by
simple multiplication resulted in a 16\% proportional overestimate for all
utterances, and a 19\% overestimate when only utterances of length 1-10 words
were considered.
Comment: 10 pages, LaTeX source. To appear in Proc. ICSLP '9
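The naive performance model the abstract refers to is simple arithmetic: under independence, system accuracy is the product of component accuracies. The figures below are made up for illustration, not measurements from the SRI-SICS-Telia system.

```python
# Illustrative per-component accuracies for a five-stage pipeline
# (invented numbers, not from the paper).
accuracies = {
    "speech_recognition": 0.90,
    "linguistic_analysis": 0.95,
    "transfer": 0.97,
    "generation": 0.98,
    "speech_synthesis": 0.99,
}

# Naive independence assumption: multiply component accuracies.
predicted_accuracy = 1.0
for acc in accuracies.values():
    predicted_accuracy *= acc

predicted_error = 1.0 - predicted_accuracy

# A 16% proportional overestimate (as reported for all utterances) means
# predicted_error = 1.16 * actual_error; strong dependencies between
# components make the measured error lower than the product model predicts.
actual_error = predicted_error / 1.16
```

With these invented numbers, the product model predicts roughly a 20% system error rate, while the dependency-adjusted figure is closer to 17%, mirroring the direction of the effect the paper reports.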
Metamodel Instance Generation: A systematic literature review
Modelling and thus metamodelling have become increasingly important in
Software Engineering through the use of Model Driven Engineering. In this paper
we present a systematic literature review of instance generation techniques for
metamodels, i.e. the process of automatically generating models from a given
metamodel. We start by presenting a set of research questions that our review
is intended to answer. We then identify the main topics that are related to
metamodel instance generation techniques, and use these to initiate our
literature search. This search resulted in the identification of 34 key papers
in the area, and each of these is reviewed here and discussed in detail. The
outcome is that we are able to identify a knowledge gap in this field, and we
offer suggestions as to some potential directions for future research.
Comment: 25 pages
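The task the review surveys, generating a model that conforms to a given metamodel, can be sketched in a few lines. The toy metamodel and type system below are entirely hypothetical, chosen only to make the conformance idea concrete; real approaches handle associations, multiplicities, and OCL constraints.

```python
import random
import string

# Toy metamodel (illustrative, not from any reviewed paper): each class
# declares typed attributes; an instance is a model conforming to them.
METAMODEL = {
    "Book": {"title": "string", "pages": "int"},
    "Author": {"name": "string"},
}

def random_value(type_name, rng=random):
    """Produce a random value of the given primitive type."""
    if type_name == "int":
        return rng.randint(1, 1000)
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(6))

def generate_instance(class_name, rng=random):
    """Generate one instance whose attributes conform to the metamodel."""
    return {name: random_value(t, rng)
            for name, t in METAMODEL[class_name].items()}

book = generate_instance("Book")
```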
Written language skills in children with specific language impairment
Background. Young children are often required to carry out writing tasks in an educational context. However, little is known about the patterns of writing skills that children with Specific Language Impairment (CwSLI) have relative to their typically developing peers.
Best-First Surface Realization
Current work in surface realization concentrates on the use of general,
abstract algorithms that interpret large, reversible grammars. Little
attention has so far been paid to the many small and simple applications that
require coverage of a small sublanguage at different degrees of sophistication.
The system TG/2 described in this paper can be smoothly integrated with deep
generation processes; it combines canned text, templates, and context-free
rules in a single formalism, allows for both textual and tabular output,
and can be parameterized according to linguistic preferences. These features
are based on suitably restricted production system techniques and on a generic
backtracking regime.
Comment: 10 pages, LaTeX source, one EPS figure
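The abstract's claim that canned text, templates, and context-free rules fit one formalism can be sketched as rules that are all just callables over a context. This is a hypothetical illustration of the idea, not TG/2's actual rule language; the rule names and the "first applicable rule, no backtracking" policy are simplifying assumptions.

```python
# Hypothetical TG/2-style rule set: canned strings, templates with slots,
# and a context-free rule that concatenates sub-realizations all share one
# representation (a function from context to string). Names are invented.
RULES = {
    "greeting": lambda ctx: "Hello!",                          # canned text
    "report": lambda ctx: f"Temperature is {ctx['temp']} C.",  # template
    "message": lambda ctx: (realize("greeting", ctx) + " " +   # context-free
                            realize("report", ctx)),           # expansion
}

def realize(category, ctx):
    """Realize a category by applying its rule (no backtracking in sketch)."""
    return RULES[category](ctx)

out = realize("message", {"temp": 21})
```

A full realizer would attach applicability conditions and preference-based rule ordering to support the parameterization and backtracking the paper describes; this sketch only shows the uniform-formalism idea.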