A model of the human observer and decision maker
The decision process is described in terms of classical sequential decision theory: the hypothesis that an abnormal condition has occurred is tested by means of a generalized likelihood ratio test. A sufficient statistic for this test is provided by the innovation sequence, which is the output of the perception and information-processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model on single-variable failure detection tasks yielded a very good fit to the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
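As a minimal sketch of the kind of test described here, the Python snippet below (all names and parameter values are illustrative, not taken from the paper) runs a sequential generalized likelihood ratio test for a mean shift in a scalar, white Gaussian innovation sequence. The decision threshold embodies the speed/accuracy trade-off: raising it reduces false alarms at the cost of slower detection.

```python
import numpy as np

def glr_failure_detect(innovations, sigma, threshold, window=50):
    """Sequential GLR test for a mean shift in a white Gaussian
    innovation sequence. Returns the first index at which the GLR
    statistic crosses the decision threshold, or None if no alarm.
    """
    e = np.asarray(innovations, dtype=float)
    for n in range(1, len(e) + 1):
        # Maximise the likelihood ratio over candidate failure onsets k,
        # restricted to a sliding window for bounded computation.
        k_lo = max(0, n - window)
        best = 0.0
        for k in range(k_lo, n):
            s = e[k:n].sum()
            g = s * s / (2.0 * sigma**2 * (n - k))
            best = max(best, g)
        if best >= threshold:
            return n - 1  # sample index at which the alarm is raised
    return None

# Example: zero-mean innovations with a mean shift injected at t = 200.
rng = np.random.default_rng(0)
e = rng.normal(0.0, 1.0, 400)
e[200:] += 0.8
print(glr_failure_detect(e, sigma=1.0, threshold=10.0))
```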
Monitoring Networked Applications With Incremental Quantile Estimation
Networked applications have software components that reside on different
computers. Email, for example, has database, processing, and user interface
components that can be distributed across a network and shared by users in
different locations or work groups. End-to-end performance and reliability
metrics describe the software quality experienced by these groups of users,
taking into account all the software components in the pipeline. Each user
produces only some of the data needed to understand the quality of the
application for the group, so group performance metrics are obtained by
combining summary statistics that each end computer periodically (and
automatically) sends to a central server. The group quality metrics usually
focus on medians and tail quantiles rather than on averages. Distributed
quantile estimation is challenging, though, especially when passing large
amounts of data around the network solely to compute quality metrics is
undesirable. This paper describes an Incremental Quantile (IQ) estimation
method that is designed for performance monitoring at arbitrary levels of
network aggregation and time resolution when only a limited amount of data can
be transferred. Applications to both real and simulated data are provided.
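The paper's IQ method itself is more elaborate, but the sketch below illustrates the general idea of incremental quantile tracking with bounded state: a stochastic-approximation estimator that keeps a single float per quantile, so an end computer can periodically send just its current estimates to the central server. The class name, step size, and simulated workload are illustrative, not the paper's algorithm.

```python
class IncrementalQuantile:
    """Stochastic-approximation tracker for a single quantile.

    Each observation nudges the estimate up or down so that, in steady
    state, a fraction `q` of observations fall below it. Only the
    current estimate (one float) ever needs to leave the machine.
    """
    def __init__(self, q, step=0.05):
        self.q = q          # target quantile, e.g. 0.95 for a tail metric
        self.step = step    # learning rate; smaller = smoother but slower
        self.estimate = None

    def update(self, x):
        if self.estimate is None:
            self.estimate = float(x)  # initialise from the first sample
        elif x > self.estimate:
            self.estimate += self.step * self.q
        else:
            self.estimate -= self.step * (1.0 - self.q)
        return self.estimate

# Example: track the median and 95th percentile of response times.
import random
p50, p95 = IncrementalQuantile(0.5), IncrementalQuantile(0.95)
for _ in range(10_000):
    rt = random.expovariate(1.0)   # simulated response time
    p50.update(rt); p95.update(rt)
print(round(p50.estimate, 2), round(p95.estimate, 2))
```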
The unexplained nature of reading.
The effects of properties of words on their reading aloud response times (RTs) are one major source of evidence about the reading process. The precision with which such RTs could potentially be predicted by word properties is critical for evaluating our understanding of reading but is often underestimated because of contamination from individual differences. We estimated this precision without such contamination, individually, for four people who each read 2,820 words 50 times. These estimates were compared with the precision achieved by a 31-variable regression model that outperforms current cognitive models on variance-explained criteria. Most (around two-thirds) of the meaningful (non-first-phoneme, non-noise) word-level variance remained unexplained by this model. Considerable empirical and theoretical-computational effort has been expended in this area of psychology, but the high level of systematic variance that remains unexplained raises doubts about contemporary accounts of the detailed mechanisms of reading at the word level. Future assessments of models can take advantage of the availability of our precise participant-level database.
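As an illustration of the underlying logic (with simulated data and made-up numbers, not the study's actual values), the Python sketch below estimates the reliable word-level variance from repeated readings via a split-half correlation and then expresses a regression model's R^2 as a share of that explainable variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_reps = 2820, 50
word_effect = rng.normal(600, 40, n_words)            # stable per-word RT (ms)
rts = word_effect[:, None] + rng.normal(0, 80, (n_words, n_reps))  # trial noise

# Split-half estimate of reliable word-level variance: correlate mean
# RTs from odd vs. even repetitions, Spearman-Brown corrected.
odd, even = rts[:, ::2].mean(1), rts[:, 1::2].mean(1)
r = np.corrcoef(odd, even)[0, 1]
reliability = 2 * r / (1 + r)

# Suppose a word-property regression achieves R^2 = 0.30 on mean RTs;
# the share of *explainable* variance it captures is then:
model_r2 = 0.30
print(f"reliable variance: {reliability:.2f}, "
      f"share explained: {model_r2 / reliability:.2f}")
```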
Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking
To optimally deploy their applications, users of Infrastructure-as-a-Service clouds need to evaluate the cost and performance of different cloud configurations to find the combination that provides the best service level for their specific application. Unfortunately, benchmarking cloud services is cumbersome and error-prone. In this paper, we propose an architecture and a concrete implementation of a cloud benchmarking Web service, which fosters the definition of reusable and representative benchmarks. In contrast to existing work, our system is based on the notion of Infrastructure-as-Code, a state-of-the-art approach to defining IT infrastructure in a reproducible, well-defined, and testable way. We demonstrate our system with an illustrative case study in which we measure and compare the disk I/O speeds of different instance and storage types in Amazon EC2.
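As a rough illustration of the Infrastructure-as-Code idea behind such a service (the helper functions below are hypothetical stand-ins, not Cloud WorkBench's actual API), a benchmark can be declared as code over a cross-product of cloud configurations, so every provisioning and measurement step is repeatable and comparable.

```python
# Minimal sketch: a disk I/O benchmark declared over a configuration
# matrix. provision() and run_disk_benchmark() are placeholders for
# provider-specific Infrastructure-as-Code steps (e.g. a Vagrant/Chef
# run and an fio invocation); they are assumptions, not a real API.
import itertools

INSTANCE_TYPES = ["m1.small", "m1.medium", "m1.large"]
STORAGE_TYPES = ["ebs-standard", "ebs-provisioned-iops", "instance-store"]

def provision(instance_type, storage_type):
    """Stand-in for a declarative provisioning step; returns a VM handle."""
    raise NotImplementedError("cloud-provider specific")

def run_disk_benchmark(vm):
    """Stand-in for running a disk I/O benchmark on the VM and
    collecting throughput in MB/s."""
    raise NotImplementedError("benchmark specific")

def benchmark_matrix():
    results = {}
    for inst, store in itertools.product(INSTANCE_TYPES, STORAGE_TYPES):
        vm = provision(inst, store)       # reproducible, code-defined setup
        try:
            results[(inst, store)] = run_disk_benchmark(vm)
        finally:
            vm.destroy()                  # tear down to avoid stray costs
    return results
```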