Meaningful Categorisation of Novice Programmer Errors
The frequency of different kinds of error made by students learning to write computer programs has long been of interest to researchers and educators. In the past, various studies investigated this topic, usually by recording and analysing compiler error messages and producing tables of the relative frequencies of specific error diagnostics produced by the compiler. In this paper, we improve on such prior studies by investigating actual logical errors in student code, as opposed to the diagnostic messages produced by the compiler. The actual errors reported here are more precise, more detailed and more accurate than the diagnostics produced automatically.
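To illustrate the distinction the abstract draws, here is a hedged sketch (a hypothetical novice exercise, not an example from the study): the code below produces no compiler or interpreter diagnostic at all, yet contains a classic off-by-one logic error, exactly the kind of actual error that diagnostic-message counting would miss.

```python
# Hypothetical novice attempt at "sum the integers 1..n".
# No diagnostic is ever emitted; the error is purely logical.
def novice_sum(n):
    total = 0
    for i in range(1, n):      # off-by-one: range(1, n) stops at n - 1
        total += i
    return total

def correct_sum(n):
    return n * (n + 1) // 2    # closed-form reference solution

print(novice_sum(10), correct_sum(10))  # 45 55
```

The two results disagree (45 vs. 55), but only behavioural analysis of the student's code reveals it; a table of compiler diagnostics records nothing here.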
Diagnostic error reduction in the United States and Italy through the intervention of diagnostic management teams
A major challenge for most countries is the growing cost of healthcare. The cost of laboratory testing is approximately 3% of total clinical costs. On the other hand, waste from inappropriate admissions to clinical departments is reported to be as high as 15%. A frequently used approach to saving money in healthcare is a blanket reduction in laboratory budgets, with a focus on reducing the number of unnecessary laboratory tests. The World Health Assembly has approached the problem by publishing a list of essential in vitro diagnostic tests, in order to rationalize testing globally.
A more thoughtful strategy for containing healthcare costs is to improve the efficiency of the diagnostic process. This report presents an opportunity to reduce diagnostic error and increase the efficiency of diagnostic testing. Reducing the time to a correct diagnosis provides a major financial as well as clinical benefit. In addition, reducing both overutilization and underutilization of laboratory tests while still reaching the correct diagnosis is a major benefit to strained healthcare budgets.
One approach taken to achieve major savings in healthcare has been the creation of "Diagnostic Management Teams": groups of experts in specialty areas of medicine, based primarily in the clinical laboratory, who advise physicians on selecting only the necessary tests and on interpreting complex test results.
Error-free milestones in error prone measurements
A predictor variable or dose that is measured with substantial error may
possess an error-free milestone, such that it is known with negligible error
whether the value of the variable is to the left or right of the milestone.
Such a milestone provides a basis for estimating a linear relationship between
the true but unknown value of the error-free predictor and an outcome, because
the milestone creates a strong and valid instrumental variable. The inferences
are nonparametric and robust, and in the simplest cases, they are exact and
distribution free. We also consider multiple milestones for a single predictor
and milestones for several predictors whose partial slopes are estimated
simultaneously. Examples are drawn from the Wisconsin Longitudinal Study, in
which a BA degree acts as a milestone for sixteen years of education, and the
binary indicator of military service acts as a milestone for years of service.

Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/08-AOAS233
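The instrumental-variable idea in this abstract can be sketched numerically. The simulation below is an illustration under assumed parameters, not the study's data: the milestone indicator z (which side of a cutoff the true predictor falls on) is observed without error, so the ratio cov(y, z)/cov(w, z) recovers the slope that a naive regression on the error-prone measurement w attenuates.

```python
import random

def milestone_iv(n=20000, slope=2.0, intercept=1.0, cutoff=0.0, seed=7):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]           # true predictor (unobserved)
    w = [xi + rng.gauss(0, 1) for xi in x]            # error-prone measurement of x
    z = [1.0 if xi >= cutoff else 0.0 for xi in x]    # milestone: side of cutoff, known exactly
    y = [intercept + slope * xi + rng.gauss(0, 1) for xi in x]

    def cov(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

    naive = cov(y, w) / cov(w, w)   # attenuated toward zero by measurement error
    iv = cov(y, z) / cov(w, z)      # milestone indicator used as instrument
    return naive, iv

naive, iv = milestone_iv()
```

With these assumed parameters the naive slope is biased toward 1.0 (attenuation factor var(x)/(var(x)+var(u)) = 0.5), while the instrumental-variable estimate centers on the true slope 2.0.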
Error Patterns
In coding theory, the problem of decoding focuses on error vectors. In the simplest situation code words are binary vectors, as are the received messages and the error vectors. Comparing a received word with the code words yields a set of error vectors. In deciding on the original code word, the one for which the error vector has minimum Hamming weight is usually chosen. In this note some remarks are made on the positions of the 1-elements in the error vector, which may enable unique decoding in cases where two or more code words have the same Hamming distance to the received message word, thus turning error detection into error correction. The essentially new aspect is that code words, message words and error vectors are put in one-to-one correspondence with graphs.
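A minimal sketch of minimum-Hamming-weight decoding and of the tie situation the note addresses (the codes and received words below are illustrative choices, not taken from the note):

```python
def hamming(a, b):
    # Hamming distance = weight of the error vector between two words
    return sum(x != y for x, y in zip(a, b))

def decode(received, codewords):
    # Nearest-codeword decoding; return None when two or more codewords
    # tie, i.e. the error is detected but cannot be uniquely corrected.
    dists = {c: hamming(received, c) for c in codewords}
    best = min(dists.values())
    nearest = [c for c, d in dists.items() if d == best]
    return nearest[0] if len(nearest) == 1 else None

# Distance-3 repetition code: a single bit error is corrected.
print(decode("001", ["000", "111"]))                # 000
# Distance-2 even-weight code: "001" is at distance 1 from three
# codewords, so the error is detected but not uniquely corrected.
print(decode("001", ["000", "011", "101", "110"]))  # None
```

The second case is exactly the tie the note wants to break: extra information about where the 1-elements of the error vector can occur could single out one of the three equidistant codewords.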
Error Analysis on Learners' Interlanguage and Intralanguage: a Case Study of Two Adolescent Students
This research focuses on exploring learners' language, especially the errors performed by English learners. The subjects of this study are two adolescent students who have been learning English since an early age. The data analysed were collected through interview sessions. The errors performed by the subjects were identified and classified, and a pattern was then drawn to find out the nature of the subjects' language. The result shows that both interlanguage and intralanguage affect the students' English; however, interlanguage affects the errors more than intralanguage does. This indicates that the nature of the L1 affects L2 acquisition. The errors occurred in subject-verb agreement, tenses, and relative clauses. Finally, the appropriate feedback for speaking performance is implicit correction, such as recasts and prompts.
Principles of error detection and error correction codes
A report is reviewed that considers the theoretical basis of groups, rings, fields, and vector spaces, and their relationship to algebraic coding theory. The report serves as a summary for engineers and scientists involved in data handling and processing systems.
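As a minimal sketch of the connection the report surveys (an assumed illustration of its topic, not an excerpt): binary linear codes are subspaces of the vector space GF(2)^n, so the set of codewords must be closed under componentwise addition, i.e. XOR.

```python
from itertools import product

def xor(a, b):
    # componentwise addition over GF(2)
    return tuple(x ^ y for x, y in zip(a, b))

# The even-weight code of length 3: all vectors with an even number of 1s.
code = {v for v in product((0, 1), repeat=3) if sum(v) % 2 == 0}

# Closure under addition is what makes this code a vector space over GF(2).
assert all(xor(a, b) in code for a in code for b in code)
print(sorted(code))
```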
The Squared-Error of Generalized LASSO: A Precise Analysis
We consider the problem of estimating an unknown signal $x_0$ from noisy linear observations $y = Ax_0 + z \in \mathbb{R}^m$. In many practical instances, $x_0$ has a certain structure that can be captured by a structure-inducing convex function $f(\cdot)$. For example, the $\ell_1$ norm can be used to encourage a sparse solution. To estimate $x_0$ with the aid of $f(\cdot)$, we consider the well-known LASSO method and provide a sharp characterization of its performance. We assume the entries of the measurement matrix $A$ and the noise vector $z$ have zero-mean normal distributions with variances $1$ and $\sigma^2$ respectively. For the LASSO estimator $x^*$, we attempt to calculate the Normalized Square Error (NSE), defined as $\|x^* - x_0\|_2^2 / \sigma^2$, as a function of the noise level $\sigma$, the number of observations $m$ and the structure of the signal. We show that the structure of the signal $x_0$ and the choice of the function $f(\cdot)$ enter the error formulae through the summary parameters $D(\operatorname{cone})$ and $D(\lambda)$, which are defined as the Gaussian squared distances to the subdifferential cone and to the $\lambda$-scaled subdifferential, respectively. The first LASSO estimator assumes a priori knowledge of $f(x_0)$ and is given by $\arg\min_x \{\, \|y - Ax\|_2 : f(x) \le f(x_0) \,\}$. We prove that its worst-case NSE is achieved when $\sigma \to 0$ and concentrates around $D(\operatorname{cone})/(m - D(\operatorname{cone}))$. Secondly, we consider $\arg\min_x \{\, \|y - Ax\|_2 + \lambda f(x) \,\}$ for some $\lambda \ge 0$. This time the NSE formula depends on the choice of $\lambda$ and is given by $D(\lambda)/(m - D(\lambda))$. We then establish a mapping between this and the third estimator, $\arg\min_x \{\, \tfrac{1}{2}\|y - Ax\|_2^2 + \sigma\tau f(x) \,\}$. Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
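As an illustrative companion (a minimal sketch, not the paper's analysis): the squared-loss $\ell_1$-penalized variant of the LASSO can be solved by iterative soft-thresholding (ISTA), after which the NSE $\|x^* - x_0\|_2^2/\sigma^2$ is a direct computation. All dimensions, parameters and the step size below are assumptions chosen for the demo.

```python
import random

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def mat_t_vec(A, v):
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(u, t):
    # proximal operator of t * ||.||_1 (soft-thresholding)
    return [max(abs(ui) - t, 0.0) * (1.0 if ui > 0 else -1.0) for ui in u]

def ista(A, y, lam, step=0.1, iters=1500):
    # minimizes 0.5*||y - Ax||_2^2 + lam*||x||_1 by proximal gradient steps
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]   # residual y - Ax
        g = mat_t_vec(A, r)                                 # negative gradient of the smooth part
        x = soft([xi + step * gi for xi, gi in zip(x, g)], step * lam)
    return x

rng = random.Random(0)
m, n, sigma, lam = 25, 5, 0.05, 0.05
x0 = [3.0, 0.0, 0.0, -2.0, 0.0]                            # sparse true signal
A = [[rng.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
y = [yi + rng.gauss(0, sigma) for yi in matvec(A, x0)]      # y = A x0 + z

x_hat = ista(A, y, lam)
nse = sum((a - b) ** 2 for a, b in zip(x_hat, x0)) / sigma ** 2
```

With the columns of $A$ scaled to roughly unit norm, the estimator recovers the support of $x_0$ and shrinks the nonzero coefficients slightly, as the $\ell_1$ penalty predicts.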