Accuracy Improvement of Neural Networks Through Self-Organizing-Maps over Training Datasets
Although it is not a novel topic, pattern recognition has become very popular and relevant in recent years. Different classification systems, such as neural networks, support vector machines, or even complex statistical methods, have been used for this purpose. Several works have used these systems to classify animal behavior, mainly offline. Their main difficulty is usually the data pre-processing step, because the better the input data are, the higher the accuracy of the classification system can be. In previous papers by the authors, an embedded implementation of a neural network was deployed on a portable device placed on animals. This approach allows the classification to be done online and in real time. This is one of the aims of the research project MINERVA, which focuses on monitoring wildlife in Doñana National Park using low-power devices. Many difficulties arose when the quality of pre-processing methods needed to be evaluated. In this work, a novel pre-processing evaluation system based on self-organizing maps (SOM), which measures the quality of the neural network training dataset, is presented. The paper focuses on a classification study of three different horse gaits. Preliminary results show that a better SOM output map corresponds to an improved classification hit rate of the embedded ANN.

Junta de Andalucía P12-TIC-1300; Ministerio de Economía y Competitividad TEC2016-77785-
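The abstract does not spell out the SOM-based quality measure itself. As a minimal sketch of the idea, assuming quantization error (mean distance from each training sample to its best matching unit) as the quality proxy, and using synthetic two-class data in place of the actual animal-behavior recordings:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=40, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small self-organizing map and return its weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs
        lr, sigma = lr0 * frac, sigma0 * frac + 1e-3
        for x in rng.permutation(data):
            # best matching unit (BMU) for this sample
            bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
            # pull the BMU's Gaussian neighbourhood towards the sample
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its nearest map unit; lower is better."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

# two synthetic "pre-processing variants" of the same two-class signal
rng = np.random.default_rng(1)
centres = np.array([[0.0, 0.0], [3.0, 3.0]])
clean = np.vstack([c + 0.05 * rng.standard_normal((30, 2)) for c in centres])
noisy = np.vstack([c + 1.50 * rng.standard_normal((30, 2)) for c in centres])

qe_clean = quantization_error(clean, train_som(clean))
qe_noisy = quantization_error(noisy, train_som(noisy))
```

Under these assumptions, the better-preprocessed (cleaner) dataset yields a tighter SOM and a lower quantization error, mirroring the paper's observation that a better SOM output map goes with higher ANN accuracy.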
Accuracy and transferability of Gaussian approximation potential models for tungsten
We introduce interatomic potentials for tungsten in the bcc crystal phase and its defects within the Gaussian approximation potential framework, fitted to a database of first-principles density functional theory calculations. We investigate the performance of a sequence of models based on databases of increasing coverage in configuration space and showcase our strategy of choosing representative small unit cells to train models that predict properties observable only using thousands of atoms. The most comprehensive model is then used to calculate properties of the screw dislocation, including its structure, the Peierls barrier and the energetics of the vacancy-dislocation interaction. All software and raw data are available at www.libatoms.org
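The full Gaussian approximation potential machinery (descriptors of atomic environments, sparse GP over a large DFT database) is far richer than can be shown here, but the core regression step can be sketched in one dimension. The Lennard-Jones curve below is a made-up stand-in for the first-principles database, not the actual tungsten data:

```python
import numpy as np

def rbf_kernel(a, b, length=0.3, amp=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return amp * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-8):
    """Posterior mean of a zero-mean GP conditioned on the training data."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(x_test, x_train) @ alpha

def lj(r):
    """Lennard-Jones pair energy (epsilon = sigma = 1), standing in for DFT energies."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

r_train = np.linspace(0.95, 2.5, 12)
e_fit = gp_predict(r_train, lj(r_train), r_train)  # reproduces the training energies
e_interp = gp_predict(r_train, lj(r_train), np.linspace(1.0, 2.4, 50))
```

The transferability question the paper studies corresponds here to how well `e_interp` tracks the true curve between (and beyond) the training points as the database coverage grows.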
Horizontal accuracy assessment of very high resolution Google Earth images in the city of Rome, Italy
Google Earth (GE) has recently become the focus of increasing interest and popularity
among available online virtual globes used in scientific research projects, due to the
free and easily accessible satellite imagery provided with global coverage. Nevertheless, the use of this service raises several research questions on the quality and uncertainty of spatial data (e.g. positional accuracy, precision, consistency), with implications for potential uses such as data collection and validation. This paper aims to analyze the horizontal accuracy of very high resolution (VHR) GE images in the city of Rome (Italy) for the years 2007, 2011, and 2013. The evaluation was conducted by using both Global Positioning System ground-truth data and cadastral photogrammetric vertices as independent check points. The validation process included the comparison of histograms, graph plots, tests of normality, azimuthal direction errors, and the calculation of standard statistical parameters. The results show that GE VHR imagery of Rome has an overall positional accuracy close to 1 m, sufficient for deriving ground-truth samples, measurements, and large-scale planimetric maps.
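The core of such an assessment reduces to simple statistics over the check-point residuals. A minimal sketch follows; the offsets below are invented for illustration and are not taken from the Rome dataset:

```python
import math

def horizontal_error(dx, dy):
    """Planimetric displacement between a GE point and its reference point (m)."""
    return math.hypot(dx, dy)

def azimuth_deg(dx, dy):
    """Direction of the error vector, clockwise from north, in degrees."""
    return math.degrees(math.atan2(dx, dy)) % 360.0

def rmse(errors):
    """Root-mean-square of a list of horizontal errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# hypothetical (GE minus reference) offsets in metres for three check points
offsets = [(0.3, 0.4), (0.6, 0.8), (0.0, 0.5)]
errors = [horizontal_error(dx, dy) for dx, dy in offsets]
overall = rmse(errors)  # errors 0.5, 1.0, 0.5 m -> RMSE ~ 0.707 m
```

An overall RMSE near 1 m, as reported for Rome, would indicate the imagery is usable for large-scale planimetric mapping; the azimuths reveal whether the errors share a systematic direction.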
Handgun Accuracy Problem
A laboratory test, aimed at checking the model's compliance with the specification, indicates that a set of about ten consecutive shots clusters within a circular region with a radius of 10 cm. The fact that the shots, though fired under the same conditions, do not hit the same point is called the focusing uncertainty of the handgun. Furthermore, it is observed that the bullet velocity, measured 10 m from the gun, varies by up to about 7 m/s (around a mean of 340 m/s) within a firing set of ten. There are about ten different models, and each model seems to display a different magnitude of uncertainty and velocity deviation from the expected average. The company, willing to produce more data on request, asks whether the focusing uncertainty and the variation in bullet velocities can somehow be correlated, and, with some help from other disciplines, what lies behind such uncertainties: the experimental apparatus or the manufacturing process? If the latter, which manufacturing unit contributes more?
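The correlation question the company poses can be answered with a per-model summary table and a Pearson coefficient. The figures below are invented placeholders, not measured data:

```python
import math
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical per-model summaries: (group radius in cm, velocity std dev in m/s)
models = [(10.2, 7.1), (8.5, 5.9), (11.0, 7.4), (7.8, 5.2), (9.6, 6.6)]
radii, vel_sd = zip(*models)
r = pearson(radii, vel_sd)  # strongly positive for these invented figures
```

A coefficient near +1 would suggest the two uncertainties share a cause; separating apparatus effects from manufacturing effects would then require repeating the measurement with the same gun on different test rigs, and grouping the per-model results by manufacturing unit.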
Accuracy in Sentencing
A host of errors can occur at sentencing, but whether a particular sentencing error can be remedied may depend on whether judges characterize the error as involving a miscarriage of justice -- that is, a claim of innocence. The Supreme Court's miscarriage of justice standard, created as an exception to excuse procedural barriers in the context of federal habeas corpus review, has colonized a wide range of areas of law, from plain error review on appeal, to excusing appeal waivers, to the scope of cognizable claims under 28 U.S.C. § 2255, the post-conviction statute for federal prisoners, and the Savings Clause that permits resort to habeas corpus rather than § 2255. That standard requires a judge to ask whether a reasonable decision maker would more likely than not reach the same result. However, the use of the miscarriage of justice standard with respect to claims of sentencing error remains quite unsettled. In this Article, I provide a taxonomy of types of innocence-of-sentence claims and describe how each has developed, focusing on federal courts. I question whether finality should play the same role in the correction of errors in sentences, and I propose that a single miscarriage of justice standard apply to all types of sentencing error claims when they are not considered on appeal under reasonableness review. Finally, I briefly describe how changes to the sentencing process or sentencing guidelines could also address certain concerns with accuracy.
Bird Beak Accuracy Assessment
The purpose of this resource is to quantitatively evaluate the accuracy of a classification system. Students sort birds into three possible classes based on each bird's beak: carnivores, herbivores, and omnivores. Students compare their answers with a given set of validation data. Educational levels: Middle school, High school
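The comparison the students perform is an accuracy (and, optionally, confusion-count) computation against the validation key. A small sketch with made-up answers:

```python
from collections import Counter

def accuracy(predicted, key):
    """Fraction of classifications that match the validation key."""
    return sum(p == k for p, k in zip(predicted, key)) / len(key)

def confusion_counts(predicted, key):
    """Counter mapping (true class, predicted class) to its frequency."""
    return Counter(zip(key, predicted))

# hypothetical student answers against the validation data
student = ["carnivore", "herbivore", "omnivore", "herbivore", "carnivore"]
key     = ["carnivore", "omnivore",  "omnivore", "herbivore", "carnivore"]
acc = accuracy(student, key)  # 4 of 5 correct -> 0.8
```

The confusion counts show not just how often a student erred but which classes were confused, e.g. how many omnivores were misfiled as herbivores.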
Accuracy of Approximate Eigenstates
Besides perturbation theory, which requires, of course, the knowledge of the
exact unperturbed solution, variational techniques represent the main tool for
any investigation of the eigenvalue problem of some semibounded operator H in
quantum theory. For a reasonable choice of the employed trial subspace of the
domain of H, the lowest eigenvalues of H usually can be located with acceptable
precision whereas the trial-subspace vectors corresponding to these eigenvalues
approximate, in general, the exact eigenstates of H with much less accuracy.
Accordingly, various measures for the accuracy of the approximate eigenstates
derived by variational techniques are scrutinized. In particular, the matrix
elements of the commutator of the operator H and (suitably chosen) different
operators, with respect to degenerate approximate eigenstates of H obtained by
some variational method, are proposed here as new criteria for the accuracy of
variational eigenstates. These considerations are applied to that Hamiltonian
the eigenvalue problem of which defines the "spinless Salpeter equation." This
(bound-state) wave equation may be regarded as the most straightforward
relativistic generalization of the usual nonrelativistic Schroedinger
formalism, and is frequently used to describe, e.g., spin-averaged mass spectra
of bound states of quarks.

Comment: LaTeX, 14 pages, Int. J. Mod. Phys. A (in print); 1 typo corrected
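The proposed criterion can be illustrated numerically: for an exact eigenstate ψ of H, ⟨ψ|[H,A]|ψ⟩ vanishes for any operator A, while a variational (Rayleigh-Ritz) approximation generically gives a nonzero value. A sketch with random Hermitian matrices standing in for H and a test operator A:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

def random_hermitian(n):
    """Random complex Hermitian matrix, a stand-in for H or a test operator A."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

H = random_hermitian(n)
A = random_hermitian(n)

def commutator_criterion(psi, H, A):
    """|<psi|[H,A]|psi>|: exactly zero when psi is an eigenstate of H."""
    C = H @ A - A @ H
    return abs(psi.conj() @ C @ psi)

# exact ground state of H
exact = np.linalg.eigh(H)[1][:, 0]

# variational approximation: Rayleigh-Ritz in a random 4-dimensional trial subspace
B = np.linalg.qr(rng.standard_normal((n, 4)) + 1j * rng.standard_normal((n, 4)))[0]
sub_vecs = np.linalg.eigh(B.conj().T @ H @ B)[1]
approx = B @ sub_vecs[:, 0]

crit_exact = commutator_criterion(exact, H, A)    # zero up to round-off
crit_approx = commutator_criterion(approx, H, A)  # generically nonzero
```

The size of `crit_approx`, for suitably chosen operators A, thus serves as an accuracy measure for the variational eigenstate even when the exact eigenstate is unknown.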
Accuracy-based scoring for phrase-based statistical machine translation
Although the scoring features of state-of-the-art Phrase-Based Statistical Machine Translation (PB-SMT) models are weighted so as to optimise an objective function measuring
translation quality, the estimation of the features
themselves has no relation to such quality metrics. In this paper, we introduce a translation quality-based feature to PB-SMT in a bid to improve the translation quality of the system. Our feature is estimated by averaging the edit distance between phrase pairs involved in the translation of oracle sentences, chosen by automatic evaluation metrics from the N-best outputs of a baseline system, and phrase pairs occurring in the N-best list. Using our method, we report a statistically significant 2.11% relative improvement in BLEU score for the WMT 2009 Spanish-to-English translation task. We also achieve statistically significant improvements over the baseline on many other MT evaluation metrics, together with a substantial increase in speed and a reduction in memory use (due to an 87% reduction in phrase-table size), while maintaining significant gains in translation quality.
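The central quantity of the proposed feature is an edit distance between phrase pairs. A minimal word-level sketch follows; the phrase pair is invented, and the real scoring averages this over phrase pairs drawn from full N-best lists:

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance via the classic dynamic program."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, start=1):
        cur = [i]
        for j, wb in enumerate(b, start=1):
            cost = 0 if wa == wb else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return prev[-1]

# hypothetical oracle phrase vs. an N-best candidate phrase
oracle = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()
dist = edit_distance(oracle, candidate)  # one substitution -> 1
```

Rolling-array dynamic programming keeps memory linear in the shorter phrase, which matters when the distance is computed across millions of phrase pairs.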
