
    Accuracy Improvement of Neural Networks Through Self-Organizing-Maps over Training Datasets

    Although it is not a novel topic, pattern recognition has become very popular and relevant in recent years. Different classification systems, such as neural networks, support vector machines, and even complex statistical methods, have been used for this purpose. Several works have used these systems to classify animal behavior, mainly in an offline way. Their main problem is usually the data pre-processing step, because the better the input data are, the higher the accuracy of the classification system can be. In previous papers by the authors, an embedded implementation of a neural network was deployed on a portable device that was placed on animals. This approach allows the classification to be done online and in real time. This is one of the aims of the research project MINERVA, which is focused on monitoring wildlife in Doñana National Park using low-power devices. Many difficulties were faced when the quality of pre-processing methods needed to be evaluated. In this work, a novel pre-processing evaluation system based on self-organizing maps (SOM) is presented, which measures the quality of the neural network training dataset. The paper focuses on a classification study of three different horse gaits. Preliminary results show that a better SOM output map corresponds to an improvement in the embedded ANN classification hit rate.
    Junta de Andalucía P12-TIC-1300; Ministerio de Economía y Competitividad TEC2016-77785-
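    As a rough illustration of the idea (not the authors' implementation, whose exact quality metric is not given in the abstract), one can train a small SOM on the training dataset and use its quantization error, the mean distance from each sample to its best-matching unit, as a proxy for dataset quality: a lower error suggests the samples form coherent clusters that a classifier can separate. A minimal NumPy sketch:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map; returns the weight grid (h, w, d)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    d = data.shape[1]
    weights = rng.normal(size=(h, w, d))
    # Grid coordinates of each node, used for the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: node whose weight vector is closest to x.
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Linearly decay learning rate and neighbourhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3
            # Gaussian neighbourhood pull towards x, strongest at the BMU.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its best-matching unit."""
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

    Comparing the quantization error of two candidate pre-processing pipelines on the same raw data then gives a cheap, label-free ranking of which pipeline produces a more learnable dataset.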

    Horizontal accuracy assessment of very high resolution Google Earth images in the city of Rome, Italy

    Google Earth (GE) has recently become the focus of increasing interest and popularity among the online virtual globes used in scientific research projects, thanks to the freely and easily accessed satellite imagery it provides with global coverage. Nevertheless, the use of this service raises several research questions about the quality and uncertainty of its spatial data (e.g. positional accuracy, precision, consistency), with implications for potential uses such as data collection and validation. This paper aims to analyze the horizontal accuracy of very high resolution (VHR) GE images of the city of Rome (Italy) for the years 2007, 2011, and 2013. The evaluation was conducted using both Global Positioning System ground-truth data and cadastral photogrammetric vertices as independent check points. The validation process includes the comparison of histograms, graph plots, tests of normality, azimuthal direction errors, and the calculation of standard statistical parameters. The results show that the GE VHR imagery of Rome has an overall positional accuracy close to 1 m, sufficient for deriving ground-truth samples, measurements, and large-scale planimetric maps.
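    The headline figure ("close to 1 m") comes from standard positional-accuracy statistics over independent check points. A minimal sketch of the horizontal (radial) RMSE computation, with an illustrative function name not taken from the paper:

```python
import math

def horizontal_rmse(measured, reference):
    """Planimetric RMSE between image-derived and check-point coordinates.

    Both arguments are lists of (easting, northing) pairs in the same
    projected coordinate system, in metres.
    """
    n = len(measured)
    se_x = sum((mx - rx) ** 2 for (mx, _), (rx, _) in zip(measured, reference))
    se_y = sum((my - ry) ** 2 for (_, my), (_, ry) in zip(measured, reference))
    rmse_x = math.sqrt(se_x / n)
    rmse_y = math.sqrt(se_y / n)
    # The horizontal (radial) RMSE combines the two axis components.
    return math.sqrt(rmse_x ** 2 + rmse_y ** 2)
```

    For example, if every measured point were offset by 0.6 m east and 0.8 m north, the per-axis RMSEs would be 0.6 m and 0.8 m and the horizontal RMSE exactly 1.0 m.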

    Towards Understanding Spontaneous Speech: Word Accuracy vs. Concept Accuracy

    In this paper we describe an approach to the automatic evaluation of both the speech recognition and the understanding capabilities of a spoken dialogue system for train timetable information. We use word accuracy to judge recognition performance and concept accuracy to judge understanding performance. Both measures are calculated by comparing the modules' output with a correct reference answer. We report evaluation results for a spontaneous speech corpus of about 10,000 utterances. We observed a nearly linear relationship between word accuracy and concept accuracy.
    Comment: 4 pages PS, LaTeX2e source importing 2 EPS figures; uses icslp.cls, caption.sty, psfig.sty; to appear in the Proceedings of the Fourth International Conference on Spoken Language Processing (ICSLP 96)
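    Word accuracy is conventionally derived from the Levenshtein distance between the recognizer output and a reference transcription; concept accuracy is computed the same way but over sequences of attribute-value pairs rather than words. A sketch of the word-level measure:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences
    (substitutions, insertions, deletions each cost 1)."""
    n = len(hyp)
    prev = list(range(n + 1))
    for i in range(1, len(ref) + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution or match
        prev = cur
    return prev[n]

def word_accuracy(reference, hypothesis):
    """1 - (edit distance / reference length), as a fraction."""
    ref = reference.split()
    return 1.0 - edit_distance(ref, hypothesis.split()) / len(ref)
```

    For instance, recognizing "the train leaves at nine" as "the train leave at nine pm" involves one substitution and one insertion against a five-word reference, giving a word accuracy of 0.6.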

    Accuracy and transferability of Gaussian approximation potential models for tungsten

    We introduce interatomic potentials for tungsten in the bcc crystal phase and its defects within the Gaussian approximation potential framework, fitted to a database of first-principles density functional theory calculations. We investigate the performance of a sequence of models based on databases of increasing coverage in configuration space and showcase our strategy of choosing representative small unit cells to train models that predict properties observable only using thousands of atoms. The most comprehensive model is then used to calculate properties of the screw dislocation, including its structure, the Peierls barrier, and the energetics of the vacancy-dislocation interaction. All software and raw data are available at www.libatoms.org.

    Accuracy in Sentencing

    A host of errors can occur at sentencing, but whether a particular sentencing error can be remedied may depend on whether judges characterize the error as involving a miscarriage of justice -- that is, a claim of innocence. The Supreme Court's miscarriage of justice standard, created as an exception to excuse procedural barriers in the context of federal habeas corpus review, has colonized a wide range of areas of law, from plain error review on appeal, to excusing appeal waivers, to the scope of cognizable claims under 28 U.S.C. § 2255, the post-conviction statute for federal prisoners, and the Savings Clause that permits resort to habeas corpus rather than § 2255. That standard requires a judge to ask whether a reasonable decision maker would more likely than not reach the same result. However, the use of the miscarriage of justice standard with respect to claims of sentencing error remains quite unsettled. In this Article, I provide a taxonomy of types of innocence-of-sentence claims and describe how each has developed, focusing on the federal courts. I question whether finality should play the same role in the correction of errors in sentences, and I propose that a single miscarriage of justice standard apply to all types of sentencing error claims when they are not being considered on appeal under reasonableness review. Finally, I briefly describe how changes to the sentencing process or sentencing guidelines could also address certain concerns with accuracy.

    Accuracy-based scoring for phrase-based statistical machine translation

    Although the scoring features of state-of-the-art Phrase-Based Statistical Machine Translation (PB-SMT) models are weighted so as to optimise an objective function measuring translation quality, the estimation of the features themselves has no relation to such quality metrics. In this paper, we introduce a translation quality-based feature into PB-SMT in a bid to improve the translation quality of the system. Our feature is estimated by averaging the edit distance between phrase pairs involved in the translation of oracle sentences, chosen by automatic evaluation metrics from the N-best outputs of a baseline system, and phrase pairs occurring in the N-best list. Using our method, we report a statistically significant 2.11% relative improvement in BLEU score for the WMT 2009 Spanish-to-English translation task. We also report statistically significant improvements over the baseline on many other MT evaluation metrics, and a substantial increase in speed and reduction in memory use (due to an 87% reduction in phrase-table size) while maintaining significant gains in translation quality.
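    The feature estimation described in the abstract can be sketched roughly as follows. Both the pairing rule (N-best phrase pairs matched to oracle phrase pairs sharing the same source phrase) and the character-level distance are assumptions made for illustration, not details taken from the paper:

```python
from collections import defaultdict

def edit_distance(a, b):
    """Character-level Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution/match
        prev = cur
    return prev[-1]

def phrase_pair_feature(oracle_pairs, nbest_pairs):
    """Score each N-best (source, target) phrase pair by its mean edit
    distance to the oracle phrase pairs that share the same source phrase;
    pairs with no matching oracle source are left unscored."""
    by_source = defaultdict(list)
    for src, tgt in oracle_pairs:
        by_source[src].append(tgt)
    scores = {}
    for src, tgt in nbest_pairs:
        targets = by_source.get(src)
        if targets:
            scores[(src, tgt)] = sum(edit_distance(tgt, t) for t in targets) / len(targets)
    return scores
```

    Under this sketch a low score marks a phrase pair whose target side closely matches the translations the oracle sentences preferred, which is the sense in which the feature ties phrase-table entries to a translation quality signal.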