
    A Class of Regression Models for Pairwise Comparisons of Forensic Handwriting Comparison Systems

    Handwriting analysis is a complex field rooted largely in forensic science and the legal realm. One task of a forensic document examiner (FDE) may be to determine the writer(s) of handwritten documents. Automated identification systems (AIS) were built to aid FDEs in their examinations. Some of these AIS (such as FISH [5][7], WANDA [6], CEDAR-FOX [17], and FLASHID®2) measure features of a handwriting sample and provide the user with a numeric value of the evidence. These systems use their own algorithms and feature definitions to quantify the writing and can be considered black boxes. The outputs of two AIS are compared to the results of a survey of FDE writership opinions. This dissertation focuses on the development of a response surface that characterizes the feature outputs of AIS. Using a set of handwriting samples, a pairwise metric, or scoring method, is applied to each of the individual features provided by the AIS to produce sets of pairwise scores. The pairwise scores lead to a degenerate U-statistic. We use a generalized least squares method to test the null hypothesis that there is no relationship between two metrics (β1 = 0). Monte Carlo simulations are developed and run to ensure that the results, given the structure of the pairwise metric, behave as expected under the null hypothesis, and that the modeling detects a relationship under the alternative hypothesis. The outcome of the significance tests helps determine which of the metrics are related to each other.
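    As a rough illustration of the kind of analysis described (not the dissertation's actual pipeline), the sketch below applies an absolute-difference pairwise score to two invented feature vectors and fits an ordinary least-squares slope. The real work uses generalized least squares to account for the dependence among pairs that share a sample; all names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature values from two black-box systems for 20 samples.
n = 20
feat_a = rng.normal(size=n)
feat_b = 0.8 * feat_a + rng.normal(scale=0.5, size=n)  # related, so H0 is false

def pairwise_scores(x):
    """Absolute-difference score for every unordered pair (i < j)."""
    i, j = np.triu_indices(len(x), k=1)
    return np.abs(x[i] - x[j])

s_a = pairwise_scores(feat_a)
s_b = pairwise_scores(feat_b)

# OLS slope of one pairwise score on the other; a GLS fit would additionally
# model the correlation induced by pairs sharing an underlying sample.
X = np.column_stack([np.ones_like(s_a), s_a])
beta, *_ = np.linalg.lstsq(X, s_b, rcond=None)
print(round(beta[1], 3))  # a clearly nonzero slope suggests related metrics
```

    A significance test on the slope (β1 = 0 under the null) would then be calibrated by the Monte Carlo simulations the abstract describes.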

    Investigating Visual Perception Impairments through Serious Games and Eye Tracking to Anticipate Handwriting Difficulties

    Dysgraphia is a learning disability that causes handwriting production below expectations. Its diagnosis is delayed until handwriting development is complete. To allow a preventive training program, abilities not directly related to handwriting should be evaluated; one of them is visual perception. To investigate the role of visual perception in handwriting skills, we gamified standard clinical visual perception tests to be played at three difficulty levels while wearing an eye tracker. We then identified children at risk of dysgraphia by means of a handwriting speed test. Five machine learning models were constructed to predict whether a child was at risk, using the CatBoost algorithm with nested cross-validation and combinations of game performance, eye-tracking, and drawing data as predictors. A total of 53 children participated in the study. The machine learning models obtained good results, particularly with game performances as predictors (F1 score: 0.77 train, 0.71 test). The SHAP explainer was used to identify the most impactful features. The game reached an excellent usability score (89.4 +/- 9.6). These results suggest a promising new tool for early dysgraphia screening based on visual perception skills.
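    Nested cross-validation, as used in this study, tunes hyperparameters in an inner loop and estimates generalization in an outer loop so that no fold is used for both selection and evaluation. A minimal sketch, using scikit-learn's GradientBoostingClassifier as a stand-in for CatBoost and purely synthetic data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for game-performance features and at-risk labels.
X = rng.normal(size=(60, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=60) > 0).astype(int)

# Inner loop tunes hyperparameters; outer loop scores the tuned model.
inner = KFold(n_splits=3, shuffle=True, random_state=1)
outer = KFold(n_splits=5, shuffle=True, random_state=2)
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100]},
    cv=inner, scoring="f1",
)
scores = cross_val_score(search, X, y, cv=outer, scoring="f1")
print(scores.mean())  # unbiased F1 estimate, analogous to the reported 0.71
```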

    Automatic Analysis of Archimedes’ Spiral for Characterization of Genetic Essential Tremor Based on Shannon’s Entropy and Fractal Dimension

    Among neural disorders related to movement, essential tremor has the highest prevalence; in fact, it is twenty times more common than Parkinson's disease. Drawing the Archimedes' spiral is the gold-standard test to distinguish between the two pathologies. The aim of this paper is to select non-linear biomarkers based on the analysis of digital drawings. It is part of a larger cross study for early diagnosis of essential tremor that also includes genetic information. The proposed automatic analysis system is a hybrid solution: Machine Learning paradigms combined with automatic selection of features based on statistical tests using medical criteria. Moreover, the selected biomarkers comprise not only commonly used linear features (static and dynamic) but also non-linear ones: Shannon entropy and fractal dimension. The results are promising, the developed tool can easily be adapted to users, and, from social and economic points of view, it could be very helpful in real complex environments. This research was partially funded by the Basque Government, the University of the Basque Country through the IT1115-16 project ELEKIN, Diputacion Foral de Gipuzkoa, the University of Vic-Central University of Catalonia under research grant R0947, and the Spanish Ministry of Science and Innovation (TEC2016-77791-C04-R).
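    The two non-linear biomarkers named above can be computed from a sampled drawing trace. The sketch below estimates Shannon entropy from an amplitude histogram and fractal dimension with the Higuchi method, one common estimator for drawn traces; the paper's exact estimators and parameters are not specified here, so these choices are illustrative.

```python
import numpy as np

def shannon_entropy(signal, bins=16):
    """Entropy (bits) of the amplitude histogram of a sampled signal."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def higuchi_fd(signal, kmax=8):
    """Higuchi fractal dimension of a 1-D signal."""
    n = len(signal)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            d = np.abs(np.diff(signal[idx])).sum()
            # Normalized curve length for this offset and scale k.
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k) / k)
        lk.append(np.mean(lengths))
    # Slope of log L(k) versus log(1/k) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return float(slope)

# x-coordinate of an idealized (tremor-free) Archimedes' spiral.
t = np.linspace(0, 20 * np.pi, 2000)
spiral_x = t * np.cos(t)
print(shannon_entropy(spiral_x), higuchi_fd(spiral_x))
```

    On a tremor-affected trace the extra oscillation tends to raise the estimated fractal dimension, which is what makes it a candidate biomarker.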

    Fair Use and Machine Learning

    There would be a beaten path to the maker of software that could reliably state whether a use of a copyrighted work was protected as fair use. But applying machine learning to fair use faces considerable hurdles. Fair use has generated hundreds of reported cases, but machine learning works best with examples in greater numbers. More examples may be available: from mining the decision making of web sites, from having humans judge fair use examples just as they label images to teach self-driving cars, and from using machine learning itself to generate examples. Beyond the number of examples, the form of the data is more abstract than the concrete examples on which machine learning has succeeded, such as computer vision and viewing recommendations, and even in comparison to machine translation, where the operative unit was the sentence, not a concept distributed across a document. But techniques presently in use do find patterns in data to build more abstract features, and then repeat that process to build still more abstract features. It may be that such automated processes can provide the conceptual blocks necessary. In addition, tools drawn from knowledge engineering (ironically, the branch of artificial intelligence that of late has been eclipsed by machine learning) may extract concepts from such data as judicial opinions. Such tools would include new methods of knowledge representation and automated tagging. If the data questions are overcome, machine learning provides intriguing possibilities, but also faces challenges from the nature of fair use law. Artificial neural networks have shown formidable performance in classification. Classifying fair use examples raises a number of questions. Fair use law is often considered contradictory, vague, and unpredictable; in computer science terminology, the data is "noisy." That inconsistency could flummox artificial neural networks, or the networks could disclose consistencies that have eluded commentators.
    Other algorithms such as nearest neighbor and support vectors could likewise both use and test legal reasoning by analogy. Another approach to machine learning, decision trees, may be simpler than other approaches in some respects, but could work on smaller data sets (addressing one of the data issues above) and provide something that machine learning often lacks: transparency. Decision trees disclose their decision-making process, whereas neural networks, especially deep learning, are opaque black boxes. Finally, unsupervised machine learning could be used to explore fair use case law for patterns, whether they be consistent structures in its jurisprudence or biases that have played an undisclosed role. Any possible patterns found, however, should be treated as possibilities, pending testing by other means.
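    The transparency point about decision trees can be made concrete. In the toy sketch below, a handful of invented cases are encoded on the four statutory fair-use factors; the learned tree can then be printed and inspected, which a neural network would not allow. The encoding and the case data are entirely hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy encoding of the four statutory factors for invented cases:
# purpose (1 = transformative), nature (1 = factual work),
# amount (fraction of the work used), market (1 = harms the market).
X = [
    [1, 1, 0.1, 0],
    [1, 0, 0.3, 0],
    [0, 0, 0.9, 1],
    [0, 1, 1.0, 1],
    [1, 1, 0.5, 0],
    [0, 0, 0.2, 1],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = fair use found

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The learned rule is fully disclosed as human-readable if/else branches.
print(export_text(tree, feature_names=["purpose", "nature", "amount", "market"]))
```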

    2023 SDSU Data Science Symposium Presentation Abstracts

    This document contains abstracts for the presentations and posters of the 2023 SDSU Data Science Symposium.


    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    Drawing and handwriting are communication skills that have been fundamental to geopolitical, ideological, and technological evolutions throughout history. Drawing and handwriting are still useful in defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as those related to the way drawing and handwriting can become an efficient means of commanding various connected objects, or to validating graphomotor skills as evident and objective sources of data useful in the study of human beings, their capabilities, and their limits from birth to decline.

    Investigation of possible causes for human-performance degradation during microgravity flight

    The results of the first year of a three-year study of the effects of microgravity on human performance are given. Test results support the hypothesis that the effects of microgravity can be studied indirectly on Earth by measuring performance in an altered gravitational field. The hypothesis was that an altered gravitational field could disrupt performance on previously automatized behaviors if gravity was a critical part of the stimulus complex controlling those behaviors. In addition, it was proposed that performance on secondary cognitive tasks would also degrade, especially if the subject was provided feedback about degradation on the previously automatized task. In the initial experimental test of these hypotheses, there was little statistical support. However, when subjects were categorized as high or low in automatized behavior, results for the former group supported the hypotheses. The predicted interaction between body orientation and level of workload in their joint effect on performance on the secondary cognitive task was significant for the group high in automatized behavior and receiving feedback, but no such interaction was found for the group high in automatized behavior but not receiving feedback, or for the group low in automatized behavior.
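    An interaction effect of the kind tested above can be estimated from a 2x2 factorial design by regressing performance on both factors and their product. The sketch below simulates such a design with invented effect sizes (it is not the study's data) and recovers the interaction coefficient with a plain least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2x2 design: body orientation (0/1) x workload (0/1), 25 subjects
# per cell, with a performance drop only when both factors are present.
orient = np.repeat([0, 0, 1, 1], 25)
load = np.repeat([0, 1, 0, 1], 25)
perf = 10 - 2.0 * (orient * load) + rng.normal(scale=1.0, size=100)

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(100), orient, load, orient * load])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(round(beta[3], 2))  # interaction coefficient, near the simulated -2
```

    A significance test would then compare this coefficient to its standard error, which is how "no such interaction" groups would fail to reach significance.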