    HapticDive: An Intuitive Warning System for Underwater Users

    All divers, regardless of skill or activity, are constantly at risk of decompression sickness; mild symptoms often go ignored, and the condition can be deadly if left untreated. Currently, divers receive training and carry a dive computer, or a combination of a depth gauge and a dive watch, to monitor their dives and avoid such situations. However, this equipment does not warn users when they are in danger of decompression sickness, since users must track their own ascent rates and since shallow-water divers often carry minimal equipment. This work proposes an application called HapticDive that tracks a user’s depth in relation to the time spent underwater. The application paces the user’s ascent to the surface by providing “stop” signals as an audio-visual combination, so that users avoid experiencing “the bends” (i.e., decompression sickness symptoms). HapticDive aims to provide the foundation for a cost-effective application that warns divers, especially surface-supported divers, free divers, and general shallow-water divers, when they are at risk of decompression sickness, so they may avoid symptoms.
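
    The abstract describes tracking a diver’s depth against elapsed time and pacing the ascent with “stop” signals. A minimal sketch of that idea is shown below, assuming a simple ascent-rate check against a 9 m/min limit; the AscentMonitor class, the threshold, and the sampling interface are illustrative assumptions and do not reflect HapticDive’s actual decompression model.

    # Minimal sketch of the ascent monitoring described above: track depth
    # against time and emit a "stop" signal when the diver rises too fast.
    # The 9 m/min limit, class name, and sampling interface are assumptions
    # for illustration, not HapticDive's actual decompression model.
    from dataclasses import dataclass, field

    @dataclass
    class AscentMonitor:
        max_ascent_rate_m_per_min: float = 9.0        # assumed safe ascent rate
        samples: list = field(default_factory=list)   # (time_s, depth_m) pairs

        def add_sample(self, time_s: float, depth_m: float) -> bool:
            """Record a depth sample; return True if a stop signal is needed."""
            self.samples.append((time_s, depth_m))
            if len(self.samples) < 2:
                return False
            (t0, d0), (t1, d1) = self.samples[-2], self.samples[-1]
            dt_min = (t1 - t0) / 60.0
            if dt_min <= 0:
                return False
            ascent_rate = (d0 - d1) / dt_min          # positive when rising
            return ascent_rate > self.max_ascent_rate_m_per_min

    # Example: rising 3 m in 10 s (18 m/min) exceeds the assumed limit.
    monitor = AscentMonitor()
    monitor.add_sample(0, 12.0)
    print(monitor.add_sample(10, 9.0))                # True -> signal a stop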

    Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed

    Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build beautiful interfaces with real-time feedback. There are various techniques to quickly recognize sketches into ten or twenty classes. However, for much larger datasets of sketches drawn from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and a trained model is used to classify the incoming sketch. Sketches with an accuracy less than a threshold value go through a second stage of geometric recognition. In this second, geometric stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process significantly reduces the time taken to classify such large datasets of sketches and increases both the accuracy and precision of the recognition.
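
    As a rough illustration of the staged recognition described above, the sketch below runs a fast gesture-feature classifier first and falls back to shape-specific geometric recognizers only when the first-stage result is below a threshold; the feature computations, the fast_model.predict interface, and the recognizer objects are assumptions for illustration, not the paper’s implementation.

    # Rough sketch of the two-stage cascade: a fast gesture-feature classifier
    # runs first, and only low-confidence sketches fall through to slower
    # shape-specific geometric recognizers. Feature choices, the fast_model
    # interface, and the recognizer objects are illustrative assumptions.
    import math

    def gesture_features(points):
        """Compute a few Rubine-style gesture features from (x, y) points."""
        path_len = sum(math.dist(points[i], points[i + 1])
                       for i in range(len(points) - 1))
        endpoint_dist = math.dist(points[0], points[-1])
        return [path_len, endpoint_dist, endpoint_dist / (path_len or 1.0)]

    def classify_sketch(points, fast_model, geometric_recognizers, threshold=0.8):
        """Stage 1: fast statistical classification; Stage 2: geometric fallback."""
        label, confidence = fast_model.predict(gesture_features(points))
        if confidence >= threshold:
            return [(label, confidence)]
        # Stage 2: match the sketch against predefined shape descriptions and
        # return candidate classes with their confidence values.
        candidates = [(name, recognizer.match(points))
                      for name, recognizer in geometric_recognizers.items()]
        return sorted(candidates, key=lambda c: c[1], reverse=True)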

    A Fine Motor Skill Classifying Framework to Support Children's Self-Regulation Skills and School Readiness

    Children’s self-regulation skills predict their school readiness and social behaviors, and assessing these skills enables parents and teachers to target areas for improvement or prepare children to enter school ready to learn and achieve. To assess children’s fine motor skills, educators currently either determine the correctness of their shape drawings or measure their drawing time durations through paper-based assessments. However, these methods involve human experts manually assessing children’s fine motor skills, which is time consuming and prone to human error and bias. As many children use sketch-based applications on mobile and tablet devices, computer-based fine motor skill assessment has high potential to overcome the limitations of paper-based assessments. Furthermore, sketch recognition technology can offer more detailed, accurate, and immediate drawing-skill information than paper-based assessments, such as drawing time or curvature differences. While a number of educational sketch applications exist for teaching children how to sketch, they lack the ability to assess children’s fine motor skills and have not validated the traditional assessment methods in tablet environments. We introduce a fine motor skill classifying framework based on children’s digital drawings on tablet computers. The framework contains two fine motor skill classifiers and a sketch-based educational interface (EasySketch). The fine motor skill classifiers are: (1) KimCHI, a classifier that determines children’s fine motor skills based on their overall drawing skills, and (2) KimCHI2, a classifier that determines children’s fine motor skills based on their curvature- and corner-drawing skills. Our classifiers determine children’s fine motor skills by generating 131 sketch features that analyze their drawing ability (e.g., the DCR sketch feature can determine their curvature-drawing skills). We first implemented the KimCHI classifier, which determines children’s fine motor skills based on their overall drawing skills. From our evaluation with 10-fold cross-validation, we found that the classifier can determine children’s fine motor skills with an f-measure of 0.904. We then implemented the KimCHI2 classifier, which determines children’s fine motor skills based on their curvature- and corner-drawing skills. From our evaluation with 10-fold cross-validation, we found that the classifier can determine children’s curvature-drawing skills with an f-measure of 0.82 and their corner-drawing skills with an f-measure of 0.78. The KimCHI2 classifier outperformed the KimCHI classifier in the fine motor skill evaluation. EasySketch is a sketch-based educational interface that (1) determines children’s fine motor skills based on their drawing skills and (2) guides children in drawing basic shapes, such as alphabet letters or numbers, based on their learning progress. When we evaluated our interface with children, it determined children’s fine motor skills more accurately than the conventional methodology, with f-measures of 0.907 and 0.744, respectively. Furthermore, children improved their drawing skills in response to our pedagogical feedback.
    Finally, we present our findings that sketch features (DCR and the Polyline Test) can explain children’s fine motor skill developmental stages. From the sketch feature distributions for each age group, we found that children show notable fine motor skill development from age 5.
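
    To make the feature-based approach concrete, the sketch below computes one curvature-related feature in the spirit of the DCR feature mentioned above, assuming the common definition of the maximum direction change along a stroke divided by the average direction change; the function and definition are illustrative, and the framework’s full 131-feature set and trained KimCHI/KimCHI2 classifiers are not reproduced here.

    # Illustrative computation of one curvature-related feature in the spirit
    # of DCR, taken here as the maximum direction change along a stroke divided
    # by the average direction change. This definition is an assumption for
    # illustration; the full 131-feature set and trained classifiers are not
    # reproduced here.
    import math

    def direction_change_ratio(points):
        """Compute a DCR-style feature from a stroke given as (x, y) points."""
        angles = [math.atan2(y2 - y1, x2 - x1)
                  for (x1, y1), (x2, y2) in zip(points, points[1:])]
        changes = [abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
                   for a1, a2 in zip(angles, angles[1:])]
        if not changes:
            return 0.0
        return max(changes) / ((sum(changes) / len(changes)) or 1.0)

    # A smooth arc yields a ratio near 1, while a stroke with one sharp corner
    # yields a much larger ratio; such features can then feed the classifiers.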