7 research outputs found
Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed
Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build beautiful interfaces with real-time feedback. Various techniques exist to quickly recognize sketches into ten or twenty classes. However, for much larger datasets of sketches drawn from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition.
In the initial stage, gesture-based feature values are calculated, and the trained model is used to classify the incoming sketch. Sketches classified with a confidence below a threshold value go through a second stage of geometric recognition techniques. In this second stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process significantly reduces the time taken to classify such large datasets of sketches and increases both the accuracy and precision of the recognition.
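The two-stage cascade described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the threshold value, the toy gesture model, and the shape recognizers are all assumed stand-ins, since the abstract does not specify them.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff between stage 1 and stage 2

def gesture_stage(sketch):
    """Stage 1: cheap gesture-based features -> (label, confidence).
    Toy rule: few points suggest a simple stroke we can label confidently."""
    if len(sketch) <= 4:
        return "line", 0.95
    return "unknown", 0.30

def geometric_stage(sketch, shape_recognizers):
    """Stage 2: match the (segmented) sketch against shape descriptions,
    returning every candidate class ranked by confidence."""
    candidates = [(name, rec(sketch)) for name, rec in shape_recognizers.items()]
    return sorted(candidates, key=lambda c: c[1], reverse=True)

def classify_sketch(sketch, shape_recognizers):
    """Run stage 1; fall back to stage 2 only when confidence is low."""
    label, confidence = gesture_stage(sketch)
    if confidence >= CONFIDENCE_THRESHOLD:
        return [(label, confidence)]
    return geometric_stage(sketch, shape_recognizers)

# Toy shape recognizers, each returning a confidence in [0, 1].
recognizers = {
    "circle": lambda pts: 0.9 if len(pts) > 8 else 0.2,
    "square": lambda pts: 0.6 if len(pts) > 8 else 0.1,
}
```

The design point is that the expensive geometric stage runs only on the sketches the fast gesture stage is unsure about, which is what keeps large-class-count classification tractable.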
Creating guidelines for landscape drawing in digital media
Eric Roldan Roa: a study on the impact of group formation in game-based collaborative activities for teaching mathematics along with music
https://www.ester.ee/record=b5366807*es
Sketchography - Automatic Grading of Map Sketches for Geography Education
Geography is a vital classroom subject that teaches students about the physical features of the planet we live on. Despite the importance of geographic knowledge, almost 75% of 8th graders scored below proficient in geography on the 2014 National Assessment of Educational Progress. Sketchography is a pen-based intelligent tutoring system that provides real-time feedback to students learning the locations, directions, and topography of rivers around the world. Sketchography uses sketch recognition and artificial intelligence to understand the user's sketched intentions. As sketches are inherently messy, and even the most expert geographer will draw only a close approximation of a river's flow, data has been gathered from both novice and expert sketchers. This data, in combination with professors' grading rubrics and statistically driven AI algorithms, provides real-time automatic grading that is similar to a human grader's score. Results show the system to be 94.64% accurate compared to human grading.
Evaluation of Conceptual Sketches on Stylus-Based Devices
Design sketching is an important tool for designers and creative professionals to express their ideas and thoughts in a visual medium. Because it is a critical and versatile skill for engineering students, the course is often taught in universities with pen and paper. However, this traditional pedagogy is limited by the availability of human instructors for feedback. Moreover, students with low self-efficacy do not learn efficiently in a traditional learning environment.
Intelligent interfaces can address this problem by mimicking the feedback given by an instructor, assessing student-drawn sketches to give students insight into the areas they need to improve. PerSketchTivity is an intelligent tutoring system that allows students to practice their drawing fundamentals and gives them real-time assessment and feedback. This research deals with finding the evaluation metrics that enable us to grade students from their sketch data. We work with seven metrics and analyze how each of them contributes to deciding the quality of a sketch. The main contribution of this research is to identify the features of a sketch that can distinguish a good-quality sketch from a poor one, and to design a grading metric that gives a final score between 0 and 1 to user sketches. Using these features and our grading metric, we grade all the sketches of students and experts.
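One plausible shape for a grading metric that combines several per-metric scores into a single grade in [0, 1] is a clamped weighted average. The seven metric names and weights below are illustrative placeholders, not the actual metrics or weighting used in this research.

```python
# Hypothetical metric names and weights; weights sum to 1.0 so the
# final grade stays in [0, 1].
METRIC_WEIGHTS = {
    "stroke_smoothness": 0.2,
    "proportion_accuracy": 0.2,
    "line_straightness": 0.15,
    "corner_sharpness": 0.15,
    "speed_consistency": 0.1,
    "pressure_consistency": 0.1,
    "overall_tidiness": 0.1,
}

def grade_sketch(metric_scores):
    """Weighted average of normalized metric scores; result is in [0, 1]."""
    total = 0.0
    for name, weight in METRIC_WEIGHTS.items():
        score = metric_scores[name]
        total += weight * min(max(score, 0.0), 1.0)  # clamp each score
    return total
```

A perfect sketch (all metrics at 1.0) grades to 1.0 and an entirely poor one to 0.0, matching the 0-to-1 scale the abstract describes.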
A Fine Motor Skill Classifying Framework to Support Children's Self-Regulation Skills and School Readiness
Children's self-regulation skills predict their school readiness and social behaviors, and assessing these skills enables parents and teachers to target areas for improvement or prepare children to enter school ready to learn and achieve.
To assess children's fine motor skills, educators currently either determine the correctness of children's shape drawings or measure their drawing time durations through paper-based assessments. However, these methods involve human experts manually assessing children's fine motor skills, which is time-consuming and prone to human error and bias. As many children use sketch-based applications on mobile and tablet devices, computer-based fine motor skill assessment has high potential to overcome the limitations of paper-based assessments. Furthermore, sketch recognition technology can offer more detailed, accurate, and immediate drawing-skill information, such as drawing time or curvature difference, than paper-based assessments. While a number of educational sketch applications exist for teaching children how to sketch, they lack the ability to assess children's fine motor skills and have not validated the traditional methods in tablet environments.
We introduce our fine motor skill classifying framework based on children's digital drawings on tablet computers. The framework contains two fine motor skill classifiers and a sketch-based educational interface (EasySketch). The classifiers are: (1) KimCHI, which determines children's fine motor skills based on their overall drawing skills, and (2) KimCHI2, which determines children's fine motor skills based on their curvature- and corner-drawing skills. The classifiers determine children's fine motor skills by generating 131 sketch features that analyze their drawing ability (e.g., the DCR sketch feature can determine their curvature-drawing skills).
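As a rough illustration of what a direction-change sketch feature can look like, the function below computes the ratio of the maximum to the mean absolute change in stroke direction between consecutive segments. This is a toy in the spirit of such features; the exact definitions of the 131 features, including DCR, are not reproduced here.

```python
import math

def direction_change_ratio(points):
    """Illustrative direction-change feature for one stroke.

    points: list of (x, y) tuples sampled along the stroke.
    Returns max / mean of the absolute direction changes between
    consecutive segments; 0.0 for a straight or degenerate stroke.
    """
    # Heading of each segment between consecutive points.
    directions = [
        math.atan2(y2 - y1, x2 - x1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    ]
    # Absolute change in heading at each interior point.
    changes = [abs(b - a) for a, b in zip(directions, directions[1:])]
    if not changes or sum(changes) == 0:
        return 0.0
    return max(changes) / (sum(changes) / len(changes))
```

A straight polyline yields 0.0, while a stroke whose direction changes are concentrated at a few sharp corners yields a high ratio, which is the kind of signal that separates corner drawing from smooth curvature drawing.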
We first implemented the KimCHI classifier, which determines children's fine motor skills based on their overall drawing skills. From our evaluation with 10-fold cross-validation, we found that the classifier can determine children's fine motor skills with an f-measure of 0.904. We then implemented the KimCHI2 classifier, which determines children's fine motor skills based on their curvature- and corner-drawing skills. From our evaluation with 10-fold cross-validation, we found that this classifier can determine children's curvature-drawing skills with an f-measure of 0.82 and corner-drawing skills with an f-measure of 0.78. The KimCHI2 classifier outperformed the KimCHI classifier during the fine motor skill evaluation.
EasySketch is a sketch-based educational interface that (1) determines children's fine motor skills based on their drawing skills and (2) teaches children how to draw basic shapes, such as alphabet letters or numbers, based on their learning progress. When we evaluated our interface with children, it determined children's fine motor skills more accurately than the conventional methodology, with f-measures of 0.907 and 0.744, respectively. Furthermore, children improved their drawing skills with our pedagogical feedback.
Finally, we introduce our findings that sketch features (DCR and Polyline Test) can explain children's fine motor skill developmental stages. From the sketch feature distributions for each age group, we found that children show notable fine motor skill development beginning at age 5.
Eye Tracking Methods for Analysis of Visuo-Cognitive Behavior in Medical Imaging
Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing device technology and machine intelligence. Eye tracking bio-sensors, for example, can measure psycho-physiological response through change events in the configuration of the human eye. These events include positional changes such as visual fixations, saccadic movements, and scanpaths, and non-positional changes such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli.
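The positional events mentioned above are typically separated by velocity: slow inter-sample movement is a fixation, fast movement a saccade. Below is a minimal velocity-threshold (I-VT style) labeler; the threshold value and the sample format are assumptions for illustration, not parameters from this study.

```python
# Assumed units: gaze positions in pixels at a fixed sampling rate, so
# inter-sample distance is proportional to angular velocity.
SACCADE_VELOCITY_THRESHOLD = 100.0  # px per sample; illustrative value

def label_gaze_samples(samples):
    """Label each inter-sample movement as 'fixation' or 'saccade'.

    samples: list of (x, y) gaze positions, equally spaced in time.
    Returns a list of len(samples) - 1 labels.
    """
    labels = []
    for (x1, y1), (x2, y2) in zip(samples, samples[1:]):
        velocity = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        labels.append(
            "saccade" if velocity > SACCADE_VELOCITY_THRESHOLD else "fixation"
        )
    return labels
```

Runs of consecutive "fixation" labels can then be merged into fixation events, from which features such as fixation duration and scanpath structure are derived.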
In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven Radiology residents) for a corpus of 100 screening mammograms (comprising cases of varied pathology and breast parenchyma density).
We proposed novel features and gaze analysis techniques, which help to encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers' identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision.
Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for characterizing a mammographic case and an image reader's diagnostic performance. Lastly, features characterizing eye movements can be utilized for biometric identification. These findings are impactful for real-time performance monitoring and personalized intelligent training and evaluation systems in screening mammography. Further, the developed algorithms are applicable in other domains involving high-risk visual tasks.