
    SketchyDynamics: A Library for the Development of Physics Simulation Applications with Sketch-Based Interfaces

    Sketch-based interfaces provide a powerful, natural, and intuitive way for users to interact with an application. By combining a sketch-based interface with a physically simulated environment, an application offers users the means to rapidly sketch a set of objects, as if drawing them on a piece of paper, and see how these objects behave in a simulation. In this paper we present SketchyDynamics, a library intended to facilitate the creation of applications by rapidly providing them with a sketch-based interface and physics simulation capabilities. SketchyDynamics was designed to be versatile and customizable but also simple. In fact, a simple application where the user draws objects that are immediately simulated, colliding with each other and reacting to the specified physical forces, can be created with only 3 lines of code. To validate SketchyDynamics' design choices, we also present some details of the usability evaluation that was conducted with a proof-of-concept prototype.
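
    SketchyDynamics' actual API is not reproduced here. As a rough, hypothetical illustration of the underlying idea (a recognized stroke immediately becomes a simulated rigid body), the following Python sketch uses the pymunk 2D physics engine, with a circle-fitting step standing in for full sketch recognition:

import pymunk  # third-party 2D physics engine, used purely for illustration

def add_sketched_circle(space, center, radius):
    """Turn a stroke recognized as a circle into a dynamic rigid body."""
    moment = pymunk.moment_for_circle(1.0, 0, radius)
    body = pymunk.Body(mass=1.0, moment=moment)
    body.position = center
    space.add(body, pymunk.Circle(body, radius))
    return body

space = pymunk.Space()
space.gravity = (0, -981)            # arbitrary units, y pointing up
ball = add_sketched_circle(space, center=(50, 200), radius=15)
for _ in range(60):                  # simulate one second at 60 Hz
    space.step(1 / 60)
print(ball.position)                 # the sketched object has fallen under gravity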

    Relating Objective and Subjective Performance Measures for AAM-based Visual Speech Synthesizers

    We compare two approaches for synthesizing visual speech using Active Appearance Models (AAMs): one that utilizes acoustic features as input, and one that utilizes a phonetic transcription as input. Both synthesizers are trained using the same data and their performance is measured using both objective and subjective testing. We investigate the impact of likely sources of error in the synthesized visual speech by introducing typical errors into real visual speech sequences and subjectively measuring the perceived degradation. When only a small region (e.g. a single syllable) of ground-truth visual speech is incorrect, we find that the subjective score for the entire sequence is lower than that of sequences generated by our synthesizers. This observation motivates further consideration of an often-ignored issue: to what extent are subjective measures correlated with objective measures of performance? Significantly, we find that the most commonly used objective measures of performance are not necessarily the best indicators of viewer perception of quality. We empirically evaluate alternatives and show that the cost of a dynamic time warp of synthesized visual speech parameters to the respective ground-truth parameters is a better indicator of subjective quality.
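
    The proposed objective measure is the accumulated cost of a dynamic time warp (DTW) aligning the synthesized parameter trajectory to the ground truth. A minimal sketch follows; the Euclidean local cost and array layout are assumptions, not the authors' exact implementation:

import numpy as np

def dtw_cost(synth: np.ndarray, truth: np.ndarray) -> float:
    """Accumulated DTW cost between two (frames x params) trajectories."""
    n, m = len(synth), len(truth)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(synth[i - 1] - truth[j - 1])  # local Euclidean cost
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return float(acc[n, m])

# Toy usage: a time-shifted copy of the ground truth should score far lower
# than an unrelated random trajectory.
rng = np.random.default_rng(0)
truth = rng.standard_normal((50, 8))
shifted = np.roll(truth, 3, axis=0)
print(dtw_cost(shifted, truth) < dtw_cost(rng.standard_normal((50, 8)), truth))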

    Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed

    Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build beautiful interfaces with real-time feedback. There are various techniques to quickly recognize sketches into ten or twenty classes. However, for much larger datasets of sketches from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and a trained model is used to classify the incoming sketch. Sketches classified with confidence below a threshold value go through a second stage of geometric recognition techniques. In this second, geometric stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process significantly reduces the time taken to classify such large datasets of sketches and increases both the accuracy and precision of the recognition. A sketch of the cascade logic is shown below.
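
    A minimal sketch of the two-stage cascade described above; the classifier interfaces, the threshold value, and the recognizer registry are illustrative assumptions, not the paper's own feature set and recognizers:

from typing import Callable, Dict, List, Tuple

Stroke = List[Tuple[float, float]]           # a sketch as a list of (x, y) points

def cascade_classify(
    sketch: Stroke,
    gesture_stage: Callable[[Stroke], List[Tuple[str, float]]],
    geometric_recognizers: Dict[str, Callable[[Stroke], float]],
    confidence_threshold: float = 0.8,
) -> List[Tuple[str, float]]:
    """Return (class, confidence) pairs, best first."""
    # Stage 1: fast classification from gesture-based feature values.
    ranked = sorted(gesture_stage(sketch), key=lambda p: p[1], reverse=True)
    if ranked and ranked[0][1] >= confidence_threshold:
        return ranked                         # confident enough: stop early
    # Stage 2: slower geometric matching against predefined shape descriptions.
    scores = [(name, rec(sketch)) for name, rec in geometric_recognizers.items()]
    return sorted(scores, key=lambda p: p[1], reverse=True)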

    Algorithmic Efficiency of Stroke Gesture Recognizers: a Comparative Analysis

    Gesture interaction is today recognized as a natural, intuitive way to execute commands in an interactive system. For this purpose, several stroke gesture recognizers have become increasingly efficient at recognizing end-user gestures from a training set. Although these algorithms report their recognition rates, there is a lack of knowledge about which algorithm is the most recommendable for a given use. Likewise, the experiments reported for the most successful algorithms have been carried out under different conditions, resulting in non-comparable results. To better understand their respective algorithmic efficiency, this paper compares the recognition rate, the error rate, and the recognition time of five reference stroke gesture recognition algorithms, i.e., $1, $P, $Q, !FTL, and Penny Pincher, on three diverse gesture sets, i.e., NicIcon, HHReco, and Utopiano Alphabet, in a user-independent scenario. Similar conditions were applied to all algorithms so that they were executed under the same characteristics. For the algorithms studied, the method evaluated the error rate and the recognition rate, as well as the execution time of each algorithm. A software testing environment was developed in JavaScript to perform the comparative analysis. The results of this analysis help recommend the recognizer that turns out to be the most efficient in each setting. !FTL (NLSD) achieves the best recognition rate and is the most efficient algorithm for the HHReco and NicIcon datasets. However, Penny Pincher was the fastest algorithm for the HHReco dataset. Finally, $1 obtained the best recognition rate for the Utopiano Alphabet dataset.
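
    A minimal sketch of the kind of comparison harness this implies: each recognizer is evaluated user-independently (train on all users but one, test on the held-out user) while recognition rate and per-gesture recognition time are recorded. The recognizer interface below is an assumption, and the paper's own testbed was written in JavaScript rather than Python:

import time
from statistics import mean

def evaluate(recognizer, data_by_user):
    """data_by_user: {user: [(stroke, label), ...]}.
    Returns (recognition_rate, avg_seconds_per_gesture)."""
    correct, total, times = 0, 0, []
    for held_out in data_by_user:
        # User-independent split: train on every user except the held-out one.
        train = [s for u, samples in data_by_user.items() if u != held_out
                 for s in samples]
        recognizer.train(train)                          # assumed interface
        for stroke, label in data_by_user[held_out]:
            start = time.perf_counter()
            prediction = recognizer.classify(stroke)     # assumed interface
            times.append(time.perf_counter() - start)
            correct += (prediction == label)
            total += 1
    return correct / total, mean(times)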

    A Model for Synthesizing a Combined Verbal and Nonverbal Behavior Based on Personality Traits in Human-Robot Interaction

    In Human-Robot Interaction (HRI) scenarios, an intelligent robot should be able to synthesize an appropriate behavior adapted to the human's profile (i.e., personality). Recent research studies have discussed the effect of personality traits on human verbal and nonverbal behaviors. The dynamic characteristics of the gestures and postures generated during nonverbal communication can differ according to personality traits, which can similarly influence the verbal content of human speech. This research tries to map human verbal behavior to a corresponding combined verbal and nonverbal robot behavior based on the extraversion-introversion personality dimension. We explore the human-robot personality matching aspect and the similarity-attraction principle, in addition to the different effects on interaction of the adapted combined robot behavior expressed through speech and gestures versus the adapted speech-only robot behavior. Experiments with the humanoid NAO robot are reported.
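
    As a purely illustrative sketch of how the extraversion-introversion dimension could be mapped onto combined verbal and nonverbal behavior parameters (the parameter names and the linear mapping are assumptions, not the authors' model):

from dataclasses import dataclass

@dataclass
class RobotBehavior:
    speech_rate: float        # words per minute (verbal channel)
    pitch_variation: float    # 0..1, higher = livelier prosody
    gesture_amplitude: float  # 0..1, higher = wider gestures (nonverbal channel)
    gesture_frequency: float  # gestures per utterance

def behavior_for(extraversion: float) -> RobotBehavior:
    """extraversion in [0, 1]; 0 = strongly introverted, 1 = strongly extraverted.
    Per the similarity-attraction principle, the robot mirrors the user's level."""
    e = max(0.0, min(1.0, extraversion))
    return RobotBehavior(
        speech_rate=120 + 60 * e,       # extraverts get faster speech
        pitch_variation=0.3 + 0.5 * e,
        gesture_amplitude=0.2 + 0.7 * e,
        gesture_frequency=1 + 3 * e,
    )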

    How to Synthesize a Large-Scale and Trainable Micro-Expression Dataset?

    This paper does not contain technical novelty but introduces our key discoveries in a data generation protocol, a database, and insights. We aim to address the lack of large-scale datasets in micro-expression (MiE) recognition due to the prohibitive cost of data collection, which renders large-scale training less feasible. To this end, we develop a protocol to automatically synthesize large-scale MiE training data that allows us to train improved recognition models for real-world test data. Specifically, we discover three types of Action Units (AUs) that can constitute trainable MiEs. These AUs come from real-world MiEs, early frames of macro-expression videos, and the relationship between AUs and expression categories defined by human expert knowledge. With these AUs, our protocol then employs large numbers of face images of various identities and an off-the-shelf face generator for MiE synthesis, yielding the MiE-X dataset. MiE recognition models are trained or pre-trained on MiE-X and evaluated on real-world test sets, where very competitive accuracy is obtained. Experimental results not only validate the effectiveness of the discovered AUs and the MiE-X dataset but also reveal some interesting properties of MiEs: they generalize across faces, are close to early-stage macro-expressions, and can be manually defined. (Comment: European Conference on Computer Vision 2022)
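
    A minimal sketch of the synthesis protocol described above: sample AUs from the three discovered sources, then drive an AU-conditioned face generator to turn a neutral face into a labeled synthetic micro-expression clip. The source and generator interfaces below are stubs assumed for illustration, not the authors' code or the exact MiE-X pipeline:

import random

def synthesize_mie_dataset(faces, au_sources, generate, clips_per_face=3,
                           n_frames=8, peak_intensity=0.3):
    """faces: neutral face images of various identities.
    au_sources: objects with sample() -> (au_vector, label); the three sources
    are real MiEs, early macro-expression frames, and expert-defined relations.
    generate(face, aus): assumed off-the-shelf AU-conditioned face generator.
    A low peak intensity keeps the synthesized expressions 'micro'."""
    dataset = []
    for face in faces:
        for _ in range(clips_per_face):
            aus, label = random.choice(au_sources).sample()  # assumed interface
            # Ramp AU activation from neutral up to a small peak (aus is assumed
            # to be a numeric vector that scales by a scalar).
            frames = [generate(face, aus * (peak_intensity * t / (n_frames - 1)))
                      for t in range(n_frames)]
            dataset.append((frames, label))
    return dataset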