1,683 research outputs found

    Bayesian Methods for Intelligent Task Assignment in Crowdsourcing Systems


    Speech-driven Animation with Meaningful Behaviors

    Conversational agents (CAs) play an important role in human-computer interaction. Creating believable movements for CAs is challenging, since the movements have to be meaningful and natural, reflecting the coupling between gestures and speech. Past studies have mainly relied on rule-based or data-driven approaches. Rule-based methods focus on creating meaningful behaviors that convey the underlying message, but the gestures cannot be easily synchronized with speech. Data-driven approaches, especially speech-driven models, can capture the relationship between speech and gestures, but they create behaviors that disregard the meaning of the message. This study proposes to bridge the gap between these two approaches, overcoming their limitations. The approach builds a dynamic Bayesian network (DBN) in which a discrete variable is added to condition the behaviors on an underlying constraint. The study implements and evaluates the approach with two constraints: discourse functions and prototypical behaviors. By constraining on discourse functions (e.g., questions), the model learns the characteristic behaviors associated with a given discourse class, learning the rules from the data. By constraining on prototypical behaviors (e.g., head nods), the approach can be embedded in a rule-based system as a behavior realizer, creating trajectories that are tightly synchronized with speech. The study proposes a DBN structure and a training approach that (1) model the cause-effect relationship between the constraint and the gestures, (2) initialize the state configuration models, increasing the range of the generated behaviors, and (3) capture the differences in behaviors across constraints by enforcing sparse transitions between shared and exclusive states per constraint. Objective and subjective evaluations demonstrate the benefits of the proposed approach over an unconstrained model. Comment: 13 pages, 12 figures, 5 tables
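    The core idea of conditioning hidden gesture states on a discrete constraint can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual DBN or its learned parameters): each constraint value selects its own transition matrix over a small set of shared and exclusive states, and a state sequence is sampled from it.

    ```python
    import numpy as np

    N_STATES = 4  # shared + constraint-exclusive gesture states (illustrative)

    # Hypothetical transition matrices, one per discrete constraint value.
    # In the paper these would be learned from data with sparsity enforced
    # across constraints; the numbers here are made up for illustration.
    TRANSITIONS = {
        "question": np.array([[0.6, 0.2, 0.1, 0.1],
                              [0.1, 0.6, 0.2, 0.1],
                              [0.1, 0.1, 0.6, 0.2],
                              [0.2, 0.1, 0.1, 0.6]]),
        "head_nod": np.array([[0.1, 0.8, 0.05, 0.05],
                              [0.8, 0.1, 0.05, 0.05],
                              [0.4, 0.4, 0.1,  0.1],
                              [0.4, 0.4, 0.1,  0.1]]),
    }

    def sample_state_sequence(constraint, length, seed=0):
        """Sample a hidden gesture-state sequence conditioned on the constraint."""
        rng = np.random.default_rng(seed)
        A = TRANSITIONS[constraint]
        states = [0]
        for _ in range(length - 1):
            states.append(int(rng.choice(N_STATES, p=A[states[-1]])))
        return states

    seq = sample_state_sequence("head_nod", 20)
    ```

    In the full model, each hidden state would also carry an emission distribution over gesture trajectories driven by speech features; here only the constrained state dynamics are shown.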

    Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure

    As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high-priority problem. Accountability requires a deep understanding of system behavior and its failures. Current evaluation methods, such as single-score error metrics and confusion matrices, provide aggregate views of system performance that hide important shortcomings. Understanding the details of failures is important for identifying pathways for refinement, communicating the reliability of systems in different settings, and specifying appropriate human oversight and engagement. Characterizing failures and shortcomings is particularly complex for systems composed of multiple machine-learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationships among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human and system-generated observations to summarize conditions of system malfunction with respect to the input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning that show how detailed performance views can be beneficial for analysis and debugging.
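    The kind of conditional failure summary described above can be sketched in a few lines. This is a hypothetical, simplified illustration of the idea (not Pandora's actual schema or tooling): each instance records an input condition, an internal component state, and whether the final output was acceptable, and failures are summarized per condition cell.

    ```python
    from collections import defaultdict

    # Illustrative records for an image-captioning pipeline; the field
    # names ("scene", "detector_missed", "caption_ok") are assumptions.
    records = [
        {"scene": "indoor",  "detector_missed": False, "caption_ok": True},
        {"scene": "indoor",  "detector_missed": True,  "caption_ok": False},
        {"scene": "outdoor", "detector_missed": False, "caption_ok": True},
        {"scene": "outdoor", "detector_missed": True,  "caption_ok": False},
        {"scene": "outdoor", "detector_missed": True,  "caption_ok": False},
    ]

    def failure_rates(records):
        """Error rate of the final output per (input condition, component state) cell."""
        counts = defaultdict(lambda: [0, 0])  # cell -> [errors, total]
        for r in records:
            key = (r["scene"], r["detector_missed"])
            counts[key][1] += 1
            counts[key][0] += int(not r["caption_ok"])
        return {k: errs / total for k, (errs, total) in counts.items()}

    rates = failure_rates(records)
    # Here rates[("outdoor", True)] == 1.0: captions fail whenever the detector misses.
    ```

    Such a view attributes final-output errors to the joint behavior of input content and internal component states, rather than reporting a single aggregate score.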

    Quality control and cost management in crowdsourcing

    By harvesting online workers’ knowledge, crowdsourcing has become an efficient and cost-effective way to obtain large amounts of labeled data for human intelligence tasks (HITs), such as entity resolution and sentiment analysis. Due to the open nature of crowdsourcing, online workers with different knowledge backgrounds may provide conflicting labels for the same task. Therefore, it is common practice to perform a pre-determined number of assignments, either per task or across all tasks, and aggregate the collected labels to infer each task's true label. This model can suffer from poor accuracy when the budget is too low, or waste resources when it is too high. In addition, because worker labels are usually aggregated by voting, crowdsourcing systems are vulnerable to strategic Sybil attacks, in which an attacker manipulates several robot Sybil workers to submit coordinated randomized labels that outvote independent workers, while applying various strategies to evade Sybil detection. In this thesis, we are specifically interested in providing guaranteed aggregation accuracy with minimum worker cost and defending against strategic Sybil attacks. In our first work, we assume that workers are independent and honest. By enforcing a specified accuracy threshold on aggregated labels and minimizing the worker cost under this requirement, we formulate the dual requirements of quality and cost as a Guaranteed Accuracy Problem (GAP) and present an efficient task assignment algorithm for solving it. In our second work, we assume that strategic Sybil attackers may coordinate Sybil workers to obtain rewards without honestly labeling tasks, and may apply different strategies to evade detection. By camouflaging golden tasks (i.e., tasks with known true labels) from the attacker and suppressing the impact of Sybil workers and low-quality independent workers, we extend principled truth discovery to defend against strategic Sybil attacks in crowdsourcing. For both works, we conduct comprehensive empirical evaluations on real and synthetic datasets to demonstrate the effectiveness and efficiency of our methods.
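    The truth-discovery idea the abstract builds on can be sketched as follows. This is a minimal, generic illustration (not the thesis's actual algorithm or its Sybil-defense extensions): worker weights and aggregated labels are refined together, so workers who disagree with the emerging consensus, such as randomizing Sybil workers, are down-weighted. Task and worker names are made up.

    ```python
    from collections import Counter

    labels = {  # labels[task][worker] -> submitted label (illustrative data)
        "t1": {"w1": "A", "w2": "A", "w3": "B"},
        "t2": {"w1": "B", "w2": "B", "w3": "A"},
        "t3": {"w1": "A", "w2": "A", "w3": "B"},
    }

    def truth_discovery(labels, iters=5):
        """Jointly estimate task truths and worker reliabilities."""
        workers = {w for votes in labels.values() for w in votes}
        weight = {w: 1.0 for w in workers}
        truth = {}
        for _ in range(iters):
            # Step 1: weighted-vote label estimate per task.
            for task, votes in labels.items():
                tally = Counter()
                for w, lab in votes.items():
                    tally[lab] += weight[w]
                truth[task] = tally.most_common(1)[0][0]
            # Step 2: re-estimate each worker's weight as smoothed agreement
            # with the current truth estimates (Laplace-style smoothing).
            for w in workers:
                n = sum(1 for v in labels.values() if w in v)
                agree = sum(1 for t, v in labels.items()
                            if w in v and v[w] == truth[t])
                weight[w] = (agree + 1) / (n + 2)
        return truth, weight

    truth, weight = truth_discovery(labels)
    # Worker w3 disagrees with the consensus on every task and ends up
    # with a much lower weight than w1 and w2.
    ```

    A real deployment would add the pieces the thesis describes on top of this loop: golden tasks hidden from attackers, explicit Sybil suppression, and budget-aware task assignment toward the GAP accuracy threshold.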