
    Joint-Based Action Progress Prediction

    Action understanding is a fundamental branch of computer vision, with applications ranging from surveillance to robotics. Most works deal with localizing and recognizing an action in time and space, without characterizing its evolution. Recent works have addressed the prediction of action progress, an estimate of how far the action has advanced as it is performed. In this paper, we propose to predict action progress using a different modality than previous methods: body joints. Human body joints carry precise information about human poses, which we believe offer a more lightweight and effective way of characterizing actions and, therefore, their execution. Action progress can in fact be estimated by understanding how key poses follow each other during the development of an activity. We show how an action progress prediction model can exploit body joints and be integrated with modules providing keypoint and action information, so that it can run directly from raw pixels. The proposed method is experimentally validated on the Penn Action Dataset.
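    As a rough illustration of the idea, not the authors' implementation, a joint-based progress predictor can be sketched as a recurrent regressor over per-frame joint coordinates. The 13-joint layout, LSTM backbone, layer sizes, and sigmoid head below are assumptions made only for the sketch.

    import torch
    import torch.nn as nn

    class JointProgressRegressor(nn.Module):
        # Hypothetical sketch: maps a sequence of 2-D body joints to a
        # per-frame progress estimate in [0, 1]. Layer sizes are assumed.
        def __init__(self, num_joints: int = 13, hidden: int = 128):
            super().__init__()
            self.lstm = nn.LSTM(num_joints * 2, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, joints: torch.Tensor) -> torch.Tensor:
            # joints: (batch, frames, num_joints * 2), i.e. x, y per joint
            feats, _ = self.lstm(joints)
            return torch.sigmoid(self.head(feats)).squeeze(-1)  # (batch, frames)

    # Example: per-frame progress for two 16-frame clips of 13 joints.
    model = JointProgressRegressor()
    progress = model(torch.randn(2, 16, 26))  # values in (0, 1)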

    PANEL: Challenges for multimedia/multimodal research in the next decade

    The multimedia and multimodal community has witnessed an explosive transformation in recent years, with major societal impact. Given the unprecedented deployment of multimedia devices and systems, multimedia research is critical to our ability to advance state-of-the-art technologies and to solve real-world challenges facing society and the nation. To respond to these challenges and further advance the frontiers of the field, this panel will discuss the challenges and visions that may guide research over the next ten years.

    Multiple future prediction leveraging synthetic trajectories


    Credence attributes and the quest for a higher price – A hedonic stochastic frontier approach

    Food manufacturers that offer credence attributes, whose presence cannot be determined a priori, may fail to differentiate their products effectively and achieve higher prices if asymmetric information (on the producers' side) impairs their ability to reach consumers with a higher willingness to pay. In this article, we assess whether manufacturers carrying products with credence attributes in their portfolios are able to obtain higher prices. To this end, we use a large database of yoghurt sales in Italy and a hedonic price model estimated with a stochastic frontier estimator. The results indicate that manufacturers offering more credence attributes in their portfolios are able to price their products at systematically higher levels.
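    For background on the estimator named above, a textbook hedonic stochastic frontier writes the log price as an attribute-driven frontier plus a composed error; the article's exact covariates and distributional choices may differ from this standard form:

    \ln p_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i - u_i,
    \qquad v_i \sim \mathcal{N}(0, \sigma_v^2),
    \qquad u_i \sim \mathcal{N}^{+}(0, \sigma_u^2)

    Here \mathbf{x}_i collects the product attributes (including credence attributes), v_i is symmetric noise, and the one-sided term u_i \ge 0 measures how far a realized price falls below the hedonic price frontier, which is what lets the approach separate attribute premia from pricing shortfalls.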

    Explaining autonomous driving by learning end-to-end visual attention


    PLM-IPE: A Pixel-Landmark Mutual Enhanced Framework for Implicit Preference Estimation

    In this paper, we are interested in understanding how customers perceive fashion recommendations, in particular when observing a proposed combination of garments composing an outfit. Automatically understanding how a suggested item is perceived, without any kind of active engagement, is an essential building block for interactive applications. We propose a pixel-landmark mutual enhanced framework for implicit preference estimation, named PLM-IPE, which is capable of inferring the user's implicit preferences from visual cues, without any active or conscious engagement. PLM-IPE consists of three key modules: a pixel-based estimator, a landmark-based estimator, and mutual learning-based optimization. The first two modules capture the implicit reaction of the user at the pixel level and the landmark level, respectively, while the last module transfers knowledge between the two parallel estimators. For evaluation, we collected a real-world dataset, named SentiGarment, which contains 3,345 facial reaction videos paired with suggested outfits and human-labeled reaction scores. Extensive experiments show the superiority of our model over state-of-the-art approaches.
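    As a hedged sketch of the mutual-learning component described above (the paper's actual architectures and losses are not given here; the feature sizes, scalar-score regression, and the symmetric MSE consistency term are all assumptions):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Two hypothetical parallel estimators, each regressing a scalar
    # reaction score from its own modality.
    pixel_net = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 1))
    landmark_net = nn.Sequential(nn.Linear(136, 64), nn.ReLU(), nn.Linear(64, 1))

    def mutual_loss(pixel_feat, landmark_feat, score, alpha=0.5):
        p = pixel_net(pixel_feat)        # pixel-level estimate
        q = landmark_net(landmark_feat)  # landmark-level estimate
        supervised = F.mse_loss(p, score) + F.mse_loss(q, score)
        consistency = F.mse_loss(p, q)   # each branch pulls toward the other
        return supervised + alpha * consistency

    # Example: 512-d pixel features and 68 landmarks x 2 coordinates (assumed).
    loss = mutual_loss(torch.randn(4, 512), torch.randn(4, 136), torch.randn(4, 1))
    loss.backward()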