25 research outputs found

    Visual Prosody: Facial Movements Accompanying Speech

    As we articulate speech, we usually move the head and exhibit various facial expressions. This visual aspect of speech aids understanding and helps communicate additional information, such as the speaker's mood. We quantitatively analyze the head and facial movements that accompany speech and investigate how they relate to the text's prosodic structure. We recorded several hours of speech and measured the locations of the speakers' main facial features as well as their head poses. The text was evaluated with a prosody prediction tool that identifies phrase boundaries and pitch accents. Characteristic of most speakers are simple motion patterns that are repeatedly applied in synchrony with the main prosodic events. The direction and strength of head movements vary widely from one speaker to another, yet their timing is typically well synchronized with the spoken text. A quantitative understanding of the correlations between head movements and spoken text is important for synthesizing photo-realistic talking heads, which appear much more engaging when they exhibit realistic motion patterns.

    Mitotic figure recognition: Agreement among pathologists and computerized detector

    Despite the prognostic importance of mitotic count as one of the components of the Bloom-Richardson grad…

    Introduction to Unplugged Coding for Ages 3–7 with The Coding Box Laptop

    No full text
    For a society with solid digital skills, computer science should be taught from preschool onward. On the other hand, it is inadvisable for young children to spend too much time on electronic devices. In this contribution, we present a playful, educational activity that teaches the fundamental concepts of coding in an unplugged way, that is, without using real computers. The activity is based on a wooden mock laptop that lets children take on the roles of programmer and executor, in a kind of role-playing game

    Abstract

    We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50 cm off-road truck with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.
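    The pipeline this abstract describes, a left/right image pair fed through a convolutional stack that emits a single steering value, can be sketched as follows. This is a minimal illustration, not the authors' network: the layer count, filter sizes, image resolution, and random weights are all assumptions made purely for the example.

    ```python
    import numpy as np

    def conv2d(x, w):
        """Valid 2-D cross-correlation of a single-channel image."""
        kh, kw = w.shape
        h, wd = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((h, wd))
        for i in range(h):
            for j in range(wd):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
        return out

    def steering_angle(left, right, rng):
        """Map a stereo image pair to one steering value (illustrative only)."""
        # Stack the left/right images as two input channels, as in the paper.
        x = np.stack([left, right])                 # (2, H, W)
        # Two convolution + subsampling stages with random filters
        # (the real system uses a trained 6-layer network).
        for _ in range(2):
            w = rng.standard_normal((3, 3)) * 0.1
            x = np.maximum(0.0, np.stack([conv2d(c, w) for c in x]))  # ReLU
            x = x[:, ::2, ::2]                      # crude 2x subsampling
        # Fully connected readout to a single steering value in [-1, 1].
        flat = x.ravel()
        w_out = rng.standard_normal(flat.size) * 0.01
        return float(np.tanh(flat @ w_out))

    rng = np.random.default_rng(0)
    left = rng.random((32, 48))                     # hypothetical low-res frames
    right = rng.random((32, 48))
    angle = steering_angle(left, right, rng)
    ```

    With random weights the output is meaningless; in the actual system the weights are fit by supervised learning against the human driver's recorded steering angles.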