26 research outputs found

    Non-choreographed Robot Dance

    Get PDF
    This research investigates the difficulties of enabling the humanoid robot Nao to dance to music. The focus is on creating a dance that is not predefined by the researcher but emerges from the music played to the robot. Such an undertaking cannot be fully tackled in a small-scale project. Nevertheless, rather than focusing on a subtask of the topic, this research maintains a holistic view of the subject and aims to provide a framework on which future work in this area can build. The need for this research comes from the fact that current approaches to robot dance in general, and Nao dance in particular, rely on predefined dances built by the researcher. The main goal of this project is to move away from these choreographed approaches and investigate how to make the robot dance in a non-predefined fashion. Moreover, since previous research has focused mainly on the analysis of musical beat, a secondary goal is to draw not only on the beat but on other elements of the music as well in order to create the dance.
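
    A minimal sketch of the kind of music-to-motion mapping this goal implies, assuming the librosa library and a purely hypothetical parameter set (amplitude and sharpness are illustrative names, not the thesis's design): beat times drive the timing of the dance, while loudness and brightness shape the character of the movement.

    # Illustrative sketch, not the thesis implementation: extract the beat plus
    # additional musical features and map them to abstract dance parameters
    # that a Nao motion controller could consume.
    import numpy as np
    import librosa

    def music_to_dance_parameters(audio_path):
        y, sr = librosa.load(audio_path, mono=True)

        # Beat structure: global tempo and beat locations.
        tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)

        # Non-beat features: loudness (RMS) and brightness (spectral centroid).
        rms = librosa.feature.rms(y=y)[0]
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

        # Hypothetical mapping: loudness scales movement amplitude,
        # brightness scales how sharp or staccato the motion is.
        amplitude = float(np.clip(rms.mean() / (rms.max() + 1e-9), 0.0, 1.0))
        sharpness = float(np.clip(centroid.mean() / (sr / 2), 0.0, 1.0))

        return {
            "tempo_bpm": float(np.atleast_1d(tempo)[0]),
            "beat_times": librosa.frames_to_time(beat_frames, sr=sr).tolist(),
            "amplitude": amplitude,   # 0..1, scales joint excursions
            "sharpness": sharpness,   # 0..1, scales motion speed and attack
        }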

    A robot uses its own microphone to synchronize its steps to musical beats while scatting and singing

    Full text link
    Abstract—Musical beat tracking is one of the effective technologies for human-robot interaction such as musical sessions. Since such interaction should be performed naturally in various environments, musical beat tracking for a robot should cope with noise sources such as environmental noise, its own motor noise, and its own voice, using its own microphone. This paper addresses a musical beat tracking robot that can step, scat, and sing in time with musical beats using its own microphone. To realize such a robot, we propose a robust beat tracking method based on two key techniques: spectro-temporal pattern matching and echo cancellation. The former realizes robust tempo estimation with a shorter window length and thus adapts quickly to tempo changes. The latter cancels self-generated noise from stepping, scatting, and singing. We implemented the proposed beat tracking method on Honda ASIMO. Experimental results showed ten-times-faster adaptation to tempo changes and high robustness of beat tracking against stepping, scatting, and singing noise. We also demonstrated that the robot times its steps to musical beats while scatting or singing.
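
    A minimal sketch of the general idea behind spectro-temporal pattern matching for tempo estimation, assuming a per-band onset-strength representation (the paper's exact features, window lengths, and matching criterion are not reproduced here): the recent onset pattern is correlated with time-shifted copies of itself, and the best-matching lag is taken as the inter-beat interval.

    # Sketch: pick the inter-beat lag whose time-shifted spectro-temporal
    # onset pattern best matches the original pattern.
    import numpy as np

    def estimate_beat_interval(onset_bands, frame_rate, min_bpm=60, max_bpm=180):
        """onset_bands: array (n_bands, n_frames) of onset strength per frequency band."""
        n_bands, n_frames = onset_bands.shape
        min_lag = int(frame_rate * 60.0 / max_bpm)
        max_lag = int(frame_rate * 60.0 / min_bpm)

        scores = []
        for lag in range(min_lag, max_lag + 1):
            a = onset_bands[:, lag:]
            b = onset_bands[:, :n_frames - lag]
            # Normalized correlation over the whole spectro-temporal pattern.
            num = np.sum(a * b)
            den = np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-9
            scores.append(num / den)

        best_lag = min_lag + int(np.argmax(scores))
        return best_lag / frame_rate   # inter-beat interval in seconds (60 / interval = BPM)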

    Towards an interactive framework for robot dancing applications

    Get PDF
    Internship carried out at INESC-Porto, supervised by Prof. Dr. Fabien Gouyon. Integrated master's thesis. Electrical and Computer Engineering - Major in Telecommunications. Faculdade de Engenharia, Universidade do Porto. 200

    Can can robótico

    Get PDF
    Master's thesis in Informatics Engineering (Interaction and Knowledge), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2011. Robotics has attracted growing interest due to its application in areas ranging from military to domestic settings. One area that has been greatly developed is entertainment, with robots that play the part of pets and robots that serve as dance or conversation partners. The objective of this project is to explore the entertainment area at the intersection of robotics and dance by creating a dancing robot capable of reacting to musical rhythm in real time and dancing according to the song that is playing. Most dancing robots dance by imitation, with a few exceptions such as João Manuel Oliveira's Lego robot, which reacts in real time to the music it is hearing; however, that robot's primary goal is not dance but its sound-analysis system. This document presents the steps taken towards building a robot capable of dancing the "cancan" in real time.

    Developing a Noise-Robust Beat Learning Algorithm for Music-Information Retrieval

    Get PDF
    The field of Music-Information Retrieval (Music-IR) involves the development of algorithms that can analyze musical audio and extract various high-level musical features. Many such algorithms have been developed, and systems now exist that can reliably identify features such as beat locations, tempo, and rhythm from musical sources. These features in turn are used to assist in a variety of music-related tasks, ranging from automatically creating playlists that match specified criteria to synchronizing various elements, such as computer graphics, with a performance. These Music-IR systems thus help humans to enjoy and interact with music. While current systems for identifying beats in music have found widespread utility, most of them have been developed on music that is relatively free of acoustic noise. Much of the music that humans listen to, though, is performed in noisy environments. People often enjoy music in crowded clubs and noisy rooms, but this music is much more challenging for Music-IR systems to analyze, and current beat trackers generally perform poorly on musical audio heard in such conditions. If our algorithms could accurately process this music, it would enable this music, too, to be used in applications such as automatic song selection, which are currently limited to music taken directly from professionally produced digital files that have little acoustic noise. Noise-robust beat learning algorithms would also allow for additional types of performance augmentation that create noise and thus cannot be used with current algorithms. Such a system, for instance, could aid robots in performing synchronously with music, whereas current systems are generally unable to accurately process audio heard in conjunction with noisy robot motors. This work presents a new approach for learning beats and identifying both their temporal locations and their spectral characteristics for music recorded in the presence of noise. First, datasets of musical audio recorded in environments with multiple types of noise were collected and annotated. Noise sources used for these datasets included HVAC sounds from a room, chatter from a crowded bar, and fan and motor noises from a moving robot. Second, an algorithm for learning and locating musical beats was developed which incorporates signal processing and machine learning techniques such as Harmonic-Percussive Source Separation and Probabilistic Latent Component Analysis. A representation of the musical signal called the stacked spectrogram was also utilized in order to better represent the time-varying nature of the beats. Unlike many current systems, which assume that the beat locations will be correlated with some hand-crafted features, this system learns the beats directly from the acoustic signal. Finally, the algorithm was tested against several state-of-the-art beat trackers on the audio datasets. The resulting system was found to significantly outperform the state of the art when evaluated on audio played in realistically noisy conditions. Ph.D., Electrical Engineering -- Drexel University, 201
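
    A hedged sketch of two ingredients named in the abstract, using standard library routines rather than the dissertation's own implementation: Harmonic-Percussive Source Separation (here via librosa's median-filtering HPSS) to isolate percussive content, and a "stacked spectrogram" built by concatenating a few consecutive frames per column so that each feature vector captures short-term temporal evolution. The FFT size, hop length, and stacking depth below are illustrative assumptions.

    import numpy as np
    import librosa

    def stacked_percussive_spectrogram(y, sr, n_fft=1024, hop=512, stack=4):
        # Magnitude spectrogram, then split into harmonic and percussive parts.
        S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
        _, P = librosa.decompose.hpss(S)      # keep only the percussive component

        # Stack `stack` consecutive frames into one tall column per time step,
        # so each column describes how the spectrum evolves around that frame.
        n_bins, n_frames = P.shape
        cols = [P[:, t:t + stack].reshape(-1, order="F")
                for t in range(n_frames - stack + 1)]
        return np.stack(cols, axis=1)         # shape: (n_bins * stack, n_frames - stack + 1)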

    Artech 2008: proceedings of the 4th International Conference on Digital Arts

    Get PDF
    ARTECH 2008 is the fourth international conference held in Portugal and Galicia on the topic of Digital Arts. It aims to promote contacts between Iberian and international contributors concerned with the conception, production and dissemination of Digital and Electronic Art. ARTECH brings the scientific, technological and artistic communities together, promoting interest in digital culture and its intersection with art and technology as an important research field, a common space for discussion, an exchange of experiences, a forum for emerging digital artists and a way of understanding and appreciating new forms of cultural expression. Hosted by the Portuguese Catholic University’s School of Arts (UCP-EA) in the city of Porto, ARTECH 2008 is aligned with the main commitment of the Research Center for Science and Technology of the Arts (CITAR) to promote knowledge in the field of the Arts through research and development within UCP-EA and together with the local and international community. The main areas proposed for the conference were related to sound, image, video, music, multimedia and other new-media topics, in the context of emerging practices of artistic creation. Although not exclusive, the main topics of the conference are usually: Art and Science; Audio-Visual and Multimedia Design; Creativity Theory; Electronic Music; Generative and Algorithmic Art; Interactive Systems for Artistic Applications; Media Art History; Mobile Multimedia; Net Art and Digital Culture; New Experiences with New Media and New Applications; Tangible and Gesture Interfaces; Technology in Art Education; Virtual Reality and Augmented Reality. The contribution from the international community was extremely gratifying, resulting in the submission of 79 original works (long papers, short papers and installation proposals) from 22 countries. Our Scientific Committee reviewed these submissions thoroughly, resulting in an acceptance rate of 73% and a diverse and promising body of work presented in this book of proceedings. This compilation of articles provides an overview of the state of the art as well as a glimpse of new tendencies in the field of Digital Arts, with special emphasis on the topics: Sound and Music Computing; Technology-Mediated Dance; Collaborative Art Performance; Digital Narratives; Media Art and Creativity Theory; Interactive Art; Audiovisual and Multimedia Design.

    A Biped Robot that Keeps Steps in Time with Musical Beats while Listening to Music with Its Own Ears

    No full text
    Abstract — We aim at enabling a biped robot to interact with humans through real-world music in daily-life environments, e.g., to autonomously keep its steps (stamps) in time with musical beats. To achieve this, the robot should be able to robustly predict beat times in real time while listening to a musical performance with its own ears (head-embedded microphones). However, this has not previously been addressed in most studies on music-synchronized robots due to the difficulty of predicting beat times in real-world music. To solve this problem, we implemented a beat-tracking method developed in the field of music information processing. The predicted beat times are then used by a feedback-control method that adjusts the robot's step intervals to synchronize its steps with the beats. The experimental results show that the robot can adjust its steps to the beat times as the tempo changes. After a tempo change, the resulting robot needed about 25 s to recognize the new tempo and synchronize its steps.
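
    A minimal sketch of the kind of feedback control described, in which the commanded step interval is nudged so that the next footfall lands on the next predicted beat; the gains, limits, and interface to the walking controller below are assumptions, not the paper's values.

    def next_step_interval(nominal_interval, last_step_time, next_beat_time,
                           beat_period, gain=0.3,
                           min_interval=0.4, max_interval=1.2):
        # Phase error: how far the upcoming step would miss the predicted beat.
        planned_step = last_step_time + nominal_interval
        error = next_beat_time - planned_step

        # Proportional correction toward the beat, with a gentle pull of the
        # step interval toward the estimated beat period.
        interval = nominal_interval + gain * error
        interval += 0.1 * (beat_period - interval)

        # Clamp to what the biped can physically execute.
        return max(min_interval, min(max_interval, interval))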

    Towards a framework for socially interactive robots

    Get PDF
    250 p. In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots and their roles within society is gradually expanding. Robots endowed with social skills are intended for a range of applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented here aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), showing a distinctive personality and character, and learning social competences. In this doctoral thesis we try to contribute our grain of sand to the basic questions that arise when thinking about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work has been carried out in two phases: in the first, we focused on exploring, from a practical point of view, several ways in which humans communicate with robots in a natural manner; in the second, we investigated how social robots should act with the user. With respect to the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots and a humanoid-robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding. In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions. The validation of the different development stages of our social robots has been carried out through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing.
In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they have been developed, in order to improve their sociability.

    Casco Bay Weekly : 1 May 2003

    Get PDF
    https://digitalcommons.portlandlibrary.com/cbw_2003/1016/thumbnail.jp