From rituals to magic: Interactive art and HCI of the past, present, and future
The connection between art and technology is much tighter than is commonly recognized. The emergence of aesthetic computing in the early 2000s brought renewed focus to this relationship. In this article, we articulate how art and Human–Computer Interaction (HCI) are compatible with, and indeed essential to advancing, each other in this era, by briefly addressing components that interconnect both areas: interaction, creativity, embodiment, affect, and presence. After briefly introducing the history of interactive art, we discuss how art and HCI can contribute to one another, illustrated by contemporary examples of art in immersive environments, robotic art, and machine intelligence in art. We then identify challenges and opportunities for collaborative efforts between art and HCI. Finally, we reiterate important implications and pose future directions. This article is intended as a catalyst for discussions on the mutual benefits of collaboration between the art and HCI communities. It also aims to offer artists and researchers in this domain suggestions about where to go next.
Sensoring a Generative System to Create User-Controlled Melodies
The automatic generation of music is an emerging field of research that has attracted the attention of countless researchers, and there is consequently a broad spectrum of state-of-the-art work in the area. Many systems have been designed to facilitate collaboration between humans and machines in the generation of valuable music. This research proposes an intelligent system that generates melodies under the supervision of a user, who guides the process through a mechanical device. The device captures the movements of the user and translates them into a melody. The system is based on a Case-Based Reasoning (CBR) architecture, enabling it to learn from previous compositions and to improve its performance over time. The device allows users to adapt the composition to their preferences by adjusting the pace of a melody to a specific context or by generating lower- or higher-pitched notes. Additionally, the device can automatically resist some of the user's movements, so that the user learns how to create a good melody. Several experiments were conducted to analyze the quality of the system and the melodies it generates. According to the users' validation, the proposed system can generate music that follows a concrete style. Most users also believed that the partial control exerted by the device was essential to the quality of the generated music.
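The retrieve–adapt–retain loop at the heart of a CBR system like the one described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the case base, the movement features (`speed`, `height`), and the adaptation rule are invented for the example.

```python
# Minimal Case-Based Reasoning sketch for movement-guided melody generation.
# Hypothetical case base: each case pairs a movement profile with a short
# melody given as MIDI pitch numbers.

CASE_BASE = [
    {"speed": 0.2, "height": 0.8, "melody": [72, 74, 76, 79]},  # slow, high -> higher notes
    {"speed": 0.9, "height": 0.2, "melody": [48, 50, 52, 55]},  # fast, low  -> lower notes
    {"speed": 0.5, "height": 0.5, "melody": [60, 62, 64, 67]},  # moderate
]

def distance(query, case):
    """Euclidean distance in the 2-D movement-feature space."""
    return ((query["speed"] - case["speed"]) ** 2 +
            (query["height"] - case["height"]) ** 2) ** 0.5

def retrieve(query):
    """Return the stored case closest to the captured movement."""
    return min(CASE_BASE, key=lambda c: distance(query, c))

def adapt(melody, query):
    """Shift the retrieved melody up or down with the device height."""
    offset = round((query["height"] - 0.5) * 12)  # up to +/- one octave
    return [pitch + offset for pitch in melody]

def retain(query, melody):
    """Store the accepted result so the system improves over time."""
    CASE_BASE.append({**query, "melody": melody})

movement = {"speed": 0.4, "height": 0.75}          # captured from the device
melody = adapt(retrieve(movement)["melody"], movement)
retain(movement, melody)
```

The `retain` step is what gives CBR its "learning from previous compositions" character: each accepted melody becomes a case that future retrievals can match.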
Musical agency and collaboration in the digital age
This is the author accepted manuscript; the final version is available from Bloomsbury via the link in this record.

In 2019, the musician Holly Herndon released her third full-length album, Proto. In addition to input from two other human artists, the album had a fourth collaborator: an artificial neural network named Spawn. The software had been trained over several years to generate and manipulate the cavernous choral soundscapes that brought Proto widespread critical acclaim. Spawn's role in each stage of the music-making process was neither completely predictable nor completely under Herndon's control; its vocal contribution – its tone, pitch, rhythm, and dynamics – was often novel, original, and surprising. Herndon describes Spawn as 'a performer... an ensemble member. So I would say that I collaborated with a human and an inhuman ensemble' (Funai 2019).

Here, we consider how seriously we ought to take assertions like this one. Can we really conceive of AI systems as legitimate collaborators in the skilled project of making art? Do they have the kinds of creative agency, autonomy, and expressive power that characterise membership of an artistic ensemble?

In the next section, we rehearse some reasons why there has been a reluctance to give affirmative answers to these questions – why, that is, computational systems have been taken to have an impoverished status, lacking capacities essential to true artistic agency (see Boden 2007). In section 2, we explore the view that even when attributions of creativity and autonomy to artificial systems are not literally true, they can instead be fictionally true. Those who work alongside generative systems like Spawn and those who enjoy the musical fruits of such collaboration are participants in an elaborate game of make-believe, wherein the non-human contributor is imaginatively conceived as being a real improviser, a real singer, a real musician. Taking this line allows us to give credence to testimony like Herndon's, and to better understand the production and appreciation of music that has a partially non-human origin.
Computational Creativity and Music Generation Systems: An Introduction to the State of the Art
Computational Creativity is a multidisciplinary field that tries to obtain creative behaviors from computers. One of its most prolific subfields is Music Generation (also called Algorithmic Composition or Musical Metacreation), which uses computational means to compose music. Owing to the multidisciplinary nature of this research field, it is sometimes hard to define precise goals and to keep track of which problems can be considered solved by state-of-the-art systems and which instead need further development. With this survey, we aim to give a complete introduction for those who wish to explore Computational Creativity and Music Generation. To do so, we first give a picture of the research on the definition and evaluation of creativity, both human and computational, needed to understand how computational means can be used to obtain creative behaviors, and its importance within Artificial Intelligence studies. We then review the state of the art of Music Generation Systems, citing examples for all the main approaches to music generation and listing the open challenges identified by previous reviews on the subject. For each of these challenges, we cite works that have proposed solutions, describing what still needs to be done and some possible directions for further research.
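As a concrete instance of one of the simplest families of approaches such surveys cover, a first-order Markov chain generates melodies by sampling each note from transition statistics learned from a corpus. This is a toy sketch with an invented two-melody corpus, not a system from the survey:

```python
import random
from collections import defaultdict

def train(melodies):
    """Count note-to-note transitions observed in a corpus of melodies."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)   # duplicates encode frequency
    return transitions

def generate(transitions, start, length, rng):
    """Sample a melody by walking the learned transition table."""
    melody = [start]
    while len(melody) < length:
        options = transitions.get(melody[-1])
        if not options:                # dead end: no observed successor
            break
        melody.append(rng.choice(options))
    return melody

corpus = [["C", "D", "E", "C"], ["C", "E", "G", "E", "C"]]
model = train(corpus)
tune = generate(model, "C", 8, random.Random(0))
```

Every adjacent pair in the output is, by construction, a transition that occurred in the training corpus; richer systems extend exactly this idea with longer contexts or neural sequence models.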
Towards a framework for socially interactive robots
In recent decades, research in the field of social robotics has grown considerably. The development of different types of robots, and their roles within society, is gradually expanding. Robots endowed with social skills are intended for a variety of applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented in this thesis aims to add a new layer to that earlier development: the human–robot interaction layer, which focuses on the social capabilities a robot should display when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying distinctive personality and character, and learning social competencies. In this doctoral thesis we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work was developed in two phases: in the first, we focused on exploring, from a practical point of view, several ways humans use to communicate naturally with robots.
In the second, we investigated how social robots should act with the user. Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different purposes were developed: guide robots, and a humanoid-robot control system for entertainment. Working on these applications allowed us to endow our robots with some basic abilities, such as navigation, robot-to-robot communication, and speech recognition and understanding. In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different kinds of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions. The different development stages of our social robots were validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. Just as robots need a physical body to interact with the environment and become intelligent, social robots need to participate socially in the real tasks for which they were developed, so as to improve their sociability.
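The framework's core idea, selecting an emotional expression and body language from the current task and environmental conditions, can be caricatured as a simple rule table. This is purely illustrative; the tasks, conditions, and expressions below are invented, not taken from the thesis, and a real framework would drive actuators and animation rather than return strings:

```python
# Illustrative rule table: (task, environment) -> (emotion, body language).
# All entries are hypothetical examples.

RULES = {
    ("guide", "crowded"):   ("excited", "wide gestures, raised head"),
    ("guide", "quiet"):     ("calm", "slow gestures, steady gaze"),
    ("perform", "crowded"): ("joyful", "expansive arm movements"),
}

DEFAULT = ("neutral", "idle posture, occasional gaze shifts")

def select_behavior(task, environment):
    """Pick the expression the robot should display right now."""
    return RULES.get((task, environment), DEFAULT)

emotion, body_language = select_behavior("guide", "crowded")
```

The point of the sketch is the decoupling the thesis describes: the mapping from context to social expression lives in one replaceable module, so the same robot body can be made more or less expressive without touching navigation or speech.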
Machine Learning Approach for an Advanced Agent-based Intelligent Tutoring System
Roya Aminikia
Learning Management Systems (LMSs) are digital frameworks that provide curricula, training materials, and corresponding assessments to guarantee an effective learning process. Although these systems are capable of distributing learning content, they do not support dynamic learning processes and cannot communicate with human learners, who are required to interact in a dynamic environment during the learning process. To create this process and support interaction, LMSs are equipped with Intelligent Tutoring Systems (ITSs). The main objective of an ITS is to facilitate students' progress towards their learning goals through virtual tutoring. When equipped with ITSs, LMSs operate as dynamic systems that give students access to a tutor available at any time during the learning session. The crucial issues we address in this thesis are how to set up a dynamic LMS and how to design the logical structure behind an ITS. Artificial intelligence, multi-agent technology, and machine learning provide powerful theories and foundations that we leverage to tackle these issues.

We designed and implemented a new kind of Pedagogical Agent (PA) as the main part of our ITS. This agent uses an evaluation procedure that compares each student's performance with that of their peers in order to develop worthwhile guidance. The agent captures global knowledge of students' feature measurements during the guiding process, so the PA retains an up-to-date status, called an image, of each specific student at any moment. The agent uses this image to diagnose a student's skills and deliver appropriately targeted instruction. To build the infrastructure of the agent's decision-making algorithm, we laid out a protocol (a decision tree) for selecting the best individual direction. A significant capability of the agent is its ability to update its functionality by consulting a student's image at run time. We also applied two supervised machine-learning methods to improve the performance of the decision-making protocol, in order to maximize the effect of the collaboration mechanism between students and the ITS. Through these methods, we made the necessary modifications to the decision-making structure to promote student performance by offering prompts during learning sessions. The experiments conducted showed that the proposed system can efficiently classify students into learners with high versus low performance. Deploying such a model enabled the PA to use different decision trees while interacting with students of different learning skills. The performance of the system is demonstrated with ROC curves, and we discuss the combinations of attributes used in the two machine-learning algorithms, along with the correlations of the key attributes that contribute to the accuracy and performance of the decision-maker components.
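The classification step, sorting students into high- versus low-performance learners from the features in their "image", can be sketched with a single threshold split: a decision stump, the building block from which decision trees are grown. This is a toy stand-in with invented features and data, not the thesis' actual protocol:

```python
# Toy decision-stump classifier: find the single feature/threshold split
# that best separates high- from low-performing students (invented data).

STUDENTS = [
    # (quiz_average, time_on_task_minutes, label)
    (0.91, 120, "high"), (0.85, 95, "high"), (0.80, 110, "high"),
    (0.55, 40, "low"),   (0.48, 60, "low"),  (0.62, 30, "low"),
]

def best_stump(data):
    """Search every feature/threshold pair; keep the most accurate split."""
    best = None
    for feature in (0, 1):
        for threshold in sorted({row[feature] for row in data}):
            correct = sum(
                (row[feature] >= threshold) == (row[2] == "high")
                for row in data
            )
            accuracy = correct / len(data)
            if best is None or accuracy > best[2]:
                best = (feature, threshold, accuracy)
    return best

def classify(stump, student):
    """Predict high/low performance from one feature comparison."""
    feature, threshold, _ = stump
    return "high" if student[feature] >= threshold else "low"

stump = best_stump(STUDENTS)
```

A full decision tree recurses this search on each side of the split; a PA could then hold one such tree per learner profile and swap them as a student's image evolves, which is the mechanism the abstract describes.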