237 research outputs found

    SHELDON: Smart habitat for the elderly

    An insightful document on active and assisted living from several perspectives: furniture and habitat, ICT solutions, and healthcare.

    Proceedings of KogWis 2012. 11th Biannual Conference of the German Cognitive Science Society

    The German cognitive science conference is an interdisciplinary event where researchers from different disciplines -- mainly artificial intelligence, cognitive psychology, linguistics, neuroscience, philosophy of mind, and anthropology -- and application areas -- such as education, clinical psychology, and human-machine interaction -- bring together different theoretical and methodological perspectives to study the mind. The 11th Biannual Conference of the German Cognitive Science Society took place from September 30 to October 3, 2012, at Otto-Friedrich-Universität in Bamberg. The proceedings cover all contributions to the conference: five invited talks, seven invited symposia and two symposia, a satellite symposium, a doctoral symposium, three tutorials, 46 talk abstracts, and 23 poster abstracts.

    Environnements virtuels émotionnellement intelligents (Emotionally Intelligent Virtual Environments)

    Emotions have been studied from many angles in the field of human-machine interaction, including intelligent tutoring systems, social networks, online learning platforms and e-commerce. Much effort in affective computing is invested in integrating the emotional dimension into virtual environments (such as video games, serious games and virtual or augmented reality environments). However, the strategies used in games are still empirical and rest on psychological and sociological models of the player: learning curve, difficulty management, and the degree of efficiency in evaluating the player's performance and motivation. Yet this analysis can mislead the system insofar as the criteria are sometimes too vague and represent neither the player's actual skills nor their real difficulties. Since the intervention strategy is strongly influenced by the accuracy with which the player is analyzed and evaluated, new means are needed to improve decision-making in games and to organize adaptation strategies optimally. This research aims to build a new approach to evaluating and monitoring the player. The approach enables more effective and less intrusive player modeling by integrating mental and affective states obtained from physiological sensors (brain signals, electrodermal activity, ...) and/or optical instruments (webcam, eye tracker, ...). The affective and mental states most targeted in this research are the basic emotions (based on facial expressions) and the states of engagement, motivation and attention. To support adaptation in games, models of the player's emotions and motivation based on these mental and affective indicators have been developed. We implemented this approach as a modular architecture that adapts virtual environments to the player's affective parameters, detected in real time by artificial intelligence techniques.
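    To make the described architecture concrete, the following is a minimal sketch of such a sense-then-adapt loop in Python, assuming a simple rule-based adaptation policy; the class, thresholds, and sensor stub are hypothetical illustrations, not the thesis's actual system.

        # Minimal sketch of a modular affect-driven adaptation loop.
        # All names and thresholds are hypothetical, for illustration only.
        from dataclasses import dataclass

        @dataclass
        class AffectiveState:
            engagement: float   # 0..1, e.g. derived from EEG features
            frustration: float  # 0..1, e.g. derived from electrodermal activity

        def read_sensors() -> AffectiveState:
            """Stub standing in for real-time EEG/EDA/webcam feature extraction."""
            return AffectiveState(engagement=0.35, frustration=0.7)

        def adapt_difficulty(level: float, state: AffectiveState) -> float:
            """Ease off when the player is frustrated and disengaged; push harder when engaged and unchallenged."""
            if state.frustration > 0.6 and state.engagement < 0.4:
                return max(0.0, level - 0.1)
            if state.engagement > 0.7 and state.frustration < 0.3:
                return min(1.0, level + 0.1)
            return level

        difficulty = 0.5
        for _ in range(3):  # in a game, one iteration per adaptation cycle
            difficulty = adapt_difficulty(difficulty, read_sensors())
        print(f"adapted difficulty: {difficulty:.1f}")

    In a real deployment the sensor stub would be replaced by live feature extraction from the physiological and optical signals the abstract lists, and the hand-written rule could be replaced by a learned policy.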

    Beyond Rare Disease Patients: Exploring Machine Learning Interventions To Support People Experiencing a Diagnostic Odyssey

    People with a rare condition face several hurdles throughout their odyssey to obtain a diagnosis. This odyssey lasts several years and involves frequent referrals and misdiagnoses, often resulting in permanent and severe consequences for patients' health. In addition, patients feel unheard by their healthcare providers and isolated from their peers, who 'just don't understand'. The UK Strategy for Rare Diseases states that patients can play a significant role in their diagnosis if given suitable resources. However, patients with rare diseases feel that they lack the support they need. This thesis explores the role that technology can have in addressing this gap in support. Within this context, the thesis spans a range of topics, from human-centred design approaches to generating data and presenting a new methodological approach. Through a human-centred approach, we characterise the needs of rare disease patients, thus opening the research space to include previously unmet support needs. In addition, we identify limitations of existing measures of success and highlight the importance of reducing time to diagnosis for rare disease pre-diagnostic technology. This provides the basis for the simulation-based methodological approach that we develop. The simulation task is designed to mirror the information-seeking tasks that rare disease patients undertake. To do this, we curate data that is representative of a rare disease patient's perspective, both in the terminology used and in the stage at which symptoms and clinical findings are discovered. In addition, we develop a pre-diagnostic patient-matching prototype that is designed around rare disease patients' needs and demonstrate that, in comparison to two search engines, our application shows greater potential to aid clinical experiences, facilitate empathetic support networks, and better facilitate information-seeking. All of these contributions stem from a critical examination of the experiences that rare disease patients go through on their journeys towards diagnosis, and they aim to pave the way for future research within this area.
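    A minimal sketch of what such symptom-based patient matching could look like, assuming patients are represented as sets of symptom terms ranked by overlap; the representation and the similarity measure are assumptions for illustration, not the prototype actually built in the thesis.

        # Illustrative symptom-overlap matching; all data is made up.
        def jaccard(a: set[str], b: set[str]) -> float:
            """Overlap of two symptom sets, from 0 (disjoint) to 1 (identical)."""
            return len(a & b) / len(a | b) if a | b else 0.0

        def rank_matches(query: set[str], patients: dict[str, set[str]]) -> list[tuple[str, float]]:
            """Rank known patients by symptom overlap with the querying patient."""
            scores = [(pid, jaccard(query, symptoms)) for pid, symptoms in patients.items()]
            return sorted(scores, key=lambda s: s[1], reverse=True)

        patients = {
            "patient-A": {"fatigue", "joint pain", "rash"},
            "patient-B": {"fatigue", "muscle weakness"},
        }
        print(rank_matches({"fatigue", "rash", "fever"}, patients))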

    Six Human-Centered Artificial Intelligence Grand Challenges

    Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical and fair, and that enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans' cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action to conduct research and development in AI that serves as a force multiplier towards fairer, more equitable and sustainable societies.

    Deliberative Technology for Alignment

    For humanity to maintain and expand its agency into the future, the most powerful systems we create must be those which act to align the future with the will of humanity. The most powerful systems today are massive institutions like governments, firms, and NGOs. Deliberative technology is already being used across these institutions to help align governance and diplomacy with human will, and modern AI is poised to make this technology significantly better. At the same time, the race to superhuman AGI is already underway, and the AI systems it gives rise to may become the most powerful systems of the future. Failure to align the impact of such powerful AI with the will of humanity may lead to catastrophic consequences, while success may unleash abundance. Right now, there is a window of opportunity to use deliberative technology to align the impact of powerful AI with the will of humanity. Moreover, it may be possible to engineer a symbiotic coupling between powerful AI and deliberative alignment systems such that the quality of alignment improves as AI capabilities increase.

    Building Embodied Conversational Agents: Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10(1.1) After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, so also those who would not consider themselves to be experienced computer addicts, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow this is so cool!". This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice-control can be extremely handy, for instance when listening to music while showering. Again, the option to handle a computer with voice instructions turned out to be a significant optimization in human-computer interaction. From now on, computers could be instructed without the use of a screen, mouse or keyboard, and instead could operate successfully simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise to becoming something that was both natural and intuitive, and did not require any advanced computer background. Accordingly, while computers used to be machines that could only be operated by technically-oriented individuals, they had gradually changed into devices that are part of many people’s household, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "antropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universally used device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, like in science-fiction movies, interact with avatars or humanoid robots, whereby users can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able to not only produce and understand messages transmitted auditorily through the voice, but also could rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but the type of such interactions is still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on how such future humanmachine interactions may look like. 
When we consider other products that have been created in history, it sometimes is striking to see that some of these have been inspired by things that can be observed in our environment, yet at the same do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds, yet the wings of an airplane do not make those typical movements a bird would produce to fly. Moreover, an airplane has wheels, whereas a bird has legs. At the same time, an airplane has made it possible for a humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on this practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation all have the equivalent underlying question of how parts of human behavior can be captured, such that computers can use it to become more human-like. Each study differs in method, perspective and specific questions, but they are all aimed to gain insights and directions that would help further push the computer developments of human-like behavior and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies
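    As a concrete illustration of what generating "different forms of body language" can mean, much embodied-conversational-agent work pairs an utterance with synchronized gesture and face behaviors, for example in the style of the Behavior Markup Language (BML). The Python helper below is a hedged sketch in that spirit; the tag and lexeme names follow common BML conventions, but the function and its output are illustrative, not this dissertation's system.

        # Hedged sketch: wrapping speech plus synchronized nonverbal behavior
        # in BML-style XML. Tags and lexemes are illustrative approximations.
        def realize(utterance: str, gesture: str, face: str) -> str:
            """Build a BML-like block that starts a gesture and a facial expression together with the speech."""
            return (
                '<bml id="bml1">\n'
                f'  <speech id="s1"><text>{utterance}</text></speech>\n'
                f'  <gesture id="g1" lexeme="{gesture}" start="s1:start"/>\n'
                f'  <face id="f1" lexeme="{face}" start="s1:start"/>\n'
                '</bml>'
            )

        print(realize("Nice to meet you!", "BEAT", "SMILE"))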

    Could Alexa Increase Your Social Worth?

    People have historically used personal introductions to build social capital, which is the foundation of career networking and is perhaps the most effective way to advance a career (Lin, 2001). With societal changes, such as the pandemic (Venkatesh & Edirappuli, 2020), and the increasing capabilities of Artificial Intelligence (AI), new approaches may emerge that impact societal relationships. Social capital theory highlights the need for reciprocal agreements to establish trust between parties (Gouldner, 1960). My theoretical prediction and the focus of this research include two principles: the impact of reciprocity in evaluating trust in the source of the introduction, and the acceptability of AI in interpersonal relationships. I test this relationship through plausible vignettes resembling situations the participants may have encountered in business. The results show higher trust of AI, which could replace one side of the relationship, thus reducing the dependency on, or eliminating, reciprocal behavior.

    Tätigkeitsbericht 2014-2016 (Activity Report 2014-2016)
