
    Exploring The Use Of AI Technology To Help Owners Remotely Accompany And Care For Their Cats

    There are many domesticated cats in Canada. Because of cats’ hunting nature, caregivers need to provide them with a safe space and daily games that simulate hunting, and they should also be mindful of how much exercise their cats get and how much food they eat to prevent obesity. However, many carers are too busy with their lives to provide an ideal life for their cats. This thesis project uses the Research through Design (RtD) approach to explore how to employ a COCO object detection model, Arduino, and the Internet of Things (IoT) to design for a domestic cat’s needs when people aren’t at home. The research project iterated on four prototypes: 1) a safe space for cats, the Cat Castle; 2) a smart cat teaser that mimics the hunting game, using COCO object detection and Arduino; 3) an auto feeder to encourage cats to exercise more; and 4) an integration of the three prototypes above into an early-stage smart, cat-friendly environment. The final prototype is designed to meet some of the cat’s needs, and it can also accompany the cat when the carer is not at home. This study provides some exploratory experience for research on related topics in the animal-computer interaction (ACI) field.
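The abstract does not give implementation details, but the glue between a COCO-trained detector and the Arduino-driven teaser can be sketched. The following is a minimal, hypothetical example (all names and thresholds are illustrative assumptions, not the thesis's design): detections for the COCO classes "cat" and "person" gate whether the teaser should run.

```python
# Hypothetical trigger logic between a COCO-trained detector and the
# smart cat teaser. Detections are (label, confidence, box) tuples, the
# shape most object-detection APIs return; names are illustrative only.

from typing import List, Tuple

Box = Tuple[int, int, int, int]         # x, y, width, height
Detection = Tuple[str, float, Box]      # COCO class label, score, box

def should_activate_teaser(detections: List[Detection],
                           min_confidence: float = 0.6) -> bool:
    """Activate the teaser only when a cat (and no person) is
    confidently detected in the current frame."""
    cat_seen = any(label == "cat" and score >= min_confidence
                   for label, score, _ in detections)
    person_seen = any(label == "person" and score >= min_confidence
                      for label, score, _ in detections)
    # Play the hunting game only while the carer is away.
    return cat_seen and not person_seen

# Example frame: one confident cat, no person -> teaser starts.
frame = [("cat", 0.91, (120, 80, 200, 150)),
         ("chair", 0.75, (0, 0, 300, 400))]
print(should_activate_teaser(frame))  # True
```

In a full system this decision would feed a serial command to the Arduino that moves the teaser; the detection side could run on any COCO-pretrained model.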

    A contribution to robust person perception on real-world assistance robots in domestic scenarios

    The increased life expectancy of the population and declining birth rates lead to a growing proportion of elderly people in modern society, and hence an increasing need for aged care. Mobile robots can assist users in their homes by means of services and companionship. To provide useful functionalities, the robot must be able to observe the user in the environment. The domestic scenario poses a challenge for people detection and tracking algorithms through its complexity, caused, among other factors, by variable furnishings, difficult lighting conditions, and various user poses. This thesis presents an architecture for people detection and tracking for mobile robots in domestic environments. The modular architecture describes the components used and their communication with each other. Due to the modularity of the design, individual components can be easily integrated or exchanged. The work evaluates a variety of multi-modal detection methods based on laser data, camera data, and 3D depth data. Suitable algorithms are applied, adapted, and enhanced. The detections are processed by a person tracker to allow for spatio-temporal filtering. Important features of the person tracker include the support of multiple filters and system models, the integration of coupled observations and out-of-sequence measurements, the estimation of the existence probability, and the integration of environmental knowledge.
The thesis proposes various methods to search for and locate users who have left the robot's limited field of view. The most advanced method uses an exploratory search to examine the whole apartment effectively; it handles false-positive detections, dynamic obstacles, and inaccessible rooms in a reasonable manner. Furthermore, this work presents a method to detect people who have fallen to the ground, even under occlusion. The method uses the depth data of a Kinect sensor mounted on the mobile robot: point clouds are segmented, layered, and classified to distinguish fallen people from furniture, household objects, and animals. The developed algorithms were evaluated in a real-world scenario by allowing the robot to stay in seniors' apartments for up to three days, free for them to use. The experiments showed that the presented architecture for people detection and tracking is robust enough that the robot's services provide an added value to senior citizens.
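The abstract names existence-probability estimation as a tracker feature without detail. As a hedged sketch of the general idea (not the thesis's actual filter), a track's probability of corresponding to a real person can be updated with a Bayesian detect/miss model; the detector rates used here are illustrative assumptions.

```python
# Minimal sketch: Bayesian update of a track's existence probability,
# one of the person-tracker features named in the abstract. The detector
# characteristics p_det (true-positive rate) and p_fa (false-alarm rate)
# are illustrative assumptions, not values from the thesis.

def update_existence(p_exist: float, detected: bool,
                     p_det: float = 0.9, p_fa: float = 0.1) -> float:
    """Return P(track is a real person | detect-or-miss observation)."""
    if detected:
        num = p_det * p_exist
        den = p_det * p_exist + p_fa * (1.0 - p_exist)
    else:
        num = (1.0 - p_det) * p_exist
        den = (1.0 - p_det) * p_exist + (1.0 - p_fa) * (1.0 - p_exist)
    return num / den

# A run of detections raises the probability; misses lower it again.
p = 0.5
for observed in [True, True, True, False, False]:
    p = update_existence(p, observed)
```

Tracks whose existence probability falls below a threshold would be pruned, which is one common way to suppress false-positive detections over time.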

    Realisierung nutzeradaptiven Interaktionsverhaltens für mobile Assistenzroboter

    The central topic of this thesis is social service robotics. In recent years this branch of mobile robotics has seen strongly increasing interest, and with the growing capabilities and fields of application of such robots, the user group has shifted toward the general public, including technical laypeople. Inexperienced users place extensive requirements on the interaction capabilities of such robots. The multi-modality of human-robot dialog and its adaptivity to the user's preferences and needs are the focus of this thesis. First, the analysis and specification process for such a system is explained by means of an example: a service robot for health assistance in home environments, as developed in a research project in which the author participated. Following this, it is shown how a multi-layer system architecture, which is applicable to other robotic applications as well, can be derived from that specification. The main focus is on a modular realization of the control structures and dialog handling. In order to give the system a personality and to generate dialog behavior that remains acceptable in long-term use, a frame-based dialog manager has been designed and is explained in detail. Aspects of interest are modularity by means of an app concept, extensibility, and adaptivity of the interaction skills to users' quirks and demands. At the core of the presented dialog system is a novel planning mechanism based on probabilistic reasoning in a factor-graph model of the ongoing dialog.
A real-world experiment showed that this online learning concept is able to optimize dialog behavior at runtime, within the freedoms specified by the designer, with respect to system-internal as well as user-driven reward signals. During the implementation of the health-assistant robot, further system components were developed in order to realize a likeable companion; among them, two kinds of tactile sensors and an emotion model play a decisive role, and both are presented in this thesis as well. Finally, very successful real-world user trials of the health-assistant robot, involving nine elderly people over several days, are described, showing that the presented concepts for system architecture and dialog modelling are viable.
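The abstract's probabilistic online planning is not specified further. As an illustrative, much-simplified stand-in (a greedy bandit over dialog variants, not the thesis's factor-graph planner; all variant names are made up), reward-driven adaptation of dialog behavior can be sketched as follows.

```python
# Much-simplified stand-in for reward-driven dialog adaptation: keep a
# running mean reward per dialog variant and prefer the best one.
# This is a greedy-bandit sketch for illustration only, not the
# factor-graph planning mechanism described in the thesis.

class DialogAdapter:
    def __init__(self, variants):
        self.mean = {v: 0.0 for v in variants}   # running mean reward
        self.count = {v: 0 for v in variants}    # times each was tried

    def choose(self) -> str:
        # Try every variant once, then prefer the highest mean reward.
        untried = [v for v, c in self.count.items() if c == 0]
        if untried:
            return untried[0]
        return max(self.mean, key=self.mean.get)

    def feedback(self, variant: str, reward: float) -> None:
        # Incremental mean update from a system-internal or user reward.
        self.count[variant] += 1
        n = self.count[variant]
        self.mean[variant] += (reward - self.mean[variant]) / n

adapter = DialogAdapter(["short_prompt", "verbose_prompt"])
adapter.feedback("short_prompt", 0.2)    # user cut the dialog short
adapter.feedback("verbose_prompt", 0.8)  # user completed the dialog
print(adapter.choose())  # "verbose_prompt"
```

The thesis's mechanism additionally constrains such adaptation to the freedoms the dialog designer allows, which a sketch like this would enforce by restricting the variant set.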

    Actor & Avatar: A Scientific and Artistic Catalog

    What kind of relationship do we have with artificial beings (avatars, puppets, robots, etc.)? What does it mean to mirror ourselves in them, to perform them, or to play trial identity games with them? Actor & Avatar addresses these questions from artistic and scholarly angles. Contributions on the making of "technical others" and philosophical reflections on artificial alterity are flanked by neuroscientific studies on the different ways we perceive living persons and artificial counterparts. The contributors have achieved a successful artistic-scientific collaboration with extensive visual material.

    Discrete Automation - Eyes of the City

    Observing people’s presence in physical space and deciphering their behaviors have always been critical activities for designers, planners, and anyone else with an interest in exploring how cities work. It was 1961 when Jane Jacobs, in her seminal book “The Death and Life of Great American Cities”, coined a famous expression to convey this idea. According to Jacobs, “the natural proprietors” of a certain part of the metropolis – the people who live, work, or spend a substantial amount of time there – become the “eyes on the street.” Their collective, distributed, decentralized gaze becomes the prerequisite to establishing “a marvelous order for maintaining the safety of the streets and the freedom of the city.” Almost half a century later, we find ourselves at the inception of a new chapter in the relationship between the city and digital technologies, one which calls for a reexamination of the old “eyes on the street” idea. In the next few years, thanks to the most recent advances in Artificial Intelligence, deep learning, and imaging, we are about to reach an unprecedented scenario, the most radical development in the evolution of the Internet of Things: architectural space is acquiring the full ability to “see.” Imagine that any room, street, or shop in our city could recognize you and autonomously respond to your presence. With Jacobs’s “eyes on the street,” it was people who looked at other people or at the city and interpreted its mechanisms. In this new scenario, buildings and streets similarly acquire the ability to observe and react as urban life unfolds in front of them. After the “eyes on the street,” we are now entering the era of the “Eyes of the City.” What happens, then, to people and the urban landscape when the sensor-imbued city is able to gaze back? We currently face a “utopia or oblivion” crossroads, to borrow the words of one of the most notable thinkers of the past century, Richard Buckminster Fuller.
We believe that one of the fundamental duties of architects and designers today is to grapple with this momentous shift and to engage people in the process. “Eyes of the City” aims to experiment with these emerging scenarios in order to better comprehend them, deconstructing the potential uses of new technologies so as to make them accessible to everyone and inspire people to form an opinion. Using critical design as a tool, the exhibition seeks to create experiences that encourage people to get involved in defining the ways new technologies will shape their cities in the years to come. For this reason, it finds its natural home in Shenzhen’s Futian high-speed railway station – a place in which to reach a broad, diverse audience of intentional visitors and accidental passersby, and a space where, just as in most other liminal transportation hubs, the impact of an “Eyes of the City” scenario is likely to be felt the most.

    Machine Sensation

    Emphasising the alien qualities of anthropomorphic technologies, Machine Sensation makes a conscious effort to increase rather than decrease the tension between nonhuman and human experience. In a series of rigorously executed case studies, including natural user interfaces, artificial intelligence, and sex robots, Leach shows how object-oriented ontology enables one to insist upon the unhuman nature of technology while acknowledging its immense power and significance in human life. Machine Sensation meticulously engages OOO, Actor-Network Theory, the philosophy of technology, cybernetics, and posthumanism in innovative and gripping ways.

    Connected World: Insights from 100 academics on how to build better connections


    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including poor memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as a third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input, through an investigation of input performance with different parts of the body, and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint and demonstrate its unique capabilities through an exploration of the design space with application examples. Finally, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures.
To conclude, we look across our work to distil guidelines for interface design, and offer further considerations of how motion correlation can be used, both in general and for touchless gestures.
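The abstract does not spell out the matching computation. As a hedged sketch of the general motion-correlation principle (not the TraceMatch or MatchPoint implementation; names and the threshold are illustrative), input can be mapped to whichever moving target's recent trajectory correlates best with the user's movement trace.

```python
# Sketch of the motion-correlation principle: match the user's movement
# to the moving target it follows most closely, via Pearson correlation
# of the two position traces. Illustrative only, not the algorithm from
# the thesis; the 0.8 threshold is an assumption.

import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_target(user_trace, target_traces, threshold=0.8):
    """Return the best-correlating target's name, or None if no target
    exceeds the threshold (suppressing accidental activations)."""
    best, best_r = None, threshold
    for name, trace in target_traces.items():
        r = pearson(user_trace, trace)
        if r > best_r:
            best, best_r = name, r
    return best

# Two on-screen targets oscillating at different rates; the user's hand
# follows "slow" with a slight phase lag, as in pursuit-based input.
t = range(20)
targets = {
    "slow": [math.sin(0.3 * i) for i in t],
    "fast": [math.sin(0.9 * i) for i in t],
}
user = [math.sin(0.3 * i + 0.1) for i in t]
print(select_target(user, targets))  # "slow"
```

A real system would run this over a sliding window per axis and combine horizontal and vertical correlations, which is also where the thesis's findings on simple harmonic motion and accidental-activation suppression come into play.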