
    Interplayable surface: an exploration on augmented GUI that co-exists with physical environments

    The main goal of this experiment-driven thesis is to envision and design an interactive GUI (graphical user interface) that coexists with physical surfaces. Based on an understanding of how users access information in such situations, prototypes were implemented and tested with participants. In particular, to observe user behavioral patterns around augmented GUIs in specific environments and circumstances, this thesis presents several participatory experiments with physical GUIs. Participants were encouraged to re-create and reorganize physical GUIs according to their own situational specificity and informational tendencies. Based on insights extracted from the research and experiments, in the last phase I propose two thesis models for how an interactive GUI applies to a physical environment: simulation mock-ups of user scenarios for augmented GUI, and an interactive GUI surface combined with projection mapping. Reflecting people's behavioral patterns with augmented GUI, the thesis models demonstrate several types of information structures and interactions. In framing the overall data structure and wireframes for the thesis product model, informative affordance corresponding to users' situational specificity is treated as a crucial design direction, actualized on an artifact in a perceptible way. By experimentally prototyping the thesis model, I aim to expand the speculative usability that interactive GUIs will offer in the near future.
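    As a concrete illustration of the projection-mapping direction described above, the following minimal sketch (not taken from the thesis) warps a rendered GUI image onto a tracked physical surface using a homography in OpenCV; the function name and the assumption that the surface corners are already known in the projector frame are hypothetical.

    # Minimal sketch: project a flat GUI image onto a physical surface via a
    # homography, assuming the surface's four corners have already been located.
    import cv2
    import numpy as np

    def project_gui_onto_surface(gui_image, surface_corners_px, output_size):
        """Warp a rendered GUI image so it lands on a tracked physical surface.

        gui_image          -- HxWx3 array holding the rendered interface
        surface_corners_px -- 4x2 array of the surface corners in the output frame
        output_size        -- (width, height) of the projector framebuffer
        """
        h, w = gui_image.shape[:2]
        gui_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        # Homography from the flat GUI plane to the observed surface quadrilateral.
        H, _ = cv2.findHomography(gui_corners, np.float32(surface_corners_px))
        return cv2.warpPerspective(gui_image, H, output_size)

    In a projection-mapped setup, the warped image would be re-rendered each frame as the surface is re-tracked and sent to the projector framebuffer.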

    RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

    We present RoboGen, a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation. RoboGen leverages the latest advancements in foundation and generative models. Instead of directly using or adapting these models to produce policies or low-level actions, we advocate for a generative scheme that uses these models to automatically generate diversified tasks, scenes, and training supervisions, thereby scaling up robotic skill learning with minimal human supervision. Our approach equips a robotic agent with a self-guided propose-generate-learn cycle: the agent first proposes interesting tasks and skills to develop, then generates corresponding simulation environments by populating pertinent objects and assets with proper spatial configurations. Afterwards, the agent decomposes the proposed high-level task into sub-tasks, selects the optimal learning approach (reinforcement learning, motion planning, or trajectory optimization), generates the required training supervision, and learns policies to acquire the proposed skill. Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics. Our fully generative pipeline can be queried repeatedly, producing an endless stream of skill demonstrations associated with diverse tasks and environments.
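    The propose-generate-learn cycle described above can be summarized as a loop. The sketch below only illustrates that loop structure; the agent methods (propose_task, build_scene, decompose, choose_learner, generate_supervision) are hypothetical placeholders, not RoboGen's actual API.

    # Hypothetical sketch of a propose-generate-learn cycle.
    def propose_generate_learn(agent, num_cycles=10):
        skills = []
        for _ in range(num_cycles):
            # 1. The agent proposes a task and skill worth acquiring.
            task = agent.propose_task()
            # 2. A simulation scene is generated by populating pertinent assets.
            scene = agent.build_scene(task)
            # 3. The task is decomposed; a learning approach is chosen per sub-task
            #    (reinforcement learning, motion planning, or trajectory optimization).
            for sub_task in agent.decompose(task):
                learner = agent.choose_learner(sub_task)
                supervision = agent.generate_supervision(sub_task, scene)
                policy = learner.train(scene, supervision)
                skills.append((sub_task, policy))
        return skills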

    Study of emotion in videogames: understanding presence and behaviour

    Only when videogames are released are we able to look at them and analyse them. Nowadays, platforms to share our thoughts and opinions about a videogame, or part of it, are everywhere, with both positive and negative commentary being shared daily. But what makes a game be seen as a positive experience, and which components satisfy and engage its players? In this dissertation, we aim to comprehend how players perceive videogames and what motivates them and triggers the emotions they experience during play. We will look at several different concepts that all work together when playing a videogame. We will start by understanding what Interaction is and how humans behave. Afterwards, we will investigate the widely used topic of Immersion and its lesser-known, unrecognized sibling, Presence. From there, we will divide involvement in gameplay into two parts: the technological side, which relates to natural interfaces and mastery of controls, and the side of design and implementation of content, more specifically the concept of Agency and how it plays a huge part in making players feel part of the game.

    When machines touch back: simulating -- and stimulating -- the most intimate of senses

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Humanities, Program in Writing and Humanistic Studies, 2005. Includes bibliographical references (leaf 51). Thomas Massie invented the Phantom, a computer peripheral for simulating the sense of touch, which became the de facto device for haptics research. The thesis recounts the story of Massie, his invention, and present and potential applications as varied as telesurgery and teledildonics. Along the way, the thesis explores the science of touch and considers the implications of the fact that perhaps the most reassuring and intimate of senses can be simulated. by Kevin Bullis. S.M.

    Hearing Loss in Older Adults: Exploring Occupational Therapy’s Role

    A quarter of older adults aged 65-74 and half of those over the age of 75 have hearing loss (HL). HL can increase the risk of dementia, falls, depression, and hospitalization, and is associated with decreased engagement in ADLs, IADLs, and leisure activities. Older adults with symptoms of HL often delay seeking treatment for up to ten years. While hearing aids seem like a natural solution, less than thirty percent of older adults over the age of 70 have ever used them, and long-term compliance and general satisfaction with these devices remain low. The aim of this thesis is to spread understanding of occupational therapy's role in supporting quality of life, function, and well-being in older adults with HL through alternative, hearing-related interventions, including Hearing Assistive Technology (HAT), environmental acoustic modifications, and compensatory techniques. This knowledge was disseminated in three ways. The first method was to instruct occupational therapy students at two universities through curriculum presentations. The second was to raise awareness and educate occupational therapy practitioners attending the national American Occupational Therapy Association conference through a poster presentation. The final method was to inform readers of OT Practice Magazine through a published article. Older adults with HL have unmet needs that occupational therapy can address, yet further research is needed within our profession on this topic. Occupational therapy educational programs can expand their curricula to include this aspect of sensory loss. Finally, our professional organizations can facilitate conversation and increase awareness of our unique opportunity to collaborate with other hearing health professionals to meet the needs of older adults with HL.

    DEVELOPMENT AND ASSESSMENT OF ADVANCED ASSISTIVE ROBOTIC MANIPULATORS USER INTERFACES

    Assistive Robotic Manipulators (ARMs) have been shown to improve self-care and increase independence among people with severe upper extremity disabilities. Mounted on the side of an electric powered wheelchair, an ARM may provide manipulation assistance such as picking up objects, eating, drinking, dressing, reaching out, or opening doors. However, existing assessment tools are inconsistent between studies, time consuming, and of unclear clinical effectiveness. Therefore, in this research we developed an ADL task board evaluation tool that provides standardized, efficient, and reliable assessment of ARM performance. Among powered wheelchair users and able-bodied controls using two commercial ARM user interfaces, a joystick and a keypad, we found statistical differences in performance between the two interfaces, but no statistical difference in cognitive loading. The ADL task board demonstrated performance highly correlated with an existing functional assessment tool, the Wolf Motor Function Test. Through this study, we also identified barriers and limits in current commercial user interfaces and developed smartphone and assistive sliding-autonomy user interfaces that yield improved performance. Testing results from our smartphone manual interface revealed statistically faster performance, and the assistive sliding-autonomy interface helped seamlessly correct errors seen with autonomous functions. The ADL task performance evaluation tool may help clinicians and researchers better assess ARM user interfaces and evaluate the efficacy of customized user interfaces for improving performance. The smartphone manual interface demonstrated improved performance, and the sliding-autonomy framework showed enhanced task success without recalculating path planning and recognition.
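    The sliding-autonomy idea mentioned above can be illustrated with a small, hypothetical control-loop sketch: autonomous execution continues until its tracking error grows too large or the user requests control, at which point commands come from the manual (e.g., smartphone) interface instead. All names and the error threshold are illustrative, not from the study.

    # Hypothetical sliding-autonomy step: blend autonomous execution with manual
    # correction so errors can be fixed without replanning from scratch.
    def sliding_autonomy_step(arm, planner, user_input, error_threshold=0.05):
        error = arm.tracking_error()  # distance between planned and actual pose
        if error > error_threshold or user_input.override_requested():
            # Hand control to the manual (e.g., smartphone) interface.
            command = user_input.read_velocity_command()
        else:
            # Continue executing the autonomous plan.
            command = planner.next_velocity_command(arm.current_pose())
        arm.send_velocity(command)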

    Online Estimation of Articulated Objects with Factor Graphs using Vision and Proprioceptive Sensing

    From dishwashers to cabinets, humans interact with articulated objects every day, and for a robot to assist in common manipulation tasks, it must learn a representation of articulation. Recent deep learning methods can provide powerful vision-based priors on the affordance of articulated objects from previous, possibly simulated, experiences. In contrast, many works estimate articulation by observing the object in motion, requiring the robot to already be interacting with the object. In this work, we propose to use the best of both worlds by introducing an online estimation method that merges vision-based affordance predictions from a neural network with interactive kinematic sensing in an analytical model. Our work has the benefit of using vision to predict an articulation model before touching the object, while also being able to update the model quickly from kinematic sensing during the interaction. In this paper, we implement a full system using shared autonomy for robotic opening of articulated objects, in particular objects whose articulation is not apparent from vision alone. We implemented our system on a real robot and performed several autonomous closed-loop experiments in which the robot had to open a door with an unknown joint while estimating the articulation online. Our system achieved an 80% success rate for autonomous opening of unknown articulated objects.
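    The following minimal sketch illustrates the underlying idea of fusing a vision-based prior with interactive sensing; it is not the paper's factor-graph implementation. A revolute joint's axis estimate starts from the network's prediction and is refined online with directions implied by proprioceptive end-effector motion, each source weighted by an assumed precision.

    # Sketch: fuse a vision-predicted joint axis (prior) with axes inferred from
    # observed end-effector motion (proprioceptive measurements).
    import numpy as np

    def fuse_axis_estimate(vision_axis, motion_axes,
                           vision_precision=1.0, motion_precision=4.0):
        """Return a unit joint-axis estimate combining prior and observations."""
        weighted = vision_precision * np.asarray(vision_axis, dtype=float)
        for axis in motion_axes:
            a = np.asarray(axis, dtype=float)
            # Flip observations that point opposite the running estimate so
            # antiparallel measurements of the same axis reinforce each other.
            if np.dot(a, weighted) < 0:
                a = -a
            weighted += motion_precision * a
        return weighted / np.linalg.norm(weighted)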

    Quiet Interaction: Designing an Accessible Home Environment for Deaf and Hard of Hearing (DHH) Individuals through AR, AI, and IoT Technologies

    As technology rapidly evolves, voice-command-based smart assistants are becoming integral to our daily lives. However, this advancement overlooks the needs of the Deaf and Hard of Hearing (DHH) community, creating a technological gap in current systems. To address this oversight, this study develops a Mixed-Reality (MR) application that integrates Augmented Reality (AR), Artificial Intelligence (AI), and Internet of Things (IoT) technologies to fill the gaps in safety, communication, and accessibility for DHH individuals at home. Employing a user-centered design methodology, the study begins with a needs assessment, conducted through a literature review and an online survey, to understand the unique challenges and preferences of the DHH community. The key contribution of this study lies in its innovative integration of these technologies within a single MR framework, with the goal of creating a more inclusive and accessible home environment for the DHH community.
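    A small, hypothetical sketch of the kind of event routing such a system relies on: sound- or sensor-based IoT events are mapped to visual AR alerts rather than audio cues. The event names, alert fields, and the ar_display interface are illustrative assumptions, not part of the study.

    # Hypothetical mapping from IoT events to visual alerts shown in AR.
    IOT_EVENT_TO_ALERT = {
        "doorbell_pressed": {"icon": "door",   "text": "Someone is at the door", "flash": True},
        "smoke_detected":   {"icon": "alarm",  "text": "Smoke alarm triggered",  "flash": True},
        "kettle_finished":  {"icon": "kettle", "text": "Kettle has boiled",      "flash": False},
    }

    def handle_iot_event(event_name, ar_display):
        """Translate an incoming IoT event into a visual alert on the AR headset."""
        alert = IOT_EVENT_TO_ALERT.get(event_name)
        if alert is None:
            return  # unknown events are ignored rather than voiced
        ar_display.show_alert(icon=alert["icon"], text=alert["text"], flash=alert["flash"])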