48 research outputs found

    Beyond Gazing, Pointing, and Reaching: A Survey of Developmental Robotics

    Get PDF
    Developmental robotics is an emerging field at the intersection of developmental psychology and robotics that has lately attracted considerable attention. This paper surveys a variety of research projects dealing with or inspired by developmental issues and outlines possible future directions.

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    Get PDF
    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics. Peer reviewed.

    3-D Interfaces for Spatial Construction

    Get PDF
    It is becoming increasingly easy to bring the body directly to digital form via stereoscopic immersive displays and tracked input devices. Is this space a viable one in which to construct 3d objects? Interfaces built upon two-dimensional displays and 2d input devices are the current standard for spatial construction, yet 3d interfaces, where the dimensionality of the interactive space matches that of the design space, have something unique to offer. This work increases the richness of 3d interfaces by bringing several new tools into the picture: the hand is used directly to trace surfaces; tangible tongs grab, stretch, and rotate shapes; a handle becomes a lightsaber and a tool for dropping simple objects; and a raygun, analogous to the mouse, is used to select distant things. With these tools, a richer 3d interface is constructed in which a variety of objects are created by novice users with relative ease. What we see is a space, not exactly like the traditional 2d computer, but rather one in which a distinct and different set of operations is easy and natural. Design studies, complemented by user studies, explore the larger space of three-dimensional input possibilities. The target applications are spatial arrangement, freeform shape construction, and molecular design. New possibilities for spatial construction develop alongside particular nuances of input devices and the interactions they support. Task-specific tangible controllers provide a cultural affordance which links input devices to deep histories of tool use, enhancing intuition and affective connection within an interface. On a more practical, but still emotional level, these input devices frame kinesthetic space, resulting in high-bandwidth interactions where large amounts of data can be comfortably and quickly communicated. A crucial issue with this interface approach is the tension between specific and generic input devices. 
Generic devices are the tradition in computing -- versatile, remappable, frequently bereft of culture or relevance to the task at hand. Specific interfaces are an emerging trend -- customized and culturally rich, though to date tightly linked to a single application, limiting their widespread use. The theoretical heart of this thesis, and its chief contribution to interface research at large, is an approach to customization. Instead of matching an application domain's data, each new input device supports a functional class. The spatial construction task is split into four types of manipulation: grabbing, pointing, holding, and rubbing. Each of these action classes spans the space of spatial construction, allowing a single tool to be used in many settings without losing the unique strengths of its specific form. Outside of 3d interface, outside of spatial construction, this approach strikes a balance between generic and specific suitable for many interface scenarios. In practice, these specific function groups are given versatility via a quick remapping technique which allows one physical tool to perform many digital tasks. For example, the handle can be quickly remapped from a lightsaber that cuts shapes to tools that place simple platonic solids, erase portions of objects, and draw double-helices in space. The contributions of this work lie both in a theoretical model of spatial interaction, and in input devices (combined with new interactions) which illustrate the efficacy of this philosophy. This research brings the new results of Tangible User Interface to the field of Virtual Reality. We find a space, in and around the hand, where full-fledged haptics are not necessary for users to physically connect with digital form.

    eXtended Reality for Education and Training

    Get PDF
    The abstract is in the attachment.

    Multimodal feedback for mid-air gestures when driving

    Get PDF
    Mid-air gestures in cars are being used by an increasing number of drivers on the road. Usability concerns mean good feedback is important, but a balance needs to be found between supporting interaction and reducing distraction in an already demanding environment. Visual feedback is most commonly used, but takes visual attention away from driving. This thesis investigates novel non-visual alternatives to support the driver during mid-air gesture interaction: Cutaneous Push, Peripheral Lights, and Ultrasound feedback. These modalities lack the expressive capabilities of high resolution screens, but are intended to allow drivers to focus on the driving task. A new form of haptic feedback — Cutaneous Push — was defined. Six solenoids were embedded along the rim of the steering wheel, creating three bumps under each palm. Studies 1, 2, and 3 investigated the efficacy of novel static and dynamic Cutaneous Push patterns, and their impact on driving performance. In simulated driving studies, the patterns achieved identification rates of up to 81.3% for static patterns, 73.5% for dynamic patterns, and 100% recognition of directional cues. Cutaneous Push notifications did not impact driving behaviour or workload and showed very high user acceptance. Cutaneous Push patterns have the potential to make driving safer by providing non-visual and instantaneous messages, for example to indicate an approaching cyclist or obstacle. Studies 4 & 5 looked at novel uni- and bimodal feedback combinations of Visual, Auditory, Cutaneous Push, and Peripheral Lights for mid-air gestures and found that non-visual feedback modalities, especially when combined bimodally, offered just as much support for interaction without negatively affecting driving performance, visual attention and cognitive demand. 
These results provide compelling support for using non-visual feedback from in-car systems, supporting input whilst letting drivers focus on driving. Studies 6 & 7 investigated the above bimodal combinations as well as uni- and bimodal Ultrasound feedback during the Lane Change Task to assess the impact of gesturing and feedback modality on car control during more challenging driving. The results of Study 7 suggest that Visual and Ultrasound feedback are not appropriate for in-car usage unless combined multimodally; if Ultrasound is used unimodally, it is more useful in a binary scenario. Findings from Studies 5, 6, and 7 suggest that multimodal feedback significantly reduces eyes-off-the-road time compared to Visual feedback without compromising driving performance or perceived user workload, and thus can potentially reduce crash risk. Novel design recommendations for providing feedback during mid-air gesture interaction in cars are provided, informed by the experiment findings.
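The Cutaneous Push patterns described above lend themselves to a simple encoding: a static pattern as a fixed set of active solenoids, a dynamic pattern as a timed sequence of such sets. The sketch below is purely illustrative — the solenoid layout, timing, and helper names are assumptions, not parameters from the thesis:

```python
# Assumed layout: solenoids 0-2 sit under the left palm, 3-5 under the right.
LEFT, RIGHT = (0, 1, 2), (3, 4, 5)

def static_pattern(active):
    """A static pattern: a fixed set of solenoids pushed simultaneously."""
    return frozenset(active)

def dynamic_pattern(steps, interval_ms=150):
    """A dynamic pattern: a timed sequence of solenoid sets."""
    return [(frozenset(step), interval_ms) for step in steps]

# A directional cue: sweep outward under the right palm, e.g. to warn of
# an approaching cyclist on the right.
right_sweep = dynamic_pattern([(3,), (4,), (5,)])
both_palms = static_pattern(LEFT + RIGHT)  # all six bumps at once
```

Representing patterns as data rather than hard-coded actuator calls would let the same driver-facing cue be remapped across studies without touching the playback code.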

    Instruction with 3D Computer Generated Anatomy

    No full text
    Research objectives. 1) To create an original and useful software application; 2) to investigate the utility of dyna-linking for teaching upper limb anatomy. Dyna-linking is an arrangement whereby interaction with one representation automatically drives the behaviour of another representation. Method. An iterative user-centred software development methodology was used to build, test and refine successive prototypes of an upper limb software tutorial. A randomised trial then tested the null hypothesis: There will be no significant difference in learning outcomes between participants using dyna-linked 2D and 3D representations of the upper limb and those using non-dyna-linked representations. Data were analysed in SPSS using factorial analysis of variance (ANOVA). Results and analysis. The study failed to reject the null hypothesis, as there was no significant difference between experimental conditions. Post-hoc analysis revealed that participants with low prior knowledge performed significantly better (p = 0.036) without dyna-linking (mean gain = 7.45) than with dyna-linking (mean gain = 4.58). Participants with high prior knowledge performed equally well with or without dyna-linking. These findings reveal an aptitude by treatment interaction (ATI) whereby the effectiveness of dyna-linking varies according to learner ability. On average, participants using the non-dyna-linked system spent 3 minutes and 4 seconds longer studying the tutorial. Participants using the non-dyna-linked system clicked 30% more on the representations. Dyna-linking had a high perceived value in questionnaire surveys (n=48) and a focus group (n=7). Conclusion. Dyna-linking has a high perceived value but may actually over-automate learning by prematurely giving novice learners a fully worked solution. Further research is required to confirm if this finding is repeated in other domains, with different learners and more sophisticated implementations of dyna-linking.
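The dyna-linking arrangement described above — interaction with one representation automatically driving another — amounts to an observer pattern between views. A minimal sketch, with class and method names invented for illustration:

```python
class Representation:
    """A view (e.g., a 2D diagram or 3D model) that can mirror a peer's state."""
    def __init__(self, name):
        self.name = name
        self.selected = None  # currently highlighted anatomical structure
        self.peers = []       # dyna-linked representations

    def link(self, other):
        """Dyna-link two representations bidirectionally."""
        self.peers.append(other)
        other.peers.append(self)

    def select(self, structure):
        """User interaction: selecting here also drives every linked view."""
        self.selected = structure
        for peer in self.peers:
            peer.selected = structure  # the implicit, automatic update

view_2d = Representation("2D cross-section")
view_3d = Representation("3D model")
view_2d.link(view_3d)
view_2d.select("biceps brachii")
print(view_3d.selected)  # → biceps brachii
```

A non-dyna-linked condition would simply omit the propagation loop in `select`, leaving the learner to locate the structure in the second view themselves — which is exactly the extra effort the ATI finding suggests benefits novices.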

    Embodiment and the Arts: Views from South Africa

    Get PDF
    Embodiment and the Arts: Views from South Africa presents a diversity of views on the nature and status of the body in relation to acting, advertisements, designs, films, installations, music, photographs, performance, typography, and video works. Applying the methodologies of phenomenology, hermeneutic phenomenology, embodied perception, ecological psychology, and sense-based research, the authors place the body at the centre of their analyses. The cornerstone of the research presented here is the view that aesthetic experience is active and engaged rather than passive and disinterested. This novel volume offers a rich and diverse range of applications of the paradigm of embodiment to the arts in South Africa. Published.

    Robot Assisted 3D Block Building to Augment Spatial Visualization Skills in Children - An exploratory study

    Get PDF
    The unique social presence of robots can be leveraged in learning situations to increase the comfort and engagement of children, while still providing instructional guidance. When and how to intervene to provide feedback on their mistakes is still not fully clear. One effective feedback strategy used by human tutors is to implicitly inform students of their errors rather than explicitly providing corrective feedback. This work explores if and how a social robot can be utilized to provide implicit feedback to a user who is performing spatial visualization tasks. We explore the impact of implicit and explicit feedback strategies on users' learning gains, self-regulation, and perception of the robot during 3D block building tasks in one-on-one child-robot tutoring. We demonstrate a real-time system that tracks the assembly of a 3D block structure using a RealSense RGB-D camera. The system allows three control actions: Add, Remove and Adjust on blocks of four basic colors to manipulate the structure in the play area. 3D structures can be authored in the Learning mode for the system to record, and tracking enables the robot to provide selected feedback in the Teaching mode depending on the type of mistake made by the user. The proposed system can detect five types of mistakes: in the shape, color, orientation, level from base, and position of a block. The feedback provided by the robot is based on the mistake made by the user. Either implicit or explicit feedback, chosen randomly, is narrated by the robot. Various feedback statements are designed to implicitly inform the user of the mistake made. Two robot behaviours have been designed to support the effective delivery of feedback statements: nodding and referential gaze. We conducted an exploratory study to evaluate our robot assisted 3D block building system to augment spatial visualization skills with one participant. We found that the system was easy to use. 
The robot was perceived as trustworthy, fun and interesting. The robot's intentions are communicated through its feedback statements and behaviour. Our goal is to explore whether suggesting mistakes implicitly can help users self-regulate and scaffold their learning processes.
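The five-way mistake detection described above can be sketched as a comparison between the placed block and the authored target. The field names and check order below are assumptions for illustration, not the system's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Block:
    shape: str        # e.g. "2x2", "2x4"
    color: str        # one of the four basic colors
    orientation: int  # rotation in degrees
    level: int        # height above the base plate
    position: tuple   # (x, y) cell in the play area

def classify_mistake(placed: Block, target: Block):
    """Return the first detected mistake type, or None if the block matches."""
    if placed.shape != target.shape:
        return "shape"
    if placed.color != target.color:
        return "color"
    if placed.orientation != target.orientation:
        return "orientation"
    if placed.level != target.level:
        return "level"
    if placed.position != target.position:
        return "position"
    return None

target = Block("2x4", "red", 0, 1, (3, 2))
placed = Block("2x4", "blue", 0, 1, (3, 2))
print(classify_mistake(placed, target))  # → color
```

The returned mistake type would then index into the bank of implicit or explicit feedback statements the robot narrates.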

    Development of a System for the Training Assessment and Mental Workload Evaluation

    Get PDF
    Several studies have demonstrated that the main cause of accidents is Human Factors (HF) failures. Humans are the least and last controllable factor in activity workflows, and the availability of tools able to provide objective information about the user's cognitive state would be very helpful in maintaining proper levels of safety. To address these issues, the objectives of the PhD covered three topics. The first phase was focused on the study of machine-learning techniques to evaluate the user's mental workload during the execution of a task. In particular, the methodology was developed to address two important limitations: i) over-time reliability (no recalibration of the algorithm); ii) automatic brain-feature selection to avoid both underfitting and overfitting. The second phase was dedicated to the study of training assessment. In fact, standard training evaluation methods do not provide any objective information about the amount of brain activation/resources required by the user, either during the execution of the task or across training sessions. Therefore, the aim of this phase was to define a neurophysiological methodology able to address this limitation. The third phase of the PhD addressed the lack of neurophysiological studies regarding the evaluation of the cognitive control behaviour under which the user performs a task. The model introduced by Rasmussen was selected to seek neurometrics characterizing the skill, rule, and knowledge behaviours by means of the user's brain activity. The experiments were initially run in controlled environments, whilst the final tests were carried out in realistic environments. The results demonstrated the validity of the developed algorithm and methodologies (2 patents pending) in solving the issues outlined above. In addition, these results led to the submission of an H2020-SMEINST project for the realization of a device based on them.
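The automatic feature-selection idea described in the first phase — picking discriminative brain features without hand-tuning, to avoid under- and overfitting — can be illustrated generically. The scoring rule and nearest-centroid classifier below are simple stand-ins, not the patented method:

```python
def select_features(X, y, k):
    """Rank features by between-class mean separation; keep the top k."""
    n_feat = len(X[0])
    def score(j):
        hi = [x[j] for x, label in zip(X, y) if label == 1]
        lo = [x[j] for x, label in zip(X, y) if label == 0]
        return abs(sum(hi) / len(hi) - sum(lo) / len(lo))
    return sorted(range(n_feat), key=score, reverse=True)[:k]

def nearest_centroid(X, y, selected, sample):
    """Classify a sample by distance to each class centroid,
    using only the automatically selected features."""
    def centroid(label):
        rows = [x for x, lab in zip(X, y) if lab == label]
        return [sum(r[j] for r in rows) / len(rows) for j in selected]
    def dist(c):
        return sum((sample[j] - cj) ** 2 for j, cj in zip(selected, c))
    return min((0, 1), key=lambda label: dist(centroid(label)))

# Toy feature vectors: [frontal theta, parietal alpha, noise channel]
X = [[0.9, 0.2, 0.5], [0.8, 0.3, 0.1], [0.2, 0.8, 0.9], [0.1, 0.9, 0.4]]
y = [1, 1, 0, 0]  # 1 = high workload, 0 = low workload
feats = select_features(X, y, k=2)
print(feats)  # → [0, 1] (the noise channel is dropped)
print(nearest_centroid(X, y, feats, [0.85, 0.25, 0.7]))  # → 1
```

Keeping selection data-driven is what allows the pipeline to generalize across sessions without the per-session recalibration the thesis set out to avoid.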

    Interactive Imitation Learning in Robotics: A Survey

    Full text link
    Interactive Imitation Learning (IIL) is a branch of Imitation Learning (IL) where human feedback is provided intermittently during robot execution, allowing online improvement of the robot's behavior. In recent years, IIL has increasingly started to carve out its own space as a promising data-driven alternative for solving complex robotic tasks. The advantages of IIL are its data efficiency, as the human feedback guides the robot directly towards an improved behavior, and its robustness, as the distribution mismatch between the teacher and learner trajectories is minimized by providing feedback directly over the learner's trajectories. Nevertheless, despite the opportunities that IIL presents, its terminology, structure, and applicability are neither clear nor unified in the literature, slowing down its development and, therefore, the research of innovative formulations and discoveries. In this article, we attempt to facilitate research in IIL and lower entry barriers for new practitioners by providing a survey of the field that unifies and structures it. In addition, we aim to raise awareness of its potential, what has been accomplished, and what open research questions remain. We organize the most relevant works in IIL in terms of human-robot interaction (i.e., types of feedback), interfaces (i.e., means of providing feedback), learning (i.e., models learned from feedback and function approximators), user experience (i.e., human perception about the learning process), applications, and benchmarks. Furthermore, we analyze similarities and differences between IIL and reinforcement learning (RL), providing a discussion on how the concepts of offline, online, off-policy, and on-policy learning should be transferred to IIL from the RL literature. We particularly focus on robotic applications in the real world and discuss their implications, limitations, and promising future areas of research.
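The intermittent-feedback loop that distinguishes IIL from classical IL can be sketched generically. The simplified, DAgger-like scheme below is for illustration only; the toy environment, policies, and random intervention rule are invented here, not taken from the survey:

```python
import random

class LineEnv:
    """Toy 1-D task: move right until position 3 is reached."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos += action
        return self.pos, self.pos >= 3  # (next state, done)

def iil_episode(policy, expert, env, intervene_prob=0.2, dataset=None):
    """One episode of a generic IIL loop: the robot acts under its own
    policy, and the human intermittently labels the *learner's* states,
    keeping the training distribution close to what the learner visits."""
    if dataset is None:
        dataset = []
    state = env.reset()
    done = False
    while not done:
        action = policy(state)
        if random.random() < intervene_prob:     # intermittent human feedback
            correction = expert(state)           # label on the learner's state
            dataset.append((state, correction))  # aggregate for retraining
            action = correction                  # execute the correction
        state, done = env.step(action)
    return dataset

novice = lambda s: 1   # current robot policy
teacher = lambda s: 1  # human's preferred action
data = iil_episode(novice, teacher, LineEnv(), intervene_prob=1.0)
print(data)  # → [(0, 1), (1, 1), (2, 1)]
```

Because the corrections are collected on states the learner itself visits, retraining on `data` targets exactly the distribution-mismatch problem the abstract highlights.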