12 research outputs found

    An Experimental Study of Learning Behaviour in an ELearning Environment

    To achieve an adaptive eLearning course, it is crucial to control and monitor student behaviour dynamically in order to implicitly diagnose the student's learning style. Eye tracking can serve that purpose by investigating gaze behaviour over the learning content. In this study, we conduct an eye tracking experiment to analyse a student's pattern of behaviour and output his or her learning style as an aspect of personalisation in an eLearning course. We use the EPOC electroencephalography (EEG) headset, which reflects users' emotions, to improve our results with more accurate data. Our objective is to test the hypothesis that the verbal and visual learning styles reflect actual preferences according to the Felder and Silverman Learning Style Model in an eLearning environment. Another objective is to use the outcome presented in this experiment as the starting point for further exhaustive experiments. In this paper, we present the actual state of our experiment, our conclusions, and plans for future development.
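    As a rough illustration of how gaze data over predefined areas of interest could feed such a diagnosis, consider the minimal sketch below; the AOI labels, fixation format, and thresholds are hypothetical assumptions, not details taken from the study.

        # Illustrative sketch (Python): classify a learner as visual or verbal
        # from fixation durations on areas of interest (AOIs).
        # AOI labels and the 0.6/0.4 thresholds are assumptions for illustration.

        def classify_learning_style(fixations):
            """fixations: list of (aoi, duration_ms) pairs, aoi in {'image', 'text'}."""
            visual = sum(d for aoi, d in fixations if aoi == "image")
            verbal = sum(d for aoi, d in fixations if aoi == "text")
            total = visual + verbal
            if total == 0:
                return "undetermined"
            ratio = visual / total
            if ratio >= 0.6:
                return "visual"
            if ratio <= 0.4:
                return "verbal"
            return "balanced"

        # A learner who dwells mostly on diagrams:
        print(classify_learning_style([("image", 5200), ("text", 1800), ("image", 2400)]))  # visual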

    Expressive social exchange between humans and robots

    Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 253-264). Sociable humanoid robots are natural and intuitive for people to communicate with and to teach. We present recent advances in building an autonomous humanoid robot, Kismet, that can engage humans in expressive social interaction. We outline a set of design issues and a framework that we have found to be of particular importance for sociable robots. Having a human-in-the-loop places significant social constraints on how the robot aesthetically appears, how its sensors are configured, its quality of movement, and its behavior. Inspired by infant social development, psychology, ethology, and evolutionary perspectives, this work integrates theories and concepts from these diverse viewpoints to enable Kismet to enter into natural and intuitive social interaction with a human caregiver, reminiscent of parent-infant exchanges. Kismet perceives a variety of natural social cues from visual and auditory channels, and delivers social signals to people through gaze direction, facial expression, body posture, and vocalizations. We present the implementation of Kismet's social competencies and evaluate each with respect to: 1) the ability of naive subjects to read and interpret the robot's social cues, 2) the robot's ability to perceive and appropriately respond to naturally offered social cues, 3) the robot's ability to elicit interaction scenarios that afford rich learning potential, and 4) how this produces a rich, flexible, dynamic interaction that is physical, affective, and social. Numerous studies with naive human subjects are described that provide the data upon which we base our evaluations. By Cynthia L. Breazeal, Sc.D.
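    The perceive-arbitrate-express pipeline the abstract outlines (read social cues, select a response, deliver it through gaze, expression, posture, and voice) can be caricatured in a few lines of Python; the cue names and response table below are invented for illustration and are not Kismet's actual behaviour system.

        # Toy perceive-arbitrate-express loop for a sociable robot.
        # Cues and responses are illustrative assumptions.

        RESPONSES = {
            "praising_speech": {"gaze": "toward_person", "expression": "happy"},
            "scolding_speech": {"gaze": "averted", "expression": "sad"},
            "face_detected": {"gaze": "toward_person", "expression": "interest"},
            "fast_motion": {"gaze": "toward_motion", "expression": "surprise"},
        }

        def arbitrate(cues):
            """Return the expressive response for the most salient recognised cue."""
            for cue in cues:  # cues assumed pre-sorted by salience
                if cue in RESPONSES:
                    return RESPONSES[cue]
            return {"gaze": "scanning", "expression": "calm"}  # idle behaviour

        print(arbitrate(["praising_speech", "face_detected"]))
        # {'gaze': 'toward_person', 'expression': 'happy'}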

    On-line control of active camera networks

    Large networks of cameras have been increasingly employed to capture dynamic events for tasks such as surveillance and training. When using active (pan-tilt-zoom) cameras to capture events distributed throughout a large area, human control becomes impractical and unreliable. This has led to the development of automated approaches for on-line camera control. I introduce a new approach that consists of a stochastic performance metric and a constrained optimization method. The metric quantifies the uncertainty in the state of multiple points on each target. It uses state-space methods with stochastic models of the target dynamics and camera measurements. It can account for static and dynamic occlusions, accommodate requirements specific to the algorithm used to process the images, and incorporate other factors that can affect its results. The optimization explores the space of camera configurations over time under constraints associated with the cameras, the predicted target trajectories, and the image processing algorithm. While an exhaustive exploration of this parameter space is intractable, through careful complexity analysis and application domain observations I have identified appropriate alternatives for reducing the space. Specifically, I reduce the spatial dimension of the search by dividing the optimization problem into subproblems, and then optimizing each subproblem independently. I reduce the temporal dimension of the search by using empirically based heuristics inside each subproblem. The result is a tractable optimization that explores an appropriate subspace of the parameters, while attempting to minimize the risk of excluding the global optimum. The approach can be applied to conventional surveillance tasks (e.g., tracking or face recognition), as well as tasks employing more complex computer vision methods (e.g., markerless motion capture or 3D reconstruction). I present the results of experimental simulations of two such scenarios, using controlled and natural (unconstrained) target motions, employing simulated and real target tracks, in realistic scenes, and with realistic camera networks.
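    The subproblem decomposition can be illustrated with a toy per-camera optimization: each camera independently picks the pan-tilt-zoom setting that minimizes the predicted uncertainty over its targets. The uncertainty function below is a stand-in for the dissertation's state-space metric, and the grid of candidate settings is invented for the example.

        # Sketch (Python): one subproblem per camera, solved independently.
        import itertools

        def predicted_uncertainty(setting, target):
            # Stand-in metric: uncertainty grows with the offset between where
            # the camera points and the target's predicted position, and
            # shrinks with zoom. Not the actual stochastic metric.
            pan, tilt, zoom = setting
            dx, dy = pan - target["pred_x"], tilt - target["pred_y"]
            return (dx * dx + dy * dy) / zoom

        def best_setting(targets, candidate_settings):
            """Pick the setting minimizing total predicted uncertainty."""
            return min(
                candidate_settings,
                key=lambda s: sum(predicted_uncertainty(s, t) for t in targets),
            )

        targets = [{"pred_x": 1.0, "pred_y": 2.0}, {"pred_x": 1.5, "pred_y": 2.5}]
        settings = list(itertools.product([0.0, 1.0, 2.0], [1.0, 2.0, 3.0], [1.0, 2.0]))
        print(best_setting(targets, settings))  # grid point nearest the targets, max zoom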

    Virtual reality and body rotation: 2 flight experiences in comparison

    Embodied interfaces, represented by devices that incorporate bodily motion and proprioceptive stimulation, are promising for Virtual Reality (VR) because they can improve immersion and user experience while at the same time reducing simulator sickness compared to more traditional handheld interfaces (e.g., gamepads). The aim of this study is to evaluate a novel embodied interface called VitruvianVR. The machine is composed of two separate rings that allow its users to rotate bodily about three different axes. The suitability of the VitruvianVR was tested in a Virtual Reality flight scenario. To reach this goal, we compared the VitruvianVR to a gamepad using performance measures (i.e., accuracy, fails), head movements, and body position. Furthermore, a series of data from questionnaires about sense of presence, user experience, cognitive load, usability, and cybersickness was collected.
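    The kind of condition comparison described (VitruvianVR vs. gamepad on accuracy) might be analysed as sketched below; the accuracy values are fabricated placeholders, and the choice of an independent-samples t-test is an assumption about the study design.

        # Illustrative analysis (Python): compare accuracy between conditions.
        from scipy import stats

        vitruvian = [0.82, 0.78, 0.85, 0.80, 0.77, 0.84]  # placeholder data
        gamepad = [0.75, 0.72, 0.79, 0.74, 0.70, 0.78]    # placeholder data

        t, p = stats.ttest_ind(vitruvian, gamepad)
        print(f"t = {t:.2f}, p = {p:.3f}")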

    Life Sciences Program Tasks and Bibliography for FY 1996

    This document includes information on all peer reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division during fiscal year 1996. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Life Sciences Program Tasks and Bibliography for FY 1997

    This document includes information on all peer reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division during fiscal year 1997. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Patient centric intervention for children with high functioning autism spectrum disorder. Can ICT solutions improve the state of the art?

    In my PhD research we developed an integrated technological platform for the acquisition of neurophysiologic signals in a semi-naturalistic setting where children are free to move around, play with different objects, and interact with the examiner. The interaction with the examiner rather than with a screen is another very important feature of the present research, as it recreates a more realistic situation with social interactions and cues. In this paradigm, we can assume that the signals acquired from the brain and the autonomic system are much more similar to what is generated while the child interacts in common life situations. This setting, with a relatively simple technical implementation, can be considered a step towards a more behaviorally driven analysis of neurophysiologic activity. Within the context of a pilot open trial, we showed the feasibility of the technological platform applied to classical intervention solutions for autism. We found that (1) the platform was useful during both child-therapist interaction at the hospital and child-parent interaction at home, and (2) tailored intervention was compatible with at-home use by non-professional therapists/parents. Going back to the title of my thesis, 'Can ICT solutions improve the state of the art?', the answer could be: 'Yes, it can be a useful support for a skilled professional in the field of autism.'
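    A central technical requirement of such a platform is aligning signals sampled at different rates (e.g., EEG and autonomic measures) onto one clock while the child moves freely. The sketch below shows one generic way to do this; the stream names, rates, and interpolation choice are assumptions, not details of the platform.

        # Sketch (Python): resample two hypothetical streams onto a shared clock.
        import numpy as np

        def resample(t_src, x_src, t_common):
            """Linearly interpolate a signal onto a shared timestamp grid."""
            return np.interp(t_common, t_src, x_src)

        t_eeg = np.arange(0, 10, 1 / 128)                 # EEG at 128 Hz (assumed)
        t_hr = np.arange(0, 10, 1 / 4)                    # heart rate at 4 Hz (assumed)
        eeg = np.sin(2 * np.pi * 10 * t_eeg)              # synthetic 10 Hz rhythm
        hr = 90 + 5 * np.sin(2 * np.pi * 0.1 * t_hr)      # synthetic slow drift

        t_common = np.arange(0, 10, 1 / 32)               # shared 32 Hz analysis clock
        eeg_c, hr_c = resample(t_eeg, eeg, t_common), resample(t_hr, hr, t_common)
        print(eeg_c.shape, hr_c.shape)                    # aligned, equal length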

    Development of an augmented reality guided computer assisted orthopaedic surgery system

    Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system (ARgCAOS). After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, allowing virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated, and, utilising the tracking information, controlled resection was performed, producing sub-millimetre accuracies. Several complications resulted from the tool-mounted approach; therefore, a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user. The system allowed the augmentation of the natural view of the user, providing convincing and immersive three-dimensional augmented guidance, with probing and resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.
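    Marker-based tracking and overlay of the kind described is commonly realised with OpenCV's ArUco module; the sketch below is one such realisation, not the thesis's implementation, and the camera intrinsics and marker size are assumed values. (OpenCV 4.7+ moves detectMarkers onto an ArucoDetector object.)

        # Sketch (Python, OpenCV < 4.7 module-level ArUco API):
        # detect a planar fiducial, estimate its pose, draw a virtual axis on it.
        import cv2
        import numpy as np

        camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
        dist_coeffs = np.zeros(5)
        marker_len = 0.05  # marker side in metres (assumed)
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

        def augment(frame):
            corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
            if ids is None:
                return frame
            # 3D corners of a square marker centred at its own origin.
            obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                           dtype=np.float32) * marker_len / 2
            for c in corners:
                ok, rvec, tvec = cv2.solvePnP(obj, c[0], camera_matrix, dist_coeffs)
                if ok:
                    # Virtual content stays registered to the physical marker.
                    cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                                      rvec, tvec, marker_len)
            return frame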