
    Teaching Introductory Programming Concepts through a Gesture-Based Interface

    Computer programming is an integral part of a technology-driven society, so there is a tremendous need to teach programming to a wider audience. One of the challenges in meeting this demand for programmers is that most traditional computer programming classes are targeted at university/college students with strong math backgrounds. To expand the computer programming workforce, we need to encourage a wider range of students to learn about programming. The goal of this research is to design and implement a gesture-driven interface to teach computer programming to young and non-traditional students. We designed our user interface based on feedback from students attending the College of Engineering summer camps at the University of Arkansas. Our system uses the Microsoft Xbox Kinect to capture the movements of new programmers as they use our system. Our software then tracks and interprets student hand movements in order to recognize specific gestures which correspond to different programming constructs, and uses this information to create and execute programs using the Google Blockly visual programming framework. We focus on various gesture recognition algorithms to interpret user data as specific gestures, including template matching, sector quantization, and supervised machine learning clustering algorithms.
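    Template matching, one of the recognition approaches named above, can be illustrated with a minimal sketch (not the authors' implementation): a captured hand trajectory is resampled to a fixed number of points, normalized for position and scale, and compared against stored gesture templates by Euclidean distance. The template labels and parameters below are hypothetical.

```python
import numpy as np

def resample(points, n=32):
    """Resample a 2D hand trajectory to n points evenly spaced by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    x = np.interp(targets, cum, points[:, 0])
    y = np.interp(targets, cum, points[:, 1])
    return np.stack([x, y], axis=1)

def normalize(points):
    """Center on the centroid and scale to unit size for position/scale invariance."""
    points = points - points.mean(axis=0)
    scale = np.abs(points).max() or 1.0
    return points / scale

def classify(trajectory, templates):
    """Return the label of the stored template closest to the observed trajectory."""
    obs = normalize(resample(trajectory))
    def dist(t):
        return np.linalg.norm(obs - normalize(resample(t["points"])))
    return min(templates, key=dist)["label"]

# Hypothetical usage: one template recorded per programming construct,
# e.g. a circle gesture for "loop", a rightward swipe for "move forward".
templates = [
    {"label": "loop", "points": [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2 * np.pi, 50)]},
    {"label": "move_forward", "points": [(x, 0.0) for x in np.linspace(0, 1, 50)]},
]
print(classify([(0.1 * x, 0.0) for x in range(20)], templates))  # -> "move_forward"
```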

    A New Hand-Movement-Based Authentication Method Using Feature Importance Selection with the Hotelling’s Statistic

    The growing amount of collected and processed data means that there is a need to control access to these resources. Very often, this type of control is carried out on the basis of biometric analysis. This article proposes a new user authentication method based on a spatial analysis of the movement of the fingers' positions. This movement creates a sequence of data that is registered by a motion recording device. The presented approach combines the spatial analysis of the positions of all fingers at each point in time. The proposed method is able to exploit the specific, often distinctive finger movements of each user. The experimental results confirm the effectiveness of the method in biometric applications. In this paper, we also introduce an effective method of feature selection based on the Hotelling's T² statistic. This approach allows the most distinctive features of each object to be selected from the set of all objects in the database, thanks to appropriate preparation of the input data.
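    As a rough sketch of how a Hotelling's T²-based criterion can rank features (the paper's exact formulation may differ), each feature can be scored by the two-sample T² statistic separating one user's recordings from those of all other users in the database; the data shapes and values below are illustrative assumptions.

```python
import numpy as np

def hotelling_t2(X1, X2):
    """Two-sample Hotelling's T^2 statistic (rows = observations, columns = features)."""
    n1, n2 = len(X1), len(X2)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled covariance of the two samples
    S = ((n1 - 1) * np.cov(X1, rowvar=False) + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    S = np.atleast_2d(S)
    diff = m1 - m2
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.pinv(S) @ diff

def rank_features(X_user, X_others):
    """Rank features by how strongly each one separates this user from all others."""
    scores = [hotelling_t2(X_user[:, [j]], X_others[:, [j]]) for j in range(X_user.shape[1])]
    return np.argsort(scores)[::-1]  # indices of the most distinctive features first

# Hypothetical usage with simulated finger-position features:
rng = np.random.default_rng(0)
X_user = rng.normal(loc=[2.0, 0.0, 0.0], size=(30, 3))
X_others = rng.normal(loc=[0.0, 0.0, 0.0], size=(200, 3))
print(rank_features(X_user, X_others))  # feature 0 should rank first
```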

    Review of three-dimensional human-computer interaction with focus on the leap motion controller

    Modern hardware and software development has led to an evolution of user interfaces, from the command line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application, and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and the corresponding methods of gesture design and recognition. Further, a review of evaluation methods for the proposed natural user interfaces is given.

    A myographic-based HCI solution proposal for upper limb amputees

    "Conference on ENTERprise Information Systems / International Conference on Project MANagement / Conference on Health and Social Care Information Systems and Technologies, CENTERIS / ProjMAN / HCist 2016, October 5-7, 2016 "Interaction plays a fundamental role as it sets bridges between humans and computers. However, people with disability are prevented to use computers by the ordinary means, due to physical or intellectual impairments. Thus, the human-computer interaction (HCI) research area has been developing solutions to improve the technological accessibility of impaired people, by enhancing computers and similar devices with the necessary means to attend to the different disabilities, thereby contributing to reduce digital exclusion. Within the aforementioned scope, this paper presents an interaction solution for upper limb amputees, supported on a myographic gesture-control device named Myo. This device is an emergent wearable technology, which consists in a muscle-sensitive bracelet. It transmits myographic and inertial data, susceptible of being converted into actions for interaction purposes (e.g. clicking or moving a mouse cursor). Although being a gesture control armband, Myo can also be used in the legs, as was ascertained through some preliminary tests with users. Both data types (myographic and inertial) remain to be transmitted and are available to be converted into gestures. A general architecture, a use case diagram and the two main functional modules specification are presented. These will guide the future implementation of the proposed Myo-based HCI solution, which is intended to be a solid contribution for the interaction between upper limb amputees and computers

    Advanced Technologies for Human-Computer Interfaces in Mixed Reality

    As human beings, we trust our five senses, which allow us to experience the world and communicate. From birth, the amount of data we can acquire every day is impressive, and such richness reflects the complexity of humankind in arts, technology, and beyond. The advent of computers and the consequent progress in Data Science and Artificial Intelligence showed how large amounts of data can contain some sort of “intelligence” themselves. Machines learn and create a superimposed layer of reality. How are data generated by humans and machines related today? To give an answer, we present three projects in the context of “Mixed Reality”, the ideal place where Reality, Virtual Reality and Augmented Reality are increasingly connected, as data make digital experiences more “real”. We start with BRAVO, a tool that exploits brain activity to improve the user's learning process in real time by means of a Brain-Computer Interface that acquires EEG data. Then we present AUGMENTED GRAPHICS, a framework for detecting real-world objects so that they can be captured easily and inserted into any digital scenario. Based on the theory of moment invariants, it is particularly well suited to mobile devices, as it adopts a lightweight approach to object detection and works without any training set. The third work is GLOVR, a wearable hand controller that uses inertial sensors to offer directional controls and to recognize gestures, particularly suitable for Virtual Reality applications. It features a microphone to record voice sequences, which are then translated into tasks by means of a natural language web service. For each project we summarize the main results and trace some future directions of research and development.
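    The moment-invariant idea behind AUGMENTED GRAPHICS can be illustrated with a minimal sketch (not the project's implementation): Hu-style invariants computed from an object silhouette are translation-, scale- and rotation-invariant, so a captured shape can be matched against a small library of known shapes without any training set. The matching helper and shape names below are hypothetical.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2D grayscale/binary silhouette image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

def hu_invariants(img):
    """First two Hu moment invariants (translation/scale/rotation invariant)."""
    mu00 = central_moment(img, 0, 0)
    def eta(p, q):
        return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

def closest_shape(query_img, library):
    """Match a captured silhouette against a library of known shapes, no training set."""
    q = hu_invariants(query_img)
    return min(library, key=lambda item: np.linalg.norm(q - hu_invariants(item["mask"])))["name"]

# Hypothetical usage: a filled square vs. a filled disc as the "library".
yy, xx = np.mgrid[:64, :64]
square = ((np.abs(xx - 32) < 12) & (np.abs(yy - 32) < 12)).astype(float)
disc = (((xx - 32) ** 2 + (yy - 32) ** 2) < 14 ** 2).astype(float)
library = [{"name": "square", "mask": square}, {"name": "disc", "mask": disc}]
small_square = ((np.abs(xx - 32) < 6) & (np.abs(yy - 32) < 6)).astype(float)
print(closest_shape(small_square, library))  # -> "square" (scale invariance)
```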

    Trajectory Prediction with Event-Based Cameras for Robotics Applications

    This thesis presents the study, analysis, and implementation of a framework to perform trajectory prediction using an event-based camera for robotics applications. Event-based perception represents a novel computation paradigm based on unconventional sensing technology that holds promise for data acquisition, transmission, and processing at very low latency and power consumption, which is crucial for the future of robotics. An event-based camera, in particular, is a sensor that responds to light changes in the scene, producing an asynchronous and sparse output over a wide illumination dynamic range. Such cameras capture only the relevant spatio-temporal information - mostly driven by motion - at a high rate, avoiding the inherent redundancy in static areas of the field of view. For these reasons, this device represents a potential key tool for robots that must function in highly dynamic and/or rapidly changing scenarios, or where the optimisation of resources is fundamental, such as robots with on-board systems. Prediction skills are something humans rely on daily - even unconsciously - for instance when driving, playing sports, or collaborating with other people. In the same way, predicting the trajectory or the end-point of a moving target allows a robot to plan appropriate actions and their timing in advance, and to interact with it in many different ways. Moreover, prediction is also helpful for compensating the robot's internal delays in the perception-action chain, due for instance to limited sensors and/or actuators. The question I addressed in this work is whether event-based cameras are advantageous for trajectory prediction in robotics: in particular, whether the classical deep learning architectures used for this task can accommodate event-based data while working asynchronously, and what benefits they bring with respect to standard cameras. The a priori hypothesis is that, since the sampling of the scene is driven by motion, such a device allows more meaningful information acquisition, improving prediction accuracy and processing data only when needed - without any information loss or redundant acquisition. To test the hypothesis, experiments are mostly carried out using the neuromorphic iCub, a custom version of the iCub humanoid platform that mounts two event-based cameras in the eyeballs, along with standard RGB cameras. To further motivate the work on iCub, a preliminary step is the evaluation of the robot's internal delays, a value that should be compensated for by the prediction in order to interact in real time with the perceived object. The first part of this thesis covers the implementation of the event-based framework for prediction, answering the question of whether Long Short-Term Memory neural networks, the architecture used in this work, can be combined with event-based cameras. The task considered is handover human-robot interaction, during which the trajectory of the object in the human's hand must be inferred. Results show that the proposed pipeline can predict both the spatial and temporal coordinates of the incoming trajectory with higher accuracy than model-based regression methods. Moreover, fast recovery from failure cases and adaptive prediction-horizon behavior are exhibited. Subsequently, I questioned how convenient the event-based sampling approach is with respect to the classical fixed-rate approach. The test case used is the trajectory prediction of a bouncing ball, implemented with the pipeline previously introduced.
A comparison between the two sampling methods is analysed in terms of error at different working rates, showing how the spatial sampling of the event-based approach achieves lower error and also adapts the computational load dynamically, depending on the motion in the scene. Results from both works prove that merging event-based data with Long Short-Term Memory networks looks promising for spatio-temporal feature prediction in highly dynamic tasks, and paves the way for further studies of the temporal aspect and for a wide range of applications, not only robotics-related. Ongoing work is now focusing on the robot control side, finding the best way to exploit the spatio-temporal information provided by the predictor and defining the optimal robot behavior. Future work will see the shift of the full pipeline - prediction and robot control - to a spiking implementation. First steps in this direction have already been made thanks to a collaboration with a group from the University of Zurich, with whom I propose a closed-loop motor controller implemented on a mixed-signal analog/digital neuromorphic processor, emulating a classical PID controller by means of spiking neural networks.
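    A minimal sketch of an LSTM-based trajectory predictor of the kind described above (not the thesis's exact architecture): each input step is a 2D position sample, e.g. derived from event-camera output, and the network regresses the next position. Layer sizes and the data format are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Predict the next 2D position from a sequence of observed positions."""
    def __init__(self, input_dim=2, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, input_dim)

    def forward(self, x, state=None):
        # x: (batch, time, 2); returns a next-position prediction for every step,
        # plus the recurrent state so new samples can be fed in asynchronously.
        out, state = self.lstm(x, state)
        return self.head(out), state

model = TrajectoryLSTM()
past = torch.randn(1, 20, 2)       # 20 observed (x, y) samples of the moving target
pred, state = model(past)
next_point = pred[:, -1, :]        # prediction for the step after the last observation
print(next_point.shape)            # torch.Size([1, 2])
```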
