    Natural User Interface for Education in Virtual Environments

    Education and self-improvement are key features of human behavior. However, learning in the physical world is not always desirable or achievable, which is how simulators came to be. In some domains, purely virtual simulators can be created instead of physical ones. In this research we present a novel environment for learning that uses a natural user interface. We humans are not designed to operate and manipulate objects via keyboard, mouse, or controller. Our natural way of interaction and communication is achieved through our actuators (hands and feet) and our sensors (hearing, vision, touch, smell, and taste). That is why it makes more sense to use sensors that can track our skeletal movements, estimate our pose, and interpret our gestures. After acquiring and processing the desired natural input, a system can analyze those gestures and translate them into movement signals.
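
    The abstract stops at the gesture-to-signal step; a minimal Python sketch of that idea follows, assuming a hypothetical tracker that reports named joints as (x, y, z) positions (all joint names and thresholds are illustrative, not from the paper):

```python
# Illustrative sketch: classify a tracked skeleton pose into a movement signal.
# Joint positions are assumed to arrive as (x, y, z) tuples from some tracker;
# the joint names and the raised-hand rule below are hypothetical.

def movement_signal(joints):
    """Map a skeletal pose to a high-level movement command."""
    head_y = joints["head"][1]
    right_hand_y = joints["right_hand"][1]
    left_hand_y = joints["left_hand"][1]

    if right_hand_y > head_y:   # right hand raised above the head
        return "MOVE_FORWARD"
    if left_hand_y > head_y:    # left hand raised above the head
        return "MOVE_BACKWARD"
    return "IDLE"

# Example: a single tracked frame.
frame = {"head": (0.0, 1.7, 2.0),
         "right_hand": (0.3, 1.9, 2.0),
         "left_hand": (-0.3, 1.1, 2.0)}
print(movement_signal(frame))  # -> MOVE_FORWARD
```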

    Natural User Interface Technology Using the Kinect to Trigger Hardware Based on a Fuzzy Inference System

    Human Computer Interaction (HCI) is the discipline that studies how to make interaction between humans and computers as friendly and efficient as possible. One application of HCI principles is Natural User Interface (NUI) technology. NUI is an umbrella term for several technologies such as speech recognition, multitouch, and kinetic interfaces such as the Kinect. NUI is intended to remove the user's mental and physical barriers. The Kinect is a device that implements NUI; with it, one can capture color images, depth images, gestures, and the distance, position, and height of the user's body. Data from the Kinect are converted into commands that hardware can understand. These data are combined with a Takagi-Sugeno-Kang (TSK) Fuzzy Inference System to obtain the best possible results when triggering hardware in a smart-house simulation, so that the HCI principle of making human-computer interaction as friendly and efficient as possible can be achieved. Keywords—HCI, NUI, Kinect, TSK Fuzzy Inference System
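
    The abstract names a TSK fuzzy inference system but gives no rule base; the minimal zero-order TSK sketch below shows the mechanics on a made-up smart-house rule set (a Kinect-measured user distance dimming a lamp; all membership parameters and rule consequents are assumptions):

```python
# Illustrative zero-order TSK fuzzy inference sketch for a smart-house trigger.
# The input (user distance to a lamp, in meters) and all membership parameters
# are hypothetical; the paper's actual rule base is not given in the abstract.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lamp_brightness(distance):
    # Rule 1: IF distance is NEAR   THEN brightness = 100 (%)
    # Rule 2: IF distance is MEDIUM THEN brightness = 50
    # Rule 3: IF distance is FAR    THEN brightness = 0
    weights = [tri(distance, -1.0, 0.0, 2.0),   # NEAR
               tri(distance, 1.0, 2.5, 4.0),    # MEDIUM
               tri(distance, 3.0, 5.0, 9.0)]    # FAR
    outputs = [100.0, 50.0, 0.0]
    total = sum(weights)
    # TSK output: firing-strength-weighted average of the crisp consequents.
    return sum(w * z for w, z in zip(weights, outputs)) / total if total else 0.0

print(lamp_brightness(1.5))  # a dimming level between 50 and 100
```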

    Visual Programming Language with Natural User Interface

    One of the fastest-growing fields of interest in computer science, fueled primarily by gaming, is the Natural User Interface (NUI). NUI encompasses technologies that would replace the typical mouse-and-keyboard approach to interacting with computer systems, with the goal of making human-computer interactions more similar to face-to-face interpersonal interactions. This is done using technologies such as gesture recognition or speech recognition and synthesis, which draw on interpersonal skills we learn and practice daily. Visual Programming Languages (VPLs) are languages that allow a program to be created by arranging graphical representations of program behavior rather than textual program code. Visual programming tools are used in various disciplines, but most often in K-12 programming education, as a way to introduce fundamental programming concepts. This project is an application which combines these two ideas as an attempt to answer a question: is it possible to do meaningful programming without actually touching a computer? The application uses the Leap Motion controller for gesture recognition, C# speech recognition functionality for speech recognition, and C# and WPF for the user interface design and logic.
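
    The project itself is written in C#; as a language-neutral illustration of the underlying idea, here is a small Python sketch that turns recognized voice/gesture tokens into visual-program blocks and then interprets them (all block types and phrases are hypothetical, not the project's actual command set):

```python
# Illustrative sketch: assemble a program from spoken/gestured tokens instead
# of typed text, then interpret the assembled blocks.

program = []  # ordered list of program blocks built by the user

def on_command(tokens):
    """Translate a recognized phrase into a visual-program block."""
    if tokens[:2] == ["create", "variable"]:
        program.append({"block": "assign", "name": tokens[2], "value": 0})
    elif tokens[0] == "repeat":
        program.append({"block": "loop", "times": int(tokens[1]), "body": []})
    elif tokens[0] == "say":
        program.append({"block": "print", "text": " ".join(tokens[1:])})

def run(blocks, env=None):
    """Interpret the assembled blocks."""
    env = {} if env is None else env
    for b in blocks:
        if b["block"] == "assign":
            env[b["name"]] = b["value"]
        elif b["block"] == "print":
            print(b["text"])
        elif b["block"] == "loop":
            for _ in range(b["times"]):
                run(b["body"], env)
    return env

on_command(["create", "variable", "score"])
on_command(["say", "hello", "world"])
run(program)  # prints "hello world"
```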

    Characterizing Natural User Interface with Wearable Smart Watches

    Background - The emergence of new interaction paradigms has made using technology to realize users' natural ways of exploring the real world the ultimate goal of designers today. Research on interactive and immersive technologies for user interface design is still a challenging chore for engineers and scientists when it comes to designing natural interaction for wearable smart devices. To address the challenge, our study aims to develop guidelines for design practitioners in designing wearable smart watches that could offer natural user experiences. Methods - To better understand natural user experiences with smart watches, an extensive literature review was conducted. A quantitative survey with 80 participants was conducted, focused on the expected functions of smart watches. Based on the survey results, we selected eight participants according to technology familiarity. To achieve the objectives of our research, three studies were conducted: a design workshop (Study 1), a cultural probe (Study 2), and a focus group interview (Study 3). The design workshop was created to identify the needs and wishes people have for smart watches. The cultural probe focused on identifying natural interactions with smart watches. Finally, the focus group interview aimed to gain more insight into the cultural probe results in terms of natural user interaction with particular functions. Results - To address users' needs and wishes for wearable smart watches, we subdivided them into three categories: functions, input measures, and notification (feedback) methods. According to the results, participants wanted weather notification, health monitoring, and identification as expected functions. Regarding the method of input, voice commands and the touch screen were preferred. For feedback, most participants wanted vibrations, particularly as a reaction to completing commands or inputs. There was also a suggestion that users be able to customize their smart watch; for example, users could select the functions, build their own command system, and even choose the notification methods. Considering natural user interfaces with respect to functions (weather, answering a call, navigation, health monitoring, taking a picture, and messaging), specific natural user interfaces were mentioned for particular functions. Conclusions - Throughout the study, people's needs and wishes and their perceptions of natural interaction were identified, and the characteristics of natural user interfaces were determined. Based on the results, ten perceptions were defined to provide a better understanding of smart watches in terms of natural interaction: user affinity of form, awareness by familiarity, reality correspondence, behavioral extension, purpose orientation, easiness of performance, timeliness, routine acceptance, generality, and rule of thumb. In addition, natural user interfaces were categorized into five groups: user familiarity, realistic interaction, accomplishment assistance, contextual appropriateness, and social awareness. In this study, we tried to identify what constitutes a natural interaction and how it should be created. The limitations and further study are discussed at the end.

    Incorporating Speech Recognition into a Natural User Interface

    The Augmented/Virtual Reality (AVR) Lab has been working to study the applicability of recent virtual and augmented reality hardware and software to KSC operations, including the Oculus Rift, HTC Vive, Microsoft HoloLens, and the Unity game engine. My project in this lab is to integrate voice recognition and voice commands into an easy-to-modify system that can be added to an existing portion of a Natural User Interface (NUI). A NUI is an intuitive, simple-to-use interface incorporating visual, touch, and speech recognition. The inclusion of speech recognition will allow users to perform actions or make inquiries using only their voice. The simplicity of needing only to speak to control an on-screen object or enact some digital action means that any user can quickly become accustomed to the system. Multiple programs were tested for use in a speech command and recognition system. Sphinx4 translates speech to text using a Hidden Markov Model (HMM) based language model, an acoustic model, and a word dictionary, and runs on Java. PocketSphinx has similar functionality to Sphinx4 but runs on C. However, neither of these programs was ideal, as building a Java or C wrapper slowed performance. The most promising speech recognition system tested was the Unity Engine Grammar Recognizer. A Context-Free Grammar (CFG) structure is written in an XML file to specify the structure of the phrases and words that the Unity Grammar Recognizer will recognize. Using Speech Recognition Grammar Specification (SRGS) 1.0 makes modifying the recognized combinations of words and phrases simple and quick. With SRGS 1.0, semantic information can also be added to the XML file, which allows even more control over how spoken words and phrases are interpreted by Unity. Additionally, using a CFG with SRGS 1.0 yields Finite State Machine (FSM) behavior, limiting the potential for incorrectly recognized words or phrases. The purpose of my project was to investigate options for a speech recognition system. To that end, I attempted to integrate Sphinx4 into a user interface. Sphinx4 had great accuracy and is the only free program tested able to perform offline speech dictation. However, it had a limited dictionary of recognizable words, single-syllable words were almost impossible for it to recognize, and since it ran on Java it could not be integrated into the Unity-based NUI. PocketSphinx ran much faster than Sphinx4, which would have made it ideal as a plugin for the Unity NUI; unfortunately, creating a C# wrapper for the C code made the program unusable with Unity, because the wrapper slowed code execution and class files became unreachable. The Unity Grammar Recognizer proved the ideal speech recognition interface: it is flexible in recognizing multiple variations of the same command, and it was the most accurate at recognizing speech because it uses an XML grammar to specify speech structure instead of relying solely on a dictionary and language model. The Unity Grammar Recognizer will be used with the NUI for these reasons, and because it is written in C#, which further simplifies the integration.
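
    The abstract describes the SRGS 1.0 XML grammar approach without showing a grammar; the sketch below embeds a made-up SRGS-style grammar in a Python string and adds a toy matcher that, like an FSM, accepts only phrases the grammar licenses (the command vocabulary is hypothetical, not from the KSC project):

```python
# Minimal sketch of an SRGS-1.0-style grammar and a toy phrase matcher.
# The commands below are hypothetical examples, not the project's grammar.
import xml.etree.ElementTree as ET

GRAMMAR = """
<grammar xmlns="http://www.w3.org/2001/06/grammar" root="command">
  <rule id="command">
    <one-of>
      <item>open panel</item>
      <item>close panel</item>
      <item>rotate model</item>
    </one-of>
  </rule>
</grammar>
"""

NS = "{http://www.w3.org/2001/06/grammar}"
root = ET.fromstring(GRAMMAR)
# Collect every literal phrase the grammar can produce.
phrases = {item.text.strip() for item in root.iter(NS + "item")}

def recognize(utterance):
    """Accept only utterances licensed by the grammar (FSM-like behavior)."""
    return utterance.lower().strip() in phrases

print(recognize("open panel"))  # True
print(recognize("open hatch"))  # False: not in the grammar
```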

    Natural User Interface for Roombots

    Roombots (RB) are self-reconfigurable modular robots designed to study robotic reconfiguration on a structured grid and adaptive locomotion off grid. One of the main goals of this platform is to create adaptive furniture inside living spaces such as homes or offices. To ease the control of RB modules in these environments, we propose a novel and more natural way of interacting with the RB modules on an RB grid, called the Natural Roombots User Interface. In our method, the user commands the RB modules using pointing gestures. The user's body is tracked using multiple Kinects. The user is also given real-time visual feedback on their physical actions and the state of the system via LED illumination electronics installed on both the RB modules and the grid. We demonstrate how our interface can be used to efficiently control RB modules in simple point-to-point grid locomotion and conclude by discussing future extensions.
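
    A common way to resolve a pointing gesture, and one plausible reading of the step above, is to cast a ray from the shoulder through the hand and intersect it with the floor grid; the Python sketch below does exactly that (the joint pair, grid pitch, and coordinates are assumptions, not the paper's actual method):

```python
# Illustrative sketch of mapping a pointing gesture to a floor-grid cell:
# cast a ray from the shoulder through the hand and intersect it with the
# floor plane z = 0. Joint coordinates would come from a tracker such as a
# Kinect; all values here are made up.
import numpy as np

GRID_CELL = 0.11  # hypothetical grid pitch in meters (not from the paper)

def pointed_cell(shoulder, hand):
    """Return the (i, j) floor-grid cell the user points at, or None."""
    s, h = np.asarray(shoulder, float), np.asarray(hand, float)
    d = h - s                  # pointing direction
    if d[2] >= 0:              # ray must descend toward the floor
        return None
    t = -s[2] / d[2]           # ray parameter where it hits z = 0
    x, y, _ = s + t * d
    return int(x // GRID_CELL), int(y // GRID_CELL)

# Example: shoulder at 1.4 m height, hand lower and ahead of it.
print(pointed_cell((0.0, 0.0, 1.4), (0.3, 0.2, 1.1)))  # -> (12, 8)
```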

    Natural User Interface Based American Sign Language Tutoring Program

    The COVID-19 pandemic has exposed a substantial shortcoming in the modern American educational system: there is a pressing need for our educators to be trained in the practices required to provide an educational experience for their students that is as effective as in-person instruction. Systems of online instruction already exist for various academic subjects, such as math and the sciences. In linguistic studies, educational programs have been developed to evaluate student proficiency in both the written and spoken forms of the language being studied. However, few programs can effectively provide a similar experience for students studying any variation of sign language. This report details the design of a proposed interactive online system for learning American Sign Language through hand-tracking peripherals, such as Ultraleap's Leap Motion controller, and a course structure designed to teach the most commonly used words in American Sign Language through time-spaced learning practices. The report covers the course structure, the technical specifications of the system, and the methods through which academic institutions can effectively implement it.
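
    The time-spaced learning practices mentioned above are not specified further; the Python sketch below shows one conventional interval scheme (double on success, reset on failure) applied to sign cards, with every policy detail assumed rather than taken from the report:

```python
# Illustrative spaced-repetition sketch: schedule each ASL sign for review at
# growing intervals, doubling on success and resetting on failure. The interval
# policy is hypothetical; the report's actual schedule is not given.
from datetime import date, timedelta

class SignCard:
    def __init__(self, gloss):
        self.gloss = gloss        # e.g., the ASL sign glossed "thank you"
        self.interval = 1         # days until the next review
        self.due = date.today()

    def review(self, recognized_correctly):
        """Update the schedule after a hand-tracking check of the sign."""
        self.interval = self.interval * 2 if recognized_correctly else 1
        self.due = date.today() + timedelta(days=self.interval)

card = SignCard("thank you")
card.review(True)    # sign performed correctly -> next review in 2 days
card.review(True)    # -> 4 days
card.review(False)   # mistake -> back to a 1-day interval
print(card.gloss, card.due)
```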

    Implementation of a Natural User Interface to Command a Drone

    In this work, we propose the use of a Natural User Interface (NUI) through body gestures using the open source library OpenPose, looking for a more dynamic and intuitive way to control a drone. For the implementation, we use the Robot Operating System (ROS) to control and manage the different components of the project. Wrapped inside ROS, OpenPose (OP) processes the video obtained in real time by a commercial drone, allowing the user's pose to be obtained. Finally, the keypoints from OpenPose are translated, using geometric constraints, into high-level commands for the drone. Real-time experiments validate the full strategy.
    Comment: 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 2020
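
    The abstract does not spell out the geometric constraints; a minimal Python sketch of the general idea follows, comparing wrist and shoulder keypoints in image coordinates to pick a command (keypoint names, thresholds, and commands are all illustrative, not the paper's actual mapping):

```python
# Illustrative sketch: turn 2D pose keypoints (as OpenPose would provide) into
# high-level drone commands via simple geometric tests. The body-part names
# and the command mapping below are hypothetical.

def pose_to_command(kp):
    """kp: dict of body part -> (x, y) in image coordinates (y grows downward)."""
    r_up = kp["right_wrist"][1] < kp["right_shoulder"][1]  # wrist above shoulder
    l_up = kp["left_wrist"][1] < kp["left_shoulder"][1]

    if r_up and l_up:
        return "LAND"
    if r_up:
        return "ASCEND"
    if l_up:
        return "DESCEND"
    return "HOVER"

# Example: one frame with the right arm raised.
frame = {"right_wrist": (420, 180), "left_wrist": (220, 400),
         "right_shoulder": (400, 260), "left_shoulder": (240, 260)}
print(pose_to_command(frame))  # -> ASCEND
```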