435,738 research outputs found

    Deep integration of machine learning into column stores

    We leverage vectorized User-Defined Functions (UDFs) to efficiently integrate unchanged machine learning pipelines into an analytical data management system. Entire pipelines, including data, models, parameters, and evaluation outcomes, are stored and executed inside the database system. Experiments using our MonetDB/Python UDFs show greatly improved performance due to reduced data movement and parallel processing opportunities. In addition, this integration enables meta-analysis of models using relational queries.
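    As a rough illustration of the vectorized-UDF idea (not the paper's actual code), the body of such a function can operate on whole columns at once; in MonetDB/Python each input column arrives as a NumPy array. The function names, feature columns, and use of scikit-learn below are assumptions made for this sketch.

    import pickle
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_model(x1, x2, label):
        # A vectorized UDF body: whole columns arrive as NumPy arrays, so the
        # pipeline runs inside the database without exporting rows.
        model = LogisticRegression().fit(np.column_stack((x1, x2)), label)
        return pickle.dumps(model)  # returned as a BLOB, so the model lives in a table

    def apply_model(model_blob, x1, x2):
        # Score new rows with a model previously stored in the database.
        model = pickle.loads(model_blob[0])
        return model.predict(np.column_stack((x1, x2)))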

    Can eye movements be quantitatively applied to image quality studies?

    The aim of this study is to determine whether subjective image quality evaluations can be quantified by eye movement tracking. We want to map objective, physically measurable image quality to subjective evaluations and eye movement data. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality. These results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. Results also show that eye movements can help in mapping between technological and subjective image quality. We also propose to extend the widely used image quality process model, the Image Quality Circle, by adding objective measurements of the viewer (e.g. eye tracking) alongside customer perceptions as an additional way to gather information about customers' perceptions of image quality.

    Gaze Assisted Prediction of Task Difficulty Level and User Activities in an Intelligent Tutoring System (ITS)

    Efforts toward modernizing education are emphasizing the adoption of Intelligent Tutoring Systems (ITS) to complement conventional teaching methodologies. Intelligent tutoring systems empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and assess students’ progress. Despite these advantages, existing intelligent tutoring systems do not automatically assess how students engage in problem solving, how they perceive various activities while solving a problem, or how much time they spend on each discrete activity leading to the solution. In this research, we present an eye tracking framework that can assess how eye movements manifest students’ perceived activities and overall engagement in a sketch-based intelligent tutoring system, “Mechanix.” Mechanix guides students in solving truss problems by supporting user-initiated feedback. Through an evaluation involving 21 participants, we show the potential of leveraging eye movement data to recognize students’ perceived activities (reading, gazing at an image, and problem solving) with an accuracy of 97.12%. We are also able to leverage the user gaze data to classify problems being solved by students as easy, medium, or hard with an accuracy of more than 80%. In this process, we also identify the key features of eye movement data, and discuss how and why these features vary across different activities.
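    Purely as a hedged sketch of the kind of activity recognition described above (not Mechanix's actual pipeline), gaze data can be summarized into per-window features and fed to an off-the-shelf classifier; the feature set, the placeholder data, and the choice of a random forest are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def gaze_features(fixation_durations, saccade_amplitudes):
        # Summarize one time window of eye movement data into a feature vector.
        return [np.mean(fixation_durations), np.std(fixation_durations),
                np.mean(saccade_amplitudes), len(saccade_amplitudes)]

    # Placeholder data standing in for real per-window gaze recordings.
    rng = np.random.default_rng(0)
    X = np.array([gaze_features(rng.random(20), rng.random(19)) for _ in range(90)])
    y = np.repeat(["reading", "image", "solving"], 30)   # activity label per window

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())       # cross-validated accuracy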

    Sonification of guidance data during road crossing for people with visual impairments or blindness

    In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution instead addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages. Experimental evaluation shows that no single guiding mode is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that more effort is needed to decode the sonified instructions than the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than two thirds of test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: first, with speech messages it is harder to hear the sounds of the environment, and second, sonified messages convey information about the "quantity" of the expected movement.
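    The abstract does not specify the two sonification modes; purely as a hypothetical illustration of conveying the "quantity" of the expected movement through sound, a parameter-mapping design might turn the remaining misalignment into pitch and beep rate, as below (the mapping and all constants are invented, not the paper's design).

    def sonify(misalignment_deg, max_deg=90.0):
        # Hypothetical mapping: larger misalignment with the crosswalk gives a
        # higher pitch and faster beeps; aligned walking gives a low, slow tone.
        level = min(abs(misalignment_deg), max_deg) / max_deg   # 0 = aligned, 1 = worst
        frequency_hz = 440.0 + 440.0 * level
        beep_interval_s = 1.0 - 0.8 * level
        return frequency_hz, beep_interval_s

    print(sonify(5.0))    # nearly aligned: low pitch, slow beeps
    print(sonify(60.0))   # large error: high pitch, fast beeps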

    Animating Virtual Human for Virtual Batik Modeling

    This paper describes the development of an animated virtual human for a virtual batik modeling project. The objectives of the project are to animate the virtual human, to map the cloth onto the virtual human body, to present the batik cloth, and to evaluate the application in terms of realism of the virtual human's look, realism of the virtual human's movement, realism of the 3D scene, application suitability, application usability, fashion suitability, and user acceptance. The final goal is an animated virtual human for virtual batik modeling. There are three essential phases: research and analysis (data collection on modeling and animation techniques), development (modeling and animating the virtual human, mapping the cloth to the body, and adding music), and evaluation (of realism of the virtual human's look, realism of the virtual human's movement, realism of props, application suitability, application usability, fashion suitability, and user acceptance). Application usability received the highest score, at 90%, indicating that the application is useful to its users. In conclusion, the project has met its objectives, with realism achieved by using suitable modeling and animation techniques.

    DRACULA Microscopic Traffic Simulator

    The DRACULA traffic simulator is a microscopic model in which vehicles are individually represented. The movement of vehicles in the network is represented continuously and updated every second. The network is modelled as a set of nodes and links, which represent junctions and streets respectively. Vehicles are generated at their origins with a random headway distribution and are assigned a set of driver/vehicle characteristics (according to user-specified probabilities) and a fixed route. The movement of vehicles on the network is governed by a car-following law, gap acceptance rules, and the traffic regulations at intersections. Vehicles can join a queue, change lane, discharge to another link, or exit from the system. Traffic regulation at an intersection is actuated by traffic lights or right-of-way rules. The inputs to the simulation are network data, a trip matrix, fixed-time signal plans, and gap-acceptance and car-following parameters. Outputs take the form of animated graphics and statistical measures of network performance. The program is written in C. All vehicle attributes are represented as one entity using the structure data type, which provides flexibility in storing and modifying various types of data. Attributes of nodes, links, and lanes are also represented as structures. The large number of variables associated with vehicles and the network means that the performance of the simulation depends on the size of the network and the total number of vehicles within the network at any one time. The simulator can be applied in many areas of urban traffic control and management, such as detailed evaluation of traffic signal control strategies, environmental issues such as air pollution due to emissions from vehicles that are idling, accelerating, decelerating, or cruising, and analyses of the effects of variable demand and supply on the performance of a network.
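    The per-second update described above can be sketched as follows. The real simulator is written in C with struct-based vehicle entities; this Python sketch uses an invented linear car-following rule and illustrative constants, so it shows only the shape of the update loop, not DRACULA's actual model.

    from dataclasses import dataclass

    @dataclass
    class Vehicle:            # one simulated entity, mirroring the struct-based design
        position: float       # metres from the upstream node of the link
        speed: float          # m/s
        desired_speed: float  # m/s, drawn from the driver/vehicle characteristics

    def update_link(vehicles, dt=1.0, min_gap=2.0, sensitivity=0.5):
        # Advance every vehicle on one link by one time step, leader first.
        ordered = sorted(vehicles, key=lambda v: -v.position)
        for leader, follower in zip([None] + ordered, ordered):
            if leader is None:
                target = follower.desired_speed               # unobstructed leader
            else:
                gap = leader.position - follower.position - min_gap
                target = min(follower.desired_speed, max(gap, 0.0) / dt)
            follower.speed += sensitivity * (target - follower.speed)  # car-following
            follower.position += follower.speed * dt

    link = [Vehicle(50.0, 10.0, 14.0), Vehicle(30.0, 12.0, 15.0)]
    update_link(link)          # one simulated second
    print(link)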

    Automated labeling and online evaluation for self-paced movement detection BCI

    Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) allow users to use brain signals to control external instruments, and movement intention detecting BCIs can aid in the rehabilitation of patients who have lost motor function. Existing studies in this area mostly rely on cue-based data collection that facilitates sample labeling but introduces noise from cue stimuli; moreover, it requires extensive user training, and cannot reflect real usage scenarios. In contrast, self-paced BCIs can overcome the limitations of the cue-based approach by supporting users to perform movements at their own initiative and pace, but they fall short in labeling. Therefore, in this study, we proposed an automated labeling approach that can cross-reference electromyography (EMG) signals for EEG labeling with zero human effort. Furthermore, considering that only a few studies have focused on evaluating BCI systems for online use and most of them do not report details of the online systems, we developed and present in detail a pseudo-online evaluation suite to facilitate online BCI research. We collected self-paced movement EEG data from 10 participants performing opening and closing hand movements for training and evaluation. The results show that the automated labeling method can contend well with noisy data compared with the baseline labeling method. We also explored popular machine learning models for online self-paced movement detection. The results demonstrate the capability of our online pipeline, and that a well-performing offline model does not necessarily translate to a well-performing online model owing to the specific settings of an online BCI system. Our proposed automated labeling method, online evaluation suite, and dataset take a concrete step towards real-world self-paced BCI systems.
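    A minimal sketch of the automated-labeling idea, assuming a simple envelope-threshold rule: the EMG channel indicates when a movement actually occurred, and those onsets are used to label EEG windows without manual annotation. The constants, window lengths, synthetic data, and function names are illustrative, not the paper's exact method.

    import numpy as np

    def emg_onsets(emg, fs, threshold_factor=3.0, smooth_s=0.1):
        # Rectify and smooth the EMG, then detect rising edges of the envelope
        # above a robust baseline threshold; each edge marks a movement onset.
        kernel = np.ones(int(fs * smooth_s)) / (fs * smooth_s)
        envelope = np.convolve(np.abs(emg), kernel, mode="same")
        above = envelope > threshold_factor * np.median(envelope)
        return np.flatnonzero(above[1:] & ~above[:-1]) + 1

    def label_eeg_windows(eeg, onsets, fs, win_s=1.0):
        # Cut a fixed-length EEG window starting at each EMG-detected onset.
        win = int(fs * win_s)
        return [eeg[:, s:s + win] for s in onsets if s + win <= eeg.shape[1]]

    fs = 250                                   # assumed sampling rate (Hz)
    emg = 0.1 * np.random.randn(10 * fs)
    emg[3 * fs:4 * fs] += np.random.randn(fs)  # synthetic burst standing in for a hand movement
    eeg = np.random.randn(8, 10 * fs)          # synthetic 8-channel EEG
    epochs = label_eeg_windows(eeg, emg_onsets(emg, fs), fs)
    print(len(epochs), "movement-labeled EEG windows")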
    • …