3,789 research outputs found

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Get PDF
    Over the past several decades, technological advancements have introduced new modes of communication with computers, shifting away from traditional mouse and keyboard interfaces. While touch-based interactions are widely used today, recent developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communication with computers through spatial input in physical 3D space. These techniques are being integrated into design-critical tasks such as sketching and modeling through sophisticated methodologies and specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users. Drawing curves in mid-air with the fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication, and sketching in general is a crucial mode of idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures in the presence of depth- and motion-sensing cameras, any of which the user may employ to express the intention to start or stop sketching. However, apart from suffering from a lack of robustness, such gestures, postures, and instrumented controllers impose an additional cognitive load on the user during design tasks.

    To address the problems associated with these mid-air curve input modalities, the presented research discusses the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks. The work is motivated by a behavioral study demonstrating the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach that determines when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). The idea is to record the user's hand trajectory and classify each recorded point as either hover or stroke; the resulting model thus labels every point on the user's spatial trajectory.

    Drawing inspiration from the way users sketch in mid-air, this research first establishes the need for an alternative approach that processes bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow applied to every recorded drawing point, using three different approaches. The first records mid-air drawing data and develops a classification model based on geometric and temporal features extracted from the recordings, with the goal of identifying drawing intent from these critical features. The second explores how prediction quality varies when the dimensionality of the mid-air curve input is increased. The third seeks to understand drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders. Finally, the broader implications of this research are discussed, along with potential areas for further design and research on mid-air interactions.
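    As a rough illustration of the point-wise hover/stroke classification described above, the sketch below computes simple geometric and temporal features (windowed speed and path straightness) for each recorded 3D point and feeds them to an off-the-shelf classifier. The window size, the feature set, and the choice of a random forest are assumptions made for demonstration; the abstract does not specify the exact features or model used in the thesis.

        # Illustrative sketch only: feature set, window size, and classifier are assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def trajectory_features(points, timestamps, window=5):
            """Simple geometric/temporal features for each recorded 3D point.

            points:     (N, 3) array of mid-air fingertip positions
            timestamps: (N,) array of capture times in seconds
            """
            points = np.asarray(points, dtype=float)
            timestamps = np.asarray(timestamps, dtype=float)
            feats = []
            for i in range(len(points)):
                lo, hi = max(0, i - window), min(len(points), i + window + 1)
                seg, t = points[lo:hi], timestamps[lo:hi]
                if len(seg) < 2:  # not enough neighbors for motion features
                    feats.append([0.0, 0.0, 1.0, 0.0])
                    continue
                step = np.linalg.norm(np.diff(seg, axis=0), axis=1)  # per-sample displacement
                speed = step / (np.diff(t) + 1e-9)                   # instantaneous speed
                path_len = step.sum()
                chord = np.linalg.norm(seg[-1] - seg[0])
                straightness = chord / (path_len + 1e-9)             # ~1.0 when the windowed path is straight
                feats.append([speed.mean(), speed.std(), straightness, path_len])
            return np.asarray(feats)

        # Hypothetical usage with recordings labeled 0 = hover, 1 = stroke:
        # clf = RandomForestClassifier(n_estimators=100).fit(trajectory_features(pts, ts), labels)
        # point_labels = clf.predict(trajectory_features(new_pts, new_ts))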

    The Advocate

    Get PDF
    Headlines Include: Vicinanzo, Toner, Take Wormser; One Night At a Stein; Special Examination Intro

    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    No full text
    Drawing and handwriting are communicational skills that have been fundamental to geopolitical, ideological, and technological evolutions throughout history. Drawing and handwriting are still useful in defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as those related to how drawing and handwriting can become an efficient way to command various connected objects, or how to validate graphomotor skills as evident and objective sources of data useful in the study of human beings, their capabilities, and their limits from birth to decline.

    LEARNING HOW STUDENTS ARE LEARNING IN PROGRAMMING LAB SESSIONS

    Get PDF
    Programming lab sessions help students learn to program in a practical way. Although these sessions are typically valuable, it is not uncommon for some participants to fall behind and leave without fully grasping the concepts covered. In this thesis, I present LabEX, a system that helps instructors understand students' progress and learning experience during programming lab sessions. LabEX uses statistical techniques to distinguish struggling students and gauge their degree of struggle, and it helps instructors provide in-situ feedback through real-time code review. LabEX was evaluated in an entry-level programming course taken by more than two hundred students at UNIST, establishing that it increases the quality of programming lab sessions.
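    The abstract does not describe the statistical techniques LabEX relies on, so the snippet below is purely an illustrative sketch of one simple way a lab-monitoring tool could flag students who fall behind: compare each student's completed-task count against the class median. The function name, data shape, and threshold are hypothetical and not taken from the thesis.

        # Hypothetical illustration only; LabEX's actual statistics are not described in the abstract.
        from statistics import median

        def flag_struggling(progress, threshold=2):
            """progress: dict mapping student id -> lab tasks completed so far.
            Returns ids trailing the class median by `threshold` or more tasks."""
            med = median(progress.values())
            return [sid for sid, done in progress.items() if med - done >= threshold]

        # Class median here is 5 completed tasks, so s04 and s05 are flagged.
        print(flag_struggling({"s01": 6, "s02": 5, "s03": 5, "s04": 2, "s05": 1}))  # ['s04', 's05']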

    The Effects of Automation Transparency and Reliability on Task Shedding and Operator Trust

    Get PDF
    Because automation use is common in many domains, understanding how to design it to optimize human-automation system performance is vital. Well-calibrated trust ensures good performance when using imperfect automation. Two factors that may jointly affect trust calibration are automation transparency and perceived reliability. Transparency information that explains automated processes and analyses to the operator may help the operator choose appropriate times to shed task control to automation. Because operator trust is positively correlated with automation use, behaviors such as task shedding to automation can indicate the presence of trust. This study used a 2 (reliability; between) × 3 (transparency; within) split-plot design to study the effects that reliability and the amount of transparency information have on operators’ subjective trust and task-shedding behaviors. Results showed a significant effect of reliability on trust, in which high reliability resulted in more trust. There was no effect of transparency on trust, and no effect of either reliability or transparency on task-shedding frequency or time to shed tasks. This may be due to the high workload of the primary task, which restricted participants’ ability to utilize transparency information beyond the automation recommendation. Participants’ hesitance to shed tasks may also have shaped behavior regardless of automation reliability. These findings contribute to the understanding of automation trust and operator task-shedding behavior. Consistent with the literature, reliability increased trust; however, the absence of a transparency effect demonstrates the complexity of the relationship between transparency and trust. Participants demonstrated a bias toward retaining personal control, even with highly reliable automation and at the cost of time-out errors. Future research should examine the relationship between workload and transparency and the influence of task importance on task shedding.

    Just Produce: A Meditation on Time & Materials, Past & Present

    Get PDF

    Mobile Ka-Band Polarimetric Doppler Radar Observations Of Wildfire Smoke Plumes

    Get PDF
    Remote sensing techniques have recently been used to study and track wildfire smoke plume structure and evolution; however, knowledge gaps remain due to the limited availability of observational datasets aimed at understanding fine-scale fire-atmosphere interactions and plume microphysics. In this study, we present a new mobile millimeter-wave (Ka-band) Doppler radar system acquired to sample the fine-scale kinematics and microphysical properties of active smoke plumes from both wildfires and large prescribed fires. Four field deployments were conducted in the fall of 2019 during two wildfires in California and one prescribed burn in Utah. An additional dataset of precipitation observations was obtained prior to the wildfire deployments to compare the Ka-band signatures of precipitation and wildfire smoke plumes. Radar parameters investigated in this study include reflectivity, radial velocity, Doppler spectrum width, differential reflectivity (ZDR), and the copolarized correlation coefficient (ρHV). Observed radar reflectivity in the plume ranged between -15 and 20 dBZ, and radial velocity ranged from 0 to 16 m s⁻¹. Dual-polarimetric observations revealed that scattering sources within wildfire plumes are primarily nonspherical, oblate targets, as indicated by ZDR values above 0 and ρHV values below 0.8 within the plume. Doppler spectrum width maxima were located near the updraft core and were associated with radar reflectivity maxima.
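    As a minimal illustration of how the dual-polarimetric signatures reported above could be applied, the sketch below builds a per-gate plume mask from the quoted ranges (reflectivity between -15 and 20 dBZ, ZDR above 0, ρHV below 0.8). The thresholds come from the abstract, but the function itself and any additional quality control are assumptions rather than the study's actual processing chain.

        # Sketch under assumed thresholds taken from the reported plume signatures.
        import numpy as np

        def flag_plume_gates(reflectivity, zdr, rho_hv, min_dbz=-15.0, max_dbz=20.0):
            """Return a boolean mask of radar gates consistent with smoke-plume echo.

            reflectivity: horizontal reflectivity (dBZ)
            zdr:          differential reflectivity (dB)
            rho_hv:       copolarized correlation coefficient
            """
            in_plume_dbz = (reflectivity >= min_dbz) & (reflectivity <= max_dbz)
            nonspherical = (zdr > 0.0) & (rho_hv < 0.8)  # oblate, non-meteorological targets
            return in_plume_dbz & nonspherical

        # Example with synthetic gate values:
        dbz = np.array([5.0, 30.0, -10.0])
        zdr = np.array([0.5, 2.0, 0.1])
        rho = np.array([0.6, 0.95, 0.7])
        print(flag_plume_gates(dbz, zdr, rho))  # [ True False  True]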

    The Boston University Photonics Center annual report 2016-2017

    Full text link
    This repository item contains an annual report that summarizes activities of the Boston University Photonics Center in the 2016-2017 academic year. The report provides quantitative and descriptive information regarding photonics programs in education, interdisciplinary research, business innovation, and technology development. The Boston University Photonics Center (BUPC) is an interdisciplinary hub for education, research, scholarship, innovation, and technology development associated with practical uses of light.

    This has undoubtedly been the Photonics Center’s best year since I became Director 10 years ago. In the following pages, you will see highlights of the Center’s activities in the past year, including more than 100 notable scholarly publications in the leading journals in our field, and the attraction of more than 22 million dollars in new research grants/contracts. Last year I had the honor to lead an international search for the first recipient of the Moustakas Endowed Professorship in Optics and Photonics, in collaboration with ECE Department Chair Clem Karl. This professorship honors the Center’s most impactful scholar and one of the Center’s founding visionaries, Professor Theodore Moustakas. We are delighted to have awarded this professorship to Professor Ji-Xin Cheng, who joined our faculty this year.

    The past year also marked the launch of Boston University’s Neurophotonics Center, which will be allied closely with the Photonics Center. Leading that Center will be a distinguished new faculty member, Professor David Boas. David and I are together leading a new Neurophotonics NSF Research Traineeship Program that will provide $3M to promote graduate traineeships in this emerging new field. We had a busy summer hosting NSF Sites for Research Experiences for Undergraduates, Research Experiences for Teachers, and the BU Student Satellite Program. As a community, we emphasized the theme of “Optics of Cancer Imaging” at our annual symposium, hosted by Darren Roblyer. We entered a five-year second phase of NSF funding in our Industry/University Collaborative Research Center on Biophotonic Sensors and Systems, which has become the centerpiece of our translational biophotonics program. That I/UCRC continues to focus on advancing the health care and medical device industries.
