
    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly for those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and, together with the detected touch location, predicts the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax correction performance as well as multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by producing significantly larger corrections at larger gap distances, and (3) maintains a sufficiently large distance between two users' fingers when they interact with the same object. These findings are promising for the development of future parallax-free interactive displays.
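    The geometric idea behind this kind of correction can be sketched as a ray-plane intersection: cast a ray from the user's eye through the detected touch on the outer glass, and find where it meets the display plane behind the glass. The function name, coordinate frame, and numbers below are illustrative assumptions, not the paper's actual implementation.

    ```python
    def correct_parallax(eye, touch, display_z):
        """Estimate the perceived touch location on the display plane.

        Intersects the ray from the eye through the detected touch point
        with the display plane at depth ``display_z``. All points are
        (x, y, z) tuples in a shared coordinate frame where z is the
        distance in front of the display; the touch surface sits at a
        larger z than the display because of the protective glass gap.
        """
        ex, ey, ez = eye
        tx, ty, tz = touch
        # Ray parameter t where eye + t * (touch - eye) reaches the display plane.
        t = (display_z - ez) / (tz - ez)
        return (ex + t * (tx - ex), ey + t * (ty - ey))

    # Example (assumed geometry): eye 600 mm in front of the glass, touch
    # detected on the glass at z = 0, display recessed 20 mm behind it.
    perceived = correct_parallax((100.0, 300.0, 600.0), (0.0, 0.0, 0.0), -20.0)
    ```

    Because the display lies behind the glass, the estimated perceived point is displaced slightly beyond the detected touch, away from the eye; the larger the gap, the larger the correction, matching the paper's finding (2).
    
    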

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine particular image features with quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.

    The Internet of Things Will Thrive by 2025

    This report is the latest in a sustained research effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. It analyzes opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the more than 1,600 responses offered specifically to the question of where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative predictions survey respondents made when specifically asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.