    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
    Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction

    Apparatus to control and visualize the impact of a high-energy laser pulse on a liquid target

    We present an experimental apparatus to control and visualize the response of a liquid target to laser-induced vaporization. We use a millimeter-sized drop as the target and present two liquid-dye solutions that allow a variation of the absorption coefficient of the laser light in the drop by seven orders of magnitude. The excitation source is a Q-switched Nd:YAG laser at its frequency-doubled wavelength emitting nanosecond pulses with energy densities above the local vaporization threshold. The absorption of the laser energy leads to large-scale liquid motion at timescales that are separated by several orders of magnitude, which we spatiotemporally resolve by a combination of ultra-high-speed and stroboscopic high-resolution imaging in two orthogonal views. Surprisingly, the large-scale liquid motion upon laser impact is completely controlled by the spatial energy distribution obtained by a precise beam-shaping technique. The apparatus demonstrates the potential for accurate and quantitative studies of laser-matter interactions.
    Comment: Submitted to Review of Scientific Instruments

    An Empirical Evaluation of Deep Learning on Highway Driving

    Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we present a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.
    Comment: Added a video for lane detection

    Supporting ethnographic studies of ubiquitous computing in the wild

    Ethnography has become a staple feature of IT research over the last twenty years, shaping our understanding of the social character of computing systems and informing their design in a wide variety of settings. The emergence of ubiquitous computing raises new challenges for ethnography, however, distributing interaction across a burgeoning array of small, mobile devices and online environments that exploit invisible sensing systems. Understanding interaction requires ethnographers to reconcile interactions that are, for example, distributed across devices on the street with online interactions in order to assemble coherent understandings of the social character and purchase of ubiquitous computing systems. We draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings to do this, and we identify key challenges that need to be met to support ethnographic study of ubiquitous computing in the wild.