Agile thinking in motion graphics practice and its potential for design education
Motion Graphics is a relatively new subject and its methodologies are still being developed. There are useful lessons to be learnt from practice in early cinema, from the 1890s to the 1930s, where Agile thinking was used by a number of practitioners including Fritz Lang. Recent studies in MA Motion Graphics have accessed some of this thinking, incorporating it in a series of Motion Graphic tests and experiments culminating in a two-minute animation, "1896 Olympic Marathon". This paper demonstrates how the project and its design methodology can contribute new knowledge to the practice and teaching of this relatively new and expanding area of Motion Graphic Design. This would be invaluable not only to the international community of Motion Graphic practitioners, educators and researchers in their development of this maturing field, but also to the broader multidisciplinary fields within Design Education. These methodologies have been arrived at by drawing on creative and reflective practice as defined by Carole Gray and Julian Malins in Visualizing Research (2004) and reflective practice as defined by Donald Schön (1983). Central to the investigation has been the approach of Agile thinking from the methodology of "bricolage" in Lévi-Strauss's The Savage Mind (1966).
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives like object detection,
activity recognition, user-machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, the most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Apparatus to control and visualize the impact of a high-energy laser pulse on a liquid target
We present an experimental apparatus to control and visualize the response of
a liquid target to laser-induced vaporization. We use a millimeter-sized drop
as target and present two liquid-dye solutions that allow a variation of the
absorption coefficient of the laser light in the drop by seven orders of
magnitude. The excitation source is a Q-switched Nd:YAG laser at its
frequency-doubled wavelength emitting nanosecond pulses with energy densities
above the local vaporization threshold. The absorption of the laser energy
leads to a large-scale liquid motion at timescales that are separated by
several orders of magnitude, which we spatiotemporally resolve by a combination
of ultra-high-speed and stroboscopic high-resolution imaging in two orthogonal
views. Surprisingly, the large-scale liquid motion upon laser impact is
completely controlled by the spatial energy distribution obtained by a precise
beam-shaping technique. The apparatus demonstrates the potential for accurate
and quantitative studies of laser-matter interactions.
Comment: Submitted to Review of Scientific Instruments
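The role of the seven-orders-of-magnitude range in absorption coefficient can be made concrete with the Beer-Lambert law, under which the 1/e penetration depth of the light is simply the inverse of the absorption coefficient. A minimal sketch follows; the two coefficient values are illustrative placeholders, not values taken from the paper:

```python
# Beer-Lambert law: transmitted intensity I(z) = I0 * exp(-alpha * z),
# so the 1/e penetration depth is 1 / alpha.
def penetration_depth_mm(alpha_per_m):
    """1/e penetration depth in millimetres for absorption coefficient alpha (1/m)."""
    return 1e3 / alpha_per_m

# Placeholder coefficients seven orders of magnitude apart (not the paper's dyes).
weak, strong = 1.0, 1.0e7  # in 1/m
print(penetration_depth_mm(weak))    # weakly absorbing dye: light traverses the whole mm-sized drop
print(penetration_depth_mm(strong))  # strongly absorbing dye: energy deposited in a sub-micron surface layer
```

The sketch illustrates why varying the dye concentration moves the energy deposition from volumetric heating of the whole drop to localized heating of a thin surface layer.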
An Empirical Evaluation of Deep Learning on Highway Driving
Numerous groups have applied a variety of deep learning techniques to
computer vision problems in highway perception scenarios. In this paper, we
present a number of empirical evaluations of recent deep learning advances.
Computer vision, combined with deep learning, has the potential to bring about
a relatively inexpensive, robust solution to autonomous driving. To prepare
deep learning for industry uptake and practical applications, neural networks
will require large data sets that represent all possible driving environments
and scenarios. We collect a large data set of highway data and apply deep
learning and computer vision algorithms to problems such as car and lane
detection. We show how existing convolutional neural networks (CNNs) can be
used to perform lane and vehicle detection while running at frame rates
required for a real-time system. Our results lend credence to the hypothesis
that deep learning holds promise for autonomous driving.
Comment: Added a video for lane detection
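The basic mechanism behind the convolutional detectors the abstract refers to can be shown with a toy example: a kernel slid across an image produces a response heatmap whose peak localizes a target. This is a hedged sketch of the generic technique only; the image, kernel, and "vehicle" patch are invented for illustration and bear no relation to the authors' networks or data:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the basic operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

# Toy 8x8 "road image": a 3x3 bright patch standing in for a vehicle.
img = [[0.0] * 8 for _ in range(8)]
for r in range(2, 5):
    for c in range(5, 8):
        img[r][c] = 1.0

# A 3x3 averaging kernel acts as a crude bright-patch detector.
kernel = [[1.0 / 9.0] * 3 for _ in range(3)]

# Convolution followed by ReLU gives a detection heatmap.
heatmap = [[max(v, 0.0) for v in row] for row in conv2d(img, kernel)]

# The strongest response marks the top-left corner of the detected patch.
best = max((heatmap[i][j], (i, j)) for i in range(len(heatmap))
           for j in range(len(heatmap[0])))
print(best[1])  # (2, 5): the patch's top-left corner
```

Real detection networks stack many learned kernels with nonlinearities in between, but the slide-multiply-sum-peak pattern above is the same operation applied at scale.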
Supporting ethnographic studies of ubiquitous computing in the wild
Ethnography has become a staple feature of IT research over the last twenty years, shaping our understanding of the social character of computing systems and informing their design in a wide variety of settings. The emergence of ubiquitous computing raises new challenges for ethnography, however, distributing interaction across a burgeoning array of small, mobile devices and online environments which exploit invisible sensing systems. Understanding interaction requires ethnographers to reconcile interactions that are, for example, distributed across devices on the street with online interactions in order to assemble coherent understandings of the social character and purchase of ubiquitous computing systems. We draw upon four recent studies to show how ethnographers are replaying system recordings of interaction alongside existing resources such as video recordings to do this, and identify key challenges that need to be met to support ethnographic study of ubiquitous computing in the wild.