
    ClimbAR - An Arkansas Rock Climbing Documentary

    The goal of this thesis project, ‘ClimbAR’, a rock climbing documentary, is to tell the story of a fringe sport and outdoor activity in the state of Arkansas. The history of the sport has been passed down primarily by word of mouth within a small, tight-knit group of Arkansas rock climbers since its humble beginnings in the 1980s. Though many of the original climbers in the state have since moved on, a new generation of adventurers has taken the reins, and this film focuses on that newest generation of Arkansas rock climbers. Like many climbers in the state, the filmmakers rely on Cole Fennel’s guidebooks, “Rock Climbing Arkansas” Vol. I & II, to more accurately portray the wide variety of rock climbing locations in the Ozarks.

    LookOut! Interactive Camera Gimbal Controller for Filming Long Takes

    The job of a camera operator is more challenging, and potentially dangerous, when filming long moving-camera shots. Broadly, the operator must keep the actors in frame while safely navigating around obstacles and fulfilling an artistic vision. We propose a unified hardware and software system that shoulders some of the camera operator's burden, freeing them to focus on safety and aesthetics during a take. Our real-time system gives a solo operator end-to-end control, so they can balance on-set responsiveness to the action against planned storyboards and framing, all while looking where they are going. By default, we film without a field monitor. Our LookOut system is built around a lightweight commodity camera gimbal with heavy modifications to its controller, which would normally provide only active stabilization. Our control algorithm reacts to speech commands, video, and a pre-made script. In particular, automatic monitoring of the live video feed spares the operator distractions. In pre-production, an artist uses our GUI to design a sequence of high-level camera "behaviors." Those can be specific, based on a storyboard, or looser objectives, such as "frame both actors." During filming, a machine-readable script exported from the GUI combines with sensor readings to drive the gimbal, as the sketch below illustrates. To validate our algorithm, we compared tracking strategies, interfaces, and hardware protocols, and collected impressions from (a) film-makers who used all aspects of our system and (b) film-makers who watched footage filmed using LookOut.
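
    The pipeline described above (GUI-designed behaviors exported as a machine-readable script, combined at run time with sensor readings to drive the gimbal) can be made concrete with a small sketch. The JSON schema, the field names, and the simple proportional control law below are all illustrative assumptions, not the paper's actual script format or controller.

    # Hypothetical sketch of a LookOut-style behavior script driving a gimbal.
    import json
    from dataclasses import dataclass

    @dataclass
    class Behavior:
        name: str        # e.g. "frame_both_actors"
        target_x: float  # desired subject position in the image, normalized to [0, 1]
        target_y: float
        gain: float      # proportional gain for the framing correction

    def load_script(path: str) -> list[Behavior]:
        """Load the behavior sequence exported from the pre-production GUI."""
        with open(path) as f:
            return [Behavior(**b) for b in json.load(f)["behaviors"]]

    def gimbal_rates(b: Behavior, detected_x: float, detected_y: float,
                     fov_deg: float = 60.0) -> tuple[float, float]:
        """Pan/tilt rates (deg/s) that nudge the tracked subject toward the
        behavior's target composition; a live tracker supplies detected_x/y."""
        pan = b.gain * (detected_x - b.target_x) * fov_deg
        tilt = b.gain * (detected_y - b.target_y) * fov_deg
        return pan, tilt

    # Each control tick, the tracker's detection feeds the active behavior:
    #   script = load_script("take_03.json")
    #   pan, tilt = gimbal_rates(script[0], detected_x=0.62, detected_y=0.48)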

    The dawn of the age of the drones: an Australian privacy law perspective

    Examines Australia's privacy laws in relation to unmanned aerial vehicles, to identify deficiencies that may need to be addressed.
    Introduction
    Suppose a homeowner habitually enjoys sunbathing in his or her backyard, protected by a high fence from prying eyes, including those of an adolescent neighbour. In times past such homeowners could be assured that they might go about their activities without a threat to their privacy. However, recent years have seen technological advances in unmanned aerial vehicles (‘UAVs’), known colloquially as drones, that have reduced their size, complexity and price. UAVs today include models retailing to the public for less than $350, with an ease of operation that lets them serve as mobile platforms for miniature cameras. These machines now mean that for individuals like the posited homeowner’s adolescent neighbour, barriers such as high fences no longer constitute insuperable obstacles to voyeuristic endeavours. Moreover, easy access to the internet and video-sharing websites provides a ready means of sharing any such recordings with a wide audience. Persons in the homeowner’s position might understandably seek some form of redress for such egregious invasions of their privacy. Other than some form of self-help, what alternative measures may be available? Under Australian law this problem yields no easy answer. In this country, a fractured landscape of common law and Commonwealth and state/territory legislation provides piecemeal protection against invasions of privacy by cameras mounted on UAVs. It is timely, in what may be regarded as the early days of the drone age, to consider these laws and to identify deficiencies that may need to be addressed lest, to quote words that are as apt today as when they were written over 120 years ago, ‘modern enterprise and invention, through invasions upon [their] privacy, [subject victims] to mental pain and distress, far greater than could be inflicted by mere bodily injury’.

    Access Magazine, May 2016


    CineTransfer: Controlling a Robot to Imitate Cinematographic Style from a Single Example

    This work presents CineTransfer, an algorithmic framework that drives a robot to record a video sequence mimicking the cinematographic style of an input video. We propose features that abstract the aesthetic style of the input video, so the robot can transfer this style to a scene whose visual details differ significantly from the input. The framework builds upon CineMPC, a tool that lets users control cinematographic features, such as the subjects' position in the image and the depth of field, by manipulating the intrinsics and extrinsics of a cinematographic camera. However, CineMPC requires a human expert to specify the desired style of the shot (composition, camera motion, zoom, focus, etc.). CineTransfer bridges this gap, aiming at a fully autonomous cinematographic platform. The user chooses a single input video as a style guide. CineTransfer extracts and optimizes two important style features, the composition of the subject in the image and the scene's depth of field, and provides instructions for CineMPC to control the robot to record an output sequence that matches these features as closely as possible; a sketch of these two features appears below. In contrast with other style transfer methods, our approach is a lightweight, portable framework that requires no deep network training or extensive datasets. Experiments with real and simulated videos demonstrate the system's ability to analyze and transfer style between recordings, and are available in the supplementary video.
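
    The two style features named above, subject composition and scene depth of field, can be sketched as simple per-frame measurements. The bounding-box input, the Laplacian-variance blur measure, and all function names below are stand-in assumptions for illustration, not the authors' extraction method.

    # Hypothetical per-frame style features in the spirit of CineTransfer.
    import cv2
    import numpy as np

    def composition(frame: np.ndarray, box: tuple[int, int, int, int]) -> tuple[float, float]:
        """Normalized image coordinates of the subject's bounding-box center.
        box is (x, y, w, h) from any external subject detector or tracker."""
        x, y, w, h = box
        H, W = frame.shape[:2]
        return (x + w / 2) / W, (y + h / 2) / H

    def dof_proxy(frame: np.ndarray, box: tuple[int, int, int, int]) -> float:
        """Ratio of subject sharpness to background sharpness via Laplacian
        variance; larger values suggest a shallower depth of field."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        x, y, w, h = box
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        subject = lap[y:y + h, x:x + w].var()
        mask = np.ones(gray.shape, dtype=bool)
        mask[y:y + h, x:x + w] = False
        background = lap[mask].var()
        return subject / max(background, 1e-6)

    # A controller such as CineMPC could then be asked to match these two
    # targets frame by frame while recording the new scene.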