ClimbAR - An Arkansas Rock Climbing Documentary
The goal of this thesis project, "ClimbAR" - a rock climbing documentary - is to tell the story of a fringe sport and outdoor activity in the state of Arkansas. The history of the sport has been passed down primarily by word of mouth within a small, tight-knit group of Arkansas rock climbers since its humble beginnings in the 1980s. Though many of the original climbers in the state have since moved on, a new generation of adventurers has taken the reins. This film focuses on the newest generation of Arkansas rock climbers.
Like many climbers in the state, the filmmakers rely on Cole Fennel's guidebooks, "Rock Climbing Arkansas" Vol. I & II, to accurately portray the wide variety of rock climbing locations in the Ozarks.
Towards a Smart Drone Cinematographer for Filming Human Motion
Affordable consumer drones have made capturing aerial footage more convenient and accessible. However, shooting cinematic motion videos with a drone is challenging because it requires users to analyze dynamic scenarios while operating the controller. In this thesis, our task is to develop an autonomous drone cinematography system to capture cinematic videos of human motion. We understand the system's filming performance to be influenced by three key components: 1) the video quality metric, which measures the aesthetic quality -- the angle, the distance, the image composition -- of the captured video; 2) the visual features, which encapsulate the visual elements that influence the filming style; and 3) camera planning, a decision-making model that predicts the next best movement. By analyzing these three components, we designed two autonomous drone cinematography systems using both heuristic-based and learning-based methods.
For the first system, we designed an Autonomous CinemaTography system, "ACT," by proposing a viewpoint quality metric focused on the visibility of the subject's 3D human skeleton. We expanded the application of human motion analysis and simplified manual control by assisting viewpoint selection using a through-the-lens method. For the second system, we designed an imitation-based system that learns the artistic intention of camera operators by watching professional aerial videos. We designed a camera planner that analyzes the video content and previous camera motion to predict future camera motion. Furthermore, we propose a planning framework that can imitate a filming style by "seeing" only a single demonstration video of that style; we call this "one-shot imitation filming." To the best of our knowledge, this is the first work that extends imitation learning to autonomous filming. Experimental results in both simulation and field tests exhibit significant improvements over existing techniques, and our approach helped inexperienced pilots capture cinematic videos.
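To make the skeleton-visibility idea concrete, here is a minimal sketch of such a viewpoint quality metric; the function name, the pinhole projection model, and the simple joint-counting score are illustrative assumptions, not the thesis's actual formulation:

    import numpy as np

    def viewpoint_quality(joints_3d, K, R, t, image_size):
        """Score a candidate camera pose by how much of a 3D human
        skeleton it sees. joints_3d: (N, 3) joint positions in world
        coordinates; K: 3x3 intrinsics; R, t: world-to-camera rotation
        and translation; image_size: (width, height) in pixels."""
        # Transform the joints into the camera frame.
        cam = joints_3d @ R.T + t
        in_front = cam[:, 2] > 0  # joints behind the camera are invisible

        # Pinhole projection onto the image plane.
        px = cam @ K.T
        px = px[:, :2] / np.where(px[:, 2:3] == 0, 1e-9, px[:, 2:3])
        w, h = image_size
        in_frame = (px[:, 0] >= 0) & (px[:, 0] < w) \
                 & (px[:, 1] >= 0) & (px[:, 1] < h)

        # Fraction of skeleton joints visible in the frame, in [0, 1].
        return float(np.mean(in_front & in_frame))

A planner could evaluate this score over candidate drone poses and pick the highest-scoring one, which is the role the thesis assigns to its heuristic first system.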
LookOut! Interactive Camera Gimbal Controller for Filming Long Takes
The job of a camera operator is more challenging, and potentially dangerous,
when filming long moving camera shots. Broadly, the operator must keep the
actors in-frame while safely navigating around obstacles, and while fulfilling
an artistic vision. We propose a unified hardware and software system that
distributes some of the camera operator's burden, freeing them up to focus on
safety and aesthetics during a take. Our real-time system provides a solo
operator with end-to-end control, so they can balance on-set responsiveness to
action versus planned storyboards and framing, while looking where they're going.
By default, we film without a field monitor.
Our LookOut system is built around a lightweight commodity camera gimbal
mechanism, with heavy modifications to the controller, which would normally
just provide active stabilization. Our control algorithm reacts to speech
commands, video, and a pre-made script. Specifically, our automatic monitoring
of the live video feed saves the operator from distractions. In pre-production,
an artist uses our GUI to design a sequence of high-level camera "behaviors."
Those can be specific, based on a storyboard, or looser objectives, such as
"frame both actors." Then during filming, a machine-readable script, exported
from the GUI, ties together with the sensor readings to drive the gimbal. To
validate our algorithm, we compared tracking strategies, interfaces, and
hardware protocols, and collected impressions from a) film-makers who used all
aspects of our system, and b) film-makers who watched footage filmed using
LookOut.
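As an illustration of how such an exported script might tie behaviors to live sensor readings, here is a hedged sketch; the schema, the field names, and the simple horizontal-framing error are all assumptions for illustration, not LookOut's actual format:

    # Hypothetical behavior script of the kind the GUI might export.
    SCRIPT = [
        {"behavior": "frame_subject", "subject": "actor_a", "frame_x": 0.33},
        {"behavior": "frame_both", "subjects": ["actor_a", "actor_b"]},
    ]

    def yaw_error(step, detections, image_width):
        """Map the active behavior plus live subject detections to a
        horizontal framing error (pixels) for the gimbal's yaw loop.
        detections: {subject name: horizontal pixel position}."""
        if step["behavior"] == "frame_subject":
            # Hold one actor at a chosen fraction of the frame width.
            return detections[step["subject"]] - step["frame_x"] * image_width
        if step["behavior"] == "frame_both":
            # Keep the midpoint between both actors on the centerline.
            xs = [detections[s] for s in step["subjects"]]
            return sum(xs) / len(xs) - image_width / 2
        return 0.0  # other behaviors handled by the gimbal's own controller

During a take, a controller of this kind would feed the error into the gimbal's active stabilization loop while the operator concentrates on navigation and aesthetics.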
The dawn of the age of the drones: an Australian privacy law perspective
Examines Australia's privacy laws in relation to unmanned aerial vehicles, to identify deficiencies that may need to be addressed.
Introduction
Suppose a homeowner habitually enjoys sunbathing in his or her backyard, protected by a high fence from prying eyes, including those of an adolescent neighbour. In times past such homeowners could be assured that they might go about their activities without a threat to their privacy. However, recent years have seen technological advances in unmanned aerial vehicles ("UAVs"), also known colloquially as drones, that have reduced their size, complexity and price. UAVs today include models retailing to the public for less than $350, with an ease of operation that enables them to serve as mobile platforms for miniature cameras. These machines now mean that for individuals like the posited homeowner's adolescent neighbour, barriers such as high fences no longer constitute insuperable obstacles to voyeuristic endeavours. Moreover, ease of access to the internet and video sharing websites provides a ready means of sharing any recordings made with such cameras with a wide audience. Persons in the homeowner's position might understandably seek some form of redress for such egregious invasions of their privacy. Other than some form of self-help, what alternative measures may be available?
Under Australian law this problem yields no easy answer. In this country, a fractured landscape of common law, Commonwealth and state/territory legislation provides piecemeal protection against invasions of privacy by cameras mounted on UAVs. It is timely, at what may be regarded as the early days of the drone age, to consider these laws and to identify deficiencies that may need to be addressed lest, to quote words that are as apt today as they were when written over 120 years ago, "modern enterprise and invention … through invasions upon [their] privacy, [subject victims] to mental pain and distress, far greater than could be inflicted by mere bodily injury".
Access Magazine, May 2016
CineTransfer: Controlling a Robot to Imitate Cinematographic Style from a Single Example
This work presents CineTransfer, an algorithmic framework that drives a robot
to record a video sequence that mimics the cinematographic style of an input
video. We propose features that abstract the aesthetic style of the input
video, so the robot can transfer this style to a scene with visual details that
are significantly different from the input video. The framework builds upon
CineMPC, a tool that allows users to control cinematographic features, like
subjects' position on the image and the depth of field, by manipulating the
intrinsics and extrinsics of a cinematographic camera. However, CineMPC
requires a human expert to specify the desired style of the shot (composition,
camera motion, zoom, focus, etc). CineTransfer bridges this gap, aiming a fully
autonomous cinematographic platform. The user chooses a single input video as a
style guide. CineTransfer extracts and optimizes two important style features,
the composition of the subject in the image and the scene depth of field, and
provides instructions for CineMPC to control the robot to record an output
sequence that matches these features as closely as possible. In contrast with
other style transfer methods, our approach is a lightweight and portable
framework which does not require deep network training or extensive datasets.
Experiments with real and simulated videos demonstrate the system's ability to
analyze and transfer style between recordings; results are shown in the
supplementary video.
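The two style features CineTransfer extracts can be sketched per frame as follows; the bounding-box composition measure and the gradient-energy depth-of-field proxy are illustrative assumptions, not the paper's exact features:

    import numpy as np

    def extract_style_features(frame_gray, subject_bbox):
        """frame_gray: (H, W) grayscale frame as a float array;
        subject_bbox: (x0, y0, x1, y1) pixel box around the subject."""
        h, w = frame_gray.shape
        x0, y0, x1, y1 = subject_bbox

        # Composition: subject center in normalized image coordinates.
        composition = ((x0 + x1) / (2.0 * w), (y0 + y1) / (2.0 * h))

        # Depth-of-field proxy: gradient energy of the subject region
        # relative to the background. A shallow depth of field gives a
        # sharp subject over a blurry background (ratio well above 1).
        gy, gx = np.gradient(frame_gray.astype(float))
        energy = gx ** 2 + gy ** 2
        mask = np.zeros((h, w), dtype=bool)
        mask[y0:y1, x0:x1] = True
        dof_proxy = energy[mask].mean() / (energy[~mask].mean() + 1e-6)

        return {"composition": composition, "dof_proxy": dof_proxy}

In a pipeline like the one described, per-frame features of this kind would be extracted from the style-guide video and handed to CineMPC as optimization targets for the robot's camera intrinsics and extrinsics.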