Towards a Smart Drone Cinematographer for Filming Human Motion
Affordable consumer drones have made capturing aerial footage more convenient and accessible. However, shooting cinematic motion videos with a drone is challenging because it requires users to analyze dynamic scenarios while operating the controller. In this thesis, our task is to develop an autonomous drone cinematography system to capture cinematic videos of human motion. We understand the system's filming performance to be influenced by three key components: 1) the video quality metric, which measures the aesthetic quality -- the angle, the distance, the image composition -- of the captured video; 2) the visual features, which encapsulate the visual elements that influence the filming style; and 3) camera planning, a decision-making model that predicts the next best movement. By analyzing these three components, we designed two autonomous drone cinematography systems, using both heuristic-based and learning-based methods. For the first system, we designed an Autonomous CinemaTography system, "ACT", by proposing a viewpoint quality metric focused on the visibility of the subject's 3D human skeleton. We expanded the application of human motion analysis and simplified manual control by assisting viewpoint selection with a through-the-lens method. For the second system, we designed an imitation-based system that learns the artistic intention of camera operators by watching professional aerial videos. We designed a camera planner that analyzes the video content and previous camera motion to predict future camera motion. Furthermore, we propose a planning framework that can imitate a filming style after "seeing" only a single demonstration video of that style; we call this "one-shot imitation filming." To the best of our knowledge, this is the first work that extends imitation learning to autonomous filming.
Experimental results from both simulation and field tests show significant improvements over existing techniques, and our approach helped inexperienced pilots capture cinematic videos.
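The skeleton-visibility idea behind the viewpoint quality metric can be illustrated with a minimal sketch. This is not the thesis's actual metric: the pinhole projection, the focal-length parameter, and uniform joint weighting below are illustrative assumptions. The intuition is simply that a candidate viewpoint scores higher when more of the subject's 3D skeleton joints project inside the image frame:

```python
def viewpoint_quality(joints_cam, focal_px, width, height):
    """Fraction of skeleton joints visible from a candidate viewpoint.

    joints_cam: list of (x, y, z) joint positions in camera coordinates,
                with z along the optical axis.
    focal_px:   focal length in pixels (illustrative pinhole camera).
    width, height: image dimensions in pixels.
    """
    visible = 0
    for x, y, z in joints_cam:
        if z <= 0:                            # joint is behind the camera
            continue
        u = focal_px * x / z + width / 2.0    # project to pixel coordinates
        v = focal_px * y / z + height / 2.0
        if 0 <= u < width and 0 <= v < height:
            visible += 1
    return visible / len(joints_cam)
```

A planner in the spirit of the first system would then favor camera movements toward viewpoints that maximize such a score.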
The dawn of the age of the drones: an Australian privacy law perspective
Examines Australia's privacy laws in relation to unmanned aerial vehicles, to identify deficiencies that may need to be addressed.
Introduction
Suppose a homeowner habitually enjoys sunbathing in his or her backyard, protected by a high fence from prying eyes, including those of an adolescent neighbour. In times past such homeowners could be assured that they might go about their activities without a threat to their privacy. However, recent years have seen technological advances in the development of unmanned aerial vehicles (‘UAVs’), also known colloquially as drones, that have allowed them to become reduced in size, complexity and price. UAVs today include models retailing to the public for less than $350 and with an ease of operation that enables them to serve as mobile platforms for miniature cameras. These machines now mean that for individuals like the posited homeowner’s adolescent neighbour, barriers such as high fences no longer constitute insuperable obstacles to their voyeuristic endeavours. Moreover, ease of access to the internet and video sharing websites provides a ready means of sharing any recordings made with such cameras with a wide audience. Persons in the homeowner’s position might understandably seek some form of redress for such egregious invasions of their privacy. Other than some form of self-help, what alternative measures may be available?
Under Australian law this problem yields no easy answer. In this country, a fractured landscape of common law, Commonwealth and state/territory legislation provides piecemeal protection against invasions of privacy by cameras mounted on UAVs. It is timely, at what may be regarded as the early days of the drone age, to consider these laws and to identify deficiencies that may need to be addressed lest, to quote words that are as apt today as they were when written over 120 years ago, ‘modern enterprise and invention … through invasions upon [their] privacy, [subject victims] to mental pain and distress, far greater than could be inflicted by mere bodily injury’.
Nearshore Bathymetry Estimation from Drone Video Using PIV Technique
This research introduces a novel method to estimate nearshore bottom topography using an unmanned aerial vehicle (UAV), or drone. The UAV was flown over the area of interest to film video, and the particle image velocimetry (PIV) technique was then applied to analyze the video frames and retrieve the wave speeds. Under shallow-water conditions, the wave dispersion relation simplifies such that when the wave speed is known, the water depth can be inferred. After combining the inferred water depths at multiple points within the area of interest, the bathymetry was constructed.
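The depth-inversion step above follows from standard linear wave theory: in shallow water the dispersion relation reduces to c = √(g·h), so a PIV-derived wave speed c gives the depth directly as h = c²/g. The function name and the multiplicative correction parameter in this sketch are illustrative assumptions, not the paper's implementation:

```python
G = 9.81  # gravitational acceleration, m/s^2

def depth_from_wave_speed(c, correction=1.0):
    """Invert the shallow-water dispersion relation c = sqrt(g * h)
    to get depth h = c**2 / g from a PIV-derived wave speed c (m/s).

    `correction` is an illustrative multiplicative factor; the paper
    fits such a factor empirically to compensate for non-linear
    wave-breaking effects, which otherwise cause overestimation.
    """
    return correction * c ** 2 / G

# A wave front travelling at ~3.13 m/s implies roughly 1 m of water.
```

Applying this inversion at many PIV interrogation points across the area of interest, then interpolating between them, yields the bathymetric map.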
To validate the method, individual waves were recorded in the nearshore breaking zone during two trials at Freeport, Texas, USA. We measured the intensity difference across the recorded images, since this difference signal had a larger signal-to-noise ratio, which improved the performance of the PIV algorithm. We then compared the PIV-estimated water depth with field measurements and observations, finding that the water depth was overestimated by 13.5%, a bias primarily explained by non-linear wave-breaking effects. We then introduced a correction factor, reducing the estimation error to within 6% of the true observed water depth.

Though there are limitations, this new approach can lower the cost of developing bathymetric maps in the nearshore and allows greater flexibility across space and time. Further improvements in equipment, and work on developing better correction factors, may yield still greater precision.
Drone Filming: Creativity versus Regulations in Autonomous Art Systems. A Case Study
This article explores the impact of drone regulations on the narrative potential of drone filming. The central focus of this exploration is a case-study analysis of the production of a multi-screen audio-visual digital installation, The Crossing (Patel, 2016). The Crossing [1], filmed in central London, used a heavyweight Unmanned Aerial System (UAS), also known as a drone, with a 5-kilo payload capacity carrying the Alexa Mini with the WCU-4. Combined with the CForce Mini lens control system, the UAS gave unparalleled camera and lens control at extended ranges, providing complete pan, tilt and lens control and allowing dynamic moves in the air. The result was the ability to navigate through spaces to capture intimate and playful shots that give the viewer ‘alternate’ versions of reality that only a ‘machine’ can provide. Artists, performers and filmmakers are finding new kinds of beauty through automated programming, where the drones are not just capturing the story but the machines themselves become the story. However, the operational scope of drones is limited by legal and health-and-safety regulations, particularly within built-up urban environments. These regulations govern the vertical and horizontal distance from objects and people, line of sight, time constraints, weather conditions and security implications. Further restrictions include requiring a trained and fully licensed crew with permission from the relevant aviation bodies. This article seeks to answer whether these restrictions limit the creativity of the artist or challenge the creator to consider alternate ways of using these Autonomous Art Systems to inform the aesthetic scope of the captured image. The article draws on a combination of original filming and broadcast examples to examine how legal and security restrictions on UAS inform the narrative and aesthetic realization of the final art form and the subsequent emotional and physical response of the spectator.