Coding local and global binary visual features extracted from video sequences
Binary local features represent an effective alternative to real-valued
descriptors, leading to comparable results for many visual analysis tasks,
while being characterized by significantly lower computational complexity and
memory requirements. When dealing with large collections, a more compact
representation based on global features is often preferred, which can be
obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW)
model. Several applications, including for example visual sensor networks and
mobile augmented reality, require visual features to be transmitted over a
bandwidth-limited network, thus calling for coding techniques that aim at
reducing the required bit budget, while attaining a target level of efficiency.
In this paper we investigate a coding scheme tailored to both local and global
binary features, which aims at exploiting both spatial and temporal redundancy
by means of intra- and inter-frame coding. In this respect, the proposed coding
scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC)
paradigm. That is, visual features are extracted from the acquired content,
encoded at remote nodes, and finally transmitted to a central controller that
performs visual analysis. This is in contrast with the traditional approach, in
which visual content is acquired at a node, compressed and then sent to a
central unit for further processing, according to the Compress-Then-Analyze
(CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of
rate-efficiency curves in the context of two different visual analysis tasks:
homography estimation and content-based retrieval. Our results show that the
novel ATC paradigm based on the proposed coding primitives can be competitive
with CTA, especially in bandwidth-limited scenarios.
Comment: submitted to IEEE Transactions on Image Processing
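The BoVW aggregation mentioned in the abstract maps a set of binary local descriptors to a compact global histogram. A minimal sketch of that step, assuming a pre-trained vocabulary of binary visual words and nearest-word assignment under Hamming distance (the toy descriptors and vocabulary below are hypothetical):

```python
import numpy as np

def bovw_histogram(local_descriptors, vocabulary):
    """Assign each binary local descriptor to its nearest visual word
    under Hamming distance, then return a normalized BoVW histogram."""
    hist = np.zeros(len(vocabulary), dtype=float)
    for d in local_descriptors:
        # Hamming distance = number of differing bits
        dists = [np.count_nonzero(d != w) for w in vocabulary]
        hist[int(np.argmin(dists))] += 1
    return hist / max(hist.sum(), 1)

# Toy 8-bit descriptors and a 3-word vocabulary (illustrative values only)
vocab = np.array([[0] * 8, [1] * 8, [0, 1] * 4])
descs = np.array([[0] * 8, [1] * 8, [1] * 8, [0, 1] * 4])
h = bovw_histogram(descs, vocab)  # -> [0.25, 0.5, 0.25]
```

The resulting fixed-length histogram is what would be intra- or inter-frame coded for transmission under the ATC paradigm.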
A 360 VR and Wi-Fi Tracking Based Autonomous Telepresence Robot for Virtual Tour
This study proposes a novel mobile robot teleoperation interface that demonstrates the applicability of a robot-aided remote telepresence system with a virtual reality (VR) device to a virtual tour scenario. To improve realism and provide an intuitive replica of the remote environment for the user interface, the implemented system automatically moves a mobile robot (viewpoint) while displaying a 360-degree live video streamed from the robot to a VR device (Oculus Rift). Upon the user choosing a destination location from a given set of options, the robot generates a route based on a shortest path graph and travels along that route using a wireless signal tracking method that depends on measuring the direction of arrival (DOA) of radio signals. This paper presents an overview of the system and architecture, and discusses its implementation aspects. Experimental results show that the proposed system is able to move to the destination stably using the signal tracking method, and that at the same time, the user can remotely control the robot through the VR interface.
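The DOA-based tracking described above amounts to steering the robot toward the measured direction of arrival of the Wi-Fi signal. The abstract does not give the control law, so the following is only a hedged sketch using a simple proportional heading controller with angle wrapping (the gain and function name are assumptions, not the paper's method):

```python
def steer_toward_doa(current_heading_deg, doa_deg, gain=0.5):
    """Proportional heading correction toward the measured direction of
    arrival (DOA) of the tracked radio signal. Angles are in degrees."""
    # Wrap the error into (-180, 180] so the robot turns the short way around.
    error = (doa_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    return gain * error

# Signal source measured 30 degrees to the robot's left (heading 350 -> DOA 20)
cmd = steer_toward_doa(350.0, 20.0)  # error = +30 deg, command = +15 deg
```

Wrapping the error before applying the gain is what keeps the robot from spinning the long way around when the DOA crosses the 0/360-degree boundary.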
New Solutions Based On Wireless Networks For Dynamic Traffic Lights Management: A Comparison Between IEEE 802.15.4 And Bluetooth
Abstract
Wireless Sensor Networks are widely used to detect and exchange information, and in recent years they have been increasingly involved in Intelligent Transportation System applications, especially in the dynamic management of signalized intersections. In fact, real-time knowledge of information concerning traffic light junctions represents a valid solution to congestion problems. In this paper, a wireless network architecture based on IEEE 802.15.4 or Bluetooth is introduced to monitor vehicular traffic flows near traffic lights. Moreover, an innovative algorithm is proposed to dynamically determine the green times and phase sequence of traffic lights, based on measured values of traffic flows. Several simulations compare the IEEE 802.15.4 and Bluetooth protocols in order to identify the more suitable communication protocol for ITS applications. Furthermore, in order to confirm the validity of the proposed algorithm for the dynamic management of traffic lights, some case studies have been considered and several simulations have been performed.
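The abstract does not specify how green times are derived from measured flows, so the sketch below shows only one plausible policy: splitting a fixed cycle among phases in proportion to measured flows, with a guaranteed minimum green per phase. The cycle length, minimum green, and flow values are all hypothetical:

```python
def green_times(flows_veh_per_h, cycle_s=90.0, min_green_s=10.0):
    """Split a fixed signal cycle among phases in proportion to measured
    traffic flows, guaranteeing each phase a minimum green time.
    (Illustrative policy, not the paper's exact algorithm.)"""
    n = len(flows_veh_per_h)
    spare = cycle_s - n * min_green_s          # time left after minimum greens
    total = sum(flows_veh_per_h) or 1.0        # avoid division by zero
    return [min_green_s + spare * f / total for f in flows_veh_per_h]

# Two phases: a busy approach (600 veh/h) vs. a quiet one (200 veh/h)
g = green_times([600.0, 200.0])  # -> [62.5, 27.5], summing to the 90 s cycle
```

A proportional split like this reacts to the sensor measurements each cycle while keeping the total cycle length, and hence coordination with neighboring junctions, fixed.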
WiseEye: next generation expandable and programmable camera trap platform for wildlife research
Funding: The work was supported by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1. The work of S. Newey and RJI was part funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS). Details published as an Open Source Toolkit, PLOS Journals, at: http://dx.doi.org/10.1371/journal.pone.0169758. Peer reviewed. Publisher PDF.
RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation
This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously
navigate through, identify, and reach areas of interest and, once there, recognize,
localize, and manipulate work tools to perform complex manipulation tasks. The
proposed contribution includes a modular software architecture where each
module solves specific sub-tasks and that can be easily enlarged to satisfy new
requirements. Included indoor and outdoor tests demonstrate the capability of
the proposed system to autonomously detect a target object (a panel) and
precisely dock in front of it while avoiding obstacles. They show it can
autonomously recognize and manipulate target work tools (i.e., wrenches and
valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve
stem). A specific case study is described in which the proposed modular
architecture allows an easy switch to a semi-teleoperated mode. The paper
exhaustively describes both the hardware and software setup of RUR53, its
performance when tested at the 2017 Mohamed Bin Zayed International
Robotics Challenge, and the lessons we learned when participating at this
competition, where we ranked third in the Grand Challenge in collaboration with
the Czech Technical University in Prague, the University of Pennsylvania, and
the University of Lincoln (UK).
Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis
Orbital Angular Momentum Waves: Generation, Detection and Emerging Applications
Orbital angular momentum (OAM) has aroused a widespread interest in many
fields, especially in telecommunications due to its potential for unleashing
new capacity in the severely congested spectrum of commercial communication
systems. Beams carrying OAM have a helical phase front and a field strength
with a singularity along the axial center, which can be used for information
transmission, imaging and particle manipulation. The number of orthogonal OAM
modes in a single beam is theoretically infinite and each mode is an element of
a complete orthogonal basis that can be employed for multiplexing different
signals, thus greatly improving the spectrum efficiency. In this paper, we
comprehensively summarize and compare the methods for generation and detection
of optical OAM, radio OAM and acoustic OAM. Then, we present the applications
and technical challenges of OAM in communications, including free-space optical
communications, optical fiber communications, radio communications and acoustic
communications. To complete our survey, we also discuss the state of the art in
particle manipulation and target imaging with OAM beams.
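The orthogonality claim above, that each OAM order is an element of a complete orthogonal basis, follows from the helical phase front exp(i*l*phi) of a mode of order l. A minimal numerical check of that property (grid resolution and mode orders are arbitrary choices for illustration):

```python
import numpy as np

# Uniform samples of the azimuthal angle over one full turn
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def oam_mode(l):
    """Azimuthal phase profile exp(i*l*phi) of an OAM mode of order l."""
    return np.exp(1j * l * phi)

def overlap(a, b):
    # Discrete approximation of (1/2pi) * integral of a * conj(b) d(phi)
    return np.mean(a * np.conj(b))

# Identical orders have unit overlap; distinct orders are orthogonal,
# which is what allows multiplexing independent signals on separate modes.
same = abs(overlap(oam_mode(2), oam_mode(2)))  # ~1.0
diff = abs(overlap(oam_mode(2), oam_mode(3)))  # ~0.0
```

This pairwise orthogonality is the mechanism behind the spectrum-efficiency gain the abstract describes: signals carried on different OAM orders can be separated at the receiver by projecting onto each mode.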