Dynamically variable step search motion estimation algorithm and a dynamically reconfigurable hardware for its implementation
Motion Estimation (ME) is the most computationally intensive part of video compression and video enhancement systems. For the recently available High Definition (HD) video formats, the computational complexity of the full search (FS) ME algorithm is prohibitively high, whereas the PSNR obtained by fast search ME algorithms is low. Therefore, in this paper, we present a Dynamically Variable Step Search (DVSS) ME algorithm for processing high definition video formats, together with a dynamically reconfigurable hardware architecture for efficiently implementing the DVSS algorithm. The simulation results show that the DVSS algorithm performs very close to the FS algorithm while searching far fewer locations, and that it outperforms successful fast search ME algorithms by searching more locations than these algorithms. The proposed hardware is implemented in VHDL and is capable of processing high definition video formats in real time. It can therefore be used in consumer electronics products for video compression, frame rate up-conversion and de-interlacing.
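To illustrate the general idea of a variable-step block-matching search, the sketch below probes the four neighbours of the current best match and halves the step size whenever no neighbour improves the SAD cost. The function names and step schedule are illustrative assumptions; the paper's actual DVSS algorithm is defined by its own search pattern.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference patch."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
        return np.inf                       # candidate falls outside the frame
    patch = ref[y:y + h, x:x + w]
    return np.abs(block.astype(np.int32) - patch.astype(np.int32)).sum()

def variable_step_search(block, ref, y0, x0, max_step=8):
    """Generic variable-step block-matching search (illustrative sketch,
    not the paper's exact DVSS schedule)."""
    best_y, best_x = y0, x0
    best_cost = sad(block, ref, y0, x0)
    step = max_step
    while step >= 1:
        improved = False
        for dy, dx in ((-step, 0), (step, 0), (0, -step), (0, step)):
            c = sad(block, ref, best_y + dy, best_x + dx)
            if c < best_cost:
                best_cost, best_y, best_x = c, best_y + dy, best_x + dx
                improved = True
        if not improved:
            step //= 2                      # no gain at this scale: refine step
    return (best_y - y0, best_x - x0), best_cost
```

Searches of this family visit only a small, adaptively chosen subset of the FS window, which is where the complexity savings come from.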
Evolutionary strategy based improved motion estimation technique for H.264 video coding
In this paper we propose an improved motion estimation algorithm based on evolutionary strategy (ES) for the H.264 video codec. The proposed technique performs a parallel local search over macroblocks, using a (mu+lambda) ES with an initial population of heuristically and randomly generated motion vectors. Experimental results show that the proposed scheme can reduce the computational complexity of the motion estimation algorithm used in the H.264 reference codec by up to 50% at the same picture quality. The proposed algorithm therefore provides a significant improvement in motion estimation for the H.264 video codec.
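The sketch below shows what a (mu+lambda) ES over integer motion vectors can look like. The `cost` callback (e.g. SAD against the reference frame), the parameter values, and the Gaussian mutation scheme are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def es_motion_search(cost, init_vectors, mu=4, lam=8, sigma=2.0, generations=10):
    """Minimal (mu + lambda) evolution strategy over integer motion vectors.

    `cost` maps a (dy, dx) motion vector to a block-matching cost such as
    SAD; `init_vectors` seeds the population (e.g. heuristic predictors
    plus random candidates).  Parameter values are hypothetical.
    """
    rng = np.random.default_rng(0)
    pop = [np.asarray(v, dtype=int) for v in init_vectors][:mu]
    for _ in range(generations):
        # Generate lambda offspring by Gaussian mutation of random parents.
        offspring = [pop[rng.integers(len(pop))] +
                     rng.normal(0, sigma, 2).round().astype(int)
                     for _ in range(lam)]
        # (mu + lambda) selection: keep the mu best of parents and offspring.
        pop = sorted(pop + offspring, key=lambda v: cost(tuple(v)))[:mu]
    best = pop[0]
    return tuple(best), cost(tuple(best))
```

Because each macroblock's search is independent, many such searches can run in parallel, matching the abstract's parallel local-search formulation.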
Attention and Anticipation in Fast Visual-Inertial Navigation
We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to
estimate its state using an on-board camera and an inertial sensor, without any
prior knowledge of the external environment. We consider the case in which the
robot can allocate limited resources to VIN, due to tight computational
constraints. Therefore, we answer the following question: under limited
resources, what are the most relevant visual cues to maximize the performance
of visual-inertial navigation? Our approach has four key ingredients. First, it
is task-driven, in that the selection of the visual cues is guided by a metric
quantifying the VIN performance. Second, it exploits the notion of
anticipation, since it uses a simplified model for forward-simulation of robot
dynamics, predicting the utility of a set of visual cues over a future time
horizon. Third, it is efficient and easy to implement, since it leads to a
greedy algorithm for the selection of the most relevant visual cues. Fourth, it
provides formal performance guarantees: we leverage submodularity to prove that
the greedy selection cannot be far from the optimal (combinatorial) selection.
Simulations and real experiments on agile drones show that our approach ensures
state-of-the-art VIN performance while maintaining a lean processing time. In
the easy scenarios, our approach outperforms appearance-based feature selection
in terms of localization errors. In the most challenging scenarios, it enables
accurate visual-inertial navigation while appearance-based feature selection
fails to track the robot's motion during aggressive maneuvers.
Comment: 20 pages, 7 figures, 2 tables
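A minimal sketch of the greedy step the abstract refers to, assuming a monotone submodular `utility` function that scores a feature set by predicted VIN performance (in the paper this score comes from forward-simulating the robot dynamics over a future horizon):

```python
def greedy_select(candidates, utility, budget):
    """Greedy maximization of a set utility under a cardinality budget.

    Assumes `utility` is monotone submodular, in which case the greedy
    solution is within a (1 - 1/e) factor of the optimal combinatorial
    selection.  `utility` is a placeholder for a VIN performance metric.
    """
    selected = set()
    remaining = set(candidates)
    for _ in range(min(budget, len(remaining))):
        base = utility(selected)
        # Pick the candidate with the largest marginal utility gain.
        best = max(remaining, key=lambda f: utility(selected | {f}) - base)
        selected.add(best)
        remaining.remove(best)
    return selected
```

The loop costs O(budget * |candidates|) utility evaluations, which is what makes a task-driven selection affordable under tight computational constraints.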
Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging
The implementation challenges of cooperative localization by dual
foot-mounted inertial sensors and inter-agent ranging are discussed and work on
the subject is reviewed. System architecture and sensor fusion are identified
as key challenges. A partially decentralized system architecture based on
step-wise inertial navigation and step-wise dead reckoning is presented. This
architecture is argued to reduce the computational cost and required
communication bandwidth by around two orders of magnitude while only giving
negligible information loss in comparison with a naive centralized
implementation. This makes a joint global state estimation feasible for up to a
platoon-sized group of agents. Furthermore, robust and low-cost sensor fusion
for the considered setup, based on state space transformation and
marginalization, is presented. The transformation and marginalization are used
to give the necessary flexibility for the presented sampling-based updates, both
for the inter-agent ranging and for the ranging-free fusion of the two feet of an individual
agent. Finally, characteristics of the suggested implementation are
demonstrated with simulations and a real-time system implementation.
Comment: 14 pages
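To make the step-wise idea concrete, the sketch below chains per-step displacement summaries into a trajectory. The `(dx, dy, dtheta)` step format is an assumed simplification of what a step-wise inertial navigation front end would report; transmitting only such low-rate summaries, rather than raw inertial data, is what cuts the communication bandwidth.

```python
import numpy as np

def dead_reckon(steps, pose0=(0.0, 0.0, 0.0)):
    """Chain body-frame step displacements into a global trajectory.

    Each step is a (dx, dy, dtheta) triple in the agent's local frame,
    an assumed output format of a step-wise inertial front end.
    """
    x, y, theta = pose0
    trajectory = [(x, y, theta)]
    for dx, dy, dtheta in steps:
        # Rotate the body-frame displacement into the global frame.
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dtheta
        trajectory.append((x, y, theta))
    return trajectory

# Example: ten 0.7 m strides with a slight leftward drift.
path = dead_reckon([(0.7, 0.0, 0.05)] * 10)
```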
Fast and easy blind deblurring using an inverse filter and PROBE
PROBE (Progressive Removal of Blur Residual) is a recursive framework for
blind deblurring. Using the elementary modified inverse filter at its core,
PROBE's experimental performance meets or exceeds the state of the art, both
visually and quantitatively. Remarkably, PROBE lends itself to analysis that
reveals its convergence properties. PROBE is motivated by recent ideas on
progressive blind deblurring, but breaks away from previous research by its
simplicity, speed, performance and potential for analysis. PROBE is neither a
functional minimization approach, nor an open-loop sequential method (blur
kernel estimation followed by non-blind deblurring). PROBE is a feedback
scheme, deriving its unique strength from the closed-loop architecture rather
than from the accuracy of its algorithmic components.
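A rough sketch of a feedback-style loop built around a regularized (modified) inverse filter, in the spirit of the abstract: `estimate_kernel_fft` is a hypothetical placeholder for a residual-blur estimator, and the recursion below is an illustration, not the actual PROBE scheme.

```python
import numpy as np

def modified_inverse_filter(img, kernel_fft, eps=1e-2):
    """Regularized (modified) inverse filter in the frequency domain.

    `kernel_fft` is the FFT of the blur kernel zero-padded to the image
    size; the eps term keeps the inverse bounded near spectral zeros.
    """
    img_fft = np.fft.fft2(img)
    inv = np.conj(kernel_fft) / (np.abs(kernel_fft) ** 2 + eps)
    return np.real(np.fft.ifft2(img_fft * inv))

def probe_like_loop(blurred, estimate_kernel_fft, iterations=5):
    """Illustrative feedback loop: repeatedly estimate the remaining
    blur and peel it off with the modified inverse filter."""
    x = blurred.copy()
    for _ in range(iterations):
        k_fft = estimate_kernel_fft(x)         # hypothetical estimator
        x = modified_inverse_filter(x, k_fft)  # remove a blur residual
    return x
```

The point of the closed loop is that each pass only needs to remove part of the residual blur, so the overall result can tolerate an imprecise kernel estimate at every iteration.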