A Fisher-Rao metric for paracatadioptric images of lines
In a central paracatadioptric imaging system a perspective camera takes an image of a scene reflected in a paraboloidal mirror. A 360° field of view is obtained, but
the image is severely distorted. In particular, straight lines in the scene project to circles in the image. These distortions make it difficult to detect projected lines using standard image processing algorithms. The distortions are removed using a Fisher-Rao metric which is defined on the space of projected lines in the paracatadioptric image. The space of projected lines is divided into subsets such that on each subset the Fisher-Rao metric is closely approximated by the Euclidean metric. Each subset is sampled at the vertices of a square grid and values are assigned to the sampled points using an adaptation of the trace transform. The result is a set of digital images to which standard image processing algorithms can be applied.
The effectiveness of this approach to line detection is illustrated using two algorithms, both of which are based on the Sobel edge operator. The task of line detection is reduced to the task of finding isolated peaks in a Sobel image. An experimental comparison is made between these two algorithms and a third algorithm taken from the literature, based on the Hough transform.
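The abstract does not give the algorithms themselves; as a minimal sketch of the final step only, reducing line detection to finding isolated peaks in a Sobel response (nothing here reflects the Fisher-Rao resampling, and all names are illustrative):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(img.astype(float), 1)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            patch = p[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def isolated_peaks(resp, thresh):
    """Pixels above `thresh` that are strict maxima of their 8-neighbourhood."""
    h, w = resp.shape
    p = np.pad(resp, 1, constant_values=-np.inf)
    neigh = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3) if (i, j) != (1, 1)])
    return np.argwhere((resp > thresh) & (resp > neigh.max(axis=0)))
```

Applied to the resampled line-space images described above, each surviving peak would correspond to one detected line.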
Spherical perspective
We survey the present state of spherical perspective, regarding both mathematical structure and drawing practice, with a view to applications in the visual arts. We define a spherical perspective as the entailment of a conical anamorphosis with a compact flattening of the visual sphere. We examine a general framework for solving spherical perspectives, exemplified with the azimuthal equidistant (“fisheye”) and equirectangular cases. We consider the relation between spherical and curvilinear perspectives. We briefly discuss computer renderings but focus on methods adapted to freehand sketching or technical drawing with simple instruments such as ruler and compass. We discuss how handmade spherical perspective drawings can generate immersive anamorphoses, which can be rendered as virtual reality panoramas, leading to hybrid visual creations that bridge the gap between traditional drawing and digital environments.
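The azimuthal equidistant ("fisheye") case named above has a standard closed form: the image radius grows linearly with the visual angle from the optical axis, so the whole visual sphere flattens to a disc. A minimal computational sketch (the scale factor `f` and the choice of +z as the optical axis are assumptions, not taken from the paper):

```python
import numpy as np

def fisheye_project(v, f=1.0):
    """Azimuthal equidistant projection of a 3-D direction vector.

    r = f * theta, where theta is the angle between v and the optical
    axis (+z); the full sphere maps to a disc of radius pi * f.
    """
    x, y, z = v / np.linalg.norm(v)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # visual angle from the axis
    phi = np.arctan2(y, x)                    # azimuth around the axis
    r = f * theta                             # equidistant: linear in theta
    return r * np.cos(phi), r * np.sin(phi)
```

A direction straight down the axis lands at the image centre; a direction at right angles to it lands at radius pi/2 · f.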
Mathematical Modeling for Software-in-the-Loop Prototyping of Automated Manufacturing Systems
Reducing the Sim-to-Real Gap for Event Cameras
Event cameras are paradigm-shifting novel sensors that report asynchronous,
per-pixel brightness changes called 'events' with unparalleled low latency.
This makes them ideal for high speed, high dynamic range scenes where
conventional cameras would fail. Recent work has demonstrated impressive
results using Convolutional Neural Networks (CNNs) for video reconstruction and
optic flow with events. We present strategies for improving training data for
event-based CNNs that result in a 20-40% boost in performance of existing
state-of-the-art (SOTA) video reconstruction networks retrained with our
method, and up to 15% for optic flow networks. A challenge in evaluating
event-based video reconstruction is the lack of quality ground truth images in existing
datasets. To address this, we present a new High Quality Frames (HQF) dataset,
containing events and ground truth frames from a DAVIS240C that are
well-exposed and minimally motion-blurred. We evaluate our method on HQF and
several existing major event camera datasets.
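The abstract does not say how asynchronous events are handed to the CNNs; one common representation in event-based deep learning is a temporally interpolated voxel grid, sketched here as an assumption rather than as the paper's actual pipeline:

```python
import numpy as np

def events_to_voxel_grid(t, x, y, p, bins, h, w):
    """Accumulate events (timestamp, x, y, polarity +/-1) into a
    (bins, h, w) tensor, splitting each event between its two nearest
    temporal bins -- a sketch of one common CNN input encoding."""
    grid = np.zeros((bins, h, w))
    if len(t) == 0:
        return grid
    # normalise timestamps onto [0, bins - 1]
    tn = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (bins - 1)
    lo = np.floor(tn).astype(int)
    frac = tn - lo
    for b, wgt in ((lo, 1 - frac), (np.minimum(lo + 1, bins - 1), frac)):
        np.add.at(grid, (b, y, x), p * wgt)  # unbuffered accumulate
    return grid
```

`np.add.at` is used instead of fancy-indexed `+=` so that repeated events at the same pixel and bin all accumulate.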
A model for ovine brucellosis incorporating direct and indirect transmission
Online Parameter Identification for State of Power Prediction of Lithium-ion Batteries in Electric Vehicles Using Extremum Seeking
What can neuromorphic event-driven precise timing add to spike-based pattern recognition?
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect, artificial, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in output that is optimally sparse in space and time: pixel-individual and precisely timed, produced only when new, previously unknown information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30–60 Hz). Using information theory to characterize the separability between classes at each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision.
Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
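The per-pixel level-crossing sampling described above can be sketched for a single 1-D signal: an event is emitted whenever the input has moved a fixed threshold away from its value at the previous event (the threshold and the ±1 polarity convention are illustrative assumptions):

```python
def level_crossing_events(signal, times, threshold):
    """Emit (time, polarity) events at fixed amplitude changes of the
    input, rather than at fixed points in time -- a 1-D sketch of the
    sampling scheme used by event-based neuromorphic sensors."""
    events = []
    ref = signal[0]                      # value at the last emitted event
    for t, s in zip(times[1:], signal[1:]):
        while s - ref >= threshold:      # one or more upward crossings
            ref += threshold
            events.append((t, +1))
        while ref - s >= threshold:      # one or more downward crossings
            ref -= threshold
            events.append((t, -1))
    return events
```

A constant or slowly varying signal produces no events at all, which is the source of the sparsity the letter emphasizes.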