Real-time image streaming over a low-bandwidth wireless camera network
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple yet effective network architecture that allows multiple cameras to be connected to the network and to synchronize their communication schedules. Image compression of greater than 90% is performed at each node on a local DSP coprocessor, so nodes use roughly one-eighth of the energy required to stream uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE
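To put the abstract's numbers in perspective, here is a back-of-the-envelope sketch (not from the paper) of the per-camera throughput for color QVGA at 2 frames per second, before and after a 90% size reduction; the 16-bit RGB565 pixel format assumed below is purely illustrative.

```python
# Rough throughput estimate for one camera node; pixel format is an assumption.
QVGA_PIXELS = 320 * 240
BYTES_PER_PIXEL = 2          # assumed 16-bit RGB565 color
FPS = 2
REDUCTION = 0.90             # ">90%" compression reported in the abstract

raw_bps = QVGA_PIXELS * BYTES_PER_PIXEL * 8 * FPS
compressed_bps = raw_bps * (1.0 - REDUCTION)

print(f"raw stream:        {raw_bps / 1e3:.0f} kbit/s")         # ~2458 kbit/s
print(f"compressed stream: {compressed_bps / 1e3:.0f} kbit/s")  # ~246 kbit/s
# The order-of-magnitude drop in radio traffic is what makes the reported
# energy figure (about 1/8 of the cost of streaming uncompressed images)
# plausible once the DSP's own energy use is included.
```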
Optical joint correlator for real-time image tracking and retinal surgery
A method for tracking an object in a sequence of images is described. Such a sequence may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. This joint transform correlator forms the basis for a laser eye surgical apparatus in which an image of the fundus of an eyeball is stabilized and the correlator tracks changes in the eyeball's position caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be precisely pointed toward a position on the retina.
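The paper's correlator is optical, but the frame-to-frame correlation step has a straightforward digital analogue. The sketch below locates the object patch from the previous frame inside the current frame using FFT-based cross-correlation; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def track_shift(prev_patch: np.ndarray, curr_frame: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) in curr_frame where prev_patch correlates best."""
    # Zero-mean both signals so bright regions do not dominate the correlation.
    patch = prev_patch - prev_patch.mean()
    frame = curr_frame - curr_frame.mean()

    # Circular cross-correlation via the frequency domain:
    # corr = IFFT( FFT(frame) * conj(FFT(patch)) ), with the patch zero-padded.
    shape = curr_frame.shape
    corr = np.fft.irfft2(
        np.fft.rfft2(frame, shape) * np.conj(np.fft.rfft2(patch, shape)),
        shape,
    )
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return int(peak[0]), int(peak[1])
```

Tracking then amounts to calling track_shift once per incoming frame and updating the object's position with the returned correlation peak.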
Deep Bilateral Learning for Real-Time Image Enhancement
Performance is a critical challenge in mobile image processing. Given a
reference imaging pipeline, or even human-adjusted pairs of images, we seek to
reproduce the enhancements and enable real-time evaluation. For this, we
introduce a new neural network architecture inspired by bilateral grid
processing and local affine color transforms. Using pairs of input/output
images, we train a convolutional neural network to predict the coefficients of
a locally-affine model in bilateral space. Our architecture learns to make
local, global, and content-dependent decisions to approximate the desired image
transformation. At runtime, the neural network consumes a low-resolution
version of the input image, produces a set of affine transformations in
bilateral space, upsamples those transformations in an edge-preserving fashion
using a new slicing node, and then applies those upsampled transformations to
the full-resolution image. Our algorithm processes high-resolution images on a
smartphone in milliseconds, provides a real-time viewfinder at 1080p
resolution, and matches the quality of state-of-the-art approximation
techniques on a large class of image operators. Unlike previous work, our model
is trained off-line from data and therefore does not require access to the
original operator at runtime. This allows our model to learn complex,
scene-dependent transformations for which no reference implementation is
available, such as the photographic edits of a human retoucher. Comment: 12 pages, 14 figures, Siggraph 2017
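The inference path is compact enough to sketch. The following NumPy/SciPy fragment illustrates the final stage described above: trilinearly slicing a grid of per-cell 3x4 affine color transforms with a guide map and applying the resulting per-pixel transform to the full-resolution image. The grid would normally come from the network; the grid shape, the fixed luminance guide, and the function name are assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def slice_and_apply(image: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """image: (H, W, 3) floats in [0, 1]; grid: (gh, gw, gd, 3, 4) affine coeffs."""
    H, W, _ = image.shape
    gh, gw, gd, _, _ = grid.shape

    # Guide map: plain luminance here (the real system learns this mapping).
    guide = image @ np.array([0.299, 0.587, 0.114])

    # Continuous (grid_y, grid_x, grid_depth) coordinates for every pixel.
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([
        ys / (H - 1) * (gh - 1),
        xs / (W - 1) * (gw - 1),
        guide * (gd - 1),
    ])

    # Trilinear "slicing": interpolate each of the 12 affine coefficients.
    flat = grid.reshape(gh, gw, gd, 12)
    coeffs = np.stack(
        [map_coordinates(flat[..., k], coords, order=1) for k in range(12)],
        axis=-1,
    ).reshape(H, W, 3, 4)

    # Per-pixel affine color transform: out = A[:, :3] @ rgb + A[:, 3].
    out = np.einsum('hwij,hwj->hwi', coeffs[..., :3], image) + coeffs[..., 3]
    return np.clip(out, 0.0, 1.0)
```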
Another Deleuzian Resnais: L'Année dernière à Marienbad as conflict between sadism and masochism
The Deleuzian reading of L'Année dernière à Marienbad proposed here draws less on what has become a virtually canonical concept in film studies – Deleuze's time-image – than on a much earlier work by the same author, Masochism, which treats sadism and masochism as qualitatively different symbolic universes. Resnais's film, with its deployment of mirrors and statuary and its suggestion of a contract between the characters A and X, presents striking resemblances to the world of masochism as described by Deleuze (drawing on the work of Theodor Reik). At the same time, the role of the third protagonist, M, like that of Robbe-Grillet who wrote the screenplay, has Sadean overtones, suggesting that it might be possible to read the film with its diegetic ambiguities as a Möbius strip linking the sadistic and the masochistic world not only with each other, but with the crystalline universe of the time-image.
Real-time image difference detection using a polarization-rotation spatial light modulator
An image difference detection system is described, of the type in which two created image representations, such as transparencies of the images to be compared, lie coplanar while light passes through the two transparencies and is formed into coincident images at the image plane for comparison. The two transparencies are formed by portions of a polarization-rotation spatial light modulator display, such as a multi-pixel liquid crystal display or a magneto-optical rotation display. In a system where the light passing through the two transparencies is polarized in transverse directions, to enable the use of a Wollaston prism to bring the images into coincidence, a liquid crystal display can be used that is devoid of the polarizing sheets that would interfere with transverse polarization of the light passing through the two transparencies.
A Q-Ising model application for linear-time image segmentation
A computational method is presented which efficiently segments digital
grayscale images by directly applying the Q-state Ising (or Potts) model. Since
the Potts model was first proposed in 1952, physicists have studied lattice
models to gain deep insights into magnetism and other disordered systems. For
some time, researchers have realized that digital images may be modeled in much
the same way as these physical systems (i.e., as a square lattice of numerical
values). A major drawback in using Potts model methods for image segmentation
is that conventional methods run in exponential time. Advances
have been made via certain approximations to reduce the segmentation process to
power-law time. However, in many applications (such as for sonar imagery),
real-time processing requires much greater efficiency. This article describes
an energy minimization technique that applies four Potts (Q-Ising) models
directly to the image and runs in linear time. The result
is analogous to partitioning the system into regions of four classes of
magnetism. This direct Potts segmentation technique is demonstrated on
photographic, medical, and acoustic images. Comment: 7 pages, 8 figures, revtex, uses subfigure.sty. Central European Journal of Physics, in press (2010)
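As a concrete, much simplified illustration of this kind of procedure (not the article's algorithm), the sketch below quantizes a grayscale image into Q = 4 classes and runs a fixed number of ICM sweeps over a Potts energy with 4-neighbour coupling, so the cost remains linear in the number of pixels. The class centres, coupling strength, and sweep count are assumptions.

```python
import numpy as np

def potts_segment(img: np.ndarray, q: int = 4, beta: float = 0.02,
                  sweeps: int = 3) -> np.ndarray:
    """img: 2-D grayscale array scaled to [0, 1]; returns integer labels in [0, q)."""
    centres = (np.arange(q) + 0.5) / q               # evenly spaced class centres
    labels = np.argmin(np.abs(img[..., None] - centres), axis=-1)

    H, W = img.shape
    for _ in range(sweeps):                          # fixed sweep count keeps it O(H*W)
        for y in range(H):
            for x in range(W):
                # Data term: squared distance of the pixel value to each class centre.
                data = (img[y, x] - centres) ** 2
                # Potts term: how many 4-neighbours disagree with each candidate label.
                disagree = np.zeros(q)
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        disagree += (np.arange(q) != labels[ny, nx])
                labels[y, x] = int(np.argmin(data + beta * disagree))
    return labels
```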
Synthetic generation of address-events for real-time image processing
Address-event representation (AER) is a communication protocol that emulates the way neurons in the nervous system communicate, and it is typically used for transferring images between chips. It was originally developed for bio-inspired and real-time image processing systems. Such systems may consist of a complicated hierarchical structure with many chips that transmit images among themselves in real time while performing some processing. In this paper several software methods for generating AER streams from images stored in a computer's memory are presented. A hardware version that works in real time is also being studied. All of them have been evaluated and compared. Comisión Europea IST-2001-34102
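As a flavour of what such a software generator does, the sketch below rate-codes a grayscale image into a stream of (y, x) addresses: each pixel emits a number of events proportional to its intensity, and the events are interleaved in random order. This is a generic illustration, not necessarily one of the paper's methods; the per-pixel event budget is an assumption.

```python
import numpy as np

def image_to_aer(img: np.ndarray, max_events_per_pixel: int = 16,
                 rng: np.random.Generator | None = None) -> np.ndarray:
    """img: 2-D array in [0, 1]. Returns an (N, 2) array of (y, x) addresses."""
    rng = rng or np.random.default_rng()
    counts = np.rint(img * max_events_per_pixel).astype(int)   # rate coding
    ys, xs = np.nonzero(counts)
    # Repeat each firing pixel's address once per event it should emit ...
    events = np.repeat(np.stack([ys, xs], axis=1), counts[ys, xs], axis=0)
    rng.shuffle(events, axis=0)   # ... then interleave the events randomly.
    return events
```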
Coagulation time detection by means of real-time image processing
Several techniques for semi-automatic or automatic detection of the coagulation time in blood or plasma analysis are available in the literature. However, these techniques are either complex and demand specialized equipment, or allow the analysis of only very few samples in parallel. In this paper a new system based on computer vision is presented. A simple image processing algorithm has been developed which leads to an accurate estimation of the coagulation time of several samples in parallel. The estimation can be performed in real time using a transputer architecture supported by a PC. Peer Reviewed Postprint (published version)
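The abstract does not spell out the algorithm, so the following is only one plausible computer-vision sketch of the general idea: monitor each sample's region of interest over time and report the moment its appearance stops changing. The ROI handling, the threshold, and the hold time are all assumptions, not the paper's method.

```python
import numpy as np

def coagulation_time(frames: list[np.ndarray], times: list[float],
                     threshold: float = 1.0, hold: int = 10) -> float | None:
    """frames: grayscale ROI crops of one sample over time; times: their timestamps."""
    quiet = 0
    for i in range(1, len(frames)):
        # Mean absolute frame-to-frame difference within the sample's ROI.
        change = np.mean(np.abs(frames[i].astype(float) - frames[i - 1]))
        quiet = quiet + 1 if change < threshold else 0
        if quiet >= hold:            # stable for `hold` consecutive transitions
            return times[i - hold]   # approximate onset of the stable state
    return None                      # no coagulation detected in this sequence
```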
Low-level processing for real-time image analysis
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map, and an integrated microprocessor clusters the edges and represents them as chain codes. Image statistics, useful for higher-level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.
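The chain-code representation mentioned above is simple enough to show concretely. The sketch below encodes an ordered list of 8-connected edge points as Freeman direction codes (0-7); assuming an already-ordered contour is an illustrative simplification of the edge clustering the microprocessor performs.

```python
# Map a (dy, dx) step between successive edge pixels to a Freeman code 0-7.
FREEMAN = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
           (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(contour: list[tuple[int, int]]) -> list[int]:
    """contour: ordered (row, col) points along an 8-connected edge segment."""
    return [FREEMAN[(y1 - y0, x1 - x0)]
            for (y0, x0), (y1, x1) in zip(contour, contour[1:])]

# Example: a short edge running right, right, then down.
print(chain_code([(0, 0), (0, 1), (0, 2), (1, 2)]))   # [0, 0, 6]
```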
