Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach
This paper proposes a probabilistic approach for the detection and tracking of
particles in fluorescence time-lapse imaging. In the presence of very noisy,
poor-quality data, particles and trajectories can be characterized by an a
contrario model, which estimates the probability of observing the structures of
interest in random data. This approach, first introduced to model human visual
perception and since applied successfully to many image processing tasks, leads
to algorithms that require neither a prior learning stage nor tedious parameter
tuning, and that are very robust to noise. Comparative evaluations against a
well-established baseline show that the proposed approach outperforms the state
of the art. Comment: Published in Machine Vision and Applications
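The core of the a contrario principle described above can be sketched in a few lines. This is a generic illustration, not the paper's implementation: it scores a candidate bright spot by its Number of False Alarms (NFA), the expected count of equally strong events in pure noise, and declares it meaningful when the NFA falls below a tolerance `eps`. The function names and the binomial noise model are illustrative assumptions.

```python
import math

def nfa_bright_spot(n_tests, k, n, p):
    """NFA for observing at least k of n pixels above an intensity
    threshold, when each pixel independently exceeds it with
    probability p under the background noise model.
    NFA = (number of tests) * binomial tail P[X >= k]."""
    tail = sum(math.comb(n, i) * p**i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
    return n_tests * tail

def is_meaningful(n_tests, k, n, p, eps=1.0):
    # A candidate is an eps-meaningful detection when fewer than
    # eps such events are expected to occur in random data.
    return nfa_bright_spot(n_tests, k, n, p) <= eps
```

Because the only free parameter `eps` directly bounds the expected number of false detections, such detectors need no training stage and little tuning, which is the property the abstract highlights.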
A flexible algorithm for detecting challenging moving objects in real-time within IR video sequences
Detecting moving objects in infrared video sequences in real time can be particularly challenging because of object characteristics such as size, contrast, velocity, and trajectory. Many proposed algorithms achieve good performance, but only for some specific kinds of objects, or by neglecting computational time, making them unsuitable for real-time applications. To obtain more flexibility in different situations, we developed an algorithm capable of successfully dealing with small and large objects, slow and fast objects (even those with unusual movements), and poorly contrasted objects. The algorithm can also handle the simultaneous presence of multiple objects within the scene and works in real time even on cheap hardware. The implemented strategy is based on fast but accurate background estimation and rejection, performed pixel by pixel and updated frame by frame, which is robust to background intensity changes and to noise. A control routine prevents the estimate from being biased by the transit of moving objects, while two noise-adaptive thresholding stages, respectively, drive the estimation control and extract moving objects after background removal, yielding the desired detection map. At each step, attention was paid to developing computationally light solutions to meet the real-time requirement. The algorithm has been tested on a database of infrared video sequences, obtaining promising results against different kinds of challenging moving objects and outperforming other commonly adopted solutions. Its detection performance, flexibility, and computational time make the algorithm particularly suitable for real-time applications such as intrusion monitoring, activity control, and detection of approaching objects, which are fundamental tasks in the emerging research area of Smart Cities.
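The background-estimation-and-rejection loop described in this abstract can be sketched generically: a per-pixel running average updated every frame, a control mask that freezes the update where motion was just detected, and a noise-adaptive threshold on the background-subtracted residual. This is a minimal illustration under assumed choices (exponential averaging, MAD noise estimate), not the authors' exact algorithm.

```python
import numpy as np

def update_background(bg, frame, detect_mask, alpha=0.05):
    """Per-pixel running-average background, updated frame by frame.
    The control routine freezes the update where motion was detected,
    so transiting objects do not bias the estimate."""
    upd = bg + alpha * (frame - bg)
    return np.where(detect_mask, bg, upd)

def detect_moving(frame, bg, k=3.0):
    """Noise-adaptive thresholding of the background-subtracted frame:
    residuals larger than k times the estimated noise level become
    detections; the noise level is a robust MAD estimate."""
    resid = frame - bg
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > k * max(sigma, 1e-6)
```

Each frame would call `detect_moving` first and feed the resulting mask back into `update_background`, matching the abstract's coupling between the detection map and the estimation control.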
Motion compensated interpolation for subband coding of moving images
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (leaves 108-119). By Mark Daniel Polomski. M.S.
Neural network directed Bayes decision rule for moving target classification
Includes bibliographical references. In this paper, a new neural-network-directed Bayes decision rule is developed for target classification, exploiting the dynamic behavior of the target. The system consists of a feature extractor, a neural-network-directed conditional probability generator, and a novel sequential Bayes classifier. The velocity and curvature sequences extracted from each track are used as the primary features. Similar to the hidden Markov model (HMM) scheme, several hidden states are used to train the neural network, whose output is the conditional probability of the hidden states given the observations. These conditional probabilities are then used as inputs to the sequential Bayes classifier to make the classification. The classification results are updated recursively whenever a new scan of data is received. Simulation results on multiscan images containing heavy clutter are presented to demonstrate the effectiveness of the proposed methods. This work was funded by the Optoelectronic Computing Systems (OCS) Center at Colorado State University, under NSF/REC Grant 9485502.
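The recursive update at the heart of a sequential Bayes classifier like the one described above is compact: the posterior after the previous scan becomes the prior, is multiplied by the per-class conditional probabilities of the newest observation, and is renormalized. The sketch below is a generic illustration; the likelihood vectors stand in for the neural network's conditional-probability outputs, which are not specified here.

```python
import numpy as np

def sequential_bayes_update(posterior, likelihoods):
    """One recursive Bayes step: prior (previous posterior) times the
    per-class conditional probability of the new scan, renormalized."""
    post = posterior * likelihoods
    return post / post.sum()

# Start from a uniform prior over two target classes and fold in
# scan-by-scan likelihoods (stand-ins for the network's outputs).
posterior = np.array([0.5, 0.5])
for lik in [np.array([0.7, 0.3]),
            np.array([0.8, 0.2]),
            np.array([0.6, 0.4])]:
    posterior = sequential_bayes_update(posterior, lik)
```

Because each step renormalizes, the running posterior after N scans equals the normalized product of all N likelihoods with the prior, so evidence accumulates scan by scan exactly as the abstract describes.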
Detection of moving-object signals based on the time-selection method
To increase the efficiency of detecting moving objects in radiolocation, additional features associated with the characteristics of their trajectories are used. The authors assume that trajectory coordinates are correlated, which allows the coordinate values to be extrapolated by taking their increments over the scanning period into account. The detection procedure consists of two stages. In the first, detection is carried out by the classical threshold method with a low threshold level, which provides a high probability of detection at the cost of a high probability of false alarms. At the same time, uncertainty arises in distinguishing the object trajectory from the false trajectories among which it is embedded. Because the coordinates of false trajectories are statistically independent, in contrast to the correlated coordinates of the object, the average duration of the former is less than that of the latter. This difference is used to solve the detection problem at the second stage, based on the time-selection method. The obtained results allow estimation of the gain in detection probability when the proposed method is used.
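The second stage described above turns the duration difference between true and false trajectories into a decision rule. A minimal sketch, under the assumption that candidate tracks are stored as per-scan plot lists (the data layout and the `min_scans` threshold are illustrative, not from the paper):

```python
def time_selection(tracks, min_scans):
    """Second-stage time selection: a candidate trajectory is confirmed
    only if it persists for at least `min_scans` consecutive scans.
    False tracks, built from statistically independent plots, are on
    average shorter-lived than correlated object tracks."""
    return [t for t in tracks if len(t) >= min_scans]

# Hypothetical candidate tracks as lists of (x, y) plots per scan.
candidates = [
    [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)],  # persistent (object-like)
    [(5, 9), (6, 2)],                          # short-lived (false-alarm-like)
]
confirmed = time_selection(candidates, min_scans=4)
```

The gain the abstract reports comes from choosing the first-stage threshold low (maximizing detection) and letting this duration test remove the resulting flood of short false tracks.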
Multimotion Visual Odometry (MVO): Simultaneous Estimation of Camera and Third-Party Motions
Estimating motion from images is a well-studied problem in computer vision
and robotics. Previous work has developed techniques to estimate the motion of
a moving camera in a largely static environment (e.g., visual odometry) and to
segment or track motions in a dynamic scene using known camera motions (e.g.,
multiple object tracking).
It is more challenging to estimate the unknown motion of the camera and the
dynamic scene simultaneously. Most previous work requires a priori object
models (e.g., tracking-by-detection), motion constraints (e.g., planar motion),
or fails to estimate the full SE(3) motions of the scene (e.g., scene flow).
While these approaches work well in specific application domains, they are not
generalizable to unconstrained motions.
This paper extends the traditional visual odometry (VO) pipeline to estimate
the full SE(3) motion of both a stereo/RGB-D camera and the dynamic scene. This
multimotion visual odometry (MVO) pipeline requires no a priori knowledge of
the environment or the dynamic objects. Its performance is evaluated on a
real-world dynamic dataset with ground truth for all motions from a motion
capture system. Comment: This updated manuscript corrects the experimental
results published in the proceedings of the 2018 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS). 8 pages. 7 figures. Video
available at https://www.youtube.com/watch?v=84tXCJOlj0
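A building block of any VO pipeline estimating full SE(3) motions, including the multimotion setting above, is solving for the rigid transform between two sets of corresponding 3D points (e.g., triangulated stereo features on one moving body across two frames). The sketch below is the standard SVD-based (Kabsch) solution, shown as general background rather than the MVO authors' specific estimator.

```python
import numpy as np

def estimate_se3(P, Q):
    """Least-squares rigid motion (R, t) with Q ~= R @ P + t, via the
    standard SVD (Kabsch) solution on centered point sets.
    P, Q: (N, 3) arrays of corresponding 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

In a multimotion setting, feature correspondences would first be segmented by motion (e.g., via robust model fitting), and this estimator applied per segment to recover each body's SE(3) trajectory.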
Piecewise-stationary motion modeling and iterative smoothing to track heterogeneous particle motions in dense environments
One of the major challenges in multiple particle tracking is capturing the extremely heterogeneous movements of objects in crowded scenes. The presence of numerous assignment candidates within the expected range of particle motion makes the tracking ambiguous and induces false positives. Lowering the ambiguity by reducing the search range, on the other hand, is not an option, as this would increase the rate of false negatives. We propose here a piecewise-stationary motion model (PMM) for particle transport, along with an iterative smoother that exploits recursive tracking in multiple rounds in the forward and backward temporal directions. By fusing past and future information, our method, termed PMMS, can recover fast transitions from free or confined diffusive motion to directed motion with linear time complexity. To avoid false positives, we complement recursive tracking with a robust inline estimator of the search radius for assignment (a.k.a. gating), where past and future information are exploited using only two frames at each optimization step. We demonstrate the improvement of our technique on simulated data, in particular the impact of density, variation in frame-to-frame displacements, and motion-switching probability. We evaluated our technique on the 2D particle tracking challenge dataset published by Chenouard et al. in 2014. Using high SNR to focus on motion-modeling challenges, we show superior performance at high particle density. In biological applications, our algorithm allows us to quantify the extremely small percentage of motor-driven movements of fluorescent particles along microtubules in a dense field of unbound, diffusing particles. We also show with virus imaging that our algorithm can cope with a strong reduction in recording frame rate while keeping the same performance relative to methods relying on fast sampling.
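The inline gating estimator mentioned above can be illustrated generically: derive the assignment search radius from the observed frame-to-frame displacement magnitudes of currently linked particles, using robust statistics so a few large (possibly erroneous) jumps do not inflate it. The formula (median plus a multiple of the MAD) is an assumed stand-in, not the paper's estimator.

```python
import numpy as np

def gating_radius(displacements, k=3.0):
    """Robust inline estimate of the assignment search radius (gating)
    from frame-to-frame displacement magnitudes: median plus k times
    the MAD-based spread, so outlying jumps do not widen the gate."""
    d = np.asarray(displacements, dtype=float)
    med = np.median(d)
    mad = 1.4826 * np.median(np.abs(d - med))
    return med + k * mad
```

A tight, data-driven gate like this keeps the candidate set small in dense scenes (fewer false positives) without the blanket search-range reduction the abstract warns against.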
- …