10 research outputs found

    Trajectory Analysis for Sport and Video Surveillance

    In video surveillance and sports analysis applications, object trajectories offer the possibility of extracting rich information about the underlying behavior of the moving targets. To this end, we introduce an extension of Point Distribution Models (PDMs) to analyze object motion in its spatial, temporal, and spatiotemporal dimensions. These trajectory models represent object paths as an average trajectory and a set of deformation modes in the spatial, temporal, and spatiotemporal domains. Thus, any given motion can be expressed in terms of its modes, which in turn can be ascribed to a particular behavior. The proposed analysis tool has been tested on motion data extracted from a vision system tracking radio-guided cars running inside a circuit. This affords an easier interpretation of results, because the shortest lap provides a reference behavior. Besides presenting an actual analysis, we discuss how to normalize trajectories so that the analysis is meaningful.
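The PDM extension described above can be sketched with ordinary PCA on flattened trajectories. The following Python fragment is illustrative only (the function names and the plain-PCA formulation are assumptions, not the authors' implementation): each trajectory of T points becomes a 2T-vector, the mean vector is the average trajectory, and the leading eigenvectors of the sample covariance are the deformation modes.

```python
import numpy as np

def fit_pdm(trajectories):
    """Fit a Point Distribution Model to trajectories of shape (N, T, 2).

    Returns the mean trajectory vector, the deformation modes (columns,
    strongest first), and the variance explained by each mode.
    """
    N, T, D = trajectories.shape
    X = trajectories.reshape(N, T * D)        # flatten each path to a vector
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (N - 1)                 # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]         # strongest modes first
    return mean, eigvecs[:, order], eigvals[order]

def project(trajectory, mean, modes, k):
    """Express one trajectory as its first k mode coefficients."""
    return modes[:, :k].T @ (trajectory.ravel() - mean)
```

A motion is then summarized by a handful of mode coefficients, which is what makes ascribing it to a particular behavior tractable.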

    A Quantitative Method for Comparing Trajectories of Mobile Robots Using Point Distribution Models

    The need for efficient security systems has led to the development of automatic behavioral identification tools using video tracking. In the field of mobile robotics, however, trajectories are seldom taken into account to qualify robot behavior. Most metrics rely mainly on the time to accomplish a given task or on prior knowledge of the robot controller, with the assumption that the trajectory can be kept within a maximal bounding error. A trajectory analysis method based on a Point Distribution Model (PDM) is presented here. The applicability of this method is demonstrated on the trajectories of a real differential-drive robot endowed with two different controllers that lead to different patterns of motion. Results demonstrate that, in the space of the PDM, the difference between the two controllers can be easily quantified. The method applies equally well to trajectories gathered in real-world experiments as to those generated in a corresponding realistic simulation. A quantitative comparison between these results (real and simulated) affords an assessment of the simulation quality, when the simulation features are appropriately tuned.

    Development and position data-related application of a stochastic model for trajectory simulation of a nonspinning volleyball

    While most previous research on the knuckling effect, known as an erratic aerodynamic property of sports balls, was carried out empirically, theoretical investigations, such as those that use a priori information to generate bidirectional curvature in the trajectory simulation of a nonspinning ball, are not yet available. Uncertainty quantification in the numerical evaluation of ball flight trajectories is therefore a suitable subject for further investigation.
    The system model used is based on a stochastic vectorial differential equation (Newton's second law) in which the sources of uncertainty are quantified by spectral and velocity-specific properties of Langevin forces for drag and lift. A data-based model development was carried out by reducing both the number of uncertainty sources and the dimensionality, and the model was verified using at least one verification method (ANOVA decomposition of the Legendre chaos expansion). First, the temporal distribution and the probability of occurrence of a (polar-)angle-specific knuckling effect could be calculated, based on an event-analytic classification of simulation-based spatiotemporal tracking data (the time-dependent difference of position vectors, used to compare drag-coefficient and fluctuation quantities). A comparatively fast and large temporal change of the polar angle of the difference vector was used as the criterion for identifying the effect. An overarching aim in global uncertainty quantification was a comparative investigation of the influence of position data in standardised situations of the sports game. Examination methods included time-averaging of 99% confidence interval lengths for dissipated/generated power and for the magnitude of the position vector, Gauss-Legendre integration to calculate the variance of the landing point, and an analysis of deformation modes based on Hotelling's T2 statistic, including an eigenvalue analysis to reduce the number of variables. To eliminate the systematic impact of the time of flight, the results were calibrated by recalculation under the assumption of a speed-independent drag coefficient, whereby a global coefficient-specific knuckling effect can be introduced. Serve scenarios of volleyball with a high positional scaling density were chosen.
    This enables both a differentiated identification of sport-specific applications in relation to immediate competition control (e.g., tactical behaviors, also with regard to a speed-related perceptual trajectory illusion) and a simulation-based trajectory evaluation in sports-ball engineering. In particular, the underlying stochastic model appears suitable for an in-process, computation-time-efficient optimization of texture styling and panel patterning, which will require wind tunnel measurements of the time- and speed-dependent aerodynamic coefficients.
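A stochastic equation of motion of the kind described in this abstract can be sketched numerically. The fragment below is a hedged illustration, not the thesis model: it integrates Newton's second law with a quadratic drag term and an additive Langevin-type fluctuation force using the Euler-Maruyama scheme, and all parameter values (mass, drag coefficient, noise amplitude) are placeholder assumptions.

```python
import numpy as np

def simulate_flight(v0, h0=2.0, m=0.27, cd=0.5, rho=1.2, area=0.0388,
                    sigma=0.5, dt=1e-3, seed=None):
    """Euler-Maruyama integration of a ball flight with a fluctuating force.

    v0: initial velocity [vx, vy, vz] in m/s; h0: release height in m.
    Returns the sampled positions until the ball reaches the ground.
    """
    rng = np.random.default_rng(seed)
    g = np.array([0.0, 0.0, -9.81])                 # gravity
    r = np.array([0.0, 0.0, h0])
    v = np.asarray(v0, dtype=float).copy()
    path = [r.copy()]
    while r[2] > 0.0:
        # deterministic quadratic drag, opposing the velocity
        drag = -0.5 * rho * cd * area * np.linalg.norm(v) * v / m
        # Langevin-type fluctuation: white-noise force scaled by sqrt(dt)
        noise = sigma * np.sqrt(dt) * rng.standard_normal(3) / m
        v = v + (g + drag) * dt + noise
        r = r + v * dt
        path.append(r.copy())
    return np.array(path)
```

Repeating the simulation with different seeds yields an ensemble of trajectories whose spread of landing points is the kind of quantity the uncertainty-quantification methods above (confidence intervals, landing-point variance) operate on.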

    Détection automatique de chutes de personnes basée sur des descripteurs spatio-temporels (définition de la méthode, évaluation des performances et implantation temps-réel)

    We propose a supervised approach to real-time fall detection that is robust to changes of viewpoint and environment. First, we made publicly available a realistic video dataset (DSFD), recorded in four different locations, containing a large number of manual annotations suitable for method comparison. We also defined a new evaluation metric, adapted to real-time constraints, that allows fall detection performance to be evaluated in a continuous video stream while accounting for the nature of the stream and the duration of a fall. Then, we built spatio-temporal descriptors named STHF, computed from the geometrical attributes of the moving shape in the scene and their transformations, and derived an optimised fall descriptor through an automatic feature selection step. Robustness to environment changes was evaluated using SVM and Boosting classifiers, and performance is further improved by updating the training with fall-free videos recorded in the target environment. Finally, we implemented the detector on an embedded system comparable to a smart camera, based on a Zynq SoC. An Algorithm-Architecture Adequacy approach yielded a good trade-off between classification performance and processing time.

    The SURE-LET approach to image denoising

    Denoising is an essential step prior to any higher-level image-processing task such as segmentation or object tracking, because the undesirable corruption by noise is inherent to any physical acquisition device. When the measurements are performed by photosensors, one usually distinguishes between two main regimes: in the first scenario, the measured intensities are sufficiently high and the noise is assumed to be signal-independent; in the second scenario, only a few photons are detected, which leads to a strong signal-dependent degradation. When the noise is considered signal-independent, it is often modeled as an additive independent (typically Gaussian) random variable, whereas otherwise the measurements are commonly assumed to follow independent Poisson laws, whose underlying intensities are the unknown noise-free measurements. We first consider the reduction of additive white Gaussian noise (AWGN). Contrary to most existing denoising algorithms, our approach does not require an explicit prior statistical model of the unknown data. Our driving principle is the minimization of a purely data-adaptive unbiased estimate of the mean-squared error (MSE) between the processed and the noise-free data. In the AWGN case, such an MSE estimate was first proposed by Stein and is known as "Stein's unbiased risk estimate" (SURE). We further develop the original SURE theory and propose a general methodology for fast and efficient multidimensional image denoising, which we call the SURE-LET approach.
    While SURE allows the quantitative monitoring of the denoising quality, the flexibility and the low computational complexity of our approach are ensured by a linear parameterization of the denoising process, expressed as a linear expansion of thresholds (LET). We propose several pointwise, multivariate, and multichannel thresholding functions applied to arbitrary (in particular, redundant) linear transformations of the input data, with a special focus on multiscale signal representations. We then transpose the SURE-LET approach to the estimation of Poisson intensities degraded by AWGN. The signal-dependent specificity of the Poisson statistics leads to the derivation of a new unbiased MSE estimate that we call "Poisson's unbiased risk estimate" (PURE) and that requires more adaptive transform-domain thresholding rules. In a general PURE-LET framework, we first devise a fast interscale thresholding method restricted to the use of the (unnormalized) Haar wavelet transform. We then lift this restriction and show how the PURE-LET strategy can be used to design and optimize a wide class of nonlinear processing applied in an arbitrary (in particular, redundant) transform domain. We finally apply some of the proposed denoising algorithms to real multidimensional fluorescence microscopy images. Such in vivo imaging often operates under low illumination and short exposure times; consequently, the random fluctuations of the measured fluorophore radiation are well described by a Poisson process degraded (or not) by AWGN. We validate this statistical measurement model experimentally, and we assess the performance of the PURE-LET algorithms in comparison with state-of-the-art denoising methods. Our solution turns out to be very competitive both qualitatively and computationally, allowing fast and efficient denoising of the huge volumes of data that are nowadays routinely produced in biomedical imaging.
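The driving principle, minimizing Stein's unbiased risk estimate instead of the unobservable MSE, can be demonstrated in the simplest setting: soft-thresholding of coefficients corrupted by unit-variance AWGN. The sketch below uses the classical SURE formula for the soft threshold (as in Donoho and Johnstone's SureShrink), not the SURE-LET parameterization itself; the function names are illustrative.

```python
import numpy as np

def sure_soft(y, t, sigma=1.0):
    """SURE of the MSE incurred by soft-thresholding y at threshold t,
    for y = x + Gaussian noise of standard deviation sigma."""
    clipped = np.minimum(np.abs(y), t)
    return (y.size * sigma**2
            - 2.0 * sigma**2 * np.count_nonzero(np.abs(y) <= t)
            + np.sum(clipped**2))

def soft(y, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def best_threshold(y, sigma=1.0):
    """Pick the threshold minimizing SURE; candidates are the data magnitudes."""
    candidates = np.unique(np.abs(y))
    risks = [sure_soft(y, t, sigma) for t in candidates]
    return candidates[int(np.argmin(risks))]
```

Because the expectation of SURE equals the true MSE, the minimization needs only the noisy data; the SURE-LET approach exploits the same property, but with a linear expansion of thresholds whose optimal weights follow from solving a small linear system.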

    A highly adaptable model-based method for colour image interpretation

    This thesis presents a model-based interpretation of images that can vary greatly in appearance. Rather than seeking characteristic landmarks, we model objects with a smooth boundary by sampling points at regular intervals along the boundary. A statistical model of form is created in the exponent domain of an extended superellipse from the sampled points, and a model of appearance by sampling inside objects. A colour Maximum Likelihood Ratio (MLR) criterion was used to detect cues to the location of potential pedestrians. The adaptability and specificity of this cue detector were evaluated using over 700 images, yielding a True Positive Rate (TPR) of 0.95 and a False Positive Rate (FPR) of 0.20. To detect objects with axes at various orientations, a variant method using an interpolated colour MLR was developed; it had a TPR of 0.94 and an FPR of 0.21 when tested on over 700 images of pedestrians. Interpretation was evaluated using over 220 video sequences (640 x 480 pixels per frame) and 1000 images of people alone and people associated with other objects. The objective was not so much to evaluate pedestrian detection as the precision and reliability of object delineation. More than 94% of pedestrians were correctly interpreted.
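A likelihood-ratio colour cue of the kind described above can be illustrated with a per-pixel ratio between two Gaussian colour models, one for the object class and one for the background. This is a generic sketch under assumed Gaussian class models; the thesis's actual MLR criterion and its interpolated variant may differ.

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a multivariate Gaussian colour model; regularize the covariance."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mu, cov

def log_pdf(x, mu, cov):
    """Log-density of a multivariate Gaussian at colour vector x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + mu.size * np.log(2 * np.pi))

def mlr_scores(pixels, fg, bg):
    """Log likelihood ratio per pixel: positive favours the foreground model."""
    return np.array([log_pdf(p, *fg) - log_pdf(p, *bg) for p in pixels])
```

Thresholding the scores (at zero, or at a value tuned on validation data) gives the cue mask; a TPR/FPR trade-off like the one reported above corresponds to sweeping that threshold.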

    Trajectory analysis using point distribution models: algorithms, performance evaluation, and experimental validation using mobile robots

    This thesis focuses on the analysis of the trajectories of a mobile agent. It presents different techniques to acquire a quantitative measure of the difference between two trajectories or two trajectory datasets. A novel approach is presented here, based on the Point Distribution Model (PDM). This model was developed by computer vision scientists to compare deformable shapes. This thesis presents the mathematical reformulation of the PDM to fit spatiotemporal data, such as trajectory information. The behavior of a mobile agent can rarely be represented by a unique trajectory, as its stochastic component will not be taken into account. Thus, the PDM focuses on the comparison of trajectory datasets. If the difference between datasets is greater than the variation within each dataset, it will be observable in the first few dimensions of the PDM. Moreover, this difference can also be quantified using the inter-cluster distance defined in this thesis. The resulting measure is much more efficient than visual comparisons of trajectories, as are often made in existing scientific literature. This thesis also compares the PDM with standard techniques, such as statistical tests, Hidden Markov Models (HMMs) or Correlated Random Walk (CRW) models. As a PDM is a linear transformation of space, it is much simpler to comprehend. Moreover, spatial representations of the deformation modes can easily be constructed in order to make the model more intuitive. This thesis also presents the limits of the PDM and offers other solutions when it is not adequate. From the different results obtained, it can be pointed out that no universal solution exists for the analysis of trajectories, however, solutions were found and described for all of the problems presented in this thesis. As the PDM requires that all the trajectories consist of the same number of points, techniques of resampling were studied. 
    The main solution was developed for trajectories generated on a track, such as the trajectory of a car on a road or of a pedestrian in a hallway. The different resampling techniques presented in this thesis provide solutions for all the experimental setups studied and can easily be modified to fit other scenarios. It is, however, very important to understand how they work and to tune their parameters according to the characteristics of the experimental setup. The main principle of this thesis is that analysis techniques and data representations must be selected appropriately with respect to the fundamental goal. Even a simple tool such as the t-test can occasionally be sufficient to measure trajectory differences. However, if no dissimilarity can be observed, it does not necessarily mean that the trajectories are equal; it merely indicates that the analyzed feature is similar. Alternatively, other more complex methods could be used to highlight differences. Ultimately, two trajectories are equal if and only if they consist of exactly the same sequence of points; otherwise, a difference can always be found. Thus, it is important to know which trajectory features have to be compared. Finally, the diverse techniques used in this thesis offer a complete methodology for analyzing trajectories.
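A common way to meet the PDM's equal-length requirement for track-like trajectories is arc-length resampling: every trajectory is re-sampled at points equally spaced along its own path. The helper below is a generic illustration of this idea (linear interpolation along cumulative arc length), not necessarily the exact scheme used in the thesis.

```python
import numpy as np

def resample_by_arclength(traj, n):
    """Resample a 2-D trajectory (shape (T, 2)) to n points equally
    spaced along its own path length."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)                  # equally spaced stations
    x = np.interp(targets, s, traj[:, 0])
    y = np.interp(targets, s, traj[:, 1])
    return np.column_stack([x, y])
```

After this step, every trajectory contributes a vector of the same dimension to the PDM, regardless of how densely or unevenly the tracker originally sampled it.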

    Electronic Letters on Computer Vision and Image Analysis 5(3):148-156, 2005 Trajectory Analysis for Sport and Video Surveillance
