236 research outputs found

    Stochastic uncertainty models for the luminance consistency assumption

    In this paper, a stochastic formulation of the brightness consistency assumption used in many computer vision problems involving dynamic scenes (e.g., motion estimation or point tracking) is proposed. Usually, this model, which assumes that the luminance of a point is constant along its trajectory, is expressed in differential form through the total derivative of the luminance function. This differential equation linearly relates the point velocity to the spatial and temporal gradients of the luminance function. However, when dealing with images, the available information holds only at discrete times and on a discrete grid. In this paper we formalize the image luminance as a continuous function transported by a flow known only up to some uncertainties related to this discretization process. Relying on stochastic calculus, we define a formulation of luminance preservation in which these uncertainties are taken into account. Within this framework, it can be shown that the usual deterministic optical flow constraint equation corresponds to our stochastic evolution under some strong constraints. These constraints can be relaxed by imposing a weaker temporal assumption on the luminance function and by introducing anisotropic intensity-based uncertainties. We additionally show that these uncertainties can be computed at each point of the image grid from the image data, and hence provide meaningful information on the reliability of the motion estimates. To demonstrate the benefit of such a stochastic formulation of the brightness consistency assumption, we have considered a local least-squares motion estimator relying on this new constraint. This new motion estimator significantly improves the quality of the results.
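To fix ideas, the deterministic optical flow constraint the abstract refers to, Ix·u + Iy·v + It = 0, can be solved locally in the least-squares sense over a small window, as in a classical Lucas-Kanade estimator. The following is a generic NumPy sketch of that baseline, not the paper's stochastic estimator; the window gradients here are synthetic:

```python
import numpy as np

def local_lk_flow(Ix, Iy, It):
    """Solve the optical flow constraint Ix*u + Iy*v + It = 0 in the
    least-squares sense over a window of gradient samples.
    Ix, Iy, It: 1-D arrays of spatial/temporal gradients in the window."""
    A = np.stack([Ix, Iy], axis=1)        # N x 2 design matrix
    b = -It                               # right-hand side
    # Normal equations: (A^T A) w = A^T b, with w = (u, v)
    u, v = np.linalg.solve(A.T @ A, A.T @ b)
    return u, v

# Synthetic window whose gradients obey the constraint exactly
# for a translation of velocity (1.0, -0.5):
rng = np.random.default_rng(0)
Ix = rng.normal(size=25)
Iy = rng.normal(size=25)
It = -(Ix * 1.0 + Iy * (-0.5))
u, v = local_lk_flow(Ix, Iy, It)
print(round(u, 3), round(v, 3))           # recovers (1.0, -0.5)
```

The stochastic formulation of the paper replaces this hard constraint with one carrying intensity-based uncertainties, which then weight the local estimate.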

    Data-Driven Animation of Crowds

    In this paper we propose an original method to animate a crowd of virtual beings in a virtual environment. Instead of relying on models to describe the motion of people over time, we suggest using a priori knowledge of the crowd dynamics acquired from videos of real crowd situations. In our method this information is expressed as a time-varying motion field which accounts for a continuous flow of people over time. This motion descriptor is obtained through optical flow estimation with a specific second-order regularization. The obtained motion fields are then used in a classical fixed-step-size integration scheme that animates a virtual crowd in real time. The power of our technique is demonstrated through various examples, and possible follow-ups to this work are also described.
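The fixed-step integration scheme mentioned above amounts to advecting each agent through the estimated motion field. A minimal sketch, assuming a regular grid field sampled bilinearly and an explicit Euler step (the function names are illustrative, not from the paper):

```python
import numpy as np

def sample_bilinear(field, x, y):
    """Bilinearly sample a velocity field of shape (H, W, 2) at (x, y)."""
    h, w = field.shape[:2]
    x0 = int(np.clip(np.floor(x), 0, w - 2))
    y0 = int(np.clip(np.floor(y), 0, h - 2))
    fx, fy = x - x0, y - y0
    return (field[y0, x0]         * (1 - fx) * (1 - fy) +
            field[y0, x0 + 1]     * fx       * (1 - fy) +
            field[y0 + 1, x0]     * (1 - fx) * fy +
            field[y0 + 1, x0 + 1] * fx       * fy)

def advect(agents, field, dt):
    """One fixed-step (explicit Euler) update of agent positions."""
    return np.array([pos + dt * sample_bilinear(field, *pos) for pos in agents])

# Uniform rightward flow: every agent moves +1 in x per unit time.
field = np.zeros((8, 8, 2))
field[..., 0] = 1.0
agents = np.array([[2.0, 3.0], [4.5, 1.5]])
print(advect(agents, field, dt=0.5))      # each agent advances 0.5 in x
```

In the actual method the field is time-varying, so the sampled field would be re-estimated or interpolated at each animation step.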


    Optimal crowd editing

    Simulating realistic crowd behaviors is a challenging problem in computer graphics. Yet several satisfying simulation models exhibiting natural pedestrian or group emergent behaviors exist. Choosing among these models generally depends on the considered crowd density or the topology of the environment. Conversely, achieving a user-desired kinematic or dynamic pattern at a given instant of the simulation turns out to be much more tedious. In this paper, a novel generic control methodology is proposed to solve this crowd editing issue. Our method relies on an adjoint formulation of the underlying optimization procedure. It is, to a certain extent, independent of the choice of the simulation model, and is designed to handle several forms of constraints. A variety of examples attesting to the benefits of our approach are presented, along with quantitative performance measures.

    AGORASET: a dataset for crowd video analysis

    The availability of efficient computer vision tools (detection of pedestrians, tracking, ...) as well as advanced rendering techniques has enabled both the analysis of crowd phenomena and the simulation of realistic scenarios. A recurrent problem lies in the evaluation of these methods, since few common benchmarks are available to compare and evaluate the techniques. This paper proposes a dataset of crowd sequences with associated ground truths (individual trajectories of each pedestrian inside the crowd, and related continuous quantities of the scene such as the density and the velocity field). We chose to rely on realistic image synthesis to achieve our goal. As contributions of this paper, a typology of sequences relevant to the computer vision analysis problem is proposed, along with images of sequences available in the database.

    Interpolation of missing data in multimodal sequences of geophysical satellite images

    This article studies the joint estimation of missing data and displacement fields in multimodal sequences of geophysical satellite observations. The difficulty of the task stems from the high rate of missing data (between 20% and 90%) in daily high-resolution observations, and from the need to reconstruct fine structures consistent with the underlying dynamics. We have developed a method based on variational data assimilation for multimodal, multi-resolution series. Using synthetic data and real ocean-surface data, a numerical and qualitative evaluation demonstrates the contribution of two key components of the proposed model: the fusion of multimodal information through a geometric constraint based on frontal structures, and the variational assimilation method using an advection-diffusion model as dynamical prior. The experiments conducted show that good reconstruction performance is obtained for high-resolution observations despite the high percentage of missing data.

    Continuous Control of Lagrangian Data

    This paper addresses the challenging problem of globally controlling several (and possibly independent) moving agents that together form a whole, generally called a swarm, which may display interesting properties. Applications are numerous and relate either to robotics or to computer animation. Assuming the agents are driven by their own dynamics (acting, for instance, as Newtonian particles), controlling this swarm is known as the particle swarm control problem. In this paper, the theory of an original approach to solve this issue is presented, in which we rely on centralized control rather than on designing individual and simple rules for the agents. To that end, we propose a framework to control several particles with constraints either expressed on a per-particle basis or expressed as a function of their environment. We refer to these two categories as Lagrangian and Eulerian constraints, respectively.
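As a toy illustration of the Lagrangian (per-particle) case, Newtonian particles can be driven toward per-particle targets by a centralized controller. The sketch below uses a simple PD force law, which is an assumption for illustration, not the paper's adjoint-based optimization:

```python
import numpy as np

def pd_control_step(pos, vel, target, kp=4.0, kd=2.0, dt=0.05):
    """One step of a centralized PD controller driving unit-mass
    Newtonian particles toward per-particle targets:
    F = kp * (target - pos) - kd * vel, integrated semi-implicitly."""
    force = kp * (target - pos) - kd * vel
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

pos = np.array([[0.0, 0.0], [1.0, 1.0]])
vel = np.zeros_like(pos)
target = np.array([[1.0, 0.0], [0.0, 1.0]])
for _ in range(200):                      # simulate 10 s of control
    pos, vel = pd_control_step(pos, vel, target)
print(np.round(pos, 2))                   # particles settle on their targets
```

Eulerian constraints would instead penalize field quantities of the whole swarm (e.g., density over a region) rather than individual positions.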

    Fluid flow estimation with multiscale ensemble filters based on motion measurements under location uncertainty

    This paper proposes a novel multiscale fluid flow data assimilation approach which integrates and complements the advantages of a Bayesian sequential assimilation technique, the Weighted Ensemble Kalman filter (WEnKF). The data assimilation proposed in this work incorporates measurements provided by an efficient multiscale stochastic formulation of the well-known Lucas-Kanade (LK) estimator. This estimator has the great advantage of providing uncertainties associated with the motion measurements at different scales. The proposed assimilation scheme benefits from this multiscale uncertainty information and enforces a physically plausible dynamical consistency of the estimated motion fields along the image sequence. Experimental evaluations are presented on synthetic and real fluid flow sequences.
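For readers unfamiliar with ensemble Kalman filtering, the analysis step at the core of such schemes can be sketched in a few lines. This is the generic stochastic EnKF update on a 2-D toy state, not the weighted, multiscale variant of the paper:

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """Stochastic Ensemble Kalman filter analysis step.
    ensemble: (N, d) state samples; obs: (m,) observation;
    H: (m, d) observation operator; R: (m, m) observation covariance."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # ensemble anomalies
    P = X.T @ X / (N - 1)                             # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    # Perturbed observations, one per member, drawn with covariance R:
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(0.0, 1.0, size=(500, 2))             # prior ~ N(0, I)
H = np.eye(2)
R = 0.1 * np.eye(2)
obs = np.array([1.0, -1.0])
post = enkf_update(ens, obs, H, R, rng)
print(np.round(post.mean(axis=0), 2))                 # pulled toward the observation
```

In the paper's setting, the observations are the multiscale LK motion measurements, and R would carry the scale-dependent motion uncertainties rather than a fixed diagonal value.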

    Change detection needs change information: improving deep 3D point cloud change detection

    Change detection is an important task that rapidly identifies modified areas, particularly when multi-temporal data are concerned. In landscapes with complex geometry (e.g., urban environments), vertical information is a very useful source of knowledge that highlights changes and classifies them into different categories. In this study, we focus on change segmentation directly using raw three-dimensional (3D) point clouds (PCs), to avoid any information loss due to rasterization processes. While deep learning has recently proven its effectiveness for this particular task by encoding the information through Siamese networks, we investigate herein the idea of also using change information in the early steps of deep networks. To do this, we first propose to provide a Siamese KPConv state-of-the-art (SoTA) network with hand-crafted features, especially a change-related one, which improves the mean Intersection over Union (IoU) over the classes of change by 4.70%. Considering that a major improvement is obtained thanks to the change-related feature, we then propose three new architectures to address 3D PC change segmentation: OneConvFusion, Triplet KPConv, and Encoder Fusion SiamKPConv. All these networks consider the change information in the early steps and outperform the SoTA methods. In particular, Encoder Fusion SiamKPConv surpasses the SoTA approaches by more than 5% of the mean IoU over the classes of change, emphasizing the value of having the network focus on change information for the change detection task. The code is available at https://github.com/IdeGelis/torch-points3d-SiamKPConvVariants.
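The "mean IoU over the classes of change" metric used above is the per-class Intersection over Union averaged over change classes only, excluding the unchanged class. A minimal sketch with illustrative labels (class 0 standing in for "unchanged"):

```python
import numpy as np

def mean_iou(pred, gt, change_classes):
    """Mean Intersection-over-Union restricted to the given class labels,
    as used to score change segmentation (the unchanged class is excluded)."""
    ious = []
    for c in change_classes:
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([0, 1, 1, 2, 2, 2, 0, 1])
pred = np.array([0, 1, 2, 2, 2, 2, 0, 1])
# Class 1: IoU = 2/3; class 2: IoU = 3/4; mean = 17/24 ≈ 0.708
print(mean_iou(pred, gt, change_classes=[1, 2]))
```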