
    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility: an image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate, which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is a direct line of sight from its center of projection (i.e. the eye); therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible; it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project to a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, visibility parameter domains, and performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to converge quickly to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulation of the sampling rate over the image plane according to an input importance map; an animated depth image that stores not only color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that seamlessly integrates multiple viewpoints into a multiperspective image without the viewpoint transition distortion artifacts of prior methods.
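    To make the paradigm's first element concrete, the following is a minimal Python sketch of the sampling-pattern-generalization idea, not the dissertation's actual algorithm; the (bbox, footprint_area) primitive representation and all names are assumptions made for illustration. An image-plane tile is refined recursively wherever an overlapping primitive's projected footprint is smaller than the tile itself, so small primitives still receive a sampling location.

        def generalize_sampling(tile, primitives, min_size=1e-3):
            """Recursively refine an image-plane tile until no overlapping
            primitive has a projected footprint smaller than the tile, then
            emit one sample location per leaf tile.

            tile       -- (x0, y0, x1, y1) rectangle in image coordinates
            primitives -- list of (bbox, footprint_area) pairs, where bbox is
                          the primitive's projected bounding box (hypothetical
                          precomputed data, assumed for this sketch)
            """
            x0, y0, x1, y1 = tile
            tile_area = (x1 - x0) * (y1 - y0)
            # Primitives whose projected bounding box overlaps this tile.
            overlapping = [(b, a) for b, a in primitives
                           if b[0] < x1 and b[2] > x0 and b[1] < y1 and b[3] > y0]
            # Refine while some visible footprint could slip between samples.
            if any(a < tile_area for _, a in overlapping) and tile_area > min_size ** 2:
                xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
                samples = []
                for sub in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                            (x0, ym, xm, y1), (xm, ym, x1, y1)):
                    samples += generalize_sampling(sub, overlapping, min_size)
                return samples
            # The uniform rate suffices here: one sample at the tile centre.
            return [((x0 + x1) / 2, (y0 + y1) / 2)]

    A conventional image corresponds to stopping at a fixed tile size; the refinement criterion is what lets the sampling pattern adapt to primitives with small screen footprints.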

    Geometric uncertainty models for correspondence problems in digital image processing

    Many recent advances in technology rely heavily on the correct interpretation of an enormous amount of visual information. All available sources of visual data (e.g. cameras in surveillance networks, smartphones, game consoles) must be adequately processed to retrieve the most interesting user information. Computer vision and image processing techniques therefore attract significant interest at the moment, and will continue to do so in the near future. Most commonly applied image processing algorithms require a reliable solution for correspondence problems. The solution involves, first, the localization of corresponding points (points depicting the same 3D point in the observed scene) in the different images from distinct sources, and second, the computation of consistent geometric transformations relating correspondences on scene objects.

    This PhD thesis presents a theoretical framework for solving correspondence problems with geometric features (such as points and straight lines) representing rigid objects in image sequences of complex scenes with static and dynamic cameras. The research focuses on localization uncertainty due to errors in feature detection and measurement, and on its effect on each step in the solution of a correspondence problem. Whereas most other recent methods apply statistical models for spatial localization uncertainty, this work takes a novel geometric approach: localization uncertainty is modeled as a convex polygonal region in the image space. This model can be propagated efficiently throughout the correspondence-finding procedure. It allows for an easy extension toward transformation uncertainty models, and for inferring confidence measures to verify the reliability of the outcome of the correspondence framework.

    The procedure aims at finding reliable consistent transformations in sets of few and ill-localized features, possibly containing a large fraction of false candidate correspondences. The evaluation of the proposed procedure on practical correspondence problems shows that correct consistent correspondence sets are returned in over 95% of the experiments for small sets of 10-40 features contaminated with up to 400% false positives and 40% false negatives. The presented techniques prove beneficial in typical image processing applications, such as image registration and rigid object tracking.
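    As a minimal sketch of the convex polygonal uncertainty model described above (function names are illustrative, not from the thesis): an affine image of a convex polygon is again a convex polygon, so propagating an uncertainty region through an estimated transformation reduces to mapping its vertices, and the combined uncertainty of two regions can be bounded by their Minkowski sum.

        import numpy as np
        from scipy.spatial import ConvexHull

        def propagate_polygon(vertices, A, t):
            """Propagate a convex polygonal uncertainty region (n x 2 vertex
            array) through the affine map x -> A @ x + t; mapping the
            vertices is exact because affine maps preserve convexity."""
            mapped = vertices @ A.T + t
            hull = ConvexHull(mapped)  # reorders vertices, drops collinear ones
            return mapped[hull.vertices]

        def minkowski_sum(P, Q):
            """Bound the combined localization uncertainty of two features by
            the Minkowski sum of their polygons, i.e. the convex hull of all
            pairwise vertex sums."""
            sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
            return sums[ConvexHull(sums).vertices]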

    Structured Variable Selection with Sparsity-Inducing Norms

    We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual ℓ1-norm and the group ℓ1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problems. We first explore the relationship between the groups defining the norm and the resulting nonzero patterns, providing both forward and backward algorithms to go back and forth from groups to patterns. This allows the design of norms adapted to specific prior knowledge expressed in terms of nonzero patterns. We also present an efficient active set algorithm, and analyze the consistency of variable selection for least-squares linear regression in low- and high-dimensional settings.
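    A minimal sketch of the norm under study, with an illustrative overlapping group structure (the particular groups and weights are example inputs, not from the paper):

        import numpy as np

        def structured_norm(w, groups, weights=None):
            """Overlapping-group norm  Omega(w) = sum_g d_g * ||w[g]||_2."""
            if weights is None:
                weights = np.ones(len(groups))
            return sum(d * np.linalg.norm(w[np.asarray(g)])
                       for d, g in zip(weights, groups))

        # Example: three overlapping groups over four variables.
        w = np.array([0.5, -1.0, 0.0, 2.0])
        groups = [[0, 1], [1, 2], [2, 3]]  # overlaps at variables 1 and 2
        print(structured_norm(w, groups))  # ~4.118

    With singleton groups this reduces to the ℓ1-norm, and with a partition of the variables to the group ℓ1-norm; allowing the groups to overlap is what produces the richer set of allowed nonzero patterns discussed above.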

    Collection of abstracts of the 24th European Workshop on Computational Geometry

    The 24th European Workshop on Computational Geometry (EuroCG'08) was held at INRIA Nancy - Grand Est & LORIA on March 18-20, 2008. The present collection of abstracts contains the 63 scientific contributions as well as the three invited talks presented at the workshop.

    Discrete Differential Geometry

    This is the collection of extended abstracts for the 26 lectures and the open problem session at the fourth Oberwolfach workshop on Discrete Differential Geometry.

    16th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2018, June 18-20, 2018, Malmö University, Malmö, Sweden


    Optimisation-based verification process of obstacle avoidance systems for unmanned vehicles

    This thesis deals with safety verification analysis of collision avoidance systems for unmanned vehicles. The safety of the vehicle depends on the collision avoidance algorithms and the associated control laws, and it must be proven that they function correctly in all nominal conditions, in various failure conditions, and in the presence of possible variations in the vehicle and operational environment. The currently widely used exhaustive-search-based approaches are not suitable for safety analysis of autonomous vehicles, due to the large number of possible variations and the complexity of the algorithms and systems. To address this topic, a new optimisation-based verification method is developed to verify the safety of collision avoidance systems.

    The proposed verification method formulates the worst-case analysis problem arising in the verification of collision avoidance systems as an optimisation problem and employs optimisation algorithms to automatically search for the worst cases. The minimum distance to the obstacle during the collision avoidance manoeuvre is defined as the objective function of the optimisation problem, and a realistic simulation consisting of the detailed vehicle dynamics, the operational environment, the collision avoidance algorithm, and the low-level control laws is embedded in the optimisation process. This enables the verification process to take into account parameter variations in the vehicle, changes in the environment, uncertainties in the sensors, and in particular the mismatch between the model used to develop the collision avoidance algorithms and the real vehicle. It is shown that the resulting simulation-based optimisation problem is non-convex and may have many local optima.

    To illustrate and investigate the proposed optimisation-based verification process, the potential field method and a decision-making collision avoidance method are chosen as candidate obstacle avoidance techniques for the verification study. Five benchmark case studies are investigated in this thesis: static obstacle avoidance by a simple unicycle robot, moving obstacle avoidance by a Pioneer 3DX robot, and a six-degrees-of-freedom fixed-wing Unmanned Aerial Vehicle with static and moving collision avoidance algorithms. It is proven that although a local optimisation method for nonlinear optimisation is quite efficient, it is not able to find the most dangerous situation. Results in this thesis show that, among all the global optimisation methods investigated, the DIviding RECTangles (DIRECT) method provides the most promising performance for verification of collision avoidance functions, in terms of its guaranteed capability to search for worst-case scenarios.
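    A minimal sketch of the verification loop under stated assumptions: simulate_min_distance is a hypothetical stand-in for the embedded closed-loop simulation (vehicle dynamics, environment, avoidance algorithm, low-level control), replaced here by a toy analytic surrogate, and the global search uses SciPy's DIRECT implementation (scipy.optimize.direct, available in SciPy 1.8 and later) rather than the thesis's own tooling.

        import numpy as np
        from scipy.optimize import direct

        def simulate_min_distance(params):
            """Stand-in for the closed-loop simulation: run the vehicle and
            collision avoidance controller under the given uncertain
            parameters and return the minimum distance to the obstacle
            over the manoeuvre (toy surrogate for illustration)."""
            wind, mass_err, sensor_bias = params
            return 2.0 + np.sin(3 * wind) * np.cos(2 * mass_err) - 0.5 * abs(sensor_bias)

        # The worst case is the parameter combination minimising the clearance.
        bounds = [(-1.0, 1.0),   # wind disturbance
                  (-0.2, 0.2),   # vehicle mass mismatch
                  (-0.5, 0.5)]   # sensor bias
        result = direct(simulate_min_distance, bounds, maxfun=2000)
        print("worst-case clearance:", result.fun, "at", result.x)

    Because the optimiser minimises the objective, the minimum clearance itself serves as the objective function; the returned parameter vector is the most dangerous combination found within the bounds, and a clearance below the safety threshold flags a verification failure.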

    Sixth Biennial Report: August 2001 - May 2003
