13 research outputs found

    Nonlinear Control and Estimation Techniques with Applications to Vision-based and Biomedical Systems

    This dissertation is divided into four self-contained chapters. In Chapter 1, a new estimator using a single calibrated camera mounted on a moving platform is developed to asymptotically recover the range and the three-dimensional (3D) Euclidean position of a static object feature. The estimator also recovers the constant 3D Euclidean coordinates of the feature relative to the world frame as a byproduct. The position and orientation of the camera are assumed to be measurable, unlike existing observers, which assume velocity measurements are available. To estimate the unknown range variable, an adaptive least squares estimation strategy is employed based on a novel prediction error formulation. A Lyapunov stability analysis is used to prove the convergence properties of the estimator. The developed estimator has a simple mathematical structure and can be used to identify the range and 3D Euclidean coordinates of multiple features. These properties make the estimator suitable for use with robot navigation algorithms, where position measurements are readily available. Numerical simulation results along with experimental results are presented to illustrate the effectiveness of the proposed algorithm. In Chapter 2, a novel Euclidean position estimation technique using a single uncalibrated camera mounted on a moving platform is developed to asymptotically recover the three-dimensional (3D) Euclidean position of static object features. The position of the moving platform is assumed to be measurable, and a second object with known 3D Euclidean coordinates relative to the world frame is assumed to be available a priori. To account for the unknown camera calibration parameters and to estimate the unknown 3D Euclidean coordinates, an adaptive least squares estimation strategy is employed based on prediction error formulations and a Lyapunov-type stability analysis.
The developed estimator is shown to recover the 3D Euclidean position of the unknown object features despite the lack of knowledge of the camera calibration parameters. Numerical simulation results along with experimental results are presented to illustrate the effectiveness of the proposed algorithm. In Chapter 3, a new range identification technique for a calibrated paracatadioptric system mounted on a moving platform is developed to recover the range information and the three-dimensional (3D) Euclidean coordinates of a static object feature. The position of the moving platform is assumed to be measurable. To identify the unknown range, first, a function of the projected pixel coordinates is related to the unknown 3D Euclidean coordinates of an object feature. This function is nonlinearly parameterized (i.e., the unknown parameters appear nonlinearly in the parameterized model). An adaptive estimator based on a min-max algorithm is then designed to estimate the unknown 3D Euclidean coordinates of an object feature relative to a fixed reference frame which facilitates the identification of range. A Lyapunov-type stability analysis is used to show that the developed estimator provides an estimation of the unknown parameters within a desired precision. Numerical simulation results are presented to illustrate the effectiveness of the proposed range estimation technique. In Chapter 4, optimization of antiangiogenic therapy for tumor management is considered as a nonlinear control problem. A new technique is developed to optimize antiangiogenic therapy which minimizes the volume of a tumor and prevents it from growing using an optimum drug dose. To this end, an optimum desired trajectory is designed to minimize a performance index. Two controllers are then presented that drive the tumor volume to its optimum value. The first controller is proven to yield exponential results given exact model knowledge. 
The second controller is developed under the assumption of parametric uncertainties in the system model. A least-squares estimation strategy based on a prediction error formulation and a Lyapunov-type stability analysis is developed to estimate the unknown parameters of the performance index. An adaptive controller is then designed to track the desired optimum trajectory. The proposed tumor minimization scheme is shown to minimize the tumor volume with an optimum drug dose despite the lack of knowledge of system parameters. Numerical simulation results are presented to illustrate the effectiveness of the proposed technique. An extension of the developed technique to a mathematical model that accounts for pharmacodynamics and pharmacokinetics is also presented. Furthermore, a technique for estimating the carrying capacity of endothelial cells is presented.
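The adaptive least-squares strategy that recurs throughout these chapters is not reproduced above. As a minimal illustration of the general idea, a scalar recursive least-squares estimator driven by a prediction error might look as follows; the function name and the toy inverse-range setup are illustrative assumptions, not the dissertation's model:

```python
import numpy as np

def recursive_least_squares(phis, ys, theta0=0.0, p0=100.0):
    """Scalar RLS for a linearly parameterized model y_k = phi_k * theta.
    Each step corrects the estimate using the prediction error y - phi*theta."""
    theta, P = theta0, p0
    for phi, y in zip(phis, ys):
        K = P * phi / (1.0 + phi * P * phi)   # estimator gain
        theta += K * (y - phi * theta)        # prediction-error correction
        P -= K * phi * P                      # covariance decrease
    return theta

# Toy example: identify an inverse range of 0.5 (a 2 m range)
# from noisy, linearly parameterized measurements.
rng = np.random.default_rng(0)
phis = rng.uniform(0.5, 2.0, 200)
ys = 0.5 * phis + 0.001 * rng.standard_normal(200)
theta_hat = recursive_least_squares(phis, ys)
```

The actual estimators in the dissertation operate on vector signals with Lyapunov-based convergence proofs; this sketch only shows the prediction-error update structure.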

    Predicting Collisions in Mobile Robot Navigation by Kalman Filter

    The growing use of robots in many areas of daily life makes it necessary to search for approaches that improve the efficiency of tasks performed by robots. For that reason, we show, in this chapter, the application of the Kalman filter to the navigation of mobile robots, specifically the time-to-contact (TTC) problem. We present a summary of approaches that have been taken to address the TTC problem. We use a monocular vision-based approach to detect potential obstacles and follow them over time through their apparent size change. Our approach collects obstacle data and models the behavior while the robot is approaching the obstacle, in order to predict collisions. We highlight some characteristics of the Kalman filter applied to our problem. Finally, we show our results on sequences of 210 frames in different real scenarios. The results show fast convergence of the model to the data and a good fit even with noisy measurements.
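The chapter's exact filter design is not given above. As a hedged sketch of the same principle, a constant-velocity Kalman filter on the obstacle's apparent size can predict the TTC as the size divided by its growth rate; all parameter values and the toy data below are our assumptions:

```python
import numpy as np

def kalman_ttc(sizes, dt=1.0 / 30, q=1e-2, r=1.0):
    """Constant-velocity Kalman filter on the apparent obstacle size;
    the TTC estimate is size / growth-rate, in seconds."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe the size only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([sizes[0], 0.0])           # state: [size, size rate]
    P = np.diag([10.0, 1e4])                # rate initially very uncertain
    for z in sizes[1:]:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)           # measurement update
        P = (np.eye(2) - K @ H) @ P
    size, rate = x
    return size / rate if rate > 1e-9 else float("inf")

# Toy data: an obstacle whose apparent size grows by 2 px per frame.
sizes = 10.0 + 2.0 * np.arange(100)
ttc = kalman_ttc(sizes)   # final size ~208 px, rate ~60 px/s
```

A real implementation would also gate outliers and handle track loss; the sketch only shows the predict/update cycle on the size measurement.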

    Real-time endoscopic mosaicking

    With the advancement of minimally invasive techniques for surgical and diagnostic procedures, there is a growing need for methods that improve the visualization of internal body structures. Video mosaicking is one such method: it provides a broader field of view of the scene by stitching together the images in a video sequence. Of particular importance is the need for online processing to provide real-time feedback and visualization for image-guided surgery and diagnosis. We propose a method for online video mosaicking applied to endoscopic imagery, with examples in microscopic retinal imaging and catadioptric endometrial imaging.
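The paper's pipeline is not detailed above. One core ingredient of any online mosaicking scheme is chaining pairwise image-to-image transforms into a common mosaic frame; a minimal sketch (assuming planar homographies between frames, with function names of our own choosing) is:

```python
import numpy as np

def accumulate_to_mosaic(pairwise_H):
    """Chain pairwise homographies H[k] (mapping frame k+1 -> frame k)
    into global transforms mapping every frame into the reference frame."""
    G = [np.eye(3)]
    for H in pairwise_H:
        Gk = G[-1] @ H
        G.append(Gk / Gk[2, 2])          # keep homographies normalized
    return G

def mosaic_bounds(G, w, h):
    """Axis-aligned bounds of all warped image corners in the mosaic plane."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], float).T
    pts = np.hstack([g @ corners for g in G])
    pts = pts[:2] / pts[2]               # dehomogenize
    return pts.min(axis=1), pts.max(axis=1)

# Toy example: three frames, each shifted 10 px right of the previous one.
H = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
G = accumulate_to_mosaic([H, H])
lo, hi = mosaic_bounds(G, 100, 50)
```

In a real system the pairwise homographies would be estimated online (for example from feature matches), and the bounds would drive incremental canvas allocation and warping.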

    Estimation du Temps Ă  Collision en Vision Catadioptrique

    Time to contact, or time to collision (TTC), is information of the utmost importance for animals as well as for mobile robots, because it enables them to avoid obstacles; it is a convenient way to analyze the surrounding environment. The problem of TTC estimation has been discussed at length for perspective images. Although many works have shown the value of omnidirectional cameras for robotic applications such as localization, motion estimation, and monitoring, few use omnidirectional images to compute the TTC, since perspective methods are not directly applicable and must be adapted to the distortions of images produced by omnidirectional cameras. In this thesis, which addresses TTC estimation for a mobile robot equipped with a catadioptric camera (a type of camera that is very useful in robotics because it provides a panoramic field of view at every instant), we show that the TTC can also be estimated in catadioptric images. We exploit, explicitly and implicitly, the optical flow computed on omnidirectional images, based on a de-rotation strategy, to deduce the TTC between the robot and an obstacle: the double projection of a 3D point, first onto the mirror and then onto the image plane, leads to new formulations of the TTC for catadioptric cameras. The first approach, called "gradient-based TTC", expresses the apparent motion as a function of the TTC and the equation of a planar surface as a function of the image coordinates and the parameters of its normal vector, then substitutes both into the optical flow constraint equation. It is simple and fast, needs no explicit estimation of the optical flow, and additionally provides the inclination of the planar surface; however, it cannot provide a TTC at each pixel, is valid only for para-catadioptric sensors, applies only to planar surfaces, and requires an initial segmentation of the obstacle. The second approach, called "TTC map estimation based on optical flow", estimates the TTC at every pixel of the image directly from the optical flow at that point, yielding a map of collision times and a depth map of the environment for any obstacle in any direction; it is more general, being valid for all central (single-viewpoint) catadioptric sensors and for obstacles of any geometric shape. Results and comparisons on synthetic and real images are given.
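For perspective images, the local TTC relation that the thesis generalizes can be sketched compactly: under pure camera translation, each pixel's TTC is its distance from the focus of expansion (FoE) divided by the radial component of the optical flow there. This sketch is ours and covers the perspective case only; the catadioptric formulations adapt it through the mirror projection:

```python
import numpy as np

def ttc_from_flow(points, flow, foe):
    """Per-point time to contact under pure translation:
    with radial distance r from the FoE, TTC = r / (dr/dt)."""
    rel = points - foe                           # vectors FoE -> point
    r = np.linalg.norm(rel, axis=1)
    # radial component of the optical flow at each point
    dr = np.einsum('ij,ij->i', flow, rel) / r
    return r / dr

# Toy example: a pure expansion field p_dot = p / T gives TTC = T everywhere.
pts = np.array([[10.0, 0.0], [0.0, 20.0], [5.0, 5.0]])
flow = pts / 2.0
ttc = ttc_from_flow(pts, flow, np.zeros(2))
```

A per-pixel application of this relation is what produces a TTC map; robustness in practice requires de-rotating the flow first, as the thesis does.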

    A Full Scale Camera Calibration Technique with Automatic Model Selection – Extension and Validation

    This thesis presents work on the testing and development of a complete camera calibration approach that can be applied to a wide range of cameras equipped with normal, wide-angle, fish-eye, or telephoto lenses. The full-scale calibration approach estimates all of the intrinsic and extrinsic parameters. The calibration procedure is simple, does not require prior knowledge of any parameters, and uses a simple planar calibration pattern. Closed-form estimates for the intrinsic and extrinsic parameters are computed, followed by nonlinear optimization. Polynomial functions are used to describe the lens projection instead of the commonly used radial model, and statistical information criteria are used to automatically determine the complexity of the lens distortion model. In the first stage, experiments were performed to verify and compare the performance of the calibration method on a wide range of lenses. Synthetic data was used to simulate real data and validate the performance, and also to validate the distortion model selection, which uses the Akaike Information Criterion (AIC) to automatically select the complexity of the distortion model. In the second stage, work was done to develop an improved calibration procedure that addresses shortcomings of the previously developed method. Experiments on the previous method revealed that the estimation of the principal point during calibration was erroneous for lenses with a large focal length. To address this issue, the calibration method was modified to include additional steps to accurately estimate the principal point in the initial stages of the calibration procedure. The modified procedure can now be used to calibrate a wide spectrum of imaging systems, including telephoto and varifocal lenses. A survey of current work revealed a vast amount of research concentrating on calibrating only the distortion of the camera.
In these works, researchers propose methods to calibrate only the distortion parameters and suggest using other popular methods to find the remaining camera parameters. Following this methodology, we apply a separate distortion calibration step within our method, decoupling the estimation of the distortion parameters, and we compare the results with those of the original method on a wide range of imaging systems.
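The thesis's distortion models are not reproduced above, but the automatic model-selection idea can be illustrated generically: fit models of increasing complexity and keep the one minimizing an information criterion. The Gaussian-residual AIC form below is a common choice, assumed here rather than taken from the thesis, and the polynomial fit merely stands in for the lens model:

```python
import numpy as np

def select_poly_order(x, y, max_order=8):
    """Pick the polynomial order minimizing AIC = n*log(RSS/n) + 2k,
    where k counts the fitted coefficients."""
    n = len(x)
    best_aic, best_order = np.inf, 0
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, order)
        rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
        aic = n * np.log(rss / n) + 2 * (order + 1)
        if aic < best_aic:
            best_aic, best_order = aic, order
    return best_order

# Toy data generated by a cubic: AIC should not settle below order 3.
x = np.linspace(-1, 1, 200)
rng = np.random.default_rng(1)
y = x**3 - 0.5 * x + 0.01 * rng.standard_normal(200)
order = select_poly_order(x, y)
```

The penalty term is what prevents the selection from always preferring the most complex model, which is exactly the role the information criteria play in the calibration procedure described above.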

    On unifying sparsity and geometry for image-based 3D scene representation

    Demand has emerged for next generation visual technologies that go beyond conventional 2D imaging. Such technologies should capture and communicate all perceptually relevant three-dimensional information about an environment to a distant observer, providing a satisfying, immersive experience. Camera networks offer a low-cost solution to the acquisition of 3D visual information, by capturing multi-view images from different viewpoints. However, the camera's representation of the data is not ideal for common tasks such as data compression or 3D scene analysis, as it does not make the 3D scene geometry explicit. Image-based scene representations fundamentally require a multi-view image model that facilitates extraction of the underlying geometrical relationships between the cameras and scene components. Developing new, efficient multi-view image models is thus one of the major challenges in image-based 3D scene representation methods. This dissertation focuses on defining and exploiting a new method for multi-view image representation, from which the 3D geometry information is easily extractable, and which is additionally highly compressible. The method is based on sparse image representation using an overcomplete dictionary of geometric features, where a single image is represented as a linear combination of a few fundamental image structure features (edges, for example). We construct the dictionary by applying a unitary operator to an analytic function, which introduces a composition of geometric transforms (translations, rotations, and anisotropic scaling) to that function. The advantage of this approach is that the features across multiple views can be related with a single composition of transforms. We then establish a connection between image components and scene geometry by defining the transforms that satisfy the multi-view geometry constraint, and obtain a new geometric multi-view correlation model.
We first address the construction of dictionaries for images acquired by omnidirectional cameras, which are particularly convenient for scene representation due to their wide field of view. Since most omnidirectional images can be uniquely mapped to spherical images, we form a dictionary by applying motions on the sphere, rotations, and anisotropic scaling to a function that lives on the sphere. We have used this dictionary and a sparse approximation algorithm, Matching Pursuit, for compression of omnidirectional images, and additionally for coding 3D objects represented as spherical signals. Both methods offer better rate-distortion performance than state of the art schemes at low bit rates. The novel multi-view representation method and the dictionary on the sphere are then exploited for the design of a distributed coding method for multi-view omnidirectional images. In a distributed scenario, cameras compress acquired images without communicating with each other. Using a reliable model of correlation between views, distributed coding can achieve higher compression ratios than independent compression of each image. However, the lack of a proper model has been an obstacle for distributed coding in camera networks for many years. We propose to use our geometric correlation model for distributed multi-view image coding with side information. The encoder employs a coset coding strategy, developed by dictionary partitioning based on atom shape similarity and multi-view geometry constraints. Our method results in significant rate savings compared to independent coding. An additional contribution of the proposed correlation model is that it gives information about the scene geometry, leading to a new camera pose estimation method using an extremely small amount of data from each camera. Finally, we develop a method for learning stereo visual dictionaries based on the new multi-view image model. 
Although dictionary learning for still images has received a lot of attention recently, dictionary learning for stereo images has been investigated only sparingly. Our method maximizes the likelihood that a set of natural stereo images is efficiently represented with the selected stereo dictionaries, where the multi-view geometry constraint is included in the probabilistic modeling. Experimental results demonstrate that including the geometric constraints in learning leads to stereo dictionaries with better distributed stereo matching and better approximation properties than randomly selected dictionaries. We show that learning dictionaries for optimal scene representation based on the novel correlation model improves camera pose estimation and can be beneficial for distributed coding.
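Matching Pursuit, the sparse approximation algorithm used above, is simple to state: greedily pick the dictionary atom most correlated with the current residual. A minimal sketch over a generic unit-norm dictionary (unrelated to the spherical dictionaries of the dissertation) is:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy sparse approximation: repeatedly select the dictionary
    column (assumed unit norm) most correlated with the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual       # correlation with every atom
        i = np.argmax(np.abs(corr))          # best-matching atom
        coeffs[i] += corr[i]
        residual -= corr[i] * dictionary[:, i]
    return coeffs, residual

# Toy example: with an orthonormal dictionary, two atoms reconstruct
# a 2-sparse signal exactly.
D = np.eye(4)
s = np.array([3.0, 0.0, -2.0, 0.0])
c, r = matching_pursuit(s, D, 2)
```

For the overcomplete geometric dictionaries described above, the correlation step is the expensive part and is typically accelerated with structured transforms.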

    Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems

    We explore low-cost solutions for efficiently improving the 3D pose estimation of a single camera moving in an unfamiliar environment. The visual odometry (VO) task, as it is called when computer vision is used to estimate egomotion, is of particular interest to mobile robots as well as to humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements and motivates the solutions proposed in this thesis. To meet the portability goal with a single off-the-shelf camera, we have taken two approaches. The first, and the most extensively studied here, revolves around an unorthodox camera-mirror configuration (catadioptrics) achieving a stereo omnidirectional system (SOS). The second relies on expanding the visual features from the scene into higher dimensionalities to track the pose of a conventional camera in a photogrammetric fashion. The first goal has many interdependent challenges, which we address as part of this thesis: SOS design, projection model, adequate calibration procedure, and application to VO. We show several practical advantages of the single-camera SOS due to its complete 360-degree stereo views, which other conventional 3D sensors lack because of their limited field of view. Since our omnidirectional stereo (omnistereo) views are captured by a single camera, a truly instantaneous pair of panoramic images is available for 3D perception tasks. Finally, we address the VO problem as a direct multichannel tracking approach, which increases the pose estimation accuracy over the baseline method (i.e., using only grayscale or color information), with photometric error minimization at the heart of the "direct" tracking algorithm.
Currently, this solution has been tested on standard monocular cameras, but it could also be applied to an SOS. We believe the challenges we attempted to solve have not previously been considered with the level of detail needed to successfully perform VO with a single camera, the ultimate goal, in both real-life and simulated scenes.
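The thesis's multichannel direct tracker estimates full 3D pose. As a hedged, one-dimensional sketch of the same principle, the snippet below minimizes the photometric error summed over several channels via Gauss-Newton to recover a sub-pixel shift; the function name, setup, and parameters are all illustrative:

```python
import numpy as np

def direct_align_1d(ref, cur, t0=0.0, iters=20):
    """Estimate a 1-D sub-pixel shift t with cur(x + t) ~= ref(x) by
    Gauss-Newton on the photometric error, summed over all channels."""
    x = np.arange(ref.shape[1], dtype=float)
    t = t0
    for _ in range(iters):
        J, r = [], []
        for c in range(ref.shape[0]):            # loop over channels
            warped = np.interp(x + t, x, cur[c])
            r.append(ref[c] - warped)            # photometric residual
            J.append(-np.gradient(warped))       # d(residual)/dt
        J, r = np.concatenate(J), np.concatenate(r)
        t -= (J @ r) / (J @ J)                   # Gauss-Newton step
    return t

# Toy example: two-channel signals shifted by 2.5 samples.
x = np.arange(100.0)
ref = np.vstack([np.sin(2 * np.pi * x / 100), np.cos(2 * np.pi * x / 100)])
cur = np.vstack([np.sin(2 * np.pi * (x - 2.5) / 100),
                 np.cos(2 * np.pi * (x - 2.5) / 100)])
t = direct_align_1d(ref, cur)
```

Stacking the residuals and Jacobians of all channels before solving is the "multichannel" aspect: extra channels simply add rows to the same least-squares system, which is why the approach extends naturally beyond grayscale.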

    Sliding Mode Control

    The main objective of this monograph is to present a broad range of well-worked-out recent application studies as well as theoretical contributions in the field of sliding mode control system analysis and design. The contributions presented here include new theoretical developments as well as successful applications of variable structure controllers, primarily in the fields of power electronics, electric drives, and motion steering systems. They enrich the current state of the art, and they motivate and encourage new ideas and solutions in the sliding mode control area.
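As a generic illustration of the technique the monograph surveys (not an example taken from the book), consider first-order sliding mode control of a disturbed double integrator; the gains and the disturbance below are arbitrary assumptions satisfying the usual reaching condition k > |d|:

```python
import numpy as np

def simulate_smc(x0, v0, lam=2.0, k=3.0, dt=1e-3, steps=10000):
    """Sliding mode control of x'' = u + d(t) with |d| <= 1.
    Sliding surface s = v + lam*x; control u = -lam*v - k*sign(s)."""
    x, v = x0, v0
    for i in range(steps):
        d = np.sin(5 * i * dt)            # bounded matched disturbance
        s = v + lam * x                   # sliding surface
        u = -lam * v - k * np.sign(s)     # switching control, k > |d|
        v += (u + d) * dt                 # explicit Euler integration
        x += v * dt
    return x, v

# From (x, v) = (1, 0), the state reaches s = 0 in finite time and then
# slides to the origin despite the unknown disturbance.
x_final, v_final = simulate_smc(1.0, 0.0)
```

On the surface s = 0 the closed loop reduces to x' = -lam*x, so convergence is governed by lam alone; the discrete-time sign switching produces the well-known chattering band, whose mitigation (boundary layers, higher-order sliding modes) is a recurring theme of the monograph.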

    Contributions to road safety: from abstractions and control theory to real solutions, discussion and evaluation

    This manuscript aims to describe my career in the transportation domain, highlighting my contributions at different levels, such as thesis advising, teaching, research animation and coordination, project construction, and participation in expert committees, among others, besides my scientific research itself. The goal, besides the HDR diploma itself, is to show very clearly, including to myself, this 'pack' of contributions, in order to look for better contributions to the transportation and control communities or to other communities in the future, and to identify the research directions I will work on in the following. I obtained my PhD degree in the Laboratoire des Signaux et Systèmes (L2S), in collaboration with MIT, in 2001, having worked on a purely theoretical automatic control topic scarcely covered in the literature: the adaptive control of systems with nonlinear parameterization. Arriving in 2002 as a permanent researcher at the former LCPC (Laboratoire Central des Ponts et Chaussées), now called IFSTTAR (Institut Français des Sciences et Technologies des Transports, de l'Aménagement et des Réseaux), I was faced with real problems to solve in practice, and with the new community of transportation, which has a completely different philosophy of work. I nowadays have this double vision: on the one hand, of the very applied transportation domain, with concrete problems to be solved that touch the citizen every day, and on the other, of a very rich, high-level theoretical research field in automatic control, with powerful tools to solve the real problems, or with control problems that appear because of the need for new tools to solve the real problems. I consider this an important characteristic for my future contributions. Besides the knowledge of transportation itself, my eleven years of career at IFSTTAR gave me as well the following new assets: 1.
From individual research, I have also learned how to coordinate work (in projects, for example in the PReVAL sub-project of the European PReVENT project, in which I co-led one work package, or in research teams, such as the control team of LIVIC, which I coordinated from 2006 to 2009). I have also learned how to animate research (by coordinating research working groups or organizing scientific events and workshops; see for example the working group RSEI and the related scientific event that I organized in June 2012) and how to advise students. 2. Besides the double vision I have described above, the experience also gave me a quite multidisciplinary view of the problems in the domain. Firstly, arriving at LIVIC, in the frame of the French consortium ARCOS, I worked for two years in close cooperation with experts in cognitive sciences (the PsyCoTech group from IRCCyN, Nantes) on designing driving assistance systems for a human driver. After this work, I continued the collaboration with experts in human sciences within the PReVAL subproject of PReVENT, on driving assistance systems evaluation, and within the French ANR PARTAGE project, which I constructed together with the PsyCoTec team of IRCCyN, leading the IFSTTAR partner for one year. In addition, through my participation in PReVENT at different levels (in two meetings of the Core Group, in PReVAL by co-leading work package 3 on Technical Evaluation of ADAS, where ADAS stands for Advanced Driving Assistance Systems, and in the SAFELANE subproject), I have learned many different aspects of ITS systems. I consider this an added value for my 'pack of knowledge'. 3. What I call "from abstractions to real problems: coming back and forth to solve these real problems" has matured in my mind, and I am very grateful to my students, with whom I have learned and who helped me in this maturing process.
By this sentence I mean: with a problem to solve in hand, after building an abstraction, or a simplified view of the problem, and designing a solution, how to apply it, come back to the theory to change it, return to practice, and so on. This is exactly one of the pillars of the NoE HYCON2: making theory interact with the application domains. 4. Considering a problem within its societal context, or within its related context, has been another maturation that I consider very important, notably in the transportation domain, which represents a very complex context containing many different parameters, scenarios, and objectives, in addition to all the uncertainties linked to human behavior. I think it is very important to have a very large view of the context in which the specific problem we are treating is placed; without this, one cannot say in most cases, from my point of view, that the problem is solved. This point will be discussed in Section 9.5. 5. Another point that I consider important, and where I have been contributing recently, is road mapping work. The multidisciplinary knowledge and the larger view of the domain mentioned in the preceding items, together with my theoretical knowledge in automatic control, allowed me to start contributing to road mapping work in transportation (through my participation in the iMobility Forum, in HYCON2, and in the support action T-Area-SoS on Systems of Systems, all actions that advise the European Commission on the priority areas to be considered in the new calls, notably in the frame of the H2020 program). I also had the pleasure of opening again books and theses that I had studied in my PhD work, this time for advising students in the frame of other, very different problems.
The very beautiful thesis of Mikael Johansson, Lund University, on the stability theory of piecewise linear systems is an example. My previous study of switched systems, and of the implications of switched Lyapunov functions for stability, also helped me in advising my students (post-docs, PhD students, and M.Sc. students), this time for real applications, with very interesting results blooming from their work. I realize also that the experience I have described in the five items above must be put at the service of students, since this kind of knowledge cannot be found in books. Concluding, in these last eleven years, from 2002 to 2013, I was able to bring to the scientific community and to my students a set of contributions of different kinds. I will try to make these contributions clear to the reader in the next two chapters (written in English and in French). This document is organized in the following way: Part II contains my complete curriculum vitae (in French), where all these contributions are described in detail. Part III then contains the scientific contributions of the manuscript. What I aim to do is to describe them, and further, to analyze them with a distanced look, providing a critical view, announcing perspectives, and placing and discussing the obtained results in their societal context. This is in direct relation with item 4 above. Also, I prefer to adopt, as far as possible, a form comprehensible to the reader who is not an automatic control expert, with, as far as possible as well, qualitative explanations; appropriate references containing the theorems and definitions corresponding to the qualitative explanations will be provided, and where necessary they are given within the text. Part III is structured in the following chapters. Chapter 8 contains an overview of the global transportation scenario, with the associated challenges, and a description of the driving assistance systems context. Chapter 9 contains my scientific contributions.
These include my research results, my contributions to student advising and to the coordination of research groups, and collaborative works. It is structured in three sections: Section 9.1 introduces what will be the basis for a part of the main contributions, which are described in Sections 9.2 and 9.3. Section 9.1 is also dedicated to showing the reader how theory and abstractions can be very important for solving real problems. Section 9.4 describes other contributions that are the result of collaborative works. A discussion from a multidisciplinary point of view is provided in Section 9.5, based on a survey paper of mine. Chapter 10 is finally dedicated to the perspectives and the general conclusions. The last part then contains, as annexes, a selection of the publications that I consider most illustrative of my contributions described in Chapter 9. Finally, since the described work is at the intersection of two communities, transportation and control theory, I decided to write a part of the document dedicated to readers who are not control experts. This is Part VI of the document, whose aim is to provide some fundamental notions of control theory in a very simple qualitative form, the understanding of which will help different readers to understand the contributions.