14 research outputs found

    Scale-space analysis and active contours for omnidirectional images

    Get PDF
    A new generation of optical devices that generate images covering a larger part of the field of view than conventional cameras, namely catadioptric cameras, is slowly emerging. These omnidirectional images will most probably deeply impact computer vision in the forthcoming years, provided the necessary algorithmic background stands strong. In this paper we propose a general framework that helps define various computer vision primitives. We show that geometry, which plays a central role in the formation of omnidirectional images, must be carefully taken into account while performing such simple tasks as smoothing or edge detection. Partial Differential Equations (PDEs) offer a very versatile tool that is well suited to cope with geometrical constraints. We derive new energy functionals and PDEs for segmenting images obtained from catadioptric cameras and show that they can be implemented robustly using classical finite difference schemes. Various experimental results illustrate the potential of these new methods on both synthetic and natural images
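    As a hedged illustration of the machinery this abstract appeals to, the sketch below runs PDE-based smoothing as an explicit finite-difference scheme, the kind of implementation the paper says is possible. The radial weight g standing in for the catadioptric geometry is an assumption made purely for illustration, not the paper's actual metric term.

        # Minimal sketch of geometry-weighted diffusion, u_t = g * Laplacian(u).
        # The weight g(r) below is a placeholder, NOT the paper's metric.
        import numpy as np

        def smooth_omni(img, steps=50, dt=0.2):
            u = img.astype(float).copy()
            h, w = u.shape
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
            g = 1.0 / (1.0 + (r / r.max()) ** 2)      # placeholder geometric weight
            for _ in range(steps):
                p = np.pad(u, 1, mode="edge")         # replicate image borders
                lap = (p[:-2, 1:-1] + p[2:, 1:-1]
                       + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u)
                u += dt * g * lap                     # stable: dt * g <= 0.25
            return u

    With g constant this reduces to ordinary isotropic (Gaussian-equivalent) smoothing; the paper's point is precisely that g must instead follow the mirror geometry.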

    Vision Sensors and Edge Detection

    Get PDF
    The Vision Sensors and Edge Detection book reflects a selection of recent developments in the area of vision sensors and edge detection. There are two sections in this book. The first section presents vision sensors with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection, and the second shows image processing techniques, such as image measurements, image transformations, filtering, and parallel computing

    Towards tactile sensing active capsule endoscopy

    Get PDF
    Examination of the gastrointestinal (GI) tract has traditionally been performed using tethered endoscopy tools with limited reach and, more recently, with passive untethered capsule endoscopy with limited capability. Inspection of the small intestines is only possible using the latter, a capsule endoscope with an on-board camera system. Limited to visual means, it cannot detect features beneath the lumen wall if they have not affected the lumen's structure or colour. This work presents an improved capsule endoscopy system with locomotion for active exploration of the small intestines and tactile sensing to detect deformation of the capsule's outer surface as it follows the intestinal wall. In laboratory conditions this system is capable of identifying sub-lumen features such as submucosal tumours. Through an extensive literature review, the current state of GI tract inspection, in particular using remotely operated miniature robotics, was investigated, concluding that no solution currently exists that combines tactile sensing with capsule endoscopy. In order to achieve such a platform, further investigation was made into tactile sensing technologies, methods of locomotion through the gut, and methods to support an increased power requirement for additional electronics and actuation. A set of detailed criteria were compiled for a soft-formed sensor and flexible-bodied locomotion system. The sensing system is built on the biomimetic tactile sensing device Tactip (Chorley 2008, 2010; Winstone 2012, 2013), which has been redesigned to fit the form of a capsule endoscope. These modifications required a 360° cylindrical sensing surface with a 360° panoramic optical system. Multi-material 3D printing has been used to build an almost complete sensor assembly from a combination of hard and soft materials, presenting a soft, compliant tactile sensing system that mimics the tactile sensing methods of the human finger. The cylindrical Tactip has been validated using artificial submucosal tumours in laboratory conditions. The first experiment explored the new form factor and measured the device's ability to detect surface deformation when travelling through a pipe-like structure with varying lump obstructions. Sensor data were analysed and used to reconstruct the test environment as a 3D rendered structure. A second tactile sensing experiment explored the use of classifier algorithms to successfully discriminate between three tumour characteristics: shape, size and material hardness. Locomotion of the capsule endoscope drew further bio-inspiration from the peristaltic locomotion of the earthworm, which operates in a similar environment. A soft-bodied peristaltic worm robot has been developed that uses a tuned planetary gearbox mechanism to displace tendons that contract each worm segment. Methods have been identified to optimise the gearbox parameters for a pipe-like structure of a given diameter. The locomotion system has been tested within a laboratory-constructed pipe environment, showing that, using only one actuator, three independent worm segments can be controlled. This configuration achieves locomotion capabilities comparable to those of an identical robot with an actuator dedicated to each individual worm segment. This system can be miniaturised more easily due to its reduced part count and number of actuators, and so is more suitable for capsule endoscopy.
    Finally, these two developments have been integrated to demonstrate successful simultaneous locomotion and sensing to detect an artificial submucosal tumour embedded within the test environment. The addition of both tactile sensing and locomotion creates a need for power beyond what is available from current battery technology. Early-stage work has reviewed wireless power transfer (WPT) as a potential solution to this problem. Methods for the optimisation and miniaturisation needed to implement WPT on a capsule endoscope have been identified, with a laboratory-built system that validates the methods found. Future work would see this combined with a miniaturised development of the robot presented. This thesis has developed a novel method for sub-lumen examination. With further efforts to miniaturise the robot, it could provide a comfortable and non-invasive procedure for GI tract inspection, reducing the need for surgical procedures and improving accessibility for earlier-stage examination. Furthermore, these developments have applicability in other domains such as veterinary medicine, industrial pipe inspection and exploration of hazardous environments
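    As a hedged sketch of the classification experiment described above, the snippet below shows one plausible way to discriminate tumour characteristics from tactile features. The feature layout, the synthetic data, and the SVM choice are all assumptions; the thesis abstract does not specify the classifier pipeline.

        # Toy stand-in for classifying lump characteristics (e.g. hardness)
        # from Tactip pin-displacement features. All data here is synthetic.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 60))        # hypothetical: 30 pins x (dx, dy)
        y = rng.integers(0, 3, size=120)      # hypothetical: 3 hardness classes

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on noise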

    Computational Multimedia for Video Self Modeling

    Get PDF
    Video self modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of oneself. This is the idea behind the psychological theory of self-efficacy: you can learn to perform certain tasks because you see yourself doing them, which provides the most ideal form of behavior modeling. The effectiveness of VSM has been demonstrated for many different types of disabilities and behavioral problems, ranging from stuttering, inappropriate social behaviors, autism, and selective mutism to sports training. However, there is an inherent difficulty associated with the production of VSM material. Prolonged and persistent video recording is required to capture the rare, if not entirely nonexistent, snippets that can be strung together to form novel video sequences of the target skill. To solve this problem, in this dissertation we use computational multimedia techniques to facilitate the creation of synthetic visual content for self-modeling that can be used by a learner and his/her therapist with a minimal amount of training data. There are three major technical contributions in my research. First, I developed an Adaptive Video Re-sampling algorithm to synthesize realistic lip-synchronized video with minimal motion jitter. Second, to denoise and complete the depth maps captured by structured-light sensing systems, I introduced a layer-based probabilistic model to account for various types of uncertainties in the depth measurement. Third, I developed a simple and robust bundle-adjustment-based framework for calibrating a network of multiple wide-baseline RGB and depth cameras
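    The third contribution lends itself to a small worked example. The toy below follows the bundle-adjustment idea of refining a pose by minimising reprojection error with a nonlinear least-squares solver; the translation-only pinhole model and all the numbers are illustrative assumptions, far simpler than the dissertation's multi-camera RGB-plus-depth setup.

        # Toy bundle-adjustment step: recover a camera offset from 2D projections.
        import numpy as np
        from scipy.optimize import least_squares

        f = 500.0                                   # assumed focal length (pixels)
        pts3d = np.random.default_rng(2).random((20, 3)) + [0.0, 0.0, 4.0]

        def project(pts, t):
            p = pts + t                             # translation-only toy "pose"
            return f * p[:, :2] / p[:, 2:3]         # pinhole projection

        obs = project(pts3d, np.array([0.1, -0.05, 0.2]))   # synthetic observations

        def residual(t):
            return (project(pts3d, t) - obs).ravel()        # reprojection error

        print(least_squares(residual, x0=np.zeros(3)).x)    # ~ [0.1, -0.05, 0.2]

    A real bundle adjustment jointly refines rotations, intrinsics, and the 3D points as well; the structure of the solve is the same.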

    Airborne vision-based attitude estimation and localisation

    Get PDF
    Vision plays an integral part in a pilot's ability to navigate and control an aircraft. Therefore, Visual Flight Rules have been developed around the pilot's ability to see the environment outside the cockpit in order to control the attitude of the aircraft, to navigate, and to avoid obstacles. The automation of these processes using a vision system could greatly increase the reliability and autonomy of unmanned aircraft and flight automation systems. This thesis investigates the development and implementation of a robust vision system which fuses inertial information with visual information in a probabilistic framework, with the aim of aircraft navigation. The appearance of the horizon is a strong visual indicator of the attitude of the aircraft. This leads to the first research area of this thesis, visual horizon attitude determination. An image processing method was developed to provide high-performance horizon detection and extraction from camera imagery. A number of horizon models were developed to link the detected horizon to the attitude of the aircraft with varying degrees of accuracy. The second area investigated in this thesis was visual localisation of the aircraft. A terrain-aided horizon model was developed to estimate the position and altitude as well as the attitude of the aircraft. This gives rough position estimates with highly accurate attitude information. The visual localisation accuracy was improved by incorporating ground-feature-based map-aided navigation. Road intersections were detected using a developed image processing algorithm and matched to a database to provide positional information. The developed vision system shows comparable performance to other non-vision-based systems while removing the dependence on external systems for navigation. The vision system and techniques developed in this thesis help to increase the autonomy of unmanned aircraft and flight automation systems for manned flight
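    A minimal sketch of the simplest horizon-to-attitude link described above: fit a straight line to candidate horizon pixels, then read roll from the line's slope and an approximate pitch from its vertical offset. The column-wise max-gradient detector and the small-angle pitch formula are assumptions; the thesis's detection and horizon models are considerably more sophisticated.

        # Estimate roll and (rough) pitch from a grayscale frame's horizon.
        import numpy as np

        def attitude_from_horizon(img, vfov_deg=45.0):
            h, w = img.shape
            grad = np.abs(np.diff(img.astype(float), axis=0))  # vertical gradient
            rows = grad.argmax(axis=0)                 # strongest edge per column
            slope, intercept = np.polyfit(np.arange(w), rows, 1)
            roll = np.degrees(np.arctan(slope))        # horizon tilt -> bank angle
            centre_row = slope * (w / 2.0) + intercept
            pitch = (centre_row - h / 2.0) / h * vfov_deg   # small-angle approx
            return roll, pitch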

    Advances in Robot Navigation

    Get PDF
    Robot navigation includes different interrelated activities, such as perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot to select the next direction to go; mapping - the construction of a spatial representation using the sensory information perceived; localization - the strategy to estimate the robot's position within the spatial map; path planning - the strategy to find a path towards a goal location, whether optimal or not; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the abovementioned activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation draw inspiration from nature, and diverse applications are described in the context of an important field of study: social robotics

    Basic Astronomy Labs

    Get PDF
    Providing the tools and know-how to apply the principles of astronomy first-hand, these 43 laboratory exercises each contain an introduction that clearly shows budding astronomers why the particular topic of that lab is of interest and relevant to astronomy. About one-third of the exercises are devoted solely to observation, and no mathematics is required beyond simple high school algebra and trigonometry. Organizes exercises into six major topics (sky, optics and spectroscopy, celestial mechanics, solar system, stellar properties, and exploration and other topics), providing clear outlines of what is involved in each exercise, its purpose, and what procedures and apparatus are to be used. Offers variations on standard and popular exercises, and includes many that are new and innovative, such as The Messier List, which helps users discover basic facts about the Milky Way Galaxy by plotting these objects on a star chart; Motions of Earth, which demonstrates just how fast the Earth is moving through space and in which direction it is going; and Radioactivity and Time, which measures the half-life of a short-lived isotope and considers radioactive dating and heating of celestial bodies. Includes a guide to astronomical pronunciations, a guide to the constellations, spectral classifications, quotes on science, and more. For astronomers

    Robust density modelling using the student's t-distribution for human action recognition

    Full text link
    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show through experiments over two well-known datasets (Weizmann, MuHAVi) a remarkable improvement in classification accuracy. © 2011 IEEE
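    The robustness claim is easy to see numerically. In the hedged sketch below (not the paper's code), a single outlier dominates the Gaussian log-likelihood of a sample while the heavy-tailed Student's t barely notices it, which is exactly why the t-distribution is the better observation density under abnormal data; the degrees-of-freedom value is an arbitrary choice.

        # One outlier vs. Gaussian and Student's t log-likelihoods.
        import numpy as np
        from scipy import stats

        data = np.concatenate([np.random.default_rng(1).normal(0, 1, 99), [15.0]])
        print(stats.norm.logpdf(data).sum())      # dominated by the single outlier
        print(stats.t.logpdf(data, df=3).sum())   # heavy tails absorb the outlier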

    A Full Scale Camera Calibration Technique with Automatic Model Selection – Extension and Validation

    Get PDF
    This thesis presents work on the testing and development of a complete camera calibration approach which can be applied to a wide range of cameras equipped with normal, wide-angle, fish-eye, or telephoto lenses. The full-scale calibration approach estimates all of the intrinsic and extrinsic parameters. The calibration procedure is simple and does not require prior knowledge of any parameters. The method uses a simple planar calibration pattern. Closed-form estimates for the intrinsic and extrinsic parameters are computed, followed by nonlinear optimization. Polynomial functions are used to describe the lens projection instead of the commonly used radial model. Statistical information criteria are used to automatically determine the complexity of the lens distortion model. In the first stage, experiments were performed to verify and compare the performance of the calibration method. Experiments were performed on a wide range of lenses. Synthetic data was used to simulate real data and validate the performance. Synthetic data was also used to validate the performance of the distortion model selection, which uses the Akaike Information Criterion (AIC) to automatically select the complexity of the distortion model. In the second stage, work was done to develop an improved calibration procedure which addresses shortcomings of the previously developed method. Experiments on the previous method revealed that the estimation of the principal point during calibration was erroneous for lenses with a large focal length. To address this issue, the calibration method was modified to include additional methods to accurately estimate the principal point in the initial stages of the calibration procedure. The modified procedure can now be used to calibrate a wide spectrum of imaging systems, including telephoto and varifocal lenses. A survey of current work revealed a vast amount of research concentrating on calibrating only the distortion of the camera. In these methods, researchers propose calibrating only the distortion parameters and suggest using other popular methods to find the remaining camera parameters. Following this methodology, we incorporate distortion-only calibration into our method to separate the estimation of the distortion parameters. We show and compare the results with the original method on a wide range of imaging systems
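    As a hedged sketch of the automatic model-selection step, the snippet below fits polynomial projection models of increasing order to a synthetic lens curve and keeps the order with the lowest AIC. The distortion curve, the noise level, and the Gaussian-residual AIC form are illustrative assumptions, not the thesis's calibration pipeline.

        # Pick the polynomial distortion order by Akaike Information Criterion.
        import numpy as np

        r = np.linspace(0.0, 1.0, 200)                 # normalised image radius
        curve = r + 0.15 * r**3 - 0.05 * r**5          # hypothetical lens projection
        obs = curve + np.random.default_rng(0).normal(0.0, 1e-3, r.size)

        best = None
        for k in range(1, 8):                          # candidate model orders
            rss = np.sum((np.polyval(np.polyfit(r, obs, k), r) - obs) ** 2)
            aic = r.size * np.log(rss / r.size) + 2 * (k + 1)   # Gaussian-residual AIC
            if best is None or aic < best[0]:
                best = (aic, k)
        print("selected polynomial order:", best[1])

    Higher orders keep shrinking the residual, but the 2(k + 1) penalty stops the selection once the extra coefficients no longer pay for themselves, which is the behaviour the thesis relies on to size the distortion model automatically.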

    Augmented Reality

    Get PDF
    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier. AR complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, though it is not completely free of human factors and other restrictions. AR also does not consume as much time and effort in applications, because it is not necessary to construct the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical, biological, and human-body applications. The third and final section contains a number of new and useful applications in daily living and learning