
    Visualization and image based characterization of hydrodynamic cavity bubbles for kidney stone treatment

    Accurate detection, tracking and classification of microstructures through high-speed imaging are very important in many biomedical applications. In particular, visualization and characterization of hydrodynamic cavity bubbles in breaking kidney stones have become a real challenge for researchers. Various micro-imaging techniques have been used to monitor either an entire bubble cloud or individual bubbles within the cloud. The main target of this thesis is to perform an image-based characterization of hydrodynamic cavity bubbles for kidney stone treatment by designing and constructing a new imaging setup and implementing several image processing and computer vision algorithms for detecting, tracking and classifying cavity bubbles. A high-speed CMOS camera with a long-distance microscope, illuminated by two pulsed high-performance LED arrays, is designed. This system and a μ-PIV setup are used for capturing images of high-speed bubbles. Several image processing algorithms, including median and morphological filters, segmentation, edge detection and contour extraction, are used extensively for the detection of the bubbles. Furthermore, the incremental self-tuning particle filtering (ISPF) method is utilized to track the motion of the high-speed cavity bubbles. These bubbles are also classified by their geometric features such as size, shape and orientation. An extensive visualization study is conducted on the new setup, and cavity bubbles are successfully detected, tracked and classified from the microscopic images. Despite very low exposure times and the high-speed motion of the bubbles, the developed system and methods work in a very robust manner. All the algorithms are implemented in Microsoft Visual C++ using the OpenCV 2.4.2 library.
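    As a rough illustration of the detection and classification stages described above, the following C++ sketch chains the named OpenCV operations: a median filter, Otsu segmentation, a morphological opening, contour extraction, and classification by size, shape and orientation. The Bubble struct, threshold values and parameters are illustrative assumptions, not the thesis implementation.

        // Minimal sketch of one detection/classification pass with OpenCV.
        // All thresholds and the Bubble struct are illustrative assumptions.
        #include <opencv2/opencv.hpp>
        #include <vector>

        struct Bubble {            // geometric features used for classification
            double area;           // size
            double circularity;    // shape: 4*pi*area / perimeter^2
            double orientation;    // fitted-ellipse angle in degrees
        };

        std::vector<Bubble> detectBubbles(const cv::Mat& frame) {
            cv::Mat gray, denoised, binary;
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::medianBlur(gray, denoised, 5);                      // median filter
            cv::threshold(denoised, binary, 0, 255,
                          cv::THRESH_BINARY_INV | cv::THRESH_OTSU); // segmentation
            cv::morphologyEx(binary, binary, cv::MORPH_OPEN,        // morphological filter
                             cv::getStructuringElement(cv::MORPH_ELLIPSE, {3, 3}));

            std::vector<std::vector<cv::Point>> contours;           // contour extraction
            cv::findContours(binary, contours, cv::RETR_EXTERNAL,
                             cv::CHAIN_APPROX_SIMPLE);

            std::vector<Bubble> bubbles;
            for (const auto& c : contours) {
                double area = cv::contourArea(c);
                if (area < 20.0 || c.size() < 5) continue;  // reject noise; fitEllipse needs >= 5 points
                double perim = cv::arcLength(c, true);
                cv::RotatedRect e = cv::fitEllipse(c);
                bubbles.push_back({area, 4.0 * CV_PI * area / (perim * perim), e.angle});
            }
            return bubbles;
        }

    Tracking (the ISPF stage) would then associate the per-frame Bubble detections across frames; that step is omitted here.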

    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology so that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in the touch-down locations of current missions and the absence of any effective hazard detection and avoidance capability, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard-free terrain in order to minimise the risk of mission-ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. The most scientifically interesting locations on planetary surfaces are rarely found in such hazard-free and easily accessible locations, so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission-critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single camera system as the primary sensor in the preliminary development of a hazard detection system capable of supporting pin-point landing operations for next-generation robotic planetary landing craft. The requirements for such a system are the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter. The primary contribution of this thesis, aimed at achieving these goals, is a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm built on a robust square-root unscented Kalman filtering framework, together with the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm. This combination has the potential to produce very dense and highly accurate digital elevation models (DEMs) with sufficient resolution to achieve the sensing accuracy required by next-generation landers. The system can adapt to changes in the external noise environment caused by intermittent and varying rocket motor thrust and/or sudden turbulence during descent; such changes alter the vibrations experienced by the platform and introduce varying levels of motion blur that affect the accuracy of image feature tracking algorithms. Accurate scene structure estimates have been obtained with this system from both real and synthetic descent imagery, allowing the production of accurate DEMs. While further work is required to produce DEMs with the resolution and accuracy needed to determine slopes and detect small objects such as rocks at the required levels, this thesis presents a very strong foundation upon which to build and goes a long way towards a highly robust and accurate solution.
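    The filtering framework named above builds on the unscented transform, sketched below in C++ with Eigen. This is only the standard transform at the core of any unscented Kalman filter; the thesis's variant additionally propagates square-root (Cholesky) factors for numerical robustness and adapts to the changing noise environment, neither of which is shown here.

        // Standard unscented transform: propagate a Gaussian (mean, cov)
        // through a nonlinear function f via sigma points. Parameter
        // defaults are the commonly used values, an assumption here.
        #include <Eigen/Dense>
        #include <functional>
        #include <vector>

        using Vec = Eigen::VectorXd;
        using Mat = Eigen::MatrixXd;

        void unscentedTransform(const Vec& mean, const Mat& cov,
                                const std::function<Vec(const Vec&)>& f,
                                Vec& outMean, Mat& outCov,
                                double alpha = 1e-3, double beta = 2.0,
                                double kappa = 0.0) {
            const int n = mean.size();
            const double lambda = alpha * alpha * (n + kappa) - n;

            // Columns of the scaled Cholesky factor give the sigma-point spread.
            Mat S = ((n + lambda) * cov).llt().matrixL();

            std::vector<Vec> sigma(2 * n + 1);
            sigma[0] = mean;
            for (int i = 0; i < n; ++i) {
                sigma[1 + i]     = mean + S.col(i);
                sigma[1 + n + i] = mean - S.col(i);
            }

            // Standard weights for the mean and covariance sums.
            std::vector<double> wm(2 * n + 1, 0.5 / (n + lambda));
            std::vector<double> wc = wm;
            wm[0] = lambda / (n + lambda);
            wc[0] = wm[0] + (1.0 - alpha * alpha + beta);

            std::vector<Vec> y(2 * n + 1);
            outMean = Vec::Zero(f(mean).size());
            for (int i = 0; i <= 2 * n; ++i) {
                y[i] = f(sigma[i]);
                outMean += wm[i] * y[i];
            }
            outCov = Mat::Zero(outMean.size(), outMean.size());
            for (int i = 0; i <= 2 * n; ++i) {
                Vec d = y[i] - outMean;
                outCov += wc[i] * d * d.transpose();
            }
        }

    A full filter applies this transform twice per frame, once with the motion model for prediction and once with the camera measurement model for the update.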

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Self-correcting Bayesian target tracking

    Visual tracking, a building block for many applications, faces challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking. For a tracker in the framework, we embed an evaluation block to check the tracking quality and a correction block to avoid upcoming failures or to recover from them. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). Self-correcting tracking operates similarly to a self-aware system, where parameters are tuned in the model, or different models are fused or selected in a piece-wise way, in order to deal with tracking challenges and failures. In the DBN model representation, parameter tuning, fusion and model selection are driven by evaluation and correction variables, and inference over the variables in the DBN model is used to explain the operation of self-correcting tracking. The specific contributions under the generic self-correcting framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below. For improving the probabilistic tracking of an extended object with a set of model points, we use the Track-Evaluate-Correct framework to achieve self-correcting tracking. The framework combines the tracker with an online performance measure and a correction technique. We correlate model point trajectories to improve online the accuracy of a failed or uncertain tracker. A model point tracker gets assistance from neighbouring trackers whenever degradation in its performance is detected by the online performance measure; the correction of the model point state is based on correlation information from the states of the other trackers. Partial Least Squares regression is used to model the correlation of point tracker states from short windowed trajectories adaptively. Experimental results on data from optical motion capture systems show improved tracking performance compared to the baseline tracker and other state-of-the-art trackers. The proposed framework allows appropriate re-initialisation of local trackers to recover from failures caused by clutter and missed detections in the motion capture data. Finally, we propose a tracker-level fusion framework to obtain self-correcting tracking. The fusion framework combines trackers addressing different tracking challenges to improve overall performance. As a novelty of the proposed framework, we include an online performance measure to identify the track quality level of each tracker and guide the fusion. The trackers in the framework assist each other through appropriate mixing of the prior states, and the track quality level is used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the constituent trackers and to other state-of-the-art trackers. The online performance measure driving the appearance model update and prior mixing allows the proposed framework to deal with tracking challenges.
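    A minimal sketch of the Partial Least Squares regression used to correlate point tracker states, assuming the classical PLS1 (NIPALS) formulation; the variable names and the windowing/centring conventions are illustrative, not the thesis code.

        // PLS1 via NIPALS: fit y ~ X with nComp latent components.
        // X (window length x neighbour features) and y must be centered;
        // apply the returned coefficients to (x - xMean) and add yMean.
        #include <Eigen/Dense>

        using Vec = Eigen::VectorXd;
        using Mat = Eigen::MatrixXd;

        Vec plsRegress(Mat X, Vec y, int nComp) {
            const int p = X.cols();
            Mat W(p, nComp), P(p, nComp);
            Vec q(nComp);
            for (int k = 0; k < nComp; ++k) {
                Vec w = X.transpose() * y;        // direction of max covariance
                w.normalize();
                Vec t = X * w;                    // latent scores
                double tt = t.squaredNorm();
                Vec pk = X.transpose() * t / tt;  // X loadings
                double qk = y.dot(t) / tt;        // y loading
                X -= t * pk.transpose();          // deflate and repeat
                y -= qk * t;
                W.col(k) = w; P.col(k) = pk; q(k) = qk;
            }
            // Coefficients mapping original (centered) X to y.
            return W * (P.transpose() * W).inverse() * q;
        }

    A failed tracker's state component could then be predicted from a neighbouring-tracker feature vector x as yMean + coeffs.dot(x - xMean), with the coefficients refitted adaptively on each short trajectory window.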

    Model-driven and Data-driven Approaches for some Object Recognition Problems

    Recognizing objects from images and videos has been a long-standing problem in computer vision. The recent surge in the prevalence of visual cameras has given rise to two main challenges: (i) it is important to understand different sources of object variation in more unconstrained scenarios, and (ii) rather than describing an object in isolation, efficient learning methods for modeling object-scene 'contextual' relations are required to resolve visual ambiguities. This dissertation addresses some aspects of these challenges and consists of two parts. The first part focuses on obtaining object descriptors that are largely preserved across certain sources of variation, by utilizing models for image formation and local image features. Given a single instance of an object, we investigate the following three problems. (i) Representing a 2D projection of a 3D non-planar shape invariant to articulations, when there are no self-occlusions: we propose an articulation-invariant distance that is preserved across piece-wise affine transformations of a non-rigid object's 'parts' under a weak perspective imaging model, and then obtain a shape-context-like descriptor to perform recognition. (ii) Understanding the space of 'arbitrary' blurred images of an object: we represent an unknown blur kernel of a known maximum size using a complete set of orthonormal basis functions spanning that space, and show that the subspaces resulting from convolving a clean object and its blurred versions with these basis functions are equal under some assumptions. We then view the invariant subspaces as points on a Grassmann manifold and use statistical tools that account for the underlying non-Euclidean nature of the space of these invariants to perform recognition across blur. (iii) Analyzing the robustness of local feature descriptors to different illumination conditions: we perform an empirical study of these descriptors for the problem of face recognition under lighting change, and show that the direction of the image gradient largely preserves object properties across varying lighting conditions. The second part of the dissertation utilizes the information conveyed by large quantities of data to learn contextual information shared by an object (or an entity) with its surroundings. (i) We first consider a supervised two-class problem of detecting lane markings in road video sequences, where we learn relevant feature-level contextual information through a machine learning algorithm based on boosting. We then focus on unsupervised object classification scenarios, where (ii) we perform clustering using maximum-margin principles, by deriving basic properties of the affinity of a pair of points belonging to the same cluster using the information conveyed by all points in the system, and (iii) we consider correspondence-free adaptation of statistical classifiers across domain-shifting transformations, by generating meaningful 'intermediate domains' that incrementally convey potential information about the domain change.
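    The illumination finding in (iii) can be illustrated with a short OpenCV sketch that compares two grayscale images by per-pixel gradient direction rather than raw intensity; the similarity measure and the function name are illustrative assumptions, not the study's protocol.

        // Mean cosine similarity between per-pixel gradient directions.
        // Both inputs are assumed to be single-channel grayscale images
        // of the same size (e.g. the same face under two lightings).
        #include <opencv2/opencv.hpp>
        #include <cmath>

        double gradientDirectionSimilarity(const cv::Mat& a, const cv::Mat& b) {
            auto orientation = [](const cv::Mat& img) {
                cv::Mat gx, gy, mag, ang;
                cv::Sobel(img, gx, CV_32F, 1, 0);        // horizontal gradient
                cv::Sobel(img, gy, CV_32F, 0, 1);        // vertical gradient
                cv::cartToPolar(gx, gy, mag, ang);       // ang in radians
                return ang;
            };
            cv::Mat diff;
            cv::subtract(orientation(a), orientation(b), diff);
            // cos of the angle difference handles the 2*pi wrap-around:
            // 1 where directions agree, -1 where they oppose.
            cv::Mat cosDiff(diff.size(), CV_32F);
            for (int r = 0; r < diff.rows; ++r)
                for (int c = 0; c < diff.cols; ++c)
                    cosDiff.at<float>(r, c) = std::cos(diff.at<float>(r, c));
            return cv::mean(cosDiff)[0];
        }

    Under a lighting change that scales intensities smoothly, gradient magnitudes change but gradient directions largely do not, so this score stays high for the same object while raw pixel differences grow large.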

    Benelux meeting on systems and control, 23rd, March 17-19, 2004, Helvoirt, The Netherlands

    Book of abstracts