
    Schlieren Sequence Analysis Using Computer Vision

    Computer vision-based methods are proposed for extraction and measurement of flow structures of interest in schlieren video. As schlieren data has grown with faster frame rates, we are faced with thousands of images to analyze. This presents an opportunity to study global flow structures over time that may not be evident from surface measurements. A degree of automation is desirable to extract flow structures and features and to characterize their behavior through the sequence. Using an interdisciplinary approach, the analysis of large schlieren data sets is recast as a computer vision problem. The double-cone schlieren sequence is used as a testbed for the methodology; it is unique in that it contains 5,000 images, exhibits complex phenomena, and is feature rich. Oblique structures such as shock waves and shear layers are common in schlieren images. A vision-based methodology is used to provide an estimate of oblique structure angles through the unsteady sequence. The methodology has been applied to a complex flowfield with multiple shocks, and a converged detection success rate between 94% and 97% for these structures is obtained. The modified curvature scale space is used to define features at salient points on shock contours. A challenge in developing methods for feature extraction in schlieren images is the reconciliation of existing techniques with features of interest to an aerodynamicist. Domain-specific knowledge of the physics must therefore be incorporated into the definition and detection phases. Known locations and physically possible structure representations form a knowledge base that provides a unique feature definition and extraction. The model tip location and the motion of a shock intersection across several thousand frames are identified, localized, and tracked. Images are parsed into physically meaningful labels using segmentation. Using this representation, it is shown that in the double-cone flowfield, the dominant unsteady motion is associated with large-scale random events within the aft-cone bow shock, while small-scale organized motion is associated with the shock-separated flow on the fore-cone surface. We show that computer vision is a natural and useful extension to the evaluation of schlieren data, and that segmentation has the potential to permit new large-scale measurements of flow motion.
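    The abstract does not name a specific detector for the oblique-structure angles, so the following is only a minimal sketch, assuming OpenCV, a Canny edge map, and a probabilistic Hough transform; the file name and all thresholds are hypothetical and not taken from the work itself.

```python
# Hedged sketch: estimating oblique-structure angles (e.g. shocks, shear layers)
# in one schlieren frame via edge detection and a Hough line transform.
# The file name, thresholds, and choice of OpenCV are illustrative assumptions.
import cv2
import numpy as np

frame = cv2.imread("schlieren_frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)              # suppress sensor noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)   # binary edge map

# Probabilistic Hough transform returns line segments (x1, y1, x2, y2).
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=60, maxLineGap=10)

angles = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        # Angle of each detected segment relative to the horizontal, in degrees.
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

print("candidate oblique-structure angles:", sorted(angles))
```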

    Basic Science to Clinical Research: Segmentation of Ultrasound and Modelling in Clinical Informatics

    The world of basic science is a world of minutiae; it boils down to improving even a fraction of a percent over the baseline standard. It is a domain of peer-reviewed fractions of seconds, and the world of squeezing every last ounce of efficiency from a processor, a storage medium, or an algorithm. The field of health data is based on extracting knowledge from segments of data that may improve some clinical process or practice guideline to improve the time and quality of care. Clinical informatics and knowledge translation provide this information in order to reveal insights to the world of improving patient treatments, regimens, and overall outcomes. In my world of minutiae, or basic science, the movement of blood served an integral role. The novel detection of sound reverberations maps out the landscape for my research. I have applied my algorithms to the various anatomical structures of the heart and artery system. This serves as a basis for segmentation, active contouring, and shape priors. The algorithms presented leverage novel applications in segmentation by using anatomical features of the heart as shape priors and by integrating optical flow models to improve tracking. The presented techniques show improvements over traditional methods in the estimation of left ventricular size and function, along with plaque estimation in the carotid artery. In my clinical world of data understanding, I have endeavoured to decipher trends in Alzheimer’s disease, sepsis in hospital patients, and the burden of melanoma using mathematical modelling methods. The use of decision trees, Markov models, and various clustering techniques provides insights into data sets that are otherwise hidden. Finally, I demonstrate how efficient data capture from providers can achieve rapid results and actionable information on patient medical records. This culminated in generating studies on the burden of illness and their associated costs. A selection of published works from my research, from the world of basic sciences to clinical informatics, has been included in this thesis to detail my transition. This is my journey from one contented realm to a turbulent one.

    Biomimetic Design for Efficient Robotic Performance in Dynamic Aquatic Environments - Survey

    This manuscript is a review of the published articles on edge detection. It first provides theoretical background and then reviews a wide range of edge detection methods in different categories. The review also studies the relationships between the categories and presents evaluations regarding their application, performance, and implementation. Structurally, edge detection methods are a combination of image smoothing and image differentiation, plus post-processing for edge labelling. The image smoothing involves filters that reduce noise, regularize the numerical computation, and provide a parametric representation of the image that works as a mathematical microscope to analyze it at different scales and increase the accuracy and reliability of edge detection. The image differentiation provides information on intensity transitions in the image, which is necessary to represent the position, strength, and orientation of the edges. The edge labelling calls for post-processing to suppress false edges, link dispersed ones, and produce a uniform contour of objects.
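    As a concrete illustration of the three-stage structure described above (smoothing, differentiation, post-processing for edge labelling), the sketch below uses a Gaussian filter, Sobel gradients, and hysteresis thresholding; it is a minimal example with illustrative parameter values, not a specific method from the review.

```python
# Sketch of the three-stage structure of an edge detector:
# (1) smoothing, (2) differentiation, (3) post-processing / edge labelling.
# Parameter values are illustrative only.
import numpy as np
from scipy import ndimage

def detect_edges(image, sigma=1.0, low=0.05, high=0.15):
    # 1. Smoothing: a Gaussian filter reduces noise and sets the analysis scale.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)

    # 2. Differentiation: gradient magnitude encodes edge position and strength.
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12

    # 3. Post-processing: hysteresis thresholding keeps strong edges plus any
    #    weak edges connected to them, suppressing isolated false responses.
    strong = magnitude >= high
    weak = magnitude >= low
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])
```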

    Automated location of active fire perimeters in aerial infrared imaging using unsupervised edge detectors

    A variety of remote sensing techniques have been applied to forest fires. However, there is at present no system capable of monitoring an active fire precisely in a totally automated manner. Spaceborne sensors show too coarse spatio-temporal resolutions, and all previous studies that extracted fire properties from infrared aerial imagery incorporated manual tasks within the image processing workflow. As a contribution to this topic, this paper presents an algorithm to automatically locate the fuel burning interface of an active wildfire in georeferenced aerial thermal infrared (TIR) imagery. An unsupervised edge detector, built upon the Canny method, was accompanied by the necessary modules for the extraction of line coordinates and the location of the total burned perimeter. The system was validated in different scenarios ranging from laboratory tests to large-scale experimental burns performed under extreme weather conditions. Output accuracy was computed through three common similarity indices and proved acceptable. Computing times were below 1 s per image on average. The produced information was used to measure the temporal evolution of the fire perimeter and automatically generate rate of spread (ROS) fields. Information products were easily exported to standard Geographic Information Systems (GIS), such as Google Earth and QGIS. Therefore, this work contributes towards the development of an affordable and totally automated system for operational wildfire surveillance.
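    The sketch below illustrates the general idea of a Canny-based perimeter locator applied to a single thermal frame. The file name, thresholds, morphological closing, and longest-contour heuristic are illustrative assumptions rather than the paper's actual modules, and a real pipeline would additionally map pixel coordinates to geographic coordinates via the image georeference.

```python
# Hedged sketch of locating an active-fire perimeter in a thermal infrared
# frame with a Canny-based edge detector.
import cv2
import numpy as np

tir = cv2.imread("tir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical georeferenced TIR frame
tir = cv2.GaussianBlur(tir, (5, 5), 0)

edges = cv2.Canny(tir, threshold1=40, threshold2=120)

# Close small gaps so the burning interface forms a connected curve.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Take the longest closed contour as the candidate fire perimeter, in pixel
# coordinates; georeferencing to map coordinates is omitted here.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
perimeter = max(contours, key=lambda c: cv2.arcLength(c, True)) if contours else None
print("perimeter vertices:", 0 if perimeter is None else len(perimeter))
```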

    A Multicamera System for Gesture Tracking With Three Dimensional Hand Pose Estimation

    The goal of any visual tracking system is to successfully detect and then follow an object of interest through a sequence of images. The difficulty of tracking an object depends on its dynamics, its motion, and its characteristics, as well as on the environment. For example, tracking an articulated, self-occluding object such as a signing hand has proven to be a very difficult problem. The focus of this work is on tracking and pose estimation with applications to hand gesture interpretation. An approach that attempts to integrate the simplicity of a region tracker with single hand 3D pose estimation methods is presented. Additionally, this work delves into the pose estimation problem. This is accomplished both by analyzing hand templates composed of their morphological skeleton and by addressing the skeleton's inherent instability. Ligature points along the skeleton are flagged in order to determine their effect on skeletal instabilities. Tested on real data, the analysis finds the flagging of ligature points to proportionally increase the match strength of high-similarity image-template pairs by about 6%. The effectiveness of this approach is further demonstrated in a real-time multicamera hand tracking system that tracks hand gestures through three-dimensional space and estimates the three-dimensional pose of the hand.
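    The abstract does not spell out how ligature points are identified, so the following is only a toy sketch: it skeletonizes a binary hand template with scikit-image and flags skeleton pixels with three or more skeleton neighbours as candidate branch/ligature points. That neighbour-count heuristic is an illustrative assumption, not the thesis's actual ligature definition.

```python
# Hedged sketch: skeletonize a binary template and flag candidate branch points.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def skeleton_branch_points(mask):
    skel = skeletonize(mask.astype(bool))
    # Count skeleton neighbours of each skeleton pixel (8-connectivity).
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbours = ndimage.convolve(skel.astype(int), kernel, mode="constant")
    return skel & (neighbours >= 3)

# Toy example: a small T-shaped silhouette standing in for a hand template.
mask = np.zeros((40, 40), dtype=bool)
mask[5:10, 5:35] = True
mask[5:35, 18:23] = True
print("candidate ligature points:\n", np.argwhere(skeleton_branch_points(mask)))
```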

    Superpixel segmentation based on anisotropic edge strength

    Superpixel segmentation can benefit from the use of an appropriate method to measure edge strength. In this paper, we present such a method based on the first derivative of anisotropic Gaussian kernels. The kernels can capture the position, direction, prominence, and scale of the edge to be detected. We incorporate the anisotropic edge strength into the distance measure between neighboring superpixels, thereby improving the performance of an existing graph-based superpixel segmentation method. Experimental results validate the superiority of our method in generating superpixels over the competing methods. It is also illustrated that the proposed superpixel segmentation method can facilitate subsequent saliency detection.
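    A minimal sketch of the underlying edge-strength idea is given below: the image is filtered with first-derivative-of-Gaussian kernels that are elongated along the edge direction, and the maximum response over orientations is taken. The sigma values, kernel size, and number of orientations are illustrative assumptions and not the paper's settings.

```python
# Hedged sketch of an anisotropic edge-strength measure based on the first
# derivative of anisotropic Gaussian kernels.
import numpy as np
from scipy import ndimage

def anisotropic_dog_kernel(sigma_u=4.0, sigma_v=1.0, theta=0.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates: u runs along the edge, v across it.
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    return -v / sigma_v**2 * g  # derivative across the edge direction

def anisotropic_edge_strength(image, n_orientations=8):
    # Maximum absolute filter response over a bank of oriented kernels.
    responses = [
        np.abs(ndimage.convolve(image.astype(float),
                                anisotropic_dog_kernel(theta=k * np.pi / n_orientations)))
        for k in range(n_orientations)
    ]
    return np.max(responses, axis=0)
```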

    Segmentation of neuroanatomy in magnetic resonance images

    Segmentation in neurological Magnetic Resonance Imaging (MRI) is necessary for volume measurement, feature extraction and for the three-dimensional display of neuroanatomy. This thesis proposes several automated and semi-automated methods which offer considerable advantages over manual methods because of their lack of subjectivity, their data reduction capabilities, and the time savings they give. Work has concentrated on the use of dual echo multi-slice spin-echo data sets in order to take advantage of the intrinsically multi-parametric nature of MRI. Such data is widely acquired clinically, and segmentation therefore does not require additional scans. The literature has been reviewed. Factors affecting image non-uniformity for a modern 1.5 Tesla imager have been investigated. These investigations demonstrate that a robust, fast, automatic three-dimensional non-uniformity correction may be applied to data as a pre-processing step. The merit of using an anisotropic smoothing method for noisy data has been demonstrated. Several approaches to neurological MRI segmentation have been developed. Edge-based processing is used to identify the skin (the major outer contour) and the eyes. Edge focusing, two threshold-based techniques, and a fast radial CSF identification approach are proposed to identify the intracranial region contour in each slice of the data set. Once isolated, the intracranial region is further processed to identify CSF, and, depending upon the MRI pulse sequence used, the brain itself may be sub-divided into grey matter and white matter using semi-automatic contrast enhancement and clustering methods. The segmentation of Multiple Sclerosis (MS) plaques has also been considered. The utility of the stack, a data-driven multi-resolution approach to segmentation, has been investigated, and several improvements to the method suggested. The factors affecting the intrinsic accuracy of neurological volume measurement in MRI have been studied and their magnitudes determined for spin-echo imaging. Geometric distortion - both object dependent and object independent - has been considered, as well as slice warp, slice profile, slice position and the partial volume effect. Finally, the accuracy of the approaches to segmentation developed in this thesis has been evaluated. Intracranial volume measurements are within 5% of expert observers' measurements, white matter volumes within 10%, and CSF volumes consistently lower than the expert observers' measurements due to the observers' inability to take the partial volume effect into account.
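    As one hedged illustration of the clustering step mentioned above, the sketch below groups dual-echo voxel intensities inside an intracranial mask into three classes with k-means. The input arrays, the use of scikit-learn, and the fixed three-class assumption are illustrative only and do not reproduce the thesis's specific contrast enhancement and clustering procedure.

```python
# Hedged sketch: cluster dual-echo intensities inside an intracranial mask into
# three tissue classes (e.g. CSF, grey matter, white matter) with k-means.
import numpy as np
from sklearn.cluster import KMeans

def classify_dual_echo(echo1, echo2, intracranial_mask, n_classes=3):
    # Stack the two echoes into a two-feature vector per intracranial voxel.
    features = np.stack([echo1[intracranial_mask], echo2[intracranial_mask]], axis=1)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)

    # Write cluster labels (1..n_classes) back into the volume; 0 = background.
    out = np.zeros(echo1.shape, dtype=np.uint8)
    out[intracranial_mask] = labels + 1
    return out
```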

    Unsupervised Learning of Edges

    Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, high-level supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion, and more specifically, the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients), and alternate between improving motion estimation and edge detection in turn. Using a large corpus of video data, we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5%). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection.