
    Robotic Goal-Based Semi-Autonomous Algorithms Improve Remote Operator Performance

    The focus of this research was to determine whether reliable goal-based semi-autonomous algorithms can improve remote operator performance. Two semi-autonomous algorithms were examined: visual servoing and visual dead reckoning. Visual servoing uses computer vision techniques to generate movement commands, while visual dead reckoning combines internal properties of the camera with sensor data to estimate the robot's current position from its previous position. This research shows that the semi-autonomous algorithms developed increased performance in a measurable way. An analysis of tracking algorithms for visual servoing was conducted, and the tracking algorithms were enhanced to make them as robust as possible. The developed algorithms were implemented on a currently fielded military robot, and a human-in-the-loop experiment was conducted to measure performance.
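    The visual servoing idea described above can be sketched as a proportional control loop on the pixel error of a tracked feature. This is a hypothetical minimal example, not the thesis implementation; the gain and feature coordinates are illustrative.

```python
import numpy as np

def servo_command(feature_px, goal_px, gain=0.5):
    """Proportional visual servoing step: velocity command from the pixel error
    between a tracked image feature and its goal position."""
    error = np.asarray(goal_px, dtype=float) - np.asarray(feature_px, dtype=float)
    return gain * error

# Drive a feature tracked at (120, 80) toward a goal at (160, 120):
cmd = servo_command((120, 80), (160, 120))   # -> array([20., 20.])
```

    A real controller would map this image-space error through the camera's interaction matrix to robot velocities; the sketch keeps only the feedback structure.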

    A Hierarchical Image Processing Approach for Diagnostic Analysis of Microcirculation Videos

    Knowledge of the microcirculatory system has added significant value to the analysis of tissue oxygenation and perfusion. While developments in videomicroscopy technology have enabled medical researchers and physicians to observe the microvascular system, the available software tools are limited in their ability to determine quantitative features of microcirculation, either automatically or accurately. In particular, microvessel density has been a critical diagnostic measure in evaluating disease progression and a prognostic indicator in various clinical conditions. As a result, automated analysis of the microcirculatory system can be substantially beneficial in various real-time and off-line therapeutic medical applications, such as optimization of resuscitation. This study focuses on the development of an algorithm to automatically segment microvessels, calculate the density of capillaries in microcirculatory videos, and determine the distribution of blood circulation. The proposed technique is divided into four major steps: video stabilization, video enhancement, segmentation, and post-processing. The stabilization step estimates motion and corrects for motion artifacts using an appropriate motion model. Video enhancement improves the visual quality of video frames through preprocessing, vessel enhancement, and edge enhancement. The resulting frames are combined through an adjusted weighted median filter, and the resulting frame is then thresholded using an entropic thresholding technique. Finally, a region growing technique is utilized to correct for the discontinuity of blood vessels. Using the final binary results, the most commonly used measure for the assessment of microcirculation, i.e., Functional Capillary Density (FCD), is calculated. The designed technique is applied to video recordings of healthy and diseased human and animal samples obtained by a MicroScan device based on the Sidestream Dark Field (SDF) imaging modality.
To validate the final results, the calculated FCD values are compared with the results obtained by blind detailed inspection by three medical experts, who used AVA (Automated Vascular Analysis) semi-automated microcirculation analysis software. Since there is neither a fully automated, accurate microcirculation analysis program nor a publicly available annotated database of microcirculation videos, the results acquired by the experts are considered the gold standard. Bland-Altman plots show that there is "good agreement" between the results of the algorithm and those of the gold standard. In summary, the main objective of this study is to eliminate the need for human interaction to edit/correct results, to improve the accuracy of stabilization and segmentation, and to reduce the overall computation time. The proposed methodology impacts the field of computer science through the development of image processing techniques to discover the knowledge in grayscale video frames. The broad impact of this work is to assist physicians, medical researchers, and caregivers in making diagnostic and therapeutic decisions for microcirculatory abnormalities and in studying human microcirculation.
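    The entropic thresholding step mentioned above can be illustrated with Kapur's entropy-based method, one common entropic technique. The abstract does not specify which variant was used, so this is only a hedged sketch of the general idea: pick the gray level that maximizes the combined entropy of the background and foreground distributions.

```python
import numpy as np

def kapur_threshold(gray):
    """Entropic thresholding (Kapur's method): choose the gray level that
    maximizes the summed entropies of the background and foreground."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb == 0 or pf == 0:
            continue
        qb = p[:t][p[:t] > 0] / pb          # background distribution
        qf = p[t:][p[t:] > 0] / pf          # foreground distribution
        h = -np.sum(qb * np.log(qb)) - np.sum(qf * np.log(qf))
        if h > best_h:
            best_h, best_t = h, t
    return best_t
```

    On a bimodal microvessel image, the selected threshold falls between the vessel and background intensity modes, producing the binary map that region growing then refines.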

    Proceedings of the Augmented VIsual Display (AVID) Research Workshop

    The papers, abstracts, and presentations were presented at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.

    Image Processing Algorithms for Diagnostic Analysis of Microcirculation

    Microcirculation has become a key factor in the study and assessment of tissue perfusion and oxygenation. Detection and assessment of the microvasculature using videomicroscopy of the oral mucosa provides a metric of the density of blood vessels in each single frame. Information pertaining to the density of these microvessels within a field of view can be used to quantitatively monitor and assess the changes occurring in tissue oxygenation and perfusion over time. Automated analysis of this information can be used for real-time diagnostic and therapeutic planning in a number of clinical applications, including resuscitation. The objective of this study is to design an automated image processing system to segment microvessels, estimate the density of blood vessels in video recordings, and identify the distribution of blood flow. The proposed algorithm consists of two main stages: video processing and image segmentation. The first step of video processing is stabilization: block matching is applied to the video frames, with similarity measured by cross-correlation coefficients. The main technique used in the segmentation step is multi-thresholding and pixel verification based on calculated geometric and contrast parameters. Segmentation results and differences between video frames are then used to identify the capillaries with blood flow. After categorizing blood vessels as active or passive according to the amount of blood flow, quantitative measures identifying microcirculation are calculated. The algorithm is applied to videos captured from healthy and critically ill humans and animals using the MicroScan Sidestream Dark Field (SDF) imaging technique. Segmentation results were compared and validated through a blind detailed inspection by experts who used a commercial semi-automated image analysis software program, AVA (Automated Vascular Analysis).
The algorithm was found to extract approximately 97% of functionally active capillaries and blood vessels in every frame. The aim of this study is to eliminate human interaction, increase accuracy, and reduce computation time. The proposed method is an entirely automated process that can perform stabilization, pre-processing, segmentation, and microvessel identification without human intervention. The method may allow for the assessment of microcirculatory abnormalities occurring in critically ill and injured patients, including close to real-time determination of the adequacy of resuscitation.
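    The stabilization step described above, block matching scored by cross-correlation coefficients, might look roughly like this minimal sketch: an exhaustive search over integer shifts, scoring each candidate with the correlation coefficient. The block size and search range are assumptions, not values from the study.

```python
import numpy as np

def best_shift(ref, frame, search=3):
    """Exhaustive block matching: find the integer (dy, dx) shift of `frame`
    that best aligns it with `ref`, scored by the correlation coefficient."""
    best, best_c = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            c = np.corrcoef(ref.ravel(), shifted.ravel())[0, 1]
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best
```

    A stabilizer would apply the recovered shift to each frame before segmentation; practical systems restrict matching to a template block rather than the full frame for speed.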

    Navigation and Autonomous Control of MAVs in GPS-Denied Environments

    Ph.D. (Doctor of Philosophy)

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Schätzung dichter Korrespondenzfelder unter Verwendung mehrerer Bilder (Estimation of Dense Correspondence Fields Using Multiple Images)

    Most optical flow algorithms assume pairs of images acquired with an ideal, short exposure time. We present two approaches that use additional images of a scene to estimate highly accurate, dense correspondence fields. In our first approach, we consider video sequences acquired with alternating exposure times, so that a short-exposure image is followed by a long-exposure image that exhibits motion blur. With the help of the two enframing short-exposure images, we can not only decipher the motion information encoded in the long-exposure image, but also estimate occlusion timings, which are a basis for artifact-free frame interpolation. In our second approach, we consider multi-view video sequences, as commonly occur in, e.g., stereoscopic video. As several images capture nearly the same data of a scene, this redundancy can be used to establish more robust and consistent correspondence fields than the consideration of two images permits.

    Mathematically inspired approaches to face recognition in uncontrolled conditions: super resolution and compressive sensing

    Face recognition under uncontrolled conditions using surveillance cameras is becoming essential for establishing the identity of a person at a distance from the camera and for providing safety and security against terrorist attacks, robbery, and crime. Therefore, the performance of face recognition on low-resolution, degraded images, compared with images of high quality and good resolution/size, is considered one of the most challenging tasks and constitutes the focus of this thesis. The work in this thesis is designed to further investigate these issues, with the following main aim: "To investigate face identification from a distance and under uncontrolled conditions by primarily addressing the problem of low-resolution images using existing/modified mathematically inspired super resolution schemes that are based on the emerging new paradigm of compressive sensing and non-adaptive dictionaries based super resolution." We shall firstly investigate and develop the compressive sensing (CS) based sparse representation of a sample image to reconstruct a high-resolution image for face recognition, taking different approaches to constructing CS-compliant dictionaries, such as the Gaussian Random Matrix and the Toeplitz Circular Random Matrix. In particular, our focus is on constructing CS non-adaptive dictionaries (independent of face image information), which contrasts with existing image-learnt dictionaries but satisfies some form of the Restricted Isometry Property (RIP), which is sufficient to comply with the CS theorem regarding the recovery of sparsely represented images. We shall demonstrate that the CS dictionary techniques for resolution enhancement tasks are able to support scalable face recognition schemes under uncontrolled conditions and at a distance.
Secondly, we shall clarify the comparison of the strength of the sufficient CS property for the various types of dictionaries and demonstrate that the image-learnt dictionary falls far short of satisfying the RIP for compressive sensing. Thirdly, we propose dictionaries based on the high-frequency coefficients of the training set and investigate the impact of using these dictionaries on the space of feature vectors of the low-resolution image for face recognition when applied in the wavelet domain. Finally, we test the performance of the developed schemes on CCTV images with an unknown model of degradation, and show that these schemes significantly outperform existing techniques developed for such a challenging task. However, the performance is still not comparable to what can be achieved in a controlled environment, and hence we identify remaining challenges to be investigated in the future.
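    As a rough illustration of the non-adaptive dictionary idea above, a sparse signal can be recovered from measurements taken with a Gaussian random matrix using a generic pursuit algorithm such as Orthogonal Matching Pursuit. This is a standard CS recovery sketch, not the thesis's method; the dimensions, sparsity, and coefficient values are all illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~= A @ x."""
    residual, support = y.astype(float), []
    x_s = np.zeros(0)
    for _ in range(k):
        # pick the dictionary column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected support by least squares
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Hypothetical dimensions: 60 measurements, a 100-atom Gaussian random
# (non-adaptive) dictionary, and a 3-sparse signal.
rng = np.random.default_rng(1)
m, n, k = 60, 100, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian random dictionary
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [2.0, -2.0, 1.5]
y = A @ x_true                             # compressive measurements
x_hat = omp(A, y, k)                       # sparse recovery
```

    The point of the non-adaptive construction is that `A` is drawn independently of any face data yet still satisfies the RIP with high probability, which is what makes such generic recovery possible.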