
    A sliding mode approach to visual motion estimation

    The problem of estimating motion from a sequence of images has been a major research theme in machine vision for many years and remains one of the most challenging. In this work, we use sliding mode observers to estimate the motion of a moving body with the aid of a CCD camera. We consider a variety of dynamical systems which arise in machine vision applications and develop a novel identification procedure for the estimation of both constant and time-varying parameters. The basic procedure introduced for parameter estimation is to recast the image feature dynamics linearly in terms of the unknown parameters, construct a sliding mode observer that produces asymptotically correct estimates of the observed image features, and then use the "equivalent control" to explicitly compute the parameters. Much of our analysis has been substantiated by computer simulations and real experiments.
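    To make the procedure concrete, here is a minimal numerical sketch of the idea for a scalar system: the gains, regressor, and parameter value are illustrative and not taken from the paper. The dynamics are linear in an unknown constant parameter, a sliding mode observer drives the estimation error to zero, and low-pass filtering the switching term recovers the "equivalent control", from which the parameter is solved.

```python
import numpy as np

# Illustrative sketch: feature dynamics linear in an unknown constant
# parameter theta:  y_dot = w(t) * theta, with known regressor w(t).
theta_true = 2.0                      # unknown parameter to recover
dt, T = 1e-4, 5.0
t = np.arange(0.0, T, dt)
w = 1.0 + 0.5 * np.sin(t)             # known regressor, bounded away from 0

M = 10.0                              # observer gain, > |w * theta| so sliding occurs
tau = 0.01                            # low-pass time constant for the equivalent control

y, y_hat, v_eq = 0.0, 0.0, 0.0
theta_est = np.zeros_like(t)
for k in range(len(t)):
    v = M * np.sign(y - y_hat)        # switching injection term
    y += w[k] * theta_true * dt       # true feature dynamics (Euler step)
    y_hat += v * dt                   # sliding mode observer
    v_eq += (v - v_eq) * dt / tau     # filtered switching ~ equivalent control
    theta_est[k] = v_eq / w[k]        # solve w * theta = v_eq for theta

print(f"final estimate: {theta_est[-1]:.3f} (true value {theta_true})")
```

    Once the observer reaches the sliding surface, the filtered injection term tracks w(t) * theta, so the parameter estimate converges regardless of the switching chatter.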

    A New 3-D automated computational method to evaluate in-stent neointimal hyperplasia in in-vivo intravascular optical coherence tomography pullbacks

    Abstract. Detection of stent struts imaged in vivo by optical coherence tomography (OCT) after percutaneous coronary interventions (PCI) and quantification of in-stent neointimal hyperplasia (NIH) are important. In this paper, we present a new computational method to assist the physician in assessing and comparing new (drug-eluting) stents. We developed a new algorithm for stent strut detection and utilized splines to reconstruct the lumen and stent boundaries, which provides automatic measurements of NIH thickness and of lumen and stent area. Our approach is based on the detection of the stent struts' unique characteristics: a bright reflection and the shadow cast behind it. Furthermore, we present, for the first time to our knowledge, a rotation correction method applied across OCT cross-section images for 3D reconstruction and visualization of the reconstructed lumen and stent boundaries, enabling further analysis along the longitudinal dimension of the coronary artery. Our experiments on OCT cross-sections taken from 7 patients presenting varying degrees of NIH after PCI show good agreement between the computer method and expert evaluations: Bland-Altman analysis revealed a mean difference of 0.11 ± 0.70 mm² for lumen cross-section area and 0.10 ± 1.28 mm² for stent cross-section area.
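    The "bright reflection plus shadow" cue can be illustrated with a toy one-dimensional sketch; the thresholds, shadow length, and synthetic A-line below are invented for illustration and are not the paper's implementation.

```python
import numpy as np

# Toy sketch: flag a strut candidate on a single radial A-line when a bright
# peak is immediately followed by a long dark region (the cast shadow).
def detect_strut(a_line, bright_thresh=0.8, shadow_thresh=0.2, shadow_len=20):
    """Return the index of a candidate strut reflection, or None."""
    for i in range(len(a_line) - shadow_len):
        tail = a_line[i + 1 : i + 1 + shadow_len]
        if a_line[i] >= bright_thresh and tail.mean() <= shadow_thresh:
            return i
    return None

# Synthetic A-line: tissue speckle, a bright strut at depth 120, shadow behind.
rng = np.random.default_rng(0)
a_line = 0.4 + 0.1 * rng.standard_normal(256)
a_line[120] = 1.0                     # specular strut reflection
a_line[121:200] = 0.05                # shadow cast behind the strut
print(detect_strut(np.clip(a_line, 0.0, 1.0)))   # -> 120
```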

    Beautiful and damned. Combined effect of content quality and social ties on user engagement

    User participation in online communities is driven by the intertwinement of the social network structure with the crowd-generated content that flows along its links. These aspects are rarely explored jointly and at scale. By looking at how users generate and access pictures of varying beauty on Flickr, we investigate how the production of quality impacts the dynamics of online social systems. We develop a deep learning computer vision model to score images according to their aesthetic value and we validate its output through crowdsourcing. By applying it to over 15B Flickr photos, we study for the first time how image beauty is distributed over a large-scale social system. Beautiful images are evenly distributed in the network, although only a small core of people get social recognition for them. To study the impact of exposure to quality on user engagement, we set up matching experiments aimed at detecting causality from observational data. Exposure to beauty is double-edged: following people who produce high-quality content increases one's probability of uploading better photos; however, an excessive imbalance between the quality generated by a user and the user's neighbors leads to a decline in engagement. Our analysis has practical implications for improving link recommender systems. Comment: 13 pages, 12 figures; final version published in IEEE Transactions on Knowledge and Data Engineering.
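    The matching idea can be sketched as follows; the covariates, exposure model, and effect size are all synthetic and serve only to illustrate nearest-neighbour matching on observational data, not the paper's actual experimental design.

```python
import numpy as np

# Pair each "exposed" user (follows high-quality producers) with the most
# similar unexposed user on observed covariates, then compare outcomes.
rng = np.random.default_rng(1)
n = 1000
covariates = rng.standard_normal((n, 3))      # e.g. activity, tenure, degree
exposed = rng.random(n) < 0.3                 # 30% of users are "exposed"
engagement = (covariates @ np.array([0.5, 0.2, -0.1])
              + 0.4 * exposed                 # true effect built into the toy data
              + 0.1 * rng.standard_normal(n))

treated, control = np.where(exposed)[0], np.where(~exposed)[0]
effects = []
for i in treated:
    d = np.linalg.norm(covariates[control] - covariates[i], axis=1)
    j = control[np.argmin(d)]                 # nearest-neighbour match
    effects.append(engagement[i] - engagement[j])
print(f"matched estimate of exposure effect: {np.mean(effects):.2f}")  # close to 0.4
```

    Matching on covariates removes the confounding that would bias a naive comparison of exposed and unexposed users, which is the essence of causal inference from observational data.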

    Imaging of solid flow in a gravity flow rig using infra-red tomography

    Information on flow regimes is vital in the analysis and measurement of industrial process flow. Almost all currently available methods of measuring the flow of two-component mixtures in industrial pipelines endeavor to average a property of the flow over the pipe cross-section. They do not give information on the nature of the flow regime, and they are unsuitable for accurate measurement where the component distribution varies spatially or in time. The overall aim of this project is to investigate the use of an optical tomography method based on infra-red sensors for real-time monitoring of solid particles conveyed by a rotary valve in a pneumatic pipeline. The infra-red tomography system comprises two distinct development processes, hardware and software. The hardware development covers infra-red sensor selection, fixtures, signal conditioning circuits, and control circuits. The software development involves the data acquisition system, sensor modeling, image reconstruction algorithms, and programming for a tomographic display that provides solids flow information in the pipeline, such as concentration and velocity profiles. Collimating the radiated beam from a light source and passing it through the flow regime ensures that the intensity of radiation detected on the opposite side is linked to the distribution and absorption coefficients of the different phases in the path of the beam. The information is obtained from the combination of two orthogonal and two diagonal light projection systems and 30 cycles of real-time measurements. The flow information captured by the upstream and downstream infra-red sensors is digitized by the data acquisition system (DAS) before being passed to a computer for analysis, such as image reconstruction and cross-correlation processing, which yields velocity profiles represented by 16 × 16 pixels mapped onto the pipe cross-section. This project successfully developed and tested an infra-red tomography system that displays two-dimensional images of concentration and velocity.
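    The cross-correlation step can be illustrated with a short sketch; the sampling rate, sensor spacing, and signals below are assumed for illustration and are not the project's actual values. The transit time between the upstream and downstream sensors is the lag that maximises their cross-correlation, and velocity follows from the known sensor spacing.

```python
import numpy as np

# Estimate solids velocity from the time lag that maximises the
# cross-correlation between upstream and downstream sensor signals.
fs = 1000.0                  # sampling rate, Hz (illustrative)
L = 0.05                     # sensor spacing along the pipe, m (illustrative)
rng = np.random.default_rng(2)
upstream = rng.standard_normal(2048)
true_lag = 25                # samples; implies velocity = L / (25 / fs) = 2 m/s
downstream = np.roll(upstream, true_lag) + 0.1 * rng.standard_normal(2048)

corr = np.correlate(downstream, upstream, mode="full")
lag = np.argmax(corr) - (len(upstream) - 1)   # delay in samples
velocity = L / (lag / fs)
print(f"estimated transit lag: {lag} samples, velocity: {velocity:.2f} m/s")
```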

    Analysis of Edge Detection Technique for Hardware Realization

    Edge detection plays an important role in image processing and computer vision applications. Different edge detection techniques with distinct criteria have been proposed in the literature. Thus, an evaluation of different edge detection techniques is essential to measure their effectiveness over a wide range of natural images in varying applications. Several performance indices for the quantitative evaluation of edge detectors may be found in the literature, among which Edge Mis-Match error (EMM), F-Measure (FM), Figure of Merit (FOM) and the Precision-Recall (PR) curve are the most effective. Several experiments on different databases containing a wide range of natural and synthetic images illustrate the effectiveness of the Canny edge detector over other detectors under varying conditions. Moreover, due to the ever-increasing demand for high-speed and time-critical tasks in many image processing applications, we have implemented an efficient hardware architecture for the Canny edge detector in VHDL. The implementation adopts the massively parallel architecture of a Field Programmable Gate Array (FPGA) to accelerate edge detection via Canny's algorithm. In this dissertation, we have simulated the proposed architecture in the ModelSim 10.4a student edition to demonstrate the potential of parallel processing for edge detection. This analysis and implementation may encourage, and serve as a basic building block for, several complex computer vision applications.
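    As a software reference for the stages the hardware realises (Gaussian smoothing, gradient computation, non-maximum suppression, hysteresis thresholding), a minimal OpenCV sketch follows; the test image and the two hysteresis thresholds are illustrative and are not taken from the dissertation.

```python
import cv2
import numpy as np

# Build a simple grayscale test image: a filled bright square on black.
image = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(image, (16, 16), (48, 48), color=255, thickness=-1)

image = cv2.GaussianBlur(image, (5, 5), sigmaX=1.4)        # noise suppression
edges = cv2.Canny(image, threshold1=50, threshold2=150)    # gradient + NMS + hysteresis
print(f"edge pixels detected: {int(np.count_nonzero(edges))}")
```

    In the FPGA realisation, each of these stages can be pipelined and the per-pixel operations executed in parallel, which is where the hardware speed-up over a sequential software implementation comes from.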

    SLCV–a supervised learning—computer vision combined strategy for automated muscle fibre detection in cross-sectional images

    Muscle fibre cross-sectional area (CSA) is an important biomedical measure used to determine the structural composition of skeletal muscle, and it is relevant for tackling research questions in many different fields of research. To date, time-consuming and tedious manual delineation of muscle fibres is often used to determine the CSA. Few methods are able to automatically detect muscle fibres in muscle fibre cross-sections to quantify CSA, due to the challenges posed by variation of brightness and noise in the staining images. In this paper, we introduce the supervised learning-computer vision combined pipeline (SLCV), a robust semi-automatic pipeline for muscle fibre detection, which combines supervised learning (SL) with computer vision (CV). SLCV is adaptable to different staining methods and is quickly and intuitively tunable by the user. We are the first to perform an error analysis with respect to cell count and area, based on which we compare SLCV to the best purely CV-based pipeline in order to identify the contributions of the SL and CV steps to muscle fibre detection. Our results, obtained on 27 fluorescence-stained cross-sectional images of varying staining quality, suggest that combining SL and CV performs significantly better than both SL-based and CV-based methods with regard to both the cell separation error and the area reconstruction error. Furthermore, applying SLCV to our test set images yielded fibre detection results of very high quality, with average sensitivity values of 0.93 or higher on different cluster sizes and an average Dice similarity coefficient of 0.9778.
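    For reference, the Dice similarity coefficient reported above can be computed as in the following sketch; the two masks are synthetic and serve only to show the metric.

```python
import numpy as np

# Dice similarity coefficient: DSC = 2 * |A & B| / (|A| + |B|) for boolean masks.
def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# Synthetic predicted and reference fibre masks with partial overlap.
pred = np.zeros((100, 100), dtype=bool); pred[20:60, 20:60] = True
ref = np.zeros((100, 100), dtype=bool);  ref[25:60, 25:60] = True
print(f"Dice: {dice(pred, ref):.4f}")   # ~0.87 for these toy masks
```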