
    A survey of exemplar-based texture synthesis

    Full text link
    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; a random sampling conditioned on this signature then produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure, which stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their respective hurdles. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
    Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMR
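    For intuition, here is a minimal, dependency-light sketch of the patch re-arrangement ("copy-paste") idea described above: patches are copied from the exemplar and each new patch is chosen to minimize the overlap error with what has already been synthesized. All names and parameter values are illustrative; this is not any specific method from the survey, and it omits boundary cuts and blending.

```python
import numpy as np

def synthesize(exemplar, out_size=256, patch=32, overlap=8, rng=None):
    """Naive patch re-arrangement ("copy-paste") texture synthesis sketch.

    Each new patch is copied from the exemplar, chosen to minimize the SSD
    against the already-synthesized overlap strip (no boundary cut/blending).
    """
    rng = np.random.default_rng(rng)
    h, w = exemplar.shape[:2]
    step = patch - overlap
    out = np.zeros((out_size, out_size) + exemplar.shape[2:], exemplar.dtype)

    # Pre-extract candidate patches at random locations in the exemplar.
    ys = rng.integers(0, h - patch, size=2000)
    xs = rng.integers(0, w - patch, size=2000)
    candidates = np.stack([exemplar[y:y + patch, x:x + patch]
                           for y, x in zip(ys, xs)])

    for oy in range(0, out_size - patch + 1, step):
        for ox in range(0, out_size - patch + 1, step):
            if oy == 0 and ox == 0:
                out[:patch, :patch] = candidates[0]   # seed the first patch
                continue
            # Compare candidates against the already-filled overlap region.
            target = out[oy:oy + patch, ox:ox + patch].astype(np.float64)
            mask = np.zeros((patch, patch), bool)
            if ox > 0:
                mask[:, :overlap] = True
            if oy > 0:
                mask[:overlap, :] = True
            diffs = (candidates.astype(np.float64) - target) ** 2
            if diffs.ndim == 4:                       # colour: sum over channels
                diffs = diffs.sum(-1)
            errors = (diffs * mask).sum(axis=(1, 2))
            out[oy:oy + patch, ox:ox + patch] = candidates[int(errors.argmin())]
    return out
```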

    Passive method for 3D reconstruction of human jaw: theory and application.

    Get PDF
    Oral dental applications based on visual data pose various challenges. There are problems with lighting (the effects of saliva, tooth discoloration, gum texture, and other sources of specularity) and motion (even inevitable slight motions of the upper/lower jaw may lead to errors far beyond the desired tolerance of sub-millimeter accuracy). Nowadays, dental CAM systems have become more accurate at obtaining the geometric data of the jaw from an active sensor (laser scanner). However, they have not met the expectations and needs of dental professionals in many ways. The probes in these systems are bulky, even in their newer versions, and are hard to maneuver, and multiple scans are required to get full coverage of the oral cavity. In addition, the dominant drawback of these systems is their cost. Stereo-based 3D reconstruction provides the highest accuracy among vision systems of this type. However, its performance, both in accuracy and in the number of reconstructed 3D points, depends on the type of application and the quality of the data acquired from the object of interest. Therefore, in this study, stereo-based 3D reconstruction is evaluated for the dental application. The handpiece holding the sensors must reach areas inside the oral cavity where the gap between a tooth in the upper jaw and the opposing tooth in the lower jaw is very small; in these areas, stereo algorithms cannot reconstruct the tooth because the distance between the optical sensors and the object of interest (the tooth), as well as the configuration of the optical sensors, contradicts the geometric constraints of stereo-based 3D reconstruction. Therefore, the configuration of the optical sensors, as well as the number of sensors in the handpiece, is determined based on the morphology of the tooth surfaces. In addition to the 3D reconstruction, a panoramic view of a complete arch of human teeth is produced as an application of dental imaging. Due to the scarcity of features on tooth surfaces, the tooth surface normals are extracted using shape from shading. The extracted surface normals contain many imprecise values because of the oral environment; hence, an algorithm is formulated to rectify these values and generate normal maps. The normal maps reveal the geometric properties of the imaged region: area, boundary, and shape. Furthermore, the problem of unrestricted camera movement is investigated: the camera may move along the jaw curve with different angles and distances due to hand shake. To overcome this problem, each frame is tested after warping, and only correct frames are used to generate the panoramic view. The proposed approach outperforms the state-of-the-art auto-stitching method.
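    The geometric limits mentioned above follow from the basic rectified-stereo relation between disparity, focal length, and baseline. The sketch below, with purely hypothetical numbers rather than the thesis's actual sensor geometry, illustrates how a short baseline and a close object determine the per-pixel depth resolution relevant to the sub-millimeter tolerance.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Classic rectified-stereo relation: Z = f * B / d.

    Illustrative only; the intra-oral sensor configuration discussed in the
    thesis is not reproduced here.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    return np.where(d > 0, focal_px * baseline_mm / d, np.inf)

# With a short baseline and a close object, the depth change per pixel of
# disparity becomes the limiting factor (hypothetical numbers).
f, B = 800.0, 5.0                      # focal length [px], baseline [mm]
for Z in (10.0, 20.0, 40.0):           # object distance [mm]
    d = f * B / Z                      # expected disparity [px]
    dZ = Z - f * B / (d + 1.0)         # depth step for +1 px of disparity
    print(f"Z={Z:5.1f} mm  disparity={d:6.1f} px  per-pixel depth step={dZ:.3f} mm")
```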

    Universal stitching parameters and their conditions for selecting good image pairs in feature-based image stitching

    Get PDF
    Department of Biomedical Engineering. Image stitching is a well-known method for making a panoramic image with a wide field of view and high resolution. It has been used in various fields such as digital maps, gigapixel imaging, and 360-degree cameras. However, commercial stitching tools often fail, require a lot of processing time, and only work on certain images. The problems of existing tools are mainly caused by trying to stitch the wrong image pair. To overcome these problems, it is important to select suitable image pairs for stitching in advance. Nevertheless, there are no universal standards for judging good image pairs. Moreover, the derived stitching algorithms are not compatible with each other because each conforms to its own criteria. Here, we present universal stitching parameters and their conditions for selecting good image pairs. The proposed stitching parameters can be easily calculated through analysis of the corresponding features and the homography, which are basic elements in feature-based image stitching algorithms. In order to specify the conditions on the stitching parameters, we devised a new method to calculate stitching accuracy, grading stitching results into three classes: good, bad, and fail. With the classed stitching results, we could check how the values of the stitching parameters differ in each class. Through experiments with large datasets, the most valid parameter for each class was identified as the filtering level, which is calculated in the corresponding-feature analysis. In addition, supplemental experiments were conducted with various datasets to demonstrate the validity of the filtering level. As a result of our study, the universal stitching parameters can judge the success of stitching, making it possible to prevent stitching errors through a parameter verification test in advance. This work can greatly contribute to guiding the creation of high-performance and high-efficiency stitching software by applying the proposed stitching conditions.
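    The abstract derives its parameters from corresponding features and the homography. The sketch below gathers comparable statistics for an image pair with OpenCV; using the RANSAC inlier ratio as a stand-in for the paper's "filtering level" is an assumption, not the authors' definition, and the helper name pair_quality is hypothetical.

```python
import cv2
import numpy as np

def pair_quality(img1, img2, ratio=0.75):
    """Feature/homography statistics for deciding whether an image pair is
    worth stitching. The thresholds and the inlier-ratio proxy are assumptions."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(d1, d2, k=2)
    # Lowe's ratio test to keep only distinctive correspondences.
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) < 8:
        return None

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return None

    return {
        "matches": len(good),
        "inlier_ratio": float(inlier_mask.sum()) / len(good),
        "homography": H,
    }
```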

    Streaming visualisation of quantitative mass spectrometry data based on a novel raw signal decomposition method

    Get PDF
    As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data is stored as contiguous spectra. Recall of individual spectra is quick, but panoramas, zooming, and panning across whole datasets necessitate processing/memory overheads impractical for interactive use. Moreover, visualisation is challenging if significant quantification data is missing due to data-dependent acquisition of MS/MS spectra. In order to tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data is modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and the overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/
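    As a rough illustration of the signal-decomposition idea, the sketch below fits B-spline coefficients to a synthetic 1D spectrum by dense least squares using SciPy. The sparse selection from an over-complete dictionary, the 2D surface model, and the R-tree streaming layout of seaMass are not reproduced here; all data and knot choices are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design_matrix(x, knots, degree=3):
    """Evaluate one cubic B-spline basis function per column at the points x."""
    n_basis = len(knots) - degree - 1
    cols = []
    for i in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[i] = 1.0
        cols.append(BSpline(knots, coeffs, degree)(x))
    return np.column_stack(cols)

# Illustrative 1D "spectrum": two Gaussian peaks on an m/z axis (synthetic data).
mz = np.linspace(400.0, 410.0, 500)
intensity = (np.exp(-0.5 * ((mz - 402.3) / 0.05) ** 2) * 1e5
             + np.exp(-0.5 * ((mz - 406.8) / 0.05) ** 2) * 4e4)

# Clamped, uniform knot vector over the m/z range (repeated ends for degree 3).
interior = np.linspace(mz[0], mz[-1], 120)
knots = np.concatenate([[mz[0]] * 3, interior, [mz[-1]] * 3])

A = bspline_design_matrix(mz, knots)
weights, *_ = np.linalg.lstsq(A, intensity, rcond=None)   # dense least squares
reconstruction = A @ weights
print("basis functions:", A.shape[1],
      " max abs error:", np.abs(reconstruction - intensity).max())
```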

    Connectomic analysis of the input to the principal cells of the mammalian cerebral cortex

    Get PDF

    Comparing Feature Detectors: A bias in the repeatability criteria, and how to correct it

    Full text link
    Most computer vision applications rely on algorithms that find local correspondences between different images. These algorithms detect and compare stable local invariant descriptors centered at scale-invariant keypoints. Because of the importance of the problem, new keypoint detectors and descriptors are constantly being proposed, each one claiming to perform better than (or to be complementary to) the preceding ones. This raises the question of a fair comparison between very diverse methods. This evaluation has mainly been based on a repeatability criterion for the keypoints under a series of image perturbations (blur, illumination, noise, rotations, homotheties, homographies, etc.). In this paper, we argue that the classic repeatability criterion is biased towards algorithms producing redundant overlapped detections. To compensate for this bias, we propose a variant of the repeatability rate that takes the overlap of the descriptors into account. We apply this variant to revisit the popular benchmark by Mikolajczyk et al. on classic and new feature detectors. Experimental evidence shows that the hierarchy of these feature detectors is severely disrupted by the amended comparator.
    Comment: Fixed typo in affiliation
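    A simplified sketch of the repeatability idea follows: the classic rate counts keypoints in one image that have a sufficiently overlapping keypoint in the other, and a deduplicated variant prevents several detections from being credited to the same reference keypoint, removing the reward for redundant overlapped detections. The circular-region overlap and the normalization are simplifications, not the paper's exact amended criterion.

```python
import numpy as np

def iou_circles(c1, c2):
    """Approximate region overlap (IoU) between two circular keypoint regions
    given as (x, y, radius)."""
    d = np.hypot(c1[0] - c2[0], c1[1] - c2[1])
    r1, r2 = c1[2], c2[2]
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        inter = np.pi * min(r1, r2) ** 2
    else:
        a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        lens = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - lens
    union = np.pi * (r1**2 + r2**2) - inter
    return inter / union

def repeatability(kps_a, kps_b_projected, thr=0.6, deduplicate=False):
    """Fraction of keypoints in image A with a sufficiently overlapping keypoint
    in image B (B's keypoints already projected into A's frame).
    With deduplicate=True each B keypoint can be matched at most once, which
    removes the reward for redundant overlapped detections."""
    used = set()
    repeated = 0
    for ka in kps_a:
        best_j, best_iou = None, 0.0
        for j, kb in enumerate(kps_b_projected):
            if deduplicate and j in used:
                continue
            iou = iou_circles(ka, kb)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_iou >= thr:
            repeated += 1
            if deduplicate:
                used.add(best_j)
    return repeated / max(len(kps_a), 1)
```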

    Discriminative learning of local image descriptors

    Get PDF
    In this paper, we explore methods for learning local image descriptors from training data. We describe a set of building blocks for constructing descriptors which can be combined together and jointly optimized so as to minimize the error of a nearest-neighbor classifier. We consider both linear and nonlinear transforms with dimensionality reduction, and make use of discriminant learning techniques such as Linear Discriminant Analysis (LDA) and Powell minimization to solve for the parameters. Using these techniques, we obtain descriptors that exceed state-of-the-art performance with low dimensionality. In addition to new experiments and recommendations for descriptor learning, we are also making available a new and realistic ground-truth dataset based on multiview stereo data.
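    As a loose illustration of parameter learning by Powell minimization, the sketch below tunes per-dimension weights of a descriptor distance to reduce the match/non-match classification error on synthetic pairs. The data, the diagonal weighting, and the fixed threshold are assumptions for illustration only, not the paper's building blocks or ground-truth dataset; the raw error rate is also non-smooth, so a surrogate loss would usually be preferable in practice.

```python
import numpy as np
from scipy.optimize import minimize

def pair_errors(log_weights, desc_a, desc_b, labels, threshold=1.0):
    """Classification error of a thresholded, weighted L2 distance on
    match (label=1) / non-match (label=0) descriptor pairs."""
    w = np.exp(log_weights)                      # keep per-dimension weights positive
    dist = np.sqrt((((desc_a - desc_b) ** 2) * w).sum(axis=1))
    predicted_match = dist < threshold
    return np.mean(predicted_match != labels.astype(bool))

# Synthetic stand-in for labelled descriptor pairs (not the paper's dataset).
rng = np.random.default_rng(0)
dim, n = 16, 2000
desc_a = rng.normal(size=(n, dim))
noise = rng.normal(scale=0.3, size=(n, dim))
labels = rng.integers(0, 2, size=n)
# Matching pairs differ by small noise; non-matching pairs are unrelated.
desc_b = np.where(labels[:, None] == 1, desc_a + noise, rng.normal(size=(n, dim)))

result = minimize(pair_errors, x0=np.zeros(dim),
                  args=(desc_a, desc_b, labels), method="Powell")
print("error before:", pair_errors(np.zeros(dim), desc_a, desc_b, labels))
print("error after: ", result.fun)
```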

    Classic Mosaics and Visual Correspondence via Graph-Cut based Energy Optimization

    Get PDF
    Computer graphics and computer vision were traditionally two distinct research fields focusing on opposite topics. Lately, they have been increasingly borrowing ideas and tools from each other. In this thesis, we investigate two problems in computer vision and graphics that rely on the same tool, namely energy optimization with graph cuts. In the area of computer graphics, we address the problem of generating artificial classic mosaics, still and animated. The main purpose of artificial mosaics is to help a user create digital art. First, we reformulate our previous static mosaic work in a more principled global optimization framework. Then, relying on our still-mosaic algorithm, we develop a method for producing animated mosaics directly from real video sequences, which we believe is the first such method. Our mosaic animation style is uniquely expressive. Our method estimates the motion of the pixels in the video and renders the frames with a mosaic effect based on both the colour and the motion information from the input video. This algorithm relies extensively on our novel motion segmentation approach, which is a computer vision problem. To improve the quality of our animated mosaics, we need to improve the motion segmentation algorithm. Since motion and stereo problems have a similar setup, we start with the problem of finding visual correspondence for stereo, which has the advantage of having datasets with ground truth, useful for evaluation. Most previous methods for stereo correspondence do not provide any measure of reliability in their estimates. We aim to find the regions for which correspondence can be determined reliably. Our main idea is to find corresponding regions that have a sufficiently strong texture cue on the boundary, since texture is a reliable cue for matching. Unlike previous work, we allow the disparity range within each such region to vary smoothly, instead of being constant. This produces blob-like semi-dense visual features for which we have high confidence in their estimated ranges of disparities.
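    Both problems above are posed as minimization of a data-plus-smoothness energy. The sketch below writes down such an energy for a binary labeling and minimizes it greedily with iterated conditional modes (ICM) to stay dependency-free; a true graph cut would find the global optimum for this submodular binary case. The cost terms are placeholders, not the thesis's actual energies.

```python
import numpy as np

def labeling_energy(labels, data_cost, smoothness):
    """E(L) = sum_p D_p(L_p) + smoothness * sum over 4-neighbours of [L_p != L_q]."""
    h, w, _ = data_cost.shape
    unary = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    pairwise = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return unary + smoothness * pairwise

def icm_binary(data_cost, smoothness=1.0, sweeps=5):
    """Greedy ICM minimization of the binary Potts energy above (illustrative,
    not a graph-cut solver)."""
    h, w, _ = data_cost.shape
    labels = data_cost.argmin(axis=2)
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], np.inf
                for l in (0, 1):
                    e = data_cost[y, x, l]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w:
                            e += smoothness * (labels[ny, nx] != l)
                    if e < best_e:
                        best, best_e = l, e
                labels[y, x] = best
    return labels

# Tiny synthetic example: noisy binary image, data cost = squared difference to each label.
rng = np.random.default_rng(0)
truth = np.zeros((40, 40), int)
truth[10:30, 10:30] = 1
observed = truth + rng.normal(scale=0.4, size=truth.shape)
data_cost = np.stack([(observed - 0.0) ** 2, (observed - 1.0) ** 2], axis=2)

labels = icm_binary(data_cost, smoothness=0.5)
print("energy:", labeling_energy(labels, data_cost, 0.5),
      " label errors:", int((labels != truth).sum()))
```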

    Tracking and Mapping in Medical Computer Vision: A Review

    Full text link
    As computer vision algorithms are becoming more capable, their applications in clinical systems will become more pervasive. These applications include diagnostics such as colonoscopy and bronchoscopy, guiding biopsies, minimally invasive interventions and surgery, automating instrument motion, and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing and applying algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. We then review datasets provided in the field and the clinical needs therein. Then, we delve in depth into the algorithmic side and summarize recent developments, which should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We focus on algorithms for deformable environments while also reviewing the essential building blocks in rigid tracking and mapping, since there is a large amount of crossover in methods. Finally, we discuss the current state of the tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications in the field. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and more focus needs to be put into collecting datasets for training and evaluation.
    Comment: 31 pages, 17 figures
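    As one example of the rigid tracking building blocks the review covers, the sketch below recovers a camera pose from synthetic 2D-3D correspondences with OpenCV's RANSAC PnP solver. All values are synthetic and illustrative; this is not a method from any specific surveyed paper.

```python
import cv2
import numpy as np

# Synthetic example: recover a known camera pose from noisy 2D-3D correspondences,
# a typical building block of rigid camera tracking pipelines.
rng = np.random.default_rng(1)
object_points = rng.uniform(-0.05, 0.05, size=(50, 3)).astype(np.float64)
object_points[:, 2] += 0.3                     # points ~30 cm in front of the camera

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                             # no lens distortion in this toy setup
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([0.01, 0.005, 0.02])

image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)
image_points += rng.normal(scale=0.5, size=image_points.shape)   # pixel noise

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, dist,
                                             reprojectionError=3.0)
print("success:", ok,
      " inliers:", len(inliers),
      " translation error:", np.linalg.norm(tvec.ravel() - tvec_true))
```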