
    Improving accuracy and efficiency of mutual information for multi-modal retinal image registration using adaptive probability density estimation

    Mutual information (MI) is a popular similarity measure for registering images from different modalities. MI compares two images statistically by computing entropy from the probability distribution of the image data, so an accurate registration depends on an accurate estimate of the true underlying probability distribution. Within the statistics literature, many methods have been proposed for finding the 'optimal' probability density, aiming to improve the estimate through optimal histogram bin-size selection. This raises the common question of how many bins should actually be used when constructing a histogram, to which there is no definitive answer. The question has received little attention in the MI literature, yet it is critical to the effectiveness of the algorithm. The purpose of this paper is to highlight this fundamental element of the MI algorithm. We present a comprehensive study that introduces methods from the statistics literature and adapts them for image registration. We demonstrate this work on the registration of multi-modal retinal images: colour fundus photographs and scanning laser ophthalmoscope images. Registering these modalities offers significant benefits for early glaucoma detection, yet traditional registration techniques fail to perform sufficiently well. We find that adaptive probability density estimation has a substantial impact on registration accuracy and runtime, improving over traditional binning techniques. © 2013 Elsevier Ltd
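
    A concrete illustration of the binning question raised in this abstract: the sketch below (Python/NumPy, not the paper's implementation) computes histogram-based MI between two co-registered images, with the joint-histogram bin counts either fixed by the caller or chosen adaptively by the Freedman-Diaconis rule, which is just one of the estimators such a study might consider.

        # Minimal sketch, assuming two same-sized greyscale images as NumPy arrays.
        import numpy as np

        def freedman_diaconis_bins(x):
            """Adaptive bin count for a 1-D sample (Freedman-Diaconis rule)."""
            x = np.asarray(x, dtype=float).ravel()
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            if iqr == 0:
                return 1
            width = 2.0 * iqr / len(x) ** (1.0 / 3.0)
            return max(1, int(np.ceil((x.max() - x.min()) / width)))

        def mutual_information(img_a, img_b, bins_a=None, bins_b=None):
            """Histogram-based MI; bin counts default to the adaptive rule above."""
            a, b = img_a.ravel(), img_b.ravel()
            bins_a = bins_a or freedman_diaconis_bins(a)
            bins_b = bins_b or freedman_diaconis_bins(b)
            joint, _, _ = np.histogram2d(a, b, bins=(bins_a, bins_b))
            p_ab = joint / joint.sum()
            p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
            p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
            nz = p_ab > 0
            return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

    With hypothetical arrays fundus and slo, mutual_information(fundus, slo) uses the adaptive bin counts, while mutual_information(fundus, slo, 32, 32) reproduces a fixed 32x32 joint histogram; the two choices can yield noticeably different MI values and hence different registration behaviour.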

    Excitability in autonomous Boolean networks

    We demonstrate theoretically and experimentally that excitable systems can be built with autonomous Boolean networks. Their experimental implementation is realized with asynchronous logic gates on a reconfigurable chip. When these excitable systems are assembled into time-delay networks, their dynamics display nanosecond time-scale spike synchronization patterns that are controllable in period and phase. Comment: 6 pages, 5 figures, accepted in Europhysics Letters (epljournal.edpsciences.org)

    Between images and built form: Automating the recognition of standardised building components using deep learning

    Building on the richness of recent contributions in the field, this paper presents a state-of-the-art CNN analysis method for automating the recognition of standardised building components in modern heritage buildings. At the turn of the twentieth century, manufactured building components became widely advertised for specification by architects. Consequently, a form of standardisation across various typologies began to take place. During this era of rapid economic and industrialised growth, many forms of public building were erected. This paper seeks to demonstrate a method for informing the recognition of such elements using deep learning to recognise 'families' of elements across a range of buildings, in order to retrieve and recognise their technical specifications from the contemporary trade literature. The method is illustrated through the case of Carnegie Public Libraries in the UK, which provides a unique but ubiquitous platform from which to explore the potential for the automated recognition of manufactured standard architectural components. The aim of enhancing this knowledge base is to use the degree to which these were standardised originally as a means to inform and so support their ongoing care, but also that of many other contemporary buildings. Although these libraries are numerous, they are maintained at a local level and, as such, their shared challenges for maintenance remain unknown to one another. Additionally, this paper presents a methodology to indirectly retrieve useful indicators and semantics, relating to emerging HBIM families, by applying deep learning to a varied range of architectural imagery.
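
    The abstract gives no implementation detail, but recognition of component 'families' of this kind is commonly built by fine-tuning a CNN pretrained on ImageNet. The sketch below shows such a baseline in Python with PyTorch/torchvision; the dataset folder name, class layout, epoch count and learning rate are illustrative assumptions rather than details taken from the paper.

        # Hedged sketch: fine-tune only the classification head of a pretrained ResNet.
        import torch
        import torch.nn as nn
        from torchvision import datasets, models, transforms

        tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])
        # Hypothetical layout: component_photos/<family_name>/*.jpg
        data = datasets.ImageFolder("component_photos", transform=tf)
        loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():          # freeze the pretrained backbone
            p.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # one logit per family
        opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        model.train()
        for epoch in range(5):
            for images, labels in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                opt.step()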

    A pitfall in the reconstruction of fibre ODFs using spherical deconvolution of diffusion MRI data

    Diffusion weighted (DW) MRI facilitates non-invasive quantification of tissue microstructure and, in combination with appropriate signal processing, three-dimensional estimates of fibrous orientation. In recent years, attention has shifted from the diffusion tensor model, which assumes a unimodal Gaussian diffusion displacement profile to recover fibre orientation (with various well-documented limitations), towards more complex high angular resolution diffusion imaging (HARDI) analysis techniques. Spherical deconvolution (SD) approaches assume that the fibre orientation density function (fODF) within a voxel can be obtained by deconvolving a ‘common’ single fibre response function from the observed set of DW signals. In practice, this common response function is not known a priori and thus an estimated fibre response must be used. Here the establishment of this single-fibre response function is referred to as ‘calibration’. This work examines the vulnerability of two different SD approaches to inappropriate response function calibration: (1) constrained spherical harmonic deconvolution (CSHD), a technique that exploits spherical harmonic basis sets, and (2) damped Richardson–Lucy (dRL) deconvolution, a technique based on the standard Richardson–Lucy deconvolution. Through simulations, the impact of a discrepancy between the calibrated diffusion profiles and the observed (‘target’) DW signals in both single and crossing-fibre configurations was investigated. The results show that CSHD produces spurious fODF peaks (consistent with well-known ringing artefacts) as the discrepancy between calibration and target response increases, while dRL demonstrates a lower overall sensitivity to miscalibration (with a calibration response function for a highly anisotropic fibre being optimal). However, dRL demonstrates a reduced ability to resolve low-anisotropy crossing fibres compared to CSHD. It is concluded that the range and spatial distribution of expected single-fibre anisotropies within an image must be carefully considered to ensure selection of the appropriate algorithm, parameters and calibration. Failure to choose the calibration response function carefully may severely impact the quality of any resultant tractography.
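
    To make the calibration step concrete: in a discretised setting, spherical deconvolution estimates a non-negative fODF vector f from the DW signals s ≈ H f, where each column of the response matrix H is the calibrated single-fibre signal profile for one candidate fibre direction. The sketch below (Python/NumPy) shows a plain Richardson-Lucy iteration for that linear model; the damping used by dRL is omitted, and H, s and the iteration count are assumptions for illustration only.

        import numpy as np

        def richardson_lucy(signal, H, n_iter=200, eps=1e-10):
            """Multiplicative RL iteration: estimate a non-negative f with H @ f ~ signal."""
            f = np.full(H.shape[1], signal.mean() / max(H.mean(), eps))  # flat initial fODF
            ones = np.ones_like(signal)
            for _ in range(n_iter):
                pred = H @ f                                  # forward model
                ratio = signal / np.maximum(pred, eps)        # data / prediction
                f *= (H.T @ ratio) / np.maximum(H.T @ ones, eps)
            return f

    In this picture, miscalibration simply means that the columns of H were generated from a single-fibre profile whose anisotropy does not match the fibres that actually produced the signal, which is the discrepancy whose effects the paper quantifies.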

    Speeding up Simplification of Polygonal Curves using Nested Approximations

    We develop a multiresolution approach to the problem of polygonal curve approximation. We show theoretically and experimentally that, if the simplification algorithm A used between any two successive levels of resolution satisfies some conditions, the multiresolution algorithm MR will have a complexity lower than the complexity of A. In particular, we show that if A has O(N²/K) complexity (the complexity of a reduced-search dynamic programming solution), where N and K are respectively the initial and the final number of segments, the complexity of MR is in O(N). We experimentally compare the outcomes of MR with those of the optimal "full search" dynamic programming solution and of classical merge and split approaches. The experimental evaluations confirm the theoretical derivations and show that the proposed approach, evaluated on 2D coastal maps, either shows a lower complexity or provides polygonal approximations closer to the initial curves. Comment: 12 pages + figure
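
    A toy rendering of the nested-approximation idea, under loose assumptions (Python/NumPy): the curve is decimated into successively coarser levels, a simple base simplifier is run only at the coarsest level, and the selected vertices are then carried back to finer levels with a small local re-snapping step. Both the greedy base simplifier and the refinement rule are stand-ins for illustration; they are not the algorithm A or the conditions analysed in the paper.

        import numpy as np

        def point_segment_dist(p, a, b):
            """Perpendicular distance from point p to segment ab."""
            ab = b - a
            denom = float(np.dot(ab, ab))
            if denom == 0.0:
                return float(np.linalg.norm(p - a))
            t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
            return float(np.linalg.norm(p - (a + t * ab)))

        def greedy_simplify(curve, k):
            """Stand-in base simplifier (assumes k >= 2): repeatedly drop the interior
            vertex whose removal adds the least error, until k vertices remain."""
            idx = list(range(len(curve)))
            while len(idx) > k:
                errs = [point_segment_dist(curve[idx[j]], curve[idx[j - 1]], curve[idx[j + 1]])
                        for j in range(1, len(idx) - 1)]
                idx.pop(1 + int(np.argmin(errs)))
            return idx

        def local_error(curve, a, m, c):
            """Max deviation of the original vertices between indices a and c from the
            two-segment approximation a-m-c."""
            err = 0.0
            for i in range(a + 1, c):
                seg = (curve[a], curve[m]) if i <= m else (curve[m], curve[c])
                err = max(err, point_segment_dist(curve[i], *seg))
            return err

        def multiresolution_simplify(curve, k, coarse_size=64):
            """Simplify curve to k vertices coarse-to-fine; returns kept vertex indices."""
            curve = np.asarray(curve, dtype=float)
            levels = [np.arange(len(curve))]
            while len(levels[-1]) > max(coarse_size, 2 * k):
                cur = levels[-1]
                # nested levels: keep every other vertex, always preserving the endpoint
                nxt = cur[::2] if cur[-1] == cur[::2][-1] else np.append(cur[::2], cur[-1])
                levels.append(nxt)
            coarse = levels[-1]
            kept = [coarse[j] for j in greedy_simplify(curve[coarse], k)]
            for level in reversed(levels[:-1]):          # walk back to the full resolution
                refined = [kept[0]]
                for a, b, c in zip(kept[:-2], kept[1:-1], kept[2:]):
                    pos = int(np.searchsorted(level, b))
                    cands = [m for m in level[max(pos - 1, 0):pos + 2] if a < m < c]
                    refined.append(min(cands, key=lambda m: local_error(curve, a, m, c)))
                refined.append(kept[-1])
                kept = refined
            return np.array(kept)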

    CONSTRUCTION AND PERCEPTUAL EVALUATION OF A 3D HEAD MODEL

    Abstract: This paper presents a method to construct a compact 3D head model capable of synthesizing realistic face expressions with subtle details such as wrinkles and muscle folds. The model is assessed by psychologists using the certified FACS coding method. Such a compact and accurate model offers large market potential, not only in the computer graphics industry but also in low-bandwidth applications such as tele-conferencing, and provides a valuable novel tool for perceptual studies.
    Method and Implementation: The method used to construct the 3D head model in this work is inspired by the 2D Active Appearance Model. Moreover, a synthesized face looks more authentic if it not only appears like a human but also moves like a human; it is therefore very important to accurately model the dynamics of facial expressions. Few studies have achieved this in 3D animation so far, mostly because of the limitations of their data-capture equipment. In this research, we use a fast 3D video camera (48 fps) to capture our training data, which allows the fine temporal dynamics of face movements to be modelled. Finally, we combine the method described above with FACS coding, a certified method used in psychology to study facial movements, to further improve the precision of our head model.
    Results: Our training data consist of short video sequences of Action Units (about 60 frames each). After building a joint PCA model of shape and texture, we obtain a set of eigenvectors which represent the different modes of variation of the facial changes.
    Conclusion: We have successfully built a 3D head model capable of synthesizing realistic-looking face expressions, reproducing accurate skin folds and expression dynamics. We plan to use this model to study and model facial idiosyncrasies.
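
    The joint PCA step mentioned in the results can be sketched in a few lines (Python/NumPy): per-frame shape and texture vectors, already in correspondence, are concatenated, a PCA basis is computed, and new expressions are synthesized by moving along the resulting eigenvectors. Array layouts and the omission of a shape-versus-texture weighting factor are simplifying assumptions, not details of the authors' model.

        import numpy as np

        def build_joint_pca(shapes, textures, n_modes=20):
            """shapes: (n_frames, 3*n_vertices); textures: (n_frames, n_texels).
            Returns the mean, the first n_modes eigenvectors, and their std devs."""
            X = np.hstack([shapes, textures])            # joint shape+texture samples
            mean = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, Vt[:n_modes], s[:n_modes] / np.sqrt(len(X) - 1)

        def synthesize(mean, modes, stds, coeffs, n_shape):
            """Generate a new instance from mode coefficients given in standard deviations,
            then split the result back into its shape and texture parts."""
            x = mean + (np.asarray(coeffs) * stds) @ modes
            return x[:n_shape], x[n_shape:]

    Setting, say, the first coefficient to +/-3 while keeping the rest at zero visualises the dominant mode of facial variation captured from the Action Unit sequences.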

    Abstract art by shape classification

    This paper shows that classifying shapes is a useful tool in nonphotorealistic rendering (NPR) from photographs. Our classifier inputs regions from an image segmentation hierarchy and outputs the “best” fitting simple shape such as a circle, square, or triangle. Other approaches to NPR have recognized the benefits of segmentation, but none have classified the shape of segments. By doing so, we can create artwork of a more abstract nature, emulating the style of modern artists such as Matisse and others who favored shape simplification in their artwork. The classifier chooses the shape that “best” represents the region. Since the classifier is trained by a user, the “best shape” has a subjective quality that can override measurements such as minimum error and, more importantly, captures user preferences. Once trained, the system is fully automatic, although simple user interaction is also possible to allow for differences in individual tastes. A gallery of results shows how this classifier contributes to NPR from images by producing abstract artwork.
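
    The paper's classifier is trained from user-labelled examples, so its notion of the “best” shape is deliberately subjective. As a purely geometric stand-in, the sketch below (Python with OpenCV) fits a circle, a rotated rectangle and a triangle to a binary region mask and keeps whichever overlaps the region most; it illustrates the inputs and outputs of such a classifier, not the learned decision rule described above.

        import cv2
        import numpy as np

        def classify_region(mask):
            """mask: uint8 binary image of one segmented region (OpenCV 4.x).
            Returns the winning label and the fitted shape as a filled mask."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            cnt = max(contours, key=cv2.contourArea)
            candidates = {}

            (cx, cy), r = cv2.minEnclosingCircle(cnt)
            circle = np.zeros_like(mask)
            cv2.circle(circle, (int(cx), int(cy)), int(r), 255, -1)
            candidates["circle"] = circle

            box = cv2.boxPoints(cv2.minAreaRect(cnt)).astype(np.int32)
            rect = np.zeros_like(mask)
            cv2.fillPoly(rect, [box], 255)
            candidates["rectangle"] = rect

            _, tri = cv2.minEnclosingTriangle(cnt)
            triangle = np.zeros_like(mask)
            cv2.fillPoly(triangle, [tri.astype(np.int32).reshape(-1, 2)], 255)
            candidates["triangle"] = triangle

            def iou(a, b):                               # overlap score between two masks
                inter = np.logical_and(a > 0, b > 0).sum()
                union = np.logical_or(a > 0, b > 0).sum()
                return inter / union if union else 0.0

            label = max(candidates, key=lambda k: iou(mask, candidates[k]))
            return label, candidates[label]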

    Automated registration of multimodal optic disc images: clinical assessment of alignment accuracy

    Purpose: To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography. Materials and Methods: Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques, namely Regional Mutual Information, rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI), were used to align these image pairs. Alignment of each composite picture was assessed on a 5-point grading scale: “Fail” (no alignment of vessels with no vessel contact), “Weak” (vessels have slight contact), “Good” (vessels with 50% contact), and “Excellent” (complete alignment). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks. These were graded independently by 3 clinically experienced observers. Results: A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of “Good” or better in >95% of the image sets. NRFNMI had the highest percentage of “Excellent” (mean: 99.6%; range, 95.2% to 99.6%), followed by Regional Mutual Information (mean: 81.6%; range, 86.3% to 78.5%) and FNMI (mean: 73.1%; range, 85.2% to 54.4%). Conclusions: Automated registration of optic disc images by different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.
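
    The grading mosaic described in the methods is straightforward to reproduce: after registration, the two modalities are interleaved as alternating 5×5-pixel blocks, so any residual misalignment shows up as broken vessels at the block boundaries. A minimal sketch in Python/NumPy, assuming two same-sized, already-registered arrays and ignoring intensity normalisation between modalities:

        import numpy as np

        def interleaved_mosaic(img_a, img_b, block=5):
            """Alternate block x block tiles of img_a and img_b in a checkerboard pattern."""
            assert img_a.shape == img_b.shape, "images must be registered and same-sized"
            h, w = img_a.shape[:2]
            yy, xx = np.mgrid[0:h, 0:w]
            use_a = ((yy // block) + (xx // block)) % 2 == 0     # checkerboard of blocks
            if img_a.ndim == 3:                                  # broadcast over colour channels
                use_a = use_a[..., None]
            return np.where(use_a, img_a, img_b)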

    Sketch-based interaction and modeling: where do we stand?

    Sketching is a natural and intuitive communication tool used for expressing concepts or ideas which are difficult to communicate through text or speech alone. Sketching is therefore used for a variety of purposes, from the expression of ideas on two-dimensional (2D) physical media, to object creation, manipulation, or deformation in three-dimensional (3D) immersive environments. This variety in sketching activities brings about a range of technologies which, while having similar scope, namely that of recording and interpreting the sketch gesture to effect some interaction, adopt different interpretation approaches according to the environment in which the sketch is drawn. In fields such as product design, sketches are drawn at various stages of the design process, and therefore, designers would benefit from sketch interpretation technologies which support these differing interactions. However, research typically focuses on one aspect of sketch interpretation and modeling such that literature on available technologies is fragmented and dispersed. In this paper, we bring together the relevant literature describing technologies which can support the product design industry, namely technologies which support the interpretation of sketches drawn on 2D media, sketch-based search interactions, as well as sketch gestures drawn in 3D media. This paper, therefore, gives a holistic view of the algorithmic support that can be provided in the design process. In so doing, we highlight the research gaps and future research directions required to provide full sketch-based interaction support