
    Tanner Graph Based Image Interpolation

    This paper interprets image interpolation as a channel decoding problem and proposes a Tanner graph based interpolation framework, which regards each pixel in an image as a variable node and the local image structure around each pixel as a check node. The pixels available from the low-resolution image are 'received', whereas the missing pixels of the high-resolution image are 'erased' by an imaginary channel. Local image structures exhibited by the low-resolution image provide information on the joint distribution of pixels in a small neighborhood, and thus play the same role as parity symbols in classic channel coding scenarios. We develop an efficient solution for the sum-product algorithm of belief propagation in this framework, based on a Gaussian auto-regressive image model. Initial experiments show up to 3 dB gain over other methods using the same image model. The proposed framework is flexible in message processing at each node and provides much room for incorporating more sophisticated image modelling techniques. © 2010 IEEE.
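
    Under a Gaussian image model every sum-product message is itself Gaussian, so the mean updates reduce to repeated local linear estimation. A minimal sketch of that reduction, assuming 2x upscaling, a uniform 8-neighbour AR weight mask and a plain Jacobi-style sweep in place of the paper's message schedule; every name and constant below is illustrative rather than the authors' implementation:

        import numpy as np

        def tanner_style_interpolate(lr, factor=2, n_iters=50):
            """Treat the known low-resolution pixels as 'received' variable nodes and
            iteratively re-estimate the 'erased' high-resolution pixels from a local
            Gaussian autoregressive neighbourhood model (uniform weights here)."""
            weights = np.ones((3, 3))
            weights[1, 1] = 0.0
            weights /= weights.sum()
            hr = np.kron(lr, np.ones((factor, factor)))   # crude initial guess for erased pixels
            known = np.zeros(hr.shape, dtype=bool)
            known[::factor, ::factor] = True              # positions 'received' from the LR image
            hr[::factor, ::factor] = lr
            for _ in range(n_iters):
                padded = np.pad(hr, 1, mode='edge')
                est = sum(weights[di, dj] * padded[di:di + hr.shape[0], dj:dj + hr.shape[1]]
                          for di in range(3) for dj in range(3))
                hr = np.where(known, hr, est)             # erased pixels take the local AR estimate
            return hr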

    A practical algorithm for Tanner graph based image interpolation

    This paper interprets image interpolation as a decoding problem on a Tanner graph and proposes a practical belief propagation algorithm based on a Gaussian autoregressive image model. The algorithm regards belief propagation as a way to generate and fuse predictions from the various check nodes. A low-complexity implementation of this algorithm measures and distributes the departure of the current interpolation result from the image model. The convergence speed of the proposed algorithm is discussed. Experimental results show that good interpolation results can be obtained with a very small number of iterations.
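
    One way to picture the low-complexity variant described above, assuming a given AR weight mask; the damping factor and iteration count are placeholders rather than the paper's values:

        import numpy as np
        from scipy.ndimage import convolve

        def fuse_check_node_predictions(hr, known, weights, n_iters=5, damping=0.8):
            """Measure each pixel's departure from its AR-model prediction and
            distribute a damped correction to the erased pixels, keeping the
            'received' low-resolution pixels clamped."""
            hr = hr.astype(float).copy()
            for _ in range(n_iters):
                prediction = convolve(hr, weights, mode='nearest')  # check-node predictions
                departure = prediction - hr                         # deviation from the image model
                hr = np.where(known, hr, hr + damping * departure)  # fuse predictions at erased pixels
            return hr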

    Tight bounds for LDPC and LDGM codes under MAP decoding

    A new method for analyzing low-density parity-check (LDPC) codes and low-density generator-matrix (LDGM) codes under bit maximum a posteriori probability (MAP) decoding is introduced. The method is based on a rigorous approach to spin glasses developed by Francesco Guerra. It allows one to construct lower bounds on the entropy of the transmitted message conditioned on the received one. Based on heuristic statistical mechanics calculations, we conjecture such bounds to be tight. The result holds for standard irregular ensembles when used over binary-input output-symmetric channels. The method is first developed for Tanner graph ensembles with Poisson left degree distribution. It is then generalized to 'multi-Poisson' graphs and, by a completion procedure, to arbitrary degree distributions. Comment: 28 pages, 9 eps figures; the second version contains a generalization of the previous result.
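
    As a reminder of why lower bounds on this conditional entropy matter for decoding (a standard Fano-type step, not reproduced from the paper), write h_2 for the binary entropy function and P_b for the average bit-MAP error probability; then

        \frac{1}{n} H(\underline{X} \mid \underline{Y})
          \le \frac{1}{n} \sum_{i=1}^{n} H(X_i \mid \underline{Y})
          \le \frac{1}{n} \sum_{i=1}^{n} h_2(P_{b,i})
          \le h_2(P_b),
        \qquad P_b = \frac{1}{n} \sum_{i=1}^{n} P_{b,i},

    so any lower bound on the per-bit conditional entropy of the kind constructed in the paper translates directly into a lower bound on the bit error probability achievable under MAP decoding.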

    Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes

    In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm which combines two powerful soft-decision decoding techniques previously regarded in the literature as competitive, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief propagation based on adaptive parity-check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation-based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of Reed-Solomon codes. Comment: Submitted to IEEE for publication in Jan 200
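
    A rough sketch of the enhancement loop on the binary image of the code, assuming a min-sum check-node update, a fixed damping factor and plain GF(2) elimination; the function names, schedule and scaling are hypothetical, not the algorithm's exact details:

        import numpy as np

        def adapt_parity_checks(H, llr):
            """Gaussian elimination over GF(2) so that the columns of the least
            reliable bits form (close to) an identity block, in the spirit of the
            Jiang-Narayanan adaptive scheme."""
            H = np.asarray(H, dtype=np.uint8) % 2
            m, _ = H.shape
            row = 0
            for col in np.argsort(np.abs(llr)):           # least reliable bits first
                if row >= m:
                    break
                pivots = np.nonzero(H[row:, col])[0]
                if pivots.size == 0:
                    continue
                H[[row, row + pivots[0]]] = H[[row + pivots[0], row]]
                for r in range(m):
                    if r != row and H[r, col]:
                        H[r] ^= H[row]
                row += 1
            return H

        def enhance_soft_information(H, llr, n_iters=3, damping=0.5):
            """Run a few damped min-sum iterations on the adapted matrix and return
            enhanced LLRs, which would then drive multiplicity assignment for an
            algebraic soft-decision (Koetter-Vardy style) Reed-Solomon decoder."""
            llr = np.asarray(llr, float).copy()
            for _ in range(n_iters):
                Ha = adapt_parity_checks(H, llr)
                extrinsic = np.zeros_like(llr)
                for check in Ha:
                    idx = np.flatnonzero(check)
                    for j in idx:
                        others = idx[idx != j]
                        if others.size == 0:
                            continue
                        sign = np.prod(np.sign(llr[others]))
                        extrinsic[j] += sign * np.min(np.abs(llr[others]))
                llr += damping * extrinsic
            return llr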

    Phenomenological model of diffuse global and regional atrophy using finite-element methods

    The main goal of this work is the generation of ground-truth data for the validation of atrophy measurement techniques, commonly used in the study of neurodegenerative diseases such as dementia. Several techniques have been used to measure atrophy in cross-sectional and longitudinal studies, but it is extremely difficult to compare their performance since they have been applied to different patient populations. Furthermore, assessment of performance based on phantom measurements or simple scaled images overestimates these techniques' ability to capture the complexity of neurodegeneration of the human brain. We propose a method for atrophy simulation in structural magnetic resonance (MR) images based on finite-element methods. The method produces cohorts of brain images with known change that is physically and clinically plausible, providing data for objective evaluation of atrophy measurement techniques. Atrophy is simulated in different tissue compartments or in different neuroanatomical structures with a phenomenological model. This model of diffuse global and regional atrophy is based on volumetric measurements such as the brain or the hippocampus, from patients with known disease and guided by clinical knowledge of the relative pathological involvement of regions and tissues. The consequent biomechanical readjustment of structures is modelled using conventional physics-based techniques based on biomechanical tissue properties and simulating plausible tissue deformations with finite-element methods. A thermoelastic model of tissue deformation is employed, controlling the rate of progression of atrophy by means of a set of thermal coefficients, each one corresponding to a different type of tissue. Tissue characterization is performed by means of the meshing of a labelled brain atlas, creating a reference volumetric mesh that will be introduced to a finite-element solver to create the simulated deformations. Preliminary work on the simulation of acquisition artefacts is also presented. Cross-sectional and
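
    The link between a prescribed amount of atrophy and the thermal coefficients can be pictured with the small-strain thermoelastic relation dV/V ≈ 3·α·ΔT. A toy mapping, in which the tissue labels and target values are made up and the coupled biomechanical readjustment is left entirely to the finite-element solver:

        def thermal_coefficients(target_volume_change, delta_T=1.0):
            """Map the desired fractional volume change of each tissue class to a
            linear thermal expansion coefficient via dV/V ~= 3 * alpha * delta_T."""
            return {tissue: dv / (3.0 * delta_T)
                    for tissue, dv in target_volume_change.items()}

        # e.g. 5% hippocampal atrophy, 2% diffuse grey-matter loss, CSF left unprescribed
        coefficients = thermal_coefficients({'hippocampus': -0.05,
                                             'grey_matter': -0.02,
                                             'csf': 0.0})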

    Prediction of risk of fracture in the tibia due to altered bone mineral density distribution resulting from disuse : a finite element study

    The disuse-related bone loss that results from immobilisation following injury shares characteristics with osteoporosis in postmenopausal women and the aged, with decreases in bone mineral density (BMD) leading to weakening of the bone and increased risk of fracture. The aim of the study was to use the finite element method to: (i) calculate the mechanical response of the tibia under mechanical load and (ii) estimate the risk of fracture, comparing two groups: an able-bodied (AB) group and a group of spinal cord injury (SCI) patients suffering from varying degrees of bone loss. The tibiae of eight male subjects with chronic SCI and those of four age-matched AB controls were scanned using multi-slice peripheral Quantitative Computed Tomography. Images were used to develop full three-dimensional models of the tibiae in Mimics (Materialise) and exported into Abaqus (Simulia) for calculation of stress distribution and fracture risk in response to specified loading conditions – compression, bending and torsion. The percentage of elements that exceeded a calculated value of the ultimate stress provided an estimate of the risk of fracture for each subject, which differed between SCI subjects and their controls. The differences in BMD distribution along the tibia in different subjects resulted in different regions of the bone being at high risk of fracture under set loading conditions, illustrating the benefit of creating individual material distribution models. A predictive tool can be developed based on these models, to enable clinicians to estimate the amount of loading that can be safely allowed onto the skeletal frame of individual patients who suffer from extensive musculoskeletal degeneration (including SCI, multiple sclerosis and the ageing population). The ultimate aim would be to reduce fracture occurrence in these vulnerable groups.
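
    The fracture-risk measure described above reduces to a few lines once the element stresses are available; the BMD-to-strength power law and its constants below are placeholders, not the relationship used in the study:

        import numpy as np

        def fracture_risk(element_stress, element_bmd, a=137.0, b=1.88):
            """Return the percentage of finite elements whose computed stress exceeds
            an ultimate stress estimated from local BMD (hypothetical power law)."""
            ultimate = a * np.asarray(element_bmd, float) ** b
            exceeded = np.asarray(element_stress, float) > ultimate
            return 100.0 * exceeded.mean()

        # e.g. compare an able-bodied model with an SCI model under the same bending load:
        # risk_ab  = fracture_risk(stress_ab,  bmd_ab)
        # risk_sci = fracture_risk(stress_sci, bmd_sci)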

    SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction

    We contribute a dense SLAM system that takes a live stream of depth images as input and reconstructs non-rigid deforming scenes in real time, without templates or prior models. In contrast to existing approaches, we do not maintain any volumetric data structures, such as truncated signed distance function (TSDF) fields or deformation fields, which are performance and memory intensive. Our system works with a flat point (surfel) based representation of geometry, which can be directly acquired from commodity depth sensors. Standard graphics pipelines and general-purpose GPU (GPGPU) computing are leveraged for all central operations, i.e., nearest neighbor maintenance, non-rigid deformation field estimation and fusion of depth measurements. Our pipeline inherently avoids expensive volumetric operations such as marching cubes, volumetric fusion and dense deformation field update, leading to significantly improved performance. Furthermore, the explicit and flexible surfel based geometry representation enables efficient tackling of topology changes and tracking failures, which makes our reconstructions consistent with updated depth observations. Our system allows robots to maintain a scene description with non-rigidly deformed objects that potentially enables interactions with dynamic working environments. Comment: RSS 2018. The video and source code are available at https://sites.google.com/view/surfelwarp/hom
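
    A minimal sketch of a surfel record and the confidence-weighted fusion of a new depth observation, to illustrate the flat, non-volumetric representation; the field names and fusion rule are generic rather than taken from the released code:

        import numpy as np

        class Surfel:
            """A flat geometry element: position, normal, radius and a confidence weight."""

            def __init__(self, position, normal, radius, confidence=1.0):
                self.position = np.asarray(position, float)
                self.normal = np.asarray(normal, float)
                self.radius = float(radius)
                self.confidence = float(confidence)

            def fuse(self, obs_position, obs_normal, obs_radius, obs_confidence=1.0):
                """Confidence-weighted running average with a new depth measurement."""
                w, w_obs = self.confidence, obs_confidence
                total = w + w_obs
                self.position = (w * self.position + w_obs * np.asarray(obs_position, float)) / total
                blended = w * self.normal + w_obs * np.asarray(obs_normal, float)
                self.normal = blended / np.linalg.norm(blended)
                self.radius = min(self.radius, float(obs_radius))   # keep the finer radius estimate
                self.confidence = total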

    Data analysis of retinal recordings from multi-electrode arrays under in situ electrical stimulation

    The development of retinal implants has become an important field of study in recent years, with increasing numbers of people falling victim to legal or physical blindness as a result of retinal damage. Important weaknesses in current retinal implants include a lack of the resolution necessary to give a patient a viable level of visual acuity, question marks over the amount of power and energy required to deliver adequate stimulation, and the removal of eye movements from the analysis of the visual scene. This thesis documents investigations by the author into a new CMOS stimulation and imaging chip with the potential to overcome these difficulties. An overview is given of the testing and characterisation of the components incorporated in the device to mimic the normal functioning of the human retina. Its application to in situ experimental studies of frog retina is also described, as well as how the data gathered from these experiments enables the optimisation of the geometry of the electrode array through which the device will interface with the retina. Such optimisation is important as the deposit of excess electrical charge and energy can lead to detrimental medical side effects. Avoidance of such side effects is crucial to the realisation of the next generation of retinal implants.
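
    One common way to reason about the excess-charge constraint on electrode geometry is a Shannon-style safety criterion, k = log10(D) + log10(Q), with Q the charge per phase and D the charge density; the helper below and its conventional threshold of about 1.85 are textbook assumptions, not figures from the thesis:

        import math

        def shannon_k(current_uA, pulse_width_ms, electrode_diameter_um):
            """Charge per phase (uC), charge density (uC/cm^2) and the Shannon
            parameter k for a disc electrode; k below about 1.85 is conventionally
            regarded as safe stimulation."""
            Q = current_uA * pulse_width_ms * 1e-3                    # uA * ms -> uC
            area_cm2 = math.pi * (electrode_diameter_um * 1e-4 / 2.0) ** 2
            D = Q / area_cm2
            return Q, D, math.log10(D) + math.log10(Q)

        # Shrinking the electrode raises charge density for the same stimulus:
        # shannon_k(current_uA=50, pulse_width_ms=1.0, electrode_diameter_um=200)
        # shannon_k(current_uA=50, pulse_width_ms=1.0, electrode_diameter_um=50)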

    A Comparison of Video and Accelerometer Based Approaches Applied to Performance Monitoring in Swimming.

    The aim of this paper is to present a comparison of video- and sensor-based studies of swimming performance. The video-based approach is reviewed and contrasted with the newer sensor-based technology, specifically accelerometers based upon Micro-Electro-Mechanical Systems (MEMS) technology. Results from previously published swim performance studies using both the video and sensor technologies are summarised and evaluated against the conventional theory that upper arm movements are of primary interest when quantifying freestyle technique. The authors conclude that multiple sensor-based measurements of swimmers’ acceleration profiles have the potential to offer significant advances in coaching technique over the traditional video-based approach.
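
    As an example of the kind of processing a sensor-based approach enables, a rough stroke-counting sketch over tri-axial accelerometer samples; the sampling rate, thresholds and peak-picking rule are illustrative and not taken from the reviewed studies:

        import numpy as np
        from scipy.signal import find_peaks

        def count_strokes(acc_xyz, fs=100.0, min_stroke_interval_s=0.8):
            """Count arm strokes as peaks in the acceleration magnitude of a
            wrist/arm-mounted MEMS accelerometer, enforcing a minimum stroke period."""
            mag = np.linalg.norm(np.asarray(acc_xyz, float), axis=1)
            mag -= mag.mean()                                    # remove gravity/DC offset
            peaks, _ = find_peaks(mag,
                                  distance=int(fs * min_stroke_interval_s),
                                  height=0.5 * mag.std())
            return len(peaks)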