20 research outputs found

    Parallel image restoration

    Cataloged from PDF version of article. In this thesis, we are concerned with the image restoration problem, which has been formulated in the literature as a system of linear inequalities. With this formulation, the resulting constraint matrix is an unstructured sparse matrix, and even with small images we end up with huge matrices. To solve the restoration problem, we therefore use surrogate constraint methods, which work efficiently for large problems and are amenable to parallel implementation. Among the surrogate constraint methods, the basic method considers all of the violated constraints in the system and performs a single block projection in each step. The parallel method, on the other hand, considers a subset of the constraints and makes simultaneous block projections. Using several partitioning strategies and adopting different communication models, we realize several parallel implementations of the two methods. We use hypergraph-partitioning-based decomposition methods to minimize communication costs while ensuring load balance among the processors. The implementations are evaluated on both per-iteration and overall performance. In addition, the effects of the different partitioning strategies on the speed of convergence are investigated. The experimental results reveal that the proposed parallelization schemes are of practical use in the restoration problem and in many other real-world applications that can be modeled as a system of linear inequalities. Malas, Tahir (M.S.)
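    The basic surrogate constraint step summarized above can be sketched for a system A x <= b. The following is a minimal NumPy illustration of a single block projection (weights proportional to the violations, relaxation parameter lam), with a toy system of my own choosing; it is not the thesis's parallel implementation.

```python
import numpy as np

def surrogate_step(A, b, x, lam=1.0):
    """One block projection of the basic surrogate constraint method
    for A x <= b: all violated constraints are aggregated into a single
    surrogate constraint, and x is projected onto its halfspace."""
    r = A @ x - b                    # positive entries are violations
    V = r > 1e-12
    if not V.any():
        return x, True               # x is (numerically) feasible
    w = r[V] / r[V].sum()            # weights proportional to violation
    a = w @ A[V]                     # surrogate constraint normal
    d = w @ r[V]                     # surrogate violation
    return x - lam * d / (a @ a) * a, False

# drive an infeasible point into the polyhedron {x : A x <= b}
A = np.array([[1.0, 2.0], [-1.0, 1.0], [0.0, -1.0]])
b = np.array([4.0, 1.0, 0.0])
x = np.array([10.0, 10.0])
done = False
for _ in range(500):
    x, done = surrogate_step(A, b, x)
    if done:
        break
```

    The parallel method of the thesis would instead partition the constraints and perform several such projections simultaneously, combining the resulting steps.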

    Efficient successor retrieval operations for aggregate query processing on clustered road networks

    Cataloged from PDF version of article. Get-Successors (GS), which retrieves all successors of a junction, is a kernel operation used to facilitate aggregate computations in road network queries. Efficient implementation of the GS operation is crucial, since the disk access cost of this operation constitutes a considerable portion of the total query processing cost. First, we propose a new successor retrieval operation, Get-Unevaluated-Successors (GUS), which retrieves only the unevaluated successors of a given junction. The GUS operation is an efficient implementation of the GS operation in which the candidate successors to be retrieved are pruned according to the properties and state of the algorithm. Second, we propose a hypergraph-based model for clustering junctions successively retrieved by GUS operations onto the same pages. The proposed model utilizes query logs to correctly capture the disk access cost of GUS operations. The proposed GUS operation and the associated clustering model are evaluated for two different instances of GUS operations that typically arise in Dijkstra's single-source shortest path algorithm and in the incremental network expansion framework. Our simulation results show that the proposed successor retrieval operation, together with the proposed clustering hypergraph model, is quite effective in reducing the number of disk accesses in query processing. (C) 2010 Published by Elsevier Inc.
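    The role of a GUS-style operation inside Dijkstra's algorithm can be illustrated with an in-memory sketch. Here the "unevaluated" filter simply drops settled junctions, standing in for the disk-level pruning described in the abstract; the graph layout and names are illustrative, not from the paper.

```python
import heapq

def get_unevaluated_successors(graph, junction, settled):
    """GUS: return only the successors that are not yet finalized,
    pruning retrievals that plain Get-Successors (GS) would perform."""
    return [(v, w) for v, w in graph[junction] if v not in settled]

def dijkstra(graph, source):
    """Single-source shortest paths, calling GUS at every expansion."""
    dist = {source: 0}
    settled = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        for v, w in get_unevaluated_successors(graph, u, settled):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy road network: junction -> [(successor, travel cost), ...]
roads = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
dist = dijkstra(roads, "a")
```

    In the paper's setting the successor lists live on disk pages, so every pruned successor can save a page access; the clustering hypergraph model then tries to place junctions retrieved by the same GUS calls on the same page.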

    On computational approaches for size-and-shape distributions from sedimentation velocity analytical ultracentrifugation

    Sedimentation velocity analytical ultracentrifugation has become a very popular technique for studying the size distributions and interactions of macromolecules. Recently, a method termed two-dimensional spectrum analysis (2DSA) for the determination of size-and-shape distributions was described by Demeler and colleagues (Eur Biophys J 2009). It is based on novel ideas conceived for fitting the integral equations of the size-and-shape distribution to experimental data, illustrated with an example but presented without a proof of principle for the algorithm. In the present work, we examine the 2DSA algorithm by comparison with the mathematical reference frame and with simple, well-known numerical concepts for solving Fredholm integral equations, and we test the key assumptions underlying the 2DSA method in an example application. The 2DSA appears computationally excessively wasteful, and key elements also appear to be in conflict with mathematical results. This raises doubts about the correctness of results from the 2DSA analysis.
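    The underlying numerical problem, a Fredholm integral equation of the first kind a(t) = ∫ K(t, s) c(s) ds, can be illustrated in one dimension with a standard grid discretization and Tikhonov regularization. The kernel, grids, and test distribution below are toy choices, not the sedimentation kernel or the 2DSA method itself.

```python
import numpy as np

# Discretize a 1-D Fredholm equation of the first kind on a fixed s-grid;
# the size-and-shape problem discussed above is its two-dimensional analogue.
s = np.linspace(1.0, 10.0, 40)                 # grid for the distribution c(s)
t = np.linspace(1.0, 10.0, 80)                 # "measurement" points
K = np.exp(-(t[:, None] - s[None, :]) ** 2)    # smooth, ill-conditioned kernel

# synthetic two-peak distribution and the noise-free data it produces
c_true = np.exp(-((s - 4.0) ** 2)) + 0.5 * np.exp(-((s - 7.0) ** 2))
a = K @ c_true

# Tikhonov-regularized least squares: min ||K c - a||^2 + alpha^2 ||c||^2,
# solved by stacking the regularizer under the kernel matrix
alpha = 1e-3
K_aug = np.vstack([K, alpha * np.eye(len(s))])
a_aug = np.concatenate([a, np.zeros(len(s))])
c_rec, *_ = np.linalg.lstsq(K_aug, a_aug, rcond=None)
```

    Direct regularized inversion of the discretized operator, as sketched here, is one of the "simple well-known numerical concepts" against which an algorithm such as 2DSA can be benchmarked.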

    Image restoration based on metaheuristics and parallel environments

    Image restoration consists of recovering images recorded in the presence of various sources of degradation. The problem is relevant, for example, in astronomy and aerial reconnaissance (images degraded by atmospheric turbulence, optical-system aberrations, and camera motion) and in medicine (low-contrast radiographic images due to the nature of X-ray systems). Classical solution methods for these problems have several drawbacks, such as the need to know parameters a priori and the high complexity of their mathematical models. In recent years, two lines of research have emerged that can help mitigate these drawbacks: metaheuristics and parallel computing. Metaheuristic methods converge quickly, are well suited to large numbers of decision variables, and offer a better trade-off between solution quality and computational efficiency. Parallel architectures reduce the processing times incurred by the large volume of data involved in the restoration process, even for small images. This research line addresses the design of image restoration algorithms that apply metaheuristics in parallel environments. Topic: Distributed and Parallel Processing. Red de Universidades con Carreras en Informática (RedUNCI)

    Image restoration and metaheuristics in Hadoop

    The field of digital image processing encompasses techniques, algorithms, methods, and procedures that manipulate a digital image in order to evaluate its content, improve its appearance, recover information lost to degradation, compress the information for storage or transmission, detect the features of the objects present in the image, or interpret its content for further computational processes, such as pattern and object learning, handwritten character recognition, face recognition, three-dimensional reconstruction from two-dimensional images, motion detection, and image classification, among others. Digital image processing can therefore be computationally expensive, all the more so when the volume of images to be processed reaches the order of terabytes. Consequently, working on a single computer becomes impractical because of memory and time constraints. This naturally leads to the search for technological alternatives that allow both the processing of large volumes of information and the production of good-quality images. Massively scalable data-processing platforms and metaheuristic-based optimization techniques then appear as a feasible alternative. On the one hand, Hadoop is a parallel-processing framework that has gained great popularity in recent years thanks to its simple programming model and large storage capacity. On the other hand, metaheuristics have been applied with excellent results to optimization problems and to tasks related to image processing.
The research line presented here therefore focuses on integrating image-processing algorithms, metaheuristics, and processing platforms for application to the restoration of digital images. Topic: Distributed and Parallel Processing. Red de Universidades con Carreras en Informática (RedUNCI)
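    As a minimal, platform-independent illustration of a metaheuristic applied to restoration, the sketch below runs a (1+1) local search that recovers a 1-D signal from a known moving-average blur. The signal, kernel, and search parameters are all illustrative assumptions; a real system along the lines described above would distribute such searches over Hadoop and work on 2-D images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Degradation model: a known 3-tap moving-average blur of a 1-D signal.
n = 32
truth = np.zeros(n)
truth[10:20] = 1.0
kernel = np.ones(3) / 3.0
observed = np.convolve(truth, kernel, mode="same")

def cost(x):
    """Data-fidelity objective: how well a candidate explains the data."""
    return np.sum((np.convolve(x, kernel, mode="same") - observed) ** 2)

# (1+1) local search: perturb the current candidate, keep only improvements.
x = np.zeros(n)
best = cost(x)
for _ in range(20000):
    cand = x + rng.normal(0.0, 0.05, n)
    c = cost(cand)
    if c < best:
        x, best = cand, c
```

    More sophisticated metaheuristics (evolutionary algorithms, simulated annealing, particle swarms) replace the accept-if-better rule with population-based or temperature-controlled acceptance, but share this evaluate-perturb-select loop.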

    A link-based storage scheme for efficient aggregate query processing on clustered road networks

    Cataloged from PDF version of article. The need for efficient storage schemes for spatial networks is apparent when one considers the volume of query processing in some road networks (e.g., navigation systems). Specifically, under the assumption that the road network is stored on a central server, the adjacent data elements in the network must be clustered on disk in such a way that the number of disk page accesses during the processing of network queries is kept minimal. In this work, we introduce the link-based storage scheme for clustered road networks and compare it with the previously proposed junction-based storage scheme. In order to investigate the performance of aggregate network queries in clustered road networks, we extend our recently proposed clustering hypergraph model from junction-based storage to link-based storage. We propose techniques for additional storage savings in bidirectional networks that make the link-based storage scheme even more preferable in terms of storage efficiency. We evaluate the performance of our link-based storage scheme against the junction-based storage scheme both theoretically and empirically. The results of experiments conducted on a wide range of road network datasets show that the link-based storage scheme is preferable in terms of both storage and query processing efficiency. (C) 2009 Elsevier B.V. All rights reserved.

    In-situ backplane inspection of fiber optic ferrules

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006. Includes bibliographical references (p. 193-200). The next generation of supercomputers, routers, and switches is envisioned to have hundreds or thousands of optical interconnects among components. An optical interconnect attains a bandwidth-distance product as high as 90 GHz·km, about 200 times higher than can be attained by a copper interconnect. But defects (such as dust or scratches) as small as 1 micron on the connector endfaces can seriously degrade performance. Therefore, for every mate and de-mate, optical connectors must be inspected to ensure high-performance data transmission. The tedious and time-consuming task of manually inspecting each connector is one of the barriers to the adoption of optics in the backplanes of large card-based machines. This thesis provides a framework and method for in-situ automatic inspection of backplane optical connectors. We develop an inspection system that fits into the envelope of a single daughter card, moves a custom microscope objective in three degrees of freedom to image the connector endfaces, and detects and classifies defects with a major diameter of one micron or larger. The inspection machine mounts to the backplane in the same manner as a daughter card and positions the microscope with better than 0.2 micron resolution and 15 micron repeatability in three degrees of freedom. Despite tight packaging constraints, the ultra-long-working-distance custom microscope objective attains 1 micron Rayleigh resolution via deconvolution. Several images taken at different exposure and focus settings are fused to extend the imaging sensor's limited dynamic range and depth of field. A set of machine-vision algorithms is developed to process the resulting image and to detect and classify the fiber core, the cladding, and their defects. by Andrew K. Wilson. Ph.D.
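    The exposure-fusion step mentioned above can be sketched with a simple per-pixel weighting that favors mid-range (well-exposed) intensities, in the spirit of Mertens-style exposure fusion. The Gaussian weighting function and its width are assumptions for illustration, not the thesis's actual pipeline.

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Blend an exposure stack with per-pixel weights that favor
    mid-range (well-exposed) intensities."""
    stack = np.asarray(stack, dtype=float)       # shape (k, h, w), values in [0, 1]
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)            # normalize weights per pixel
    return (w * stack).sum(axis=0)

# one pixel seen under-, well-, and over-exposed across three captures
stack = [[[0.02]], [[0.5]], [[0.98]]]
fused = fuse_exposures(stack)
```

    Focus stacking works analogously, except the weights are driven by a local sharpness measure rather than by the intensity itself.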

    Imaging using volume holograms

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004. Includes bibliographical references (p. 185-196). Volume holograms can be thought of as self-aligned 3D stacks of diffractive elements that operate coherently on incident fields as they propagate through the structure. In this thesis, we propose, design, and implement imaging systems that incorporate a volume hologram as one of the elements operating on the optical field incident on the imaging system. We show that a volume hologram acts like a "smart" lens that can perform several useful functions in an imaging system, and we demonstrate this experimentally. To this end, we first develop the theory of volume holographic imaging and calculate the imaging properties of the field diffracted by a volume hologram for the special cases of coherent and incoherent monochromatic illumination. We concentrate on two simple imaging system configurations, namely volume holograms recorded using a planar signal and either a spherical or a planar reference beam. We pay particular attention to the depth resolution of each system and discuss how appropriately designed objective optics placed before the volume hologram can enhance the depth resolution. We also derive the imaging properties of the volume holographic "smart" lens under conditions of incoherent broadband illumination. We show that multiple volume holographic sensors can be configured to acquire different perspectives of an object with enhanced resolution. We experimentally verify the developed theories and implement several volume holographic imaging systems for a wide range of imaging applications. We compare volume holographic imaging with some commonly used 3D imaging systems and discuss the merits of each system. We find that volume holograms with low diffraction efficiencies result in lower photon counts and information loss, and hence poorer imaging performance. We present an optical method to solve this problem by resonating the volume hologram inside an optical cavity. Finally, we conclude with some directions for future work in this emerging field. by Arnab Sinha. Ph.D.

    Parallel triangular solution in the out-of-core multifrontal approach for solving large sparse linear systems

    We consider the solution of very large systems of linear equations with direct multifrontal methods. In this context, the size of the factors is an important limitation on the use of sparse direct solvers. We therefore assume that the factors have been written to the local disks of our target multiprocessor machine during the parallel factorization. Our main focus is the study and design of efficient approaches for the forward and backward substitution phases after a sparse multifrontal factorization. These phases involve sparse triangular solution and have often been neglected in previous work on sparse direct factorization. In many applications, however, the time for the solution can be the main performance bottleneck. This thesis consists of two parts. The first part focuses on optimizing the out-of-core performance of the solution phase; the second part further improves performance by exploiting the sparsity of the right-hand side vectors. In the first part, we describe and compare two approaches for accessing data on the hard disk. We then show that, in a parallel environment, the task scheduling can strongly influence performance. We prove that a constrained ordering of the tasks is possible; it does not introduce any deadlock, and it improves performance. Experiments on large real test problems (more than 8 million unknowns) using an out-of-core version of a sparse multifrontal code called MUMPS (MUltifrontal Massively Parallel Solver) are used to analyse the behaviour of our algorithms. In the second part, we are interested in applications with sparse multiple right-hand sides, particularly those with single nonzero entries. The motivating applications arise in electromagnetism and data assimilation. In such applications, we need either to compute the null space of a highly rank-deficient matrix or to compute entries in the inverse of a matrix associated with the normal equations of linear least-squares problems.
    We cast both of these problems as linear systems with multiple right-hand side vectors, each containing a single nonzero entry. We describe, implement, and comment on efficient algorithms that reduce the input-output cost during an out-of-core execution. We show how the sparsity of the right-hand side can be exploited to limit both the number of operations and the amount of data accessed. The work presented in this thesis was partially supported by the SOLSTICE ANR project (ANR-06-CIS6-010).
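    The benefit of a right-hand side with a single nonzero entry can be sketched for forward substitution with a unit-lower-triangular factor L stored by columns: only the components reachable from the nonzero's index are ever computed or read, so the rest of the factor need never be fetched from disk. The column-dictionary storage format here is a toy stand-in for a real solver's factor storage, not MUMPS's data structures.

```python
def sparse_forward_solve(L_cols, n, b_idx, b_val):
    """Forward substitution L x = b for unit-lower-triangular L stored
    by columns as {j: [(i, L[i, j]), ...]} with i > j, where the
    right-hand side is b = b_val * e_{b_idx}.  Columns whose solution
    component stays zero are skipped entirely."""
    x = {b_idx: b_val}                   # sparse solution vector
    for j in range(b_idx, n):            # columns before b_idx stay zero
        xj = x.get(j)
        if xj is None:
            continue                     # pruned: no work, no data access
        for i, lij in L_cols.get(j, []):
            x[i] = x.get(i, 0.0) - lij * xj
    return x

# unit-lower-triangular L with off-diagonal entries L[2,0]=2 and L[3,2]=3
L_cols = {0: [(2, 2.0)], 2: [(3, 3.0)]}
x = sparse_forward_solve(L_cols, n=4, b_idx=0, b_val=1.0)
```

    In an out-of-core setting, each skipped column is a block of the factors that never has to be read back from disk, which is how right-hand-side sparsity cuts both the operation count and the input-output volume.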