10 research outputs found

    A Dynamic Programming Solution to Bounded Dejittering Problems

    We propose a dynamic programming solution to image dejittering problems with bounded displacements and obtain efficient algorithms for the removal of line jitter, line pixel jitter, and pixel jitter.
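
    The abstract states only the problem, so the sketch below illustrates how a bounded-displacement line-jitter problem maps onto dynamic programming; the cost function, boundary handling, and all names are our own illustrative choices, not the paper's algorithm.

        import numpy as np

        def dejitter_line_jitter(img, D):
            # Illustrative DP for line jitter: find per-row corrective integer
            # shifts c_r in [-D, D] so that consecutive corrected rows are as
            # similar as possible. The solution is unique only up to a global
            # constant shift of all rows.
            rows, _ = img.shape
            shifts = np.arange(-D, D + 1)
            S = len(shifts)

            def shifted(row, d):
                # integer shift with edge replication at the borders
                out = np.roll(row, d)
                if d > 0:
                    out[:d] = row[0]
                elif d < 0:
                    out[d:] = row[-1]
                return out

            cost = np.zeros((rows, S))          # best cost per (row, state)
            back = np.zeros((rows, S), int)     # backpointers
            for r in range(1, rows):
                prev = np.stack([shifted(img[r - 1], d) for d in shifts])
                cur = np.stack([shifted(img[r], d) for d in shifts])
                # L1 dissimilarity for every (previous shift, current shift) pair
                pair = np.abs(cur[None, :, :] - prev[:, None, :]).sum(axis=2)
                total = cost[r - 1][:, None] + pair
                back[r] = np.argmin(total, axis=0)
                cost[r] = total[back[r], np.arange(S)]

            idx = np.empty(rows, int)
            idx[-1] = np.argmin(cost[-1])
            for r in range(rows - 1, 0, -1):    # backtrack the optimal path
                idx[r - 1] = back[r, idx[r]]
            c = shifts[idx]
            return np.stack([shifted(img[r], c[r]) for r in range(rows)]), c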

    A variational method for dejittering large fluorescence line scanner images

    We propose a variational method dedicated to jitter correction of large fluorescence scanner images. Our method consists of minimizing a global energy functional to estimate a dense displacement field representing the spatially varying jitter. The computational approach is based on a half-quadratic splitting of the energy functional, which decouples the realignment data term from the dedicated differential-based regularizer. The resulting problem amounts to alternately solving a convex and a nonconvex optimization subproblem with appropriate algorithms. Experimental results on artificial and large real fluorescence images demonstrate that our method is not only capable of handling large displacements but also achieves subpixel precision without introducing additional intensity artifacts.
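
    For orientation, a generic half-quadratic splitting of such an energy (our notation; the paper's exact formulation may differ) introduces an auxiliary field v and a coupling weight \beta:

        \min_{u,v}\; D(u) + \tfrac{\beta}{2}\,\|u - v\|_2^2 + R(v),

    which is minimized by alternating the two subproblems

        u^{k+1} = \arg\min_u\; D(u) + \tfrac{\beta}{2}\,\|u - v^k\|_2^2,
        \qquad
        v^{k+1} = \arg\min_v\; R(v) + \tfrac{\beta}{2}\,\|u^{k+1} - v\|_2^2,

    where u is the displacement field, D the realignment data term, and R the differential-based regularizer; one subproblem is convex and the other nonconvex, matching the alternation described above.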

    A class of second-order geometric quasilinear hyperbolic PDEs and their application in imaging science

    In this paper, we study damped second-order dynamics, which are quasilinear hyperbolic partial differential equations (PDEs). This is inspired by the recent development of second-order damping systems for accelerating the energy decay of gradient flows. We concentrate on two equations: one is a damped second-order total variation flow, primarily motivated by the application of image denoising; the other is a damped second-order mean curvature flow for level sets of scalar functions, which is related to a non-convex variational model capable of correcting displacement errors in image data (e.g. dejittering). For the former equation, we prove the existence and uniqueness of the solution. For the latter, we draw a connection between the equation and some second-order geometric PDEs evolving hypersurfaces described by level sets of scalar functions, and show the existence and uniqueness of the solution for a regularized version of the equation; this regularized version is the one used in our algorithmic development. A general algorithm for the numerical discretization of the two nonlinear PDEs is proposed and analyzed. Its efficiency is demonstrated by various numerical examples, where simulations of the behavior of solutions of the new equations and comparisons with first-order flows are also documented.
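
    For concreteness, the classical first-order total variation flow is u_t = \operatorname{div}(\nabla u / |\nabla u|). A damped second-order analogue of the kind studied here, written with a damping parameter \eta > 0 and initial image f (a generic form; the paper's exact equations may differ), reads

        u_{tt} + \eta\, u_t = \operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right),
        \qquad u(\cdot, 0) = f, \quad u_t(\cdot, 0) = 0,

    while the corresponding damped second-order mean curvature flow for level sets carries the extra factor |\nabla u|:

        u_{tt} + \eta\, u_t = |\nabla u|\,\operatorname{div}\!\left(\frac{\nabla u}{|\nabla u|}\right).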

    Generating structured non-smooth priors and associated primal-dual methods

    The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The inclusion of the theory of bilevel optimization and of the theoretical background of the dualization framework, together with a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
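
    As a generic instance of such a scheme (our notation; the chapter's precise setup may differ), the bilevel problem for a spatially varying total variation weight \lambda can be written as

        \min_{\lambda \ge 0}\; F(u_\lambda)
        \quad\text{s.t.}\quad
        u_\lambda = \arg\min_u\; \tfrac{1}{2}\,\|u - f\|_2^2 + \int_\Omega \lambda(x)\,|Du|,

    where f is the noisy datum, the lower level is a weighted TV denoising problem, and the upper-level cost F measures the quality of the reconstruction; the dualization framework is what makes the non-smooth lower level tractable inside the bilevel loop.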

    Holistic improvement of image acquisition and reconstruction in fluorescence microscopy

    Recent developments in microscopic imaging have led to a better understanding of intra- and intercellular metabolic processes and have made it possible, for example, to visualize structural properties of viral pathogens. In this thesis, the imaging process of widefield and confocal scanning microscopy techniques is treated holistically in order to highlight general strategies and maximise the information content of the measurements. Poisson (shot) noise is assumed to be the fundamental noise process for the given measurements. A stable focus position is a basic requirement, e.g. for long-term measurements, in order to provide reliable information about potential changes inside the field of view. While newer microscopy systems can be equipped with a hardware autofocus, this is not yet the widespread standard. For image-based focus analysis, different metrics are presented for ideal, noisy, and aberrated (spherical aberration and astigmatism) measurements. A stable focus position is also relevant in the example of 2-photon confocal imaging of the retina in the living mouse, where the situation is aggravated: in addition to the natural drift of the focal position, which can be evaluated by means of the previously introduced metrics, there are the rhythmic heartbeat and respiration as well as unrhythmic muscle twitching and movement of the anaesthetized mouse. A dejittering algorithm is presented for measurement data obtained under these circumstances. Using the additional information about the sample distribution available in image scanning microscopy (ISM), a method for reconstructing 3D from 2D image data is presented in the form of thick-slice unmixing. This method can further be used to suppress light generated outside the focal layer of 3D data stacks and is compared to selective-layer multi-view deconvolution. To reduce phototoxicity and save valuable measurement time for a 3D stack, the zLEAP method is presented, by which omitted z-planes are subsequently calculated and inserted.
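
    The thesis's specific focus metrics are not reproduced here, but two standard image-based sharpness measures of the kind such an analysis builds on are gradient energy (Tenengrad) and the variance of the Laplacian; the sketch below is generic and illustrative, not the thesis's implementation.

        import numpy as np
        from scipy import ndimage

        def tenengrad(img):
            # gradient-energy focus measure: larger near the best focus plane
            gx = ndimage.sobel(img.astype(float), axis=1)
            gy = ndimage.sobel(img.astype(float), axis=0)
            return float(np.mean(gx ** 2 + gy ** 2))

        def variance_of_laplacian(img):
            # second-derivative focus measure; sensitive to shot noise, so it
            # is often preceded by mild smoothing for Poisson-limited data
            return float(ndimage.laplace(img.astype(float)).var())

        def best_focus_plane(stack):
            # pick the z-plane of a stack that maximizes the chosen metric
            scores = [tenengrad(plane) for plane in stack]
            return int(np.argmax(scores)), scores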

    Journal of Telecommunications and Information Technology, 2002, no. 2

    Quarterly journal.

    Designing new network adaptation and ATM adaptation layers for interactive multimedia applications

    Multimedia services, audiovisual applications composed of a combination of discrete and continuous data streams, will be a major part of the traffic flowing in the next generation of high-speed networks. The cornerstones for multimedia are Asynchronous Transfer Mode (ATM), foreseen as the technology for the future Broadband Integrated Services Digital Network (B-ISDN), and audio and video compression algorithms such as MPEG-2 that reduce applications' bandwidth requirements. Powerful desktop computers available today can seamlessly integrate the network access and the applications and thus bring the new multimedia services to home and business users. Among these services, those based on multipoint capabilities are expected to play a major role.

    Interactive multimedia applications, unlike traditional data transfer applications, have stringent simultaneous requirements in terms of loss and delay jitter due to the nature of audiovisual information. In addition, such stream-based applications deliver data at a variable rate, in particular if a constant quality is required.

    ATM is able to integrate traffic of different natures within a single network, creating interactions of different types that translate into delay jitter and loss. Traditional protocol layers do not have the appropriate mechanisms to provide the required network quality of service (QoS) for such interactive variable bit rate (VBR) multimedia multipoint applications. This lack of functionality calls for the design of protocol layers with the appropriate functions to handle the stringent requirements of multimedia.

    This thesis contributes to the solution of this problem by proposing new Network Adaptation and ATM Adaptation Layers for interactive VBR multimedia multipoint services.

    The foundations on which these new multimedia protocol layers are built are twofold: the requirements of real-time multimedia applications and the nature of compressed audiovisual data.

    On this basis, we present a set of design principles we consider mandatory for a generic Multimedia AAL (MAAL) capable of handling interactive VBR multimedia applications in point-to-point as well as multicast environments. These design principles are then used as a foundation to derive a first set of functions for the MAAL, namely: cell loss detection via sequence numbering, packet delineation, dummy cell insertion, and cell loss correction via RSE FEC techniques.

    The proposed functions, partly based on theoretical studies, are implemented and evaluated in a simulated environment. Performance is evaluated from the network point of view using classic metrics such as cell and packet loss. We also study the behavior of the cell loss process in order to evaluate the efficiency to be expected from the proposed cell loss correction method. We further discuss the difficulties of mapping network QoS parameters to user QoS parameters for multimedia applications, especially for video information. In order to present a complete performance evaluation that is also meaningful to the end user, we use the MPQM metric to map the obtained network performance results to the user level. We evaluate the impact that cell loss has on video as well as the improvements achieved with the MAAL.

    All performance results are compared to an equivalent implementation based on AAL5, as specified by the current ITU-T and ATM Forum standards.

    An AAL has to be, by definition, generic. But to fully exploit the functionalities of the AAL layer, it is necessary to have a protocol layer that efficiently interfaces the network and the applications. This role is devoted to the Network Adaptation Layer (NAL).

    The NAL we propose aims at efficiently interfacing the applications to the underlying network in order to achieve a reliable but low-overhead transmission of video streams. Since this requires a priori knowledge of the information structure to be transmitted, we propose that the NAL be codec specific.

    The NAL targets interactive multimedia applications. These applications share a set of common requirements independent of the encoding scheme used. This calls for the definition of a set of design principles that should be shared by any NAL, even if the implementation of the functions themselves is codec specific. On the basis of these design principles, we derive the common functions that NALs have to perform, which are mainly two: the segmentation and reassembly of data packets and the selective protection of data.

    On this basis, we develop an MPEG-2-specific NAL. It provides perceptual syntactic information protection (PSIP), which results in an intelligent, minimum-overhead protection of video information. The PSIP takes advantage of the hierarchical organization of the compressed video data, common to the majority of compression algorithms, to perform a selective data protection based on the perceptual relevance of the syntactic information.

    Transmission over the combined NAL-MAAL layers shows significant improvements in terms of CLR and perceptual quality compared to equivalent transmissions over AAL5 with the same overhead.

    The usage of the MPQM as a performance metric, which is one of the main contributions of this thesis, leads to a very interesting observation. The experimental results show that for unexpectedly high CLRs, the average perceptual quality remains close to the original value. The economic potential of this observation is considerable: given that the data flows are VBR, it is possible to improve network utilization by means of statistical multiplexing, and therefore to reduce the cost per communication by increasing the number of connections with a minimal loss in quality.

    This conclusion could not have been derived without the combined usage of perceptual and network QoS metrics, which unveil the economic potential of perceptually protected streams.

    The proposed concepts are finally tested in a real environment, where a proof-of-concept implementation of the MAAL shows behavior close to the simulated results, thereby validating the proposed multimedia protocol layers.
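
    Of the MAAL functions listed above, cell loss detection via sequence numbering is the easiest to make concrete. The sketch below (modulus, interface, and naming are illustrative assumptions, not the MAAL's actual header layout) turns gaps in the received sequence numbers into erasure positions, which is exactly the side information an RSE (Reed-Solomon erasure) decoder needs.

        def detect_cell_losses(seq_numbers, modulus=256):
            # Gaps between consecutive sequence numbers (mod modulus) give the
            # number of missing cells; their stream positions are returned so
            # an erasure-correcting FEC decoder can treat them as erasures.
            erasures, expected, pos = [], None, 0
            for s in seq_numbers:
                if expected is not None:
                    gap = (s - expected) % modulus   # 0 means no loss
                    erasures.extend(range(pos, pos + gap))
                    pos += gap
                expected = (s + 1) % modulus
                pos += 1
            return erasures

    For example, detect_cell_losses([0, 1, 3, 4]) returns [2], marking the third cell of the stream as an erasure.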

    Applications of satellite technology to broadband ISDN networks

    Two satellite architectures for delivering broadband integrated services digital network (B-ISDN) services are evaluated. The first is assumed to be integral to an existing terrestrial network and provides complementary services such as interconnects to remote nodes as well as high-rate multicast and broadcast service. The interconnects are at a 155 Mbps rate and are shown to be met with a nonregenerative multibeam satellite having ten 1.5-degree spots. The second satellite architecture focuses on providing private B-ISDN networks as well as acting as a gateway to the public network. This is conceived as being provided by a regenerative multibeam satellite with an on-board ATM (asynchronous transfer mode) processing payload. With up to 800 Mbps offered, a higher satellite EIRP is required. This is accomplished with twelve 0.4-degree hopping beams covering a total of 110 dwell positions. It is estimated that the space-segment capital cost for architecture one would be about $190M, whereas the second architecture would be about $250M. The net user cost is given for a variety of scenarios, but the cost for 155 Mbps service is shown to be about $15-22 per minute at 25 percent system utilization.
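
    The quoted per-minute figure is tied to the assumed 25 percent utilization: with the space-segment capital cost fixed, the per-minute cost scales roughly inversely with utilization. A quick illustrative check of that scaling (our simplification; it ignores any utilization-dependent operating costs):

        # $15-22 per minute at 25% utilization, from the abstract above;
        # assume pure 1/u scaling of the per-minute cost with utilization u
        for u in (0.25, 0.50, 1.00):
            lo, hi = 15 * 0.25 / u, 22 * 0.25 / u
            print(f"utilization {u:.0%}: ${lo:.2f}-${hi:.2f} per minute")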

    Hardware-accelerated algorithms in visual computing

    This thesis presents new parallel algorithms which accelerate computer vision methods by the use of graphics processors (GPUs) and evaluates them with respect to their speed, scalability, and the quality of their results. It covers the fields of homogeneous and anisotropic diffusion processes, diffusion image inpainting, optic flow, and halftoning. In doing so, it compares different solvers for homogeneous diffusion and presents a novel 'extended' box filter. Moreover, it suggests the fast explicit diffusion (FED) scheme as an efficient and flexible solver for nonlinear, and in particular anisotropic, parabolic diffusion problems on graphics hardware. For elliptic diffusion-like processes, it recommends cascadic FED or fast Jacobi schemes. The presented optic flow algorithm represents one of the fastest yet very accurate techniques. Finally, it presents a novel halftoning scheme which yields state-of-the-art results for many applications in image processing and computer graphics.
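
    The FED scheme mentioned above replaces many small explicit diffusion steps by cycles of varying step sizes, individual ones of which exceed the explicit stability limit while each cycle as a whole remains stable. The sketch below applies one such cycle to homogeneous diffusion with periodic boundaries; the step-size formula follows the published FED construction, but the rest is an illustrative simplification, not the thesis's GPU implementation.

        import numpy as np

        def fed_step_sizes(n, tau_max):
            # FED cycle of n steps; the cycle advances the diffusion time by
            # tau_max * (n**2 + n) / 3, far more than n uniform stable steps
            i = np.arange(n)
            return tau_max / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * n + 2)) ** 2)

        def fed_cycle_homogeneous(u, n, tau_max=0.25):
            # one FED cycle of explicit homogeneous diffusion; tau_max = 0.25
            # is the 2D explicit stability limit for unit grid spacing, and
            # np.roll gives periodic boundaries (a simplification)
            for tau in fed_step_sizes(n, tau_max):
                lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
                u = u + tau * lap
            return u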