
    Solvent fluctuations induce non-Markovian kinetics in hydrophobic pocket-ligand binding

    We investigate the impact of water fluctuations on the key-lock association kinetics of a hydrophobic ligand (key) binding to a hydrophobic pocket (lock) by means of a minimalistic stochastic model system. It describes the collective hydration behavior of the pocket by bimodal fluctuations of a water-pocket interface that dynamically couples to the diffusive motion of the approaching ligand via the hydrophobic interaction. This leads to a set of overdamped Langevin equations in 2D coordinate space that is Markovian in each dimension. Numerical simulations demonstrate locally increased friction of the ligand, decelerated binding kinetics, and local non-Markovian (memory) effects in the ligand's reaction coordinate, as found previously in explicit-water molecular dynamics studies of model hydrophobic pocket-ligand binding [1,2]. Our minimalistic model elucidates the origin of the effectively enhanced friction in the process, which can be traced back to long-time decays in the force-autocorrelation function induced by the effective, spatially fluctuating pocket-ligand interaction. Furthermore, we construct a generalized 1D Langevin description including a spatially local memory function that enables further interpretation and a semi-analytical quantification of the results of the coupled 2D system.
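
    The coupled dynamics described above can be sketched numerically. The following is a minimal Euler-Maruyama integration of two coupled overdamped Langevin equations, with an illustrative double-well potential for the interface coordinate and a hypothetical coupling term standing in for the hydrophobic pocket-ligand interaction; all parameters and the potential are assumptions, not the authors' model.

    import numpy as np

    rng = np.random.default_rng(0)
    kT, gx, gw = 1.0, 1.0, 5.0          # temperature and friction coefficients (assumed)
    dt, steps = 1e-3, 100_000

    def grad_V(x, w):
        """Gradient of an illustrative potential: double well in w plus a coupling term."""
        dVdw = 4.0 * w * (w**2 - 1.0) + 0.5 * x    # bimodal interface fluctuations
        dVdx = 0.05 * x + 0.5 * w                  # weak confinement plus coupling to the interface
        return dVdx, dVdw

    x, w = 5.0, 1.0
    forces = np.empty(steps)
    for i in range(steps):
        dVdx, dVdw = grad_V(x, w)
        x += -dVdx / gx * dt + np.sqrt(2.0 * kT * dt / gx) * rng.standard_normal()
        w += -dVdw / gw * dt + np.sqrt(2.0 * kT * dt / gw) * rng.standard_normal()
        forces[i] = -dVdx                           # force along the ligand coordinate

    # A long-time decay of the force autocorrelation signals memory (non-Markovian) effects in x.
    f = forces[::100]
    f = f - f.mean()
    acf = np.correlate(f, f, mode="full")[f.size - 1:]
    print(acf[:5] / acf[0])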

    Iterative Decoding and Turbo Equalization: The Z-Crease Phenomenon

    Iterative probabilistic inference, popularly dubbed the soft-iterative paradigm, has found great use in a wide range of communication applications, including turbo decoding and turbo equalization. Classic analyses of the iterative approach inevitably use statistical and information-theoretic tools that carry an ensemble-average flavor. This paper considers the per-block error-rate performance and analyzes it using nonlinear dynamical theory. By modeling the iterative processor as a nonlinear dynamical system, we report a universal "Z-crease phenomenon": the zig-zag or up-and-down fluctuation -- rather than a monotonic decrease -- of the per-block errors as the number of iterations increases. Using the turbo decoder as an example, we also report several interesting motion phenomena that were not previously reported and that appear to correspond well with the notions of "pseudo codewords" and "stopping/trapping sets." We further propose a heuristic stopping criterion to control Z-crease and identify the best iteration. Our stopping criterion is most useful for controlling the worst-case per-block errors and helps to significantly reduce the average number of iterations. Comment: 6 pages.
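
    As a rough illustration of such a stopping rule (the paper's actual criterion is not reproduced here), the sketch below tracks a per-iteration quality proxy, namely the number of hard-decision sign flips between consecutive iterations as a hypothetical stand-in for the per-block error, keeps the best iterate seen so far, and halts once the proxy stops improving. The decoder_step callable and all thresholds are assumptions.

    import numpy as np

    def iterate_with_stopping(decoder_step, llr0, max_iters=20, patience=3):
        """Run soft iterations, returning the iterate with the fewest sign flips seen.

        decoder_step: callable mapping a vector of soft values to the next iteration's values.
        """
        llr = llr0.copy()
        best_llr, best_flips, stale = llr0.copy(), np.inf, 0
        prev_bits = llr0 > 0
        for _ in range(max_iters):
            llr = decoder_step(llr)
            bits = llr > 0
            flips = int(np.sum(bits != prev_bits))   # up-and-down (Z-crease) indicator
            prev_bits = bits
            if flips < best_flips:
                best_llr, best_flips, stale = llr.copy(), flips, 0
            else:
                stale += 1
                if stale >= patience:                # proxy stopped improving: halt early
                    break
        return best_llr

    # Toy usage with a hypothetical damping step (not a real turbo decoder):
    rng = np.random.default_rng(1)
    out = iterate_with_stopping(lambda l: 0.9 * l + 0.1 * np.sign(l), rng.standard_normal(64))
    print(out.shape)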

    Class of near-perfect coded apertures

    Coded aperture imaging of gamma-ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for one of two reasons. First, the encoding/decoding method produces artifacts which, even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by using a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
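
    To make the correlation-decoding idea concrete, the sketch below builds a 1-D pseudo-noise mask from quadratic residues modulo a prime p = 3 (mod 4) and decodes by periodic correlation with G = 2A - 1; this is an illustrative stand-in, not the specific URA family proposed in the paper. The perfectly flat sidelobes of the mask/decoder correlation are what eliminate the artifacts mentioned above.

    import numpy as np

    p = 31                                          # mask length: prime with p % 4 == 3
    qr = {(i * i) % p for i in range(1, p)}         # quadratic residues mod p
    A = np.array([1.0 if i in qr else 0.0 for i in range(p)])
    G = 2.0 * A - 1.0                               # correlation decoding array

    # A point source at position s casts a cyclically shifted copy of the mask.
    s = 11
    detector = np.roll(A, s)

    # Reconstruction: periodic cross-correlation of the recorded pattern with G.
    recon = np.array([np.dot(detector, np.roll(G, t)) for t in range(p)])
    print(int(np.argmax(recon)))                                        # recovers s = 11
    print(np.unique(np.round(np.delete(recon, np.argmax(recon)), 6)))   # flat sidelobes (all -1)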

    Image Characterization and Classification by Physical Complexity

    We present a method for estimating the complexity of an image based on Bennett's concept of logical depth. Bennett identified logical depth as the appropriate measure of organized complexity, and hence as better suited to evaluating the complexity of objects in the physical world. Its use results in a different, and in some sense finer, characterization than is obtained through the application of the concept of Kolmogorov complexity alone. We use this measure to classify images by their information content. The method provides a means for classifying and evaluating the complexity of objects by way of their visual representations. To the authors' knowledge, the method and application inspired by the concept of logical depth presented herein are being proposed and implemented for the first time. Comment: 30 pages, 21 figures.
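
    A minimal sketch of this style of estimate, under assumptions not taken from the paper (compressor choice, timing method, and repetition count are illustrative): compressed size serves as a Kolmogorov-complexity proxy, while decompression time serves as a logical-depth proxy.

    import bz2
    import os
    import time

    def complexity_estimates(data: bytes, repeats: int = 20):
        """Return (compressed size, mean decompression time) for a byte string."""
        compressed = bz2.compress(data, 9)
        k_proxy = len(compressed)                    # Kolmogorov-complexity proxy
        t0 = time.perf_counter()
        for _ in range(repeats):                     # repeat to stabilize the timing
            bz2.decompress(compressed)
        depth_proxy = (time.perf_counter() - t0) / repeats   # logical-depth proxy
        return k_proxy, depth_proxy

    # Random data is incompressible but "shallow"; a constant block is simple and shallow;
    # organized images would fall in between on the depth axis.
    print(complexity_estimates(os.urandom(1 << 16)))
    print(complexity_estimates(bytes(1 << 16)))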

    Instruments of RT-2 Experiment onboard CORONAS-PHOTON and their test and evaluation III: Coded Aperture Mask and Fresnel Zone Plates in RT-2/CZT Payload

    Imaging any astrophysical source in hard X-rays with high angular resolution is a challenging task. The shadow-casting technique is one of the most viable options for imaging in hard X-rays. We have used two different types of shadow-casters, namely a Coded Aperture Mask (CAM) and a Fresnel Zone Plate (FZP) pair, and two types of pixellated solid-state detectors, namely CZT and CMOS, in the RT-2/CZT payload, the hard X-ray imaging instrument onboard the CORONAS-PHOTON satellite. In this paper, we present the results of simulations with different combinations of coders (CAM & FZP) and detectors that are employed in the RT-2/CZT payload. We discuss the possibility of detecting transient solar flares with good angular resolution for various combinations. Simulated results are compared with laboratory experiments to verify the consistency of the designed configuration. Comment: 27 pages, 16 figures, accepted for publication in Experimental Astronomy (in press).
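
    A toy version of such a shadow-casting simulation (the actual RT-2/CZT mask pattern, geometry, and detector response are not reproduced; the random mask and count level below are assumptions) casts a shifted copy of a 2-D mask onto the detector for a point-like flare, adds Poisson counting noise, and recovers the source position by cross-correlating the shadowgram with the mask.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 32
    mask = (rng.random((n, n)) < 0.5).astype(float)        # stand-in for the CAM pattern

    def cyclic_corr2(a, b):
        """Periodic 2-D cross-correlation computed via FFT."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    # A point-like source (e.g., a flare) offset by (dy, dx) casts a shifted mask shadow.
    dy, dx = 9, 20
    shadow = rng.poisson(np.roll(mask, (dy, dx), axis=(0, 1)) * 1000.0).astype(float)

    recon = cyclic_corr2(shadow, 2.0 * mask - 1.0)          # balanced correlation decoding
    print(np.unravel_index(np.argmax(recon), recon.shape))  # ~ (dy, dx) = (9, 20)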

    Steered mixture-of-experts for light field images and video: representation and coding

    Research in light field (LF) processing has increased heavily over the last decade. This is largely driven by the desire to achieve the same level of immersion and navigational freedom for camera-captured scenes as is currently available for CGI content. Standardization organizations such as MPEG and JPEG continue to follow conventional coding paradigms in which viewpoints are discretely represented on 2-D regular grids. These grids are then further decorrelated through hybrid DPCM/transform techniques. However, these 2-D regular grids are less suited for high-dimensional data such as LFs. We propose a novel coding framework for higher-dimensional image modalities, called Steered Mixture-of-Experts (SMoE). Coherent areas in the higher-dimensional space are represented by single higher-dimensional entities, called kernels. These kernels hold spatially localized information about light rays at any angle arriving at a certain region. The global model thus consists of a set of kernels which define a continuous approximation of the underlying plenoptic function. We introduce the theory of SMoE and illustrate its application for 2-D images, 4-D LF images, and 5-D LF video. We also propose an efficient coding strategy to convert the model parameters into a bitstream. Even without provisions for high-frequency information, the proposed method performs comparably to the state of the art for low-to-mid-range bitrates with respect to subjective visual quality of 4-D LF images. In the case of 5-D LF video, we observe superior decorrelation and coding performance, with coding gains of a factor of 4 in bitrate for the same quality. At least equally important is the fact that our method inherently provides functionality desirable for LF rendering which is lacking in other state-of-the-art techniques: (1) full zero-delay random access, (2) light-weight pixel-parallel view reconstruction, and (3) intrinsic view interpolation and super-resolution.
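
    A 2-D toy of the kernel idea described above (kernel count, isotropic Gaussian gating, and linear experts are illustrative assumptions, not the paper's exact design): each kernel gates a local linear expert over pixel coordinates, and the gate-weighted blend yields a continuous approximation of the signal that can be sampled at any resolution.

    import numpy as np

    rng = np.random.default_rng(3)
    K = 16                                           # number of kernels (assumed)
    centers = rng.random((K, 2))                     # kernel centers in [0, 1]^2
    precisions = np.full(K, 80.0)                    # isotropic gate sharpness
    experts = rng.random((K, 3))                     # linear expert: a*x + b*y + c

    def smoe_reconstruct(coords):
        """coords: (N, 2) positions in [0, 1]^2 -> (N,) reconstructed values."""
        d2 = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
        logits = -0.5 * precisions * d2
        gates = np.exp(logits - logits.max(1, keepdims=True))
        gates /= gates.sum(1, keepdims=True)                             # softmax gating
        vals = coords @ experts[:, :2].T + experts[:, 2]                 # (N, K) expert outputs
        return (gates * vals).sum(1)

    # Sample the continuous model on a 64x64 grid (an LF model would add angular dimensions).
    g = np.linspace(0.0, 1.0, 64)
    xy = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)
    img = smoe_reconstruct(xy).reshape(64, 64)
    print(img.shape, round(float(img.min()), 3), round(float(img.max()), 3))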

    Radar signal categorization using a neural network

    Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy-minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small database that potentially could make emitter identifications.
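
    A minimal sketch of BSB-style attractor dynamics in this spirit (the Hebbian storage rule, gains, and pulse encoding below are assumptions rather than the study's configuration): emitter prototypes are stored as corners of the [-1, 1] hypercube, and a noisy pulse descriptor is iterated until it settles toward the attractor of the closest learned emitter.

    import numpy as np

    rng = np.random.default_rng(4)
    dim, n_emitters = 32, 4
    prototypes = np.sign(rng.standard_normal((n_emitters, dim)))   # +/-1 emitter codes

    W = sum(np.outer(p, p) for p in prototypes) / dim              # Hebbian outer-product storage
    np.fill_diagonal(W, 0.0)

    def bsb_settle(x, alpha=0.3, decay=0.9, steps=50):
        """Iterate the box-constrained BSB update until the state settles."""
        for _ in range(steps):
            x = np.clip(decay * x + alpha * W @ x, -1.0, 1.0)      # feedback, decay, clip to the box
        return x

    # A noisy observation of emitter 2 is driven toward a corner of the box;
    # matching against the stored prototypes then identifies the emitter.
    noisy = prototypes[2] + 0.8 * rng.standard_normal(dim)
    settled = bsb_settle(noisy)
    print(int(np.argmax(prototypes @ np.sign(settled))))           # typically prints 2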

    Study of information transfer optimization for communication satellites

    The results are presented of a study of source coding, modulation/channel coding, and systems techniques for application to teleconferencing over high-data-rate digital communication satellite links. Simultaneous transmission of video, voice, data, and/or graphics is possible in various teleconferencing modes, and one-way, two-way, and broadcast modes are considered. A satellite channel model including filters, a limiter, a TWT, detectors, and an optimized equalizer is treated in detail. A complete analysis is presented for one set of system assumptions which exclude nonlinear gain and phase distortion in the TWT. Modulation, demodulation, and channel coding are considered, based on an additive white Gaussian noise channel model which is an idealization of an equalized channel. Source coding, with emphasis on video data compression, is reviewed, and the experimental facility utilized to test promising techniques is fully described.
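
    As a small illustration of the idealized AWGN link analysis mentioned above (BPSK signaling and hard-decision detection are assumptions; the study's filter/limiter/TWT chain is not modeled), the sketch below simulates bit transmission over an additive white Gaussian noise channel and compares the measured bit error rate with the theoretical 0.5*erfc(sqrt(Eb/N0)) value.

    import math
    import numpy as np

    rng = np.random.default_rng(5)
    n_bits = 200_000

    for ebn0_db in (0, 2, 4, 6):
        ebn0 = 10.0 ** (ebn0_db / 10.0)
        bits = rng.integers(0, 2, n_bits)
        symbols = 1.0 - 2.0 * bits                        # BPSK mapping: 0 -> +1, 1 -> -1
        noise = rng.standard_normal(n_bits) / math.sqrt(2.0 * ebn0)
        decided = (symbols + noise) < 0.0                  # hard-decision detector
        ber = float(np.mean(decided != bits))
        theory = 0.5 * math.erfc(math.sqrt(ebn0))
        print(f"Eb/N0 = {ebn0_db} dB: simulated BER {ber:.4f}, theory {theory:.4f}")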