
    Inversion Ranks for Lossless Compression of Color Palette Images

    Palette images are widely used in World Wide Web (WWW) and game cartridge applications. Many images used on the WWW are stored and transmitted after being compressed losslessly with the standard Graphics Interchange Format (GIF) or Portable Network Graphics (PNG). Well-known two-dimensional compression schemes, such as JPEG-LS and CALIC, fail to yield better compression than GIF or PNG because the pixel values represent indices that point to color values in a look-up table. The GIF standard uses Lempel-Ziv compression, which treats the image as a one-dimensional sequence of index values, ignoring its two-dimensional nature. Bzip, another universal compressor, yields even better compression gain than GIF, PNG, JPEG-LS, and CALIC. Variants of block sorting coders, such as Bzip2, apply the Burrows-Wheeler transformation (BWT) of Burrows M. and Wheeler D. J. (1994), followed by the move-to-front (MTF) transformation of Bentley J. L. (1986) and Elias P. (1987), before using a statistical coder at the final stage. In this paper, we show that the compression performance of a block sorting coder can be improved by almost 14% on average by utilizing inversion ranks instead of move-to-front coding.
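
    For context, the sketch below shows a plain move-to-front transform, the post-BWT stage that the paper proposes to replace with inversion ranks; the 256-symbol byte alphabet and the sample input are illustrative assumptions, and the inversion-rank coding itself is not reproduced here.

```python
# Minimal sketch of the move-to-front (MTF) stage used after the
# Burrows-Wheeler transform in block sorting coders such as bzip2.
# Inversion ranks, as proposed in the paper, would replace this stage;
# only the standard MTF transform is shown here.

def mtf_encode(data, alphabet_size=256):
    """Encode a byte sequence as a list of MTF ranks."""
    table = list(range(alphabet_size))          # current symbol ordering
    ranks = []
    for symbol in data:
        rank = table.index(symbol)              # position of the symbol
        ranks.append(rank)
        table.pop(rank)                         # move the symbol to the front
        table.insert(0, symbol)
    return ranks

def mtf_decode(ranks, alphabet_size=256):
    """Invert the MTF transform."""
    table = list(range(alphabet_size))
    data = []
    for rank in ranks:
        symbol = table.pop(rank)
        data.append(symbol)
        table.insert(0, symbol)
    return bytes(data)

if __name__ == "__main__":
    sample = b"banana_band"                     # stands in for BWT output, which clusters symbols
    encoded = mtf_encode(sample)
    assert mtf_decode(encoded) == sample
    print(encoded)                              # runs of small ranks compress well
```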

    Active inference and oculomotor pursuit: the dynamic causal modelling of eye movements.

    This paper introduces a new paradigm that allows one to quantify the Bayesian beliefs evidenced by subjects during oculomotor pursuit. Subjects' eye tracking responses to a partially occluded sinusoidal target were recorded non-invasively and averaged. These response averages were then analysed using dynamic causal modelling (DCM). In DCM, observed responses are modelled using biologically plausible generative or forward models - usually biophysical models of neuronal activity.

    Chained activation of the motor system during language understanding

    Two experiments were carried out to investigate whether and how one important characteristic of the motor system, that is, its goal-directed organization in motor chains, is reflected in language processing. This possibility stems from the embodied theory of language, according to which the linguistic system re-uses the structures of the motor system. The participants were presented with nouns of common tools preceded by a pair of verbs expressing grasping or observational motor chains (i.e., grasp-to-move, grasp-to-use, look-at-to-grasp, and look-at-to-stare). They decided whether the tool mentioned in the sentence was the same as that displayed in a picture presented shortly after. A primacy of the grasp-to-use motor chain over the other motor chains in priming the participants' performance was observed in both experiments. More interestingly, we found that the motor information evoked by the noun was modulated by the specific motor chain expressed by the preceding verbs. Specifically, with the grasping chain aimed at using the tool, the functional motor information prevailed over the volumetric information, and vice versa with the grasping chain aimed at moving the tool (Experiment 2). Instead, the functional and volumetric information were balanced for those motor chains that comprise at least one observational act (Experiment 1). Overall, our results are in keeping with the embodied theory of language and suggest that understanding sentences expressing an action directed toward a tool drives a chained activation of the motor system.

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics and many other applications. Unfortunately, each technique suffers from its individual limitations preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, Accepted to 3DV 201
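
    For orientation, the sketch below illustrates the single-frequency TOF phase-wrapping problem mentioned above: a wrapped phase measurement only determines depth up to an ambiguity interval, and additional cues (in the paper, the angular samples of a depth field) must select among the candidates. The 50 MHz modulation frequency and the phase values are assumed for illustration; none of the paper's depth-field machinery is implemented here.

```python
# Minimal sketch of continuous-wave time-of-flight depth recovery and the
# phase-wrapping ambiguity that single frequency phase unwrapping resolves.
# The modulation frequency and phase values below are illustrative only.

import numpy as np

C = 3e8  # speed of light, m/s

def tof_depth(phase, f_mod):
    """Depth implied by a wrapped phase measurement (radians) at f_mod (Hz)."""
    return C * phase / (4 * np.pi * f_mod)

def ambiguity_distance(f_mod):
    """Depths separated by this interval produce the same wrapped phase."""
    return C / (2 * f_mod)

if __name__ == "__main__":
    f_mod = 50e6                                  # 50 MHz modulation (assumed)
    phase = np.array([0.5, 2.0, 5.8])             # wrapped phases in [0, 2*pi)
    d = tof_depth(phase, f_mod)
    d_amb = ambiguity_distance(f_mod)             # 3 m at 50 MHz
    # Each measurement is consistent with d + k * d_amb for integer k >= 0;
    # extra cues are needed to select the correct k for each pixel.
    candidates = d[:, None] + d_amb * np.arange(3)[None, :]
    print(candidates)
```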

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
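
    As a rough illustration of the recognition idea, the toy sketch below uses a random RNN as a generative model, predicts the next hidden state, and corrects the estimate with the resulting prediction error. The fixed gain K, the random weights, and the dimensions are assumptions made for illustration and are not the Bayesian update equations derived in the paper.

```python
# Toy sketch of recognition with a generative RNN: predict with the model,
# compute the prediction error on the observation, and correct the estimate.
# Weights, gain, and dimensions are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_obs, T = 8, 2, 200

W = rng.normal(scale=0.9 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
H = rng.normal(scale=1.0, size=(n_obs, n_hidden))

def generate(x0):
    """Run the generative RNN forward to simulate observed dynamics."""
    x, ys = x0, []
    for _ in range(T):
        x = np.tanh(W @ x)
        ys.append(H @ x + 0.05 * rng.normal(size=n_obs))  # noisy observation
    return np.array(ys)

def recognize(ys, K=0.3):
    """Alternate model predictions with prediction-error corrections."""
    x_hat = np.zeros(n_hidden)
    estimates = []
    for y in ys:
        x_pred = np.tanh(W @ x_hat)          # prediction message
        err = y - H @ x_pred                 # prediction error message
        x_hat = x_pred + K * H.T @ err       # correction (fixed gain for brevity)
        estimates.append(x_hat.copy())
    return np.array(estimates)

if __name__ == "__main__":
    ys = generate(rng.normal(size=n_hidden))
    x_est = recognize(ys)
    print(np.mean((ys - x_est @ H.T) ** 2))  # residual prediction error
```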