
    Optical simulation of quantum logic

    A constructive method for simulating small-scale quantum circuits by use of linear optical devices is presented. It relies on the representation of several quantum bits by a single photon, and on the implementation of universal quantum gates using simple optical components (beam splitters, phase shifters, etc.). This suggests that the optical realization of small quantum networks with present-day quantum optics technology is a reasonable goal. This technique could be useful for demonstrating basic concepts of simple quantum algorithms or error-correction schemes. The optical analog of a nontrivial three-bit quantum circuit is presented as an illustration.
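    The encoding the abstract describes can be sketched numerically: in a dual-rail encoding, a single photon in two spatial modes carries one qubit, and a lossless beam splitter plus a phase shifter act as unitaries on that two-mode state. The angles and the gate decomposition below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Dual-rail encoding: |0> = photon in mode a, |1> = photon in mode b.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

def beam_splitter(theta):
    """Lossless beam splitter: a rotation in the two-mode single-photon subspace."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]], dtype=complex)

def phase_shifter(phi):
    """Phase shift applied to mode b only."""
    return np.array([[1.0, 0.0],
                     [0.0, np.exp(1j * phi)]], dtype=complex)

# A 50/50 beam splitter followed by a pi phase shift on mode b
# implements a Hadamard gate on the dual-rail qubit (one possible choice).
H = phase_shifter(np.pi) @ beam_splitter(np.pi / 4)

psi = H @ ket0
probs = np.abs(psi) ** 2   # photodetection probabilities in modes a and b
```

    Since the optical elements are passive and lossless, `H` is unitary, and a photon prepared in mode a is detected in either output mode with probability 1/2, as expected for a Hadamard.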

    Performance of a Distributed Video Codec in the Presence of Transmission Errors

    Distributed Video Coding (DVC) is one of the most important and active research fields in video coding. The basic idea underlying DVC is to exploit the temporal correlation among frames directly in the decoding phase. One of the main properties of a distributed video coding system is that the computational load can, in principle, be shifted towards the decoder with respect to a traditional video coding system. The distributed coding approach has other interesting properties as well. In particular, one of the most promising benefits of DVC is its natural resilience to channel errors. Nevertheless, very few results on the actual error resilience properties of distributed video coding systems have been presented in the literature. In this contribution we present a detailed analysis of the error resilience properties of a video coding system based on the Stanford architecture. We analyze the behavior of such a codec in the presence of channel errors, first focusing on the effect of such errors on the different parts of the encoded stream, and then making a preliminary comparison with H.264.

    High Dynamic Range Images Coding: Embedded and Multiple Description

    The aim of this work is to highlight and discuss a new paradigm for representing high-dynamic-range (HDR) images that can be used both for coding them and for describing their multimedia content. In particular, the new approach defines a representation domain that, unlike the classical compressed domain, makes it possible to identify and exploit content metadata. Information related to content is used here to control both the encoding and the decoding process and is directly embedded in the compressed data stream. Firstly, thanks to the proposed solution, the content description can be accessed quickly without fully decoding the compressed stream. This ensures a significant improvement in the performance of search and retrieval systems, for example in semantic browsing of image databases. Other potential benefits can be envisaged, especially in the management and distribution of multimedia content, because the direct embedding of content metadata preserves the consistency between content stream and content description without the need for external frameworks such as MPEG-21. The paradigm proposed here may also be extended to multiple description coding, where different representations of the HDR image can be generated according to its content. The advantages provided by the proposed method are visible at different levels, e.g. when evaluating the redundancy reduction. Moreover, the descriptors extracted from the compressed data stream could be actively used in complex applications, such as fast retrieval of similar images from huge databases.

    Low Level Processing of Audio and Video Information for Extracting the Semantics of Content

    The problem of semantic indexing of multimedia documents is currently of great interest due to the wide diffusion of large audio-video databases. We first briefly describe some techniques used to extract low-level features (e.g., shot change detection, dominant color extraction, audio classification, etc.). Then the ToCAI (table of contents and analytical index) framework for content description of multimedia material is presented, together with an application that implements it. Finally, we propose two algorithms suitable for extracting the high-level semantics of a multimedia document. The first is based on finite-state machines and low-level motion indices, whereas the second uses hidden Markov models.

    The Luminosity Function of Nearby Galaxy Clusters II: Redshifts and Luminosity Function for Galaxies in the Region of the Centaurus Cluster

    We acquired spectra for a random sample of galaxies within a 0.83 square degree region centered on the core of the Centaurus cluster. Radial velocities were obtained for 225 galaxies to limiting magnitudes of V < 19.5. Of the galaxies for which velocities were obtained, we find 35% to be member galaxies. Of the 78 member galaxies, magnitudes range from 11.8 < V < 18.5 (-21.6 < M_V < -14.9 for H_o = 70 km s^-1 Mpc^-1) with a limiting central surface brightness of \mu_o < 22.5 mag arcsec^-2. We constructed the cluster galaxy luminosity function by using these spectroscopic results to calculate the expected fraction of cluster members in each magnitude bin. The faint-end slope of the luminosity function using this method is shallower than the one obtained using a statistical method to correct for background galaxy contamination. We also use the spectroscopy results to define surface brightness criteria to establish membership for the full sample. Using these criteria, we find a luminosity function very similar to the one constructed with the statistical background correction. For both, we find a faint-end slope alpha ~ -1.4. Adjusting the surface brightness membership criteria we find that the data are consistent with a faint-end slope as shallow as -1.22 or as steep as -1.50. We describe in this paper some of the limitations of using these methods for constructing the galaxy luminosity function. Comment: 16 pages, 12 figures, accepted by A
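    The conversion between the quoted apparent (V) and absolute (M_V) magnitudes follows from the distance modulus, M = V - 5 log10(d / 10 pc), with a Hubble-law distance d = cz / H_0. The recession velocity used below (cz ≈ 3350 km/s for Centaurus) is an assumed nominal value, chosen only so the numbers reproduce the abstract's limits; it is not stated in the abstract.

```python
import math

H0 = 70.0      # km/s/Mpc, the value adopted in the abstract
cz = 3350.0    # km/s; nominal Centaurus recession velocity (assumption)

d_mpc = cz / H0                              # Hubble-law distance in Mpc
mu = 5.0 * math.log10(d_mpc * 1e6 / 10.0)    # distance modulus, m - M

def absolute_magnitude(V):
    """Absolute magnitude for an apparent magnitude V at the cluster distance."""
    return V - mu

# The abstract's limits: V = 11.8 -> M_V ~ -21.6, V = 18.5 -> M_V ~ -14.9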

    The ToCAI DS for audio-visual documents. Structure and concepts

    This document complements the description of the audio-visual (AV) description scheme (DS) called Table of Content-Analytical Index (ToCAI) proposed in the MPEG-7 CFP and evaluated in Lancaster (February 1999). This DS provides a hierarchical description of the time-sequential structure of a multimedia document (suitable for browsing) together with an "analytical index" of the AV objects of the document (suitable for retrieval). The ToCAI purposes and general characteristics are explained. The detailed structure of the DS is presented by means of UML notation, to clarify some issues that were not included in the original proposal, and some examples of XML instantiation are enclosed. An application example is then shown. For an indication of how the ToCAI DS matches MPEG-7 requirements and evaluation criteria, refer to the original proposal submission.

    Multimedia documents description by ordered hierarchies: the ToCAI description scheme

    The authors present the ToCAI (Table of Content Analytical Index) framework, a description scheme (DS) for content description of audio-visual (AV) documents. The idea for such a description scheme comes from the structures used for indexing technical books (table of content and analytical index). This description scheme therefore provides a hierarchical description of the time-sequential structure of a multimedia document (ToC), suitable for browsing, together with an "Analytical Index" (AI) of the key items of the document, suitable for retrieval. The AI allows one to represent in an ordered way the items of the AV document that are most relevant from the semantic point of view. The ordering criteria are therefore selected according to the application context. The detailed structure of the DS is presented by means of UML notation, and an application example is also shown.

    SPATION: Services Platforms and Applications for Transparent Information management in an in-hOme Network

    The characteristics of PCs (huge storage capacity, tremendous processing power, and high flexibility) are becoming available for consumer devices such as set-top boxes, TVs, and VCRs. Interconnection of these devices and wireless communication with various portable devices will create a complex home system with the capacity to store many types of data and offer new ways of interacting with it. To offer the user high flexibility and ease of use, new solutions are required. Advanced retrieval methods are needed to support accessing data stored anywhere in the home system from any device. Meta-data obtained through analysis, services, and logging of user behaviour is needed to support these functions. Transfer of data must be easy, and transfer and adaptation of accompanying meta-data must be transparent to the user. The combination of broadcast, storage, and Internet will open the way to new types of applications and interactions with the home system. A large distributed storage space will be available in future home networks consisting of CE equipment, PCs, and handheld devices. The objective of the project is to find innovative solutions for the movement, organization, and retrieval of information in such a heterogeneous home system. Three major technical issues are under consideration: 1) new meta-data computing methods are needed to support advanced retrieval; this means solving how to generate meta-data by analysing the content, how to combine meta-data from various sources, and how to transform meta-data for use by different devices; 2) new services providing meta-data, applications, and UIs are needed to make retrieval of information easier for non-IT-expert users; 3) standards for inter-storage communication need to be extended in the areas of handheld devices, meta-data storage, and services.

    Describing multimedia documents in natural and semantic-driven ordered hierarchies

    In this work we present the ToCAI (Table of Content-Analytical Index) framework, a description scheme (DS) for content description of audio-visual (AV) documents. The idea for such a description scheme comes from the structures used for indexing technical books (table of content and analytical index). This description scheme therefore provides a hierarchical description of the time-sequential structure of a multimedia document (ToC), suitable for browsing, together with an analytical index (AI) of the key items of the document, suitable for retrieval. The AI allows one to represent in an ordered way the items of the AV document that are most relevant from the semantic point of view. The ordering criteria are therefore selected according to the application context. The detailed structure of the DS is presented by means of UML notation, and an application example is shown.

    The XMM-LSS survey: the Class 1 cluster sample over the extended 11 deg^2 and its spatial distribution

    This paper presents 52 X-ray bright galaxy clusters selected within the 11 deg^2 XMM-LSS survey. 51 of them have spectroscopic redshifts (0.05 < z < 1.06), one is identified at z_phot = 1.9, and all together make the high-purity "Class 1" (C1) cluster sample of the XMM-LSS, the highest-density sample of X-ray selected clusters with a monitored selection function. Their X-ray fluxes, averaged gas temperatures (median T_X = 2 keV), luminosities (median L_{X,500} = 5x10^43 erg/s), and total mass estimates (median 5x10^13 h^-1 M_sun) are measured, adapting to the specific signal-to-noise regime of XMM-LSS observations. The redshift distribution of clusters shows a deficit of sources when compared to the cosmological expectations, regardless of whether WMAP-9 or Planck-2013 CMB parameters are assumed. This lack of sources is particularly noticeable at 0.4 <~ z <~ 0.9. However, after quantifying uncertainties due to small-number statistics and sample variance, we are not able to put firm (i.e. > 3 sigma) constraints on the presence of a large void in the cluster distribution. We work out alternative hypotheses and demonstrate that a negative redshift evolution in the normalization of the L_X-T_X relation (with respect to a self-similar evolution) is a plausible explanation for the observed deficit. We confirm this evolutionary trend by directly studying how C1 clusters populate the L_X-T_X-z space, properly accounting for selection biases. We point out that a systematically evolving, unresolved, central component in clusters and groups (AGN contamination or cool core) can impact the classification as extended sources and be partly responsible for the observed redshift distribution. [abridged] Comment: 33 pages, 21 figures, 3 tables; accepted for publication in MNRA
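    The self-similar versus negatively evolving L_X-T_X normalization the abstract contrasts can be sketched as scaling relations. Everything below is illustrative: the cosmological parameters, normalization A, slope alpha, and evolution index gamma are assumptions for the sketch, not values fitted in the paper.

```python
import math

Om, Ol = 0.3, 0.7   # flat LCDM density parameters (assumed, not from the paper)

def E(z):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM."""
    return math.sqrt(Om * (1 + z) ** 3 + Ol)

def Lx_selfsimilar(T_keV, z, A=5e43, alpha=3.0):
    """Self-similar scaling: L_X = A * E(z) * (T / 2 keV)^alpha.
    A (erg/s at T = 2 keV, z = 0) and alpha are illustrative values."""
    return A * E(z) * (T_keV / 2.0) ** alpha

def Lx_evolving(T_keV, z, gamma=-1.0, **kw):
    """Multiply by a power-law departure (1+z)^gamma; gamma < 0 mimics the
    negative normalization evolution the abstract argues for, which makes
    high-z clusters fainter and hence harder to detect."""
    return Lx_selfsimilar(T_keV, z, **kw) * (1 + z) ** gamma
```

    With gamma < 0, a cluster of fixed temperature is less luminous at a given redshift than the self-similar expectation, which is one way a flux-limited survey ends up with the observed deficit of 0.4 <~ z <~ 0.9 sources.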