    Distributed image reconstruction for very large arrays in radio astronomy

    Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large number of receptors (up to millions) theoretically allows unprecedented imaging resolution. At the same time, the massive amount of samples makes the data transfer and computational loads (correlation and calibration) orders of magnitude too high for any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies that select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.
    Comment: Sensor Array and Multichannel Signal Processing Workshop (SAM), 2014 IEEE 8th, Jun 2014, Coruna, Spain. 201
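
    The trade-off described above (transfer and process only a fraction of the samples at the price of a quantifiable loss in MSE) can be illustrated on a toy linear inverse problem. The sketch below is purely illustrative: the operator, sizes and noise level are invented and bear no relation to the paper's interferometric setting.

```python
# Toy illustration (not the paper's method): reconstruct a signal from a
# random fraction of linear measurements and compare the MSE against the
# full-data reconstruction. All names and sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 512                       # unknowns, total measurements
x_true = rng.standard_normal(n)      # toy stand-in for the sky signal
A = rng.standard_normal((m, n))      # toy stand-in for the measurement operator
y = A @ x_true + 0.1 * rng.standard_normal(m)

def reconstruct(A_sub, y_sub):
    # Least-squares reconstruction from the selected subset of samples
    # (minimum-norm solution when the subset is underdetermined).
    return np.linalg.lstsq(A_sub, y_sub, rcond=None)[0]

for frac in (1.0, 0.5, 0.25, 0.1):
    keep = rng.choice(m, size=int(frac * m), replace=False)
    x_hat = reconstruct(A[keep], y[keep])
    mse = np.mean((x_hat - x_true) ** 2)
    print(f"fraction={frac:4.2f}  MSE={mse:.4e}")
```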

    Distributed Deblurring of Large Images of Wide Field-Of-View

    Image deblurring is an economical way to reduce certain degradations (blur and noise) in acquired images. It has thus become an essential tool in high-resolution imaging in many applications, e.g., astronomy, microscopy or computational photography. In applications such as astronomy and satellite imaging, acquired images can be extremely large (up to gigapixels), cover a wide field of view and suffer from shift-variant blur. Most existing image deblurring techniques are designed and implemented to work efficiently on a centralized computing system with multiple processors and a shared memory; the largest image that can be handled is therefore limited by the physical memory available on the system. In this paper, we propose a distributed non-blind image deblurring algorithm in which several connected processing nodes (with reasonable computational resources) simultaneously process different portions of a large image while maintaining a certain coherency among them, to finally obtain a single crisp image. Unlike the existing centralized techniques, image deblurring in a distributed fashion raises several issues. To tackle them, we consider certain approximations that trade off the quality of the deblurred image against the computational resources required to achieve it. The experimental results show that our algorithm produces images of quality similar to the existing centralized techniques while allowing distribution, and is thus cost-effective for extremely large images.
    Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
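
    As a rough illustration of the tile-and-stitch idea (not the paper's algorithm), the sketch below splits a blurred image into overlapping tiles, deblurs each tile independently with a naive Wiener filter under a locally shift-invariant PSF assumption, and keeps only the central part of each restored tile; tile size, overlap and the noise-to-signal parameter are hypothetical.

```python
import numpy as np

def wiener_deblur(patch, psf, nsr=1e-2):
    """Naive frequency-domain Wiener deconvolution of one patch
    (assumes a locally shift-invariant PSF, ignores PSF centering)."""
    H = np.fft.fft2(psf, s=patch.shape)
    G = np.fft.fft2(patch)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

def deblur_distributed(image, psf, tile=256, overlap=32):
    """Deblur overlapping tiles independently and keep only the central
    part of each restored tile. In a real system each tile would live on
    a different node and the overlaps would carry the inter-node coherency."""
    out = np.zeros_like(image, dtype=float)
    rows, cols = image.shape
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            r0, c0 = max(r - overlap, 0), max(c - overlap, 0)
            r1, c1 = min(r + tile + overlap, rows), min(c + tile + overlap, cols)
            restored = wiener_deblur(image[r0:r1, c0:c1], psf)
            rr, cc = min(tile, rows - r), min(tile, cols - c)
            out[r:r + rr, c:c + cc] = restored[r - r0:r - r0 + rr,
                                               c - c0:c - c0 + cc]
    return out
```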

    Transcription et codage des imprimés de la Renaissance : Réflexions pour un inventaire des caractères anciens

    Preserving as much information as possible from the original document, a transcription of an ancient printed text should serve as a basis not only for literary analysis, but also for palaeotypographic studies. To this end, we require a standardized encoding able to preserve an unequivocal link between the characters of the digital transcription and those of the original source. We define here the new concept of the typem, a transitional element between the notions of character and glyph as defined by Unicode. We propose to use MUFI, an extension of the Unicode standard, augmented with new code points dedicated to typems, in order to produce what we call "typemic transcriptions", reproducing all the characters of the original document. Finally, a project to inventory all attested typems, named PICA (Projet d'Inventaire des Caractères Anciens), is described.
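
    A minimal sketch of what a "typemic" mapping could look like in practice, assuming typems are assigned either existing Unicode/MUFI code points or Private Use Area code points; the table entries below are invented for illustration and are not taken from MUFI or PICA.

```python
# Hypothetical typem table: each typem observed in an old print is mapped
# either to a standard Unicode/MUFI code point or, when none exists, to an
# invented Private Use Area assignment. Names and PUA values are examples only.
TYPEM_TABLE = {
    "long s":                "\u017F",   # U+017F, standard Unicode
    "r rotunda":             "\uA75B",   # U+A75B, Latin Extended-D (MUFI)
    "hypothetical ligature": "\uE700",   # invented PUA assignment
}

def typemic_transcription(typems):
    """Turn a list of typem names into a string of code points."""
    return "".join(TYPEM_TABLE[t] for t in typems)

print(typemic_transcription(["long s", "r rotunda"]))
```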

    The Strehl Ratio in Adaptive Optics Images: Statistics and Estimation

    Statistical properties of the intensity in adaptive optics images are usually modeled with a Rician distribution. We study the central point of the image, where this model is inappropriate for high to very high correction levels. The central point is an important case because it gives the Strehl ratio distribution. We show that the central-point distribution can be modeled using a non-central Gamma distribution.
    Comment: 8 pages, 5 figures
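
    A small Monte Carlo sketch (not the paper's derivation) of the quantity at stake: the central-point amplitude is modeled here as the average of unit phasors with Gaussian residual phase errors, and its squared modulus, an instantaneous Strehl-like intensity, is compared to the Maréchal estimate exp(-sigma^2). All numbers are arbitrary.

```python
# Hedged toy simulation of the on-axis (central-point) intensity after
# adaptive optics correction; sizes and the residual phase rms are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_phasors, n_frames = 300, 20000
sigma_phi = 0.3                      # rms residual phase in radians (hypothetical)

phi = rng.normal(0.0, sigma_phi, size=(n_frames, n_phasors))
amp = np.exp(1j * phi).mean(axis=1)  # on-axis complex amplitude per frame
strehl = np.abs(amp) ** 2            # instantaneous Strehl-like intensity

print("mean Strehl   :", strehl.mean())
print("Marechal est. :", np.exp(-sigma_phi ** 2))
print("std of Strehl :", strehl.std())
```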

    Improved Classical and Quantum Algorithms for Subset-Sum

    We present new classical and quantum algorithms for solving random subset-sum instances. First, we improve over the Becker-Coron-Joux algorithm (EUROCRYPT 2011) from $\tilde{\mathcal{O}}(2^{0.291 n})$ down to $\tilde{\mathcal{O}}(2^{0.283 n})$, using more general representations with values in $\{-1,0,1,2\}$. Next, we improve the state of the art of quantum algorithms for this problem in several directions. By combining the Howgrave-Graham-Joux algorithm (EUROCRYPT 2010) and quantum search, we devise an algorithm with asymptotic cost $\tilde{\mathcal{O}}(2^{0.236 n})$, lower than the cost of the quantum walk based on the same classical algorithm proposed by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013). This algorithm has the advantage of using classical memory with quantum random access, while the previously known algorithms used the quantum walk framework and required quantum memory with quantum random access. We also propose new quantum walks for subset-sum, performing better than the previous best time complexity of $\tilde{\mathcal{O}}(2^{0.226 n})$ given by Helm and May (TQC 2018). We combine our new techniques to reach a time of $\tilde{\mathcal{O}}(2^{0.216 n})$. This time depends on a heuristic on quantum walk updates, formalized by Helm and May, that is also required by the previous algorithms. We show how to partially overcome this heuristic, and obtain an algorithm with quantum time $\tilde{\mathcal{O}}(2^{0.218 n})$ requiring only the standard classical subset-sum heuristics.
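
    For context, the sketch below is the textbook meet-in-the-middle subset-sum solver running in time and memory of order $2^{n/2}$; it is only the classical baseline that representation-based algorithms such as Becker-Coron-Joux and the ones in this paper improve upon, not the paper's algorithm itself.

```python
# Illustrative baseline only: meet-in-the-middle subset-sum in ~2^(n/2)
# time and memory. The paper's representation-based and quantum-walk
# algorithms are not reproduced here.
from itertools import combinations

def subset_sums(items):
    """Map every achievable sum of `items` to one subset achieving it."""
    sums = {}
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            sums.setdefault(sum(combo), combo)
    return sums

def meet_in_the_middle(a, target):
    """Return a subset of `a` summing to `target`, or None."""
    half = len(a) // 2
    left, right = subset_sums(a[:half]), subset_sums(a[half:])
    for s, combo in left.items():
        if target - s in right:
            return combo + right[target - s]
    return None

print(meet_in_the_middle([3, 34, 4, 12, 5, 2], 9))   # e.g. (4, 5)
```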

    Recursive cheating strategies for the relativistic $\mathbb{F}_Q$ bit commitment protocol

    In this paper, we study relativistic bit commitment, which uses timing and location constraints to achieve information-theoretic security. We consider the $\mathbb{F}_Q$ multi-round bit commitment scheme introduced by Lunghi et al. [LKB+15]. This protocol was shown to be secure against classical adversaries as long as the number of rounds $m$ is small compared to $\sqrt{Q}$, where $Q$ is the size of the field used in the protocol [CCL15, FF16]. In this work, we study classical attacks on this scheme. We use classical strategies for the $\mathrm{CHSH}_Q$ game described in [BS15] to derive cheating strategies for this protocol. In particular, our cheating strategy shows that if $Q$ is an even power of any prime, then the protocol is not secure when the number of rounds $m$ is of the order of $\sqrt{Q}$. For those values of $Q$, this means that the upper bound of [CCL15, FF16] is essentially optimal.
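
    A toy, heavily simplified sketch of the algebra behind an $\mathbb{F}_Q$-style commitment round (commit $y = x + a \cdot b$ over a prime field, open by revealing $x$ and $b$): the actual protocol of Lunghi et al., its two-agent setting, timing constraints and multi-round sustain phase are not modeled, and the field size is an arbitrary choice.

```python
# Rough, hedged sketch of a single F_Q-style commitment round.
# Treat this as a toy for the field algebra only, not the real protocol.
import secrets

Q = (1 << 61) - 1                    # a Mersenne prime, hypothetical field size

def commit(bit, a):
    """Prover side: mask the bit with a secret x, using the challenge a."""
    x = secrets.randbelow(Q)
    y = (x + a * bit) % Q
    return y, x                      # y is sent now, x is kept for the opening

def verify(bit, a, y, x):
    """Verifier side: check the opening against the committed value."""
    return y == (x + a * bit) % Q

a = secrets.randbelow(Q)             # verifier's random challenge
y, x = commit(1, a)
print(verify(1, a, y, x))            # True
print(verify(0, a, y, x))            # False except with probability 1/Q
```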
    • 

    corecore