
    Sémiotique et signifique

    What does Peirce's semiotic owe to Lady Welby's significs? The two shared a mutual interest in maintaining their correspondence. The importance of significs at the turn of the century: Peirce's influence on Wittgenstein by way of Lady Welby. A definition of significs: sense, meaning and significance. Despite a certain link between significs and the triadic sign, Peirce's semiotic is a theory of the action of signs, not a theory of meaning.

    Traduire Charles S. Peirce. Le signe : le concept et son usage


    La traduction dans les systèmes sémiotiques

    All discourse implies translation, a highly diversified activity whose semiotic field extends to medicine, theater and music as well as to linguistics proper. The author focuses in particular on musical signs and their various interpreters: composer, performer and listener. The study concludes with an examination of specific examples of an Indo-European language translated into Japanese.

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize its performance relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes. (Erratum: the image "man" is wrongly named "pepper" in the journal version.)
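    The heteroscedasticity mentioned above can be illustrated in a few lines. This sketch shows the Poisson noise model and the classical Anscombe variance-stabilizing transform, which is a standard baseline for reusing Gaussian-noise denoisers; the paper itself adapts PCA to the Poisson likelihood directly, which this example does not reproduce.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Photon-limited observation of a flat scene: each pixel is a Poisson draw,
    # so the noise variance equals the signal mean (heteroscedastic noise).
    mean_photons = 2.0
    noisy = rng.poisson(mean_photons, size=(64, 64)).astype(float)

    # Anscombe transform: approximately stabilizes the variance to 1, letting
    # Gaussian-noise denoisers be applied afterwards.
    stabilized = 2.0 * np.sqrt(noisy + 3.0 / 8.0)

    print(round(noisy.var(), 2), round(stabilized.var(), 2))
    ```

    The first printed variance tracks the mean photon count (the Poisson property), while the second sits near 1, the stabilized regime.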

    The Degrees of Freedom of the Group Lasso

    This paper studies the sensitivity to the observations of the block/group Lasso solution to an overdetermined linear regression model. Such a regularization is known to promote sparsity patterns structured as nonoverlapping groups of coefficients. Our main contribution provides a local parameterization of the solution with respect to the observations. As a byproduct, we give an unbiased estimate of the degrees of freedom of the group Lasso. Among other applications of such results, one can choose the regularization parameter of the group Lasso in a principled and objective way through model selection criteria.
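    The groupwise sparsity pattern described above is induced by the proximal operator of the group-Lasso penalty, i.e. block soft-thresholding. A minimal sketch with made-up groups (the function name and data are illustrative, not from the paper):

    ```python
    import numpy as np

    def group_soft_threshold(beta, groups, lam):
        """Proximal operator of lam * sum_g ||beta_g||_2: each group is shrunk
        toward zero and set exactly to zero when its Euclidean norm falls below
        lam, producing the nonoverlapping-group sparsity pattern."""
        out = np.zeros_like(beta, dtype=float)
        for g in groups:
            norm = np.linalg.norm(beta[g])
            if norm > lam:
                out[g] = (1.0 - lam / norm) * beta[g]
        return out

    beta = np.array([3.0, 4.0, 0.1, 0.1])
    groups = [np.array([0, 1]), np.array([2, 3])]

    shrunk = group_soft_threshold(beta, groups, lam=1.0)
    print(shrunk)  # first group shrunk (norm 5 -> 4), second group zeroed out
    ```

    The degrees-of-freedom estimate of the paper then counts, roughly speaking, contributions from the groups that survive this thresholding.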

    CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration

    In this paper, we propose a new framework to remove parts of the systematic errors affecting popular restoration algorithms, with a special focus on image processing tasks. Generalizing ideas that emerged for ℓ1 regularization, we develop an approach re-fitting the results of standard methods towards the input data. Total variation regularizations and non-local means are special cases of interest. We identify important covariant information that should be preserved by the re-fitting method, and emphasize the importance of preserving the Jacobian (w.r.t. the observed signal) of the original estimator. Then, we provide an approach that has a "twicing" flavor and allows re-fitting the restored signal by adding back a local affine transformation of the residual term. We illustrate the benefits of our method on numerical simulations for image restoration tasks.
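    The "twicing" flavor mentioned above can be sketched with a toy linear smoother standing in for a biased restoration method; this is only the classical twicing idea, not the paper's full covariant re-fitting scheme, and all names below are illustrative.

    ```python
    import numpy as np

    def smooth(y, width=15):
        # Simple linear denoiser (moving average), a stand-in for a biased
        # restoration method whose output we want to re-fit toward the data.
        kernel = np.ones(width) / width
        return np.convolve(y, kernel, mode="same")

    rng = np.random.default_rng(1)
    truth = np.sin(np.linspace(0, 4 * np.pi, 200))
    y = truth + 0.1 * rng.standard_normal(200)

    denoised = smooth(y)
    # Twicing-style re-fit: add back the smoother applied to the residual,
    # re-injecting structure that the first pass over-smoothed away.
    refit = denoised + smooth(y - denoised)
    ```

    By linearity of the smoother, the re-fit equals `2*smooth(y) - smooth(smooth(y))`: the residual correction cancels part of the first pass's systematic bias.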

    The relevance of C. S. Peirce for socio-semiotics

    Neither Peirce’s thought in general nor his semeiotic in particular would appear to be concerned with ‘society’ as it is generally conceived today. Moreover, Peirce rarely mentions ‘society’, preferring the term ‘community’, which his readers have often interpreted restrictively. There are two essential points to be borne in mind. In the first place, the epithet ‘social’ refers here not to the object of thought, but to its production, its mode of action and its transmission and conservation. In the second place, the term ‘community’ is not restricted to the scientific community, as is sometimes supposed. On the contrary, it refers to the ideal form of a society, which he calls ‘the unlimited community’, i.e. a group of people striving towards a common goal. Furthermore, it has been doubted whether Peirce’s semeiotic is capable of providing a model for communication, the basis of social, dialogic thought and action. The aim of the present article is to show that semeiotic, founded as it is on Peirce’s three categories, which define and delimit the ways in which man perceives and represents phenomena, can provide a comprehensive model for the analysis of all types of communication in all social contexts. Finally, in this domain, as in others, Peirce was a forerunner, with the result that his thought has often been misunderstood or forgotten. In addition, he was pre-eminently a philosopher, and his work has therefore been neglected in other disciplines. The elaboration of other triadic systems, such as, notably, that of Rossi-Landi, shows that the tendency of semiotics in general is to move away from the former static, dyadic model towards one involving a triadic process. This trend, with which Peircean theory is in harmony, has been sharply accentuated in recent years, but often lacks a philosophical justification for its assumptions, which Peirce provides.

    BATUD: Blind Atmospheric TUrbulence Deconvolution

    A new blind image deconvolution technique is developed for atmospheric turbulence deblurring. The originality of the proposed approach rests on an actual physical model, known as the Fried kernel, that quantifies the impact of atmospheric turbulence on the optical resolution of images. While the original expression of the Fried kernel can seem cumbersome at first sight, we show that it can be reparameterized in a much simpler form. This simple expression allows us to efficiently embed this kernel in the proposed Blind Atmospheric TUrbulence Deconvolution (BATUD) algorithm. BATUD is an iterative algorithm that alternately performs deconvolution and estimates the Fried kernel by jointly relying on a Gaussian Mixture Model prior of natural image patches and controlling the squared Euclidean norm of the Fried kernel. Numerical experiments show that our proposed blind deconvolution algorithm behaves well in different simulated turbulence scenarios, as well as on real images. Not only does BATUD outperform state-of-the-art approaches used in atmospheric turbulence deconvolution in terms of image quality metrics, but it is also faster.
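    As a minimal, generic illustration of the deconvolution half of such an alternating scheme (a Wiener-type Fourier step with a toy box blur; neither the Fried kernel nor the GMM prior of the paper appears here, and all names are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 64
    x = rng.random((n, n))                          # stand-in sharp image
    k = np.zeros((n, n)); k[:5, :5] = 1.0 / 25.0    # toy box blur, NOT the Fried kernel

    Kf = np.fft.fft2(k)
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * Kf))  # circularly blurred, noise-free

    # Wiener-type deconvolution step with a tiny ridge term; in a blind scheme
    # this step would be interleaved with re-estimating the kernel itself.
    reg = 1e-12
    x_hat = np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(Kf) / (np.abs(Kf) ** 2 + reg)))

    print(float(np.abs(x_hat - x).max()))  # small: near-exact recovery without noise
    ```

    With noise, the ridge term must grow and the kernel is unknown, which is exactly where the paper's reparameterized Fried kernel and patch prior come in.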

    How to compare noisy patches? Patch similarity beyond Gaussian noise

    Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level (so-called image-based) approaches compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient in many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or to intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For the case where noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature of image processing, detection theory and machine learning. By expressing patch (dis)similarity as a detection test under a given noise model, we introduce these criteria together with a new one and discuss their properties. We then assess their performance on different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications.
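    For Poisson noise, the generalized likelihood ratio test has a simple closed form: it compares the likelihood of the two patches sharing one underlying intensity (MLE: the average of the counts) against each having its own (MLEs: the counts themselves), and the factorial terms cancel. A sketch under that model (the function name is illustrative):

    ```python
    import numpy as np

    def xlogx(t):
        # t * log(t) with the convention 0 * log 0 = 0.
        t = np.asarray(t, dtype=float)
        out = np.zeros_like(t)
        pos = t > 0
        out[pos] = t[pos] * np.log(t[pos])
        return out

    def poisson_glr_dissimilarity(p1, p2):
        """-log of the generalized likelihood ratio for two Poisson patches:
        per pixel, x log x + y log y - (x+y) log((x+y)/2). It is 0 for
        identical patches and grows as the patches become less compatible
        with a single shared intensity."""
        x, y = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
        s = x + y
        return float(np.sum(xlogx(x) + xlogx(y) - xlogx(s) + s * np.log(2.0)))

    print(poisson_glr_dissimilarity([10, 10], [9, 11]))  # small: plausibly same scene
    print(poisson_glr_dissimilarity([10, 10], [0, 0]))   # large: one intensity is unlikely
    ```

    Unlike the squared-difference criterion, this score accounts for the fact that Poisson fluctuations grow with intensity, so a fixed difference in counts is less significant at high photon levels.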