Toward a typeface for the transcription of facial actions in sign languages
Non-manual actions, and more specifically facial actions (FA), can be found in all sign languages (SL). These actions involve all the different parts of the face and can have various and intricate linguistic relations with manual signs. Unlike in vocal languages, FA in SL convey more than simple expressions of feelings and emotions. Yet non-manual parameters remain among the least understood formal features in SL studies. Over the past 30 years, some studies have begun to examine their meanings, their linguistic values, and the relations between manual and non-manual signs (Crasborn et al. 2008; Crasborn & Bank 2014); more recently, SL corpora have been analysed, segmented, and transcribed to help study FA (Vogt-Svendsen 2008; Bergman et al. 2008; Sutton-Spence & Day 2008).

Moreover, to address the lack of an annotation system for FA, a few manual annotation systems have integrated facial glyphs, such as HamNoSys (Prillwitz et al. 1989) and SignWriting (Sutton 1995). On the one hand, HamNoSys was developed to describe all existing SLs at a phonetic level; it allows a formal, linear, highly detailed, and searchable description of manual parameters. For non-manual parameters, HamNoSys allows the hands to be replaced by other articulators: non-manual parameters can be written as “eyes” or “mouth” and described with the same symbols developed for the hands (Hanke 2004). Unfortunately, only a limited number of manual symbols can be translated into FA, and the annotation system remains incomplete. On the other hand, SignWriting describes SL with iconic symbols placed in a 2D space representing the signer’s body. Facial expressions are divided into mouth, eyes, nose, eyebrows, etc., and are drawn inside a circular “head”, much like emoticons. SignWriting offers a detailed description of the posture and actions of non-manual parameters, but it does not ensure compatibility with the annotation software most commonly used by SL linguists (e.g., ELAN).

Typannot, an interdisciplinary project led by linguists, designers, and developers that aims to set up a complete transcription system for SL covering every SL parameter (handshape, localisation, movement, FA), has developed a different methodology. As mentioned earlier, FA have various linguistic values (mouthings, adverbial mouth gestures, semantically empty, enacting, whole face) and also carry prosodic and emotional meanings. In this regard, they can be more variable and signer-dependent than manual parameters. To offer the best annotation tool, Typannot’s approach has been to define the facial parameters and all their possible tangible configurations. The goal is to set up the most efficient, simple, yet complete and universal formula to describe all possible FA.

This formula is based on a three-dimensional grid: every FA configuration can be described by its position along the X, Y, and Z axes. As a result, all FA can be described and encoded using a restricted list of 39 qualifiers. Based on this model, and to streamline the annotation process, a set of generic glyphs has been developed; each qualifier has its own symbolic “generic” glyph. This methodical decomposition of all facial components enables a precise and accurate transcription of a complex FA using only a few glyphs (a schematic sketch of such an encoding is given after the bibliography below).

This formula and its generic glyphs have gone through a series of tests and revisions. Recently, an 18 min 20 s FA corpus of two deaf signers was recorded using two different cameras. The first, a high-quality RGB camera, captures the image; the second, an infrared Kinect, captures depth. The latter was linked to Brekel Proface 2 (Leong et al. 2015), 3D animation software that enables automatic recognition of FA. This corpus has been fully annotated using Typannot’s generic glyphs. These annotations made it possible to validate the general structure of the Typannot FA formula and to identify some minor corrections: for instance, the description of the air used to puff out or suck in the cheeks proved too restrictive, while the description of the opening and closing of the eyelids proved unnecessarily precise.

Once those changes are implemented, our next task will be to develop a morphological glyphic system that combines the different generic glyphs used for each facial parameter into a single morphological glyph. This means that, for any given FA, all the information contained in the Typannot descriptive formula will be held within one legible glyph. Early work has already begun on this topic, but further development is needed before a statement can be made about its typographic structure. When this system is completed, it will be released with its own virtual keyboard (Typannot Keyboard, currently in development for handshapes) to facilitate transcription and improve the annotation process.

Bibliography:
- Chételat-Pelé, E. (2010). Les Gestes Non Manuels en Langue des Signes Française ; annotation, analyse et formalisation : application aux mouvements des sourcils et aux clignements des yeux. Université de Provence - Aix-Marseille I.
- Crasborn, O., Van der Kooij, E., Waters, D., Woll, B., & Mesch, J. (2008). Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics, 11(1), 45-67.
- Crasborn, O. A., & Bank, R. (2014). An annotation scheme for the linguistic study of mouth actions in sign languages. http://repository.ubn.ru.nl/handle/2066/132960
- Fontana, S. (2008). Mouth actions as gesture in sign language. Gesture, 8(1), 104-123.
- Hanke, T. (2004). HamNoSys: Representing sign language data in language resources and language processing contexts. In Workshop on the Representation and Processing of Sign Languages, Fourth International Conference on Language Resources and Evaluation (pp. 1-6).
- Leong, C. W., Chen, L., Feng, G., Lee, C. M., & Mulholland, M. (2015). Utilizing depth sensors for analyzing multimodal presentations: Hardware, software and toolkits. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (pp. 547-556). ACM.
- Prillwitz, S., Leven, R., Zienert, H., Hanke, T., & Henning, J. (1989). Hamburg Notation System for sign languages: An introductory guide. Signum Press, Hamburg.
- Sandler, W. (2009). Symbiotic symbolization by hand and mouth in sign language. Semiotica, 2009(174), 241-275. http://doi.org/10.1515/semi.2009.035
- Sutton, V. (1995). Lessons in Sign Writing: Textbook. DAC, La Jolla, CA.
- Sutton-Spence, R., & Boyes-Braem, P. (2001). The hands are the head of the mouth: The mouth as articulator in sign languages. Signum Press, Hamburg.
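As a concrete illustration of the descriptive formula above, the following Python sketch shows one possible way of encoding per-articulator axis qualifiers and mapping them to generic glyphs. It is a minimal sketch under assumptions: the qualifier names, the glyph code points (placed here in the Unicode Private Use Area), and the articulator labels are invented for illustration, since the abstract only states that the real Typannot inventory comprises 39 qualifiers.

from dataclasses import dataclass

# Hypothetical subset of axis qualifiers; the real Typannot system defines 39.
# Glyph code points are placeholders in the Private Use Area.
QUALIFIER_GLYPHS = {
    ("x", "left"): "\uE001",
    ("x", "right"): "\uE002",
    ("y", "up"): "\uE003",
    ("y", "down"): "\uE004",
    ("z", "forward"): "\uE005",
    ("z", "backward"): "\uE006",
    ("_", "neutral"): "\uE000",
}

@dataclass
class ArticulatorState:
    """Configuration of one facial articulator, expressed as axis qualifiers."""
    articulator: str          # e.g. "eyebrows", "eyelids", "cheeks" (labels assumed)
    x: str = "neutral"
    y: str = "neutral"
    z: str = "neutral"

    def glyphs(self) -> str:
        """Concatenate the generic glyphs encoding this configuration."""
        parts = []
        for axis, value in (("x", self.x), ("y", self.y), ("z", self.z)):
            key = (axis, value) if value != "neutral" else ("_", "neutral")
            parts.append(QUALIFIER_GLYPHS[key])
        return "".join(parts)

def transcribe(facial_action: list[ArticulatorState]) -> str:
    """Encode a whole facial action as a short string of generic glyphs."""
    return " ".join(f"{state.articulator}:{state.glyphs()}" for state in facial_action)

# Example: raised eyebrows combined with puffed (forward) cheeks.
fa = [ArticulatorState("eyebrows", y="up"), ArticulatorState("cheeks", z="forward")]
print(transcribe(fa))

In such a scheme, each facial action reduces to a handful of glyphs drawn from a closed inventory, mirroring the abstract’s point that a complex FA can be transcribed precisely with only a few glyphs.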
Radiation damages during synchrotron X-ray micro-analyses of Prussian blue and zinc white historic paintings: detection, mitigation and integration
High-flux synchrotron techniques allow microspectroscopic analyses of artworks that were not feasible even a few years ago, enabling a more detailed characterization of their constituent materials and a better understanding of their chemistry. However, the interaction between high-flux photons and matter at the sub-microscale can generate damage that is not visually detectable. We show here different methodologies for revealing the damage induced by micro X-ray absorption near-edge structure spectroscopy (μXANES) at the Fe and Zn K-edges of a painting, dating from the turn of the twentieth century, that contains Prussian blue and zinc white. No significant degradation of the pigments was noticed, in agreement with the excellent condition of the painting. However, synchrotron radiation damage occurred at several levels, from chemical changes in the binder and modification of crystal defects in zinc oxide to photoreduction of Prussian blue. The damage could be identified both from the μXANES signal recorded during analysis and from photoluminescence imaging in the deep-ultraviolet and visible ranges after analysis. We show that accurately recording damaged areas is a key step in preventing the misinterpretation of results during future re-examination of the sample. We conclude by proposing good practices that could help integrate radiation-damage avoidance into the analytical pathway.
GestualScript, the TypannotSign system
Typannot: a typographic system for transcribing sign languages
Typannot family fonts: how to transcribe Sign Languages?
Typannot: toward a character set for the transcription of facial expressions in sign languages
(Poster) Non-manual actions, and more specifically facial actions (FA), can be found in all sign languages (SL). These actions involve all the different parts of the face and can have various and intricate linguistic relations with manual signs. Unlike in vocal languages, FA in SL convey more than simple expressions of feelings and emotions. Yet non-manual parameters remain among the least understood formal features in SL studies. Over the past 30 years, some studies have begun to examine their meanings, their linguistic values, and the relations between manual and non-manual signs (Crasborn et al. 2008; Crasborn & Bank 2014); more recently, SL corpora have been analysed, segmented, and transcribed to help study FA (Vogt-Svendsen 2008; Bergman et al. 2008; Sutton-Spence & Day 2008).

Moreover, to address the lack of an annotation system for FA, a few manual annotation systems have integrated facial glyphs, such as HamNoSys (Prillwitz et al. 1989) and SignWriting (Sutton 1995). On the one hand, HamNoSys was developed to describe all existing SLs at a phonetic level; it allows a formal, linear, highly detailed, and searchable description of manual parameters. For non-manual parameters, HamNoSys allows the hands to be replaced by other articulators: non-manual parameters can be written as “eyes” or “mouth” and described with the same symbols developed for the hands (Hanke 2004). Unfortunately, only a limited number of manual symbols can be translated into FA, and the annotation system remains incomplete. On the other hand, SignWriting describes SL with iconic symbols placed in a 2D space representing the signer’s body. Facial expressions are divided into mouth, eyes, nose, eyebrows, etc., and are drawn inside a circular “head”, much like emoticons. SignWriting offers a detailed description of the posture and actions of non-manual parameters, but it does not ensure compatibility with the annotation software most commonly used by SL linguists (e.g., ELAN).

Typannot, an interdisciplinary project led by linguists, designers, and developers that aims to set up a complete transcription system for SL covering every SL parameter (handshape, localisation, movement, FA), has developed a different methodology. As mentioned earlier, FA have various linguistic values (mouthings, adverbial mouth gestures, semantically empty, enacting, whole face) and also carry prosodic and emotional meanings. In this regard, they can be more variable and signer-dependent than manual parameters. To offer the best annotation tool, Typannot’s approach has been to define the facial parameters and all their possible tangible configurations. The goal is to set up the most efficient, simple, yet complete and universal formula to describe all possible FA.

This formula is based on a three-dimensional grid: every FA configuration can be described by its position along the X, Y, and Z axes. As a result, all FA can be described and encoded using a restricted list of 39 qualifiers. Based on this model, and to streamline the annotation process, a set of generic glyphs has been developed; each qualifier has its own symbolic “generic” glyph. This methodical decomposition of all facial components enables a precise and accurate transcription of a complex FA using only a few glyphs.

This formula and its generic glyphs have gone through a series of tests and revisions. Recently, an 18 min 20 s FA corpus of two deaf signers was recorded using two different cameras. The first, a high-quality RGB camera, captures the image; the second, an infrared Kinect, captures depth. The latter was linked to Brekel Proface 2 (Leong et al. 2015), 3D animation software that enables automatic recognition of FA. This corpus has been fully annotated using Typannot’s generic glyphs. These annotations made it possible to validate the general structure of the Typannot FA formula and to identify some minor corrections: for instance, the description of the air used to puff out or suck in the cheeks proved too restrictive, while the description of the opening and closing of the eyelids proved unnecessarily precise.

Once those changes are implemented, our next task will be to develop a morphological glyphic system that combines the different generic glyphs used for each facial parameter into a single morphological glyph. This means that, for any given FA, all the information contained in the Typannot descriptive formula will be held within one legible glyph. Early work has already begun on this topic, but further development is needed before a statement can be made about its typographic structure. When this system is completed, it will be released with its own virtual keyboard (Typannot Keyboard, currently in development for handshapes) to facilitate transcription and improve the annotation process.

Bibliography:
- Chételat-Pelé, E. (2010). Les Gestes Non Manuels en Langue des Signes Française ; annotation, analyse et formalisation : application aux mouvements des sourcils et aux clignements des yeux. Université de Provence - Aix-Marseille I.
- Crasborn, O., Van der Kooij, E., Waters, D., Woll, B., & Mesch, J. (2008). Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics, 11(1), 45-67.
- Crasborn, O. A., & Bank, R. (2014). An annotation scheme for the linguistic study of mouth actions in sign languages. http://repository.ubn.ru.nl/handle/2066/132960
- Fontana, S. (2008). Mouth actions as gesture in sign language. Gesture, 8(1), 104-123.
- Hanke, T. (2004). HamNoSys: Representing sign language data in language resources and language processing contexts. In Workshop on the Representation and Processing of Sign Languages, Fourth International Conference on Language Resources and Evaluation (pp. 1-6).
- Leong, C. W., Chen, L., Feng, G., Lee, C. M., & Mulholland, M. (2015). Utilizing depth sensors for analyzing multimodal presentations: Hardware, software and toolkits. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (pp. 547-556). ACM.
- Prillwitz, S., Leven, R., Zienert, H., Hanke, T., & Henning, J. (1989). Hamburg Notation System for sign languages: An introductory guide. Signum Press, Hamburg.
- Sandler, W. (2009). Symbiotic symbolization by hand and mouth in sign language. Semiotica, 2009(174), 241-275. http://doi.org/10.1515/semi.2009.035
- Sutton, V. (1995). Lessons in Sign Writing: Textbook. DAC, La Jolla, CA.
- Sutton-Spence, R., & Boyes-Braem, P. (2001). The hands are the head of the mouth: The mouth as articulator in sign languages. Signum Press, Hamburg.