218 research outputs found

    RecordMe: A Smartphone Application for Experimental Collections of Large Amount of Data Respecting Volunteer's Privacy

    No full text
    Since the spread of smartphones, researchers have had opportunities to collect more ecological data. However, despite the many advantages of existing databases (e.g., clean data, direct comparison), they may not meet all the criteria of a particular experiment, resulting in an unavoidable trade-off between the gain they provide and the lack of some labels or data sources. In this paper, we introduce RecordMe, an Android application ready for use by the research community. RecordMe can continuously record many different sensors and sources and provides a basic GUI for quick and easy setup. A mark-up interface is also embedded for experiments that need it. Because some of the data are highly sensitive, RecordMe includes features for protecting volunteers' privacy and securing their data. RecordMe has already been successfully tested on different smartphones for 3 data collections.
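The continuous multi-source recording described above can be sketched as follows. This is a minimal illustration of the general technique (timestamping each sample and appending it to a per-source stream), not RecordMe's actual implementation; the class and method names are assumptions.

```python
import csv
import io
import time

class SensorLogger:
    """Minimal sketch of continuous multi-source logging: each sample
    is timestamped on arrival and appended to a per-source CSV stream.
    (Illustrative only; not RecordMe's actual code.)"""

    def __init__(self):
        self._streams = {}

    def log(self, source, *values):
        # One growing stream per sensor or data source.
        buf = self._streams.setdefault(source, io.StringIO())
        csv.writer(buf).writerow([time.time(), *values])

    def dump(self, source):
        return self._streams[source].getvalue()

logger = SensorLogger()
logger.log("accelerometer", 0.01, -0.02, 9.81)
logger.log("gps", "hashed-coordinate-token")
```

On a real device the `log` calls would be driven by the platform's sensor callbacks rather than invoked directly.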

    Note sur le Pin Noir d'Autriche : région sud

    Get PDF

    Reconnaissance de scènes multimodale embarquée

    Get PDF
    Context: This PhD takes place in the context of ambient intelligence and (mobile) context/scene awareness. Historically, the project comes from the company ST-Ericsson. It arose from the need to develop and embed a "context server" on the smartphone that would acquire and provide context information to applications requesting it. One use case was given for illustration: when someone involved in a meeting receives a call, then, thanks to an understanding of the current scene (meeting at work), the smartphone can automatically react and, in this case, switch to vibrate mode so as not to disturb the meeting. The main problems are i) proposing a definition of what a scene is and which example scenes suit the use case, ii) acquiring a corpus of data to be exploited with machine-learning approaches, and iii) proposing algorithmic solutions to the scene-recognition problem.

    Data collection: After a review of existing databases, it appeared that none fitted the criteria I fixed (long continuous records; multi-source synchronized records necessarily including audio; relevant labels). Hence, I developed an Android application for collecting data. The application, called RecordMe, has been successfully tested on 10+ devices running Android 2.3 and 4.0. It has been used for 3 different campaigns, including the one for scenes. This resulted in 500+ hours recorded by 25+ volunteers, mostly in the Grenoble area but also abroad (Dublin, Singapore, Budapest). The application and the collection protocol both include features for protecting volunteers' privacy: for instance, raw audio is not saved; instead, MFCCs are saved, and sensitive strings (GPS coordinates, device IDs) are hashed on the phone.

    Scene definition: The study of existing work on scene recognition, along with the analysis of the annotations provided by the volunteers during the data collection, allowed me to propose a definition of a scene. A scene is defined as a generalisation of a situation, composed of a place and an action performed by one person (the smartphone owner). Examples of scenes include taking transportation, being involved in a work meeting, and walking in the street. The composition makes it possible to provide different kinds of information on the current scene. However, the definition is still too generic, and I think it could be completed with additional information, integrated as new elements of the composition.

    Algorithmics: I performed experiments involving both supervised and unsupervised machine-learning techniques. The supervised part concerns classification. The method is quite standard: find relevant descriptors of the data using an attribute-selection method, then train and test several classifiers (in my case, J48 and Random Forest trees; GMMs; HMMs; and DNNs). I also tried a 2-stage system composed of a first stage of classifiers trained to identify intermediate concepts, whose predictions are merged in order to estimate the most likely scene. The unsupervised part of the work aimed at extracting information from the data without labels. For this purpose, I applied bottom-up hierarchical clustering, based on the EM algorithm, to acceleration and audio data taken separately and together. One of the results is the separation of acceleration data into groups based on the amount of agitation.
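The privacy measure mentioned in the abstract (hashing sensitive strings such as GPS coordinates and device IDs on the phone before storage) can be sketched as follows. The salting scheme and field names are assumptions for illustration, not details taken from the thesis.

```python
import hashlib

def anonymize(value: str, salt: str) -> str:
    """Return a salted SHA-256 digest of a sensitive string.

    Only the digest is written to the collected data, so raw GPS
    coordinates or device IDs never leave the phone in clear text,
    while identical inputs stay linkable within one campaign.
    """
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# Hypothetical record: field names and salt are illustrative.
record = {
    "device_id": anonymize("355402091234567", salt="campaign-2013"),
    "gps": anonymize("45.1885,5.7245", salt="campaign-2013"),
}
```

A per-campaign salt keeps tokens consistent across one collection while preventing trivial dictionary lookups across campaigns.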

    Pierre Aubert (1642-1733) et son legs : la naissance de la bibliothèque publique de Lyon

    Get PDF
    In 1731, Pierre Aubert bequeathed nearly 6,200 works to the city of Lyon, thereby giving birth to the city's first public library as early as 1733. However, only three volumes from this bequest remain in the current collections of the Bibliothèque municipale. This study seeks to understand the motivations and ambitions that led this collector, in the cultural context of the 18th century, to make a donation hitherto unprecedented in Lyon.

    Collecte de parole pour l'étude des langues peu dotées ou en danger avec l'application mobile Lig-Aikuma

    No full text
    In this article we report on ongoing work on the collection of under-resourced or endangered African languages. A data collection was carried out using a modified version of the Android application AIKUMA, initially developed by Steven Bird and colleagues (Bird et al., 2014). The modifications follow the specifications of the French-German ANR/DFG BULB project to facilitate the field collection of parallel speech corpora. The resulting application, called LIG-AIKUMA, has been successfully tested on several smartphones and tablets and offers several operating modes (speech recording, speech respeaking, translation and elicitation). Among other features, LIG-AIKUMA allows the generation and advanced handling of metadata files, as well as the management of alignment information between parallel spoken sentences in the respeaking and translation modes. The application was used during field collection campaigns in Congo-Brazzaville, enabling the acquisition of 80 hours of speech. The design of the application and an illustration of its use in two collection campaigns are described in more detail in this article.

    Parallel Speech Collection for Under-resourced Language Studies Using the Lig-Aikuma Mobile Device App

    Get PDF
    This paper reports on our ongoing efforts to collect speech data in under-resourced or endangered languages of Africa. Data collection is carried out using an improved version of the Android application Aikuma developed by Steven Bird and colleagues. Features were added to the app in order to facilitate the collection of parallel speech data in line with the requirements of the French-German ANR/DFG BULB (Breaking the Unwritten Language Barrier) project. The resulting app, called Lig-Aikuma, runs on various mobile phones and tablets and offers a range of different speech collection modes (recording, respeaking, translation and elicitation). Lig-Aikuma's improved features include smart generation and handling of speaker metadata as well as respeaking and parallel audio data mapping. It was used for field data collections in Congo-Brazzaville, resulting in a total of over 80 hours of speech. Design issues of the mobile app, as well as its use during two recording campaigns, are further described in this paper.
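The metadata generation and source-audio mapping described above can be sketched as follows. The field names, the language code, and the function name are illustrative assumptions; the actual Lig-Aikuma metadata schema is not given in the abstract.

```python
import json
import time

VALID_MODES = {"recording", "respeaking", "translation", "elicitation"}

def make_session_metadata(speaker_id, language, mode, source_file=None):
    """Build a metadata record for one recording session (hypothetical
    schema, for illustration only)."""
    assert mode in VALID_MODES
    meta = {
        "speaker_id": speaker_id,
        "language": language,
        "mode": mode,
        "timestamp": int(time.time()),
    }
    # Respeaking and translation operate on an existing recording, so
    # the metadata keeps a pointer back to the source audio, which is
    # what makes parallel audio mapping possible downstream.
    if mode in {"respeaking", "translation"}:
        meta["source_file"] = source_file
    return meta

meta = make_session_metadata("spk01", "mboshi", "respeaking",
                             source_file="rec_0001.wav")
serialized = json.dumps(meta)
```

Keeping the link from a respeaking session back to its source file is the design choice that turns independent recordings into a parallel corpus.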

    Forces and trauma associated with minimally invasive image-guided cochlear implantation

    Get PDF
    Objective. Minimally invasive image-guided cochlear implantation (CI) utilizes a patient-customized microstereotactic frame to access the cochlea via a single drill-pass. We investigate the average force and trauma associated with the insertion of lateral wall CI electrodes using this technique. Study Design. Assessment using cadaveric temporal bones. Setting. Laboratory setup. Subjects and Methods. Microstereotactic frames for 6 fresh cadaveric temporal bones were built using CT scans to determine an optimal drill path following which drilling was performed. CI electrodes were inserted using surgical forceps to manually advance the CI electrode array, via the drilled tunnel, into the cochlea. Forces were recorded using a 6-axis load sensor placed under the temporal bone during the insertion of lateral wall electrode arrays (2 each of Nucleus CI422, MED-EL standard, and modified MED-EL electrodes with stiffeners). Tissue histology was performed by microdissection of the otic capsule and apical photo documentation of electrode position and intracochlear tissue. Results. After drilling, CT scanning demonstrated successful access to cochlea in all 6 bones. Average insertion forces ranged from 0.009 to 0.078 N. Peak forces were in the range of 0.056 to 0.469 N. Tissue histology showed complete scala tympani insertion in 5 specimens and scala vestibuli insertion in the remaining specimen with depth of insertion ranging from 360° to 600°. No intracochlear trauma was identified. Conclusion. The use of lateral wall electrodes with the minimally invasive image-guided CI approach was associated with insertion forces comparable to traditional CI surgery. Deep insertions were obtained without identifiable trauma. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2014

    LIG-AIKUMA: a Mobile App to Collect Parallel Speech for Under-Resourced Language Studies

    No full text
    This paper reports on our ongoing efforts to collect speech data in under-resourced or endangered languages of Africa. Data collection is carried out using an improved version of the Android application AIKUMA developed by Steven Bird and colleagues [1]. Features were added to the app in order to facilitate the collection of parallel speech data in line with the requirements of the French-German ANR/DFG BULB (Breaking the Unwritten Language Barrier) project. The resulting app, called LIG-AIKUMA, runs on various mobile phones and tablets and proposes a range of different speech collection modes (recording, respeaking, translation and elicitation). It was used for field data collections in Congo-Brazzaville resulting in a total of over 80 hours of speech.

    A combination of LongSAGE with Solexa sequencing is well suited to explore the depth and the complexity of transcriptome

    Get PDF
    Background: "Open" transcriptome analysis methods make it possible to study gene expression without a priori knowledge of the transcript sequences. As of now, SAGE (Serial Analysis of Gene Expression), LongSAGE and MPSS (Massively Parallel Signature Sequencing) are the most widely used methods for "open" transcriptome analysis. Both LongSAGE and MPSS rely on the isolation of 21 bp tag sequences from each transcript. In contrast to LongSAGE, the high-throughput sequencing method used in MPSS enables the rapid sequencing of very large libraries containing several million tags, allowing deep transcriptome analysis. However, a bias in the complexity of the transcriptome representation obtained by MPSS was recently uncovered.

    Results: In order to carry out a deep analysis of the mouse hypothalamus transcriptome while avoiding the limitation introduced by MPSS, we combined LongSAGE with the Solexa sequencing technology and obtained a library of more than 11 million tags. We then compared it to a LongSAGE library of mouse hypothalamus sequenced with the Sanger method.

    Conclusion: We found that Solexa sequencing technology combined with LongSAGE is perfectly suited for deep transcriptome analysis. In contrast to MPSS, it gives a representation of transcriptome complexity as reliable as that of a LongSAGE library sequenced by the Sanger method.
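The tag-based counting that underlies LongSAGE can be sketched as follows: a minimal sketch assuming the standard LongSAGE convention of a 21 bp tag (the CATG anchoring-enzyme site plus the following 17 bp) taken at the 3'-most NlaIII site of each transcript. The function names and sequences are toy examples, not the paper's pipeline.

```python
from collections import Counter

TAG_LEN = 21  # CATG anchor site + 17 bp

def extract_tag(transcript):
    """Return the 21 bp tag starting at the 3'-most CATG site of a
    transcript, or None if no complete tag can be cut."""
    pos = transcript.rfind("CATG")
    while pos != -1 and pos + TAG_LEN > len(transcript):
        # The 3'-most site is too close to the end; try the next one upstream.
        pos = transcript.rfind("CATG", 0, pos)
    return transcript[pos:pos + TAG_LEN] if pos != -1 else None

def count_tags(sequences):
    """Tally tag occurrences across a library: the count of each tag
    serves as a digital measure of the transcript's expression level."""
    tags = (extract_tag(s) for s in sequences)
    return Counter(t for t in tags if t is not None)
```

In a deep-sequencing library the reads are essentially the tags themselves; extraction from full transcripts, as shown here, is mainly useful for building the tag-to-gene reference map.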
