    Total Synthesis of Tricyclic Azaspiranes (FR901483 and TAN1251C)

    No full text
    FR901483 and the TAN1251 family, including TAN1251C, which we studied in particular, represent a structurally unprecedented class of natural alkaloids: they are tricyclic azaspiranes, dimers of L-tyrosine (a natural amino acid). FR901483 is a potent immunosuppressant whose probable mode of action is the inhibition of purine biosynthesis, especially adenine. The TAN1251 compounds are inhibitors of muscarinic acetylcholine receptors, and some of them possess antispasmodic and antiulcer properties. Consequently, these two families of compounds hold major therapeutic promise in organ transplantation and autoimmune diseases (for FR901483) and in diseases arising from disorders of the parasympathetic nervous system (for the TAN1251 family). Following the development of a new methodology for oxidizing phenolic oxazolines into spirolactams using bis(acetoxy)iodobenzene, the total syntheses of FR901483 and TAN1251C were successfully achieved using this reaction as the key step. Indeed, this oxidation made it possible to prepare a synthetic intermediate common to both products. The third ring of FR901483 was introduced through a diastereoselective intramolecular aldol reaction. The construction of the last ring of TAN1251C corresponds to the formation of an enamine. The total synthesis of TAN1251C also allows us to propose a formal synthesis of the other TAN1251 derivatives (A, B, and D).

    QuantumLeap, a Framework for Engineering Gestural User Interfaces based on the Leap Motion Controller

    No full text
    Despite the tremendous progress made in recognizing gestures acquired by various devices, such as the Leap Motion Controller, developing a gestural user interface based on such devices still requires a significant programming and software engineering effort before obtaining a running interactive application. To facilitate this development, we present QuantumLeap, a framework for engineering gestural user interfaces based on the Leap Motion Controller. Its pipeline software architecture can be parameterized to define a workflow among modules for acquiring gestures from the Leap Motion Controller, segmenting them, recognizing them, and mapping them to functions of the application. To demonstrate its practical usage, we implement two gesture-based applications: an image viewer that allows healthcare workers to browse DICOM medical images of their patients without the hygiene issues commonly associated with touch user interfaces, and a large-scale application for managing multimedia contents on wall screens. To evaluate the usability of QuantumLeap, seven participants took part in an experiment in which they used QuantumLeap to add a gestural interface to an existing application.
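
    As a concrete picture of such a pipeline, here is a minimal Python sketch of the acquire-segment-recognize-map workflow. It is our own illustration under stated assumptions: every class, function, and binding below is hypothetical and does not reflect QuantumLeap's actual API, which targets the Leap Motion SDK.

```python
# Hypothetical sketch of a parameterizable gesture pipeline in the spirit of
# QuantumLeap: acquisition -> segmentation -> recognition -> mapping.
# None of these names come from the real framework.
from dataclasses import dataclass, field
from typing import Callable, Iterable

Frame = list  # one frame = a list of (x, y, z) hand-joint positions

@dataclass
class Pipeline:
    segmenter: Callable[[Iterable[Frame]], Iterable[list]]  # stream -> candidates
    recognizer: Callable[[list], str]                        # candidate -> name
    mapping: dict = field(default_factory=dict)              # name -> app function

    def run(self, frames: Iterable[Frame]) -> None:
        for segment in self.segmenter(frames):  # cut the stream into candidates
            name = self.recognizer(segment)     # classify each candidate gesture
            action = self.mapping.get(name)     # look up the bound app function
            if action:
                action()

def fixed_window(frames: Iterable[Frame], size: int = 30):
    """Naive segmenter: emit one candidate gesture every `size` frames."""
    buffer = []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == size:
            yield buffer
            buffer = []

# Example wiring for an image viewer: one stub recognizer, one bound command.
pipeline = Pipeline(
    segmenter=fixed_window,
    recognizer=lambda segment: "swipe-left",                  # stub recognizer
    mapping={"swipe-left": lambda: print("previous image")},
)
pipeline.run([[(0.0, 0.0, 0.0)]] * 60)  # fake frame stream; prints twice
```

    Swapping the segmenter or recognizer, or re-binding the mapping, mirrors how a parameterizable pipeline architecture lets the same workflow serve different applications.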

    Recognizing 3D Trajectories as 2D Multi-stroke Gestures

    No full text
    While end-users can acquire full 3D gestures with many input devices, they often capture only 3D trajectories, which are 3D uni-path, uni-stroke single-point gestures performed in thin air. Such trajectories with their (x, y, z) coordinates could be interpreted as three 2D stroke gestures projected on three planes, i.e., XY, YZ, and ZX, thus making them admissible for established 2D stroke gesture recognizers. To investigate whether 3D trajectories could be effectively and efficiently recognized, four 2D stroke gesture recognizers, i.e., $P, $P+, $Q, and Rubine, are extended to the third dimension: $P^3, $P+^3, $Q^3, and Rubine-Sheng, an extension of Rubine for 3D with more features. Two new variations are also introduced: $F for flexible cloud matching and FreeHandUni for uni-path recognition. Rubine3D, another extension of Rubine for 3D which projects the 3D gesture on three orthogonal planes, is also included. These seven recognizers are compared against three challenging datasets containing 3D trajectories, i.e., SHREC2019 and 3DTCGS, in a user-independent scenario, and 3DMadLabSD with its four domains, in both user-dependent and user-independent scenarios, with varying numbers of templates and sampling. Individual recognition rates and execution times per dataset, and aggregated ones on all datasets, show a highly significant advantage of $P+^3 over its competitors. The potential effects of the dataset, the number of templates, and the sampling are also studied.
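
    The projection step at the heart of the paper is easy to sketch. The following Python fragment is our own stand-in, not the paper's code: it splits a 3D trajectory into its XY, YZ, and ZX projections and matches them against templates with a plain point-wise Euclidean matcher, which is far simpler than $P^3 or $P+^3.

```python
# Stand-in sketch of the projection idea: a 3D trajectory becomes three 2D
# stroke gestures (XY, YZ, ZX), each scored against templates with a simple
# point-wise Euclidean matcher after $-family-style resampling.
import math

def resample(points, n=32):
    """Resample a polyline to n equidistant points (standard $-family step)."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    step = length / (n - 1) if length else 1.0
    out, acc = [points[0]], 0.0
    pts = list(points)
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            t = (step - acc) / d
            q = tuple(p + t * (c - p) for p, c in zip(pts[i - 1], pts[i]))
            out.append(q)
            pts.insert(i, q)  # treat the new point as the next segment start
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against float rounding losing the last point
        out.append(pts[-1])
    return out

def project(traj3d):
    """Split one (x, y, z) trajectory into its XY, YZ, and ZX projections."""
    return ([(x, y) for x, y, z in traj3d],
            [(y, z) for x, y, z in traj3d],
            [(z, x) for x, y, z in traj3d])

def recognize(traj3d, templates):
    """templates: {name: 3D trajectory}; returns the best-matching name."""
    planes = [resample(p) for p in project(traj3d)]
    best, best_cost = None, math.inf
    for name, tmpl in templates.items():
        t_planes = [resample(p) for p in project(tmpl)]
        cost = sum(math.dist(a, b)
                   for plane, t_plane in zip(planes, t_planes)
                   for a, b in zip(plane, t_plane))
        if cost < best_cost:
            best, best_cost = name, cost
    return best

# Tiny usage example with two synthetic templates.
line = [(float(i), 0.0, 0.0) for i in range(10)]
arc = [(math.cos(t / 3), math.sin(t / 3), t / 10) for t in range(10)]
print(recognize(line, {"line": line, "arc": arc}))  # -> "line"
```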

    A Systematic Procedure for Comparing Template-Based Gesture Recognizers

    No full text
    To consistently compare gesture recognizers under identical conditions, a systematic procedure for comparative testing should investigate how the number of templates, the number of sampling points, the number of fingers, and their configuration with other hand parameters, such as hand joints, palm, and fingertips, impact performance. This paper defines a systematic procedure for comparing recognizers using a series of test definitions, i.e., ordered lists of test cases with controlled variables common to all test cases. For each test case, accuracy is measured by the recognition rate and responsiveness by the execution time. This procedure is applied to six state-of-the-art template-based gesture recognizers on SHREC2019, a gesture dataset that contains simple and complex hand gestures and is widely used in the literature for competitions in a user-independent scenario, and on Jackknife-lm, another challenging dataset. The results of the procedure identify the configurations in which each recognizer is the most accurate or the fastest.
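
    The shape of such a procedure can be pictured as a nested loop over the controlled variables, as in the hedged Python fragment below; the recognizer interface, the dataset layout, and the crude index-based resampling are all our assumptions, not the paper's definitions.

```python
# Hedged sketch of a systematic test procedure: every recognizer runs on the
# same ordered list of test cases; each case reports accuracy (recognition
# rate) and responsiveness (mean execution time).
import itertools, random, statistics, time

def resample(gesture, n):
    # Crude index-based down/up-sampling; a real procedure resamples by arc length.
    return [gesture[round(i * (len(gesture) - 1) / (n - 1))] for i in range(n)]

def run_test_case(recognizer, dataset, n_templates, n_points, trials=100):
    """dataset: {class name: list of gestures}; assumes enough gestures per class."""
    correct, timings = 0, []
    for _ in range(trials):
        label = random.choice(list(dataset))
        templates = {name: [resample(g, n_points)
                            for g in random.sample(gestures, n_templates)]
                     for name, gestures in dataset.items()}
        sample = resample(random.choice(dataset[label]), n_points)
        start = time.perf_counter()
        predicted = recognizer(sample, templates)
        timings.append(time.perf_counter() - start)
        correct += predicted == label
    return correct / trials, statistics.mean(timings)

def compare(recognizers, dataset,
            template_counts=(1, 2, 4, 8), point_counts=(16, 32, 64)):
    # One test definition = the ordered list of (templates x sampling) cases,
    # shared by all recognizers so the comparison stays identical.
    for name, reco in recognizers.items():
        for n_t, n_p in itertools.product(template_counts, point_counts):
            rate, secs = run_test_case(reco, dataset, n_t, n_p)
            print(f"{name:>12}  T={n_t:<2} N={n_p:<3} "
                  f"rate={rate:6.2%}  time={secs * 1000:.2f} ms")
```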

    An Ontology for Reasoning on Body-based Gestures

    No full text
    Body-based gestures, such as those acquired by the Kinect sensor, today benefit from efficient tools for their recognition and development, but fewer for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on the user, the body and its parts, gestures, and the environment, is designed and encoded in the Web Ontology Language (OWL) according to modelling triples ⟨subject, predicate, object⟩. As a proof of concept and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
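
    To make the triple-based encoding concrete, here is a small Python sketch using rdflib; the namespace and every class or property name (BodyBasedGesture, performedBy, and so on) are our hypothetical stand-ins for the ontology's actual vocabulary.

```python
# Hedged sketch: encoding one elicited gesture as <subject, predicate, object>
# triples with rdflib. The namespace, classes, and properties below are
# illustrative assumptions, not the paper's actual ontology terms.
from rdflib import Graph, Literal, Namespace, RDF

GEST = Namespace("http://example.org/gesture#")  # hypothetical namespace

g = Graph()
g.bind("gest", GEST)

# One body-based gesture elicited for an IoT referent ("turn on the lights").
g.add((GEST.g001, RDF.type, GEST.BodyBasedGesture))
g.add((GEST.g001, GEST.performedBy, GEST.participant12))
g.add((GEST.g001, GEST.usesBodyPart, GEST.rightHand))
g.add((GEST.g001, GEST.hasReferent, Literal("turn on the lights")))
g.add((GEST.g001, GEST.inEnvironment, GEST.livingRoom))

print(g.serialize(format="turtle"))
```

    Once gestures are stored this way, standard OWL reasoners and SPARQL queries can answer questions such as which elicited gestures use a given body part for a given referent.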