61 research outputs found

    Photochemical synthesis of a “cage” compound in a microreactor: Rigorous comparison with a batch photoreactor

    An intramolecular [2 + 2] photocycloaddition is performed in a microphotoreactor (0.81 mL) built by winding FEP tubing around a commercially available Pyrex immersion well into which a medium-pressure mercury lamp is inserted. A rigorous comparison with a batch photoreactor (225 mL) is proposed by means of a simple model coupling the reaction kinetics with the mass, momentum, and radiative transfer equations. This serves as a basis for explaining why, relative to the batch photoreactor, the chemical conversion is increased and the irradiation time reduced in the microphotoreactor. Through this simple model reaction, criteria for transposing a photochemical synthesis from a batch photoreactor to a continuous microphotoreactor are defined.
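The optical-path argument behind such a batch-versus-microreactor comparison can be sketched with the Beer-Lambert law. The absorbance and path lengths below are illustrative assumptions, not values from the paper:

```python
def absorbed_fraction(absorbance_per_cm: float, path_cm: float) -> float:
    """Fraction of incident photons absorbed over an optical path
    (Beer-Lambert law): 1 - 10^(-A*l)."""
    return 1.0 - 10.0 ** (-absorbance_per_cm * path_cm)

# Illustrative (assumed) values: the same solution seen through two paths.
A = 2.0            # decadic absorbance per cm of the reaction medium (assumed)
micro_path = 0.08  # cm, on the order of the FEP tubing inner bore (assumed)
batch_path = 3.0   # cm, annular depth around the immersion well (assumed)

f_micro = absorbed_fraction(A, micro_path)
f_batch = absorbed_fraction(A, batch_path)

# In the batch reactor essentially all photons are absorbed in a thin
# layer near the lamp, leaving the bulk dark; in the microreactor the
# whole volume stays close to uniformly irradiated.
print(f"micro: {f_micro:.2f} of light absorbed over {micro_path} cm")
print(f"batch: {f_batch:.6f} of light absorbed over {batch_path} cm")
```

The thin optical path is one reason the conversion rises and the required irradiation time drops at micro-scale.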

    The micro-scale in organic synthesis: a tool shared by chemistry and chemical engineering

    This article is a short account of the presentation given at JIREC 2013 on teaching the notions of scale-up and of moving from batch to continuous mode in organic synthesis. The goal is to have students from the chemistry and chemical engineering departments work with a common tool, the microreactor. During a practical session, the students carry out a continuous organic synthesis at micro-scale and compare the results with those of the batch process. They thus grasp the notions of continuous synthesis and of kinetic monitoring along the microreactor, and come to understand both the benefits and the difficulties of the small scale. The microreactor setup established at INP-ENSIACET can be transferred to other engineering curricula, as well as to CPGE, BTS, or IUT programmes, to help bridge the fields of reaction engineering and organic synthesis.

    Microreactors as a Tool for Acquiring Kinetic Data on Photochemical Reactions

    For the first time, the application of microreactors as a tool for acquiring kinetic data on a photochemical reaction is demonstrated. For illustration, a T-photochromic system is considered. By using modeling tools and carrying out specific experiments in a spiral-shaped microreactor irradiated by an ultraviolet light-emitting diode (UV-LED) array, the two kinetic parameters of the reaction, namely the quantum yield and the rate of the thermal back reaction, are determined. Once these parameters are known, the photochromic reaction is performed in two other microreactors in order to investigate a wider range of operating conditions. A critical residence time is observed beyond which the conversion into the open form decreases because of a decomposition reaction. The value of this critical residence time depends on the microreactor type and can be predicted by applying the model developed.
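A T-type photochromic system of this kind can be sketched as a lumped first-order scheme: a photochemical forward step (rate constant proportional to quantum yield times absorbed photon flux) competing with a thermal back reaction. The rate constants below are illustrative placeholders, not the fitted parameters of the paper:

```python
# Minimal sketch (assumed first-order lumped model) of a T-photochromic
# system: closed form -> open form under light (k_photo), open form ->
# closed form thermally (k_back). x is the fraction in the open form.

def photostationary_fraction(k_photo: float, k_back: float) -> float:
    # Steady state of dx/dt = k_photo*(1-x) - k_back*x:
    # k_photo * (1 - x) = k_back * x
    return k_photo / (k_photo + k_back)

def integrate(k_photo: float, k_back: float, t_end: float,
              dt: float = 1e-3) -> float:
    """Explicit Euler integration of dx/dt = k_photo*(1-x) - k_back*x,
    starting fully closed (x = 0)."""
    x, t = 0.0, 0.0
    while t < t_end:
        x += dt * (k_photo * (1.0 - x) - k_back * x)
        t += dt
    return x

x_inf = photostationary_fraction(0.5, 0.1)  # exact plateau: 0.8333...
x_10s = integrate(0.5, 0.1, 10.0)           # approaches the same plateau
```

Fitting `k_photo` (hence the quantum yield, given the photon flux) and `k_back` to conversion profiles measured along the microreactor is the kind of parameter estimation the abstract describes.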

    A method for the temporal optimization of vision algorithms

    Given the computing power available on mobile robots, temporal optimization becomes a fundamental concern. In this paper, a method is proposed and illustrated by two implemented applications. Finally, we present numerical results from benchmarks quantifying the improvements achieved.

    Transposition of a triphosgene-based process for pharmaceutical development: from mg·h⁻¹ to kg·h⁻¹ of an unsymmetrical urea

    A two-reaction synthesis of a urea using triphosgene was studied. The objective was to transpose the process from laboratory scale to a pre-industrial plant. The whole study was performed as a continuous process, adapting the characteristic dimensions and length of the reactor. This paper presents the development of the process and discusses the choices made regarding safety and operating-condition constraints. The final operation achieved a 70% overall yield over a 7-week study. Furthermore, the use of microreactors not only permits an exhaustive study of the process operating parameters, but also provides feedback on the chemistry itself. The results demonstrate the use of continuous processes in small-scale reactors for the development of complex molecules. The mg·h⁻¹-to-kg·h⁻¹ transposition is a key step in pharmaceutical project development, as it can help accelerate production of the first lots used in toxicological or preclinical stages.

    Flow photochemistry: a meso-scale reactor for industrial applications

    Developing flow photochemistry, especially at meso-scale where significant productivity is required, remains challenging. There is a need for innovative equipment generating highly controlled flow under light irradiation. In this work, a commercial solution developed by Corning is presented and studied by LGC and MEPI on an intramolecular (2+2) photocycloaddition. Detailed experimental and modelling analysis has been performed to characterize the flow reactor's behaviour and performance, and to demonstrate its capability to produce up to 30 g·h⁻¹ of the desired molecule. Through this simple model reaction, the G1 photoreactor is shown to be an efficient meso-scale reactor for industrial photochemical development and production.

    Antischistosomal Activity of Trioxaquines: In Vivo Efficacy and Mechanism of Action on Schistosoma mansoni

    Schistosomiasis is among the most neglected tropical diseases, since its mode of transmission largely limits contamination to people in contact with contaminated waters in endemic countries. Here we report the in vitro and in vivo antischistosomal activities of trioxaquines. These hybrid molecules are highly active against the larval forms of the worms and exhibit several modes of action, not only the alkylation of heme. The synergy observed with praziquantel in infected mice supports the development of these trioxaquines as potential antischistosomal agents.

    Identification of genetic variants associated with Huntington's disease progression: a genome-wide association study

    Background Huntington's disease is caused by a CAG repeat expansion in the huntingtin gene, HTT. Age at onset has been used as a quantitative phenotype in genetic analyses looking for Huntington's disease modifiers, but it is hard to define and not always available. We therefore aimed to generate a novel measure of disease progression and to identify genetic markers associated with it. Methods We generated a progression score on the basis of principal component analysis of prospectively acquired longitudinal changes in motor, cognitive, and imaging measures in the 218 individuals in the TRACK-HD cohort of Huntington's disease gene mutation carriers (data collected 2008–11). We generated a parallel progression score using data from 1773 previously genotyped participants from the European Huntington's Disease Network REGISTRY study of Huntington's disease mutation carriers (data collected 2003–13). We performed genome-wide association analyses of progression for 216 TRACK-HD participants and 1773 REGISTRY participants, then undertook a meta-analysis of these results. Findings Longitudinal motor, cognitive, and imaging scores were correlated with each other in TRACK-HD participants, justifying the use of a single, cross-domain measure of disease progression in both studies. The TRACK-HD and REGISTRY progression measures were correlated with each other (r=0·674), and with age at onset (TRACK-HD, r=0·315; REGISTRY, r=0·234). The meta-analysis of progression in TRACK-HD and REGISTRY gave a genome-wide significant signal (p=1·12 × 10⁻¹⁰) on chromosome 5 spanning three genes: MSH3, DHFR, and MTRNR2L2. The genes in this locus were associated with progression in TRACK-HD (MSH3 p=2·94 × 10⁻⁸, DHFR p=8·37 × 10⁻⁷, MTRNR2L2 p=2·15 × 10⁻⁹) and to a lesser extent in REGISTRY (MSH3 p=9·36 × 10⁻⁴, DHFR p=8·45 × 10⁻⁴, MTRNR2L2 p=1·20 × 10⁻³).
The lead single-nucleotide polymorphism (SNP) in TRACK-HD (rs557874766) was genome-wide significant in the meta-analysis (p=1·58 × 10⁻⁸) and encodes an amino acid change (Pro67Ala) in MSH3. In TRACK-HD, each copy of the minor allele at this SNP was associated with a 0·4 units per year (95% CI 0·16–0·66) reduction in the rate of change of the Unified Huntington's Disease Rating Scale (UHDRS) Total Motor Score, and a 0·12 units per year (95% CI 0·06–0·18) reduction in the rate of change of the UHDRS Total Functional Capacity score. These associations remained significant after adjusting for age at onset. Interpretation The multidomain progression measure in TRACK-HD was associated with a functional variant that was genome-wide significant in our meta-analysis. The association in only 216 participants implies that the progression measure is a sensitive reflection of disease burden, that the effect size at this locus is large, or both. Knockout of Msh3 reduces somatic expansion in Huntington's disease mouse models, suggesting this mechanism as an area for future therapeutic investigation.
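The idea of a single PCA-derived progression score can be sketched on synthetic data: standardise per-domain longitudinal change rates and project them onto the first principal component. The domain slopes, sample size, and random seed below are illustrative assumptions, not study data:

```python
# Hedged sketch of the study's idea (not the authors' code): combine
# longitudinal change rates across motor, cognitive, and imaging domains
# into one progression score via the first principal component.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-participant annualised change rates, one column per
# domain; a shared latent factor makes the domains correlated.
n = 200
latent = rng.normal(size=n)                  # shared progression factor
slopes = np.column_stack([
    latent + 0.5 * rng.normal(size=n),       # "motor" slope (synthetic)
    latent + 0.5 * rng.normal(size=n),       # "cognitive" slope (synthetic)
    latent + 0.5 * rng.normal(size=n),       # "imaging" slope (synthetic)
])

# Standardise each domain, then take the first principal component.
z = (slopes - slopes.mean(axis=0)) / slopes.std(axis=0)
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                         # direction of largest variance
progression_score = z @ pc1                  # one cross-domain score each
```

Because the domains co-vary, PC1 captures most of the variance and the single score tracks the shared factor, which is the justification the abstract gives for a cross-domain measure.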

    Efficient Connected Component Labeling Algorithms for High Performance Architectures

    This PhD work lies in the field of algorithm-architecture matching for computer vision, specifically connected component labeling (CCL) on high-performance parallel architectures. While modern general-purpose architectures are overwhelmingly multi-core, CCL algorithms are mostly sequential and irregular, and use a graph structure to represent the equivalences between labels, which makes their parallelization challenging. Starting from a binary image, CCL gathers all connected pixels under the same label; it thus bridges low-level operations such as filtering and high-level ones such as shape recognition and decision-making. It is involved in a large number of processing chains that require segmented-image analysis, so accelerating this step matters for a whole family of algorithms.

    The work first focused on the comparative performance of state-of-the-art algorithms, both for CCL and for connected component analysis (CCA, the extraction of the components' features), in order to establish a hierarchy and identify the critical components of the algorithms. To this end, a benchmarking method, reproducible and independent of the application domain, was proposed and applied to a representative set of state-of-the-art algorithms. The results show that the fastest sequential algorithm is LSL, which manipulates segments, unlike the other algorithms, which manipulate pixels.

    Secondly, an OpenMP-based parallelization framework for direct algorithms was proposed, with the main objectives of computing the CCA on the fly and reducing the cost of communication between threads. The binary image is divided into bands processed in parallel on each core of the architecture, and a pyramidal merge step over pairwise-disjoint sets of labels yields the fully labeled image without concurrent data access between threads. The benchmarking procedure, applied to machines with various degrees of parallelism, showed that the proposed framework is efficient and applies to all direct algorithms. LSL again proved the fastest, and the only one that scales with the number of cores thanks to its segment-based (run-based) design. On a 60-core architecture, LSL processes 42.4 billion pixels per second for 8192x8192 images, while the fastest pixel-based algorithm is limited by memory bandwidth and saturates at 5.8 billion pixels per second.

    Attention then turned to iterative CCL algorithms, with the goal of developing algorithms for many-core and GPU architectures. Iterative algorithms rely on a local, neighbor-to-neighbor label-propagation mechanism and need no structure other than the image itself, which allows a massively parallel implementation (MPAR). This work led to two new algorithms: an incremental improvement of MPAR combining an alternating scan, SIMD instructions, and an active-tile mechanism that balances the load across cores while restricting processing to the active areas of the image and their neighbors; and an algorithm that encodes the equivalence relation directly in the image to reduce the number of iterations required for labeling. A GPU implementation based on atomic instructions, with pre-labeling in local memory, was realized and proved efficient even for small images.
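The classical direct CCL scheme that such benchmarks compare against can be sketched as a two-pass algorithm with union-find to record label equivalences. This is a minimal illustrative version (4-connectivity, pure Python), not the LSL algorithm or the thesis code:

```python
# Two-pass connected component labeling (4-connectivity) with a
# union-find structure recording label equivalences -- the classical
# "direct" pixel-based scheme.

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving for short trees
        a = parent[a]
    return a

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)

def label(image):
    """image: list of rows of 0/1. Returns a matrix of component labels."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]                        # parent[i] == i means i is a root
    next_label = 1
    for y in range(h):                  # first pass: provisional labels
        for x in range(w):
            if not image[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                labels[y][x] = min(up, left)
                union(parent, up, left)     # remember the equivalence
            elif up or left:
                labels[y][x] = up or left
            else:
                labels[y][x] = next_label   # new provisional label
                parent.append(next_label)
                next_label += 1
    for y in range(h):                  # second pass: resolve equivalences
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(parent, labels[y][x])
    return labels

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
out = label(img)   # two components: the left L-shape and the right column
```

The irregular, data-dependent union-find updates are exactly what makes this family hard to parallelize, which motivates both the band-plus-pyramidal-merge approach and the structure-free iterative propagation described above.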