2,012 research outputs found

    Automatic processing of palatograms (direct palatography)

    Palatography has been widely used to investigate consonant articulation. The technique has some drawbacks, however, since its record of the linguopalatal contact pattern most often does not take the shape of the palate into account. This makes it difficult to compare contact patterns across speakers. To alleviate this limitation, we propose a method that yields more realistic information about the articulation and copes with individual morphological differences.

    Inheritance as Implicit Coercion

    We present a method for providing semantic interpretations for languages with a type system featuring inheritance polymorphism. Our approach is illustrated on an extension of the language Fun of Cardelli and Wegner, which we interpret via a translation into an extended polymorphic lambda calculus. Our goal is to interpret inheritance in Fun via coercion functions which are definable in the target of the translation. Existing techniques in the theory of semantic domains can then be used to interpret the extended polymorphic lambda calculus, thus providing many models for the original language. This technique makes it possible to model a rich type discipline which includes parametric polymorphism and recursive types as well as inheritance. A central difficulty in providing interpretations for explicit type disciplines featuring inheritance in the sense discussed in this paper arises from the fact that programs can type-check in more than one way. Since interpretations follow the type-checking derivations, coherence theorems are required: that is, one must prove that the meaning of a program does not depend on the way it was type-checked. The proofs of such theorems for our proposed interpretation are the basic technical results of this paper. Interestingly, proving coherence in the presence of recursive types, variants, and abstract types forced us to reexamine fundamental equational properties that arise in proof theory (in the form of commutative reductions) and domain theory (in the form of strict vs. non-strict functions).
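    To make the coercion idea concrete, here is a minimal, illustrative sketch (in Python rather than the paper's typed calculus): each record-subtyping judgement is interpreted as an ordinary coercion function, transitivity becomes function composition, and coherence amounts to different derivations of the same judgement denoting the same function. The record "types" and field names below are invented for the example.

```python
# Illustrative only: subtyping judgements interpreted as coercion functions.
# Record "types" are plain dicts; the field names are invented for the example.

def coerce_point3d_to_point2d(r: dict) -> dict:
    # Width subtyping {x, y, z} <: {x, y}: the coercion forgets the extra field.
    return {"x": r["x"], "y": r["y"]}

def coerce_point2d_to_x(r: dict) -> dict:
    # {x, y} <: {x}
    return {"x": r["x"]}

def compose(g, f):
    # Transitivity of subtyping is interpreted as composition of coercions.
    return lambda v: g(f(v))

# Two derivations of {x, y, z} <: {x}: a direct one, and one going via {x, y}.
direct = lambda r: {"x": r["x"]}
via_point2d = compose(coerce_point2d_to_x, coerce_point3d_to_point2d)

p = {"x": 1, "y": 2, "z": 3}
# Coherence: both derivations must denote the same coercion, so a program's
# meaning does not depend on which type-checking derivation was chosen.
assert direct(p) == via_point2d(p)
```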

    Proof Theoretic Concepts for the Semantics of Types and Concurrency


    Combining Reinforcement Learning and Constraint Programming for Combinatorial Optimization

    Combinatorial optimization has found applications in numerous fields, from aerospace to transportation planning and economics. The goal is to find an optimal solution among a finite set of possibilities. The well-known challenge in combinatorial optimization is the state-space explosion problem: the number of possibilities grows exponentially with the problem size, which makes solving intractable for large problems. In recent years, deep reinforcement learning (DRL) has shown promise for designing good heuristics dedicated to solving NP-hard combinatorial optimization problems. However, current approaches have two shortcomings: (1) they mainly focus on the standard travelling salesman problem and cannot easily be extended to other problems, and (2) they only provide an approximate solution with no systematic way to improve it or to prove optimality. In another context, constraint programming (CP) is a generic tool for solving combinatorial optimization problems. Based on a complete search procedure, it will always find the optimal solution if the allowed execution time is large enough. A critical design choice that makes CP non-trivial to use in practice is the branching decision, which directs how the search space is explored. In this work, we propose a general hybrid approach, based on DRL and CP, for solving combinatorial optimization problems. The core of our approach is a dynamic programming formulation that acts as a bridge between the two techniques. We show experimentally that our solver efficiently solves two challenging problems: the travelling salesman problem with time windows and the 4-moments portfolio optimization problem. The results show that the proposed framework outperforms stand-alone RL and CP solutions, while being competitive with industrial solvers.
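    As a rough illustration of how a dynamic programming formulation can bridge a learned value estimator and a complete, pruned search, the sketch below enumerates DP states (current city, visited set) for a toy travelling salesman instance, orders branching decisions by a value estimate, and prunes with the incumbent bound. The `value_estimate` function is a hand-written placeholder standing in for a trained DRL network; the distance matrix and names are invented, and this is not the paper's solver.

```python
# Hypothetical sketch of the DP bridge between a learned policy and a
# CP-style branch-and-bound: states are (current city, visited set),
# branching expands unvisited cities ordered by a value estimate.

DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
N = len(DIST)

def value_estimate(city: int, visited: frozenset) -> float:
    # Placeholder for a learned value function: cheapest edge from `city`
    # to any unvisited city (a pure heuristic, not a trained network).
    remaining = [DIST[city][j] for j in range(N) if j not in visited]
    return min(remaining) if remaining else DIST[city][0]

best = {"cost": float("inf"), "tour": None}

def search(city: int, visited: frozenset, cost: float, tour: list) -> None:
    if cost >= best["cost"]:           # bounding: prune dominated branches
        return
    if len(visited) == N:              # all cities visited: close the tour
        total = cost + DIST[city][tour[0]]
        if total < best["cost"]:
            best["cost"], best["tour"] = total, tour + [tour[0]]
        return
    successors = [j for j in range(N) if j not in visited]
    # Branching heuristic: explore the most promising successor states first.
    successors.sort(key=lambda j: DIST[city][j] + value_estimate(j, visited | {j}))
    for j in successors:
        search(j, visited | {j}, cost + DIST[city][j], tour + [j])

search(0, frozenset([0]), 0.0, [0])
print(best)   # optimal tour and its length for this toy instance
```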

    The bioavailability of bromazepam, omeprazole and paracetamol given by nasogastric feeding tube

    Aims: To characterize and compare the pharmacokinetic profiles of bromazepam, omeprazole and paracetamol when administered by the oral and nasogastric routes to the same healthy cohort of volunteers. Methods: In a prospective, monocentric, randomized crossover study, eight healthy volunteers received the three drugs by the oral (OR) and nasogastric (NT) routes. Sequential plasma samples were analyzed by high-performance liquid chromatography-UV; the pharmacokinetic parameters (Cmax, AUC(0-∞), t½, ke, tmax) were compared statistically, and Cmax, AUC(0-∞) and tmax were analyzed for bioequivalence. Results: A statistically significant difference was seen in the AUC(0-∞) of bromazepam, with nasogastric administration decreasing availability by about 25%: AUC(OR) = 2501 ng mL⁻¹ h; AUC(NT) = 1855 ng mL⁻¹ h (p < 0.05); ratio (geometric mean) = 1.01 (90% CI 0.64-1.61). An extended study with a larger number of subjects may provide clearer answers. The narrow 90% confidence limits of paracetamol indicate bioequivalence: AUC(OR) = 37 μg mL⁻¹ h; AUC(NT) = 41 μg mL⁻¹ h (p > 0.05); ratio (geometric mean) = 1.12 (90% CI 0.98-1.28). Conclusion: The results of this study show that the nasogastric route of administration does not appear to cause marked, clinically unsuitable alterations in the bioavailability of the tested drugs.
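    For readers unfamiliar with the reported parameters, the sketch below shows, with invented numbers and standard textbook formulas (not the study's data or statistical model), how AUC(0-∞) is obtained by the trapezoidal rule plus extrapolation with the terminal rate constant ke, and how a geometric-mean test/reference ratio of the kind quoted above is formed.

```python
# Illustrative sketch only (made-up numbers, standard textbook formulas):
# AUC(0-t) by the linear trapezoidal rule, extrapolation to infinity with the
# terminal elimination rate constant ke, and a geometric-mean test/reference ratio.
import math

times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]   # h (hypothetical sampling grid)
conc  = [0.0, 3.2, 4.1, 3.5, 2.2, 1.0, 0.5]    # ug/mL (hypothetical)

def auc_0_t(t, c):
    # Linear trapezoidal rule over the observed samples.
    return sum((t[i + 1] - t[i]) * (c[i + 1] + c[i]) / 2 for i in range(len(t) - 1))

def auc_0_inf(t, c, ke):
    # AUC(0-inf) = AUC(0-tlast) + Clast / ke.
    return auc_0_t(t, c) + c[-1] / ke

# Crude two-point estimate of the terminal log-linear slope.
ke = math.log(conc[-2] / conc[-1]) / (times[-1] - times[-2])
print(f"ke = {ke:.3f} 1/h, AUC(0-inf) = {auc_0_inf(times, conc, ke):.2f} ug*h/mL")

def geometric_mean_ratio(test, ref):
    # Geometric mean of the per-subject test/reference AUC ratios.
    logs = [math.log(x / y) for x, y in zip(test, ref)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical per-subject AUCs: nasogastric (test) vs. oral (reference).
print(geometric_mean_ratio([41, 38, 44], [37, 40, 39]))
```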

    Duration of adjuvant chemotherapy for stage III colon cancer

    BACKGROUND Since 2004, a regimen of 6 months of treatment with oxaliplatin plus a fluoropyrimidine has been standard adjuvant therapy in patients with stage III colon cancer. However, since oxaliplatin is associated with cumulative neurotoxicity, a shorter duration of therapy could spare toxic effects and health expenditures. METHODS We performed a prospective, preplanned, pooled analysis of six randomized, phase 3 trials that were conducted concurrently to evaluate the noninferiority of adjuvant therapy with either FOLFOX (fluorouracil, leucovorin, and oxaliplatin) or CAPOX (capecitabine and oxaliplatin) administered for 3 months, as compared with 6 months. The primary end point was the rate of disease-free survival at 3 years. Noninferiority of 3 months versus 6 months of therapy could be claimed if the upper limit of the two-sided 95% confidence interval of the hazard ratio did not exceed 1.12. RESULTS After 3263 events of disease recurrence or death had been reported in 12,834 patients, the noninferiority of 3 months of treatment versus 6 months was not confirmed in the overall study population (hazard ratio, 1.07; 95% confidence interval [CI], 1.00 to 1.15). Noninferiority of the shorter regimen was seen for CAPOX (hazard ratio, 0.95; 95% CI, 0.85 to 1.06) but not for FOLFOX (hazard ratio, 1.16; 95% CI, 1.06 to 1.26). In an exploratory analysis of the combined regimens, among the patients with T1, T2, or T3 and N1 cancers, 3 months of therapy was noninferior to 6 months, with a 3-year rate of disease-free survival of 83.1% and 83.3%, respectively (hazard ratio, 1.01; 95% CI, 0.90 to 1.12). Among patients with cancers that were classified as T4, N2, or both, the disease-free survival rate for a 6-month duration of therapy was superior to that for a 3-month duration (64.4% vs. 62.7%) for the combined treatments (hazard ratio, 1.12; 95% CI, 1.03 to 1.23; P=0.01 for superiority). CONCLUSIONS Among patients with stage III colon cancer receiving adjuvant therapy with FOLFOX or CAPOX, noninferiority of 3 months of therapy, as compared with 6 months, was not confirmed in the overall population. However, in patients treated with CAPOX, 3 months of therapy was as effective as 6 months, particularly in the lower-risk subgroup. (Funded by the National Cancer Institute and others.)
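    The noninferiority criterion quoted above is a simple decision rule; the short sketch below merely restates it in code against the hazard-ratio confidence intervals reported in the abstract (no new data, hypothetical variable names).

```python
# Decision rule from the abstract: 3 months is declared noninferior to 6 months
# only if the upper limit of the two-sided 95% CI of the hazard ratio <= 1.12.
MARGIN = 1.12

results = {
    "overall": (1.07, 1.00, 1.15),
    "CAPOX":   (0.95, 0.85, 1.06),
    "FOLFOX":  (1.16, 1.06, 1.26),
}

for name, (hr, lo, hi) in results.items():
    verdict = "noninferior" if hi <= MARGIN else "noninferiority not shown"
    print(f"{name}: HR {hr} (95% CI {lo}-{hi}) -> {verdict}")
```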

    LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its many variants, and compressed-sensing-inspired methods. Both are discrete in nature and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolving power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method for finding the locations of point sources in the continuum, without imposing a grid. The continuous formulation makes the FRI recovery performance depend only on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal and high noise, huge data sets, and large numbers of sources. Aims. The aims were (i) to adapt FRI to radio astronomy, (ii) to verify that it can recover sources under radio-astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) to show that sources can be found using less data than would otherwise be required to find them, and (iv) to show that FRI does not lead to an increased rate of false positives. Methods. We implemented a continuous-domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed in simulation and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results. We adapted the FRI framework to radio interferometry and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a reconstruction quality comparable to that of a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. We also demonstrate that the method is robust to the presence of extended sources, and that false positives can be addressed by choosing an adequate model order to match the noise level.
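    The following is a minimal one-dimensional FRI sketch, not the paper's radio-interferometric pipeline on visibilities: it recovers off-grid point-source locations and intensities from a few uniform Fourier samples using an annihilating (Prony-like) filter. The source positions, sample count and variable names are illustrative assumptions.

```python
# Illustrative 1-D FRI sketch (toy setup): recover off-grid point-source
# locations from uniform Fourier samples with an annihilating filter, then
# solve for intensities by least squares.
import numpy as np

locations = np.array([0.123, 0.456, 0.789])   # true off-grid positions in [0, 1)
intensities = np.array([1.0, 0.6, 0.3])
K = len(locations)

# Uniform Fourier samples of the sparse sky: X[m] = sum_k a_k exp(-2j*pi*m*t_k).
M = 5
m = np.arange(-M, M + 1)
X = (intensities * np.exp(-2j * np.pi * np.outer(m, locations))).sum(axis=1)

# Annihilating filter h: sum_l h[l] X[n-l] = 0 for all valid n. Build the
# Toeplitz system and take the filter from the null space (smallest singular vector).
rows = len(X) - K
A = np.array([[X[K + r - l] for l in range(K + 1)] for r in range(rows)])
h = np.linalg.svd(A)[2][-1].conj()

# The filter's roots are exp(-2j*pi*t_k); map them back to locations in [0, 1).
t_est = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))

# Intensities by least squares against the recovered (continuous) locations.
V = np.exp(-2j * np.pi * np.outer(m, t_est))
a_est = np.linalg.lstsq(V, X, rcond=None)[0].real

print("recovered locations:", np.round(t_est, 4))    # ~ [0.123, 0.456, 0.789]
print("recovered intensities:", np.round(a_est, 3))  # ~ [1.0, 0.6, 0.3]
```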