
    FTN multicarrier transmission based on tight Gabor frames

    A multicarrier signal can be synthesized from a symbol sequence and a Gabor family (i.e., a family of regularly time-frequency shifted versions of a generator pulse). In this article, we consider the case where the signaling density is increased such that inter-pulse interference is unavoidable. Over an additive white Gaussian noise channel, we show that the signal-to-interference-plus-noise ratio is maximized when the transmitter and the receiver use the same tight Gabor frame. Moreover, we give practical, efficient realization schemes and show how to build tight frames based on usual generators. Theoretical and simulated bit-error probabilities are given for a non-coded system using quadrature amplitude modulations. Such a characterization is then used to predict the convergence of a coded system using low-density parity-check codes. We also study the robustness of such a system to errors on the received bits in an interference cancellation context.
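    To make this framework concrete (a generic sketch whose symbols are assumed here rather than taken from the article), the synthesized signal, the faster-than-Nyquist signaling density and the tight-frame condition can be written as follows:

```latex
% Multicarrier synthesis from symbols c_{m,n} and a Gabor family obtained by
% time-frequency shifting a generator pulse g with spacings T and F:
\[
  s(t) \;=\; \sum_{m,n} c_{m,n}\, g_{m,n}(t),
  \qquad
  g_{m,n}(t) \;=\; g(t - nT)\, e^{\,j 2\pi m F t}.
\]
% Faster-than-Nyquist signaling uses an over-critical density, so the family
% is overcomplete and inter-pulse interference is unavoidable:
\[
  \rho \;=\; \frac{1}{TF} \;>\; 1.
\]
% The family is a tight frame with bound A when, for every finite-energy x,
\[
  \sum_{m,n} \bigl|\langle x,\, g_{m,n} \rangle\bigr|^{2} \;=\; A\, \lVert x \rVert^{2}.
\]
```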

    Analysis of a FTN Multicarrier System: Interference Mitigation Based on Tight Gabor Frames

    Cognitive radio applications require flexible waveforms to overcome several challenges such as opportunistic spectrum allocation and white-space utilization. In this context, multicarrier modulations generalizing traditional cyclic-prefix orthogonal frequency-division multiplexing are particularly well suited to fit the time-frequency characteristics of the channel while improving spectral efficiency. In our theoretical framework, a multicarrier signal is described as a Gabor family whose coefficients are the symbols to be transmitted and whose generators are the time-frequency shifted pulse shapes to be used. In this article, we consider the case where non-rectangular pulse shapes are used with a signaling density increased such that inter-pulse interference is unavoidable. Such interference is minimized when the Gabor family used is a tight frame. We show that, in this case, the interference can be approximated as an additive Gaussian noise. This allows us to compute theoretical and simulated bit-error probabilities for a non-coded system using a quadrature phase-shift keying constellation. Such a characterization is then used to predict the convergence of a coded system using low-density parity-check codes. We also study the robustness of such a system to errors on the received bits in an interference cancellation context.
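    As a toy illustration of this signal model (a sketch in which the spacings, density and Hann prototype pulse are arbitrary choices, not values from the article, and the pulse is not claimed to generate a tight frame), an FTN multicarrier signal can be synthesized directly from its Gabor-family definition:

```python
# Illustrative sketch: synthesize a multicarrier signal as a Gabor-family
# expansion with signaling density rho = 1/(T*F) > 1, so neighbouring pulses
# overlap in time and frequency (faster-than-Nyquist regime).
import numpy as np

def gabor_multicarrier(symbols, pulse, T, F, fs):
    """Return s(t) = sum_{m,n} c[m,n] * g(t - n*T) * exp(j*2*pi*m*F*t).

    symbols : (M, N) complex array of constellation symbols c[m, n]
    pulse   : prototype pulse g, sampled at rate fs
    T, F    : time and frequency spacings (seconds, hertz)
    fs      : sampling frequency (hertz)
    """
    M, N = symbols.shape
    L = len(pulse)
    n_samples = int(round((N - 1) * T * fs)) + L
    t = np.arange(n_samples) / fs
    s = np.zeros(n_samples, dtype=complex)
    for n in range(N):                          # time shifts
        start = int(round(n * T * fs))
        window = slice(start, start + L)
        for m in range(M):                      # frequency shifts
            s[window] += symbols[m, n] * pulse * np.exp(2j * np.pi * m * F * t[window])
    return s

# Example with density rho = 1/(T*F) = 1.28 > 1 (hypothetical parameters).
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], (8, 32)) + 1j * rng.choice([-1, 1], (8, 32))) / np.sqrt(2)
fs, F = 1.0e6, 15.625e3                  # 1 MHz sampling, 15.625 kHz spacing
T = 1 / (1.28 * F)                       # time spacing so that 1/(T*F) = 1.28
g = np.hanning(int(round(2 / F * fs)))   # smooth prototype pulse, 2 periods long
g /= np.linalg.norm(g)
signal = gabor_multicarrier(qpsk, g, T, F, fs)
```

    In the setting of the article, the prototype pulse would additionally be chosen so that the resulting Gabor family is a tight frame, which is what keeps the residual inter-pulse interference minimal.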

    The learning problem, classification case

    Machine learning refers to a set of algorithms aimed at making the most accurate possible predictions of an output variable based on the values of some input variables. When the output variable is categorical, the task of generating a prediction is called classification. Classification problems occur very frequently in practice (e.g. predicting the gender of a person, whether a bank client is going to default on his mortgage, or whether a particular share price is going to go up or down). A major classification problem is image recognition: for example, face recognition on social networks, diagnostic support in medical imaging, or product discoverability (finding a similar product from a reference image). We presented and solved the classification learning problem from a theoretical and practical perspective. First, we explained what we mean by “learning” for an algorithm. We introduced the mathematical notation for the different parts of the classification learning problem, and we mathematically demonstrated that learning is feasible under our definition of “learnability”. Next, we concentrated on one learning method, the artificial neural network. This method is a very flexible way of modeling highly nonlinear phenomena. We introduced the mathematical notation and derived the different equations that govern its functioning (the backpropagation algorithm in particular). Then, we showed how the neural network method can be implemented in the R software package. Finally, we presented the performance of the program on a famous test data set, the MNIST database, and compared our results with those reported on the website of Yann LeCun, who studied this database extensively.
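    The thesis implements its neural network in R; purely as an illustration of the backpropagation equations mentioned above (a sketch whose layer sizes, learning rate and synthetic data are arbitrary stand-ins, not taken from the thesis or from MNIST), a one-hidden-layer classifier trained by gradient descent can be written as follows:

```python
# Minimal backpropagation sketch for a one-hidden-layer classifier.
# (Illustrative only: sizes, learning rate and random data are stand-ins.)
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_samples = 64, 32, 10, 500

# Synthetic stand-in for image data: n_samples inputs with integer labels.
X = rng.normal(size=(n_samples, n_in))
y = rng.integers(0, n_out, size=n_samples)
Y = np.eye(n_out)[y]                                      # one-hot targets

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out)); b2 = np.zeros(n_out)
lr = 0.5

for epoch in range(200):
    # Forward pass: sigmoid hidden layer, softmax output.
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    logits = H @ W2 + b2
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    P = np.exp(logits); P /= P.sum(axis=1, keepdims=True)

    # Backward pass (cross-entropy loss): propagate the error layer by layer.
    dlogits = (P - Y) / n_samples                         # dL/dlogits
    dW2 = H.T @ dlogits; db2 = dlogits.sum(axis=0)
    dH = dlogits @ W2.T
    dZ1 = dH * H * (1.0 - H)                              # sigmoid derivative
    dW1 = X.T @ dZ1; db1 = dZ1.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", (P.argmax(axis=1) == y).mean())
```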

    Compilation for heterogeneous SoCs: bridging the gap between software and target-specific mechanisms

    Current application constraints push for higher computation power with lower energy consumption, driving the development of increasingly specialized SoCs. Meanwhile, these SoCs are still programmed in assembly language to make use of their specific hardware mechanisms. Since the constraints on hardware development bring specialization, and hence heterogeneity, it is essential to support these new mechanisms through high-level programming. In this work, we use a parametric data-flow formalism to abstract the application from any hardware platform. From this premise, we propose to contribute to the compilation of target-independent programs on heterogeneous platforms. These contributions are threefold: 1) support for hardware compute accelerators using actor fusion, 2) automatic generation of communications over complex memory layouts, and 3) synchronization of distributed cores using hardware scheduling mechanisms. Code generation is illustrated on a heterogeneous SoC dedicated to telecommunications.
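    As a toy illustration of point 1) above (assumed, simplified data-flow semantics, not the compiler described in this work), actor fusion can be pictured as merging a producer and a consumer connected by a FIFO into a single actor whose firing runs both computations, so it can be mapped onto one hardware accelerator:

```python
# Toy sketch of actor fusion in a data-flow graph (assumed semantics).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Actor:
    name: str
    consume: int                       # tokens read per firing
    produce: int                       # tokens written per firing
    fn: Callable[[List[float]], List[float]]

def fuse(a: Actor, b: Actor) -> Actor:
    """Fuse producer `a` with consumer `b` connected by a FIFO channel.

    For simplicity we assume a.produce == b.consume, so one firing of the
    fused actor is one firing of `a` followed by one firing of `b`, and the
    intermediate FIFO disappears.
    """
    assert a.produce == b.consume, "rates must match for this simple fusion"
    return Actor(
        name=f"{a.name}+{b.name}",
        consume=a.consume,
        produce=b.produce,
        fn=lambda tokens: b.fn(a.fn(tokens)),
    )

# Example: a scaling actor fused with a pairwise-sum actor.
scale = Actor("scale", consume=4, produce=4, fn=lambda xs: [2.0 * x for x in xs])
pairsum = Actor("pairsum", consume=4, produce=2,
                fn=lambda xs: [xs[0] + xs[1], xs[2] + xs[3]])
fused = fuse(scale, pairsum)
print(fused.name, fused.fn([1.0, 2.0, 3.0, 4.0]))    # scale+pairsum [6.0, 14.0]
```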

    Data-flow application control for systems-on-chip: a case study on the Magali platform

    Embedded applications demand ever more computing power for less energy consumption, which has led to the emergence of dedicated systems-on-chip. In the signal-processing domain, the data-flow model of computation is commonly used to program these systems-on-chip. An execution model suited to these architectures and meeting the application constraints is therefore needed. In this work, we propose a new execution model for the control of data-flow applications. Our approach builds on the links between application characteristics and the performance obtained under the associated execution model. This work is illustrated with a case study on the Magali platform.

    Cognitive Radio Programming: Existing Solutions and Open Issues

    Software-defined radio (SDR) technology has evolved rapidly and is now reaching market maturity, providing solutions for cognitive radio applications. Still, many issues have yet to be studied. In this paper, we highlight the constraints imposed by recent radio protocols, and we present current architectures and solutions for programming SDR. We also list the challenges to overcome in order to master future cognitive radio systems.

    Pharmacokinetic modelling and development of Bayesian estimators for therapeutic drug monitoring of mycophenolate mofetil in reduced-intensity haematopoietic stem cell transplantation.

    BACKGROUND: Mycophenolate mofetil, a prodrug of mycophenolic acid (MPA), is used during non-myeloablative and reduced-intensity conditioning haematopoietic stem cell transplantation (HCT) to improve engraftment and reduce graft-versus-host disease (GVHD). However, information about MPA pharmacokinetics is sparse in this context and its use is still empirical. OBJECTIVES: To perform a pilot pharmacokinetic study and to develop maximum a posteriori Bayesian estimators (MAP-BEs) for the estimation of MPA exposure in HCT. PATIENTS AND METHODS: Fourteen patients administered oral mycophenolate mofetil 15 mg/kg three times daily were included. Two consecutive 8-hour pharmacokinetic profiles were performed on the same day, 3 days before and 4 days after the HCT. One 8-hour pharmacokinetic profile was performed on day 27 after transplantation. For these 8-hour profiles, blood samples were collected predose and at 20, 40, 60 and 90 minutes and 2, 4, 6 and 8 hours post-dose. Using the iterative two-stage (ITS) method, two different one-compartment open pharmacokinetic models with first-order elimination were developed to describe the data: one with two gamma laws and one with three gamma laws to describe the absorption phase. For each pharmacokinetic profile, the Akaike information criterion (AIC) was calculated to evaluate model fit. On the basis of the population pharmacokinetic parameters, MAP-BEs were developed for the estimation of MPA pharmacokinetics and of the area under the plasma concentration-time curve (AUC) from 0 to 8 hours at the different study periods, using a limited-sampling strategy. These MAP-BEs were then validated using a data-splitting method. RESULTS: The ITS approach allowed the development of MAP-BEs based on either the 'double-gamma' or the 'triple-gamma' model, the combination of which allowed correct estimation of MPA pharmacokinetics and AUC on the basis of a 20-, 90- and 240-minute sampling schedule. The mean bias of the Bayesian versus reference (trapezoidal) AUCs was 20%. The AIC was systematically calculated to choose the model that best fitted the data. CONCLUSION: Pharmacokinetic models and MAP-BEs for mycophenolate mofetil administered to HCT patients have been developed. In the studied population, they allowed the estimation of MPA exposure from three blood samples, which could be helpful in conducting clinical trials for the optimization of MPA in reduced-intensity HCT. However, further studies will be needed to validate them in larger populations.
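    To make the estimation step concrete (a generic sketch with assumed notation, not the authors' exact model or error structure), a maximum a posteriori Bayesian estimator combines the few measured concentrations with the population pharmacokinetic priors, and the exposure index is then obtained from the fitted curve:

```latex
\[
  \hat{\theta}_{\mathrm{MAP}}
  \;=\;
  \arg\min_{\theta}\;
  \sum_{j} \frac{\bigl(y_j - f(t_j,\theta)\bigr)^{2}}{\sigma_j^{2}}
  \;+\;
  (\theta-\mu)^{\mathsf T}\, \Omega^{-1}\, (\theta-\mu),
  \qquad
  \mathrm{AUC}_{0\text{--}8\,\mathrm{h}}
  \;=\;
  \int_{0}^{8\,\mathrm{h}} f\bigl(t,\hat{\theta}_{\mathrm{MAP}}\bigr)\,\mathrm{d}t,
\]
where $y_j$ are the MPA concentrations measured at the limited-sampling times
$t_j$ (20, 90 and 240 minutes post-dose), $f(t,\theta)$ is the one-compartment
model with double- or triple-gamma absorption and first-order elimination,
$\sigma_j^{2}$ is the residual error variance, and $\mu$, $\Omega$ are the
population mean and covariance of the individual parameters $\theta$.
```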

    Large scale analysis of routine dose adjustments of mycophenolate mofetil based on global exposure in renal transplant patients.

    BACKGROUND: We report a feasibility study based on our large-scale experience with mycophenolate mofetil dose adjustment based on the mycophenolic acid interdose area under the curve (AUC) in renal transplant patients. METHODS: Between 2005 and 2010, 13,930 requests for 7090 different patients (outside any clinical trial) were posted by more than 30 different transplantation centers on a free, secure website for mycophenolate mofetil dose recommendations using three plasma concentrations and Bayesian estimation. RESULTS: This retrospective study showed that 1) according to the consensually recommended 30- to 60-mg·h/L target, dose adjustment was needed for approximately 35% of the patients, 25% being underexposed, with the highest proportion observed in the first weeks after transplantation; 2) when a dose adjustment had previously been proposed, the subsequent AUC was significantly more often in the recommended range when the proposed dose was applied than when it was not, at all post-transplantation periods (72-80% vs. 43-54%); and 3) the interindividual AUC variability in the 'respected-dose' group was systematically lower than in the 'not-respected-dose' group (coefficient of variation 31-41% vs. 49-70%, respectively, depending on the post-transplantation period). Further analysis suggested that the mycophenolic acid AUC is best monitored at least every 2 weeks during the first month and every 1 to 3 months between months 1 and 12, whereas in the stable phase the odds of still being within the 30- to 60-mg·h/L range at the following visit remained 75% up to 1 year after the previous dose adjustment. CONCLUSION: This study showed that monitoring mycophenolate mofetil on the basis of AUC measurements is a clinically feasible approach, apparently acceptable to patients, nurses and physicians given its wide use in routine clinical practice.
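    As a purely illustrative sketch of the kind of logic behind such a recommendation (a simple proportional-adjustment heuristic assumed here, not the Bayesian algorithm actually used by the website), an estimated AUC can be compared with the 30- to 60-mg·h/L target and the daily dose scaled accordingly:

```python
# Illustrative proportional dose-adjustment heuristic based on an AUC estimate.
# NOT the algorithm behind the dose-recommendation website described above;
# the target midpoint, dose cap and rounding rule are assumptions.
TARGET_LOW, TARGET_HIGH = 30.0, 60.0            # mg*h/L, consensus target range
TARGET_MID = 45.0                               # aim for the middle of the range

def recommend_dose(current_dose_mg: float, estimated_auc: float) -> float:
    """Return a recommended daily mycophenolate mofetil dose in mg."""
    if TARGET_LOW <= estimated_auc <= TARGET_HIGH:
        return current_dose_mg                  # exposure on target: keep dose
    proposed = current_dose_mg * TARGET_MID / estimated_auc
    proposed = min(proposed, 3000.0)            # cap at the usual 3 g/day maximum
    return round(proposed / 250.0) * 250.0      # round to 250 mg capsule steps

# Example: an underexposed patient (AUC 22 mg*h/L on 2000 mg/day).
print(recommend_dose(2000.0, 22.0))             # -> 3000.0 mg/day proposed
```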

    Analysis of a Multicarrier Communication System Based on Overcomplete Gabor Frames

    A multicarrier signal can be seen as a Gabor family whose coefficients are the symbols to be transmitted and whose generators are the time-frequency shifted pulse shapes to be used. In this article, we consider the case where the signaling density is increased such that inter-pulse interference is unavoidable. Such interference is minimized when the Gabor family used is a tight frame. We show that, in this case, the interference can be approximated as an additive Gaussian noise. This allows us to compute theoretical and simulated bit-error probabilities for a non-coded system using a quadrature phase-shift keying constellation. Such a characterization is then used to predict the convergence of a coded system using low-density parity-check codes. We also study the robustness of such a system to errors on the received bits in an interference cancellation context.
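    As a concrete consequence of the Gaussian approximation described above (a generic sketch whose normalizations are assumed rather than taken from the article), the residual inter-pulse interference can be lumped with the channel noise into an effective per-bit signal-to-interference-plus-noise ratio, which yields the familiar closed-form bit-error probability for Gray-mapped QPSK:

```latex
\[
  \gamma_b \;=\; \frac{E_b}{N_0 + I_0},
  \qquad
  P_b \;\approx\; Q\!\left(\sqrt{2\,\gamma_b}\right),
  \qquad
  Q(x) \;=\; \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-u^{2}/2}\,\mathrm{d}u,
\]
where $E_b$ is the received energy per bit, $N_0$ is the power spectral density
of the white Gaussian channel noise, and $I_0$ is the contribution of the
inter-pulse interference treated as an additional Gaussian noise term.
```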