
    ViVaMBC: estimating viral sequence variation in complex populations from illumina deep-sequencing data using model-based clustering

    Get PDF
    Background: Deep sequencing allows an in-depth characterization of sequence variation in complex populations. However, technology-associated errors may impede a powerful assessment of low-frequency mutations. Fortunately, base calls are complemented with quality scores, which for Illumina sequencing are derived from a quadruplet of intensities, one channel for each nucleotide type. The highest intensity of the four channels determines the base that is called. Mismatched bases can often be corrected by the second-best base, i.e., the base with the second-highest intensity in the quadruplet. A virus variant model-based clustering method, ViVaMBC, is presented that exploits quality scores and second-best base calls for identifying and quantifying viral variants. ViVaMBC is optimized to call variants at the codon level (nucleotide triplets), which enables immediate biological interpretation of the variants with respect to their antiviral drug responses. Results: Using mixtures of HCV plasmids, we show that our method accurately estimates frequencies down to 0.5%. The estimates are unbiased when average coverages of 25,000 are reached. A comparison with the SNP callers V-Phaser2, ShoRAH, and LoFreq shows that ViVaMBC has superb sensitivity and specificity for variants with frequencies above 0.4%. However, ViVaMBC reports a higher number of false-positive findings with frequencies below 0.4% than its competitors, which might partially originate from picking up artificial variants introduced by errors in the sample and library preparation steps. Conclusions: ViVaMBC is the first method to call viral variants directly at the codon level. The strength of the approach lies in modeling the error probabilities based on the quality scores. Although the use of second-best base calls appeared very promising in our data-exploration phase, their utility was limited. They provided a slight increase in sensitivity, which does not warrant the additional computational cost of running the offline base caller. Apparently, much of the information is already contained in the quality scores, enabling the model-based clustering procedure to correct the majority of the sequencing errors. Overall, the sensitivity of ViVaMBC is such that technical constraints like PCR errors start to form the bottleneck for low-frequency variant detection.
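    The error model rests on quality scores. The standard Phred relation between a quality score Q and the per-base error probability, which any such model starts from, can be sketched as follows; the codon-level helper is an illustrative simplification (independent per-base errors), not ViVaMBC's actual likelihood:

```python
def phred_to_error_prob(q):
    # Standard Phred scaling: p_err = 10 ** (-Q / 10).
    # A Q30 base has a 0.1% chance of being miscalled; Q20 -> 1%.
    return 10.0 ** (-q / 10.0)

def codon_correct_prob(quals):
    # Probability that all three bases of a codon are called correctly,
    # assuming independent per-base errors (an illustrative simplification).
    p = 1.0
    for q in quals:
        p *= 1.0 - phred_to_error_prob(q)
    return p

print(phred_to_error_prob(30))          # 0.001
print(codon_correct_prob([30, 30, 30]))
```

    This makes clear why codon-level calling is feasible at high coverage: even at Q30 per base, a codon is error-free with probability 0.999³ ≈ 0.997, and deviations from that expectation are what the clustering model exploits.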

    Phase Combination and its Application to the Solution of Macromolecular Structures: Developing ALIXE and SHREDDER

    Get PDF
    Phasing X-ray data within the frame of the ARCIMBOLDO programs requires very accurate models and a sophisticated evaluation of the possible hypotheses. ARCIMBOLDO uses small fragments that are placed with the maximum-likelihood molecular replacement program Phaser and are subject to density modification and autotracing with the program SHELXE. The software takes its name from the Italian painter Giuseppe Arcimboldo, who composed portraits out of common objects such as vegetables or flowers. Of all possible arrangements of such objects, most will result only in a still life, and just a few will truly produce a portrait. In a similar way, of all possible placements of small protein fragments, only a few will be correct and will allow the full “protein’s portrait” to be obtained. The work presented in this thesis has explored new ways to exploit partial information and increase the signal in the process of phasing with fragments. This has been achieved through two main pieces of software, ALIXE and SHREDDER. With the spherical mode in ARCIMBOLDO_SHREDDER, the aim is to derive compact fragments starting from a distant homolog of the unknown protein of interest. Locations for these fragments are then searched with Phaser, using strategies that include refining the fragments against the experimental data and giving them more degrees of freedom. With ALIXE, the aim is to combine information in reciprocal space from partial solutions, such as those produced by SHREDDER, and to use the coherence between them to guide their merging and increase the information content, so that the step of density modification and autotracing starts from a more complete solution. Even if partial solutions contain both correct and incorrect information, combining solutions that share some similarity allows a better approximation to the correct structure to be obtained.
    Both ARCIMBOLDO_SHREDDER and ALIXE have been used on test data for development and optimisation, but also on datasets from previously unknown structures, which have been solved thanks to these programs. The programs are distributed through the website of the group and through software suites of general use in the crystallographic community, such as CCP4 and SBGrid.
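    As an illustration of the kind of reciprocal-space comparison involved (a minimal sketch, not ALIXE's actual algorithm or weighting scheme), coherence between two partial solutions can be gauged by their mean phase difference per reflection, and agreeing phase sets merged by vector averaging on the unit circle:

```python
import cmath
import math

def mean_phase_difference(phases_a, phases_b):
    # Mean absolute phase difference (degrees) between two partial solutions;
    # a low value suggests the solutions are coherent and worth merging.
    diffs = []
    for a, b in zip(phases_a, phases_b):
        d = abs((a - b + 180.0) % 360.0 - 180.0)  # wrap difference to [0, 180]
        diffs.append(d)
    return sum(diffs) / len(diffs)

def combine_phases(phases_a, phases_b):
    # Merge two phase sets reflection by reflection by summing unit vectors
    # (an unweighted vector average; real schemes weight by figures of merit).
    combined = []
    for a, b in zip(phases_a, phases_b):
        v = cmath.exp(1j * math.radians(a)) + cmath.exp(1j * math.radians(b))
        combined.append(math.degrees(cmath.phase(v)) % 360.0)
    return combined

print(mean_phase_difference([10.0, 350.0], [20.0, 10.0]))  # 15.0
print(combine_phases([0.0], [90.0]))                       # [45.0]
```

    The wrap-around handling matters: phases of 350° and 10° are only 20° apart, and a naive arithmetic difference would report 340°.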

    Spectral print reproduction modeling and feasibility

    Get PDF

    Bibliography

    Get PDF

    Tempo Synchronised Effects Controlled by a Beat Tracking Digital Audio System

    Get PDF

    A review of differentiable digital signal processing for music and speech synthesis

    Get PDF
    The term “differentiable digital signal processing” describes a family of techniques in which loss function gradients are backpropagated through digital signal processors, facilitating their integration into neural networks. This article surveys the literature on differentiable audio signal processing, focusing on its use in music and speech synthesis. We catalogue applications to tasks including music performance rendering, sound matching, and voice transformation, discussing the motivations for and implications of the use of this methodology. This is accompanied by an overview of digital signal processing operations that have been implemented differentiably, which is further supported by a web book containing practical advice on differentiable synthesiser programming (https://intro2ddsp.github.io/). Finally, we highlight open challenges, including optimisation pathologies, robustness to real-world conditions, and design trade-offs, and discuss directions for future research.
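    The core idea, a signal processor whose output is differentiable with respect to its parameters so that an audio-domain loss can drive gradient descent, can be sketched with a single sinusoidal oscillator. The gradient is hand-derived here purely for illustration; DDSP work in practice relies on autodiff frameworks such as PyTorch, JAX, or TensorFlow:

```python
import numpy as np

# A sinusoidal oscillator with a learnable amplitude, fitted to a target
# signal by gradient descent on a mean-squared audio loss.
sr = 16000
t = np.arange(512) / sr
osc = np.sin(2 * np.pi * 440.0 * t)   # fixed 440 Hz carrier
target = 0.7 * osc                    # target rendered with amplitude 0.7

amp = 0.1   # learnable parameter
lr = 0.5
for _ in range(200):
    y = amp * osc
    err = y - target
    # Analytic gradient of L = mean(err**2) with respect to amp:
    grad = 2.0 * np.mean(err * osc)
    amp -= lr * grad

print(round(amp, 3))  # converges to 0.7
```

    The same recipe extends to frequencies, filter coefficients, and synthesiser controls, which is where the optimisation pathologies mentioned above (e.g. oscillatory loss surfaces over frequency) arise.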

    Real-time predictive control for SI engines using linear parameter-varying models

    Get PDF
    In response to ever more stringent emission standards, automotive engines have become more complex, with more actuators. The traditional approach of using many single-input single-output (SISO) controllers has become more difficult to design because of complex system interactions and constraints. Model predictive control (MPC) offers an attractive solution to this problem through its ability to handle multi-input multi-output systems with constraints on inputs and outputs. The application of model-based predictive control to automotive engines is explored here, and a multivariable engine torque and air-fuel ratio controller is described using a quasi-LPV model predictive control methodology. Compared with the traditional approach of using SISO controllers to control air-fuel ratio and torque separately, an advantage is that the interactions between the air and fuel paths are handled explicitly. Furthermore, the quasi-LPV model-based approach is capable of capturing the model nonlinearities within a tractable linear structure, and it has the potential to handle hard actuator constraints. The control design approach was applied to a 2010 Chevy Equinox with a 2.4L gasoline engine, and simulation results are presented. Since computational complexity has been the main limiting factor for fast real-time applications of MPC, we present various simplifications to reduce computational requirements. A benchmark comparison of estimated computational speed is included.
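    A minimal sketch of the receding-horizon idea on a toy discrete-time linear model, using an unconstrained condensed least-squares solve; the paper's quasi-LPV controller additionally re-linearises the model at each sample and enforces actuator constraints, which this illustration omits:

```python
import numpy as np

def mpc_step(A, B, x0, x_ref, N=10, r=0.01):
    # One receding-horizon step: stack N-step predictions X = F x0 + G U,
    # then minimise ||X - Xref||^2 + r ||U||^2 (ridge least squares).
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i * n:(i + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, i - j) @ B)
    Xref = np.tile(x_ref, N)
    H = G.T @ G + r * np.eye(N * m)
    U = np.linalg.solve(H, G.T @ (Xref - F @ x0))
    return U[:m]  # receding horizon: apply only the first input

# Toy two-state plant (illustrative numbers, not an engine model).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
x = np.array([1.0, 0.0])
for _ in range(50):
    u = mpc_step(A, B, x, x_ref=np.zeros(2))
    x = A @ x + B @ u
print(np.round(x, 3))
```

    The condensed formulation is also where the computational simplifications mentioned above bite: with input constraints, the same H and G define a QP whose size, and hence solve time, grows with the horizon N.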

    Sub-Nanosecond Time of Flight on Commercial Wi-Fi Cards

    Full text link
    Time-of-flight, i.e., the time a signal takes to travel from transmitter to receiver, is perhaps the most intuitive way to measure distances using wireless signals. It is used in major positioning systems such as GPS, RADAR, and SONAR. However, attempts at using time-of-flight for indoor localization have failed to deliver acceptable accuracy due to fundamental limitations in measuring time on Wi-Fi and other consumer RF technologies. While the research community has developed alternatives for RF-based indoor localization that do not require time-of-flight, those approaches have their own limitations that hamper their use in practice. In particular, many existing approaches need receivers with large antenna arrays, while commercial Wi-Fi nodes have two or three antennas. Other systems require fingerprinting the environment to create signal maps. More fundamentally, none of these methods supports indoor positioning between a pair of Wi-Fi devices without third-party support. In this paper, we present a set of algorithms that measure time-of-flight to sub-nanosecond accuracy on commercial Wi-Fi cards. We implement these algorithms and demonstrate a system that achieves accurate device-to-device localization, i.e., it enables a pair of Wi-Fi devices to locate each other without any support from the infrastructure, not even the location of the access points.
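    The arithmetic behind the sub-nanosecond requirement is simple: distance error scales with the speed of light, so each nanosecond of timing error corresponds to roughly 30 cm of ranging error. A sketch (function names are illustrative, not from the paper):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_resolution(timing_resolution_s):
    # Distance uncertainty induced by a given timing uncertainty.
    return C * timing_resolution_s

def rtt_to_distance(rtt_s, processing_delay_s=0.0):
    # Round-trip time to one-way distance, after subtracting any known
    # turnaround/processing delay at the responder.
    return C * (rtt_s - processing_delay_s) / 2.0

print(distance_resolution(1e-9))  # 1 ns of timing error ~ 0.3 m of distance error
```

    This is why room-level indoor accuracy demands sub-nanosecond timing, well below the resolution that standard Wi-Fi timestamping exposes.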