Numerical Modelling Of The V-J Combinations Of The T Cell Receptor TRA/TRD Locus
The T-cell antigen receptor (TR) repertoire is generated through rearrangements of the V and J genes encoding the α and β chains. The quantity and frequency of every V-J combination during ontogeny and development of the immune system remain to be precisely established. We addressed this issue by building a model that accounts for Vα-Jα gene rearrangements during thymus development in mice. To this end we developed a numerical model of the whole TRA/TRD locus, based on experimental data, to estimate how Vα and Jα genes become accessible to rearrangement. The progressive opening of the locus to V-J recombination is modelled through windows of accessibility of different sizes that progress at different speeds. The possibility of successive secondary V-J rearrangements is also included in the modelling. The model points out unbalanced V-J associations resulting from preferential access to gene rearrangement and from a non-uniform partition of J-gene accessibility, depending on location in the locus. It shows that 3 to 4 successive rearrangements are sufficient to explain the use of all the V and J genes of the locus. Finally, the model provides information on both the kinetics of rearrangement and the frequency of each V-J association. It accounts for the essential features of the observed rearrangements on the TRA/TRD locus and may provide a reference for the repertoire of V-J combinatorial diversity.
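The mechanism described above can be illustrated with a small Monte Carlo sketch. All parameters below (gene counts, window sizes, the deterministic window advance) are illustrative assumptions, not the published model's values; only the overall scheme (sliding accessibility windows plus up to 3-4 successive rearrangements that delete the intervening DNA) follows the abstract.

```python
import random
from collections import Counter

# Illustrative parameters, NOT the published ones.
N_V, N_J = 100, 61          # rough sizes of the Vα and Jα gene clusters
V_WINDOW, J_WINDOW = 10, 8  # assumed accessibility-window sizes
MAX_REARRANGEMENTS = 4      # 3-4 successive rearrangements suffice in the model

def simulate_cell(rng):
    """Return the list of (V, J) pairs used by one simulated thymocyte."""
    v_start, j_start = 0, 0  # windows open at the proximal ends of the locus
    used = []
    for _ in range(MAX_REARRANGEMENTS):
        v = rng.randrange(v_start, min(v_start + V_WINDOW, N_V))
        j = rng.randrange(j_start, min(j_start + J_WINDOW, N_J))
        used.append((v, j))
        # a rearrangement deletes the intervening DNA, so any later
        # (secondary) rearrangement can only use more distal genes
        v_start, j_start = v + 1, j + 1
        if v_start >= N_V or j_start >= N_J:
            break
    return used

rng = random.Random(0)
pairs = [p for _ in range(10_000) for p in simulate_cell(rng)]
freq = Counter(pairs)  # frequency of each V-J association in the repertoire
```

Even this toy version reproduces the qualitative point of the abstract: proximal genes are over-represented, and successive rearrangements are what give distal genes access at all.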
Behavioural Correlate of Choice Confidence in a Discrete Trial Paradigm
How animals make choices in a changing and often uncertain environment is a central theme in the behavioural sciences. There is a substantial literature on how animals make choices in various experimental paradigms, but less is known about the way they assess a choice after it has been made, in terms of the expected outcome. Here, we used a discrete trial paradigm to characterise how the reward history shaped behaviour on a trial-by-trial basis. Rats initiated each trial, which consisted of a choice between two drinking spouts that differed in their probability of delivering a sucrose solution. Critically, sucrose was delivered after a delay from the first lick at the spout; this allowed us to characterise the behavioural profile during the window between the time of choice and its outcome. Rats' behaviour converged to the optimum choice, both during the acquisition phase and after the reversal of contingencies. We monitored the post-choice behaviour at a temporal precision of 1 millisecond; lick-response profiles revealed that rats spent more time at the spout with the higher reward probability and exhibited a sparser lick pattern. This was the case even when we exclusively examined the unrewarded trials, where the outcome was identical. The differential licking profiles preceded the differential choice ratios and could thus predict the changes in choice behaviour.
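The two lick-profile measures the abstract reports (time spent at the spout and sparseness of the lick pattern) can be computed directly from lick timestamps. The sketch below uses synthetic data and our own function names; it is not the authors' analysis code, only a minimal illustration of the two statistics.

```python
import numpy as np

def lick_profile(lick_times):
    """Dwell time at the spout and mean inter-lick interval (seconds)."""
    lick_times = np.asarray(lick_times)
    dwell = lick_times[-1] - lick_times[0]  # time spent at the spout
    ili = np.diff(lick_times)               # inter-lick intervals
    return dwell, ili.mean()

# Synthetic trials (1 ms resolution): by hypothesis, the high-probability
# spout is licked for longer and with a sparser (slower) lick pattern.
high = np.arange(0.0, 3.0, 0.180)  # sparser licking over ~3 s
low = np.arange(0.0, 1.5, 0.140)   # denser licking over ~1.5 s
dwell_hi, mean_ili_hi = lick_profile(high)
dwell_lo, mean_ili_lo = lick_profile(low)
```

With real data these two numbers would be computed per trial and compared between spouts, restricted to unrewarded trials to control for outcome.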
A Search for MeV to TeV Neutrinos from Fast Radio Bursts with IceCube
We present two searches for IceCube neutrino events coincident with 28 fast radio bursts (FRBs) and 1 repeating FRB. The first improves on a previous IceCube analysis - searching for spatial and temporal correlation of events with FRBs at energies greater than roughly 50 GeV - by increasing the effective area by an order of magnitude. The second is a search for temporal correlation of MeV neutrino events with FRBs. No significant correlation is found in either search; therefore, we set upper limits on the time-integrated neutrino flux emitted by FRBs for a range of emission timescales less than one day. These are the first limits on FRB neutrino emission at the MeV scale, and the limits set at higher energies are an order-of-magnitude improvement over those set by any neutrino telescope.
Efficient propagation of systematic uncertainties from calibration to analysis with the SnowStorm method in IceCube
Efficient treatment of systematic uncertainties that depend on a large number of nuisance parameters is a persistent difficulty in particle physics and astrophysics experiments. Where low-level effects are not amenable to simple parameterization or re-weighting, analyses often rely on discrete simulation sets to quantify the effects of nuisance parameters on key analysis observables. Such methods may become computationally untenable for analyses requiring high-statistics Monte Carlo with a large number of nuisance degrees of freedom, especially in cases where these degrees of freedom parameterize the shape of a continuous distribution. In this paper we present a method for treating systematic uncertainties in a computationally efficient and comprehensive manner using a single simulation set with multiple and continuously varied nuisance parameters. This method is demonstrated for the case of the depth-dependent effective dust distribution within the IceCube Neutrino Telescope.
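The core idea of the single-simulation-set approach can be sketched in a few lines: instead of generating one discrete simulation set per nuisance-parameter value, each simulated event is generated with its own parameter value drawn from a wide distribution, and the dependence of an analysis observable on the parameter is recovered afterwards from that one mixed set. The toy detector response below is our own stand-in, not the IceCube simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 50_000

# Each event carries its own, continuously varied nuisance-parameter value.
theta = rng.uniform(-1.0, 1.0, n_events)

# Toy detector response (assumed): the observable shifts linearly with
# theta, on top of irreducible per-event noise.
observable = 5.0 + 0.8 * theta + rng.normal(0.0, 1.0, n_events)

# Recover d<observable>/dtheta from the single mixed set by regression,
# with no need for separate simulation sets at fixed theta values.
slope, intercept = np.polyfit(theta, observable, 1)
```

The fitted slope and intercept recover the injected response (0.8 and 5.0) to within statistical precision, which is the sense in which one continuously varied set can replace many discrete ones.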
Search for Sources of Astrophysical Neutrinos Using Seven Years of IceCube Cascade Events
Low-background searches for astrophysical neutrino sources anywhere in the sky can be performed using cascade events induced by neutrinos of all flavors interacting in IceCube with energies as low as ~1 TeV. Previously we showed that, even with just two years of data, the resulting sensitivity to sources in the southern sky is competitive with IceCube and ANTARES analyses using muon tracks induced by charged-current muon neutrino interactions - especially if the neutrino emission follows a soft energy spectrum or originates from an extended angular region. Here, we extend that work by adding five more years of data, significantly improving the cascade angular resolution, and including tests for point-like or diffuse Galactic emission, to which this data set is particularly well suited. For many of the signal candidates considered, this analysis is the most sensitive of any experiment to date. No significant clustering was observed, and thus many of the resulting constraints are the most stringent to date. In this paper we describe the improvements introduced in this analysis and discuss our results in the context of other recent work in neutrino astronomy.
The effect of standard dose multivitamin supplementation on disease progression in HIV-infected adults initiating HAART: a randomized double blind placebo-controlled trial in Uganda
Turbospeedz: Double Your Online SPDZ! Improving SPDZ using Function Dependent Preprocessing
Secure multiparty computation allows a set of mutually distrusting parties to securely compute a function of their private inputs, revealing only the output, even if some of the parties are corrupt. Recent years have seen an enormous amount of work that has drastically improved the concrete efficiency of secure multiparty computation protocols. Many secure multiparty protocols work in an "offline-online" model. In this model, the computation is split into two main phases: a relatively slow "offline" phase, which the parties execute before they know their inputs, and a fast "online" phase, which the parties execute after receiving their inputs.
One of the most popular and efficient protocols for secure multiparty computation working in this model is the SPDZ protocol (Damgaard et al., CRYPTO 2012). The SPDZ offline phase is function independent, i.e., it does not require knowledge of the computed function at the offline phase. Thus, a natural question is: can the efficiency of the SPDZ protocol be improved if the function is known at the offline phase?
In this work, we answer the above question affirmatively. We show that by using a function dependent preprocessing protocol, the online communication of the SPDZ protocol can be brought down significantly, almost by a factor of 2, and the online computation is often also significantly reduced. In scenarios where communication is the bottleneck, such as strong computers on low bandwidth networks, this could potentially almost double the online throughput of the SPDZ protocol, when securely computing the same circuit many times in parallel (on different inputs).
We present two versions of our protocol. Our first version uses the SPDZ offline phase protocol as a black box, and achieves the improved online communication at the cost of slightly increasing the offline communication. Our second version works by modifying the state-of-the-art SPDZ preprocessing protocol, Overdrive (Keller et al., Eurocrypt 2018). This version improves the overall communication over the state-of-the-art SPDZ when the function is known at the offline phase.
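For readers unfamiliar with the baseline being improved, the sketch below shows a toy two-party SPDZ-style online multiplication over a prime field using a Beaver triple (MACs and the actual function-dependent optimisation are omitted; all names are ours). The point to notice is that the standard online phase opens two masked values, d and e, per multiplication gate; correlating the preprocessed masks with the circuit wires is what lets a function-dependent offline phase cut that roughly in half.

```python
import random

P = 2**61 - 1  # prime modulus for the arithmetic secret sharing
rng = random.Random(0)

def share(x):
    """Additive secret shares of x for parties 0 and 1."""
    r = rng.randrange(P)
    return [r, (x - r) % P]

def reconstruct(sh):
    return sum(sh) % P

# Offline phase: a random Beaver triple a*b = c, secret-shared.
a, b = rng.randrange(P), rng.randrange(P)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)

def beaver_mul(x_sh, y_sh):
    # Online phase: open d = x - a and e = y - b (TWO openings per gate;
    # function-dependent preprocessing targets exactly this cost).
    d = reconstruct([(x_sh[i] - a_sh[i]) % P for i in range(2)])
    e = reconstruct([(y_sh[i] - b_sh[i]) % P for i in range(2)])
    # [xy] = [c] + d*[b] + e*[a] + d*e  (the public d*e added by party 0 only)
    return [(c_sh[i] + d * b_sh[i] + e * a_sh[i] + (d * e if i == 0 else 0)) % P
            for i in range(2)]

x_sh, y_sh = share(7), share(9)
z = reconstruct(beaver_mul(x_sh, y_sh))  # reconstructs 7 * 9 = 63
```

The identity behind the last step is xy = c + db + ea + de with d = x - a and e = y - b, which is why a fresh triple per gate suffices for the online phase.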
An evaluation of the discriminant and predictive validity of relative social disadvantage as screening criteria for priority access to public general dental care, in Australia
- …