
    T-Cell activation: a queuing theory analysis at low agonist density

    We analyze a simple linear triggering model of the T-cell receptor (TCR) within the framework of queuing theory, in which TCRs enter the queue upon full activation and exit by downregulation. We fit our model to four experimentally characterized threshold activation criteria and analyze their specificity and sensitivity: the initial calcium spike, cytotoxicity, immunological synapse formation, and cytokine secretion. Specificity improves as the time window for detection increases, saturating for periods on the timescale of downregulation; thus the calcium spike (30 s) has low specificity but is sensitive to single peptide-MHC ligands, while the cytokine threshold (1 h) can distinguish ligands whose complex lifetimes differ by 30%. However, a robustness analysis shows that these properties are degraded when the queue parameters are subject to variation, for example under stochasticity in the ligand number at the cell-cell interface and population variation in the cellular threshold. Integrating the queue signal over sufficiently long periods (hours) is shown to control this parameter noise efficiently for realistic parameter values, with the discrimination characteristics then determined by the kinetics of the TCR signaling cascade (a kinetic proofreading scheme). Therefore, through a combination of thresholds and signal integration, a T cell can be responsive to low ligand density and specific to agonist quality. We suggest that multiple threshold mechanisms are employed to establish the conditions for efficient signal integration, i.e., to coordinate the formation of a stable contact interface.
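    As a rough illustration of the kind of model this abstract describes (not the authors' code), the sketch below simulates a queue in which triggered TCRs arrive at a rate set by the ligand number and a kinetic-proofreading factor of the complex lifetime, leave by downregulation, and elicit a response if the queue crosses a threshold within a detection window. All rates, the proofreading exponent, the ligand numbers, and the thresholds are assumptions chosen for illustration.

```python
# Illustrative sketch only (not the paper's model code): triggered TCRs arrive at a
# ligand- and lifetime-dependent rate, leave by downregulation, and a response
# fires if the queue crosses a threshold within the detection window.
import random

def simulate_queue(ligands, lifetime, t_window, threshold,
                   k_trigger=0.05, k_proof=1.0, n_proof=4, k_down=1 / 600.0,
                   seed=0):
    """Gillespie simulation of the triggered-TCR queue over one detection window."""
    rng = random.Random(seed)
    # kinetic proofreading: chance a bound complex survives n_proof steps
    p_pass = (lifetime * k_proof / (1.0 + lifetime * k_proof)) ** n_proof
    arrival_rate = k_trigger * ligands * p_pass      # fully triggered TCRs per second
    t, queue = 0.0, 0
    while True:
        total_rate = arrival_rate + k_down * queue
        t += rng.expovariate(total_rate)
        if t > t_window:
            return False                             # window closed, no response
        if rng.random() < arrival_rate / total_rate:
            queue += 1                               # a triggered TCR joins the queue
        else:
            queue -= 1                               # downregulation removes one
        if queue >= threshold:
            return True                              # threshold criterion met

# short window (calcium-spike-like) versus long window (cytokine-like), for an
# agonist and a ligand whose complex lifetime is 30% shorter
for label, window, threshold in [("30 s window, threshold 3", 30.0, 3),
                                 ("1 h window, threshold 130", 3600.0, 130)]:
    responses = {lifetime: sum(simulate_queue(10, lifetime, window, threshold, seed=s)
                               for s in range(200))
                 for lifetime in (5.0, 3.5)}
    print(label, "->", responses, "responses out of 200 runs")
```

    With these made-up numbers the short window typically fires for both ligands while the long window separates them, which is the qualitative trade-off between sensitivity and specificity the abstract describes.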

    Variation in habitat choice and delayed reproduction: Adaptive queuing strategies or individual quality differences?

    In most species, some individuals delay reproduction or occupy inferior breeding positions. The queue hypothesis tries to explain both patterns by proposing that individuals strategically delay breeding (queue) to acquire better breeding or social positions. In 1995, Ens, Weissing, and Drent addressed evolutionarily stable queuing strategies in situations with habitat heterogeneity. However, their model did not consider the non-mutually exclusive individual quality hypothesis, which suggests that some individuals delay breeding or occupy inferior breeding positions because they are poor competitors. Here we extend their model with individual differences in competitive abilities, which are probably plentiful in nature. We show that including even the smallest competitive asymmetries will result in individuals using queuing strategies completely different from those in models that assume equal competitors. Subsequently, we investigate how well our models can explain settlement patterns in the wild, using a long-term study on oystercatchers. This long-lived shorebird exhibits strong variation in age of first reproduction and territory quality. We show that only models that include competitive asymmetries can explain why oystercatchers' settlement patterns depend on natal origin. We conclude that predictions from queuing models are very sensitive to assumptions about competitive asymmetries, while detecting such differences in the wild is often problematic.
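    A toy simulation (ours, not the authors' model) of the contrast the abstract draws: non-breeders wait for a fixed number of high-quality vacancies each year, and settlement is decided either by queue position (equal competitors) or by an individual's competitive quality (asymmetries). The numbers of vacancies and recruits and the quality distribution are arbitrary.

```python
# Toy contrast only: strict queuing versus settlement driven by competitive quality.
import random

def simulate(years=200, recruits=20, high_vacancies=2, low_vacancies=10,
             asymmetric=True, seed=1):
    rng = random.Random(seed)
    queue = []                 # (arrival_year, competitive_quality) of waiting birds
    waits = []                 # years waited before settling in high-quality habitat
    for year in range(years):
        queue += [(year, rng.random()) for _ in range(recruits)]
        # with asymmetries the best competitors win; otherwise strict queue order
        rank = (lambda c: -c[1]) if asymmetric else (lambda c: c[0])
        queue.sort(key=rank)
        winners, queue = queue[:high_vacancies], queue[high_vacancies:]
        waits += [year - arrival for arrival, _ in winners]
        # birds at the back of the queue give up and settle in low-quality habitat
        queue = queue[:max(0, len(queue) - low_vacancies)]
    return sum(waits) / len(waits)

print("mean wait for a high-quality territory, equal competitors:",
      round(simulate(asymmetric=False), 1), "years")
print("mean wait for a high-quality territory, asymmetries:     ",
      round(simulate(asymmetric=True), 1), "years")
```

    Under strict queuing the wait grows with queue length, whereas with even modest asymmetries the best new recruits settle almost immediately; this is the sensitivity of predicted settlement patterns to competitive differences that the abstract reports.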

    Bayesian inference for queueing networks and modeling of internet services

    Modern Internet services, such as those at Google, Yahoo!, and Amazon, handle billions of requests per day on clusters of thousands of computers. Because these services operate under strict performance requirements, a statistical understanding of their performance is of great practical interest. Such services are modeled by networks of queues, where each queue models one of the computers in the system. A key challenge is that the data are incomplete, because recording detailed information about every request to a heavily used system can require unacceptable overhead. In this paper we develop a Bayesian perspective on queueing models in which the arrival and departure times that are not observed are treated as latent variables. Underlying this viewpoint is the observation that a queueing model defines a deterministic transformation between the data and a set of independent variables called the service times. With this viewpoint in hand, we sample from the posterior distribution over missing data and model parameters using Markov chain Monte Carlo. We evaluate our framework on data from a benchmark Web application. We also present a simple technique for selection among nested queueing models. We are unaware of any previous work that considers inference in networks of queues in the presence of missing data. Comment: Published at http://dx.doi.org/10.1214/10-AOAS392 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
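    The following is a minimal, self-contained sketch (ours, not the paper's code) of the core idea for a single M/M/1 queue: the queueing discipline deterministically maps arrival and departure times to service times, unobserved departures are treated as latent variables, and Markov chain Monte Carlo alternates a conjugate update of the service rate with Metropolis moves on the missing departures. The arrival rate, priors, and proposal scale are illustrative assumptions.

```python
# Minimal sketch, not the paper's implementation: Bayesian inference for a single
# M/M/1 FIFO queue in which roughly half of the departure times are unobserved.
import math
import random

rng = random.Random(0)

# synthetic ground truth: Poisson arrivals, exponential services, FIFO discipline
true_mu, lam, n = 2.0, 1.0, 200
arrivals, departures, t = [], [], 0.0
for _ in range(n):
    t += rng.expovariate(lam)
    arrivals.append(t)
for i, a in enumerate(arrivals):
    start = max(a, departures[i - 1] if i else 0.0)
    departures.append(start + rng.expovariate(true_mu))
observed = [rng.random() < 0.5 for _ in range(n)]     # which departures we get to see

def service_times(dep):
    """The deterministic transformation: service = departure - max(arrival, prev dep)."""
    return [d - max(arrivals[i], dep[i - 1] if i else 0.0) for i, d in enumerate(dep)]

# latent state: any FIFO-feasible configuration works as a start (here, the truth)
dep, mu, trace = list(departures), 1.0, []
for sweep in range(3000):
    s = service_times(dep)
    # conjugate Gibbs step for mu with a Gamma(1, 1) prior on the service rate
    mu = rng.gammavariate(1.0 + n, 1.0) / (1.0 + sum(s))
    # random-walk Metropolis step for every unobserved departure time
    for i in range(n):
        if observed[i]:
            continue
        lo = max(arrivals[i], dep[i - 1] if i else 0.0)
        hi = dep[i + 1] if i + 1 < n else float("inf")
        prop = dep[i] + rng.gauss(0.0, 0.3)
        if not lo < prop < hi:
            continue                                  # violates FIFO ordering
        # only service times i and i+1 depend on dep[i]
        delta = -mu * (prop - dep[i])
        if i + 1 < n:
            delta -= mu * (max(arrivals[i + 1], dep[i]) - max(arrivals[i + 1], prop))
        if math.log(rng.random()) < delta:
            dep[i] = prop
    if sweep >= 1500:
        trace.append(mu)

print("posterior mean of service rate mu:", round(sum(trace) / len(trace), 3),
      "(true value", true_mu, ")")
```

    The paper itself handles networks of queues and model selection; this sketch keeps only the single-queue core of the missing-data idea.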

    Tools for modelling and simulating migration-based preservation

    This report describes two tools for modelling and simulating the costs and risks of using IT storage systems for the long-term archiving of file-based AV assets. The tools include a model of storage costs, the ingest and access of files, the possibility of data corruption and loss from a range of mechanisms, and the impact of having limited resources with which to fulfill access requests and preservation actions. Applications include archive planning, development of a technology strategy, cost estimation for business planning, operational decision support, staff training, and generally promoting awareness of the issues and challenges archives face in digital preservation.
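    A deliberately small illustration (not the report's tools) of the kind of cost-and-risk model described above: yearly storage cost with declining media prices, periodic migration to new storage, and file corruption that is only partly repairable with limited resources. Every parameter value below is invented for the example.

```python
# Toy preservation cost/risk model only: all prices, rates, and capacities are invented.
import random

def simulate_archive(years=30, tb=500.0, price_per_tb=30.0, price_decline=0.15,
                     migration_interval=5, migration_cost_per_tb=5.0,
                     files=20_000, annual_corruption_prob=0.005,
                     repairs_per_year=100, seed=0):
    """Return (total cost, files lost) for one simulated preservation scenario."""
    rng = random.Random(seed)
    total_cost, lost_files = 0.0, 0
    for year in range(years):
        total_cost += tb * price_per_tb                  # storage for this year
        price_per_tb *= 1.0 - price_decline              # media get cheaper over time
        if year and year % migration_interval == 0:
            total_cost += tb * migration_cost_per_tb     # migrate to new media/format
        # corruption events beyond the yearly repair capacity become losses
        corrupted = sum(rng.random() < annual_corruption_prob for _ in range(files))
        lost_files += max(0, corrupted - repairs_per_year)
    return total_cost, lost_files

for repairs in (50, 100, 200):
    cost, lost = simulate_archive(repairs_per_year=repairs)
    print(f"repair capacity {repairs}/year: cost ~ {cost:,.0f}, files lost: {lost}")
```

    Sweeping the repair capacity is a stand-in for the report's point about limited resources for access requests and preservation actions.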

    On the Impact of Wireless Jamming on the Distributed Secondary Microgrid Control

    The secondary control in direct current microgrids (MGs) is used to restore the voltage deviations caused by the primary droop control, where the latter is implemented locally in each distributed generator and reacts to load variations. Numerous recent works propose to implement the secondary control in a distributed fashion, relying on a communication system to achieve consensus among MG units. This paper shows that, if the system is not designed to cope with adversarial communication impairments, a malicious attacker can compromise the secondary MG control simply by jamming the communication of a few MG units. Compared to other denial-of-service attacks that target the tertiary control, such as economic dispatch, the attack on the secondary control presented here can be more severe, as it disrupts the basic functionality of the MG.
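    A minimal sketch of the vulnerability the abstract points at, under a simplified consensus-based formulation that we assume here: four distributed generators run average consensus over a ring communication graph to agree on the secondary voltage correction, and jamming one unit removes its links, so the surviving units converge to a correction computed from incomplete information. The topology, gain, and deviation values are illustrative, not taken from the paper.

```python
# Illustrative only: consensus-based secondary control with one unit jammed.

def consensus(values, edges, steps=300, eps=0.2):
    """Average-consensus iterations: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    x = list(values)
    for _ in range(steps):
        nxt = list(x)
        for i, j in edges:
            nxt[i] += eps * (x[j] - x[i])
            nxt[j] += eps * (x[i] - x[j])
        x = nxt
    return x

# local voltage deviations (p.u.) measured by four DG units on a ring network
deviations = [-0.04, -0.01, 0.03, -0.06]
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]

normal = consensus(deviations, ring)
jammed = consensus(deviations, [e for e in ring if 3 not in e])  # unit 3 jammed

print("correction each unit should apply:", round(sum(deviations) / 4, 4))
print("consensus without attack:", [round(v, 4) for v in normal])
print("consensus with unit 3 jammed:", [round(v, 4) for v in jammed])
```

    With the links of unit 3 removed, units 0-2 agree only on the average of their own deviations, so the restored voltage is wrong even for units that are not attacked.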

    Towards Robust Deep Reinforcement Learning for Traffic Signal Control: Demand Surges, Incidents and Sensor Failures

    Reinforcement learning (RL) constitutes a promising solution for alleviating the problem of traffic congestion. In particular, deep RL algorithms have been shown to produce adaptive traffic signal controllers that outperform conventional systems. However, in order to be reliable in highly dynamic urban areas, such controllers need to be robust with respect to a series of exogenous sources of uncertainty. In this paper, we develop an open-source callback-based framework for promoting the flexible evaluation of different deep RL configurations under a traffic simulation environment. With this framework, we investigate how deep RL-based adaptive traffic controllers perform under different scenarios, namely demand surges caused by special events, capacity reductions from incidents, and sensor failures. We extract several key insights for the development of robust deep RL algorithms for traffic control and propose concrete designs to mitigate the impact of the considered exogenous uncertainties. Comment: 8 pages
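    The abstract does not spell out the framework's API, so the sketch below only illustrates the general shape of a callback-based evaluation loop: scenario callbacks hook into each simulation step to inject a demand surge or blank out failed detectors, with a toy environment and a longest-queue controller standing in for the traffic simulator and the trained deep RL agent. All class and parameter names are invented.

```python
# Invented names throughout; this is not the framework's actual API.
import random

class ScenarioCallback:
    """Hook points the evaluation loop calls to perturb the simulation."""
    def on_step(self, env, step): ...
    def on_observation(self, obs): return obs

class DemandSurge(ScenarioCallback):
    """Scale arrivals during a special-event window."""
    def __init__(self, start, end, multiplier):
        self.start, self.end, self.multiplier = start, end, multiplier
    def on_step(self, env, step):
        env.demand_scale = self.multiplier if self.start <= step < self.end else 1.0

class SensorFailure(ScenarioCallback):
    """Zero out readings from failed detectors."""
    def __init__(self, failed):
        self.failed = set(failed)
    def on_observation(self, obs):
        return [0.0 if i in self.failed else v for i, v in enumerate(obs)]

class ToyIntersectionEnv:
    """Tiny stand-in for the traffic simulator: queue lengths on two approaches."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
    def reset(self):
        self.queues, self.demand_scale = [0.0, 0.0], 1.0
        return list(self.queues)
    def step(self, green):
        for i in range(2):
            self.queues[i] += self.rng.random() * self.demand_scale   # arrivals
        self.queues[green] = max(0.0, self.queues[green] - 1.5)       # departures
        return list(self.queues), sum(self.queues)                    # obs, delay

class LongestQueueController:
    """Stand-in for the trained deep RL agent: serve the longer observed queue."""
    def act(self, obs):
        return 0 if obs[0] >= obs[1] else 1

def evaluate(env, controller, callbacks, steps=1000):
    """Run one episode, letting every callback perturb the env and the observation."""
    obs, total_delay = env.reset(), 0.0
    for step in range(steps):
        for cb in callbacks:
            cb.on_step(env, step)
            obs = cb.on_observation(obs)
        obs, delay = env.step(controller.act(obs))
        total_delay += delay
    return total_delay

nominal = evaluate(ToyIntersectionEnv(), LongestQueueController(), [])
stressed = evaluate(ToyIntersectionEnv(), LongestQueueController(),
                    [DemandSurge(200, 600, 2.0), SensorFailure([1])])
print("total delay, nominal scenario: ", round(nominal, 1))
print("total delay, stressed scenario:", round(stressed, 1))
```

    Swapping a real simulator and trained agent in behind the same two hook points is the sort of flexible scenario evaluation the abstract attributes to its framework.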