
    Optimization of the leak conductance in the squid giant axon

    We report on a theoretical study showing that the leak conductance density, G_L, in the squid giant axon appears to be optimal for the action potential firing frequency. More precisely, under the standard assumption that the leak current is carried by chloride ions, the experimental value of G_L is very close to the value in the Hodgkin-Huxley model that minimizes the absolute refractory period of the action potential, thereby maximizing the maximum firing frequency under stimulation by sharp, brief input current spikes delivered to one end of the axon. The measured value of G_L also appears to be close to optimal for the frequency of repetitive firing driven by a constant current input to one end of the axon, especially when temperature variations are taken into account. If, by contrast, the leak current is assumed to be composed of separate voltage-independent sodium and potassium currents, these optima are not observed. Comment: 9 pages; 9 figures; accepted for publication in Physical Review
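    The kind of optimization described above can be explored numerically. Below is a minimal sketch, not the authors' code: it uses a single-compartment Hodgkin-Huxley point-neuron simplification (rather than the spatially extended axon stimulated at one end used in the study), with standard textbook squid-axon parameters in the modern voltage convention and an assumed stimulus amplitude, sweeps g_L, and records the repetitive-firing rate under a constant current step.

```python
# Minimal sketch (not the authors' code): sweep the leak conductance g_L in a
# single-compartment Hodgkin-Huxley model and record the repetitive-firing
# frequency under a constant current step. Parameter values are the standard
# textbook squid-axon set (modern voltage convention), not taken from the paper.
import numpy as np

C = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K = 120.0, 36.0               # peak conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials, mV

def alpha_beta(V):
    """Standard HH rate functions (ms^-1) as functions of membrane voltage (mV)."""
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

def firing_frequency(g_L, I_ext=10.0, T=500.0, dt=0.01):
    """Integrate the HH equations (forward Euler) and count spikes over T ms."""
    V, n, m, h = -65.0, 0.32, 0.05, 0.6
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        an, bn, am, bm, ah, bh = alpha_beta(V)
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        n += dt * (an * (1 - n) - bn * n)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        if V > 0.0 and not above:      # upward threshold crossing = one spike
            spikes, above = spikes + 1, True
        elif V < -20.0:
            above = False
    return spikes / (T / 1000.0)       # spikes per second

for g_L in (0.1, 0.2, 0.3, 0.4, 0.6):  # mS/cm^2; 0.3 is the measured value
    print(f"g_L = {g_L:.1f} mS/cm^2 -> {firing_frequency(g_L):.1f} Hz")
```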

    CT Automated Exposure Control Using A Generalized Detectability Index

    Purpose: Identifying an appropriate tube current setting can be challenging when using iterative reconstruction due to the varying relationship between spatial resolution, contrast, noise, and dose across different algorithms. This study developed and investigated the application of a generalized detectability index (d′gen) to determine the noise parameter to input to existing automated exposure control (AEC) systems so as to provide consistent image quality (IQ) across different reconstruction approaches. Methods: This study proposes a task-based AEC method using a generalized detectability index (d′gen). The proposed method leverages existing AEC methods that are based on a prescribed noise level. The generalized d′gen metric is calculated using lookup tables of task-based modulation transfer function (MTF) and noise power spectrum (NPS). To generate the lookup tables, the American College of Radiology (ACR) CT accreditation phantom was scanned on a multidetector CT scanner (Revolution CT, GE Healthcare) at 120 kV with the tube current varied manually from 20 to 240 mAs. Images were reconstructed using a reference reconstruction algorithm and four levels of an in-house iterative reconstruction algorithm with different regularization strengths (IR1-IR4). The task-based MTF and NPS were estimated from the measured images to create lookup tables of scaling factors that convert between d′gen and noise standard deviation. The performance of the proposed d′gen-AEC method in providing a desired IQ level over a range of iterative reconstruction algorithms was evaluated using the ACR phantom with an elliptical shell and a human reader evaluation of anthropomorphic phantom images. Results: The study of the ACR phantom with the elliptical shell demonstrated reasonable agreement between the d′gen predicted by the lookup table and the d′ measured in the images, with a mean absolute error of 15% across all dose levels and a maximum error of 45% at the lowest dose level with the elliptical shell. For the anthropomorphic phantom study, the mean reader scores for images resulting from the d′gen-AEC method were 3.3 (reference image), 3.5 (IR1), 3.6 (IR2), 3.5 (IR3), and 2.2 (IR4). When using the d′gen-AEC method, the observers' IQ scores for the reference reconstruction were statistically equivalent to the scores for the IR1, IR2, and IR3 iterative reconstructions (P > 0.35). The d′gen-AEC method achieved this equivalent IQ at lower dose for the IR scans compared to the reference scans. Conclusions: A novel AEC method, based on a generalized detectability index, was investigated. The proposed method can be used with some existing AEC systems to derive the tube current profile for iterative reconstruction algorithms. The results provide preliminary evidence that the proposed d′gen-AEC can produce similar IQ across different iterative reconstruction approaches at different dose levels.
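    To show the shape of such a calculation, here is a minimal sketch assuming the standard non-prewhitening model-observer form of the detectability index computed from a radially averaged task-based MTF and NPS; the task weighting, frequency grid, and all numerical values are illustrative and are not taken from the study. It also illustrates the idea behind the lookup tables above: a scaling factor that converts between d′ and noise standard deviation.

```python
# Minimal sketch (assumptions, not the paper's code): compute a non-prewhitening
# model-observer detectability index d' from a radially averaged task-based MTF
# and NPS, then derive a scale factor linking d' to the noise standard deviation.
# The task function, MTF/NPS shapes, and numbers below are purely illustrative.
import numpy as np

def dprime_npw(freq, task_w, mtf, nps):
    """d'^2 = [int W^2 MTF^2 2*pi*f df]^2 / int W^2 MTF^2 NPS 2*pi*f df (radial form)."""
    signal = np.trapz(task_w**2 * mtf**2 * 2 * np.pi * freq, freq)
    noise = np.trapz(task_w**2 * mtf**2 * nps * 2 * np.pi * freq, freq)
    return np.sqrt(signal**2 / noise)

# Illustrative inputs: a low-frequency task, a Gaussian-like MTF, and a
# ramp-shaped NPS normalized so its integral equals the image variance sigma^2.
freq = np.linspace(0.01, 1.0, 200)          # cycles/mm
task_w = 10.0 * np.exp(-(freq / 0.2)**2)    # task weighting (arbitrary units)
mtf = np.exp(-(freq / 0.5)**2)
nps_shape = freq * np.exp(-(freq / 0.6)**2)
sigma = 12.0                                 # HU, measured noise standard deviation
nps = nps_shape * sigma**2 / np.trapz(nps_shape * 2 * np.pi * freq, freq)

d_meas = dprime_npw(freq, task_w, mtf, nps)
# For fixed MTF and NPS shapes, d' scales as 1/sigma, so d' * sigma is a constant
# that can be tabulated and used to convert a prescribed d' into a noise target.
k = d_meas * sigma
sigma_target = k / 5.0   # noise standard deviation that would yield d' = 5
print(f"d' = {d_meas:.2f}, noise std for d' = 5: {sigma_target:.1f} HU")
```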

    On the gravitational production of superheavy dark matter

    The dark matter in the universe can be in the form of a superheavy matter species (WIMPZILLA). Several mechanisms have been proposed for the production of WIMPZILLA particles during or immediately following the inflationary epoch. Perhaps the most attractive mechanism is gravitational particle production, where particles are produced simply as a result of the expansion of the universe. In this paper we present a detailed numerical calculation of WIMPZILLA gravitational production in hybrid-inflation and natural-inflation models. Generalizing these findings, we also explore the dependence of the gravitational production mechanism on various models of inflation. We show that superheavy dark matter production seems to be robust, with Omega_X h^2 ~ (M_X / (10^11 GeV))^2 (T_RH / (10^9 GeV)), so long as M_X < H_I, where M_X is the WIMPZILLA mass, T_RH is the reheat temperature, and H_I is the expansion rate of the universe during inflation. Comment: 26 pages, 7 figures; LaTeX; submitted to Physical Review D; minor typographical error corrected
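    The quoted scaling is easy to evaluate directly. A minimal sketch, treating the relation as an order-of-magnitude estimate valid for M_X < H_I and ignoring any O(1) prefactor, with illustrative parameter values:

```python
# Minimal sketch: evaluate the quoted scaling
#   Omega_X h^2 ~ (M_X / 10^11 GeV)^2 * (T_RH / 10^9 GeV)
# to see which (mass, reheat temperature) pairs give a relic density near the
# observed dark-matter value Omega h^2 ~ 0.1. Order-of-magnitude only.
def omega_x_h2(m_x_gev, t_rh_gev):
    """Approximate relic density of gravitationally produced superheavy dark matter."""
    return (m_x_gev / 1e11) ** 2 * (t_rh_gev / 1e9)

for m_x in (1e10, 1e11, 1e12, 1e13):
    for t_rh in (1e7, 1e9):
        print(f"M_X = {m_x:.0e} GeV, T_RH = {t_rh:.0e} GeV "
              f"-> Omega_X h^2 ~ {omega_x_h2(m_x, t_rh):.1e}")
```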

    Current cosmological bounds on neutrino masses and relativistic relics

    We combine the most recent observations of large-scale structure (the 2dF and SDSS galaxy surveys) and cosmic microwave background anisotropies (WMAP and ACBAR) to put constraints on flat cosmological models where the number of massive neutrinos and of massless relativistic relics are both left arbitrary. We discuss the impact of each dataset and of various priors on our bounds. For the standard case of three thermalized neutrinos, we find an upper bound on the total neutrino mass of sum m_nu < 1.0 (resp. 0.6) eV at 2 sigma, using only CMB and LSS data (resp. including priors from supernovae data and the HST Key Project), a bound that is quite insensitive to the splitting of the total mass between the three species. When the total number of neutrinos or relativistic relics N_eff is left free, the upper bound on sum m_nu (at 2 sigma, including all priors) ranges from 1.0 to 1.5 eV depending on the mass splitting. We provide an explanation of the parameter degeneracy that allows larger values of the masses when N_eff increases. Finally, we show that the limit on the total neutrino mass is not significantly modified in the presence of primordial gravitational waves, because current data provide a clear distinction between the corresponding effects. Comment: 13 pages, 6 figures
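    To connect these mass bounds to the quantity the data directly constrain, one can use the standard relation between the summed mass of thermalized neutrinos and their energy-density fraction, Omega_nu h^2 = sum m_nu / 93.14 eV. A minimal sketch; the conversion factor is the standard thermal-relic value, not a number from the paper:

```python
# Minimal sketch: convert the quoted bounds on the summed neutrino mass into the
# neutrino energy-density fraction via Omega_nu h^2 = sum(m_nu) / 93.14 eV,
# the standard relation for three fully thermalized neutrino species.
def omega_nu_h2(sum_m_nu_ev):
    return sum_m_nu_ev / 93.14

for bound_ev, label in [(1.0, "CMB + LSS only"), (0.6, "with SNIa and HST priors")]:
    print(f"sum m_nu < {bound_ev:.1f} eV ({label}) "
          f"-> Omega_nu h^2 < {omega_nu_h2(bound_ev):.4f}")
```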

    Probing neutrino masses with future galaxy redshift surveys

    We perform a new study of the future sensitivity of galaxy redshift surveys to the free-streaming effect caused by neutrino masses, adding the information on cosmological parameters from measurements of primary anisotropies of the cosmic microwave background (CMB). Our reference cosmological scenario has nine parameters and three different neutrino masses, with a hierarchy imposed by oscillation experiments. Within the present decade, the combination of the Sloan Digital Sky Survey (SDSS) and CMB data from the PLANCK experiment will have a 2-sigma detection threshold on the total neutrino mass close to 0.2 eV. This estimate is robust against the inclusion of extra free parameters in the reference cosmological model. On a longer term, the next generation of experiments may reach values of order sum m_nu = 0.1 eV at 2 sigma, or better if a galaxy redshift survey significantly larger than SDSS is completed. We also discuss how the small changes in the free-streaming scales in the normal and inverted hierarchy schemes translate into the expected errors from future cosmological data. Comment: 14 pages, 7 figures. Added results with the KAOS proposal and 1 reference
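    The free-streaming signature these surveys target is, to first approximation, a suppression of the small-scale matter power spectrum of roughly Delta P / P ≈ -8 f_nu, where f_nu = Omega_nu / Omega_m. A minimal sketch of this standard rule of thumb, with an illustrative matter density rather than the paper's parameter values:

```python
# Minimal sketch of the free-streaming effect: the neutrino fraction
# f_nu = Omega_nu / Omega_m and the approximate linear-theory suppression of
# small-scale matter power, Delta P / P ≈ -8 f_nu. Illustrative values only.
def power_suppression(sum_m_nu_ev, omega_m_h2=0.14):
    omega_nu_h2 = sum_m_nu_ev / 93.14      # thermal-relic relation
    f_nu = omega_nu_h2 / omega_m_h2
    return f_nu, -8.0 * f_nu

for m in (0.1, 0.2, 0.3):                  # eV, near the detection thresholds quoted above
    f_nu, dpp = power_suppression(m)
    print(f"sum m_nu = {m:.1f} eV -> f_nu = {f_nu:.3f}, Delta P/P ≈ {dpp:.1%}")
```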

    New constraint on the cosmological background of relativistic particles

    We have derived new bounds on the relativistic energy density in the Universe from cosmic microwave background (CMB), large-scale structure (LSS), and type Ia supernova (SNI-a) observations. In terms of the effective number of neutrino species, a bound of N_\nu = 4.2^{+1.2}_{-1.7} is derived at 95% confidence. This bound is significantly stronger than previous determinations, mainly due to the inclusion of new CMB and SNI-a observations. The absence of a cosmological neutrino background (N_\nu = 0) is now excluded at 5.4 \sigma. The value of N_\nu is compatible with the value derived from big bang nucleosynthesis considerations, marking one of the most remarkable successes of the standard cosmological model. In terms of the cosmological helium abundance, the CMB, LSS, and SNI-a observations predict a value of 0.240 < Y < 0.281. Comment: 10 pages, 3 figures, references added
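    For context, N_\nu enters these fits through the total radiation energy density. A minimal worked example of the standard relation rho_rad = rho_gamma [1 + (7/8)(4/11)^{4/3} N_\nu] (a textbook formula, not code from the paper), evaluated at the values quoted above:

```python
# Minimal sketch: how the effective number of neutrino species N_nu sets the
# radiation energy density constrained by the CMB/LSS data:
#   rho_rad = rho_gamma * [1 + (7/8) * (4/11)^(4/3) * N_nu]
def radiation_to_photon_ratio(n_nu):
    return 1.0 + (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * n_nu

# The standard value, the best fit quoted above, and the 95% interval endpoints.
for n_nu in (3.0, 4.2, 4.2 - 1.7, 4.2 + 1.2):
    print(f"N_nu = {n_nu:.1f} -> rho_rad / rho_gamma = {radiation_to_photon_ratio(n_nu):.3f}")
```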

    Virus Replication as a Phenotypic Version of Polynucleotide Evolution

    In this paper we revisit and adapt to viral evolution an approach based on the theory of branching processes advanced by Demetrius, Schuster and Sigmund ("Polynucleotide evolution and branching processes", Bull. Math. Biol. 46 (1985) 239-262) in their study of polynucleotide evolution. By taking into account beneficial effects we obtain a non-trivial multivariate generalization of their single-type branching process model. Perturbative techniques allow us to obtain analytical asymptotic expressions for the main global parameters of the model, which lead to the following rigorous results: (i) a new criterion for "no sure extinction"; (ii) a generalization and proof, for this particular class of models, of the lethal mutagenesis criterion proposed by Bull, Sanju\'an and Wilke ("Theory of lethal mutagenesis for viruses", J. Virology 18 (2007) 2930-2939); (iii) a new proposal for the notion of relaxation time, with a quantitative prescription for its evaluation; (iv) a quantitative description of the evolution of the expected values in four distinct "stages": extinction threshold, lethal mutagenesis, stationary "equilibrium", and transient. Finally, based on these quantitative results we are able to draw some qualitative conclusions. Comment: 23 pages, 1 figure, 2 tables. arXiv admin note: substantial text overlap with arXiv:1110.336
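    The "no sure extinction" question for a multi-type branching process is classically decided by the Perron eigenvalue of the mean offspring matrix: survival has positive probability iff that eigenvalue exceeds 1 (for positively regular, non-singular processes). Below is a minimal toy sketch of that classical criterion, not the paper's model; the two-class matrix, fitness values, and mutation rates are purely illustrative, and raising the mutation rate mimics the lethal-mutagenesis transition:

```python
# Minimal sketch (a toy, not the paper's model): the classical criterion for a
# multi-type branching process. With mean offspring matrix M, where M[i, j] is
# the expected number of type-j offspring of a type-i individual, the process
# has positive survival probability ("no sure extinction") iff the Perron
# eigenvalue of M exceeds 1 (positively regular, non-singular case).
import numpy as np

def perron(M):
    """Spectral radius (largest eigenvalue modulus) of the mean offspring matrix."""
    return max(abs(np.linalg.eigvals(M)))

def toy_mean_matrix(R=2.0, mu=0.2, w=0.4, back=0.01):
    """Two classes (fit, deleterious): R offspring per fit parent, a fraction mu
    of which mutate; deleterious genomes replicate with relative fitness w and
    back-mutate with small probability `back`. Purely illustrative numbers."""
    return np.array([[R * (1 - mu), R * mu],
                     [R * w * back, R * w * (1 - back)]])

for mu in (0.2, 0.5, 0.8):   # raising the mutation rate mimics lethal mutagenesis
    rho = perron(toy_mean_matrix(mu=mu))
    verdict = "no sure extinction" if rho > 1 else "sure extinction"
    print(f"mu = {mu:.1f} -> Perron eigenvalue = {rho:.3f} ({verdict})")
```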

    SAP regulates T cell–mediated help for humoral immunity by a mechanism distinct from cytokine regulation

    X-linked lymphoproliferative disease is caused by mutations affecting SH2D1A/SAP, an adaptor that recruits Fyn to signaling lymphocytic activation molecule (SLAM)-related receptors. After infection, SLAM-associated protein (SAP)−/− mice show increased T cell activation and impaired humoral responses. Although SAP−/− mice can respond to T-independent immunization, we find impaired primary and secondary T-dependent responses, with defective B cell proliferation, germinal center formation, and antibody production. Nonetheless, transfer of wild-type but not SAP-deficient CD4 cells rescued humoral responses in reconstituted recombination activating gene 2−/− and SAP−/− mice. To investigate these T cell defects, we examined CD4 cell function in vitro and in vivo. Although SAP-deficient CD4 cells have impaired T cell receptor–mediated T helper (Th)2 cytokine production in vitro, we demonstrate that the humoral defects can be uncoupled from cytokine expression defects in vivo. Instead, SAP-deficient T cells exhibit decreased and delayed inducible costimulator (ICOS) induction and heightened CD40L expression. Notably, in contrast to the Th2 cytokine defects, humoral responses, ICOS expression, and CD40L down-regulation were rescued by retroviral reconstitution with SAP-R78A, a SAP mutant that impairs Fyn binding. We further demonstrate a role for SLAM/SAP signaling in the regulation of early surface CD40L expression. Thus, SAP affects expression of key molecules required for T–B cell collaboration by mechanisms that are distinct from its role in cytokine regulation.

    In-reach specialist nursing teams for residential care homes: uptake of services, impact on care provision and cost-effectiveness

    Background: A joint NHS-Local Authority initiative in England designed to provide a dedicated nursing and physiotherapy in-reach team (IRT) to four residential care homes has been evaluated. The IRT supported 131 residents and maintained 15 'virtual' beds for specialist nursing in these care homes. Methods: Data captured prospectively (July 2005 to June 2007) included: numbers of referrals; reason for referral; outcome (e.g. admission to IRT bed, short-term IRT support); length of stay in IRT; prevented hospital admissions; early hospital discharges; avoided nursing home transfers; and detection of unrecognised illnesses. An economic analysis was undertaken. Results: 733 referrals were made during the 2 years (range 0.5 to 13.0 per resident per annum), resulting in a total of 6,528 visits. Two thirds of referrals were aimed at maintaining the resident's independence in the care home. According to expert panel assessment, 197 hospital admissions were averted over the period, 20 early discharges were facilitated, and 28 resident transfers to a nursing home were prevented. Detection of previously unrecognised illnesses accounted for a high number of visits. Investment in the IRT equalled £44.38 per resident per week. Savings through reduced hospital admissions, early discharges, delayed transfers to nursing homes, and identification of previously unrecognised illnesses are conservatively estimated to produce a final reduction in care cost of £6.33 per resident per week. A sensitivity analysis indicates this figure might range from a weekly overall saving of £36.90 per resident to a 'worst case' estimate of £2.70 extra expenditure per resident per week. Evaluation early in implementation may underestimate some cost-saving activities, and greater savings may emerge over a longer time period. Similarly, IRT costs may reduce over time due to the potential for refinement of the team without major loss in effectiveness. Conclusion: Introduction of a specialist nursing in-reach team for residential homes is at least cost neutral and, in all probability, cost saving. Further benefits include development of new skills in the care home workforce and enhanced quality of care. Residents are enabled to stay in familiar surroundings rather than unnecessarily spending time in hospital or being transferred to a higher-dependency nursing home setting.
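    A minimal sketch of the headline cost arithmetic, using only the per-resident, per-week figures quoted above; the implied gross saving is derived from those figures rather than stated in the abstract:

```python
# Minimal sketch of the cost arithmetic quoted above (all figures are
# per resident per week, in GBP). The gross saving is inferred from the stated
# IRT investment and the stated net reduction in care cost.
irt_investment = 44.38               # cost of the in-reach team
net_saving_central = 6.33            # central estimate of the overall cost reduction
net_best, net_worst = 36.90, -2.70   # sensitivity-analysis range (negative = extra cost)

gross_saving = irt_investment + net_saving_central
print(f"Implied gross savings ≈ £{gross_saving:.2f} per resident per week")
for label, net in [("central", net_saving_central),
                   ("best case", net_best),
                   ("worst case", net_worst)]:
    print(f"{label}: net position = £{net:+.2f} per resident per week")
```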

    Doing descriptive phenomenological data collection in sport psychology research

    Researchers in the field of sport psychology have begun to highlight the potential of phenomenological approaches in recognising subjective experience and the essential structure of experience. Despite this, phenomenology has been used inconsistently in the sport psychology literature thus far. Therefore, the aim of this paper is to provide theoretically informed practical guidelines for researchers who wish to employ the descriptive phenomenological interview in their studies. The recommended guidelines will be supported by underpinning theory and brief personal accounts. An argument will also be presented for the potential that descriptive phenomenology holds in creating new knowledge through rich description. In doing so, it is hoped that this method will be utilised appropriately in future sport psychology research to not only strengthen and diversify the existing literature, but also the knowledge of practitioners working within the applied world of professional sport.