
    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. one that is based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical cases of neural networks as the linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
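
    A minimal sketch of the event-based view described above, under a strong simplifying assumption: each neuron's interspike interval is taken to be exponential (so firing is Poisson-like and memoryless), whereas the paper derives the actual transition probability for integrate-and-fire dynamics. The point illustrated is that the network state advances from spike to spike, with no membrane potential tracked. All parameter names and values are hypothetical.

```python
import random

def simulate(n_neurons=5, base_rate=1.0, coupling=0.2, n_spikes=20, seed=0):
    """Event-based simulation: the state is the set of current firing rates,
    and the chain moves from one network spike time to the next."""
    random.seed(seed)
    rates = [base_rate] * n_neurons
    t = 0.0
    spikes = []
    for _ in range(n_spikes):
        # Because exponential ISIs are memoryless, the next network spike
        # can be resampled from the current rates at every event: this is
        # exactly a Markov-chain transition on the spike times.
        candidates = [(t + random.expovariate(r), i) for i, r in enumerate(rates)]
        t, i = min(candidates)
        spikes.append((t, i))
        # An (excitatory) spike of neuron i shifts the ISI law of the others.
        for j in range(n_neurons):
            if j != i:
                rates[j] += coupling
    return spikes

if __name__ == "__main__":
    for t, i in simulate()[:10]:
        print(f"t = {t:.3f}  neuron {i}")
```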

    Quantifying vertical mixing in estuaries

    © 2008 The Authors. This is an open-access article distributed under the terms of the Creative Commons Attribution Noncommercial License. The definitive version was published in Environmental Fluid Mechanics 8 (2008): 495-509, doi:10.1007/s10652-008-9107-2. Estuarine turbulence is notable in that both the dissipation rate and the buoyancy frequency extend to much higher values than in other natural environments. The high dissipation rates lead to a distinct inertial subrange in the velocity and scalar spectra, which can be exploited for quantifying the turbulence quantities. However, high buoyancy frequencies lead to small Ozmidov scales, which require high sampling rates and a small spatial aperture to resolve the turbulent fluxes. A set of observations in a highly stratified estuary demonstrates the effectiveness of a vessel-mounted turbulence array for resolving turbulent processes, and for relating the turbulence to the forcing by the Reynolds-averaged flow. The observations focus on the ebb, when most of the buoyancy flux occurs. Three stages of mixing are observed: (1) intermittent and localized but intense shear instability during the early ebb; (2) continuous and relatively homogeneous shear-induced mixing during the mid-ebb; and (3) weakly stratified, boundary-layer mixing during the late ebb. The mixing efficiency as quantified by the flux Richardson number Rf was frequently observed to be higher than the canonical value of 0.15 from Osborn (J Phys Oceanogr 10:83–89, 1980). The high efficiency may be linked to the temporal–spatial evolution of shear instabilities. The funding for this research was obtained from ONR Grant N00014-06-1-0292 and NSF Grant OCE-0729547.
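
    An illustrative calculation of the two quantities the abstract hinges on, using hypothetical estuarine values rather than the paper's data: the Ozmidov scale L_O = (ε/N³)^½ shows why high dissipation combined with high buoyancy frequency yields small overturns (hence the demanding sampling requirements), and the flux Richardson number is computed here under an assumed steady TKE balance, Rf = B/(B + ε).

```python
import math

def ozmidov_scale(epsilon, N):
    """Ozmidov length scale [m] from dissipation rate epsilon [W/kg]
    and buoyancy frequency N [rad/s]: the largest overturn that
    stratification permits."""
    return math.sqrt(epsilon / N**3)

def flux_richardson(buoyancy_flux, epsilon):
    """Rf = B / (B + epsilon), assuming production balances B + epsilon."""
    return buoyancy_flux / (buoyancy_flux + epsilon)

# Hypothetical values, chosen to be plausible for a stratified estuary:
eps = 1e-4   # dissipation rate, W/kg (high for natural waters)
N = 0.1      # buoyancy frequency, rad/s (strong stratification)
B = 2e-5     # buoyancy flux, W/kg

print(f"L_O = {ozmidov_scale(eps, N):.2f} m")  # ~0.32 m: small overturns
print(f"Rf  = {flux_richardson(B, eps):.2f}")  # ~0.17, above Osborn's 0.15
```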

    Influence of auxin and its polar transport inhibitor on the development of somatic embryos in Digitalis trojana

    The present study reports the role of auxin and its transport inhibitor during the establishment of an efficient and optimized protocol for somatic embryogenesis in Digitalis trojana Ivan. Hypocotyl segments (5 mm long) were placed vertically in Murashige and Skoog medium supplemented with three sets of plant growth regulator formulations [indole-3-acetic acid (IAA) alone, 2,3,5-triiodobenzoic acid (TIBA) alone, or an IAA-TIBA combination] to assess their differential influence on the induction and proliferation of somatic embryos (SEs). IAA alone was found to be the most effective, at a concentration of 0.5 mg/l, inducing ~10 SEs per explant with a 52% induction frequency. On the other hand, the combination of 0.5 mg/l IAA and 1 mg/l TIBA produced significantly fewer (~3.6) and abnormal (enlarged, oblong, jar- and cup-shaped) SEs per explant, with a 24% induction frequency, in comparison to IAA alone. The explants treated with IAA-TIBA exhibited a delayed response along with the formation of abnormal SEs. Our study revealed that IAA induces high-frequency SE formation when used singly, but the frequency gradually declines when IAA is coupled with increasing levels of TIBA. Eventually, our findings bring new insights into the roles of auxin and its polar transport in somatic embryogenesis of D. trojana.

    Estimating Genetic Variability in Non-Model Taxa: A General Procedure for Discriminating Sequence Errors from Actual Variation

    Genetic variation is the driving force of evolution and as such is of central interest to biologists. However, inadequate discrimination of errors from true genetic variation could lead to incorrect estimates of gene copy number, population genetic parameters and phylogenetic relationships, and to the deposition in databases of gene and protein sequences that are not actually present in any organism. Misincorporation errors in multi-template PCR cloning methods, still commonly used for obtaining novel gene sequences in non-model species, are difficult to detect, as no previous information may be available about the number of expected copies of genes belonging to multi-gene families. However, studies employing these techniques rarely describe in any great detail how errors arising in the amplification process were detected and accounted for. Here, we estimated the rate of base misincorporation of a widely used PCR-cloning method, using a single-copy mitochondrial gene from a single individual to minimise variation in the template DNA, as 1.62×10⁻³ errors per site, or 9.26×10⁻⁵ per site per duplication. The distribution of errors among sequences closely matched that predicted by a binomial distribution function. The empirically estimated error rate was applied to data, obtained using the same methods, from the Phospholipase A2 toxin family of the pitviper Ovophis monticola. The distribution of differences detected closely matched the expected distribution of errors, and we conclude that, when undertaking gene discovery or assessment of genetic diversity using this error-prone method, it is informative to empirically determine the rate of base misincorporation.
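
    A sketch of the binomial error model invoked above: if each site in a cloned sequence of length L is miscalled independently with probability p, the number of errors per clone follows Binomial(L, p), and clones with many more differences than this predicts are candidate true variants. The per-site rate is the paper's estimate; the amplicon length and clone count below are assumed purely for illustration.

```python
from math import comb

p = 1.62e-3    # per-site misincorporation rate (empirical estimate above)
L = 700        # assumed amplicon length, bp (hypothetical)
n_clones = 50  # assumed number of sequenced clones (hypothetical)

def binom_pmf(k, n, prob):
    """P(exactly k errors among n independent sites)."""
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

# Expected number of clones carrying k errors, under the error-only model.
for k in range(5):
    expected = n_clones * binom_pmf(k, L, p)
    print(f"{k} errors: expect {expected:.1f} of {n_clones} clones")

# A clone far out in the tail of this distribution is more plausibly a
# true variant than a PCR/cloning artefact.
```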

    A Long Baseline Neutrino Oscillation Experiment Using J-PARC Neutrino Beam and Hyper-Kamiokande

    Document submitted to 18th J-PARC PAC meeting in May 2014. 50 pages, 41 figures. Hyper-Kamiokande will be a next-generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of CP asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams. In this document, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis has been updated from the previous Letter of Intent [K. Abe et al., arXiv:1109.3262 [hep-ex]], based on the experience gained from the ongoing T2K experiment. With a total exposure of 7.5 MW × 10⁷ sec integrated proton beam power (corresponding to 1.56×10²² protons on target with a 30 GeV proton beam) to a 2.5-degree off-axis neutrino beam produced by the J-PARC proton synchrotron, it is expected that the CP phase δ_CP can be determined to better than 19 degrees for all possible values of δ_CP, and CP violation can be established with a statistical significance of more than 3σ (5σ) for 76% (58%) of the δ_CP parameter space.
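
    The quoted exposure can be checked with a short back-of-envelope calculation: dividing the integrated beam energy (power × time) by the kinetic energy carried by each 30 GeV proton reproduces the stated number of protons on target (POT).

```python
# POT = (beam power x exposure time) / (energy per proton)
beam_power = 7.5e6                 # W (7.5 MW)
exposure_time = 1e7                # s
proton_energy_GeV = 30.0
joules_per_GeV = 1.602176634e-10   # 1 GeV expressed in joules

total_energy = beam_power * exposure_time                  # 7.5e13 J
pot = total_energy / (proton_energy_GeV * joules_per_GeV)
print(f"POT = {pot:.3g}")          # ~1.56e22, matching the figure above
```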

    Conditional embryonic lethality to improve the sterile insect technique in Ceratitis capitata (Diptera: Tephritidae)

    Background: The sterile insect technique (SIT) is an environmentally friendly method used in area-wide pest management of the Mediterranean fruit fly Ceratitis capitata (Wiedemann; Diptera: Tephritidae). The ionizing radiation used to generate reproductive sterility in mass-reared populations before release reduces their competitiveness. Results: Here, we present a first alternative reproductive sterility system for medfly based on transgenic embryonic lethality. This system depends on newly isolated medfly promoter/enhancer elements of genes expressed specifically during cellularization. These elements differ in expression strength and in their ability to drive lethal effector gene activation. Moreover, position effects strongly influence the efficiency of the system. Out of 60 combinations of driver and effector construct integrations, several lines resulted in larval and pupal lethality, with one line showing complete embryonic lethality. This line was highly competitive with wildtype medfly in laboratory and field cage tests. Conclusion: The high competitiveness of the transgenic lines and the achieved 100% embryonic lethality, causing reproductive sterility without the need for irradiation, can improve the efficacy of operational medfly SIT programs.

    Evaluation of a clinical decision support tool for osteoporosis disease management: protocol for an interrupted time series design

    Background: Osteoporosis affects over 200 million people worldwide at a high cost to healthcare systems. Although guidelines on assessing and managing osteoporosis are available, many patients are not receiving appropriate diagnostic testing or treatment. Findings from a systematic review of osteoporosis interventions, a series of mixed-methods studies, and advice from experts in osteoporosis and human-factors engineering were used collectively to develop a multicomponent tool (targeted to family physicians and patients at risk for osteoporosis) that may support clinical decision making in osteoporosis disease management at the point of care. Methods: A three-phased approach will be used to evaluate the osteoporosis tool. In phase 1, the tool will be implemented in three family practices; this will involve ensuring optimal functioning of the tool while minimizing disruption to usual practice. In phase 2, the tool will be pilot tested in a quasi-experimental interrupted time series (ITS) design to determine if it can improve osteoporosis disease management at the point of care. Phase 3 will involve conducting a qualitative postintervention follow-up study to better understand participants' experiences, their perceived utility of the tool, and their readiness to adopt the tool at the point of care. Discussion: The osteoporosis tool has the potential to make several contributions to the development and evaluation of complex, chronic disease interventions, such as the inclusion of an implementation strategy prior to conducting an evaluation study. Anticipated benefits of the tool may be to increase patients' awareness of osteoporosis and its associated risks and to provide an opportunity to discuss a management plan with their physician, all of which may facilitate patient self-management.
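
    A common way to analyse an interrupted time series design of this kind is segmented regression: estimating the pre-intervention level and trend, then the change in level and change in trend at the interruption. The sketch below is generic and uses simulated data; it is not the specific model or outcome set defined in the protocol.

```python
import numpy as np

rng = np.random.default_rng(42)
n_pre, n_post = 12, 12                 # e.g. monthly series, pre/post tool launch
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(float)      # intervention indicator
t_after = np.where(post == 1, t - n_pre, 0.0)

# Simulated outcome: baseline trend + level jump + trend change + noise.
y = 40 + 0.5 * t + 8 * post + 0.7 * t_after + rng.normal(0, 2, t.size)

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_after])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["intercept", "pre-trend", "level change", "trend change"], beta):
    print(f"{name:>13}: {b:.2f}")
```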

    Has Motivational Interviewing fallen into its own Premature Focus Trap?

    Since the initial conception of the behaviour change method Motivational Interviewing (MI), a shift has been evident in its epistemological, methodological and practical applications: from an inductive, process- and practitioner-focussed approach to one that is more deductive, research-outcome-driven and confirmatory-focussed. This paper highlights the conceptual and practical problems of adopting this approach, including the consequences of assessing the what (deductive, outcome-focussed) at the expense of the how (inductive, process-focussed). We encourage a return to an inductive, practitioner- and client-focussed MI approach and propose the use of Computer Assisted Qualitative Data Analysis Systems such as NVivo in research initiatives to support this aim.

    Utilisation of an operative difficulty grading scale for laparoscopic cholecystectomy

    Background: A reliable system for grading the operative difficulty of laparoscopic cholecystectomy would standardise the description of findings and the reporting of outcomes. The aim of this study was to validate a difficulty grading system (Nassar scale), testing its applicability and consistency in two large prospective datasets. Methods: Patient and disease-related variables and 30-day outcomes were identified in two prospective cholecystectomy databases: the multi-centre prospective cohort of 8820 patients from the recent CholeS Study and a single-surgeon series containing 4089 patients. Operative data and patient outcomes were correlated with the Nassar operative difficulty scale using Kendall's tau for dichotomous variables or Jonckheere–Terpstra tests for continuous variables. A ROC curve analysis was performed to quantify the predictive accuracy of the scale for each outcome, with continuous outcomes dichotomised prior to analysis. Results: A higher operative difficulty grade was consistently associated with worse outcomes for the patients in both the reference and CholeS cohorts. The median length of stay increased from 0 to 4 days, and the 30-day complication rate from 7.6% to 24.4%, as the difficulty grade increased from 1 to 4/5 (both p < 0.001). In the CholeS cohort, a higher difficulty grade was most strongly associated with conversion to open surgery and 30-day mortality (AUROC = 0.903 and 0.822, respectively). On multivariable analysis, the Nassar operative difficulty scale was a significant independent predictor of operative duration, conversion to open surgery, 30-day complications and 30-day reintervention (all p < 0.001). Conclusion: We have shown that an operative difficulty scale can standardise the description of operative findings by surgeons of multiple grades to facilitate audit, training assessment and research. It provides a tool for reporting operative findings, disease severity and technical difficulty, and can be utilised in future research to reliably compare outcomes according to case mix and intra-operative difficulty.
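
    An illustrative re-creation of the validation approach, assuming scipy and scikit-learn are available and using simulated data rather than the CholeS cohort: an ordinal difficulty grade is correlated with a binary outcome via Kendall's tau, and its predictive accuracy is quantified with AUROC. The grade-to-outcome relationship below is invented for the example.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
grade = rng.integers(1, 6, size=n)           # Nassar grade 1-5 (simulated)
# Hypothetical model: higher grades are more likely to convert to open.
p_convert = np.clip(0.02 * grade**2, 0, 1)
converted = rng.random(n) < p_convert        # binary 30-day-style outcome

tau, p_value = kendalltau(grade, converted)  # ordinal-vs-binary association
auroc = roc_auc_score(converted, grade)      # discrimination of the scale
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.2g})")
print(f"AUROC = {auroc:.3f}")
```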