
    Conditioned stochastic particle systems and integrable quantum spin systems

    We consider from a microscopic perspective large deviation properties of several stochastic interacting particle systems, using their mapping to integrable quantum spin systems. A brief review of recent work is given and several new results are presented: (i) For the general disordered symmetric exclusion process (SEP) on some finite lattice, conditioned on no jumps into some absorbing sublattice and with an initial Bernoulli product measure with density $\rho$, we prove that the probability $S_\rho(t)$ of no absorption event up to microscopic time $t$ can be expressed in terms of the generating function for the particle number of a SEP with particle injection and an empty initial lattice. Specifically, for the symmetric simple exclusion process on $\mathbb{Z}$ conditioned on no jumps into the origin, we obtain the explicit first- and second-order expansion of $S_\rho(t)$ in $\rho$, and also, to first order in $\rho$, the optimal microscopic density profile under this conditioning. For the disordered ASEP on the finite torus conditioned on a very large current, we show that the effective dynamics that optimally realizes this rare event does not depend on the disorder, except for the time scale. For annihilating and coalescing random walkers we obtain the generating function of the number of annihilated particles up to time $t$, which turns out to exhibit some universal features.
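
    A minimal simulation sketch (not part of the abstract, and using only one of several possible jump-rate conventions) may help make the conditioning concrete: it estimates $S_\rho(t)$ for the symmetric simple exclusion process with an empty origin by counting trajectories in which no particle ever jumps onto site 0. The lattice truncation $\{-L,\dots,L\}$, the rate convention, and all parameter names are illustrative assumptions rather than choices made in the paper.

        import random

        def survival_probability(rho, t_max, L=50, rate=1.0, trials=2000, seed=0):
            # Monte Carlo estimate of S_rho(t_max): the probability that no particle
            # has jumped onto the origin of a symmetric simple exclusion process
            # started from a Bernoulli(rho) product measure on {-L, ..., L} \ {0}.
            # Truncating Z to a finite segment is an approximation, reasonable for t_max << L^2.
            rng = random.Random(seed)
            sites = [i for i in range(-L, L + 1) if i != 0]
            survived = 0
            for _ in range(trials):
                occ = {i: rng.random() < rho for i in sites}
                occ[0] = False                          # the origin starts empty
                n = sum(occ[i] for i in sites)          # particle number is conserved
                t, absorbed = 0.0, False
                while t < t_max and n > 0 and not absorbed:
                    # each particle attempts a jump to each neighbour at the given rate
                    t += rng.expovariate(2.0 * rate * n)
                    if t >= t_max:
                        break
                    i = rng.choice([s for s in sites if occ[s]])
                    j = i + rng.choice((-1, 1))
                    if j == 0:
                        absorbed = True                 # a jump into the origin: survival fails
                    elif -L <= j <= L and not occ[j]:
                        occ[i], occ[j] = False, True    # blocked attempts are simply discarded
                if not absorbed:
                    survived += 1
            return survived / trials

        # Example: rough estimate of S_rho(t) for a dilute initial condition.
        print(survival_probability(rho=0.2, t_max=5.0))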

    Evidence and Extrapolation: Mechanisms for Regulating Off-Label Uses of Drugs and Devices

    A recurring, foundational issue for evidence-based regulation is deciding whether to extend governmental approval from an existing use with sufficient current evidence of safety and efficacy to a novel use for which such evidence is currently lacking. This extrapolation issue arises in the medicines context when an approved drug or device that is already being marketed is being considered (1) for new conditions (such as off-label diagnostic categories), (2) for new patients (such as new subpopulations), (3) for new dosages or durations, or (4) as the basis for approving a related drug or device (such as a generic or biosimilar drug). Although the logic of preapproval testing and the precautionary principle (first, do no harm) would counsel in favor of prohibiting extrapolation approvals until after traditional safety and efficacy evidence exists, such delays would unreasonably sacrifice beneficial uses. The harm of accessing unsafe products must be balanced against the harm of restricting access to effective products. In fact, the Food and Drug Administration's (FDA's) current regulations in many ways reject the precautionary principle because they largely permit individual physicians to prescribe medications for off-label uses before any testing tailored to those uses has been done. The FDA's approach empowers physicians, but overshoots the mark by allowing enduring use of drugs and devices with insubstantial support of safety and efficacy. This Article instead proposes a more dynamic and evolving evidence-based regime that charts a course between the Scylla and Charybdis of the overly conservative precautionary principle on one hand, and the overly liberal FDA regime on the other. Our approach calls for improvements in reporting, testing, and enforcement regulations to provide a more layered and nuanced system of regulatory incentives. First, we propose a more thoroughgoing reporting of off-label use (via the disclosure of diagnostic codes and detailing data) in manufacturers' annual reports to the FDA, in adverse event reports to the FDA, in Medicare/Medicaid reimbursement requests, and, for a subset of FDA-designated drugs, in prescriptions themselves. Second, we would substantially expand the agency's utilization of postmarket testing, and we provide a novel framework for evaluating the need for postmarket testing. Finally, our approach calls for a tiered labeling system that would allow regulators and courts to draw finer reimbursement and liability distinctions among various drug uses, and would provide the agency both the regulatory teeth and the flexibility it presently lacks. Together, these reforms would improve the role of the FDA in the informational marketplace underlying physicians' prescribing decisions. This evolutionary extrapolation framework could also be applied to other contexts.

    Statistical Understanding of Quark and Lepton Masses in Gaussian Landscapes

    The fundamental theory of nature may allow a large landscape of vacua. Even if the theory contains a unified gauge symmetry, the 22 flavor parameters of the Standard Model, including neutrino masses, may be largely determined by the statistics of this landscape, and not by any symmetry. Then the measured values of the flavor parameters do not point to any fundamental symmetries, but are statistical accidents; their precise values do not provide any insights into the fundamental theory; rather, the overall pattern of flavor reflects the underlying landscape. We investigate whether random selection from the statistics of a simple landscape can explain the broad patterns of quark, charged lepton, and neutrino masses and mixings. We propose Gaussian landscapes as simplified models of landscapes where Yukawa couplings result from overlap integrals of zero-mode wavefunctions in higher-dimensional supersymmetric gauge theories. In terms of just five free parameters, such landscapes can account for all gross features of flavor, including: the hierarchy of quark and charged lepton masses; small quark mixing angles, with the $\theta_{13}$ mixing smaller than the $\theta_{12}$ and $\theta_{23}$ mixings; very light Majorana neutrino masses, with the solar to atmospheric neutrino mass ratio consistent with data; distributions for leptonic $\theta_{12}$ and $\theta_{23}$ mixings that are peaked at large values, while the distribution for $\theta_{13}$ mixing is peaked at low values; and order unity CP-violating phases in both the quark and lepton sectors. While the statistical distributions for the flavor parameters are broad, they are robust to changes in the geometry of the extra dimensions. Constraining the distributions by loose cuts about the observed values leads to narrower distributions for neutrino measurements of $\theta_{13}$ mixing, CP violation, and neutrinoless double beta decay.
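
    As a schematic illustration only (not taken from the paper), the central mechanism, Yukawa couplings as overlap integrals of localized zero-mode wavefunctions, can be mimicked numerically: scatter Gaussian profiles of width $d$ at random positions on a circle of circumference $L$ and integrate their products. The singular values of the resulting $3\times 3$ matrix are typically strongly hierarchical even though all centers are drawn from the same flat distribution; the width, grid, and variable names below are assumptions made for the sketch.

        import numpy as np

        # Toy model of a Gaussian landscape, not the paper's actual construction.

        def gaussian_overlap(a, b, c, d, L, n_grid=2000):
            # Overlap integral of three normalised Gaussian zero modes of width d,
            # centred at a, b, c on a circle of circumference L (periodic distance).
            x = np.linspace(0.0, L, n_grid, endpoint=False)
            def psi(center):
                dist = np.minimum(np.abs(x - center), L - np.abs(x - center))
                g = np.exp(-dist**2 / (2.0 * d**2))
                return g / np.sqrt(np.trapz(g**2, x))
            return np.trapz(psi(a) * psi(b) * psi(c), x)

        def yukawa_matrix(rng, d=0.08, L=1.0):
            # 3x3 Yukawa matrix from randomly placed left-handed, right-handed,
            # and Higgs wavefunction centres, all drawn uniformly on the circle.
            q = rng.uniform(0.0, L, 3)
            u = rng.uniform(0.0, L, 3)
            h = rng.uniform(0.0, L)
            return np.array([[gaussian_overlap(q[i], u[j], h, d, L) for j in range(3)]
                             for i in range(3)])

        rng = np.random.default_rng(1)
        Y = yukawa_matrix(rng)
        print(np.linalg.svd(Y, compute_uv=False))   # singular values: typically hierarchical

    Narrow wavefunctions (small $d/L$) make the overlaps exponentially sensitive to the random center positions, which is the sketch's stand-in for the hierarchies and mixing patterns discussed in the abstract.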

    Neural mechanisms of visual categorization

    Get PDF
    The ability to categorize is a fundamental cognitive skill for animals, including human beings. Our lives would be utterly confusing without categories. We would feel overwhelmed, or miss out on important aspects of our environment, if we perceived every single entity as one-of-a-kind. Therefore, categorization is of great importance for perception, learning, remembering, decision making, performing an action, certain aspects of social interaction, and reasoning. The seemingly effortless and instantaneous ability to transform sensory information into meaningful categories determines how successfully we interact with our environment. However, the apparent ease with which we use categorization and categories conceals the complexity of the underlying brain processing that makes categorization and categorical representations possible. This raises the question: how is categorical information encoded and represented in the brain?