
    The use of performance information: case studies in local social services departments

    New Public Management (NPM) is the commonly used label for the growing popularity of businesslike control tools in governmental organisations. NPM includes several dimensions of change, such as divisionalisation, visible and active control, and a prominent role for performance measurement. Developments in Dutch local government demonstrate several of these elements of NPM. Pollitt and Bouckaert (2000) and Pollitt (2002) defined four levels of NPM change: (1) discourse; (2) decisions; (3) practices; and (4) results. This paper focuses on performance measurement. The politicians and managers at the top of the investigated municipalities took the decision to adopt instruments that generate performance information. This paper seeks to explain the extent to which the information resulting from these instruments is actually used in management practices at the work-floor level. It investigates two categories of explanations for information use: characteristics of the available information (such as its content, amount and quality) and characteristics of the organisation and its routines. The paper thus analyses how decisions taken by politicians and top managers to adopt NPM relate to practices at the work-floor level.

    The robustness of proofreading to crowding-induced pseudo-processivity in the MAPK pathway

    Double phosphorylation of protein kinases is a common feature of signalling cascades. This motif may reduce cross-talk between signalling pathways, as the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be 'pseudo-processive' in the crowded cellular environment, as rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e. when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than to pseudo-processivity itself. We also show that proofreading can remain effective when the intended signalling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif. Comment: To appear in Biophys. J.
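    The specificity loss described here can be illustrated with a toy calculation (my sketch, not the paper's actual model). Assume each enzyme-substrate encounter phosphorylates with probability $f = k_{cat}/(k_{cat}+k_{off})$, and that after the first phosphorylation the pair rebinds with probability $q$ per release; the chance the same enzyme also completes the second step is then $c = f/(1-q(1-f))$. Discrimination between a good and a poor substrate falls from $(f_R/f_W)^2$ at $q=0$ (fully distributive) towards $f_R/f_W$ as $q \to 1$ (fully pseudo-processive). All rate values below are arbitrary placeholders.

    ```python
    # Toy model of specificity loss under rebinding-induced pseudo-processivity.
    # Assumptions (not from the paper): unsaturated enzymes, a single rebinding
    # probability q per release, per-encounter phosphorylation probability f.

    def completion_prob(f: float, q: float) -> float:
        """Probability the same enzyme completes the second phosphorylation,
        summing the rebinding chain: c = f + (1 - f) * q * c."""
        return f / (1.0 - q * (1.0 - f))

    def specificity(f_right: float, f_wrong: float, q: float) -> float:
        """Ratio of doubly phosphorylated output, correct vs. incorrect
        substrate, per initial binding event."""
        right = f_right * completion_prob(f_right, q)
        wrong = f_wrong * completion_prob(f_wrong, q)
        return right / wrong

    f_R, f_W = 0.5, 0.05  # arbitrary per-encounter success probabilities
    for q in (0.0, 0.5, 0.9, 0.99):
        print(f"q = {q:4.2f}  specificity = {specificity(f_R, f_W, q):6.1f}")
    # Falls from (f_R/f_W)^2 = 100 at q = 0 towards f_R/f_W = 10 as q -> 1,
    # mirroring the loss of proofreading specificity the abstract describes.
    ```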

    Decline and decadence in Iraq and Syria after the age of Avicenna? : ʿAbd al-Laṭīf al-Baghdādī (1162–1231) between myth and history

    ʿAbd al-Laṭīf al-Baghdādī's (d. 1231) work Book of the Two Pieces of Advice (Kitāb al-Nasīḥatayn) challenges the idea that Islamic medicine declined after the twelfth century AD. Moreover, it offers some interesting insights into the social history of medicine. ʿAbd al-Laṭīf advocated using the framework of Greek medical epistemology to criticize the rationalist physicians of his day; he argued that female and itinerant practitioners, relying on experience, were superior to some rationalists. He lambasted contemporaneous medical education because it put too much faith in a restricted number of textbooks, such as the Canon by Ibn Sīnā (Avicenna, d. 1037), or imperfect abridgments.

    Periodic Structure of the Exponential Pseudorandom Number Generator

    We investigate the periodic structure of the exponential pseudorandom number generator obtained from the map $x \mapsto g^x \pmod p$ that acts on the set $\{1, \ldots, p-1\}$.
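    As a concrete illustration (mine, not the abstract's): the generator iterates $x_{n+1} = g^{x_n} \bmod p$ from a seed $x_0$, and since the state space $\{1, \ldots, p-1\}$ is finite, every orbit enters a cycle whose pre-period and period can be found by recording first-visit indices. The prime, base and seed below are arbitrary small examples.

    ```python
    # Exponential pseudorandom number generator: x_{n+1} = g^{x_n} mod p.
    # p, g and the seed are small illustrative choices, not values from the paper.

    def exp_prng_orbit(g: int, p: int, seed: int):
        """Iterate x -> g^x (mod p); return (pre-period length, period length)."""
        seen = {}  # state -> index of its first appearance
        x, n = seed, 0
        while x not in seen:
            seen[x] = n
            x = pow(g, x, p)  # built-in modular exponentiation
            n += 1
        return seen[x], n - seen[x]

    p, g, seed = 1009, 11, 1
    tail, period = exp_prng_orbit(g, p, seed)
    print(f"pre-period = {tail}, period = {period}")
    ```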

    Denominators of Bernoulli polynomials

    For a positive integer $n$ let $\mathfrak{P}_n = \prod_{s_p(n) \ge p} p$, where $p$ runs over all primes and $s_p(n)$ is the sum of the base-$p$ digits of $n$. For all $n$ we prove that $\mathfrak{P}_n$ is divisible by all "small" primes with at most one exception. We also show that $\mathfrak{P}_n$ is large and has many prime factors exceeding $\sqrt{n}$, with the largest one exceeding $n^{20/37}$. We establish Kellner's conjecture, which says that the number of prime factors exceeding $\sqrt{n}$ grows asymptotically as $\kappa\sqrt{n}/\log n$ for some constant $\kappa$, with $\kappa = 2$. Further, we compare the sizes of $\mathfrak{P}_n$ and $\mathfrak{P}_{n+1}$, leading to the somewhat surprising conclusion that although $\mathfrak{P}_n$ tends to infinity with $n$, the inequality $\mathfrak{P}_n > \mathfrak{P}_{n+1}$ is more frequent than its reverse. Comment: 25 pages.
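    The defining product is easy to evaluate directly for small $n$ (a sketch of my own, not code from the paper). Note that for $p > n$ the number $n$ is a single base-$p$ digit, so $s_p(n) = n < p$ and the condition fails; only primes $p \le n$ can contribute, and trial division suffices at this scale.

    ```python
    # Compute P_n = product of the primes p with s_p(n) >= p, where s_p(n)
    # is the sum of the base-p digits of n.  Only primes p <= n can satisfy
    # the condition, so we enumerate up to n.

    def digit_sum(n: int, p: int) -> int:
        """Sum of the base-p digits of n."""
        s = 0
        while n:
            n, r = divmod(n, p)
            s += r
        return s

    def is_prime(m: int) -> bool:
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    def P(n: int) -> int:
        prod = 1
        for p in range(2, n + 1):
            if is_prime(p) and digit_sum(n, p) >= p:
                prod *= p
        return prod

    print([P(n) for n in range(1, 13)])
    # e.g. P_3 = 2: s_2(3) = 2 >= 2, while 3 = "10" in base 3 gives s_3(3) = 1 < 3.
    ```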

    Applications of atomic ensembles in distributed quantum computing

    Thesis chapter. The fragility of quantum information is a fundamental constraint faced by anyone trying to build a quantum computer. A truly useful and powerful quantum computer has to be a robust and scalable machine. In the case of many qubits which may interact with the environment and their neighbors, protection against decoherence becomes quite a challenging task. The scalability and decoherence issues are the main difficulties addressed by the distributed model of quantum computation. A distributed quantum computer consists of a large quantum network of distant nodes (stationary qubits) which communicate via flying qubits. Quantum information can be transferred, stored, processed and retrieved in a decoherence-free fashion by nodes of a quantum network realized by an atomic medium: an atomic quantum memory. Atomic quantum memories have been developed and demonstrated experimentally in recent years. With the help of linear optics and laser pulses, one is able to manipulate quantum information stored inside an atomic quantum memory by means of electromagnetically induced transparency and associated propagation phenomena. Any quantum computation or communication necessarily involves entanglement. Therefore, one must be able to entangle distant nodes of a distributed network. In this article, we focus on probabilistic entanglement generation procedures such as the well-known DLCZ protocol. We also demonstrate theoretically a scheme based on atomic ensembles and the dipole blockade mechanism for the generation of inherently distributed quantum states, the so-called cluster states. In the protocol, atomic ensembles serve as single-qubit systems. Hence, we review single-qubit operations on a qubit defined as collective states of an atomic ensemble. Our entangling protocol requires nearly identical single-photon sources, one ultra-cold ensemble per physical qubit, and regular photodetectors. The general entangling procedure is presented, as well as a procedure that generates, in a single step, $Q$-qubit GHZ states with success probability $p_{\mathrm{success}} \sim \eta^{Q/2}$, where $\eta$ is the combined detection and source efficiency. This is significantly more efficient than any known robust probabilistic entangling operation. The GHZ states form the basic building block for universal cluster states, a resource for the one-way quantum computer.
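    To get a feel for the quoted scaling, the snippet below evaluates $p_{\mathrm{success}} \sim \eta^{Q/2}$ for a few GHZ sizes $Q$. The value of $\eta$ is an arbitrary placeholder and the prefactor is taken to be 1; the abstract gives only the scaling.

    ```python
    # Success-probability scaling p_success ~ eta^(Q/2) quoted in the abstract.
    # eta (combined detection and source efficiency) is a placeholder value,
    # and the proportionality constant is set to 1 for illustration.
    eta = 0.3
    for Q in (2, 4, 8, 16):
        p = eta ** (Q / 2)
        print(f"Q = {Q:2d}  p_success ~ {p:.2e}")
    ```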

    Reconciling the observed star-forming sequence with the observed stellar mass function

    We examine the connection between the observed star-forming sequence (SFR $\propto M^{\alpha}$) and the observed evolution of the stellar mass function between $0.2 < z < 2.5$. We find the star-forming sequence cannot have a slope $\alpha \lesssim 0.9$ at all masses and redshifts, as this would result in a much higher number density at $10 < \log(\mathrm{M/M_{\odot}}) < 11$ by $z=1$ than is observed. We show that a transition in the slope of the star-forming sequence, such that $\alpha=1$ at $\log(\mathrm{M/M_{\odot}}) < 10.5$ and $\alpha = 0.7-0.13z$ (Whitaker et al. 2012) at $\log(\mathrm{M/M_{\odot}}) > 10.5$, greatly improves agreement with the evolution of the stellar mass function. We then derive a star-forming sequence which reproduces the evolution of the mass function by design. This star-forming sequence is also well described by a broken power law, with a shallow slope at high masses and a steep slope at low masses. At $z=2$, it is offset by $\sim 0.3$ dex from the observed star-forming sequence, consistent with the mild disagreement between the cosmic SFR and recent observations of the growth of the stellar mass density. It is unclear whether this problem stems from errors in stellar mass estimates, errors in SFRs, or other effects. We show that a mass-dependent slope is also seen in other self-consistent models of galaxy evolution, including semi-analytical, hydrodynamical, and abundance-matching models. As part of the analysis, we demonstrate that neither mergers nor hidden low-mass quiescent galaxies are likely to reconcile the evolution of the mass function and the star-forming sequence. These results are supported by observations from Whitaker et al. (2014). Comment: 17 pages, 13 figures, accepted to ApJ Oct 31st 201
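    The broken power law tested in the abstract is straightforward to write down. The sketch below is a hedged illustration: it uses the quoted slopes ($\alpha = 1$ below the pivot $\log(\mathrm{M/M_{\odot}}) = 10.5$, $\alpha = 0.7 - 0.13z$ above it), while the normalisation at the pivot is a free placeholder, since the abstract quotes only the slopes.

    ```python
    # Broken power-law star-forming sequence with the slopes quoted in the
    # abstract: alpha = 1 below log(M/Msun) = 10.5 and alpha = 0.7 - 0.13*z
    # (Whitaker et al. 2012) above it.  `logsfr_pivot` is an arbitrary
    # placeholder normalisation; the abstract does not quote one.

    PIVOT = 10.5  # log(M/Msun) at which the slope changes

    def log_sfr(logm: float, z: float, logsfr_pivot: float = 1.0) -> float:
        """log SFR (arbitrary zero-point) at stellar mass logm and redshift z;
        continuous at the pivot by construction."""
        alpha = 1.0 if logm < PIVOT else 0.7 - 0.13 * z
        return logsfr_pivot + alpha * (logm - PIVOT)

    for logm in (9.0, 10.0, 10.5, 11.0):
        print(f"log M = {logm:5.1f}  log SFR(z=1) = {log_sfr(logm, 1.0):6.2f}")
    ```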