
    Pseudotumor cerebri syndrome in childhood: incidence, clinical profile and risk factors in a national prospective population-based cohort study

    Aim To investigate the epidemiology, clinical profile and risk factors of pseudotumor cerebri syndrome (PTCS) in children aged 1-16 years. Methods A national prospective population-based cohort study over 25 months. Newly diagnosed PTCS cases notified via the British Paediatric Surveillance Unit (BPSU) were ascertained using classical diagnostic criteria and categorised according to the 2013 revised diagnostic criteria. We derived national age-, sex- and weight-specific annual incidence rates and assessed the effects of sex and weight category. Results We identified 185 PTCS cases, of which 166 also fulfilled the revised diagnostic criteria. The national annual incidence (95% CI) of childhood PTCS at ages 1-16 years was 0.71 (0.57-0.87) per 100,000 population, increasing with age and weight to 4.18 and 10.7 per 100,000 in obese 12-15-year-old boys and girls respectively. Incidence rates under 7 years were similar in both sexes. From 7 years onwards, the incidence in girls was double that in boys, but only in overweight (including obese) children. In 12-15-year-old children, an estimated 82% of the incidence of PTCS was attributable to obesity. Two subgroups of PTCS were apparent: 168 (91%) cases aged 7 years and over frequently presented on medication and with headache, and were predominantly female and obese. The remaining 17 (9%) cases under 7 years often lacked these risk factors and commonly presented with new-onset squint. Conclusions This study, the largest population-based study of childhood PTCS to date, will inform the design of future intervention studies. It suggests that weight reduction is central to the prevention of PTCS.
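
    The 82% figure is a population attributable fraction. As a minimal, hedged illustration of the arithmetic (Levin's formula; the exposure prevalence and relative risk below are hypothetical placeholders, not values reported by this study):

```python
# Population attributable fraction via Levin's formula.
# p_exposed and relative_risk are illustrative placeholders only,
# not figures taken from the BPSU cohort.

def attributable_fraction(p_exposed: float, relative_risk: float) -> float:
    """Fraction of population incidence attributable to an exposure."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Example: if 20% of 12-15-year-olds were obese and obesity carried ~24x
# the PTCS risk, about 82% of incidence would be attributable to obesity.
print(f"{attributable_fraction(0.20, 24.0):.0%}")  # -> 82%
```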

    The structure of chromatophores from purple photosynthetic bacteria fused with lipid-impregnated collodion films determined by near-field scanning optical microscopy

    Lipid-impregnated collodion (nitrocellulose) films have been frequently used as a fusion substrate in the measurement and analysis of electrogenic activity in biological membranes and proteoliposomes. While the method of fusing biological membranes or proteoliposomes with such films has found wide application, little is known about the structures formed after fusion. Yet knowledge of this structure is important for the interpretation of the measured electric potential. To characterize the structures formed after fusion of membrane vesicles (chromatophores) from the purple bacterium Rhodobacter sphaeroides with lipid-impregnated collodion films, we used near-field scanning optical microscopy. It is shown here that structures formed from chromatophores on the collodion film can be distinguished from the lipid-impregnated background by measuring the fluorescence originating either from endogenous fluorophores of the chromatophores or from fluorescent dyes trapped inside the chromatophores. The structures formed after fusion of chromatophores to the collodion film appear as isolated (or sometimes aggregated, depending on the conditions) blisters, with diameters ranging from 0.3 to 10 μm (average ≈1 μm) and heights from 0.01 to 1 μm (average ≈0.03 μm). These large sizes indicate that the blisters are formed by the fusion of many chromatophores. Results with dyes trapped inside chromatophores reveal that chromatophores fused with lipid-impregnated films retain a distinct internal water phase.

    On the Generalizability and Predictability of Recommender Systems

    While other areas of machine learning have seen more and more automation, designing a high-performing recommender system still requires a high level of human effort. Furthermore, recent work has shown that modern recommender system algorithms do not always improve over well-tuned baselines. A natural follow-up question is, "how do we choose the right algorithm for a new dataset and performance metric?" In this work, we start by giving the first large-scale study of recommender system approaches, comparing 18 algorithms and 100 sets of hyperparameters across 85 datasets and 315 metrics. We find that the best algorithms and hyperparameters are highly dependent on the dataset and performance metric; however, there are also strong correlations between the performance of each algorithm and various meta-features of the datasets. Motivated by these findings, we create RecZilla, a meta-learning approach to recommender systems that uses a model to predict the best algorithm and hyperparameters for new, unseen datasets. By using far more meta-training data than prior work, RecZilla is able to substantially reduce the level of human involvement when faced with a new recommender system application. We not only release our code and pretrained RecZilla models, but also all of our raw experimental results, so that practitioners can train a RecZilla model for their desired performance metric: https://github.com/naszilla/reczilla
    Comment: NeurIPS 2022
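
    A minimal sketch of the meta-learning idea (not the released RecZilla code; the meta-features, model class and data below are hypothetical placeholders): fit one regressor per algorithm mapping dataset meta-features to the target metric, then recommend the algorithm with the best predicted score on an unseen dataset.

```python
# Hedged sketch of meta-learned algorithm selection, in the spirit of
# RecZilla; all data and dimensions here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
meta_features = rng.random((85, 10))   # 85 datasets x 10 meta-features
algo_scores = rng.random((85, 18))     # observed metric for 18 algorithms

# One regressor per algorithm: meta-features -> expected metric value.
models = [
    RandomForestRegressor(n_estimators=50, random_state=0)
    .fit(meta_features, algo_scores[:, a])
    for a in range(algo_scores.shape[1])
]

def select_algorithm(new_meta: np.ndarray) -> int:
    """Index of the algorithm predicted to perform best on a new dataset."""
    preds = [m.predict(new_meta[None, :])[0] for m in models]
    return int(np.argmax(preds))

print(select_algorithm(rng.random(10)))
```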

    Structural Determinants for Ligand-Receptor Conformational Selection in a Peptide G Protein-coupled Receptor

    G protein-coupled receptors (GPCRs) modulate the majority of physiological processes through specific intermolecular interactions with structurally diverse ligands and activation of differential intracellular signaling. A key issue yet to be resolved is how GPCRs developed selectivity and diversity of ligand binding and intracellular signaling during evolution. We have explored the structural basis of selectivity of naturally occurring gonadotropin-releasing hormones (GnRHs) from different species in the single functional human GnRH receptor. We found that the highly variable amino acids in position 8 of the naturally occurring isoforms of GnRH play a discriminating role in selecting receptor conformational states. The human GnRH receptor has a higher affinity for the cognate GnRH I but a lower affinity for GnRH II and GnRHs from other species possessing substitutions for Arg(8). The latter were partial agonists at the human GnRH receptor. Mutation of Asn(7.45) in transmembrane domain (TM) 7 had no effect on GnRH I affinity but specifically increased affinity for other GnRHs and converted them to full agonists. Using molecular modeling and site-directed mutagenesis, we demonstrated that the highly conserved Asn(7.45) makes intramolecular interactions with a highly conserved Cys(6.47) in TM 6, suggesting that disruption of this intramolecular interaction induces a receptor conformational change which allosterically alters ligand-specific binding sites and changes ligand selectivity and signaling efficacy. These results reveal GnRH ligand and receptor structural elements for conformational selection, and support co-evolution of GnRH ligand and receptor conformations.

    Robust Communication-Optimal Distributed Clustering Algorithms

    In this work, we study the k-median and k-means clustering problems when the data is distributed across many servers and can contain outliers. While there has been a lot of work on these problems for worst-case instances, we focus on gaining a finer understanding through the lens of beyond-worst-case analysis. Our main motivation is the following: for many applications, such as clustering proteins by function or clustering communities in a social network, there is some unknown target clustering, and the hope is that running a k-median or k-means algorithm will produce clusterings which are close to matching the target clustering. Worst-case results can guarantee constant-factor approximations to the optimal k-median or k-means objective value, but not closeness to the target clustering. Our first result is a distributed algorithm which returns a near-optimal clustering assuming a natural notion of stability, namely, approximation stability [Awasthi and Balcan, 2014], even when a constant fraction of the data are outliers. The communication complexity is O~(sk+z), where s is the number of machines, k is the number of clusters, and z is the number of outliers. Next, we show this amount of communication cannot be improved even in the setting when the input satisfies various non-worst-case assumptions. We give a matching Omega(sk+z) lower bound on the communication required both for approximating the optimal k-means or k-median cost up to any constant, and for returning a clustering that is close to the target clustering in Hamming distance. These lower bounds hold even when the data satisfies approximation stability or other common notions of stability, and the cluster sizes are balanced. Therefore, Omega(sk+z) is a communication bottleneck, even for real-world instances.
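
    The following is a hedged sketch of where the O~(sk+z) communication comes from, not the paper's algorithm or its stability-based guarantees: each of the s machines ships a small summary (k local centers plus its most isolated points), and a coordinator clusters the pooled summaries while discarding the z worst points. All function names and parameters here are illustrative.

```python
# Hedged sketch of the O(sk + z) communication pattern for distributed
# clustering with outliers; illustrates the message structure only.
import numpy as np
from sklearn.cluster import KMeans

def local_summary(points: np.ndarray, k: int, z_local: int):
    """One machine's message: k local centers + z_local outlier candidates."""
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(points)
    dists = np.min(km.transform(points), axis=1)       # distance to own center
    outliers = points[np.argsort(dists)[-z_local:]]    # most isolated points
    return km.cluster_centers_, outliers

def coordinator(summaries, k: int, z: int):
    """Cluster the union of all summaries, discarding the z farthest points."""
    pooled = np.vstack([np.vstack(s) for s in summaries])
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(pooled)
    dists = np.min(km.transform(pooled), axis=1)
    keep = pooled[np.argsort(dists)[:-z]] if z else pooled
    return KMeans(n_clusters=k, n_init=5, random_state=0).fit(keep).cluster_centers_

rng = np.random.default_rng(0)
machines = [rng.normal(size=(200, 2)) + c for c in ([0, 0], [5, 5], [10, 0])]
summaries = [local_summary(p, k=3, z_local=5) for p in machines]
print(coordinator(summaries, k=3, z=10))
```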

    Detection of the Entropy of the Intergalactic Medium: Accretion Shocks in Clusters, Adiabatic Cores in Groups

    The thermodynamics of the diffuse, X-ray emitting gas in clusters of galaxies is linked to the entropy level of the intracluster medium. In particular, models that successfully reproduce the properties of local X-ray clusters and groups require the presence of a minimum value for the entropy in the centers of X-ray halos. Such a minimum entropy is most likely generated by non-gravitational processes, which are needed to produce the observed break in the self-similarity of the scaling relations of X-ray halos. At present there is no consensus on the level, the source or the time evolution of this excess entropy. In this paper we describe a strategy to investigate the physics of the heating processes acting in groups and clusters. We show that the best way to extract information from the local data is the observation of the entropy profile at large radii in nearby X-ray halos (z~0.1), at both the upper and lower extremes of the cluster mass scale. The spatially and spectrally resolved observation of such X-ray halos provides information on the mechanism of the heating. We demonstrate how measurements of the size of constant-entropy (adiabatic) cores in clusters and groups can directly constrain heating models and the minimum entropy value. We also consider two specific experiments: the detection of the shock fronts expected at the virial boundary of rich clusters, and the detection of the isentropic, low surface-brightness emission extending beyond the virial radius in low-mass clusters and groups. Such observations will be a crucial probe of both the physics of clusters and the relationship of non-gravitational processes to the thermodynamics of the intergalactic medium.
    Comment: ApJ accepted, 31 pages including 8 figures. Important material added; references updated
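
    For reference, the "entropy" in this literature is the X-ray astronomy adiabat rather than a thermodynamic entropy; a standard convention (symbols as usually defined, not quoted from this paper) is

```latex
% Conventional ICM "entropy": constant along adiabats, so an adiabatic
% core is a region where K(r) is flat.
K \equiv \frac{k_B T}{n_e^{2/3}}, \qquad s \propto \ln K^{3/2}
```

    where T is the gas temperature and n_e the electron number density; a "minimum entropy" is then a floor on K in halo centers.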

    Cosmological Constraints from the ROSAT Deep Cluster Survey

    The ROSAT Deep Cluster Survey (RDCS) has provided a new, large, deep sample of X-ray selected galaxy clusters. Observables such as the flux number counts n(S), the redshift distribution n(z) and the X-ray luminosity function (XLF) over a large redshift baseline (z \lesssim 0.8) are used here to constrain cosmological models. Our analysis is based on the Press-Schechter approach, whose reliability is tested against N-body simulations. Following a phenomenological approach, no assumption is made a priori on the relation between cluster masses and observed X-ray luminosities. As a first step, we use the local XLF from the RDCS, along with the high-luminosity extension provided by the XLF from the BCS, to constrain the amplitude of the power spectrum, \sigma_8, and the shape of the local luminosity-temperature (L-T) relation. We obtain \sigma_8=0.58 +/- 0.06 for \Omega_0=1 for open models at the 90% confidence level, almost independent of the L-T shape. The density parameter \Omega_0 and the evolution of the L-T relation are constrained by the RDCS XLF at z>0 and the EMSS XLF at z=0.33, and by the RDCS n(S) and n(z) distributions. Modelling the evolution of the amplitude of the L-T relation as (1+z)^A, an \Omega_0=1 model can be accommodated by the evolution of the XLF with 1<A<3 at the 90% confidence level, while \Omega_0=0.4^{+0.3}_{-0.2} and \Omega_0<0.6 are implied by a non-evolving L-T relation for open and flat models, respectively.
    Comment: 12 pages, 9 colour figures, LaTeX, uses apj.sty, ApJ, in press, May 20 issue
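
    For orientation, the two ingredients named in the abstract are the Press-Schechter mass function and a power-law L-T evolution; in their standard forms (general textbook expressions, not equations quoted from this paper):

```latex
% Press-Schechter mass function and the phenomenological L-T evolution:
n(M)\,dM = \sqrt{\frac{2}{\pi}}\,\frac{\bar\rho}{M^{2}}\,
           \frac{\delta_c}{\sigma(M)}
           \left|\frac{d\ln\sigma}{d\ln M}\right|
           \exp\!\left(-\frac{\delta_c^{2}}{2\sigma^{2}(M)}\right)\,dM,
\qquad
L_X \propto T^{\alpha}\,(1+z)^{A}
```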

    The z=5 Quasar Luminosity Function from SDSS Stripe 82

    We present a measurement of the Type I quasar luminosity function at z=5 using a large sample of spectroscopically confirmed quasars selected from optical imaging data. We measure the bright end (M_1450<-26) with Sloan Digital Sky Survey (SDSS) data covering ~6000 deg^2, then extend to lower luminosities (M_1450<-24) with newly discovered, faint z~5 quasars selected from 235 deg^2 of deep, coadded imaging in the SDSS Stripe 82 region (the celestial equator in the Southern Galactic Cap). The faint sample includes 14 quasars with spectra obtained as ancillary science targets in the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS), and 59 quasars observed at the MMT and Magellan telescopes. We construct a well-defined sample of 4.7<z<5.1 quasars that is highly complete, with 73 spectroscopic identifications out of 92 candidates. Our color selection method is also highly efficient: of the 73 spectra obtained, 71 are high-redshift quasars. These observations reach below the break in the luminosity function (M_1450* ~ -27). The bright-end slope is steep (beta <~ -4), with a constraint of beta < -3.1 at 95% confidence. The break luminosity appears to evolve strongly at high redshift, providing an explanation for the flattening of the bright-end slope reported previously. We find a factor of ~2 greater decrease in the number density of luminous quasars (M_1450<-26) from z=5 to z=6 than from z=4 to z=5, suggesting a more rapid decline in quasar activity at high redshift than found in previous surveys. Our model for the quasar luminosity function predicts that quasars generate ~30% of the ionizing photons required to keep the universe ionized at z=5.
    Comment: 29 pages, 22 figures, ApJ accepted (updated to published version)
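
    The quoted break luminosity M_1450* and bright-end slope beta refer to the conventional double power-law fit to the quasar luminosity function (standard parameterization, with symbols as usually defined rather than quoted from this paper):

```latex
% Double power-law quasar luminosity function in absolute magnitude:
% alpha = faint-end slope, beta = bright-end slope, M* = break magnitude.
\Phi(M_{1450}) =
  \frac{\Phi^{*}}
       {10^{\,0.4(\alpha+1)(M_{1450}-M^{*}_{1450})}
      + 10^{\,0.4(\beta+1)(M_{1450}-M^{*}_{1450})}}
```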

    CORE and the Haldane Conjecture

    The Contractor Renormalization group formalism (CORE) is a real-space renormalization group method which is the Hamiltonian analogue of the Wilson exact renormalization group equations. In an earlier paper [QGAF] I showed that CORE could be used to map a theory of free quarks, and quarks interacting with gluons, into a generalized frustrated Heisenberg antiferromagnet (HAF), and proposed using CORE methods to study these theories. Since generalizations of HAFs exhibit all sorts of subtle behavior which, from a continuum point of view, are related to topological properties of the theory, it is important to know that CORE can be used to extract this physics. In this paper I show that, despite the folklore which asserts that all real-space renormalization group schemes are necessarily inaccurate, simple CORE computations can give highly accurate results even if one only keeps a small number of states per block and a few terms in the cluster expansion. In addition, I argue that even very simple CORE computations give a much better qualitative understanding of the physics than naive renormalization group methods. In particular, I show that the simplest CORE computation yields a first-principles understanding of how the famous Haldane conjecture works for the case of the spin-1/2 and spin-1 HAF.
    Comment: 36 pages, 4 figures, 5 tables, latex; extensive additions to content
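
    For orientation, the CORE effective Hamiltonian is built from a projector P onto the retained block states; in the standard Morningstar-Weinstein formulation (reproduced here from memory as a sketch, not quoted from this paper):

```latex
% CORE renormalized Hamiltonian: contract with T(t) = e^{-tH} P and
% orthonormalize, then take t -> infinity; H_ren is then expanded in
% connected cluster terms, truncated at short range in practice.
T(t) = e^{-tH} P, \qquad
H^{\mathrm{ren}} = \lim_{t\to\infty}
  \left[T^{\dagger}(t)\,T(t)\right]^{-1/2} T^{\dagger}(t)\, H\, T(t)
  \left[T^{\dagger}(t)\,T(t)\right]^{-1/2}
```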