    Testing the Solar Probe Cup, an Instrument Designed to Touch the Sun

    Solar Probe Plus will be the first, fastest, and closest mission to the Sun, providing the first direct sampling of the sub-Alfvénic corona. The Solar Probe Cup (SPC) is a unique re-imagining of the traditional Faraday Cup design and materials for immersion in this high-temperature environment. Sending an instrument of this type into a never-before-sampled particle environment requires extensive characterization prior to launch to establish sufficient measurement accuracy and instrument response. To this end, a suite of tests was created to allow SPC to see appropriate ranges of ions and electrons, along with a facility that reproduces the solar photon spectra and fluxes expected for this mission. Having already tested the SPC at flight-like temperatures with no significant change in the noise floor, we recently completed a round of particle testing to determine whether the deviations from the traditional Faraday Cup design fundamentally change the operation of the instrument. Results and implications from these tests will be presented, along with performance comparisons to cousin instruments such as those on the WIND spacecraft.

    Competition between Primary Nucleation and Autocatalysis in Amyloid Fibril Self-Assembly

    Kinetic measurements of the self-assembly of proteins into amyloid fibrils are often used to make inferences about molecular mechanisms. In particular, the lag time—the quiescent period before aggregates are detected—is often found to scale with the protein concentration as a power law, whose exponent has been used to infer the presence or absence of autocatalytic growth processes such as fibril fragmentation. Here we show that experimental data for lag time versus protein concentration can show signs of kinks: clear changes in scaling exponent, indicating changes in the dominant molecular mechanism determining the lag time. Classical models for the kinetics of fibril assembly suggest that at least two mechanisms are at play during the lag time: primary nucleation and autocatalytic growth. Using computer simulations and theoretical calculations, we investigate whether the competition between these two processes can account for the kinks which we observe in our and others’ experimental data. We derive theoretical conditions for the crossover between nucleation-dominated and growth-dominated regimes, and analyze their dependence on system volume and autocatalysis mechanism. Comparing these predictions to the data, we find that the experimentally observed kinks cannot be explained by a simple crossover between nucleation-dominated and autocatalytic growth regimes. Our results show that existing kinetic models fail to explain detailed features of lag time versus concentration curves, suggesting that new mechanistic understanding is needed. More broadly, our work demonstrates that care is needed in interpreting lag-time scaling exponents from protein assembly data.
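
    To make the lag-time power law concrete, here is a minimal Python sketch (not the authors' code) of a classical moment-closure model combining primary nucleation with fragmentation-driven autocatalysis; the rate constants, critical nucleus size n_c, and the 10% detection threshold are illustrative assumptions. It integrates the kinetic equations over a range of total monomer concentrations and fits the scaling exponent gamma in t_lag ∝ c^gamma.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative rate constants (assumed, not fitted to any data set)
        K_PLUS = 5e4    # elongation rate constant, M^-1 s^-1
        K_N = 2e-5      # primary nucleation rate constant, M^-(n_c - 1) s^-1
        K_MINUS = 2e-8  # fragmentation rate constant, s^-1
        N_C = 2         # critical nucleus size

        def rhs(t, y, m_tot):
            """Moment equations: P = fibril number, M = fibril mass concentration."""
            P, M = y
            m = max(m_tot - M, 0.0)          # free monomer by mass conservation
            dP = K_N * m**N_C + K_MINUS * M  # new fibrils: nucleation + fragmentation
            dM = 2 * K_PLUS * m * P          # elongation at both fibril ends
            return [dP, dM]

        def lag_time(m_tot, threshold=0.1):
            """Time for fibril mass to reach `threshold` of the total monomer."""
            event = lambda t, y, mt: y[1] - threshold * mt
            event.terminal, event.direction = True, 1
            sol = solve_ivp(rhs, (0.0, 1e7), [0.0, 0.0], args=(m_tot,),
                            events=event, rtol=1e-8, atol=1e-14)
            return sol.t_events[0][0] if sol.t_events[0].size else np.nan

        concs = np.logspace(-6, -3, 12)  # total monomer concentrations, M
        lags = np.array([lag_time(c) for c in concs])
        gamma = np.polyfit(np.log(concs), np.log(lags), 1)[0]  # log-log slope
        print(f"fitted lag-time scaling exponent: {gamma:.2f}")

    With these placeholder parameters the fitted exponent should land near the fragmentation-dominated value of roughly -1/2; a kink of the kind discussed above would appear as a concentration-dependent change in this slope.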

    Creative destruction in science

    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents' reasoning about day care options, and gender discrimination in hiring decisions. Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void, reducing confidence that the original theoretical prediction is true but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building. Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

    Comprehensive Rare Variant Analysis via Whole-Genome Sequencing to Determine the Molecular Pathology of Inherited Retinal Disease

    Inherited retinal disease is a common cause of visual impairment and represents a highly heterogeneous group of conditions. Here, we present findings from a cohort of 722 individuals with inherited retinal disease, who have had whole-genome sequencing (n = 605), whole-exome sequencing (n = 72), or both (n = 45) performed, as part of the NIHR-BioResource Rare Diseases research study. We identified pathogenic variants (single-nucleotide variants, indels, or structural variants) for 404/722 (56%) individuals. Whole-genome sequencing gives unprecedented power to detect three categories of pathogenic variants in particular: structural variants, variants in GC-rich regions, which have significantly improved coverage compared to whole-exome sequencing, and variants in non-coding regulatory regions. In addition to previously reported pathogenic regulatory variants, we have identified a previously unreported pathogenic intronic variant in CHM in two males with choroideremia. We have also identified 19 genes not previously known to be associated with inherited retinal disease, which harbor biallelic predicted protein-truncating variants in unsolved cases. Whole-genome sequencing is an increasingly important comprehensive method with which to investigate the genetic causes of inherited retinal disease. This work was supported by The National Institute for Health Research England (NIHR) for the NIHR BioResource – Rare Diseases project (grant number RG65966). The Moorfields Eye Hospital cohort of patients and clinical and imaging data were ascertained and collected with the support of grants from the National Institute for Health Research Biomedical Research Centre at Moorfields Eye Hospital, National Health Service Foundation Trust, and UCL Institute of Ophthalmology, Moorfields Eye Hospital Special Trustees, Moorfields Eye Charity, the Foundation Fighting Blindness (USA), and Retinitis Pigmentosa Fighting Blindness. M.M. is a recipient of an FFB Career Development Award. E.M. is supported by UCLH/UCL NIHR Biomedical Research Centre. F.L.R. and D.G. are supported by Cambridge NIHR Biomedical Research Centre.

    Risk Portfolio Optimization Using the Markowitz MVO Model, in Relation to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, as humans are unable to predict the future precisely, a limitation described in the Al-Qur'an (surah Luqman, verse 34), they have to manage risk to yield an optimal portfolio. The objective here is to minimize the variance among all portfolios, or alternatively, to maximize the expected return among all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio using Markowitz MVO (Mean-Variance Optimization). The theoretical frameworks for the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio produces a convex quadratic program: minimize the objective function xᵀΣx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the solution of the optimal risk portfolio for several investments, obtained using the MATLAB R2007b software together with its graphical analysis.
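
    As an illustration of the quadratic program above, the following minimal Python sketch (standing in for the study's MATLAB R2007b implementation) solves a three-asset mean-variance problem; the expected returns, covariance matrix, and target return are placeholder values, and the full-investment constraint plays the role of Ax = b.

        import numpy as np
        from scipy.optimize import minimize

        # Placeholder inputs for three assets (assumed, not from the study)
        mu = np.array([0.08, 0.12, 0.10])          # expected returns
        Sigma = np.array([[0.10, 0.02, 0.01],      # covariance matrix
                          [0.02, 0.15, 0.03],
                          [0.01, 0.03, 0.12]])
        r = 0.10                                   # required expected return

        variance = lambda x: x @ Sigma @ x         # objective: x' Sigma x
        constraints = [
            {"type": "ineq", "fun": lambda x: mu @ x - r},    # mu' x >= r
            {"type": "eq",   "fun": lambda x: np.sum(x) - 1}, # weights sum to 1
        ]
        bounds = [(0.0, 1.0)] * len(mu)            # long-only, an added assumption

        x0 = np.full(len(mu), 1.0 / len(mu))       # start from equal weights
        res = minimize(variance, x0, method="SLSQP", bounds=bounds,
                       constraints=constraints)
        print("optimal weights:", np.round(res.x, 4))
        print("portfolio variance:", round(res.fun, 6))
        print("expected return:", round(float(mu @ res.x), 4))

    Because Sigma here is positive definite and the constraints are linear, this is a convex quadratic program with a unique minimum-variance solution.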

    Differential cross section measurements for the production of a W boson in association with jets in proton–proton collisions at √s = 7 TeV

    Measurements are reported of differential cross sections for the production of a W boson, which decays into a muon and a neutrino, in association with jets, as a function of several variables, including the transverse momenta (pT) and pseudorapidities of the four leading jets, the scalar sum of jet transverse momenta (HT), and the difference in azimuthal angle between the directions of each jet and the muon. The data sample of pp collisions at a centre-of-mass energy of 7 TeV was collected with the CMS detector at the LHC and corresponds to an integrated luminosity of 5.0 fb⁻¹. The measured cross sections are compared to predictions from the Monte Carlo generators MadGraph + pythia and sherpa, and to next-to-leading-order calculations from BlackHat + sherpa. The differential cross sections are found to be in agreement with the predictions, apart from the pT distributions of the leading jets at high pT values, the HT distributions at high HT and low jet multiplicity, and the distribution of the difference in azimuthal angle between the leading jet and the muon at low values. This work was supported by the United States Department of Energy, the National Science Foundation (U.S.), and the Alfred P. Sloan Foundation.

    Financial Performance Assessment of Cooperatives in Pelalawan District

    This paper describes the development and financial performance of cooperatives in Pelalawan District during 2007-2008, covering primary and secondary cooperatives in 12 sub-districts. The study measures performance in terms of productivity, efficiency, growth, liquidity, and solvency. The productivity of cooperatives in Pelalawan was high, but their efficiency was still low. Profit and income were high, liquidity was very high, and solvency was good.