
    Blind component separation for polarized observations of the CMB

    We present the PolEMICA (Polarized Expectation-Maximization Independent Component Analysis) algorithm, an extension to polarization of the SMICA (Spectral Matching Independent Component Analysis) temperature multi-detector multi-component (MD-MC) separation method (Delabrouille et al. 2003). The algorithm blindly estimates, in harmonic space, multiple physical components from multi-detector polarized sky maps. Assuming a linear noisy mixture of components, we jointly reconstruct the electromagnetic spectra of the component anisotropies for each mode (T, E and B), as well as the temperature and polarization spatial power spectra (TT, EE, BB, TE, TB and EB) for each physical component and for the noise on each detector. PolEMICA is developed specifically to estimate the CMB temperature and polarization power spectra from sky observations containing both CMB and foreground emissions. It has been tested intensively, as a first approach, on full-sky simulations of the Planck satellite polarized channels for a 14-month nominal mission, assuming a simplified linear sky model that includes CMB and, optionally, Galactic synchrotron emission and a Gaussian dust emission. Finally, we apply the algorithm to more realistic Planck full-sky simulations, including synchrotron, realistic dust and free-free emissions.
    Comment: 20 pages, 21 figures, 1 table, TeX file; accepted for publication in MNRAS
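
    The spectral-matching idea underlying SMICA-style separation can be illustrated with a small numerical sketch. The following is not the authors' PolEMICA code: the dimensions, the simulated mixture and the plain least-squares mismatch (in place of the full expectation-maximization likelihood) are illustrative assumptions.

```python
# A minimal sketch (not the authors' PolEMICA code) of spectral matching for a
# linear mixture: multi-detector spectra are modelled as R_b = A C_b A^T + N,
# and the mixing matrix A, component band-powers C_b and noise levels N are
# fitted by least squares.  Dimensions and data are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_det, n_comp, n_bins = 4, 2, 10           # detectors, components, spectral bins

# Simulated "truth": mixing matrix, component band-powers, white noise levels.
A_true = rng.normal(size=(n_det, n_comp))
C_true = np.abs(rng.normal(size=(n_bins, n_comp)))   # diagonal component spectra
N_true = 0.1 * np.ones(n_det)

# Empirical detector-detector cross spectra in each spectral bin.
R_data = np.array([(A_true * C_true[b]) @ A_true.T + np.diag(N_true)
                   for b in range(n_bins)])

def unpack(theta):
    A = theta[:n_det * n_comp].reshape(n_det, n_comp)
    C = theta[n_det * n_comp:n_det * n_comp + n_bins * n_comp].reshape(n_bins, n_comp)
    N = theta[-n_det:]
    return A, C, N

def mismatch(theta):
    """Sum of squared differences between model and measured cross spectra."""
    A, C, N = unpack(theta)
    return sum(np.sum(((A * C[b]) @ A.T + np.diag(N) - R_data[b]) ** 2)
               for b in range(n_bins))

theta0 = rng.normal(size=n_det * n_comp + n_bins * n_comp + n_det)
res = minimize(mismatch, theta0, method="L-BFGS-B")
A_fit, C_fit, N_fit = unpack(res.x)

# Blind separation is only defined up to scale/permutation degeneracies,
# so compare the reconstructed cross spectra rather than A itself.
print("residual spectral mismatch:", res.fun)
```

    PolEMICA generalizes this kind of spectral matching to the full set of T, E and B auto- and cross-spectra described in the abstract.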

    An Approach to Evaluate Software Effectiveness

    The Air Force Operational Test and Evaluation Center (AFOTEC) is tasked with evaluating the operational effectiveness of new systems for the Air Force. Currently, the software analysis team within AFOTEC has no methodology to directly address the effectiveness of the software portion of these new systems. This research develops a working definition of software effectiveness, then outlines an approach to evaluate it: the Software Effectiveness Traceability Approach (SETA). Effectiveness is defined as the degree to which the software requirements are satisfied and is therefore application-independent. With SETA, requirements satisfaction is measured by the degree of traceability throughout the software development effort. A degree of traceability is determined for specific pairs of software life-cycle phases, such as the traceability from software requirements to high-level design, or from low-level design to code. The degrees of traceability are then combined into an overall software effectiveness value. It is shown that SETA can be implemented in a simplified database, and basic database operations are described to retrieve traceability information and quantify the software's effectiveness. SETA is demonstrated using actual software development data from a small software component of the avionics subsystem of the C-17, the Air Force's newest transport aircraft.
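
    As a rough illustration of combining pairwise traceability degrees into a single effectiveness value, here is a hypothetical sketch; the link data, the dictionary-based "database" and the equal-weight averaging are assumptions for illustration, not AFOTEC's actual SETA tooling.

```python
# Hypothetical sketch of SETA-style traceability scoring (not the actual AFOTEC
# database): each phase pair maps upstream items to the downstream items they
# trace to; the degree of traceability is the fraction of upstream items with
# at least one downstream link, and the overall effectiveness value is taken
# here (by assumption) as the equally weighted mean of the pairwise degrees.
trace_links = {
    ("requirements", "high_level_design"): {
        "REQ-1": ["HLD-1"], "REQ-2": ["HLD-2", "HLD-3"], "REQ-3": [],
    },
    ("high_level_design", "low_level_design"): {
        "HLD-1": ["LLD-1"], "HLD-2": ["LLD-2"], "HLD-3": ["LLD-3"],
    },
    ("low_level_design", "code"): {
        "LLD-1": ["mod_a.c"], "LLD-2": [], "LLD-3": ["mod_b.c"],
    },
}

def degree_of_traceability(links):
    """Fraction of upstream items that trace to at least one downstream item."""
    return sum(bool(targets) for targets in links.values()) / len(links)

degrees = {pair: degree_of_traceability(links) for pair, links in trace_links.items()}
effectiveness = sum(degrees.values()) / len(degrees)

for (src, dst), d in degrees.items():
    print(f"{src} -> {dst}: degree of traceability = {d:.2f}")
print(f"overall software effectiveness value = {effectiveness:.2f}")
```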

    Kinematic Modeling of the Determinants of Diastolic Function

    Multiple modalities are routinely used in clinical cardiology to determine cardiovascular function, and many of the indexes derived from these modalities are causally interconnected. A correlative approach to cardiovascular function, however, where indexes are correlated with disease presence and progression, fails to fully capitalize on the information content of the indexes. Causal quantitative modeling of cardiovascular physiology, on the other hand, offers a predictive rather than accommodative approach to determining cardiovascular function. In this work we apply a kinematic modeling approach to understanding diastolic function. We discuss novel insights related to the physiological determinants of diastolic function, and define novel causal indexes of diastolic function that go beyond the limitations of current established clinical indexes. Diastolic function is typically characterized by physiologists and cardiologists as being determined by the interplay between chamber stiffness, chamber relaxation/viscoelasticity, and chamber filling volume or load. In this work we provide a kinematic-modeling-based analysis of each of these clinical determinants of diastolic function. Considering the kinematic elastic (stiffness) components of filling, we argue for the universality of diastolic suction and define a novel in-vivo equilibrium volume. Application of this equilibrium volume in the clinical setting yields a novel approach to the determination of global chamber stiffness. Considering the viscoelastic components of filling, we demonstrate the limitations associated with ignoring viscoelastic effects, an assumption often made in the clinical setting. We extend the viscoelastic component of filling into the invasive hemodynamic domain, and demonstrate the causal link between invasively recorded LV pressure and noninvasively recorded transmitral flow by describing a method for extracting flow contours from pressure signals alone. Finally, in considering load, we solve the problem of load dependence in diastolic function analysis. Indeed, all traditional clinical indexes of diastolic function are load dependent, and are therefore imperfect indexes of intrinsic diastolic function. Applying kinematic modeling, we derive a load-independent index of diastolic function. Validation involves showing that the index is indeed load independent and can differentiate between control and diastolic dysfunction states. We apply this analysis to derive surrogates for filling pressure, and generalize the kinematic modeling approach to the analysis of isovolumic relaxation. To aid widespread adoption of the load-independent index, we derive and validate simplified expressions for model-based physiological parameters of diastolic function. Our goal is to provide a causal approach to cardiovascular function analysis based on how things move, to explain prior phenomenological observations of others under a single causal paradigm, to discover 'new physiology', to facilitate the discovery of more robust indexes of cardiovascular function, and to provide a means for widespread adoption of the kinematic modeling approach suitable for the general clinical setting.
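
    A minimal sketch of the kind of kinematic model referred to here: filling treated as the recoil of a damped linear oscillator, with the resulting velocity transient standing in for the early transmitral (E-wave) flow contour. The parameter values are illustrative assumptions, not clinically fitted quantities.

```python
# Minimal sketch (illustrative parameters, not clinically fitted values) of a
# kinematic filling model: recoil of a damped linear oscillator, whose velocity
# transient plays the role of the early transmitral (E-wave) flow contour.
import numpy as np
from scipy.integrate import solve_ivp

k = 200.0    # stiffness analogue (1/s^2)
c = 18.0     # relaxation / viscoelastic damping analogue (1/s)
x0 = 10.0    # initial displacement: load analogue (arbitrary units)

def oscillator(t, y):
    x, v = y
    return [v, -c * v - k * x]           # x'' + c x' + k x = 0

sol = solve_ivp(oscillator, (0.0, 0.6), [x0, 0.0], dense_output=True, max_step=1e-3)
t = np.linspace(0.0, 0.6, 601)
velocity = sol.sol(t)[1]

peak_velocity = np.max(np.abs(velocity))   # analogue of the peak E-wave velocity
print(f"peak filling velocity (model units): {peak_velocity:.2f}")
print(f"underdamped regime: {c**2 < 4 * k}")   # oscillatory vs over-damped filling
```

    In this picture, varying the initial displacement plays the role of varying load while stiffness and damping are held fixed, which is the kind of manipulation a load-independent index is built around.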

    On the Effective Description of Large Volume Compactifications

    We study the reliability of the two-step moduli stabilization in type-IIB Large Volume Scenarios with matter and gauge interactions. The general analysis is based on a family of N=1 supergravity models with a factorizable Kaehler invariant function, where the decoupling between two sets of fields without a mass hierarchy is easily understood. For the Large Volume Scenario, particular analyses are performed for explicit models, one of which is developed here for the first time. We find that the simplified version, in which the dilaton and complex structure moduli are regarded as frozen by a previous stabilization, is a reliable supersymmetric description whenever the neglected fields sit at their leading-order F-flatness conditions and are neutral. The terms missed by the simplified approach are either suppressed by powers of the Calabi-Yau volume or are higher-order operators in the matter fields, and are therefore irrelevant for the moduli stabilization procedure. Although the power of the volume suppressing such corrections depends on the particular model, up to the mass level it is independent of the modular weight of the matter fields; this holds at least for the models studied here, but we give arguments to expect the same in general. These claims are checked through numerical examples. We discuss how the factorizable models provide a context where, despite the lack of a hierarchy with the supersymmetry breaking scale, the effective theory still has a supersymmetric description. This can be understood from the fact that it is possible to find vanishing solutions for the auxiliary components of the fields being integrated out, independently of the remaining dynamics. Our results settle the question of the reliability of the way the dilaton and complex structure moduli are treated in type-IIB compactifications with large compactification volumes.
    Comment: 23 pages + 2 appendices (38 pages total). v2: minor improvements, typos fixed. Version published in JHEP

    Model simplification of signal transduction pathway networks via a hybrid inference strategy

    A full-scale mathematical model of cellular networks normally involves a large number of variables and parameters. Developing manageable and reliable models is crucial for effective computation, analysis and design of such systems. The aim of model simplification is to eliminate parts of a model that are unimportant for the properties of interest. In this work, a model reduction strategy based on hybrid inference is proposed for signalling pathway networks. It integrates multiple techniques, including conservation analysis, local sensitivity analysis, principal component analysis and flux analysis, to identify the reactions and variables that can be considered for elimination from the full-scale model. Using an IκB-NF-κB signalling pathway model as an example, simulation analysis demonstrates that the simplified model quantitatively predicts the dynamic behaviours of the network.
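
    The ranking step of such a hybrid strategy can be sketched as follows; the toy two-state model, the finite-difference sensitivities and the PCA-based importance score are assumptions made for illustration and are not the paper's IκB-NF-κB model.

```python
# Illustrative sketch of sensitivity + PCA ranking for model reduction (the toy
# two-state model and the scoring rule are assumptions, not the NF-kB model).
import numpy as np
from scipy.integrate import solve_ivp

params = {"k1": 1.0, "k2": 0.5, "k3": 0.05, "k4": 2.0}

def model(t, y, p):
    a, b = y
    return [p["k1"] - p["k2"] * a - p["k3"] * a * b,
            p["k3"] * a * b - p["k4"] * b]

t_eval = np.linspace(0.0, 20.0, 100)

def trajectory(p):
    sol = solve_ivp(model, (0.0, 20.0), [0.0, 1.0], args=(p,), t_eval=t_eval)
    return sol.y[0]                        # observe the first species

base = trajectory(params)

# Local (finite-difference) relative sensitivities of the observed trajectory.
S = []
for name in params:
    perturbed = dict(params)
    perturbed[name] *= 1.01                # +1% perturbation
    S.append((trajectory(perturbed) - base) / (0.01 * np.abs(base) + 1e-12))
S = np.array(S).T                          # shape: (time points, parameters)

# PCA via SVD: score each parameter by its loadings weighted by singular values.
U, s, Vt = np.linalg.svd(S - S.mean(axis=0), full_matrices=False)
scores = np.sqrt((s[:, None] ** 2 * Vt ** 2).sum(axis=0))

for name, score in sorted(zip(params, scores), key=lambda item: -item[1]):
    print(f"{name}: importance score {score:.3f}")
# Parameters (and the reactions they govern) with low scores are the natural
# candidates for elimination or lumping in the simplified model.
```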

    Ramsauer approach for light scattering on non-absorbing spherical particles and application to the Henyey-Greenstein phase function

    We present a new method to study light scattering on non-absorbing spherical particles. The method is based on the Ramsauer approach, a model known in atomic and nuclear physics. Its main advantage is the intuitive understanding it provides of the underlying physical phenomena. We show that, although the approximations are numerous, the Ramsauer analytical solutions describe fairly well the scattering phase function and the total cross section. The model is then applied to the Henyey-Greenstein parameterisation of the scattering phase function to give a relation between its asymmetry parameter and the mean particle size.
    Comment: 25 pages, 12 figures, journal paper, accepted in Applied Optics. arXiv admin note: text overlap with arXiv:0903.297
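
    The Henyey-Greenstein phase function mentioned above is fully specified by its asymmetry parameter g; the short check below (plain numerical integration, with an illustrative value of g) verifies that the mean cosine of the scattering angle reproduces g.

```python
# Henyey-Greenstein phase function and a numerical check that its mean
# scattering cosine equals the asymmetry parameter g (illustrative g value).
import numpy as np

def henyey_greenstein(cos_theta, g):
    """HG phase function, normalised so its integral over solid angle is 1."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

def integrate(y, x):
    """Explicit trapezoidal rule (kept simple and NumPy-version agnostic)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

g = 0.7
mu = np.linspace(-1.0, 1.0, 20001)                  # mu = cos(scattering angle)
p = henyey_greenstein(mu, g)

norm = 2.0 * np.pi * integrate(p, mu)               # should be close to 1
mean_cosine = 2.0 * np.pi * integrate(mu * p, mu)   # should be close to g

print(f"normalisation check: {norm:.6f}")
print(f"mean cosine <cos(theta)>: {mean_cosine:.6f} (asymmetry parameter g = {g})")
```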

    Indirect Dark Matter Signatures in the Cosmic Dark Ages I. Generalizing the Bound on s-wave Dark Matter Annihilation from Planck

    Recent measurements of the cosmic microwave background (CMB) anisotropies by Planck provide a sensitive probe of dark matter annihilation during the cosmic dark ages, and specifically constrain the annihilation parameter $f_\mathrm{eff} \langle \sigma v \rangle/m_\chi$. Using new results (Paper II) for the ionization produced by particles injected at arbitrary energies, we calculate and provide $f_\mathrm{eff}$ values for photons and $e^+e^-$ pairs injected at keV-TeV energies; the $f_\mathrm{eff}$ value for any dark matter model can be obtained straightforwardly by weighting these results by the spectrum of annihilation products. This result allows the sensitive and robust constraints on dark matter annihilation presented by the Planck Collaboration to be applied to arbitrary dark matter models with $s$-wave annihilation. We demonstrate the validity of this approach using principal component analysis. As an example, we integrate over the spectrum of annihilation products for a range of Standard Model final states to determine the CMB bounds on these models as a function of dark matter mass, and demonstrate that the new limits generically exclude models proposed to explain the observed high-energy rise in the cosmic ray positron fraction. We make our results publicly available at http://nebel.rc.fas.harvard.edu/epsilon.
    Comment: 14 pages, 4 figures, supplemental data / tools available at http://nebel.rc.fas.harvard.edu/epsilon. Accompanying paper to "Indirect Dark Matter Signatures in the Cosmic Dark Ages II. Ionization, Heating and Photon Production from Arbitrary Energy Injections". v2 adds references, extra example in Fig. 4, and small updates from accompanying paper. This version to be submitted to Phys Rev
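
    The weighting step described above (weighting the tabulated $f_\mathrm{eff}(E)$ curves by the spectrum of annihilation products) can be sketched schematically; the curves, the injection spectra and the normalisation convention below are placeholder assumptions, not the tables published by the authors.

```python
# Schematic sketch of weighting tabulated f_eff(E) curves by an annihilation
# spectrum.  The curves, the injection spectra and the 1/(2 m_chi) energy
# normalisation used here are placeholder assumptions for illustration; the
# actual tables are provided at http://nebel.rc.fas.harvard.edu/epsilon.
import numpy as np

m_chi = 100.0                                    # dark matter mass [GeV], assumed
E = np.logspace(-3, np.log10(m_chi), 400)        # injection energies [GeV]

# Placeholder f_eff(E) curves for photons and e+e- pairs (toy monotonic shapes).
f_eff_gamma = 0.4 + 0.2 * np.log10(E / E[0]) / np.log10(E[-1] / E[0])
f_eff_epem = 0.5 + 0.3 * np.log10(E / E[0]) / np.log10(E[-1] / E[0])

# Placeholder injection spectra dN/dE per annihilation (toy exponential shapes).
dNdE_gamma = 10.0 * np.exp(-E / (0.1 * m_chi)) / m_chi
dNdE_epem = 2.0 * np.exp(-E / (0.2 * m_chi)) / m_chi

def integrate(y, x):
    """Explicit trapezoidal rule, kept simple and NumPy-version agnostic."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Energy-weighted average of the curves, normalised (by assumption here) to the
# total energy 2 m_chi released per annihilation.
f_eff_model = (integrate(E * f_eff_gamma * dNdE_gamma, E)
               + integrate(E * f_eff_epem * dNdE_epem, E)) / (2.0 * m_chi)
print(f"effective f_eff for this toy model: {f_eff_model:.3f}")
```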

    Simple connectome inference from partial correlation statistics in calcium imaging

    In this work, we propose a simple yet effective solution to the problem of connectome inference in calcium imaging data. The proposed algorithm consists of two steps. First, the raw signals are processed to detect neural peak activities. Second, the degree of association between neurons is inferred from partial correlation statistics. This paper summarises the methodology that led us to win the Connectomics Challenge, proposes a simplified version of our method, and compares our results with those of other inference methods.
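
    A minimal sketch of the two steps (peak detection followed by partial correlation scoring) on synthetic traces; the synthetic data, the thresholding rule and the ridge term are illustrative choices, and this is a simplification rather than the authors' winning pipeline.

```python
# Minimal sketch of the two-step idea: (1) turn raw fluorescence traces into
# activity signals by thresholding their increments, (2) score association
# between neurons by partial correlation, obtained from the inverse covariance
# (precision) matrix.  Synthetic data, threshold and ridge term are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_frames = 20, 5000

# Synthetic "fluorescence": random walks with two correlated pairs buried in noise.
latent = rng.normal(size=(n_neurons, n_frames))
latent[1] += 0.8 * latent[0]
latent[3] += 0.8 * latent[2]
traces = np.cumsum(0.1 * latent, axis=1) + rng.normal(scale=0.05,
                                                      size=(n_neurons, n_frames))

# Step 1: peak/activity detection from positive increments above a threshold.
increments = np.diff(traces, axis=1)
activity = (increments > np.percentile(increments, 90)).astype(float)

# Step 2: partial correlations from the precision matrix:
#   partial_corr_ij = -P_ij / sqrt(P_ii * P_jj)
cov = np.cov(activity) + 1e-6 * np.eye(n_neurons)    # small ridge for invertibility
precision = np.linalg.inv(cov)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Report the strongest candidate edge (off-diagonal only).
scores = np.triu(np.abs(partial_corr), k=1)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(f"strongest inferred link: neuron {i} -- neuron {j} "
      f"(partial correlation {partial_corr[i, j]:+.3f})")
```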

    Energy-based Analysis of Biochemical Cycles using Bond Graphs

    Thermodynamic aspects of chemical reactions have a long history in the physical chemistry literature. In particular, biochemical cycles, the building blocks of biochemical systems, require a source of energy to function. However, although fundamental, the role of chemical potential and Gibbs free energy in the analysis of biochemical systems is often overlooked, leading to models which are physically impossible. The bond graph approach was developed for modelling engineering systems in which energy generation, storage and transmission are fundamental. The method focuses on how power flows between components and how energy is stored, transmitted or dissipated within components. Based on early ideas of network thermodynamics, we have applied this approach to biochemical systems to generate models which automatically obey the laws of thermodynamics. We illustrate the method with examples of biochemical cycles. We have found that thermodynamically compliant models of simple biochemical cycles can easily be developed using this approach. In particular, both stoichiometric information and simulation models can be developed directly from the bond graph. Furthermore, model reduction and approximation while retaining structural and thermodynamic properties is facilitated. Because the bond graph approach is also modular and scalable, we believe that it provides a secure foundation for building thermodynamically compliant models of large biochemical networks.
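
    The energy-based formulation described here can be sketched for a three-species cycle using bond-graph style constitutive laws; the thermodynamically consistent (Marcelin-de Donder form) rate law and the parameter values below are illustrative assumptions rather than the paper's worked examples.

```python
# Sketch of a thermodynamically consistent three-species cycle A -> B -> C -> A
# built from bond-graph style constitutive laws: chemical potential
#   mu_i = RT * ln(K_i * x_i)
# and Marcelin-de Donder reaction flux
#   v = kappa * (exp(mu_forward / RT) - exp(mu_reverse / RT)).
# Parameter values are illustrative.  With no external energy supply the cycle
# must relax to equilibrium with zero net flux around the loop.
import numpy as np
from scipy.integrate import solve_ivp

RT = 1.0
K = np.array([1.0, 2.0, 0.5])        # species thermodynamic constants
kappa = np.array([1.0, 0.3, 0.7])    # reaction rate constants

def fluxes(x):
    mu = RT * np.log(K * x)                        # chemical potentials
    pairs = [(0, 1), (1, 2), (2, 0)]               # reactions A->B, B->C, C->A
    return np.array([kappa[r] * (np.exp(mu[i] / RT) - np.exp(mu[j] / RT))
                     for r, (i, j) in enumerate(pairs)])

def rhs(t, x):
    v = fluxes(x)
    # Each species is produced by the previous reaction and consumed by the next.
    return np.array([v[2] - v[0], v[0] - v[1], v[1] - v[2]])

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.1, 0.1], rtol=1e-9, atol=1e-12)
x_eq = sol.y[:, -1]
print("equilibrium concentrations:", np.round(x_eq, 4))
print("net fluxes at equilibrium:  ", np.round(fluxes(x_eq), 6))
print("total mass conserved:", bool(np.isclose(x_eq.sum(), 1.2)))
```

    Driving one of the reactions with an external chemical potential (an energy source) would sustain a nonzero steady cycling flux, which is the abstract's point that cycles require a source of energy to function.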