
    Methods of Solution of Second Order Linear Equations on Time Scales

    A time scale, T, is a nonempty, closed subset of the real numbers, R. Several methods of solution exist for second order linear equations on a time scale. An advantage of these methods is that we can obtain solutions on a system comprising continuous and/or discrete elements. After restricting the time scale to be R, these solutions are equivalent to those obtained using differential equations methods.
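    The equation class the abstract refers to can be written in standard time-scale notation, where the delta derivative y^Δ generalizes both the ordinary derivative (for T = R) and the forward difference (for T = Z); this is a standard textbook form, assuming the usual notation, not an equation quoted from the paper:

    ```latex
    y^{\Delta\Delta}(t) + p(t)\,y^{\Delta}(t) + q(t)\,y(t) = 0, \qquad t \in \mathbb{T}.
    ```

    For T = R this reduces to the familiar ODE y'' + p(t)y' + q(t)y = 0, and for T = Z it becomes the difference equation Δ²y(t) + p(t)Δy(t) + q(t)y(t) = 0, which is the sense in which one solution framework covers continuous and discrete systems at once.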


    Characterization of Three-Dimensional Spatial Aggregation and Association Patterns of Brown Rot Symptoms within Intensively Mapped Sour Cherry Trees

    Characterization of spatial patterns of plant disease can provide insights into important epidemiological processes such as sources of inoculum, mechanisms of dissemination, and reproductive strategies of the pathogen population. While two-dimensional patterns of disease (among plants within fields) have been studied extensively, there is limited information on three-dimensional patterns within individual plant canopies. Reported here are the detailed mapping of different symptom types of brown rot (caused by Monilinia laxa) in individual sour cherry tree (Prunus cerasus) canopies, and the application of spatial statistics to the resulting data points to determine patterns of symptom aggregation and association. Methods – A magnetic digitizer was utilized to create detailed three-dimensional maps of three symptom types (blossom blight, shoot blight, and twig canker) in eight sour cherry tree canopies during the green fruit stage of development. The resulting point patterns were analyzed for aggregation (within a given symptom type) and pairwise association (between symptom types) using a three-dimensional extension of nearest-neighbor analysis. Key Results – Symptoms of M. laxa infection were generally aggregated within the canopy volume, but there was no consistent pattern for one symptom type to be more or less aggregated than the other. Analysis of spatial association among symptom types indicated that the previous year's twig cankers may play an important role in influencing the spatial pattern of the current year's symptoms. This observation provides quantitative support for the epidemiological role of twig cankers as sources of primary inoculum within the tree. Conclusions – Presented here is a new approach to quantify spatial patterns of plant disease in complex fruit tree canopies using point pattern analysis. This work provides a framework for quantitative analysis of three-dimensional spatial patterns within the finite tree canopy, applicable to many fields of research.
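    The core quantity in a three-dimensional nearest-neighbor analysis is the mean distance from each mapped point to its closest neighbor, compared against random points in the same volume. The sketch below is a hedged illustration of that idea (the function name and the toy comparison are illustrative, not the authors' implementation or their edge-corrected statistic):

    ```python
    import numpy as np

    def mean_nn_distance(points):
        """Mean nearest-neighbor distance of a 3-D point pattern."""
        pts = np.asarray(points, dtype=float)
        diff = pts[:, None, :] - pts[None, :, :]   # pairwise displacement vectors
        d = np.sqrt((diff ** 2).sum(axis=-1))      # Euclidean distance matrix
        np.fill_diagonal(d, np.inf)                # exclude each point's self-distance
        return d.min(axis=1).mean()

    # Aggregation check (Monte Carlo flavor): a clustered pattern has a much
    # smaller mean NN distance than dispersed points in a comparable volume.
    rng = np.random.default_rng(0)
    clustered = mean_nn_distance(rng.normal(0.0, 0.2, size=(50, 3)))
    dispersed = mean_nn_distance(rng.uniform(-1.0, 1.0, size=(50, 3)))
    ```

    In practice one would repeat the random simulation many times within the digitized canopy envelope to get a null distribution, rather than a single draw.
    
    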

    Risk Portfolio Optimization Using the Markowitz MVO Model in Relation to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, given the human inability to predict the future precisely, as written in Al-Qur'an surah Luqman verse 34, they have to manage it to yield an optimal portfolio. The objective here is to minimize the variance among all portfolios that attain at least a certain expected return, or alternatively, to maximize the expected return among such portfolios. Furthermore, this study focuses on optimizing the risk portfolio using Markowitz MVO (Mean-Variance Optimization). The theoretical frameworks for the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Moreover, finding a minimum variance portfolio produces a convex quadratic program: minimize the objective function xᵀΣx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the solution of an optimal risk portfolio over several investments, computed using MATLAB R2007b software together with its graphical analysis.
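    The simplest instance of the quadratic program above is the global minimum-variance portfolio, where the return constraint μᵀx ≥ r is dropped and only the budget constraint 1ᵀx = 1 remains; it then has the closed form x = Σ⁻¹1 / (1ᵀΣ⁻¹1). The sketch below illustrates that special case in Python rather than the study's MATLAB setup, with an illustrative covariance matrix that is not from the paper:

    ```python
    import numpy as np

    def min_variance_weights(cov):
        """Global minimum-variance portfolio weights under 1'w = 1.

        Special case of min w'Σw with the return constraint dropped;
        closed form: w = Σ⁻¹1 / (1'Σ⁻¹1).
        """
        cov = np.asarray(cov, dtype=float)
        ones = np.ones(cov.shape[0])
        w = np.linalg.solve(cov, ones)  # Σ⁻¹ 1
        return w / (ones @ w)           # normalize so the weights sum to 1

    # Toy two-asset covariance matrix (illustrative numbers only)
    cov = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
    w = min_variance_weights(cov)
    ```

    Restoring the expected-return constraint turns the problem into a general convex QP, which is where a quadratic-programming solver such as the one in MATLAB becomes necessary.
    
    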

    Search for supersymmetry in events with one lepton and multiple jets in proton-proton collisions at √s = 13 TeV

    Peer reviewed

    Measurement of the top quark forward-backward production asymmetry and the anomalous chromoelectric and chromomagnetic moments in pp collisions at √s = 13 TeV

    Abstract The parton-level top quark (t) forward-backward asymmetry and the anomalous chromoelectric (d̂_t) and chromomagnetic (μ̂_t) moments have been measured using LHC pp collisions at a center-of-mass energy of 13 TeV, collected in the CMS detector in a data sample corresponding to an integrated luminosity of 35.9 fb⁻¹. The linearized variable A_FB^(1) is used to approximate the asymmetry. Candidate tt̄ events decaying to a muon or electron and jets in final states with low and high Lorentz boosts are selected and reconstructed using a fit of the kinematic distributions of the decay products to those expected for tt̄ final states. The values found for the parameters are A_FB^(1) = 0.048 +0.095/−0.087 (stat) +0.020/−0.029 (syst) and μ̂_t = −0.024 +0.013/−0.009 (stat) +0.016/−0.011 (syst), and a limit is placed on the magnitude |d̂_t| < 0.03 at 95% confidence level.

    Measurement of the top quark mass using charged particles in pp collisions at √s = 8 TeV

    Peer reviewed

    Measurement of tt̄ normalised multi-differential cross sections in pp collisions at √s = 13 TeV, and simultaneous determination of the strong coupling strength, top quark pole mass, and parton distribution functions

    Peer reviewed

    Measurement of the Splitting Function in pp and Pb-Pb Collisions at √s_NN = 5.02 TeV

    Data from heavy ion collisions suggest that the evolution of a parton shower is modified by interactions with the color charges in the dense partonic medium created in these collisions, but it is not known where in the shower evolution the modifications occur. The momentum ratio of the two leading partons, resolved as subjets, provides information about the parton shower evolution. This substructure observable, known as the splitting function, reflects the process of a parton splitting into two other partons and has been measured for jets with transverse momentum between 140 and 500 GeV, in pp and PbPb collisions at a center-of-mass energy of 5.02 TeV per nucleon pair. In central PbPb collisions, the splitting function indicates a more unbalanced momentum ratio, compared to peripheral PbPb and pp collisions. The measurements are compared to various predictions from event generators and analytical calculations. Peer reviewed
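    The observable itself is simple: the momentum sharing fraction of the two leading subjets, z = min(pT1, pT2) / (pT1 + pT2), usually reported only for splittings passing a symmetry cut z > z_cut. The sketch below is an illustration of that definition; the z_cut value is an assumed illustrative default, not necessarily the selection used in the measurement:

    ```python
    def z_g(pt1, pt2, z_cut=0.1):
        """Momentum sharing fraction z = min(pT1, pT2) / (pT1 + pT2) of the two
        leading subjets; returns None when the splitting fails the symmetry
        cut z > z_cut (z_cut default is an assumption for illustration)."""
        z = min(pt1, pt2) / (pt1 + pt2)
        return z if z > z_cut else None
    ```

    By construction z lies in (0, 0.5], with z near 0.5 for balanced splittings; the "more unbalanced momentum ratio" seen in central PbPb collisions corresponds to the z distribution shifting toward small z.
    
    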

    An embedding technique to determine ττ backgrounds in proton-proton collision data

    An embedding technique is presented to estimate standard model ττ backgrounds from data with minimal simulation input. In the data, the muons are removed from reconstructed μμ events and replaced with simulated tau leptons with the same kinematic properties. In this way, a set of hybrid events is obtained that does not rely on simulation except for the decay of the tau leptons. The challenges in describing the underlying event or the production of associated jets in the simulation are avoided. The technique described in this paper was developed for CMS. Its validation and the inherent uncertainties are also discussed. The demonstration of the performance of the technique is based on a sample of proton-proton collisions collected by CMS in 2017 at √s = 13 TeV corresponding to an integrated luminosity of 41.5 fb⁻¹. Peer reviewed