    Band engineering in dilute nitride and bismide semiconductor lasers

    Highly mismatched semiconductor alloys such as GaNAs and GaBiAs have several novel electronic properties, including a rapid reduction in energy gap with increasing alloy composition x and, for GaBiAs, a strong increase in spin-orbit splitting energy with increasing Bi composition. We review here the electronic structure of such alloys and its consequences for ideal lasers. We then describe the substantial progress made in the demonstration of actual GaInNAs telecom lasers, which have characteristics comparable to conventional InP-based devices, including a strong Auger contribution to the threshold current. We show, however, that the large spin-orbit splitting energy in GaBiAs and GaBiNAs could lead to the suppression of the dominant Auger recombination loss mechanism, finally opening the route to efficient, temperature-stable telecom and longer-wavelength lasers with significantly reduced power consumption. Comment: 27 pages, 11 figures
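    The suppression argument rests on a band-structure threshold condition. As a hedged sketch (this is the condition commonly quoted in the dilute bismide literature, not a derivation given in the abstract), the dominant CHSH Auger process, in which the recombination energy promotes a hole into the spin-orbit split-off band, becomes energetically forbidden once the spin-orbit splitting exceeds the band gap:

```latex
% CHSH Auger suppression condition (standard form from the bismide
% literature; a sketch, not a result stated in the abstract above).
\[
  \Delta_{\mathrm{SO}} > E_g
  \quad\Longrightarrow\quad
  \text{the CHSH Auger channel is energetically forbidden.}
\]
% In GaBi_xAs_{1-x}, \Delta_{SO} rises while E_g falls with increasing x,
% so the crossover \Delta_{SO} = E_g is eventually reached; the crossover
% composition (roughly x ~ 0.1) is an approximate literature value assumed
% here, not a number from the abstract.
```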

    Evaluation of the ADVIA® Centaur™ TSH-3 assay

    An analytical evaluation of the thyroid stimulating hormone (TSH-3) assay on the Bayer ADVIA® Centaur™ immunoassay system was performed. General analytical requirements (linearity, resistance to typical interferences, absence of a carry-over effect) were fulfilled and reproducibility was satisfactory. The inter-assay coefficient of variation (CV) of a human serum pool with a concentration of 0.014 mU/l was 22.3%; at concentrations between 0.26 and 83 mU/l the CV was below 6%. A method comparison study demonstrated close agreement of TSH results with those obtained with the Roche Elecsys® 2010 TSH assay (ADVIA Centaur = 1.08 x Elecsys - 0.18 mU/l; r = 0.987; n = 324). Handling and practicability of the ADVIA Centaur system proved convenient, with a very high sample throughput. We conclude that the ADVIA Centaur TSH-3 assay meets the requirements for clinical use.
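    To make the method-comparison figures concrete, here is a minimal sketch (illustrative only: the function name and the example value are assumptions, while the coefficients are the ones reported above) applying the reported regression line to predict a Centaur result from an Elecsys result:

```python
# Method-comparison regression reported above:
#   ADVIA Centaur = 1.08 x Elecsys - 0.18 mU/l   (r = 0.987, n = 324)
# Illustrative sketch; the function name and the sample value are assumptions.

def expected_centaur_tsh(elecsys_tsh: float) -> float:
    """Predict the ADVIA Centaur TSH-3 result (mU/l) from a Roche Elecsys result."""
    return 1.08 * elecsys_tsh - 0.18

# Example: an Elecsys TSH of 2.5 mU/l maps to about 2.52 mU/l on the Centaur.
print(expected_centaur_tsh(2.5))  # 2.52
```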

    A review of portfolio planning: Models and systems

    In this chapter, we first provide an overview of a number of portfolio planning models which have been proposed and investigated over the last forty years. We revisit the mean-variance (M-V) model of Markowitz and the construction of the risk-return efficient frontier. A piecewise linear approximation of the problem, through a reformulation involving diagonalisation of the quadratic form into a variable-separable function, is also considered. A few other models, such as the Mean Absolute Deviation (MAD), Weighted Goal Programming (WGP) and Minimax (MM) models, which use alternative metrics for risk, are also introduced, compared and contrasted. Recently, asymmetric measures of risk have gained in importance; we consider a generic representation and a number of alternative symmetric and asymmetric measures of risk which find use in the evaluation of portfolios. A number of modelling and computational considerations have been introduced into practical portfolio planning problems. These include: (a) buy-in thresholds for assets, (b) restrictions on the number of assets (cardinality constraints), and (c) transaction roundlot restrictions. Practical portfolio models may also include (d) dedication of cashflow streams and (e) immunization, which involves duration matching and convexity constraints. The modelling issues in respect of these features are discussed. Many of these features lead to discrete restrictions involving zero-one and general integer variables, which make the resulting model a quadratic mixed-integer programming (QMIP) model. The QMIP is an NP-hard problem; algorithms and solution methods for this class of problems are also discussed. The issues of preparing the analytic data (financial datamarts) for this family of portfolio planning problems are examined. We finally present computational results which give some indication of the state of the art in the solution of portfolio optimisation problems.
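    As a concrete illustration of the basic M-V model discussed above, here is a minimal sketch (using the cvxpy modelling library; the four-asset data are invented for illustration) that minimises the portfolio variance w'Σw subject to a target expected return, full investment and no short selling:

```python
import numpy as np
import cvxpy as cp

# Invented data for four hypothetical assets: expected returns and a
# positive semidefinite covariance matrix.
mu = np.array([0.08, 0.10, 0.12, 0.07])
Sigma = np.array([[0.10, 0.02, 0.04, 0.00],
                  [0.02, 0.12, 0.01, 0.02],
                  [0.04, 0.01, 0.15, 0.01],
                  [0.00, 0.02, 0.01, 0.08]])

w = cp.Variable(4)                      # portfolio weights
risk = cp.quad_form(w, Sigma)           # portfolio variance w' Sigma w

problem = cp.Problem(cp.Minimize(risk),
                     [cp.sum(w) == 1,   # fully invested
                      mu @ w >= 0.09,   # target expected return
                      w >= 0])          # no short selling
problem.solve()
print(w.value.round(3), problem.value)
```

    Sweeping the target return and re-solving traces out the efficient frontier; adding the buy-in thresholds or cardinality constraints described above would require binary variables, turning this convex quadratic programme into the QMIP discussed in the text.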

    Physical properties of 6dF dwarf galaxies

    Spectral synthesis is essentially the decomposition of an observed spectrum in terms of a superposition of a base of simple stellar populations of various ages and metallicities, producing as output the star formation and chemical histories of a galaxy, its extinction and its velocity dispersion. The STARLIGHT code provides one of the most powerful spectral synthesis tools presently available. We have applied this code to the entire Six-Degree-Field Survey (6dF) sample of nearby star-forming galaxies, selecting dwarf galaxy candidates with the goals of (1) deriving the age and metallicity of their stellar populations and (2) creating a database with the physical properties of our sample galaxies, together with the FITS files of pure emission-line spectra (i.e. the observed spectra after subtraction of the best-fitting synthetic stellar spectrum). Our results show good qualitative and quantitative agreement with previous studies based on the Sloan Digital Sky Survey (SDSS). However, an advantage of 6dF spectra is that they are taken through a fiber aperture twice as large, which greatly reduces aperture effects in studies of nearby dwarf galaxies. Comment: To appear in JENAM Symposium "Dwarf Galaxies: Keys to Galaxy Formation and Evolution", P. Papaderos, S. Recchi, G. Hensler (eds.), Lisbon, September 2010, Springer Verlag, in press
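    To illustrate the core idea of spectral synthesis in a much-simplified form (a hedged sketch: this is not the STARLIGHT algorithm, and the toy base spectra and mixture weights are invented), one can recover population weights by a non-negative least-squares fit of an observed spectrum onto a base of template spectra:

```python
import numpy as np
from scipy.optimize import nnls

# Toy "base" of three simple-stellar-population template spectra on a
# common wavelength grid (invented shapes, purely for illustration).
rng = np.random.default_rng(0)
wavelengths = np.linspace(4000.0, 7000.0, 300)
base = np.stack([np.exp(-wavelengths / s) for s in (2000.0, 4000.0, 8000.0)],
                axis=1)                      # shape (300, 3)

# Synthetic "observed" spectrum: a known mixture of the templates plus noise.
true_weights = np.array([0.2, 0.5, 0.3])
observed = base @ true_weights + rng.normal(0.0, 1e-4, wavelengths.size)

# Decompose the observed spectrum: non-negative least squares recovers the
# population weights (STARLIGHT additionally fits extinction and kinematics).
weights, _ = nnls(base, observed)
print(weights / weights.sum())               # close to [0.2, 0.5, 0.3]
```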

    Experimental Analysis of Algorithms for Coflow Scheduling

    Modern data centers face new scheduling challenges in optimizing job-level performance objectives, where a significant challenge is the scheduling of highly parallel data flows with a common performance goal (e.g., the shuffle operations in MapReduce applications). Chowdhury and Stoica introduced the coflow abstraction to capture these parallel communication patterns, and Chowdhury et al. proposed effective heuristics to schedule coflows efficiently. In our previous paper, we considered the strongly NP-hard problem of minimizing the total weighted completion time of coflows with release dates, and developed the first polynomial-time scheduling algorithms with O(1)-approximation ratios. In this paper, we carry out a comprehensive experimental analysis on a Facebook trace and on extensive simulated instances to evaluate the practical performance of several algorithms for coflow scheduling, including the approximation algorithms developed in our previous paper. Our experiments suggest that simple algorithms provide effective approximations of the optimal schedule, and that the performance of our approximation algorithms is relatively robust, near-optimal, and always among the best compared with the other algorithms, in both the offline and online settings. Comment: 29 pages, 8 figures, 11 tables
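    For intuition about the "simple algorithms" finding, here is a minimal sketch of one natural baseline (a Smith's-rule-style ordering by weighted total size; an illustrative heuristic with invented data, not the O(1)-approximation algorithm from the paper):

```python
from dataclasses import dataclass

@dataclass
class Coflow:
    name: str
    weight: float        # importance of the job the coflow belongs to
    total_bytes: float   # sum of all flow sizes in the coflow

def smith_order(coflows: list[Coflow]) -> list[Coflow]:
    """Sequence coflows by total size over weight (a Smith's-rule analogue).

    A simplified single-bottleneck heuristic for minimizing total weighted
    completion time; it ignores per-port contention and release dates.
    """
    return sorted(coflows, key=lambda c: c.total_bytes / c.weight)

# Invented example coflows (e.g., MapReduce shuffles).
jobs = [Coflow("shuffle-A", weight=2.0, total_bytes=600.0),
        Coflow("shuffle-B", weight=1.0, total_bytes=100.0),
        Coflow("shuffle-C", weight=3.0, total_bytes=450.0)]
for c in smith_order(jobs):
    print(c.name)   # shuffle-B, shuffle-C, shuffle-A
```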

    Chiral corrections to the $SU(2)\times SU(2)$ Gell-Mann-Oakes-Renner relation

    The next-to-leading-order chiral corrections to the $SU(2)\times SU(2)$ Gell-Mann-Oakes-Renner (GMOR) relation are obtained using the pseudoscalar correlator to five-loop order in perturbative QCD, together with new finite energy sum rules (FESR) incorporating polynomial, Legendre-type integration kernels. The purpose of these kernels is to suppress hadronic contributions in the region where they are least known. This considerably reduces the systematic uncertainties arising from the lack of direct experimental information on the hadronic resonance spectral function. Three different methods are used to compute the FESR contour integral in the complex energy (squared) s-plane: Fixed Order Perturbation Theory, Contour Improved Perturbation Theory, and a fixed renormalization scale scheme. We obtain for the correction to the GMOR relation, $\delta_\pi$, the value $\delta_\pi = (6.2 \pm 1.6)\%$. This result is substantially more accurate than previous determinations based on QCD sum rules; it is also more reliable, as it is basically free of systematic uncertainties. It implies a light quark condensate $\langle\bar{u}u\rangle \simeq \langle\bar{d}d\rangle \equiv \langle\bar{q}q\rangle|_{2\,\mathrm{GeV}} = (-267 \pm 5\ \mathrm{MeV})^3$. As a byproduct, the (unphysical) chiral perturbation theory low energy constant $H^r_2$ is predicted to be $H^r_2(\nu_\chi = M_\rho) = -(5.1 \pm 1.8)\times 10^{-3}$, or $H^r_2(\nu_\chi = M_\eta) = -(5.7 \pm 2.0)\times 10^{-3}$. Comment: A comment about the value of the strong coupling has been added at the end of Section 4. No change in results or conclusions.
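    For context, the relation being corrected can be written in the following common convention (a hedged sketch: the paper's exact normalisation of $\delta_\pi$ may differ):

```latex
% Gell-Mann-Oakes-Renner relation with its higher-order chiral correction
% \delta_\pi, in a common convention (the paper's normalisation may differ).
\[
  f_\pi^2\, M_\pi^2 \;=\; -\,(m_u + m_d)\,\langle \bar{q}q \rangle\,
  \bigl(1 - \delta_\pi\bigr),
\]
% so \delta_\pi = 0 reproduces the leading-order GMOR relation, and the
% value quoted above, \delta_\pi = (6.2 \pm 1.6)%, measures the size of
% the next-to-leading-order chiral correction.
```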

    The game transfer phenomena scale: an instrument for investigating the nonvolitional effects of video game playing

    A variety of instruments have been developed to assess different dimensions of videogame playing and its effects on cognitions, affect, and behaviors. The present study examined the psychometric properties of the Game Transfer Phenomena Scale (GTPS), which assesses non-volitional phenomena experienced after playing videogames (i.e., altered perceptions, automatic mental processes, and involuntary behaviors). A total of 1,736 gamers participated in an online survey used as the basis for the analysis. Confirmatory factor analysis (CFA) was performed to confirm the factorial structure of the GTPS. The five-factor structure using the 20 indicators based on the analysis of gamers’ self-reports fitted the data well. Population cross-validity was also achieved, and the positive association between session length and overall scores indicates that the GTPS has criterion-related validity. Although the understanding of GTP is still in its infancy, the GTPS appears to be a valid and reliable instrument for assessing non-volitional gaming-related phenomena. The GTPS can be used for understanding the phenomenology of the post-effects of playing videogames.
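    As a small illustration of the criterion-related validity check mentioned above (a hedged sketch with synthetic data; the real GTPS dataset and effect sizes are not reproduced here), one can correlate session length with overall GTPS scores:

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic illustration of the criterion-related validity check reported
# above: correlate session length with overall GTPS scores. All data are
# invented; only the direction of the association mirrors the abstract.
rng = np.random.default_rng(42)
n = 200
session_length_hours = rng.gamma(shape=2.0, scale=1.5, size=n)
gtps_score = 20.0 + 3.0 * session_length_hours + rng.normal(0.0, 5.0, size=n)

r, p_value = pearsonr(session_length_hours, gtps_score)
print(f"r = {r:.2f}, p = {p_value:.3g}")  # a positive r supports criterion validity
```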