
    Progress towards photonic crystal quantum cascade laser

    This work describes recent progress in the design, simulation, implementation and characterisation of photonic crystal (PhC) GaAs-based quantum cascade lasers (QCLs). The benefits of applying active PhC confinement around a QCL cavity are explained, highlighting a route to reduced-threshold-current operation. A suitable PhC has been designed using published bandgap maps; simulations of this PhC show a wide, high-reflectivity stopband. Implementing the PhC in a device is particularly difficult, requiring a very durable metallic dry-etch mask, high-performance dry etching and a low-damage epilayer-down device mounting technique. Preliminary shallow-etched PhC QCLs demonstrated the viability of current injection through the metal etch mask and of the device mounting technique. Development of the etch mask and of the dry etching has yielded a process suitable for the manufacture of deep-etched PhC structures. All the elements necessary for implementing deep-etched PhC QCLs have now been demonstrated, allowing the development of high-performance devices.

    Eroding market stability by proliferation of financial instruments

    We contrast Arbitrage Pricing Theory (APT), the theoretical basis for the development of financial instruments, with a dynamical picture of an interacting market, in a simple setting. The proliferation of financial instruments apparently provides more means for risk diversification, making the market more efficient and complete. In the simple market of interacting traders discussed here, however, the proliferation of financial instruments erodes systemic stability and drives the market to a critical state characterized by large susceptibility, strong fluctuations and enhanced correlations among risks. This suggests that the hypothesis of APT may not be compatible with stable market dynamics. In this perspective, market stability acquires the properties of a common good, which suggests that appropriate measures should be introduced in derivative markets to preserve stability.

    Estimating the Fractal Dimension, K_2-entropy, and the Predictability of the Atmosphere

    A series of mean daily air temperature recorded over a period of 215 years is used to analyse the dimensionality and predictability of the atmospheric system. The series comprises 78,527 data points. Thirty-seven further versions of the original series are generated, including "seasonally adjusted" data, a smoothed series, a series without the annual course, etc. Modified Grassberger-Procaccia methods are applied, and a procedure for selecting the "meaningful" scaling region is proposed. Several scaling regions are revealed in the ln C(r) versus ln r diagram: the first, at larger ln r, has a gradual slope; the second, at intermediate ln r, a steep slope; two further regions lie at small ln r. These results lead us to claim that the series arises from the activity of at least two subsystems. The first subsystem is low-dimensional (d_f = 1.6) and possesses a potential predictability of several weeks; we suggest it is connected with the seasonal variability of weather. The second subsystem is high-dimensional (d_f > 17) and its error-doubling time is about 4-7 days. The predictability is found to vary with season: the predictability time for summer, winter and the entire year (T_2 approx. 4.7 days) is longer than for the transition seasons (T_2 approx. 4.0 days for spring, T_2 approx. 3.6 days for autumn). The roles of random noise and of the number of data points are discussed, and it is shown that a 15-year daily temperature series is not sufficient for reliable estimates based on Grassberger-Procaccia algorithms.
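    The Grassberger-Procaccia approach referred to above estimates the correlation dimension from the scaling of the correlation integral C(r), the fraction of pairs of delay-embedded state vectors closer than r. A minimal sketch follows; the embedding dimension, delay, toy series and radii are illustrative choices, not those of the study:

```python
import numpy as np

def correlation_integral(series, dim, delay, radii):
    """C(r): fraction of pairs of delay-embedded vectors closer than r
    (the Grassberger-Procaccia correlation integral)."""
    n = len(series) - (dim - 1) * delay
    # Delay embedding: each state vector is (x_t, x_{t+delay}, ...).
    vecs = np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])
    # All pairwise distances (fine for short series; use a k-d tree for long ones).
    diff = vecs[:, None, :] - vecs[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pairs = dist[np.triu_indices(n, k=1)]
    return np.array([(pairs < r).mean() for r in radii])

# Toy series: a noisy oscillation standing in for a temperature record.
rng = np.random.default_rng(0)
series = np.sin(0.1 * np.arange(1000)) + 0.01 * rng.standard_normal(1000)
radii = np.logspace(-2, 0, 10)
C = correlation_integral(series, dim=3, delay=10, radii=radii)
# The correlation dimension is the slope of ln C(r) vs ln r in a scaling
# region, here taken crudely as all radii with C(r) > 0.
mask = C > 0
slope = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)[0]
```

    The paper's procedure for selecting the "meaningful" scaling region replaces the crude C(r) > 0 mask used here, and the estimate is repeated for increasing embedding dimension until the slope saturates.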

    CSF1R inhibitor JNJ-40346527 attenuates microglial proliferation and neurodegeneration in P301S mice

    Neuroinflammation and microglial activation are significant processes in Alzheimer's disease pathology. Recent genome-wide association studies have highlighted multiple immune-related genes in association with Alzheimer's disease, and experimental data have demonstrated microglial proliferation as a significant component of the neuropathology. In this study, we tested the efficacy of the selective CSF1R inhibitor JNJ-40346527 (JNJ-527) in the P301S mouse tauopathy model. We first demonstrated the anti-proliferative effects of JNJ-527 on microglia in the ME7 prion model and its impact on the inflammatory profile, and provided potential CNS biomarkers for clinical investigation with the compound, including pharmacokinetic/pharmacodynamic and efficacy assessment by TSPO autoradiography and CSF proteomics. We then showed, for the first time, that blockade of microglial proliferation and modification of the microglial phenotype lead to an attenuation of tau-induced neurodegeneration and result in functional improvement in P301S mice. Overall, this work strongly supports inhibition of CSF1R as a target for the treatment of Alzheimer's disease and other tau-mediated neurodegenerative diseases.

    Inflammatory biomarkers in Alzheimer's disease plasma

    Introduction: Plasma biomarkers for Alzheimer's disease (AD) diagnosis/stratification are a "Holy Grail" of AD research and are intensively sought; however, there are no well-established plasma markers. Methods: A hypothesis-led plasma biomarker search was conducted in the context of international multicenter studies. The discovery phase measured 53 inflammatory proteins in elderly control (CTL; n = 259), mild cognitive impairment (MCI; n = 199) and AD (n = 262) subjects from AddNeuroMed. Results: Ten analytes showed significant intergroup differences. Logistic regression identified five (FB, FH, sCR1, MCP-1, eotaxin-1) that, adjusted for age and APOε4, optimally differentiated AD from CTL (AUC 0.79), and three (sCR1, MCP-1, eotaxin-1) that optimally differentiated AD from MCI (AUC 0.74). These models replicated in an independent cohort (EMIF; AUC 0.81 and 0.67). Two analytes (FB, FH) plus age predicted progression from MCI to AD (AUC 0.71). Discussion: Plasma markers of inflammation and complement dysregulation support diagnosis and outcome prediction in AD and MCI. Further replication is needed before clinical translation.
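    The modelling step described above, logistic regression on a handful of analytes scored by AUC, can be sketched in a few lines. This is a generic illustration on synthetic data, not the study's analysis; the feature shifts and sample sizes are invented:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no regularisation)."""
    X1 = np.hstack([np.ones((len(X), 1)), X])  # intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    X1 = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-X1 @ w))

def auc(scores, y):
    """AUC via the rank-sum (Mann-Whitney) statistic."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic stand-in for three analyte levels in controls vs cases.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 3)),   # "CTL" group
               rng.normal(0.8, 1.0, (100, 3))])  # "AD" group
y = np.concatenate([np.zeros(100), np.ones(100)])
w = fit_logistic(X, y)
score = auc(predict_proba(w, X), y)  # in-sample AUC, well above 0.5 here
```

    The study's age/APOε4 adjustment and independent-cohort replication would correspond to adding covariates and evaluating the AUC on held-out data rather than in-sample.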

    Helium identification with LHCb

    The identification of helium nuclei at LHCb is achieved with a method based on measurements of ionisation losses in the silicon sensors and timing measurements in the Outer Tracker drift tubes. The background from photon conversions is reduced using the RICH detectors and an isolation requirement. The method is developed using pp collision data at √(s) = 13 TeV recorded by the LHCb experiment in the years 2016 to 2018, corresponding to an integrated luminosity of 5.5 fb^-1. A total of around 10^5 helium and antihelium candidates are identified with negligible background contamination. The helium identification efficiency is estimated to be approximately 50% with a corresponding background rejection of up to O(10^12). These results demonstrate the feasibility of a rich programme of measurements of QCD and astrophysics interest involving light nuclei.
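    The ionisation-based identification rests on the Bethe-Bloch z² scaling: a helium nucleus (z = 2) deposits roughly four times the energy per unit length of a singly charged particle at the same velocity. A schematic selection follows; the window edges are hypothetical, and the actual analysis combines many per-sensor measurements with Outer Tracker timing:

```python
def is_helium_candidate(dedx_measured, dedx_single_charge, lo=2.5, hi=6.0):
    """Flag a track whose ionisation matches the z = 2 hypothesis.

    dedx_measured      -- (truncated-mean) dE/dx measured for the track
    dedx_single_charge -- expected dE/dx for a z = 1 particle at the same velocity
    lo, hi             -- hypothetical window around the z**2 = 4 expectation
    """
    ratio = dedx_measured / dedx_single_charge
    return lo < ratio < hi

# A track depositing ~4x the single-charge expectation is kept;
# an ordinary z = 1 track (ratio ~1) is rejected.
keep = is_helium_candidate(4.1, 1.0)
drop = is_helium_candidate(1.0, 1.0)
```

    The large quoted background rejection comes from combining many such independent measurements per track, not from a single cut.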

    Momentum scale calibration of the LHCb spectrometer

    Accurate knowledge of the momentum scale of the detector is crucial for the precise determination of particle masses. The procedure used to calibrate the momentum scale of the LHCb spectrometer is described and its performance illustrated using an integrated luminosity of 1.6 fb^-1 collected during 2016 in pp running. The procedure uses large samples of J/ψ → μ⁺μ⁻ and B⁺ → J/ψK⁺ decays and achieves a relative accuracy of 3 × 10^-4 on the momentum scale.
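    The calibration idea can be illustrated with the J/ψ: for a two-body decay of relativistic daughters the reconstructed mass scales, to first order, linearly with the momentum scale, so matching the observed peak to the known mass fixes a global factor α with p → (1 + α)p. A sketch with an invented peak position, not LHCb's measured values:

```python
M_JPSI = 3096.9  # MeV, J/psi reference mass (PDG value, rounded)

def momentum_scale_factor(measured_peak, reference_mass):
    """alpha such that rescaling p -> (1 + alpha) p moves the reconstructed
    dimuon peak onto the reference mass; to first order the mass scales
    linearly with momentum, so alpha ~ m_ref / m_peak - 1."""
    return reference_mass / measured_peak - 1.0

measured_peak = 3097.8  # hypothetical uncalibrated peak position, MeV
alpha = momentum_scale_factor(measured_peak, M_JPSI)
corrected_peak = (1.0 + alpha) * measured_peak  # lands on the reference mass
```

    In practice the scale is extracted from fits to large J/ψ → μ⁺μ⁻ and B⁺ → J/ψK⁺ samples and cross-checked across kinematic regions; the quoted 3 × 10^-4 is the residual relative accuracy of that procedure.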

    Curvature-bias corrections using a pseudomass method

    Momentum measurements for very high momentum charged particles, such as muons from electroweak vector boson decays, are particularly susceptible to charge-dependent curvature biases arising from misalignments of the tracking detectors. Low-momentum charged particles used in alignment procedures have limited sensitivity to coherent displacements of these detectors and therefore cannot fully constrain the misalignments to the precision required for electroweak physics studies; additional approaches are needed to understand and correct for these effects. In this paper the curvature biases present at the LHCb detector are studied with the pseudomass method using proton-proton collision data recorded at a centre-of-mass energy of √(s) = 13 TeV during 2016, 2017 and 2018. The biases are determined using Z → μ⁺μ⁻ decays in intervals defined by the data-taking period, magnet polarity and muon direction. Correcting for these biases, which are typically at the 10^-4 GeV^-1 level, improves the Z → μ⁺μ⁻ mass resolution by roughly 18% and eliminates several pathological trends in the kinematic dependence of the mean dimuon invariant mass.
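    The effect being corrected can be seen in a toy example: a coherent misalignment adds the same offset δ to every measured curvature q/p, which pushes the momenta of positive and negative tracks in opposite directions. A minimal sketch, with an illustrative bias value and momentum:

```python
def correct_curvature(q_over_p, delta):
    """Subtract a charge-antisymmetric curvature bias delta (in 1/GeV)
    from the measured curvature q/p."""
    return q_over_p - delta

delta = 1e-4  # illustrative bias; the paper quotes the ~1e-4 GeV^-1 level
results = {}
for q in (+1, -1):
    true_p = 100.0                    # GeV, a high-momentum muon
    measured = q / true_p + delta     # curvature shifted by the misalignment
    p_biased = q / measured           # momenta move in opposite directions
    p_fixed = q / correct_curvature(measured, delta)
    results[q] = (p_biased, p_fixed)
```

    In the paper the bias is determined separately for each data-taking period, magnet polarity and muon direction from the charge dependence of the Z → μ⁺μ⁻ pseudomass peaks.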