
    Searching for Dark Photons with Maverick Top Partners

    In this paper, we present a model in which an up-type vector-like quark (VLQ) is charged under a new $U(1)_d$ gauge force which kinetically mixes with the SM hypercharge. The gauge boson of the $U(1)_d$ is the dark photon, $\gamma_d$. Traditional searches for VLQs rely on decays into the Standard Model electroweak bosons $W$, $Z$ or the Higgs. However, since no evidence for VLQs has been found at the Large Hadron Collider (LHC), it is imperative to search for other novel signatures of VLQs beyond their traditional decays. As we will show, if the dark photon is much less massive than the Standard Model electroweak sector, $M_{\gamma_d} \ll M_Z$, then for the large majority of the allowed parameter space the VLQ predominantly decays into the dark photon and the dark Higgs that breaks the $U(1)_d$. That is, this VLQ is a `maverick top partner' with nontraditional decays. One of the appeals of this scenario is that pair production of the VLQ at the LHC occurs through the strong force and the rate is determined by the gauge structure. Hence, the production of the dark photon at the LHC depends only on the strong force and is largely independent of the small kinetic mixing with hypercharge. This scenario provides a robust framework to search for a light dark sector via searches for heavy colored particles at the LHC.
    Comment: 40 pages and 11 figures
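    For reference, the kinetic-mixing structure the abstract invokes can be sketched as follows; the normalization and the symbols $\epsilon$, $B_{\mu\nu}$, $X_{\mu\nu}$ are standard conventions assumed here rather than taken from the paper:

    \mathcal{L} \supset -\tfrac{1}{4}\, B_{\mu\nu} B^{\mu\nu} \;-\; \tfrac{1}{4}\, X_{\mu\nu} X^{\mu\nu} \;-\; \tfrac{\epsilon}{2}\, B_{\mu\nu} X^{\mu\nu}, \qquad X_{\mu\nu} \equiv \partial_\mu X_\nu - \partial_\nu X_\mu,

    where $B_{\mu\nu}$ is the hypercharge field strength and $X_\mu$ is the $U(1)_d$ gauge field associated with the dark photon $\gamma_d$. A small $\epsilon$ controls how the dark photon couples to SM fermions, while QCD pair production of the VLQ is independent of it, which is the point the abstract emphasizes.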

    Chern classes and extraspecial groups

    The mod-p cohomology ring of the extraspecial p-group of exponent p is studied for odd p. We investigate the subquotient ch(G) generated by Chern classes modulo the nilradical. The subring of ch(G) generated by Chern classes of one-dimensional representations was studied by Tezuka and Yagita. The subring generated by the Chern classes of the faithful irreducible representations is a polynomial algebra. We study the interplay between these two families of generators, and obtain some relations between them.

    State-space models' dirty little secrets: even simple linear Gaussian models can have estimation problems

    State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (\textit{Ursus maritimus}) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results, and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
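    To make the estimation problem concrete, here is a minimal Python sketch, assuming a local-level (random-walk plus noise) model with made-up variances; it simulates data in the regime the authors highlight (measurement error larger than biological stochasticity) and re-estimates the two variances by maximum likelihood with a Kalman filter. It illustrates the general issue and is not the authors' code.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    T = 200
    sigma_proc, sigma_obs = 0.5, 1.5            # biological stochasticity < measurement error

    # Simulate a local-level SSM: x_t = x_{t-1} + w_t,  y_t = x_t + v_t
    x = np.cumsum(rng.normal(0.0, sigma_proc, T))
    y = x + rng.normal(0.0, sigma_obs, T)

    def neg_log_lik(log_params, y):
        """Kalman-filter negative log-likelihood of the local-level model."""
        q, r = np.exp(log_params)               # process and observation variances
        m, p, nll = y[0], r, 0.0                # simple initialisation at the first observation
        for obs in y[1:]:
            p_pred = p + q                      # predict
            s = p_pred + r                      # innovation variance
            k = p_pred / s                      # Kalman gain
            innov = obs - m
            nll += 0.5 * (np.log(2 * np.pi * s) + innov**2 / s)
            m += k * innov                      # update state mean
            p = (1 - k) * p_pred                # update state variance
        return nll

    fit = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), args=(y,), method="Nelder-Mead")
    q_hat, r_hat = np.exp(fit.x)
    print(f"true variances (q, r) = ({sigma_proc**2:.2f}, {sigma_obs**2:.2f}); "
          f"estimates = ({q_hat:.2f}, {r_hat:.2f})")

    Re-running this sketch with different seeds tends to show how weakly identified the process variance is in this regime: the likelihood surface is nearly flat in one direction, so the estimate can collapse toward zero or trade off against the observation variance, which is the kind of problem the abstract describes.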

    The rotation of alpha Oph investigated using polarimetry

    Recently we have demonstrated that high-precision polarization observations can detect the polarization resulting from the rotational distortion of a rapidly rotating B-type star. Here we investigate the extension of this approach to an A-type star. Linear-polarization observations of $\alpha$ Oph (A5IV) have been obtained over wavelengths from 400 to 750 nm. They show the wavelength dependence expected for a rapidly-rotating star combined with a contribution from interstellar polarization. We model the observations by fitting rotating-star polarization models and adding additional constraints including a measured $v_e \sin i$. However, we cannot fully separate the effects of rotation rate and inclination, leaving a range of possible solutions. We determine a rotation rate $\omega = \Omega/\Omega_c$ between 0.83 and 0.98 and an axial inclination $i > 60$ deg. The rotation-axis position angle is found to be $142 \pm 4$ deg, differing by 16 deg from a value obtained by interferometry. This might be due to precession of the rotation axis due to interaction with the binary companion. Other parameters resulting from the analysis include a polar temperature $T_{\rm p} = 8725 \pm 175$ K, polar gravity $\log g_{\rm p} = 3.93 \pm 0.08$ (dex cgs), and polar radius $R_{\rm p} = 2.52 \pm 0.06\,R_\odot$. Comparison with rotating-star evolutionary models indicates that $\alpha$ Oph is in the later half of its main-sequence evolution and must have had an initial $\omega$ of 0.8 or greater. The interstellar polarization has a maximum value at a wavelength ($\lambda_{\rm max}$) of $440 \pm 110$ nm, consistent with values found for other nearby stars.
    Comment: 14 pages, 11 figures, 5 tables, Accepted in MNRAS

    A study of the F-giant star θ Scorpii A: a post-merger rapid rotator?

    We report high-precision observations of the linear polarization of the F1III star θ Scorpii. The polarization has a wavelength dependence of the form expected for a rapid rotator, but with an amplitude several times larger than seen in otherwise similar main-sequence stars. This confirms the expectation that lower-gravity stars should have stronger rotational-polarization signatures as a consequence of the density dependence of the ratio of scattering to absorption opacities. By modelling the polarization, together with additional observational constraints (incorporating a revised analysis of Hipparcos astrometry, which clarifies the system's binary status), we determine a set of precise stellar parameters, including a rotation rate ω (= Ω/Ωc) ≥ 0.94, polar gravity log(gp) = 2.091 +0.042/−0.039 (dex cgs), mass 3.10 +0.37/−0.32 M⊙, and luminosity log(L/L⊙) = 3.149 +0.041/−0.028. These values are incompatible with evolutionary models of single rotating stars, with the star rotating too rapidly for its evolutionary stage, and being undermassive for its luminosity. We conclude that θ Sco A is most probably the product of a binary merger.

    Deep Learning Applied to Raman Spectroscopy for the Detection of Microsatellite Instability/MMR Deficient Colorectal Cancer

    Defective DNA mismatch repair is one pathogenic pathway to colorectal cancer. It is characterised by microsatellite instability, which provides a molecular biomarker for its detection. Clinical guidelines for universal testing of this biomarker are not met due to resource limitations; thus, there is interest in developing novel methods for its detection. Raman spectroscopy (RS) is an analytical tool able to interrogate the molecular vibrations of a sample to provide a unique biochemical fingerprint. The resulting datasets are complex and high-dimensional, making them an ideal candidate for deep learning, though this may be limited by small sample sizes. This study investigates the potential of using RS to distinguish between normal, microsatellite stable (MSS) and microsatellite unstable (MSI-H) adenocarcinoma in human colorectal samples, and whether deep learning provides any benefit to this end over traditional machine learning models. A 1D convolutional neural network (CNN) was developed to discriminate between healthy, MSI-H and MSS tissue and compared to a principal component analysis-linear discriminant analysis (PCA-LDA) model and a support vector machine (SVM) model. A nested cross-validation strategy was used to train and evaluate the models on 30 samples, 10 from each group, comprising a total of 1490 Raman spectra. The CNN achieved a sensitivity and specificity of 83% and 45%, compared to PCA-LDA, which achieved a sensitivity and specificity of 82% and 51%, respectively. These are competitive with existing guidelines, despite the low sample size, speaking to the molecular discriminative power of RS combined with deep learning. A number of biochemical antecedents responsible for this discrimination are also explored, with Raman peaks associated with nucleic acids and collagen being implicated.
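    As a sketch of the kind of model described, a small 1D CNN for three-class spectrum classification is shown below in Python/PyTorch; the number of layers, filter counts, kernel widths, and spectrum length are illustrative assumptions, not the architecture reported in the study.

    import torch
    import torch.nn as nn

    class RamanCNN(nn.Module):
        """Small 1D CNN for three-class Raman spectrum classification (illustrative)."""
        def __init__(self, n_points=1000, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * (n_points // 4), 64), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):                    # x: (batch, 1, n_points) preprocessed spectra
            return self.classifier(self.features(x))

    model = RamanCNN()
    spectra = torch.randn(8, 1, 1000)            # stand-in batch of baseline-corrected spectra
    labels = torch.randint(0, 3, (8,))           # stand-in labels: healthy / MSS / MSI-H
    loss = nn.CrossEntropyLoss()(model(spectra), labels)

    In a nested cross-validation, hyperparameters are tuned on inner folds and performance is reported on outer folds; with multiple spectra per tissue sample, folds are normally split by sample so that spectra from the same patient never appear in both the training and test data.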

    Model for floodplain management in urbanizing areas

    A target land use pattern found using a dynamic programming model is shown to be a useful reference for comparing the success of floodplain management policies. At least in the test case, there is interdependence in the land use allocation for floodplain management--that is, a good solution includes some reduction of current land use in the floodplain and some provision of detention storage. For the test case, current floodplain management policies are not sufficient; some of the existing floodplain use should be removed. Although specific land use patterns are in part sensitive to potential error in land value data and to inaccuracy in the routing model, the general conclusion that some existing use must be removed is stable within the range of likely error. Trend surface analysis is shown to be a potentially useful way of generating bid price data for use in land use allocation models. Sensitivity analysis of the dynamic programming model with respect to routing of hydrographs is conducted through simulation based on expected distributions of error.
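    As a purely hypothetical sketch of a stage-wise allocation of this kind (not the author's model): treat each floodplain parcel as a stage, the remaining detention-storage requirement as the state, and choose for each parcel between keeping its current use or converting it to detention storage, maximizing total land value subject to meeting the storage target. All parcel values and the storage target below are invented for illustration.

    from functools import lru_cache

    # (value if kept in current use, value if converted, detention storage if converted)
    parcels = [(9.0, 2.0, 3), (6.0, 1.5, 2), (7.5, 2.5, 2), (5.0, 1.0, 1)]
    STORAGE_TARGET = 4                            # detention storage the plan must provide

    @lru_cache(maxsize=None)
    def best(i, storage_needed):
        """Maximum total land value from parcel i onward while meeting the remaining storage."""
        if i == len(parcels):
            return 0.0 if storage_needed <= 0 else float("-inf")   # infeasible plan
        keep_value, convert_value, storage = parcels[i]
        return max(
            keep_value + best(i + 1, storage_needed),                       # keep current use
            convert_value + best(i + 1, max(0, storage_needed - storage)),  # provide storage
        )

    print(best(0, STORAGE_TARGET))                # value of the target land-use pattern

    In this toy instance the optimal plan converts two parcels to meet the storage target, illustrating in miniature the qualitative conclusion above that a good solution combines some reduction of existing floodplain use with provision of detention storage.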

    Guidelines and considerations for the use of system suitability and quality control samples in mass spectrometry assays applied in untargeted clinical metabolomic studies

    Background: Quality assurance (QA) and quality control (QC) are two quality management processes that are integral to the success of metabolomics, including the acquisition of high-quality data in any high-throughput analytical chemistry laboratory. QA defines all the planned and systematic activities implemented before samples are collected, to provide confidence that a subsequent analytical process will fulfil predetermined requirements for quality. QC can be defined as the operational techniques and activities used to measure and report these quality requirements after data acquisition. Aim of review: This tutorial review will guide the reader through the use of system suitability and QC samples, why these samples should be applied, and how the quality of data can be reported. Key scientific concepts of review: System suitability samples are applied to assess the operation and lack of contamination of the analytical platform prior to sample analysis. Isotopically-labelled internal standards are applied to assess system stability for each sample analysed. Pooled QC samples are applied to condition the analytical platform, perform intra-study reproducibility measurements (QC), and to correct mathematically for systematic errors. Standard reference materials and long-term reference QC samples are applied for inter-study and inter-laboratory assessment of data.
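    As a concrete illustration of one of these uses of pooled QC samples, the short Python sketch below computes the per-feature relative standard deviation (RSD) across the pooled QC injections of a run, a common way of reporting intra-study reproducibility; the data layout, feature names, and the 30% acceptance threshold are assumptions for the example, not prescriptions from the guidelines.

    import numpy as np
    import pandas as pd

    # rows = injections in run order, columns = metabolite features; 'is_qc' flags pooled QCs
    rng = np.random.default_rng(0)
    data = pd.DataFrame(rng.lognormal(mean=10, sigma=0.1, size=(20, 5)),
                        columns=[f"feature_{i}" for i in range(5)])
    data["is_qc"] = [True, False, False, False, False] * 4    # a pooled QC every fifth injection

    qc = data.loc[data["is_qc"], data.columns != "is_qc"]
    rsd = 100 * qc.std(ddof=1) / qc.mean()                    # % RSD per feature across QCs

    print(rsd.round(1))
    print("features passing a 30% RSD filter:", list(rsd.index[rsd < 30]))

    Drift correction using the same pooled QC injections (fitting a smooth trend in QC signal across run order and dividing it out of all samples) is a separate step from this reproducibility report and would use the injection order rather than just the QC flag.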