
    NLSP Gluino Search at the Tevatron and early LHC

    We investigate the collider phenomenology of the gluino-bino co-annihilation scenario at both the Tevatron and the 7 TeV LHC. This scenario can be realized, for example, in a class of realistic supersymmetric models with non-universal gaugino masses and t-b-\tau Yukawa unification. The NLSP gluino and LSP bino should be nearly degenerate in mass, so that the typical gluino search channels involving leptons or hard jets are not available. Consequently, the gluino can be lighter than the various bounds on its mass from direct searches. We propose a new search for the NLSP gluino in multi-b final states, arising from the three-body decay \tilde{g} -> b\bar{b}\tilde{\chi}_1^0. We identify two realistic models with a gluino mass of around 300 GeV for which the three-body decay is dominant, and show that a 4.5 \sigma observation sensitivity can be achieved at the Tevatron with an integrated luminosity of 10 fb^{-1}. For the 7 TeV LHC with 50 pb^{-1} of integrated luminosity, the number of signal events for the two models is O(10), to be compared with a negligible SM background.
    Comment: 14 pages, 4 figures and 3 tables; minor modifications made and accepted for publication in JHE
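The quoted 4.5 \sigma sensitivity is a counting-experiment significance. A minimal sketch of how such a figure is obtained from expected signal and background yields, using the standard Asimov approximation and purely hypothetical yields (not the paper's numbers):

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for s expected signal events over b
    expected background events (Asimov approximation to the Poisson
    profile-likelihood ratio)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical yields for illustration only (not taken from the paper):
s, b = 25.0, 20.0
print(round(asimov_significance(s, b), 2))
```

For s much smaller than b this reduces to the familiar s / sqrt(b) estimate.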

    Modelling ground vibrations induced by harmonic loads

    A finite-element model combining the frequency domain thin-layer method with paraxial boundary conditions to simulate the semi-infinite extent of a soil medium is presented in this paper. The combined numerical model is used to deal with harmonic vibrations of surface rigid foundations on non-horizontal soil profiles. The model can deal with soil media over rigid bedrock or significant depths of half-space. Structured finite elements are used to mesh simple geometry soil domains, whereas unstructured triangular mesh grids are employed to deal with complex geometry problems. Dynamic responses of homogeneous as well as layered soil profiles are simulated and validated against analytical and approximate solutions. Finally, the model is used to deal with surface ground vibration reduction, in which it is first validated against published results and then followed by an example involving a bridge.
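The frequency-domain step such models rely on can be illustrated on a toy discretized system: assemble mass, damping, and stiffness matrices and solve the complex harmonic equilibrium equations. This 2-DOF sketch assumes stiffness-proportional damping and invented matrix values; it is not the thin-layer/paraxial formulation itself:

```python
import numpy as np

# Steady-state harmonic response of a discretized system in the
# frequency domain: (K + i*omega*C - omega^2 * M) u = F.
M = np.diag([2.0, 1.0])                    # mass matrix (toy values)
K = np.array([[300.0, -100.0],
              [-100.0, 100.0]])            # stiffness matrix (toy values)
C = 0.02 * K                               # stiffness-proportional damping
F = np.array([1.0, 0.0])                   # harmonic load amplitude
omega = 2 * np.pi * 5.0                    # 5 Hz excitation

# Complex dynamic stiffness solve; |u| gives displacement amplitudes.
u = np.linalg.solve(K + 1j * omega * C - omega**2 * M, F)
print(np.abs(u))
```

Sweeping `omega` over a frequency range yields the frequency response functions that such models validate against analytical solutions.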

    Representing complex data using localized principal components with application to astronomical data

    Often the relation between the variables constituting a multivariate data space might be characterized by one or more of the terms: ``nonlinear'', ``branched'', ``disconnected'', ``bended'', ``curved'', ``heterogeneous'', or, more generally, ``complex''. In these cases, simple principal component analysis (PCA) as a tool for dimension reduction can fail badly. Of the many alternative approaches proposed so far, local approximations of PCA are among the most promising. This paper will give a short review of localized versions of PCA, focusing on local principal curves and local partitioning algorithms. Furthermore we discuss projections other than the local principal components. When performing local dimension reduction for regression or classification problems it is important to focus not only on the manifold structure of the covariates, but also on the response variable(s). Local principal components only achieve the former, whereas localized regression approaches concentrate on the latter. Local projection directions derived from the partial least squares (PLS) algorithm offer an interesting trade-off between these two objectives. We apply these methods to several real data sets. In particular, we consider simulated astrophysical data from the future Galactic survey mission Gaia.
    Comment: 25 pages. In "Principal Manifolds for Data Visualization and Dimension Reduction", A. Gorban, B. Kegl, D. Wunsch, and A. Zinovyev (eds), Lecture Notes in Computational Science and Engineering, Springer, 2007, pp. 180--204, http://www.springer.com/dal/home/generic/search/results?SGWID=1-40109-22-173750210-
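A minimal sketch of the local-partitioning idea: partition the data with a plain Lloyd-style k-means, then fit a separate SVD-based PCA within each cluster. This is a generic illustration of local PCA, not the local principal curves algorithm the paper reviews:

```python
import numpy as np

def local_pca(X, n_clusters=3, n_components=1, n_iter=20):
    """Toy local PCA: k-means partition, then per-cluster PCA via SVD.
    Initial centres are chosen by deterministic striding through X."""
    centers = X[:: max(1, len(X) // n_clusters)][:n_clusters].copy()
    for _ in range(n_iter):                         # plain Lloyd iterations
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(0) if np.any(labels == k)
                            else centers[k] for k in range(n_clusters)])
    components = []
    for k in range(n_clusters):                     # PCA per cluster
        Xc = X[labels == k] - X[labels == k].mean(0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        components.append(vt[:n_components])        # leading directions
    return labels, components
```

Each cluster gets its own principal directions, so a globally curved or branched structure is approximated piecewise linearly.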

    Community evolution in patent networks: technological change and network dynamics

    When studying patent data as a way to understand innovation and technological change, the conventional indicators might fall short, and categorizing technologies based on the existing classification systems used by patent authorities could cause inaccuracy and misclassification, as shown in the literature. Gao et al. (International Workshop on Complex Networks and their Applications, 2017) established a method to analyze patent classes of similar technologies as network communities. In this paper, we adopt the stabilized Louvain method for network community detection to improve consistency and stability. Incorporating the overlapping community mapping algorithm, we also develop a new method to identify the central nodes based on the temporal evolution of the network structure and track the changes of communities over time. A case study of Germany's patent data is used to demonstrate and verify the application of the method and the results. Compared to the non-network metrics and conventional network measures, we offer a heuristic approach with a dynamic view and more stable results.
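The building block here is Louvain community detection on a network of patent classes. A sketch using the standard Louvain method as implemented in networkx; the paper proposes a *stabilized* variant for run-to-run consistency, and fixing the seed below is only a crude stand-in for that stabilization (the karate-club graph stands in for a patent co-classification network):

```python
import networkx as nx

# Standard Louvain modularity optimization on a small example graph.
G = nx.karate_club_graph()                 # stand-in for a patent network
communities = nx.community.louvain_communities(G, seed=42)
print(len(communities), "communities found")
```

Because Louvain is greedy and randomized, different runs can return different partitions; stabilization (or seeding) matters when tracking communities over time, as the paper does.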

    Developing and testing a measure of consultation-based reassurance for people with low back pain in primary care: a cross-sectional study

    BACKGROUND: Reassurance from physicians is commonly recommended in guidelines for the management of low back pain (LBP), but the process of reassurance and its impact on patients is poorly researched. We aimed to develop a valid and reliable measure of the process of reassurance during LBP consultations. METHODS: Items representing the data-gathering stage of the consultation and affective and cognitive reassurance were generated from literature on physician-patient communication and piloted with expert researchers and physicians, a Patient and Public Involvement group, and LBP patients to form a questionnaire. Patients presenting for LBP at 43 General Practice surgeries were sent the questionnaire. The questionnaire was analysed with Rasch modelling, using two samples from the same population of recent LBP consultations: the first (n = 157, follow-up n = 84) for exploratory analysis and the second (n = 162, follow-up n = 74) for confirmatory testing. Responses to the questionnaire were compared with responses to satisfaction and enablement scales to assess the external validity of the items, and participants completed the questionnaire again one-week later to assess test-retest reliability. RESULTS: The questionnaire was separated into four subscales: data-gathering, relationship-building, generic reassurance, and cognitive reassurance, each containing three items. All subscales showed good validity within the Rasch models, and good reliability based on person- and item-separations and test-retest reliability. All four subscales were significantly positively correlated with satisfaction and enablement for both samples. The final version of the questionnaire is presented here. CONCLUSIONS: Overall, the measure has demonstrated a good level of validity and generally acceptable reliability. 
This is the first measure to focus specifically on reassurance for LBP in primary care settings, and will enable researchers to further understanding of what is reassuring within the context of low back pain consultations, and how outcomes are affected by different types of reassurance. Additionally, the measure may provide a useful training and audit tool for physicians. The new measure requires testing in prospective cohorts, and would benefit from further validation against ethnographic observation of consultations in real time.
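Test-retest reliability of the kind reported above is commonly summarized by the correlation between scores at the consultation and at the one-week follow-up. A sketch with invented subscale scores (the paper's analysis additionally used Rasch modelling and separation indices):

```python
import numpy as np

# Hypothetical subscale scores for 8 patients at two time points,
# one week apart (illustrative values only, not the study's data).
time1 = np.array([9.0, 7.0, 8.0, 5.0, 6.0, 9.0, 4.0, 7.0])
time2 = np.array([8.0, 7.0, 9.0, 5.0, 5.0, 8.0, 4.0, 6.0])

# Test-retest reliability as the Pearson correlation between occasions.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))
```

Values near 1 indicate that patients' relative standing on the subscale is stable over the retest interval.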

    Chromosomal-level assembly of the Asian Seabass genome using long sequence reads and multi-layered scaffolding

    We report here the ~670 Mb genome assembly of the Asian seabass (Lates calcarifer), a tropical marine teleost. We used long-read sequencing augmented by transcriptomics, optical and genetic mapping along with shared synteny from closely related fish species to derive a chromosome-level assembly with a contig N50 size over 1 Mb and scaffold N50 size over 25 Mb that span ~90% of the genome. The population structure of the L. calcarifer species complex was analyzed by re-sequencing 61 individuals representing various regions across the species' native range. SNP analyses identified high levels of genetic diversity and confirmed earlier indications of a population stratification comprising three clades with signs of admixture apparent in the South-East Asian population. The quality of the Asian seabass genome assembly far exceeds that of any other fish species, and will serve as a new standard for fish genomics.
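The N50 statistics quoted above are defined as the length L such that contigs (or scaffolds) of length >= L cover at least half of the total assembly. A small sketch of the computation, with toy lengths:

```python
def n50(lengths):
    """N50: the length L such that sequences of length >= L account for
    at least half of the total assembly size."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

# Toy contig lengths in bp (the actual assembly reports contig N50 > 1 Mb).
print(n50([100, 200, 300, 400, 500]))  # -> 400
```

A higher N50 means fewer, longer pieces, which is why contig N50 over 1 Mb and scaffold N50 over 25 Mb indicate a highly contiguous assembly.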

    General Gauge and Anomaly Mediated Supersymmetry Breaking in Grand Unified Theories with Vector-Like Particles

    In Grand Unified Theories (GUTs) from orbifold and various string constructions the generic vector-like particles do not need to form complete SU(5) or SO(10) representations. To realize them concretely, we present orbifold SU(5) models, orbifold SO(10) models where the gauge symmetry can be broken down to flipped SU(5) X U(1)_X or Pati-Salam SU(4)_C X SU(2)_L X SU(2)_R gauge symmetries, and F-theory SU(5) models. Interestingly, these vector-like particles can be at the TeV-scale so that the lightest CP-even Higgs boson mass can be lifted, or can serve as messenger fields in Gauge Mediated Supersymmetry Breaking (GMSB). Considering GMSB, ultraviolet insensitive Anomaly Mediated Supersymmetry Breaking (AMSB), and the deflected AMSB, we study the general gaugino mass relations and their indices, which are valid from the GUT scale to the electroweak scale at one loop, in the SU(5) models, the flipped SU(5) X U(1)_X models, and the Pati-Salam SU(4)_C X SU(2)_L X SU(2)_R models. In the deflected AMSB, we also define new indices for the gaugino mass relations, and calculate them as well. Using these gaugino mass relations and their indices, we may probe the messenger fields at the intermediate scale in the GMSB and deflected AMSB, determine the supersymmetry breaking mediation mechanisms, and distinguish the four-dimensional GUTs, orbifold GUTs, and F-theory GUTs.
    Comment: RevTex4, 45 pages, 15 tables, version to appear in JHE
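For reference, the baseline that these generalized relations and indices extend is the one-loop renormalization-group invariance of M_i/g_i^2. In minimal scenarios with complete messenger multiplets this gives the familiar universal relation (a textbook result, not the paper's generalized indices):

```latex
% One-loop RG invariance of M_i / g_i^2 implies, at any scale \mu,
\frac{M_1(\mu)}{g_1^2(\mu)} = \frac{M_2(\mu)}{g_2^2(\mu)} = \frac{M_3(\mu)}{g_3^2(\mu)} ,
% so the low-scale gaugino masses satisfy
M_1 : M_2 : M_3 = g_1^2 : g_2^2 : g_3^2 .
```

Incomplete GUT multiplets as messengers deform these ratios, which is what makes the relations and their indices a probe of the mediation mechanism.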

    Linear, Deterministic, and Order-Invariant Initialization Methods for the K-Means Clustering Algorithm

    Over the past five decades, k-means has become the clustering algorithm of choice in many application domains primarily due to its simplicity, time/space efficiency, and invariance to the ordering of the data points. Unfortunately, the algorithm's sensitivity to the initial selection of the cluster centers remains its most serious drawback. Numerous initialization methods have been proposed to address this drawback. Many of these methods, however, have time complexity superlinear in the number of data points, which makes them impractical for large data sets. On the other hand, linear methods are often random and/or sensitive to the ordering of the data points. These methods are generally unreliable in that the quality of their results is unpredictable. Therefore, it is common practice to perform multiple runs of such methods and take the output of the run that produces the best results. Such a practice, however, greatly increases the computational requirements of the otherwise highly efficient k-means algorithm. In this chapter, we investigate the empirical performance of six linear, deterministic (non-random), and order-invariant k-means initialization methods on a large and diverse collection of data sets from the UCI Machine Learning Repository. The results demonstrate that two relatively unknown hierarchical initialization methods due to Su and Dy outperform the remaining four methods with respect to two objective effectiveness criteria. In addition, a recent method due to Erisoglu et al. performs surprisingly poorly.
    Comment: 21 pages, 2 figures, 5 tables, Partitional Clustering Algorithms (Springer, 2014). arXiv admin note: substantial text overlap with arXiv:1304.7465, arXiv:1209.196
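A minimal example of the linear, deterministic, order-invariant class of initializers the chapter studies: a maximin variant whose first centre is the point farthest from the data mean. This is a generic sketch of the class, not one of the six benchmarked methods, and it is order-invariant only up to ties in the argmax:

```python
import numpy as np

def maximin_init(X, k):
    """Deterministic maximin initialization: the first centre is the
    point farthest from the data mean; each subsequent centre is the
    point farthest from its nearest already-chosen centre."""
    centers = [X[np.argmax(((X - X.mean(0)) ** 2).sum(1))]]
    for _ in range(k - 1):
        # Squared distance from each point to its nearest chosen centre.
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)
```

One pass per centre over the data gives O(nk) time, i.e. linear in the number of points, and the result does not depend on how the points are ordered (up to ties).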

    Precipitation of Trichoderma reesei commercial cellulase preparations under standard enzymatic hydrolysis conditions for lignocelluloses

    Comparative studies between commercial Trichoderma reesei cellulase preparations show that, depending on the preparation and loading, total protein precipitation can be as high as 30 % under standard hydrolysis conditions used for lignocellulosic materials. ATR-IR and SDS-PAGE data verify that the precipitates are protein-based and contain key cell wall hydrolyzing enzymes. Precipitation increased considerably with incubation temperature: roughly a 50–150 % increase from 40 to 50 °C and an 800 % increase at 60 °C. All of the reported protein losses translated into significant, and often drastic, losses in activity on related 4-nitrophenyl substrates. In addition, supplementation with the non-ionic surfactant PEG 6,000 decreased 24 h precipitation levels by up to 80 %. Protein precipitation is potentially substantial during enzymatic hydrolysis of lignocelluloses and should be accounted for during lignocellulose conversion process design, particularly when enzyme recycling is considered.
    This work was supported by the project "Demonstrating Industrial scale second generation bioethanol production - Kalundborg Cellulosic Ethanol Plant" under the EU FP7 framework program and the project "Development of improved second generation (2G) bioethanol technology to prepare for commercialization" under the Danish Energy Technology and Demonstration Programme (EUDP).