    An Optical Study of Stellar and Interstellar Environments of Seven Luminous and Ultraluminous X-Ray Sources

    We have studied the stellar and interstellar environments of two luminous X-ray sources and five ultraluminous X-ray sources (ULXs) in order to gain insight into their nature. Archival Hubble Space Telescope images were used to identify the optical counterparts of the ULXs Ho IX X-1 and NGC 1313 X-2, and to make photometric measurements of the local stellar populations of these and the luminous source IC 10 X-1. We obtained high-dispersion spectroscopic observations of the nebulae around these seven sources to search for He II λ4686 emission and to estimate the expansion velocities and kinetic energies of these nebulae. Our observations did not detect nebular He II emission from any source, with the exception of LMC X-1; this is either because we missed the He III regions or because the nebulae are too diffuse to produce He II surface brightnesses within our detection limit. We compare the observed ionization and kinematics of the supershells around the ULXs Ho IX X-1 and NGC 1313 X-2 with the energy feedback expected from the underlying stellar population to assess whether additional energy contributions from the ULXs are needed. In both cases, we find insufficient UV fluxes or mechanical energies from the stellar population; thus these ULXs may be partially responsible for the ionization and energetics of their supershells. All seven sources we studied are in young stellar environments, and six of them have optical counterparts with masses >~7 M_sun; thus, these sources are most likely high-mass X-ray binaries.
    Comment: 30 pages, 9 figures. Numerous minor revisions, primarily to more accurately cite earlier work by Pakull and Mirioni, and to correct typographical errors. Removed a misleading sentence in the Introduction (re: X-ray photoionization by ULXs). Accepted for publication in The Astrophysical Journal. Figures have been reduced in resolution for space requirements; full-resolution figures may be requested by email to [email protected]
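
    For reference, the nebular kinetic energies mentioned above follow from the measured shell mass and expansion velocity via the generic relation (a standard estimate, not a formula quoted from this paper):

        E_kin ≈ (1/2) M_shell v_exp^2

    so the high-dispersion velocity measurements, combined with an estimate of the shell mass, set the mechanical energy budget that the stellar population and the ULX must jointly supply.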

    Hydrothermal processing, as an alternative for upgrading agriculture residues and marine biomass according to the biorefinery concept: a review

    The concept of a biorefinery that integrates processes and technologies for biomass conversion demands efficient utilization of all components. Hydrothermal processing is a potential clean technology for converting raw materials such as lignocellulosic materials and aquatic biomass into bioenergy and high added-value chemicals. In this technology, water at high temperatures and pressures is applied for hydrolysis, extraction and structural modification of materials. This review provides an updated overview of the fundamentals, modelling, separation and applications of the main components of lignocellulosic materials, and of the conversion of aquatic biomass (macro- and micro-algae) into value-added products.
    The authors Hector A. Ruiz and Bruno D. Fernandes thank the Portuguese Foundation for Science and Technology (FCT, Portugal) for their fellowships (grant numbers SFRH/BPD/77361/2011 and SFRH/BD/44724/2008, respectively), and Rosa M. Rodriguez-Jasso thanks the Mexican Science and Technology Council (CONACYT, Mexico) for PhD fellowship support (grant number 206607/230415).
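
    As one example of the modelling fundamentals such reviews cover, isothermal hydrothermal (autohydrolysis) treatments are commonly compared through the severity factor log R0 = log10[t · exp((T − 100)/14.75)], with t in minutes and T in °C. The short sketch below is purely illustrative; the function name and operating conditions are invented for this example, not taken from the review.

        # Illustrative severity-factor calculation for hydrothermal (autohydrolysis)
        # treatments; the conditions below are hypothetical, not from the review.
        import math

        def log_severity(t_min: float, temp_c: float) -> float:
            """log10(R0) for an isothermal treatment of t_min minutes at temp_c degrees C."""
            return math.log10(t_min * math.exp((temp_c - 100.0) / 14.75))

        print(f"log R0 for 30 min at 180 C: {log_severity(30, 180):.2f}")
        print(f"log R0 for 10 min at 200 C: {log_severity(10, 200):.2f}")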

    Comparison of classification methods for detecting associations between SNPs and chick mortality

    Multi-category classification methods were used to detect SNP-mortality associations in broilers. The objective was to select a subset of whole-genome SNPs associated with chick mortality. This was done by categorizing mortality rates and using a filter-wrapper feature selection procedure in each of the classification methods evaluated. Different numbers of categories (2, 3, 4, 5 and 10) and three classification algorithms (naïve Bayes classifiers, Bayesian networks and neural networks) were compared, using early and late chick mortality rates in low and high hygiene environments. SNPs selected by each classification method were evaluated using the predicted residual sum of squares (PRESS) and a significance-test-related metric. A naïve Bayes classifier, coupled with discretization into two or three categories, generated the SNP subset with greatest predictive ability. Further, an alternative categorization scheme, which used only the two extreme portions of the empirical distribution of mortality rates, was considered. This scheme selected SNPs with greater predictive ability than those chosen by the methods described previously. Use of extreme samples seems to enhance the ability of feature selection procedures to select influential SNPs in genetic association studies.
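
    As a minimal illustration of the categorize-then-classify idea described above (not the authors' pipeline; the genotype and mortality data are simulated and scikit-learn is assumed), the sketch below discretizes a toy mortality vector into two categories and ranks SNPs with single-SNP naïve Bayes classifiers, keeping the top-scoring subset as a filter step:

        # Toy sketch: 2-category discretization of mortality plus a naive Bayes
        # filter that ranks SNPs; all data here are simulated, not the study's.
        import numpy as np
        from sklearn.naive_bayes import CategoricalNB
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        snps = rng.integers(0, 3, size=(200, 500))   # genotypes coded 0/1/2
        mortality = rng.random(200)                  # mortality rates (toy values)

        # Discretize mortality at the median (the two-category scheme).
        y = (mortality > np.median(mortality)).astype(int)

        # Filter step: score each SNP by cross-validated accuracy of a one-SNP model.
        scores = []
        for j in range(snps.shape[1]):
            clf = CategoricalNB(min_categories=3)    # genotypes have three levels
            scores.append(cross_val_score(clf, snps[:, [j]], y, cv=5).mean())

        # Keep the top-ranked SNPs; a wrapper step would refine this subset further.
        selected = np.argsort(scores)[::-1][:20]
        print("selected SNP indices:", selected)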

    A Primer on High-Throughput Computing for Genomic Selection

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data-processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans.
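
    As a toy illustration of the batch-processing end of the spectrum described above (the trait names and the work function are invented for this example, not taken from the paper), the sketch below runs several per-trait evaluations as independent processes instead of one after another, which is the basic way a cluster raises throughput:

        # Toy sketch: run per-trait genomic evaluations in parallel rather than
        # sequentially; evaluate_trait is a stand-in for hours of model fitting.
        from concurrent.futures import ProcessPoolExecutor
        import time

        def evaluate_trait(trait):
            """Stand-in for a computationally heavy evaluation of one trait."""
            time.sleep(1)
            return f"breeding values for {trait} ready"

        traits = ["milk_yield", "fertility", "longevity", "feed_efficiency"]

        if __name__ == "__main__":
            start = time.time()
            # Each trait runs as its own process, much as a scheduler would
            # dispatch independent jobs to separate cluster nodes.
            with ProcessPoolExecutor(max_workers=4) as pool:
                for message in pool.map(evaluate_trait, traits):
                    print(message)
            print(f"wall time: {time.time() - start:.1f} s (vs ~4 s sequentially)")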

    LHC Study of Third-Generation Scalar Leptoquarks with Machine-Learned Likelihoods

    We study the impact of machine-learning algorithms on LHC searches for leptoquarks in final states with hadronically decaying tau leptons, multiple b-jets, and large missing transverse momentum. Pair production of scalar leptoquarks with decays only into third-generation leptons and quarks is assumed. Thanks to supervised learning tools and unbinned methods that handle the high-dimensional final states, we can rely on simple selection cuts, which could translate into improved exclusion limits at the 95% confidence level on leptoquark masses for different values of the branching fraction into charged leptons. In particular, for intermediate branching fractions, we expect the exclusion limits for leptoquark masses to extend to ~1.3 TeV. As a novelty in the implemented unbinned analysis, we include a simplified estimation of some systematic uncertainties with the aim of studying their possible impact on the stability of the results. Finally, we also present the projected sensitivity within this framework at 14 TeV for 300 and 3000 fb^-1, which extends the upper limits to ~1.6 and ~1.8 TeV, respectively.
    Comment: 20 pages, 7 figures, 1 table. Unbinned method code with approach to systematic uncertainty inclusion available from https://github.com/AndresDanielPerez
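
    The core of a machine-learned likelihood can be illustrated with a minimal, self-contained sketch (synthetic Gaussian data and scikit-learn, not the authors' code or simulated LHC events): a classifier trained on balanced signal and background samples outputs a score s(x), and s/(1 − s) approximates the per-event signal-to-background density ratio that enters an unbinned likelihood-ratio statistic.

        # Toy sketch of an unbinned, machine-learned likelihood ratio.
        # Data are synthetic Gaussians standing in for high-dimensional final states.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 20000
        bkg = rng.normal(0.0, 1.0, size=(n, 4))   # toy background features
        sig = rng.normal(0.8, 1.0, size=(n, 4))   # toy signal features

        X = np.vstack([bkg, sig])
        y = np.concatenate([np.zeros(n), np.ones(n)])

        # With balanced classes, the classifier score s(x) approximates
        # p_sig / (p_sig + p_bkg), so s / (1 - s) approximates p_sig / p_bkg.
        clf = LogisticRegression(max_iter=1000).fit(X, y)

        def log_density_ratio(x):
            s = np.clip(clf.predict_proba(x)[:, 1], 1e-6, 1 - 1e-6)
            return np.log(s / (1.0 - s))

        # Unbinned statistic on a pseudo-dataset: sum per-event log ratios, no histogramming.
        obs = rng.normal(0.1, 1.0, size=(500, 4))
        print(f"toy unbinned log-likelihood-ratio sum: {log_density_ratio(obs).sum():.1f}")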

    Estimating Self-Sustainability in Peer-to-Peer Swarming Systems

    Peer-to-peer swarming is one of the de facto solutions for distributed content dissemination in today's Internet. By leveraging resources provided by clients, swarming systems reduce the load on, and costs to, publishers. However, there is a limit to how much cost saving can be gained from swarming; for example, for unpopular content peers will always depend on the publisher in order to complete their downloads. In this paper, we investigate this dependence. For this purpose, we propose a new metric, namely swarm self-sustainability. A swarm is referred to as self-sustaining if all its blocks are collectively held by peers; the self-sustainability of a swarm is the fraction of time in which the swarm is self-sustaining. We pose the following question: how does the self-sustainability of a swarm vary as a function of content popularity, the service capacity of the users, and the size of the file? We present a model to answer this question and then propose efficient solution methods to compute self-sustainability. The accuracy of our estimates is validated against simulation. Finally, we also provide closed-form expressions for the fraction of time that a given number of blocks is collectively held by peers.
    Comment: 27 pages, 5 figures
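
    The self-sustainability metric defined above can also be estimated by direct simulation; the sketch below is a simplified Monte Carlo (the arrival rate, session lengths, and block counts are invented for illustration, and this is not the paper's analytical model):

        # Monte Carlo sketch of swarm self-sustainability: the fraction of time
        # during which the blocks held by online peers collectively cover the file.
        import heapq
        import random

        random.seed(0)
        NUM_BLOCKS = 50         # blocks in the file
        ARRIVAL_RATE = 2.0      # peer arrivals per unit time (content popularity)
        MEAN_STAY = 1.5         # mean time a peer stays online
        BLOCKS_PER_PEER = 30    # blocks each arriving peer happens to hold
        HORIZON = 10_000.0

        t, covered_time = 0.0, 0.0
        online = []             # heap of (departure_time, frozenset of held blocks)
        next_arrival = random.expovariate(ARRIVAL_RATE)

        def self_sustaining():
            held = set().union(*(b for _, b in online)) if online else set()
            return len(held) == NUM_BLOCKS

        while t < HORIZON:
            t_next = min(next_arrival, online[0][0] if online else float("inf"))
            if self_sustaining():
                covered_time += min(t_next, HORIZON) - t
            t = t_next
            if t >= HORIZON:
                break
            if online and online[0][0] <= next_arrival:
                heapq.heappop(online)                     # a peer departs
            else:
                blocks = frozenset(random.sample(range(NUM_BLOCKS), BLOCKS_PER_PEER))
                heapq.heappush(online, (t + random.expovariate(1 / MEAN_STAY), blocks))
                next_arrival = t + random.expovariate(ARRIVAL_RATE)

        print(f"estimated self-sustainability: {covered_time / HORIZON:.3f}")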