
    No-Boundary Thinking in Bioinformatics

    The following sections are included: Bioinformatics is a Mature Discipline; The Golden Era of Bioinformatics Has Begun; No-Boundary Thinking in Bioinformatics; Reference.

    A panel analysis of UK industrial company failure

    We examine the failure determinants for large quoted UK industrials using a panel data set comprising 539 firms observed over the period 1988-93. The empirical design employs data from company accounts and is based on Chamberlain’s conditional binomial logit model, which allows for unobservable, firm-specific, time-invariant factors associated with failure risk. We find a noticeable degree of heterogeneity across the sample companies. Our panel results show that, after controlling for unobservables, lower liquidity (measured by the quick assets ratio), slower debtor turnover, and lower profitability were linked to a higher risk of insolvency in the analysis period. The findings appear to support the proposition that current cash-flow considerations, rather than the future prospects of the firm, determined company failures over the 1990s recession.
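
    For context, the conditional (fixed-effects) logit the abstract refers to can be stated in its textbook form; this is the standard expression for Chamberlain's estimator, not one taken from the paper itself:

    \[
      \Pr(y_{it}=1 \mid x_{it}, \alpha_i)
        = \frac{\exp(\alpha_i + x_{it}^{\top}\beta)}{1 + \exp(\alpha_i + x_{it}^{\top}\beta)},
    \]

    where \(y_{it}\) indicates failure of firm \(i\) in period \(t\), \(x_{it}\) collects the accounting ratios, and \(\alpha_i\) is the unobservable firm effect. Estimation conditions on \(\sum_t y_{it}\), a sufficient statistic for \(\alpha_i\), so the time-invariant firm-specific factors drop out of the likelihood.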

    Protostellar mass accretion rates from gravoturbulent fragmentation

    We analyse protostellar mass accretion rates from numerical models of star formation based on gravoturbulent fragmentation, considering a large number of different environments. To within one order of magnitude, the mass accretion rate is approximately given by the mean thermal Jeans mass divided by the corresponding free-fall time. However, mass accretion rates are highly time-variable, with a sharp peak shortly after the formation of the protostellar core. We present an empirical exponential fit formula to describe the time evolution of the mass accretion and discuss the resulting fit parameters. There is a positive correlation between the peak accretion rate and the final mass of the protostar. We also investigate the relation of the accretion rate with the turbulent flow velocity as well as with the driving wavenumbers in different environments. We then compare our results with other theoretical models of star formation and with observational data.
    Comment: 13 pages, 6 figures; accepted by A&A
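
    For orientation, the scaling quoted above can be written out using the standard definitions of the mean thermal Jeans mass and the free-fall time at mean density \(\rho\) (textbook forms, not the paper's exact expressions):

    \[
      M_{\mathrm J} \simeq \frac{\pi^{5/2}}{6}\,\frac{c_{\mathrm s}^{3}}{\sqrt{G^{3}\rho}},
      \qquad
      t_{\mathrm{ff}} = \sqrt{\frac{3\pi}{32\,G\rho}},
      \qquad
      \dot{M} \approx \frac{M_{\mathrm J}}{t_{\mathrm{ff}}} \sim \frac{c_{\mathrm s}^{3}}{G},
    \]

    so, up to factors of order unity, the typical accretion rate is set by the sound speed \(c_{\mathrm s}\) alone, consistent with the order-of-magnitude statement above.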

    Fragmentation and mass segregation in the massive dense cores of Cygnus X

    We present Plateau de Bure interferometer observations obtained in continuum at 1.3 and 3.5 mm towards the six most massive and young (IR-quiet) dense cores in Cygnus X. Located at only 1.7 kpc, the Cygnus X region offers the opportunity to reach scales small enough (of the order of 1700 AU at 1.3 mm) to separate individual collapsing objects. The cores are sub-fragmented, with a total of 23 fragments inside five of the cores. Only the most compact core, CygX-N63, could actually be a single massive protostar with an envelope mass as large as 60 Msun. The fragments in the other cores have sizes and separations similar to low-mass pre-stellar and proto-stellar condensations in nearby protoclusters, and are probably of the same nature. A total of 9 out of these 23 protostellar objects are found to be probable precursors of OB stars with envelope masses ranging from 6 to 23 Msun. The level of fragmentation is globally higher than in the turbulence-regulated, monolithic collapse scenario, but is not as high as expected in a pure gravo-turbulent scenario where the distribution of mass is dominated by low-mass protostars/stars. Here, the fractions of the total core masses contained in the high-mass fragments reach values as high as 28, 44, and 100% in CygX-N12, CygX-N53, and CygX-N63, respectively, much higher than an IMF-like mass distribution would predict. The increase in fragmentation efficiency with density in the cores is proposed to be due to the increasing importance of self-gravity, leading to gravitational collapse at the scale of the dense cores. At the same time, the cores tend to fragment into a few massive protostars within their central regions. We are therefore probably witnessing here the primordial mass segregation of clusters in formation.
    Comment: 14 pages, 16 figures, submitted for publication in A&A

    A practical, bioinformatic workflow system for large data sets generated by next-generation sequencing

    Transcriptomics (at the level of single cells, tissues and/or whole organisms) underpins many fields of biomedical science, from understanding basic cellular function in model organisms, to the elucidation of the biological events that govern the development and progression of human diseases, and the exploration of the mechanisms of survival, drug-resistance and virulence of pathogens. Next-generation sequencing (NGS) technologies are contributing to a massive expansion of transcriptomics in all fields and are reducing the cost, time and performance barriers presented by conventional approaches. However, bioinformatic tools for the analysis of the sequence data sets produced by these technologies can be daunting to researchers with limited or no expertise in bioinformatics. Here, we constructed a semi-automated, bioinformatic workflow system, and critically evaluated it for the analysis and annotation of large-scale sequence data sets generated by NGS. We demonstrated its utility for the exploration of differences in the transcriptomes among various stages and both sexes of an economically important parasitic worm (Oesophagostomum dentatum) as well as the prediction and prioritization of essential molecules (including GTPases, protein kinases and phosphatases) as novel drug target candidates. This workflow system provides a practical tool for the assembly, annotation and analysis of NGS data sets, including for researchers with limited bioinformatics expertise. The custom-written Perl, Python and Unix shell computer scripts used can be readily modified or adapted to suit many different applications. This system is now utilized routinely for the analysis of data sets from pathogens of major socio-economic importance and can, in principle, be applied to transcriptomics data sets from any organism.
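
    As an illustration only, a semi-automated workflow of this kind typically chains external NGS tools, verifies that each step produced its expected output, and can be resumed after a failure. The Python sketch below is not the paper's code; the commands, arguments and file names are hypothetical placeholders:

    #!/usr/bin/env python3
    # Minimal sketch of a step runner for a semi-automated NGS workflow.
    # Illustrative only: the commands and file names are hypothetical placeholders.
    import subprocess
    import sys
    from pathlib import Path

    # Ordered pipeline: (label, command, expected output file).
    STEPS = [
        ("assembly",   ["run_assembler", "--reads", "reads.fastq", "--out", "contigs.fa"], "contigs.fa"),
        ("annotation", ["run_annotator", "--in", "contigs.fa", "--out", "annotations.gff"], "annotations.gff"),
    ]

    def run_pipeline() -> None:
        for label, cmd, output in STEPS:
            if Path(output).exists():
                print(f"[skip] {label}: {output} already present")
                continue  # lets a failed run be resumed without redoing earlier steps
            print(f"[run ] {label}: {' '.join(cmd)}")
            result = subprocess.run(cmd)
            if result.returncode != 0 or not Path(output).exists():
                sys.exit(f"[fail] {label}: expected output {output} was not produced")

    if __name__ == "__main__":
        run_pipeline()

    Swapping in real assembly and annotation tools, and logging tool versions and parameters at each step, gives the kind of reproducible, resumable pipeline the abstract describes.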

    Ethics review as a component of institutional approval for a multicentre continuous quality improvement project: the investigator's perspective

    BACKGROUND: For ethical approval of a multicentre study in Canada, investigators must apply separately to individual Research Ethics Boards (REBs). In principle, the protection of human research subjects is of utmost importance. However, in practice, the process of multicentre ethics review can be time consuming and costly, requiring duplication of effort for researchers and REBs. We used our experience with ethical review of The Canadian Perinatal Network (CPN) to gain insight into the Canadian system. METHODS: The application forms of 16 different REBs were abstracted for a list of standardized items. The application process across sites was compared. Correspondence between the REB and the investigators was documented in order to construct a timeline to approval, identify the specific issues raised by each board, and describe how they were resolved. RESULTS: Each REB had a different application form. Most (n = 9) had a two- or three-step application process. Overall, it took a median of 31 days (range 2-174 days) to receive an initial response from the REB. Approval took a median of 42 days (range 4-443 days). Privacy and consent were the two major issues raised. Several additional minor or administrative issues were raised which delayed approval. CONCLUSIONS: For CPN, the Canadian REB process of ethical review proved challenging. REBs acted independently and without unified application forms or submission procedures. We call for a critical examination of the ethical, privacy and institutional review processes in Canada, to determine the best way to undertake multicentre review.

    Physical Activity May Facilitate Diabetes Prevention in Adolescents

    OBJECTIVE—The aim of this study was to examine the association of physical activity with glucose tolerance and resting energy expenditure (REE) among adolescents.

    Control of star formation by supersonic turbulence

    Understanding the formation of stars in galaxies is central to much of modern astrophysics. For several decades it has been thought that stellar birth is primarily controlled by the interplay between gravity and magnetostatic support, modulated by ambipolar diffusion. Recently, however, both observational and numerical work has begun to suggest that support by supersonic turbulence rather than magnetic fields controls star formation. In this review we outline a new theory of star formation relying on the control by turbulence. We demonstrate that although supersonic turbulence can provide global support, it nevertheless produces density enhancements that allow local collapse. Inefficient, isolated star formation is a hallmark of turbulent support, while efficient, clustered star formation occurs in its absence. The consequences of this theory are then explored for both local star formation and galactic-scale star formation. (ABSTRACT ABBREVIATED)
    Comment: Invited review for "Reviews of Modern Physics", 87 pages including 28 figures, in press