
    Microarray-based ultra-high resolution discovery of genomic deletion mutations

    BACKGROUND: Oligonucleotide microarray-based comparative genomic hybridization (CGH) offers an attractive possible route for the rapid and cost-effective genome-wide discovery of deletion mutations. CGH typically involves comparison of the hybridization intensities of genomic DNA samples with microarray chip representations of entire genomes, and has widespread potential application in experimental research and medical diagnostics. However, the power to detect small deletions is low. RESULTS: Here we use a graduated series of Arabidopsis thaliana genomic deletion mutations (of sizes ranging from 4 bp to ~5 kb) to optimize CGH-based genomic deletion detection. We show that the power to detect smaller deletions (4, 28 and 104 bp) depends upon oligonucleotide density (essentially the number of genome-representative oligonucleotides on the microarray chip), and determine the oligonucleotide spacings necessary to guarantee detection of deletions of specified size. CONCLUSIONS: Our findings will enhance a wide range of research and clinical applications, and in particular will aid in the discovery of genomic deletions in the absence of a priori knowledge of their existence.
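The dependence of guaranteed detection on probe spacing can be sketched with simple integer arithmetic. The model below is a deliberate simplification (fixed-length probes tiled at a uniform start-to-start spacing; the probe length of 25 nt used in the example is an assumption, not taken from the paper):

```python
def min_probes_in_deletion(deletion_len, spacing, probe_len):
    """Worst-case number of probes lying entirely inside a deletion,
    for probe_len-mers tiled at a uniform start-to-start spacing."""
    if deletion_len < probe_len:
        return 0  # no probe can be fully covered; detection must then rely
                  # on partially overlapping probes, hence the need for density
    return (deletion_len - probe_len) // spacing

def max_spacing_for_k_probes(deletion_len, probe_len, k):
    """Largest spacing that still guarantees k fully covered probes
    in the worst case (k >= 1)."""
    return (deletion_len - probe_len) // k
```

Under this toy model a 4 bp deletion never fully removes a 25-mer probe, consistent with the abstract's point that detecting the smallest deletions demands very high oligonucleotide density.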

    Population Firm Interaction and the Dynamics of Assimilation Gap

    The paper shows that the interaction between population and firm knowledge produces a non-monotonic change in the assimilation gap. The assimilation gap follows a convex curve, with an upward slope driven by imitation and a downward slope driven by knowledge spillovers. Changes in the characteristics of the innovation shift its peak across time: relative advantage and compatibility shift the peak to the left, while complexity shifts it to the right. The model is tested in a simulated environment and offers insights into the differences in the temporal trajectories of the various adopter groups.
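The rise-then-fall shape of the gap can be reproduced with a toy two-curve simulation (all parameter names and values below are illustrative stand-ins, not the paper's model): adoption spreads by imitation while assimilation lags, catching up through spillovers.

```python
def simulate_gap(steps=200, p=0.01, q=0.3, spill=0.05):
    """Toy diffusion model: adoption grows via external influence (p)
    and imitation (q); assimilation trails it via spillovers (spill).
    Returns the assimilation gap (adoption minus assimilation) over time."""
    adopt, assim, gap = 0.0, 0.0, []
    for _ in range(steps):
        adopt += (p + q * adopt) * (1 - adopt)   # imitation-driven adoption
        assim += spill * (adopt - assim)         # spillover-driven assimilation
        gap.append(adopt - assim)
    return gap
```

The gap widens while imitation outpaces assimilation and narrows once spillovers catch up; raising q (a crude stand-in for relative advantage or compatibility) moves the peak earlier, while slowing adoption (a stand-in for complexity) moves it later.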

    Network Archaeology: Uncovering Ancient Networks from Present-day Interactions

    Often questions arise about old or extinct networks. What proteins interacted in a long-extinct ancestor species of yeast? Who were the central players in the Last.fm social network 3 years ago? Our ability to answer such questions has been limited by the unavailability of past versions of networks. To overcome these limitations, we propose several algorithms for reconstructing a network's history of growth given only the network as it exists today and a generative model by which the network is believed to have evolved. Our likelihood-based method finds a probable previous state of the network by reversing the forward growth model. This approach retains node identities so that the history of individual nodes can be tracked. We apply these algorithms to uncover older, non-extant biological and social networks believed to have grown via several models, including duplication-mutation with complementarity, forest fire, and preferential attachment. Through experiments on both synthetic and real-world data, we find that our algorithms can estimate node arrival times, identify anchor nodes from which new nodes copy links, and can reveal significant features of networks that have long since disappeared. Comment: 16 pages, 10 figures
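The idea of reversing a forward growth model can be caricatured for the simplest case, a preferential-attachment tree: the most recently arrived node is probably a low-degree node attached to a hub, so peeling such nodes off one by one yields an estimated arrival order. This is a greedy sketch of the likelihood-based reversal, not the paper's actual algorithms:

```python
import random

def grow_pa(n, seed=0):
    """Grow a preferential-attachment tree: each new node links to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    targets = [0, 1]                 # degree-weighted sampling pool
    for new in range(2, n):
        t = rng.choice(targets)
        adj[new] = {t}
        adj[t].add(new)
        targets += [new, t]
    return adj

def peel_order(adj):
    """Greedy history reversal: repeatedly strip the node that looks most
    recently arrived -- a minimum-degree node, ties broken in favour of one
    attached to the highest-degree neighbour (new nodes preferentially
    attach to hubs). Returns the estimated arrival order."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    order = []
    while adj:
        u = min(adj, key=lambda x: (len(adj[x]),
                                    -max((len(adj[v]) for v in adj[x]), default=0)))
        order.append(u)
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]
    order.reverse()                  # first stripped = newest arrival
    return order
```

Because node identities are retained, the recovered ordering can be compared node by node against the true arrival times on synthetic graphs, which is how such reversal heuristics are typically evaluated.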

    MultiMetEval: comparative and multi-objective analysis of genome-scale metabolic models

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads
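In the two-objective case, extracting a Pareto front from a set of candidate solutions is a standard skyline scan. As a generic sketch (independent of MultiMetEval's actual implementation), given candidate flux distributions scored on two objectives to be maximized:

```python
def pareto_front(points):
    """Skyline scan: return the non-dominated (x, y) pairs when both
    objectives are maximized, sorted by ascending first objective."""
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    front, best_y = [], float("-inf")
    for x, y in pts:
        if y > best_y:               # no previously seen point dominates it
            front.append((x, y))
            best_y = y
    return front[::-1]
```

For example, with (biomass, product-flux) pairs, the front drops every candidate that another candidate beats on both objectives, leaving only the genuine trade-off curve.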

    A Decision Support System for Moving Workloads to Public Clouds

    The current economic environment is compelling CxOs to look for better IT resource utilization in order to get more value from their IT investments and reuse existing infrastructure to support growing business demands. How to get more from less? How to reuse the resources? How to minimize the Total Cost of Ownership (TCO) of the underlying IT infrastructure and the data center operation cost? How to improve Return On Investment (ROI) to remain profitable and transform the IT cost center into a profit center? All of these questions are now being considered in light of emerging 'Public Cloud Computing' services. Cloud Computing is a model for enabling resource allocation to dynamic business workloads in real time from a pool of free resources in a cost-effective manner. Providing resources on demand at cost-effective pricing is not the only criterion when determining whether a business service workload can be moved to a public cloud. So what else must CxOs consider before they migrate to public cloud environments? Business applications and workloads need to be validated in terms of technical portability and business requirements/compliance so that they can be deployed into a public cloud without considerable customization. This validation is not a simple task. In this paper, we discuss an approach and the analytic tooling which help CxOs and their teams automate the process of identifying business workloads that should move to a public cloud environment, as well as understanding the cost benefits. Using this approach, an organization can identify the most suitable business service workloads to move to a public cloud environment from a private data center without re-architecting the applications or changing their business logic. The approach automates the classification and categorization of workloads into various categories.
    For example, Business Critical (BC) and Non-Business Critical (NBC) workloads can be identified based on the role of business services within the overall business function. The approach supports the assessment of public cloud providers on the basis of features and constraints, taking into account industry compliance and the price model for hosting workloads on a pay-per-use basis. Finally, the inbuilt analytics in the tool find the 'best-fit' cloud provider for hosting the business service workload, based on the analysis and outcomes of the previously mentioned steps. Today, the industry follows a manual, time-consuming process for workload identification, workload classification and cloud provider assessment to find the best fit for business service workload hosting. The suggested automated approach enables an organization to reduce cost and time when deciding to move to a public cloud environment, and accelerates the entire process of leveraging cloud benefits through an effective, informed, fact-based decision process.
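The classification and 'best-fit' matching described above can be caricatured as a hard-constraint filter followed by a weighted feature score. Everything below (field names, certification labels, weights) is hypothetical illustration, not the paper's actual tooling:

```python
def best_fit(workload, providers, weights):
    """Hard-constraint filter (compliance, platform portability) followed
    by a weighted feature score; returns the best provider or None."""
    def feasible(p):
        return (workload["compliance"].issubset(p["certifications"])
                and workload["platform"] in p["platforms"])
    def score(p):
        return sum(w * p["features"].get(k, 0.0) for k, w in weights.items())
    candidates = [p for p in providers if feasible(p)]
    return max(candidates, key=score, default=None)
```

The two-stage shape matters: a provider that scores highest on price and SLA is still excluded outright if it misses a compliance requirement, mirroring the paper's point that pricing alone is not the deciding criterion.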

    An Alternative String Landscape Cosmology: Eliminating Bizarreness

    In what has become a standard eternal inflation picture of the string landscape there are many problematic consequences and a difficulty defining probabilities for the occurrence of each type of universe. One feature in particular that might be philosophically disconcerting is the infinite cloning of each individual and each civilization in infinite numbers of separated regions of the multiverse. Even if this is not ruled out due to causal separation, one should ask whether the infinite cloning is a universal prediction of string landscape models or whether there are scenarios in which it is avoided. If a viable alternative cosmology can be constructed, one might search for predictions that might allow one to discriminate experimentally between the models. We present one such scenario although, in doing so, we are forced to give up several popular presuppositions including the absence of a preferred frame and the homogeneity of matter in the universe. The model also has several ancillary advantages. We also consider the future lifetime of the current universe before becoming a light-trapping region. Comment: 13 pages, 1 figure, minor clarifications in version

    Blood pressure and restorative sleep intensity are altered by chronic daytime sleep disruption in rats

    There is a serious need to develop preventative strategies reducing night-shift workers' risk of cardiovascular disease and stroke. I developed an animal model of disrupted sleep and found that when rats had abnormal sleep patterns, overall sleep quality (quantified by delta power in the electroencephalogram) suffered and blood pressure increased significantly.
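Delta power, the sleep-quality measure mentioned above, is the EEG spectral power in roughly the 0.5–4 Hz band. A minimal sketch of such a computation (the sampling rate and band edges are assumptions, and a real analysis would add artifact rejection and Welch-style averaging over epochs):

```python
import numpy as np

def band_power(signal, fs, lo=0.5, hi=4.0):
    """Fraction of total spectral power in the [lo, hi] Hz band
    (defaults approximate the EEG delta band)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() / psd.sum()
```

Reporting the band power as a fraction of total power makes epochs of different amplitude comparable, which is one common way relative delta power is tracked across a sleep recording.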

    Potential pitfalls in MitoChip detected tumor-specific somatic mutations: a call for caution when interpreting patient data

    Background: Several investigators have employed the high-throughput mitochondrial sequencing array (MitoChip) in clinical studies to search mtDNA for markers linked to cancers. In consequence, a host of somatic mtDNA mutations have been identified as linked to different types of cancer. However, closer examination of these data shows that there are a number of potential pitfalls in the detection of tumor-specific somatic mutations in clinical case studies, urging caution in the interpretation of patients' mtDNA data. This study examined mitochondrial sequence variants reported in cancer patients, and assessed the reliability of using detected patterns of polymorphisms in the early diagnosis of cancer. Methods: Published entire mitochondrial genomes from head and neck, adenoid cystic carcinoma, sessile serrated adenoma, and lung primary tumor clinical patients were examined in a phylogenetic context and compared with known, naturally occurring mutations which characterize different populations. Results: The phylogenetic linkage analysis of whole arrays of mtDNA mutations from patient cancerous and non-cancerous tissue confirmed that artificial recombination events occurred in studies of head and neck, adenoid cystic carcinoma, sessile serrated adenoma, and lung primary tumor. Our phylogenetic analysis of these tumor and control leukocyte mtDNA haplotype sequences shows clear-cut evidence of mixed ancestries found in single individuals. Conclusions: Our study makes two prescriptions for both the clinical situation and research: (1) more care should be taken in maintaining sample identity, and (2) analysis should always be undertaken with respect to all the available data and within an evolutionary framework to eliminate artifacts and mix-ups.
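The consistency check that exposes such mix-ups can be caricatured: assign the tumour and control variant sets each to their best-matching haplogroup and flag the pair if the assignments disagree, since tumour and control tissue from one individual share germline mtDNA ancestry. The marker sets below are hypothetical placeholders, not real phylotree data, and real analyses use full phylogenetic placement rather than set overlap:

```python
def best_haplogroup(variants, haplogroups):
    """Assign a variant set to the haplogroup whose defining markers it
    matches best, scored by Jaccard similarity."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(haplogroups, key=lambda h: jaccard(variants, haplogroups[h]))

def looks_mixed(tumor_variants, control_variants, haplogroups):
    """Tumour and control tissue from one person share germline mtDNA
    ancestry; disagreeing haplogroup assignments suggest a sample mix-up
    or artificial recombination rather than somatic mutation."""
    return (best_haplogroup(tumor_variants, haplogroups)
            != best_haplogroup(control_variants, haplogroups))
```

A genuine somatic mutation adds a few variants on top of an unchanged germline background, so it shifts the similarity score only slightly; a wholesale change of best-matching haplogroup is the red flag the paper describes.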