70 research outputs found

    Construction and Random Generation of Hypergraphs with Prescribed Degree and Dimension Sequences

    We propose algorithms for the construction and random generation of hypergraphs without loops and with prescribed degree and dimension sequences. The objective is to provide a starting point for, as well as an alternative to, Markov chain Monte Carlo approaches. Our algorithms transpose properties and algorithms devised for zero-one matrices with prescribed row- and column-sums to hypergraphs. The construction algorithm extends the applicability of Markov chain Monte Carlo approaches to cases where no initial hypergraph is provided. The random generation algorithm enables a self-normalised importance sampling estimator for hypergraph properties such as the average clustering coefficient. We prove the correctness of the proposed algorithms, and we prove that the random generation algorithm generates any hypergraph with the prescribed degree and dimension sequences with non-zero probability. We empirically and comparatively evaluate the effectiveness and efficiency of the random generation algorithm. Experiments show that it provides stable and accurate estimates of the average clustering coefficient and achieves a better effective sample size than the Markov chain Monte Carlo approaches. (21 pages, 3 figures)
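A minimal sketch of the two ingredients named above, under stated assumptions: the greedy routine treats the hypergraph as a 0/1 incidence matrix with prescribed row-sums (vertex degrees) and column-sums (edge dimensions), in the spirit of the matrix-transposition idea, and `snis` is the generic self-normalised importance sampling estimator. Function names and example sequences are illustrative, not the paper's implementation.

```python
import heapq
import math

def greedy_hypergraph(degrees, dims):
    """Greedy sketch: fill edges largest-first from the vertices with the
    most remaining degree, so row-sums match `degrees` and column-sums
    match `dims`. Picking each vertex at most once per edge avoids loops.
    Returns a list of vertex sets, or None if the greedy gets stuck."""
    if sum(degrees) != sum(dims):
        return None  # total incidence counts must agree
    heap = [(-d, v) for v, d in enumerate(degrees) if d > 0]  # max-heap
    heapq.heapify(heap)
    edges = []
    for size in sorted(dims, reverse=True):
        if len(heap) < size:
            return None  # not enough vertices with spare degree
        picked = [heapq.heappop(heap) for _ in range(size)]
        edges.append({v for _, v in picked})
        for d, v in picked:
            if d + 1 < 0:  # vertex still has remaining degree
                heapq.heappush(heap, (d + 1, v))
    return edges

def snis(samples, log_weights, f):
    """Self-normalised importance sampling estimate of E_p[f(X)] from
    proposal draws with log-weights proportional to log p - log q."""
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]  # stabilised weights
    return sum(wi * f(x) for wi, x in zip(w, samples)) / sum(w)

print(greedy_hypergraph([2, 2, 1, 1], [3, 2, 1]))  # [{0, 1, 2}, {0, 1}, {3}]
```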

    Bayesian profiling of molecular signatures to predict event times

    BACKGROUND: It is of particular interest to identify cancer-specific molecular signatures for early diagnosis, monitoring treatment effects and predicting patient survival time. Molecular information about patients is usually generated by high-throughput technologies such as microarrays and mass spectrometry. Statistically, we are challenged by the large number of candidate features relative to the small number of patients in a study, and right-censored clinical data further complicate the analysis. RESULTS: We present a two-stage procedure for profiling molecular signatures for survival outcomes. First, we group closely related molecular features into linkage clusters, each portraying either similar or opposite functions and playing similar roles in prognosis; second, a Bayesian approach is developed to rank the centroids of these linkage clusters and provide a list of the main molecular features closely related to the outcome of interest. A simulation study showed the superior performance of our approach. When applied to data on diffuse large B-cell lymphoma (DLBCL), it identified new candidate signatures for disease prognosis. CONCLUSION: This multivariate approach provides researchers with a more reliable list of molecular features profiled in terms of their prognostic relationship to event times, and generates dependable information for the subsequent identification of prognostic molecular signatures through either biological procedures or further data analysis.
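A minimal sketch of the two-stage structure, under assumptions: features are grouped by hierarchical clustering on 1 − |correlation| (so positively and negatively correlated features, i.e. similar or opposite functions, cluster together), and the Bayesian ranking of stage two is replaced here by a crude univariate score for illustration. `profile_signatures` and all parameter choices are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def profile_signatures(X, time, event, n_clusters=20):
    """Two-stage sketch. X: (patients x features) expression matrix;
    time: follow-up times; event: 1 = event observed, 0 = right-censored."""
    # Stage 1: linkage clusters on 1 - |corr|, so strongly positively OR
    # negatively correlated features land in the same cluster.
    dist = 1.0 - np.abs(np.corrcoef(X.T))
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    centroids = np.array([X[:, labels == c].mean(axis=1)
                          for c in np.unique(labels)]).T
    # Stage 2 (stand-in for the paper's Bayesian ranking): score each
    # centroid by its absolute correlation with time among uncensored
    # patients only -- a deliberately crude placeholder.
    obs = event == 1
    scores = [abs(np.corrcoef(centroids[obs, j], time[obs])[0, 1])
              for j in range(centroids.shape[1])]
    return np.argsort(scores)[::-1], labels  # cluster ranking, best first
```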

    Perivascular-like cells contribute to the stability of the vascular network of osteogenic tissue formed from cell sheet-based constructs

    In recent years, several studies have supported the existence of a close relationship, in terms of function and progeny, between mesenchymal stem cells (MSCs) and pericytes. This concept has opened new perspectives for the application of MSCs in tissue engineering (TE), with particular interest in the pre-vascularization of cell-dense constructs. In this work, cell sheet technology was used to create a scaffold-free construct composed of osteogenic, endothelial and perivascular-like (CD146+) cells for improved in vivo vessel formation, maturation and stability. The CD146 pericyte-associated phenotype was induced from human bone marrow mesenchymal stem cells (hBMSCs) by supplementing standard culture medium with TGF-β1. Co-cultured cell sheets were obtained by culturing perivascular-like (CD146+) cells and human umbilical vein endothelial cells (HUVECs) on an hBMSC monolayer maintained in osteogenic medium for 7 days. The perivascular-like (CD146+) cells and the HUVECs migrated and organized over the collagen-rich osteogenic cell sheet, suggesting cross-talk among the co-cultured cell types. Furthermore, the particular ECM produced by the osteoblastic cells was shown to be the key regulator of this organization. The osteogenic and angiogenic character of the proposed constructs was assessed in vivo. Immunohistochemical analysis of the explants revealed the integration of HUVECs with the host vasculature, as well as the osteogenic potential of the construct, shown by osteocalcin expression. Additionally, analysis of the diameter of human CD146-positive blood vessels showed a higher mean vessel diameter for the co-cultured cell sheet condition, reinforcing the advantage of the proposed model for blood vessel maturation and stability and for the in vitro pre-vascularization of TE constructs. Funding was provided by the Fundação para a Ciência e a Tecnologia project Skingineering (PTDC/SAU-OSM/099422/2008). The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

    Selecting Forecasting Methods

    I examined six ways of selecting forecasting methods. Convenience, “what’s easy,” is inexpensive but risky. Market popularity, “what others do,” sounds appealing but is unlikely to be of value, because popularity and success may not be related and because it overlooks some methods. Structured judgment, “what experts advise,” which is to rate methods against prespecified criteria, is promising. Statistical criteria, “what should work,” are widely used and valuable, but risky if applied narrowly. Relative track records, “what has worked in this situation,” are expensive because they depend on conducting evaluation studies. Guidelines from prior research, “what works in this type of situation,” rely on published research and offer a low-cost, effective approach to selection. Using a systematic review of prior research, I developed a flow chart to guide forecasters in selecting among ten forecasting methods. Some key findings: given enough data, quantitative methods are more accurate than judgmental methods. When large changes are expected, causal methods are more accurate than naive methods. Simple methods are preferable to complex methods; they are easier to understand, less expensive, and seldom less accurate. To select a judgmental method, determine whether there are large changes, frequent forecasts, conflicts among decision makers, and policy considerations. To select a quantitative method, consider the level of knowledge about relationships, the amount of change involved, the type of data, the need for policy analysis, and the extent of domain knowledge. When selection is difficult, combine forecasts from different methods, as in the sketch below.
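The closing recommendation is mechanical enough to show in code: a minimal sketch of forecast combination. The equal weighting here is the textbook default, an assumption, not necessarily the weighting the paper prescribes.

```python
def combine_forecasts(forecasts):
    """Equal-weight combination of point forecasts from different methods."""
    return sum(forecasts) / len(forecasts)

# e.g. naive, causal and judgmental forecasts of the same quantity
print(combine_forecasts([102.0, 98.5, 104.0]))  # -> 101.5
```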

    Accumulation of Rhodopsin in Late Endosomes Triggers Photoreceptor Cell Degeneration

    Progressive retinal degeneration is the underlying feature of many human retinal dystrophies. Previous work using Drosophila as a model system, together with analysis of specific mutations in human rhodopsin, has uncovered a connection between rhodopsin endocytosis and retinal degeneration. In these mutants, rhodopsin and its regulatory protein arrestin form stable complexes, and endocytosis of these complexes causes photoreceptor cell death. In this study we show that internalized rhodopsin is not degraded in the lysosome but instead accumulates in late endosomes. Using mutants defective in late-endosome-to-lysosome trafficking, we show that rhodopsin accumulates in endosomal compartments and leads to light-dependent retinal degeneration. Moreover, in dying photoreceptors the internalized rhodopsin is not degraded but instead shows characteristics of insoluble proteins. Together, these data implicate buildup of rhodopsin in the late endosomal system as a novel trigger of photoreceptor neuron death.

    The Future of Precision Medicine: Potential Impacts for Health Technology Assessment

    Objective: Precision medicine allows health care interventions to be tailored to groups of patients based on their disease susceptibility, diagnostic or prognostic information, or treatment response. We analyse what developments are expected in precision medicine over the next decade and consider the implications for health technology assessment (HTA) agencies. Methods: We perform a pragmatic review of the literature on the health economic challenges of precision medicine, and conduct interviews with representatives from HTA agencies and research councils and with researchers from a variety of fields, including digital health, health informatics, health economics and primary care research. Results: Three types of precision medicine are highlighted as likely to emerge in clinical practice and affect HTA agencies: complex algorithms, digital health applications and ‘omics’-based tests. Defining the scope of an evaluation, identifying and synthesizing the evidence, and developing decision-analytic models will be more difficult when assessing more complex and uncertain treatment pathways. Stratification of patients will result in smaller subgroups, higher standard errors and greater decision uncertainty. Equity concerns may arise where biomarkers correlate with characteristics such as ethnicity, whilst fast-paced innovation may reduce the shelf-life of guidance and necessitate more frequent reviewing. Discussion: Innovation in precision medicine promises substantial benefits to patients, but it will also change the way in which some health services are delivered and evaluated. As biomarker discovery accelerates and AI-based technologies emerge, the technical expertise and processes of HTA agencies will need to adapt if the objective of value for money is to be maintained.
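The subgroup point follows from how standard errors scale with sample size: splitting a pooled population of n patients into k equal strata inflates each stratum's standard error by sqrt(k). A minimal numerical sketch with illustrative numbers, assuming a common outcome standard deviation:

```python
import math

def standard_error(sd, n):
    """Standard error of a mean outcome estimate: sd / sqrt(n)."""
    return sd / math.sqrt(n)

sd = 1.0
print(standard_error(sd, 400))  # 0.05 -- pooled population of 400
print(standard_error(sd, 100))  # 0.10 -- one of four biomarker strata
# Four-way stratification doubles each subgroup's standard error
# (sqrt(4) = 2), widening the decision uncertainty the abstract notes.
```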

    Crossmodal correspondences: A tutorial review

