
    Gravity Effects on Information Filtering and Network Evolving

    In this paper, based on the gravity principle of classical physics, we propose a tunable gravity-based model that uses tag-usage patterns to weigh both the mass and the distance of network nodes. We then apply this model to the problems of information filtering and network evolution. Experimental results on two real-world data sets, Del.icio.us and MovieLens, show that it not only enhances algorithmic performance but also better characterizes the properties of real networks. This work may shed some light on a deeper understanding of the effects of the gravity model.
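    The core scoring rule can be sketched in a few lines. This is a minimal illustration of a tunable gravity-style score, assuming node "masses" (e.g. degree or tag-usage counts) and pairwise network distances have already been computed; the function names, the exponent `lam`, and the example values are hypothetical, not the paper's exact formulation.

```python
# Hedged sketch of a tunable gravity-based ranking score: attraction
# between two nodes is mass_u * mass_v / distance**lam, where lam tunes
# how strongly distance penalizes the score.

def gravity_score(mass_u, mass_v, distance, lam=1.0):
    """Gravity-style attraction between two network nodes."""
    if distance <= 0:
        raise ValueError("distance must be positive")
    return (mass_u * mass_v) / (distance ** lam)

def rank_items(user_mass, items, lam=1.0):
    """Rank candidate items by gravity score.

    items: list of (item_id, item_mass, distance_to_user) tuples."""
    scored = [(item_id, gravity_score(user_mass, m, d, lam))
              for item_id, m, d in items]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# A heavy but distant item can lose to a lighter, closer one.
items = [("a", 10.0, 2.0), ("b", 4.0, 1.0), ("c", 10.0, 4.0)]
print(rank_items(5.0, items, lam=2.0))
```

    With `lam=2.0`, item "b" (mass 4, distance 1) outscores the heavier but more distant items, which is exactly the filtering behavior the distance exponent is meant to tune.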

    Survival and complications of stereotactic radiosurgery

    Background: Utilization of stereotactic radiosurgery (SRS) for the treatment of high-grade gliomas (HGGs) has been slowly increasing, with variable reported success rates. Objective: Systematic review of the available data to evaluate the efficacy of SRS as a treatment for HGG with regard to median overall survival (OS) and progression-free survival (PFS), in addition to ascertaining the rate of radiation necrosis and other SRS-related major neurological complications. Methods: Literature searches were performed for publications from 1992 to 2016. The pooled estimates of median PFS and median OS were calculated as a weighted estimate of population medians. Meta-analyses of published rates of radiation necrosis and other major neurological complications were also performed. Results: Twenty-nine studies reported the use of SRS for recurrent HGG, and 16 studies reported the use of SRS for newly diagnosed HGG. For recurrent HGG, the pooled estimates of median PFS and median OS were 5.42 months (3–16 months) and 20.19 months (9–65 months), respectively; the pooled radiation necrosis rate was 5.9% (0–44%); and the pooled estimate of the major neurological complications rate was 3.3% (0–23%). For newly diagnosed HGG, the pooled estimates of median PFS and median OS were 7.89 months (5.5–11 months) and 16.87 months (9.5–33 months), respectively; the pooled radiation necrosis rate was 6.5% (0–33%); and the pooled estimate of the other major neurological complications rate was 1.5% (0–25%). Conclusion: Our results suggest that SRS holds promise as a relatively safe treatment option for HGG. In terms of efficacy, there are at this time inadequate data to support routine utilization of SRS as the standard of care for newly diagnosed or recurrent HGG. Further studies should be pursued to define more clearly the therapeutic role of SRS.
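    The pooling step described in the Methods ("a weighted estimate of population medians") can be illustrated with a simple weighted average of study-level medians. This sketch assumes sample-size weights; the review's exact weighting scheme may differ, and the study values below are made up for illustration.

```python
# Hedged sketch: pool study-level medians as a sample-size-weighted
# average. Not necessarily the review's exact estimator.

def pooled_median(medians, weights):
    """Weighted average of study-level medians.

    medians: per-study median values (e.g. median OS in months)
    weights: per-study weights (e.g. sample sizes)"""
    total = sum(weights)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(m * w for m, w in zip(medians, weights)) / total

# Three hypothetical studies reporting median OS (months), weighted by n:
print(pooled_median([9.0, 20.0, 30.0], [50, 30, 20]))  # → 16.5
```

    Larger studies pull the pooled value toward their medians, which is why the pooled OS here (16.5 months) sits closer to the 9-month estimate from the biggest study.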

    Exceptional aggressiveness of cerebral cavernous malformation disease associated with PDCD10 mutations.

    Purpose: The phenotypic manifestations of cerebral cavernous malformation disease caused by rare PDCD10 mutations have not been systematically examined, and a mechanistic link to Rho kinase-mediated hyperpermeability, a potential therapeutic target, has not been established. Methods: We analyzed PDCD10 small interfering RNA-treated endothelial cells for stress fibers, Rho kinase activity, and permeability. Rho kinase activity was assessed in cerebral cavernous malformation lesions. Brain permeability and cerebral cavernous malformation lesion burden were quantified, and clinical manifestations were assessed in prospectively enrolled subjects with PDCD10 mutations. Results: We determined that PDCD10 protein suppresses endothelial stress fibers, Rho kinase activity, and permeability in vitro. Pdcd10 heterozygous mice have a greater lesion burden than other Ccm genotypes. We demonstrated robust Rho kinase activity in murine and human cerebral cavernous malformation vasculature and increased brain vascular permeability in humans with PDCD10 mutations. The clinical phenotype is exceptionally aggressive compared with the more common KRIT1 and CCM2 familial and sporadic cerebral cavernous malformations, with greater lesion burden and more frequent hemorrhages earlier in life. We report for the first time other phenotypic features, including scoliosis, cognitive disability, and skin lesions, unrelated to lesion burden or bleeding. Conclusion: These findings define a unique cerebral cavernous malformation disease with exceptional aggressiveness, and they inform preclinical therapeutic testing, clinical counseling, and the design of trials. Genet Med 17(3): 188–196.

    A genetically modified adenoviral vector with a phage display-derived peptide incorporated into fiber fibritin chimera prolongs survival in experimental glioma

    The dismal clinical context of advanced-grade glioma demands the development of novel therapeutic strategies with direct patient impact. Adenovirus-mediated virotherapy represents a potentially effective approach for glioma therapy. In this research, we generated a novel glioma-specific adenovirus by introducing advanced genetic modifications that maximize the efficiency and safety of therapeutic adenoviral vectors. To this end, a glioma-specific targeted fiber was developed by incorporating a previously published glioma-specific, phage-panned peptide (VWT peptide) into a fiber fibritin-based chimeric fiber, designated "GliomaFF." We showed that entry of this virus was highly restricted to glioma cells, supporting the specificity imparted by the phage-panned peptide. In addition, the stability of the targeting moiety presented by the fiber fibritin structure permitted greatly enhanced infectivity. Furthermore, replication of this virus was restricted to glioma cells by placing expression of the E1 gene under the control of the tumor-specific survivin promoter. Using this approach, we were able to explore the combinatorial efficacy of various adenoviral modifications that could amplify the specificity, infectivity, and exclusive replication of this therapeutic adenovirus in glioma. Finally, virotherapy with this modified virus extended survival by up to 70% in an in vivo murine glioma model. These data demonstrate that this novel adenoviral vector is a safe and efficient treatment for this difficult malignancy.

    CCL2 produced by the glioma microenvironment is essential for the recruitment of regulatory T cells and myeloid-derived suppressor cells

    In many aggressive cancers, such as glioblastoma multiforme (GBM), progression is enabled by local immunosuppression driven by the accumulation of regulatory T cells (Treg) and myeloid-derived suppressor cells (MDSC). However, the mechanistic details of how Treg and MDSC are recruited in various tumors are not yet well understood. Here we report that macrophages and microglia within the glioma microenvironment produce CCL2, a chemokine that is critical for recruiting both CCR4+ Treg and CCR2+Ly-6C+ monocytic MDSC in this disease setting. In murine gliomas, we established novel roles for tumor-derived CCL20 and osteoprotegerin in inducing CCL2 production from macrophages and microglia. Tumors grown in CCL2-deficient mice failed to maximally accrue Treg and monocytic MDSC. In mixed bone-marrow chimera assays, we found that CCR4-deficient Treg and CCR2-deficient monocytic MDSC were defective in glioma accumulation. Further, administration of a small-molecule antagonist of CCR4 improved median survival in this model. In clinical specimens of GBM, elevated levels of CCL2 expression correlated with reduced overall survival of patients. Lastly, we found that CD163-positive infiltrating macrophages were a major source of CCL2 in GBM patients. Collectively, our findings show how glioma cells influence the tumor microenvironment to recruit potent effectors of immunosuppression that drive progression.

    DataPerf: Benchmarks for Data-Centric AI Development

    Machine learning research has long focused on models rather than datasets, and prominent datasets are used for common ML tasks without regard to the breadth, difficulty, and faithfulness of the underlying problems. Neglecting the fundamental importance of data has given rise to inaccuracy, bias, and fragility in real-world applications, and research is hindered by saturation across existing dataset benchmarks. In response, we present DataPerf, a community-led benchmark suite for evaluating ML datasets and data-centric algorithms. We aim to foster innovation in data-centric AI through competition, comparability, and reproducibility. We enable the ML community to iterate on datasets, instead of just architectures, and we provide an open, online platform with multiple rounds of challenges to support this iterative development. The first iteration of DataPerf contains five benchmarks covering a wide spectrum of data-centric techniques, tasks, and modalities in vision, speech, acquisition, debugging, and diffusion prompting, and we support hosting new contributed benchmarks from the community. The benchmarks, online evaluation platform, and baseline implementations are open source, and the MLCommons Association will maintain DataPerf to ensure long-term benefits to academia and industry. Comment: NeurIPS 2023 Datasets and Benchmarks Track.

    Statistical Methods For Phenotyping With Positive-Only Electronic Health Record Data

    Electronic Health Records-based phenotyping requires fully labeled cases and controls for model training and testing. Due to asymmetric clinical workflows, labeled cases can be identified much more easily than labeled controls. Therefore, data from a group of labeled cases and a large number of unlabeled patients, referred to as "positive-only" data, are frequently accessible with minimal labeling effort. This dissertation focuses on statistical methods for training and validating phenotyping models using such positive-only EHR data when the labeled cases can be seen as a representative subset of all cases. In project I, we developed an anchor-variable framework and proposed an accompanying maximum likelihood approach to training a logistic phenotyping model. In project II, we developed a Chi-squared test to assess model calibration by comparing the model-free and model-based estimated numbers of cases among the unlabeled. We also proposed consistent estimators for predictive performance measures and studied their large-sample properties. These methods provide the methodological foundation for positive-only data to be routinely used for training and validating phenotyping models. In project III, we extended the MLE method of project I to accommodate high-dimensional predictors by enabling automated feature selection through a proxy phenotype that is available for all patients. We performed extensive simulation studies to assess the performance of the proposed methods and applied them to Penn Medicine EHR data to phenotype primary aldosteronism.
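    The positive-only setup above can be illustrated with a small simulation. Under the stated assumption that labeled cases are a representative subset of all cases (the "selected completely at random" condition), a classifier trained to predict the *labeling* indicator can be rescaled into a case-probability model by a constant. The feature, the simulation, and the Elkan-Noto-style calibration below are illustrative assumptions, not the dissertation's actual estimators.

```python
# Hedged sketch of positive-unlabeled (positive-only) calibration:
# fit logistic regression to predict "is labeled", then divide by
# c = P(labeled | case), estimated from the labeled cases themselves.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ss, lr=0.1, epochs=1000):
    """Gradient-descent logistic regression on a 1-D feature, predicting
    the labeling indicator s (NOT the true case status)."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, s in zip(xs, ss):
            p = sigmoid(w * x + b)
            gw += (p - s) * x
            gb += p - s
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

random.seed(0)
label_rate = 0.4  # P(labeled | case); constant under the representativeness assumption
cases = [random.gauss(2.0, 1.0) for _ in range(200)]     # true cases
controls = [random.gauss(-2.0, 1.0) for _ in range(200)]  # true controls
xs = cases + controls
# Only some cases carry a label; controls are never labeled.
ss = [1 if random.random() < label_rate else 0 for _ in cases] + [0] * len(controls)

w, b = fit_logistic(xs, ss)

# Calibration constant: mean predicted labeling probability over labeled cases.
labeled_xs = [x for x, s in zip(xs, ss) if s == 1]
c = sum(sigmoid(w * x + b) for x in labeled_xs) / len(labeled_xs)

def p_case(x):
    """Calibrated probability that a patient with feature x is a true case."""
    return min(1.0, sigmoid(w * x + b) / c)

print(round(c, 2), round(p_case(3.0), 2), round(p_case(-3.0), 2))
```

    The key point is that no labeled controls are ever used: the model learns from labeled cases versus everyone else, and the constant `c` corrects for the fraction of cases that happened to be labeled.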

    Optimal Sink Position Selection Algorithm for Wireless Sensor Networks

    Abstract: We propose the Optimal Sink Position Selection Algorithm based on maximum coverage demands (OSPSA). The algorithm takes into account both the communication demands of the nodes and the probability of communication failure between nodes and sinks. Sinks provide multiple coverage of key nodes so as to satisfy the maximum demands and improve quality of service. The algorithm's characteristics are analyzed theoretically. Simulation experiments are conducted to analyze and compare the relationships among the failure probability, the coverage radius, and the maximum coverage demands. We also compare the effect of the number of sinks, and of the coverage, on the maximum coverage demands.
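    The placement problem described above can be sketched with a simple greedy heuristic: at each step, place a sink at the candidate position covering the most unmet demand within the coverage radius, discounting each node-sink link by its failure probability. This is a hedged illustration of the problem setting, not the OSPSA algorithm itself; the candidate grid, the uniform failure probability, and all example values are assumptions.

```python
# Hedged greedy sketch of demand-weighted sink placement (illustrative,
# not the paper's OSPSA): maximize expected satisfied demand, where each
# node-sink link succeeds with probability (1 - fail_prob).
import math

def coverage_gain(pos, nodes, unmet, radius, fail_prob):
    """Expected demand newly satisfied by a sink at pos."""
    gain = 0.0
    for (nx, ny), demand in zip(nodes, unmet):
        if math.dist(pos, (nx, ny)) <= radius:
            gain += demand * (1.0 - fail_prob)
    return gain

def place_sinks(nodes, demands, candidates, k, radius, fail_prob):
    """Greedily choose k sink positions from the candidate set."""
    unmet = list(demands)
    chosen = []
    for _ in range(k):
        best = max(candidates,
                   key=lambda p: coverage_gain(p, nodes, unmet, radius, fail_prob))
        chosen.append(best)
        # Covered nodes keep only the residual demand left by link failures,
        # so a second sink over the same node adds little unless it is key.
        for i, (nx, ny) in enumerate(nodes):
            if math.dist(best, (nx, ny)) <= radius:
                unmet[i] *= fail_prob
    return chosen

nodes = [(0, 0), (1, 0), (5, 5), (6, 5)]   # two clusters of sensor nodes
demands = [3.0, 1.0, 2.0, 2.0]             # communication demands
candidates = [(0.5, 0.0), (5.5, 5.0)]      # candidate sink positions
print(place_sinks(nodes, demands, candidates, k=2, radius=2.0, fail_prob=0.1))
# → [(0.5, 0.0), (5.5, 5.0)]
```

    Because covered demand is discounted rather than zeroed after placement, the greedy step naturally favors multiple coverage of high-demand key nodes when the failure probability is large, which is the trade-off the paper's simulations explore.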