Monocyte-Derived Macrophages Orchestrate Multiple Cell-Type Interactions To Repair Necrotic Liver Lesions in Disease Models
The liver can fully regenerate after partial resection, and the underlying mechanisms have been extensively studied. The liver can also rapidly regenerate after injury, with most studies focusing on hepatocyte proliferation; however, how the hepatic necrotic lesions that arise during acute or chronic liver diseases are eliminated and repaired remains obscure. Here, we demonstrate that monocyte-derived macrophages (MoMFs) were rapidly recruited to and encapsulated necrotic areas during immune-mediated liver injury and that this feature was essential for repairing necrotic lesions. At the early stage of injury, infiltrating MoMFs activated the Jagged1/notch homolog protein 2 (JAG1/NOTCH2) axis to induce cell death-resistant SRY-box transcription factor 9-positive (SOX9+) hepatocytes near the necrotic lesions, which acted as a barrier against further injury. Subsequently, the necrotic environment (hypoxia and dead cells) induced a cluster of complement 1q-positive (C1q+) MoMFs that promoted necrotic tissue removal and liver repair, while Pdgfb+ MoMFs activated hepatic stellate cells (HSCs) to express α-smooth muscle actin and induce a strong contraction signal (YAP, pMLC) to squeeze and finally eliminate the necrotic lesions. In conclusion, MoMFs play a key role in repairing necrotic lesions, not only by removing necrotic tissue, but also by inducing cell death-resistant hepatocytes to form a perinecrotic capsule and by activating α-smooth muscle actin-expressing HSCs to facilitate necrotic lesion resolution.
Structural Learning of Attack Vectors for Generating Mutated XSS Attacks
Web applications suffer from cross-site scripting (XSS) attacks resulting from incomplete or incorrect input sanitization. Learning the structure of attack vectors can enrich the variety of manifestations in generated XSS attacks. In this study, we focus on generating more threatening XSS attacks for state-of-the-art detection approaches that can find potential XSS vulnerabilities in Web applications, and propose a mechanism for structural learning of attack vectors with the aim of generating mutated XSS attacks in a fully automatic way. Mutated XSS attack generation depends on the analysis of attack vectors and the structural learning mechanism. As the kernel of the learning mechanism, we use a hidden Markov model (HMM) to capture the implicit structure of attack vectors, benefiting from the syntactic meanings labeled by the proposed tokenizing mechanism. Bayes' theorem is used to determine the number of hidden states in the model so that the structural model generalizes. The contributions of this paper are as follows: (1) it automatically learns the structure of attack vectors from practical data analysis to build a structural model of attack vectors; (2) it mimics the manner and elements of attack vectors to extend the ability of testing tools to identify XSS vulnerabilities; (3) it helps verify flaws in the blacklist sanitization procedures of Web applications. We evaluated the proposed mechanism using Burp Intruder with a dataset collected from public XSS archives. The results show that mutated XSS attack generation can identify potential vulnerabilities.
Comment: In Proceedings TAV-WEB 2010, arXiv:1009.330
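The tokenize-then-learn pipeline described above can be sketched as follows. The token classes, their patterns, and the use of a plain first-order Markov chain over observed token classes (instead of the paper's HMM with hidden states chosen via Bayes' theorem) are simplifying assumptions for illustration, not the paper's actual mechanism:

```python
import random
import re
from collections import defaultdict

# Hypothetical token classes standing in for the paper's tokenizing mechanism.
TOKEN_PATTERNS = [
    ("TAG_OPEN", re.compile(r"<\w+")),
    ("TAG_CLOSE", re.compile(r">")),
    ("EVENT", re.compile(r"on\w+\s*=")),       # e.g. onerror=, onload=
    ("SCRIPT", re.compile(r"javascript:")),
    ("QUOTE", re.compile(r"['\"]")),
    ("WS", re.compile(r"\s+")),
    ("TEXT", re.compile(r"[^<>'\"\s]+")),
]

def tokenize(vector):
    """Map an attack-vector string to a sequence of syntax-class labels."""
    tokens, i = [], 0
    while i < len(vector):
        for label, pattern in TOKEN_PATTERNS:
            m = pattern.match(vector, i)
            if m:
                tokens.append(label)
                i = m.end()
                break
        else:
            i += 1  # skip characters no pattern recognizes
    return tokens

def learn_transitions(vectors):
    """Estimate transition probabilities between adjacent token classes."""
    counts = defaultdict(lambda: defaultdict(int))
    for v in vectors:
        toks = tokenize(v)
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def sample_sequence(probs, start="TAG_OPEN", max_len=10):
    """Sample a token-class sequence, i.e. the skeleton of a mutated vector."""
    seq = [start]
    while len(seq) < max_len and seq[-1] in probs:
        nxt = probs[seq[-1]]
        seq.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
        if seq[-1] == "TAG_CLOSE":
            break
    return seq

probs = learn_transitions(['<img onerror="javascript:x">', "<svg onload='y'>"])
```

Concrete payload strings would then be produced by filling each sampled token class with elements drawn from the collected attack vectors.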
Informative scene decomposition for crowd analysis, comparison and simulation guidance
Crowd simulation is a central topic in several fields, including graphics. To achieve high-fidelity simulations, data has been increasingly relied upon for analysis and simulation guidance. However, the information in real-world data is often noisy, mixed, and unstructured, making effective analysis difficult; as a result, such data has not been fully utilized. With the fast-growing volume of crowd data, this bottleneck needs to be addressed. In this paper, we propose a new framework that comprehensively tackles the problem. It centers on an unsupervised method for analysis. The method takes as input raw and noisy data with highly mixed multi-dimensional (space, time, and dynamics) information, and automatically structures it by learning the correlations among these dimensions. The dimensions, together with their correlations, fully describe the scene semantics, which consist of recurring activity patterns in a scene, manifested as space flows with temporal and dynamics profiles. The effectiveness and robustness of the analysis have been tested on datasets with great variations in volume, duration, environment, and crowd dynamics. Based on the analysis, new methods for data visualization, simulation evaluation, and simulation guidance are also proposed. Together, our framework establishes a highly automated pipeline from raw data to crowd analysis, comparison, and simulation guidance. Extensive experiments and evaluations have been conducted to show the flexibility, versatility, and intuitiveness of our framework.
AAK1 Identified as an Inhibitor of Neuregulin-1/ErbB4-Dependent Neurotrophic Factor Signaling Using Integrative Chemical Genomics and Proteomics
Target identification remains challenging for the field of chemical biology. We describe an integrative chemical genomic and proteomic approach combining the use of differentially active analogs of small molecule probes with stable isotope labeling by amino acids in cell culture-mediated affinity enrichment, followed by subsequent testing of candidate targets using RNA interference-mediated gene silencing. We applied this approach to characterizing the natural product K252a and its ability to potentiate neuregulin-1 (Nrg1)/ErbB4 (v-erb-a erythroblastic leukemia viral oncogene homolog 4)-dependent neurotrophic factor signaling and neuritogenesis. We show that AAK1 (adaptor-associated kinase 1) is a relevant target of K252a, and that the loss of AAK1 alters ErbB4 trafficking and expression levels, providing evidence for a previously unrecognized role for AAK1 in Nrg1-mediated neurotrophic factor signaling. Similar strategies should lead to the discovery of novel targets for therapeutic development.
Fine-mapping of the HNF1B multicancer locus identifies candidate variants that mediate endometrial cancer risk.
Common variants in the hepatocyte nuclear factor 1 homeobox B (HNF1B) gene are associated with the risk of Type II diabetes and multiple cancers. Evidence to date indicates that cancer risk may be mediated via genetic or epigenetic effects on HNF1B gene expression. We previously found single-nucleotide polymorphisms (SNPs) at the HNF1B locus to be associated with endometrial cancer, and now report extensive fine-mapping and in silico and laboratory analyses of this locus. Analysis of 1184 genotyped and imputed SNPs in 6608 Caucasian cases and 37,925 controls, and 895 Asian cases and 1968 controls, revealed the best signal of association for SNP rs11263763 (P = 8.4 × 10⁻¹⁴; odds ratio = 0.86, 95% confidence interval 0.82–0.89), located within HNF1B intron 1. Haplotype analysis and conditional analyses provide no evidence of further independent endometrial cancer risk variants at this locus. SNP rs11263763 genotype was associated with HNF1B mRNA expression but not with HNF1B methylation in endometrial tumor samples from The Cancer Genome Atlas. Genetic analyses prioritized rs11263763 and four other SNPs in high-to-moderate linkage disequilibrium as the most likely causal SNPs. Three of these SNPs map to the extended HNF1B promoter based on chromatin marks extending from the minimal promoter region. Reporter assays demonstrated that this extended region reduces activity in combination with the minimal HNF1B promoter, and that the minor alleles of rs11263763 or rs8064454 are associated with decreased HNF1B promoter activity. Our findings provide evidence for a single signal associated with endometrial cancer risk at the HNF1B locus, and that risk is likely mediated via altered HNF1B gene expression.
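For readers unfamiliar with the reported statistics, an odds ratio and its 95% confidence interval are computed from a 2×2 allele-by-status table roughly as sketched below. The counts are invented for illustration and are not from this study:

```python
import math

# Hypothetical 2x2 allele counts (NOT from the study above):
# a/b = cases with/without the minor allele, c/d = same for controls.
a, b = 300, 700
c, d = 400, 600

# Odds ratio: odds of carrying the allele in cases vs. controls.
odds_ratio = (a * d) / (b * c)

# Standard 95% CI on the log-odds scale (Woolf's method).
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

An odds ratio below 1 with a confidence interval excluding 1, as reported for rs11263763, indicates a protective association of the minor allele.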
SCD1 Inhibition Causes Cancer Cell Death by Depleting Mono-Unsaturated Fatty Acids
Increased metabolism is a requirement for tumor cell proliferation. To understand the dependence of tumor cells on fatty acid metabolism, we evaluated various nodes of the fatty acid synthesis pathway. Using RNAi we have demonstrated that depletion of fatty-acid synthesis pathway enzymes SCD1, FASN, or ACC1 in HCT116 colon cancer cells results in cytotoxicity that is reversible by addition of exogenous fatty acids. This conditional phenotype is most pronounced when SCD1 is depleted. We used this fatty-acid rescue strategy to characterize several small-molecule inhibitors of fatty acid synthesis, including identification of TOFA as a potent SCD1 inhibitor, representing a previously undescribed activity for this compound. Reference FASN and ACC inhibitors show cytotoxicity that is less pronounced than that of TOFA, and fatty-acid rescue profiles consistent with their proposed enzyme targets. Two reference SCD1 inhibitors show low-nanomolar cytotoxicity that is offset by at least two orders of magnitude by exogenous oleate. One of these inhibitors slows growth of HCT116 xenograft tumors. Our data outline an effective strategy for interrogation of on-mechanism potency and pathway-node-specificity of fatty acid synthesis inhibitors, establish an unambiguous link between fatty acid synthesis and cancer cell survival, and point toward SCD1 as a key target in this pathway
Traffic Control via Connected and Automated Vehicles: An Open-Road Field Experiment with 100 CAVs
The CIRCLES project aims to reduce instabilities in traffic flow, which are naturally occurring phenomena arising from human driving behavior. These "phantom jams" or "stop-and-go waves" are a significant source of wasted energy. Toward this goal, the CIRCLES project designed a control system, referred to by the CIRCLES team as the MegaController, that could be deployed in real traffic. Our field experiment leveraged a heterogeneous fleet of 100 longitudinally controlled vehicles as Lagrangian traffic actuators, each of which ran a controller with the architecture described in this paper. The MegaController is a hierarchical control architecture consisting of two main layers. The upper layer, called the Speed Planner, is a centralized optimal control algorithm. It assigns speed targets to the vehicles, conveyed through the LTE cellular network. The lower layer is a control layer running on each vehicle. It performs local actuation by overriding the stock adaptive cruise controller, using the stock on-board sensors. The Speed Planner ingests live data feeds provided by third parties, as well as data from our own control vehicles, and uses both to perform the speed assignment. The architecture of the Speed Planner allows for modular use of standard control techniques, such as optimal control, model predictive control, kernel methods, deep reinforcement learning, and explicit controllers. Depending on the vehicle architecture, the local controllers can access all onboard sensing data or only a subset. Control inputs vary across automakers, ranging from torque or acceleration requests on some cars to electronic selection of ACC set points on others. The proposed architecture allows for the combination of all of the settings described above. Most configurations were tested throughout the ramp-up to the MegaVandertest.
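The two-layer hierarchy described above can be sketched in miniature. The fleet-average target and the acceleration-clipping rule below are crude stand-ins for the actual Speed Planner optimization and vehicle-level controllers, and all names and constants are assumptions for illustration:

```python
def speed_planner(observed_speeds):
    """Centralized upper layer: assign one speed target per vehicle.

    Here the target is simply the fleet-average speed, a toy stand-in
    for the optimal control problem solved by the real Speed Planner.
    """
    target = sum(observed_speeds) / len(observed_speeds)
    return [target] * len(observed_speeds)

def vehicle_controller(current_speed, target_speed, max_accel=1.5, dt=0.1):
    """Local lower layer: step toward the assigned target.

    Acceleration is clipped to an assumed comfort bound (m/s^2), mimicking
    a controller that overrides the stock ACC with bounded requests.
    """
    accel = (target_speed - current_speed) / dt
    accel = max(-max_accel, min(max_accel, accel))
    return current_speed + accel * dt

# One planning/actuation cycle for a three-vehicle fleet (speeds in m/s).
speeds = [28.0, 30.0, 26.0]
targets = speed_planner(speeds)
speeds = [vehicle_controller(v, t) for v, t in zip(speeds, targets)]
```

The clipping step is why faster and slower vehicles converge gradually rather than jumping to the target, which is the smoothing behavior that damps stop-and-go waves.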
Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States
Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks
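Probabilistic error of the kind summarized above is typically measured with proper scoring rules. A minimal sketch of the interval score for a central prediction interval, a standard building block for scoring quantile forecasts, is shown below (the function name and example numbers are illustrative, not from the evaluation itself):

```python
def interval_score(y, lower, upper, alpha):
    """Interval score for a central (1 - alpha) prediction interval.

    The score is the interval width plus a penalty of 2/alpha per unit
    the observation y falls outside the interval; lower is better, so
    narrow intervals that still cover the truth score best.
    """
    score = upper - lower
    if y < lower:
        score += (2 / alpha) * (lower - y)
    elif y > upper:
        score += (2 / alpha) * (y - upper)
    return score
```

For example, a 90% interval of [8, 12] scores 4 when the observation lands inside it, but 34 when the observation is 15, illustrating how the score penalizes overconfident (too-narrow) forecasts of the kind that degrade at long horizons.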
The United States COVID-19 Forecast Hub dataset
Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
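The point and probabilistic forecasts in the dataset are stored as flat records, one row per quantile level. The sketch below illustrates that quantile-based layout with invented numbers; the field names are modeled on the Hub's documented submission format and should be treated as an assumption, not a specification:

```python
# Illustrative records mimicking the Hub's quantile-based layout; all
# values are invented, and the field names are an assumption.
rows = [
    {"target": "1 wk ahead inc death", "type": "quantile",
     "quantile": 0.025, "value": 3200.0},
    {"target": "1 wk ahead inc death", "type": "quantile",
     "quantile": 0.5,   "value": 4100.0},
    {"target": "1 wk ahead inc death", "type": "quantile",
     "quantile": 0.975, "value": 5300.0},
    {"target": "1 wk ahead inc death", "type": "point",
     "quantile": None,  "value": 4100.0},
]

# A ladder of quantile rows encodes the predictive distribution; recover
# the median and a 95% central interval for this target.
quantiles = {r["quantile"]: r["value"] for r in rows if r["type"] == "quantile"}
median = quantiles[0.5]
interval_95 = (quantiles[0.025], quantiles[0.975])
```

Storing quantiles rather than full distributions is what makes forecasts from dozens of heterogeneous modeling teams directly comparable and easy to combine into ensembles.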
A framework for human microbiome research
A variety of microbial communities and their genes (the microbiome) exist throughout the human body, with fundamental roles in human health and disease. The National Institutes of Health (NIH)-funded Human Microbiome Project Consortium has established a population-scale framework to develop metagenomic protocols, resulting in a broad range of quality-controlled resources and data including standardized methods for creating, processing and interpreting distinct types of high-throughput metagenomic data available to the scientific community. Here we present resources from a population of 242 healthy adults sampled at 15 or 18 body sites up to three times, which have generated 5,177 microbial taxonomic profiles from 16S ribosomal RNA genes and over 3.5 terabases of metagenomic sequence so far. In parallel, approximately 800 reference strains isolated from the human body have been sequenced. Collectively, these data represent the largest resource describing the abundance and variety of the human microbiome, while providing a framework for current and future studies