
    High-throughput cell-based screening reveals a role for ZNF131 as a repressor of ERalpha signaling

    Background: Estrogen receptor α (ERα) is a transcription factor whose activity is affected by multiple regulatory cofactors. In an effort to identify the human genes involved in the regulation of ERα, we constructed a high-throughput, cell-based, functional screening platform by linking an estrogen response element (ERE) with a reporter gene. This allowed the cellular activity of ERα, in cells cotransfected with the candidate gene, to be quantified in the presence or absence of its cognate ligand E2. Results: From a library of 570 human cDNA clones, we identified zinc finger protein 131 (ZNF131) as a repressor of ERα-mediated transactivation. ZNF131 is a typical member of the BTB/POZ family of transcription factors, and shows both ubiquitous expression and a high degree of sequence conservation. The luciferase reporter gene assay revealed that ZNF131 inhibits ligand-dependent transactivation by ERα in a dose-dependent manner. Electrophoretic mobility shift assay clearly demonstrated that the interaction between ZNF131 and ERα interrupts or prevents ERα binding to the ERE. In addition, ZNF131 was able to suppress the expression of pS2, an ERα target gene. Conclusion: We suggest that the functional screening platform we constructed can be applied to high-throughput genomic screening of candidate ERα-related genes. This in turn may provide new insights into the underlying molecular mechanisms of ERα regulation in mammalian cells.

    A flexible and accurate total variation and cascaded denoisers-based image reconstruction algorithm for hyperspectrally compressed ultrafast photography

    Hyperspectrally compressed ultrafast photography (HCUP), based on compressed sensing and time- and spectrum-to-space mappings, can passively realize simultaneous temporal and spectral imaging of non-repeatable or difficult-to-repeat transient events in a single exposure. It possesses an extremely high frame rate of tens of trillions of frames per second and a sequence depth of several hundred, and plays a revolutionary role in single-shot ultrafast optical imaging. However, because of the ultra-high data compression ratio induced by the extremely large sequence depth, together with the limited fidelity of traditional reconstruction algorithms, HCUP suffers from poor image reconstruction quality and fails to capture fine structures in complex transient scenes. To overcome these restrictions, we propose a flexible image reconstruction algorithm for HCUP based on total variation (TV) and cascaded denoisers (CD), named the TV-CD algorithm. It applies the TV denoising model cascaded with several advanced deep learning-based denoising models within the iterative plug-and-play alternating direction method of multipliers framework, which preserves image smoothness while exploiting the deep denoising networks to obtain more prior information, thereby addressing the sparse-representation problems common to local-similarity and motion-compensation methods. Both simulation and experimental results show that the proposed TV-CD algorithm effectively improves the image reconstruction accuracy and quality of HCUP, and further promotes its practical application in capturing high-dimensional complex physical, chemical, and biological ultrafast optical scenes. Comment: 25 pages, 5 figures and 1 table.
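
    The iterative structure the abstract describes (a TV denoiser cascaded with learned denoisers inside plug-and-play ADMM) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the forward operator A/At, the gradient step size, and the deep_denoise placeholder are all assumptions.

    ```python
    # Minimal plug-and-play ADMM sketch in the spirit of the TV-CD algorithm:
    # a TV denoiser cascaded with a (placeholder) deep denoiser as the prior.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def deep_denoise(img, sigma):
        """Placeholder for a pretrained deep denoising network (assumption)."""
        return img  # identity stand-in; a real network would go here

    def pnp_admm_tv_cd(y, A, At, shape, rho=1.0, step=0.1, n_iter=50, tv_weight=0.1):
        """Reconstruct an image from compressed measurements y = A(x)."""
        x = At(y).reshape(shape)          # initial estimate via the adjoint
        z = x.copy()
        u = np.zeros_like(x)              # scaled dual variable
        for _ in range(n_iter):
            # x-step: gradient descent on the data-fidelity term
            grad = At(A(x.ravel()) - y).reshape(shape) + rho * (x - z + u)
            x = x - step * grad
            # z-step: cascaded denoisers act as the implicit image prior
            z = denoise_tv_chambolle(x + u, weight=tv_weight)  # smoothness
            z = deep_denoise(z, sigma=np.sqrt(1.0 / rho))      # learned prior
            # dual update
            u = u + x - z
        return x
    ```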

    Feature-based Transferable Disruption Prediction for future tokamaks using domain adaptation

    The high acquisition cost of, and significant demand for, disruptive discharges pose an inherent contradiction for data-driven disruption prediction models in future tokamaks. In this paper, we demonstrate a novel approach to predicting disruptions in a future tokamak using only a few discharges, based on a domain adaptation algorithm called CORAL; it is the first attempt to apply domain adaptation to the disruption prediction task. The approach aligns a small amount of data from the future tokamak (target domain) with a large amount of data from an existing tokamak (source domain) to train a machine learning model on the existing tokamak. To simulate the existing- and future-tokamak case, we selected J-TEXT as the existing tokamak and EAST as the future tokamak; to simulate the scarcity of disruptive data in a future tokamak, we selected only 100 non-disruptive discharges and 10 disruptive discharges from EAST as the target-domain training data. We improved CORAL to make it more suitable for the disruption prediction task, calling the result supervised CORAL. Compared with a model trained by simply mixing data from the two tokamaks, the supervised CORAL model enhances disruption prediction performance for the future tokamak (AUC value from 0.764 to 0.890). Through interpretable analysis, we find that supervised CORAL transforms the data distribution to be more similar to that of the future tokamak. Based on SHAP analysis, we design an assessment method for evaluating whether a model has learned similar feature trends; it demonstrates that the supervised CORAL model is closer to a model trained on large data sizes from EAST. Feature-based transferable disruption prediction (FTDP) thus provides a light, interpretable, and few-data-required way to predict disruptions from small data sizes on the future tokamak by aligning features. Comment: 15 pages, 9 figures.
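
    Standard CORAL aligns the second-order statistics of the two domains by whitening the source features and re-coloring them with the target covariance. A minimal sketch of that baseline step is below; the paper's supervised CORAL variant modifies this to use label information and is not reproduced here.

    ```python
    # Minimal sketch of standard CORAL feature alignment (Sun et al., 2016).
    import numpy as np
    from scipy.linalg import fractional_matrix_power

    def coral(source, target):
        """Align source features (n_s, d) to target features (n_t, d)."""
        # Regularized covariances (identity added for numerical stability)
        c_s = np.cov(source, rowvar=False) + np.eye(source.shape[1])
        c_t = np.cov(target, rowvar=False) + np.eye(target.shape[1])
        # Whiten the source, then re-color with the target statistics
        source_whitened = source @ fractional_matrix_power(c_s, -0.5)
        return source_whitened @ fractional_matrix_power(c_t, 0.5)

    # Usage: align the large existing-tokamak (source) dataset to the few
    # future-tokamak (target) discharges before training the predictor:
    # X_source_aligned = coral(X_source, X_target)
    ```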

    Picturing Electron Capture to the Continuum in the Transfer Ionization of Intermediate-Energy He²⁺ Collisions with Argon

    Electron emission occurring in transfer ionization for He²⁺ collisions with argon has been investigated using cold target recoil ion momentum spectroscopy. The double differential cross sections for electron capture to the continuum of the projectile (cusp-shaped electrons) are presented for collision energies from 17.5 to 75 keV/u. At an energy of 30 keV/u, we find a maximum in the experimental ratio of the cusp-shaped electron yield to the total electron yield. This result is explained in terms of velocity matching between the projectile ion and the electron initially bound to the target. One of the important issues for double electron transitions is the role of electron-electron correlation: if this correlation is weak, the transfer-ionization process can be viewed as two separate sequential processes, whereas if it is strong, the process would happen simultaneously rather than sequentially. Our experimental and theoretical results indicate that the correlation is weak and that the first step is target ionization, followed by charge capture.

    Disruption Precursor Onset Time Study Based on Semi-supervised Anomaly Detection

    A full understanding of plasma disruption in tokamaks is currently lacking, and data-driven methods are extensively used for disruption prediction. However, most existing data-driven disruption predictors employ supervised learning techniques, which require labeled training data. Manual labeling of disruption precursors is a tedious and challenging task, as some precursors are difficult to identify accurately, limiting the potential of machine learning models. Commonly used labeling methods sidestep this issue by assuming that the precursor onset occurs at a fixed time before the disruption, which may not be consistent across different types of disruptions, or even within the same type, because plasma instabilities escalate at different speeds. This leads to mislabeled samples and suboptimal performance of supervised learning predictors. In this paper, we present a disruption prediction method based on anomaly detection that overcomes the drawbacks of unbalanced positive and negative data samples and inaccurately labeled disruption precursor samples. We demonstrate the effectiveness and reliability of anomaly-detection predictors based on different algorithms on J-TEXT and EAST, and evaluate the reliability of the precursor onset times they infer. The inferred onset times reveal that the labeling methods have room for improvement, as the onset times of different shots are not necessarily the same. Finally, we optimize the precursor labeling using the onset times inferred by the anomaly-detection predictor and test the optimized labels on supervised learning disruption predictors. The results on J-TEXT and EAST show that models trained on the optimized labels outperform those trained on fixed-onset-time labels. Comment: 21 pages, 11 figures.
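
    One way to realize the onset-time inference the abstract describes is to score each time slice of a disruptive shot with an anomaly detector trained on non-disruptive data, and take the first sustained excursion as the precursor onset. The sketch below is illustrative only: the detector choice (IsolationForest), threshold, and persistence window are assumptions, not the paper's choices.

    ```python
    # Minimal sketch: infer a precursor onset index via anomaly detection.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def infer_onset_index(normal_slices, shot_slices, threshold=-0.1, persist=5):
        """Return the first time index where the shot stays anomalous."""
        detector = IsolationForest(random_state=0).fit(normal_slices)
        scores = detector.score_samples(shot_slices)  # lower = more anomalous
        anomalous = scores < threshold
        # Onset = first index starting `persist` consecutive anomalous slices
        for i in range(len(anomalous) - persist + 1):
            if anomalous[i:i + persist].all():
                return i
        return None  # no sustained precursor found

    # The inferred indices can then replace fixed "T seconds before disruption"
    # labels when training a supervised predictor.
    ```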

    CDX2 as a Prognostic Biomarker in Stage II and Stage III Colon Cancer

    BACKGROUND: The identification of high-risk stage II colon cancers is key to the selection of patients who require adjuvant treatment after surgery. Microarray-based multigene-expression signatures derived from stem cells and progenitor cells hold promise, but they are difficult to use in clinical practice. METHODS: We used a new bioinformatics approach to search for biomarkers of colon epithelial differentiation across gene-expression arrays and then ranked candidate genes according to the availability of clinical-grade diagnostic assays. With the use of subgroup analysis involving independent and retrospective cohorts of patients with stage II or stage III colon cancer, the top candidate gene was tested for its association with disease-free survival and a benefit from adjuvant chemotherapy. RESULTS: The transcription factor CDX2 ranked first in our screening test. A group of 87 of 2115 tumor samples (4.1%) lacked CDX2 expression. In the discovery data set, which included 466 patients, the rate of 5-year disease-free survival was lower among the 32 patients (6.9%) with CDX2-negative colon cancers than among the 434 (93.1%) with CDX2-positive colon cancers (hazard ratio for disease recurrence, 3.44; 95% confidence interval [CI], 1.60 to 7.38; P = 0.002). In the validation data set, which included 314 patients, the rate of 5-year disease-free survival was lower among the 38 patients (12.1%) with CDX2 protein–negative colon cancers than among the 276 (87.9%) with CDX2 protein–positive colon cancers (hazard ratio, 2.42; 95% CI, 1.36 to 4.29; P = 0.003). In both data sets, these findings were independent of the patient's age, sex, and tumor stage and grade. Among patients with stage II cancer, the difference in 5-year disease-free survival was significant both in the discovery data set (49% among 15 patients with CDX2-negative tumors vs. 87% among 191 patients with CDX2-positive tumors, P = 0.003) and in the validation data set (51% among 15 patients with CDX2-negative tumors vs. 80% among 106 patients with CDX2-positive tumors, P = 0.004). In a pooled database of all patient cohorts, the rate of 5-year disease-free survival was higher among 23 patients with stage II CDX2-negative tumors who were treated with adjuvant chemotherapy than among 25 who were not (91% vs. 56%, P = 0.006). CONCLUSIONS: Lack of CDX2 expression identified a subgroup of patients with high-risk stage II colon cancer who appeared to benefit from adjuvant chemotherapy. (Funded by the National Comprehensive Cancer Network, the National Institutes of Health, and others.)

    Design and baseline characteristics of the finerenone in reducing cardiovascular mortality and morbidity in diabetic kidney disease trial

    Background: Among people with diabetes, those with kidney disease have exceptionally high rates of cardiovascular (CV) morbidity and mortality, as well as progression of their underlying kidney disease. Finerenone is a novel, nonsteroidal, selective mineralocorticoid receptor antagonist that has been shown to reduce albuminuria in patients with type 2 diabetes (T2D) and chronic kidney disease (CKD) while carrying only a low risk of hyperkalemia. However, the effect of finerenone on CV and renal outcomes has not yet been investigated in long-term trials. Patients and Methods: The Finerenone in Reducing CV Mortality and Morbidity in Diabetic Kidney Disease (FIGARO-DKD) trial aims to assess the efficacy and safety of finerenone compared with placebo in reducing clinically important CV and renal outcomes in T2D patients with CKD. FIGARO-DKD is a randomized, double-blind, placebo-controlled, parallel-group, event-driven trial running in 47 countries with an expected duration of approximately 6 years. FIGARO-DKD randomized 7,437 patients with an estimated glomerular filtration rate ≥25 mL/min/1.73 m² and albuminuria (urinary albumin-to-creatinine ratio ≥30 to ≤5,000 mg/g). The study has at least 90% power to detect a 20% reduction in the risk of the primary outcome (overall two-sided significance level α = 0.05), the composite of time to first occurrence of CV death, nonfatal myocardial infarction, nonfatal stroke, or hospitalization for heart failure. Conclusions: FIGARO-DKD will determine whether an optimally treated cohort of T2D patients with CKD at high risk of CV and renal events will experience cardiorenal benefits from the addition of finerenone to their treatment regimen. Trial Registration: EudraCT number: 2015-000950-39; ClinicalTrials.gov identifier: NCT02545049.
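
    The stated design parameters (two-sided α = 0.05, 90% power, 20% risk reduction) imply a target number of primary-outcome events under the standard Schoenfeld approximation for an event-driven log-rank comparison with 1:1 allocation. The abstract does not state the trial's actual event target, so the calculation below is only an illustrative consistency check.

    ```python
    # Schoenfeld approximation for the required number of events:
    # D = 4 * (z_{1-alpha/2} + z_{1-beta})^2 / (ln HR)^2 under 1:1 allocation.
    import numpy as np
    from scipy.stats import norm

    alpha, power, hr = 0.05, 0.90, 0.80  # two-sided alpha, power, hazard ratio
    z_alpha = norm.ppf(1 - alpha / 2)    # ~1.960
    z_beta = norm.ppf(power)             # ~1.282
    events = 4 * (z_alpha + z_beta) ** 2 / np.log(hr) ** 2
    print(f"Required primary-outcome events: {events:.0f}")  # ~844
    ```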

    OPTIMAL SCHEDULING OF ELECTRIC VEHICLE'S CHARGING/DISCHARGING

    The advent of Electric Vehicles (EVs) demonstrates the effort and determination of humans to protect the environment. However, as the number of EVs increases, charging them consumes a large amount of energy that may put more pressure on the grid. On the other hand, the smart grid enables two-way energy flow, which gives EVs the potential to serve as a distributed storage system that may help mitigate the fluctuation brought by Renewable Energy Sources (RES) and reinforce the stability of power systems. Therefore, establishing an efficient management mechanism to properly schedule EV charging/discharging behavior becomes imperative. In this thesis, we consider that EVs have one charging mode, Grid-to-Vehicle (G2V), and two discharging modes, Vehicle-to-Grid (V2G) and Vehicle-to-Home (V2H). In V2G, EVs send their surplus power back to the grid, while in V2H, EVs supply power to the appliances in a house. We aim to design optimal algorithms to schedule the EV's operations.

    We first consider an individual residential household with a single EV, where the EV can operate in all three modes. When the EV works in G2V mode, the owner pays the utility company based on the real-time price (RTP). When the EV works in V2G mode, the owner earns a reward based on the market price offered by utility companies. In V2H, the owner uses the EV battery to power appliances in the house rather than purchasing energy from the utility. We propose a linear optimization algorithm to schedule the EV's operations based on the RTP and the market price, subject to a set of constraints; the objective is to minimize the total cost. The results show that, in general, the EV chooses G2V when the RTP is low, responding to demand response; when the RTP is high, the EV tends to work in V2H mode to avoid buying from the utility; and when the market price is high, the EV performs V2G to obtain more revenue.

    Noting that it is not practical for a single EV to perform V2G, we further consider a different scenario in which a group of EVs is aggregated and managed by an aggregator; one example is a parking lot for an enterprise. Initially, only V2G is considered: EVs work as energy suppliers, and the aggregator collects the energy from all connected EVs and transfers the aggregated energy to the grid. Each EV needs to decide how much energy to discharge to the aggregator depending on its battery capacity, remaining energy level, etc. To facilitate the energy collection process, we model it as a virtual energy "trading" process using a hierarchical Stackelberg game approach. We define utility functions for the aggregator and the EVs. To start the game, the aggregator (leader) announces a set of purchasing prices to the EVs; each EV determines how much energy to sell to the aggregator by maximizing its utility based on the announced price and reports that amount to the aggregator. The aggregator then adjusts the purchasing prices by maximizing its own utility based on the optimal energy values collected from the EVs, and the game repeats until it converges to an equilibrium point, where the prices and the amounts of energy become fixed. The proposed game is an uncoordinated game. We also consider power losses during energy transmission and battery degradation caused by additional charging-discharging cycles. Simulation results show the effectiveness and robustness of our game approach.

    Finally, we extend the game to include G2V as well for the aggregated EV group scenario. That is, EVs may charge their batteries according to the RTP so that they can sell more to the aggregator and increase their profit when the aggregator's purchasing price is attractive. We propose an SG-DR algorithm that combines the game model for V2G with demand response (DR) for G2V. Specifically, we adjust the utility function for the EVs and update the constraints of the game to include the DR. Subject to the duration of the parking period, we solve this optimization problem using our combined SG-DR algorithm and generate the EVs' corresponding hourly charging/discharging pattern. Results show that our algorithm can increase EV utility by up to 50% compared with the pure game model. We conclude by summarizing our work under the different scenarios, analyzing the potential risks, and discussing future trends of EV development in the smart grid. Ph.D. in Electrical Engineering, May 201
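
    The single-EV scheduling step can be sketched as a small linear program. The following is a minimal illustration assuming a lossless battery model; the hourly prices, household load, and battery parameters are made-up values, not data from the thesis, and the thesis's full formulation (efficiency losses, degradation, parking duration) is not reproduced.

    ```python
    # Minimal LP sketch of single-EV G2V/V2G/V2H scheduling over 24 hours.
    import numpy as np
    from scipy.optimize import linprog

    np.random.seed(0)
    T = 24
    rtp = 0.10 + 0.08 * np.random.rand(T)     # real-time price ($/kWh), assumed
    market = 0.08 + 0.06 * np.random.rand(T)  # V2G market price ($/kWh), assumed
    load = 1.0 + np.random.rand(T)            # household load (kWh), assumed
    cap, soc0, soc_req, p_max = 40.0, 20.0, 30.0, 7.0  # battery params (kWh, kW)

    # Decision variables per hour: c (G2V charge), d (V2G discharge), h (V2H).
    # V2H is credited at the RTP because it avoids buying from the utility.
    cost = np.concatenate([rtp, -market, -rtp])

    L = np.tril(np.ones((T, T)))              # cumulative-sum operator
    # State of charge after hour t: soc0 + sum(c) - sum(d) - sum(h)
    A_ub = np.block([[L, -L, -L],             # SOC <= capacity
                     [-L, L, L]])             # SOC >= 0
    b_ub = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])
    # Require the battery to end the day at soc_req or above
    A_ub = np.vstack([A_ub,
                      np.concatenate([-np.ones(T), np.ones(T), np.ones(T)])])
    b_ub = np.append(b_ub, soc0 - soc_req)

    bounds = ([(0, p_max)] * T                       # charging limit
              + [(0, p_max)] * T                     # V2G limit
              + [(0, min(p_max, l)) for l in load])  # V2H cannot exceed load

    # Note: a lossless LP may charge and discharge in the same hour when the
    # market price exceeds the RTP; modeling efficiency losses removes this.
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    c_opt, d_opt, h_opt = np.split(res.x, 3)
    print(f"Optimal daily cost: ${res.fun:.2f}")
    ```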
