    VPA improves ferroptosis in tubular epithelial cells after cisplatin-induced acute kidney injury

    Background: As a novel form of non-apoptotic cell death, ferroptosis has been reported to play a crucial role in acute kidney injury (AKI), especially cisplatin-induced AKI. Valproic acid (VPA), an inhibitor of histone deacetylases (HDAC) 1 and 2, is used as an antiepileptic drug. Consistent with our data, a few studies have demonstrated that VPA protects against kidney injury in several models, but the detailed mechanism remains unclear. Results: In this study, we found that VPA protects against cisplatin-induced renal injury by regulating glutathione peroxidase 4 (GPX4) and inhibiting ferroptosis. Our results indicated that ferroptosis was present in tubular epithelial cells of humans with AKI and of mice with cisplatin-induced AKI. VPA or ferrostatin-1 (Fer-1, a ferroptosis inhibitor) reduced cisplatin-induced AKI functionally and pathologically, as shown by reduced serum creatinine, blood urea nitrogen, and tissue damage in mice. Meanwhile, VPA or Fer-1 treatment, in both in vivo and in vitro models, decreased cell death, lipid peroxidation, and expression of acyl-CoA synthetase long-chain family member 4 (ACSL4), and reversed the downregulation of GPX4. In addition, our in vitro study indicated that GPX4 inhibition by siRNA significantly weakened the protective effect of VPA after cisplatin treatment. Conclusion: Ferroptosis plays an essential role in cisplatin-induced AKI, and inhibiting ferroptosis through VPA to protect against renal injury is a viable treatment for cisplatin-induced AKI.

    Demographics, behaviours, and preferences of birdwatchers and their implications for avitourism and avian conservation: A case study of birding in Nonggang, Southern China

    No full text
    Birding, a form of sustainable ecotourism, capitalizes on a community's rich bird resources to attract an increasing number of birdwatchers. However, the influence of birdwatchers' preferences and behaviour during birding remains unclear. Here, we explore the demographics, behaviours, and preferences of birdwatchers using a case study of birding in Nonggang, southern China. The data were collected from a survey of 201 birdwatchers between April 2017 and April 2018. Results demonstrated that respondents were mainly male, middle-aged, middle-to-high income, and highly educated. When birding, 96.0% of respondents would photograph birds, and 45.3% preferred photography at fixed points (i.e., bird-pond photography). Respondents' primary photographic subjects were more likely to be birds with narrower distribution ranges, lower encounter rates, or more feather colours. The majority of respondents had a strong sense of protection, although their awareness of avoiding injury to birds was only average. Our findings suggest that bird-pond photography has become the dominant form of birding. Balancing bird photographers' preferences with the conservation of unique species requires an understanding of rare species and of the value humans place on wildlife-viewing recreation.

    Transformer-based comparative multi-view illegal transaction detection.

    No full text
    In recent years, the Ethereum platform has grown by leaps and bounds, and numerous unscrupulous individuals have used illegal transactions to defraud large sums of money, causing billions of dollars in losses to investors worldwide. Ethereum smart contracts face an endless stream of illicit activity, such as illegal transactions, money laundering, financial fraud, and phishing. Currently, illegal transactions are detected from only a single view of the smart contract, either the contract code view or the account transaction view, which is not only incomplete but also not fully representative of the smart contract's features. More importantly, a single-view detection model cannot accurately capture the global structure and semantic features between the tokens of the view features. In this case, it is particularly important that all view features are shared among themselves. In this paper, we investigate a Transformer-based contrastive multi-view illegal transaction detection network (TranMulti-View Net). The model uses a Transformer to learn a multi-view fusion representation, which aims to maximise the fusion of the interaction information of different view features under the same conditions. In this model, we first use the Transformer to learn global structural and semantic features from the sequence of tokens produced by tokenising each view, capturing long-range dependencies between tokens in the view features; we then share the contract code view features and the account transaction view features across views so that the views learn important semantic information from each other. In addition, we find that semi-supervised training of multi-view features using contrastive learning outperforms prediction based on the direct fusion of different view features, resulting in stronger correlation between view features. As a result, the underlying semantic information can be captured more accurately, leading to more accurate predictions of illegal transactions. The experimental results show that our proposed TranMulti-View Net obtains good detection results with a Precision score of 98%.
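
    As a rough illustration of the multi-view contrastive setup described above, the sketch below (in Python with PyTorch) is not the authors' code: the module sizes, vocabulary, pooling, and the InfoNCE-style pairing are all assumptions. Each view gets its own Transformer encoder, matching (contract code, account transaction) pairs from the same contract act as contrastive positives, and a linear head on the fused representation handles the supervised prediction.

        # Minimal two-view contrastive sketch; all names and hyperparameters
        # are illustrative assumptions, not the paper's implementation.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ViewEncoder(nn.Module):
            """Encodes one view's token sequence into a single vector."""
            def __init__(self, vocab=10_000, d_model=128, nhead=4, nlayers=2):
                super().__init__()
                self.embed = nn.Embedding(vocab, d_model)
                layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)

            def forward(self, tokens):                 # (batch, seq_len)
                h = self.encoder(self.embed(tokens))   # (batch, seq, d_model)
                return h.mean(dim=1)                   # mean-pool over tokens

        def info_nce(z_code, z_txn, temperature=0.1):
            """Matching (code, transaction) pairs are contrastive positives."""
            z_code = F.normalize(z_code, dim=-1)
            z_txn = F.normalize(z_txn, dim=-1)
            logits = z_code @ z_txn.t() / temperature  # pairwise similarities
            targets = torch.arange(z_code.size(0))     # i-th code <-> i-th txn
            return F.cross_entropy(logits, targets)

        code_enc, txn_enc = ViewEncoder(), ViewEncoder()
        code_tok = torch.randint(0, 10_000, (8, 64))   # dummy token batches
        txn_tok = torch.randint(0, 10_000, (8, 64))
        z_c, z_t = code_enc(code_tok), txn_enc(txn_tok)
        loss = info_nce(z_c, z_t)                      # contrastive term
        head = nn.Linear(2 * 128, 2)                   # legal vs. illegal
        logits = head(torch.cat([z_c, z_t], dim=-1))   # supervised term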

    Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings

    No full text
    These are the relevant codes for 'Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings'. This dataset contains 14 folders; the content descriptions of these folders can be found in the 'ReadMe.txt' file, and each code file documents its input, output, and methods. Note that the 'Data' folder is used to store the raw data, and the other code files use loops to read data from the data list. There are several examples in the 'Data' folder; please save driving data and EEG data in the same form as the examples, and make sure to store the data in the specified format or modify the file paths accordingly when using the code.

    Data processing will take a long time. If you want to test the effectiveness of the GBRT model on sample data, please use the data samples at zenodo.org/records/8087109.
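
    Since the note above suggests testing the GBRT (gradient-boosted regression trees) model on the sample data, here is a minimal, hypothetical smoke test using scikit-learn; the file name, feature columns, and target column are illustrative assumptions, not this dataset's actual schema (see 'ReadMe.txt' for that).

        # Hypothetical GBRT smoke test on a small driving/EEG feature sample.
        # The CSV name and the 'behavior' target column are assumptions --
        # adapt them to the layout described in 'ReadMe.txt'.
        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("sample_features.csv")        # assumed file name
        X, y = df.drop(columns=["behavior"]), df["behavior"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                          learning_rate=0.05, random_state=0)
        model.fit(X_tr, y_tr)
        print("held-out R^2:", r2_score(y_te, model.predict(X_te)))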

    Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings

    No full text
    These data are part of the data sample for the paper "Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings", provided for use by editors and reviewers. It includes one compressed file, 'RAW Data.zip', which contains the EEG data and driving behavior data as well as the code for extracting features and training individual models. A detailed description can be found in 'Read_Me.txt'. Video footage of the experimental process of collecting the driving data and EEG data is available via another link (10.5281/zenodo.8093321).

    Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings

    No full text
    These are the relevant codes for 'Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings'. It consists of five parts, each containing a 'ReadMe.txt' instruction file. Please read the instructions before use, and remember to change the file paths in the code.

    Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings

    No full text
    These are the relevant codes for 'Drive with Your Brain: Personalized Prediction of Driving Behaviors with DR-EEG Decoding and Situational Embeddings'. It consists of five parts, each containing a 'ReadMe.txt' instruction file. Please read the instructions before use, and remember to change the file paths in the code. Please process a complete data sample for one driver in the order of parts 1-5. The driving data and EEG data of each driver are separated and placed in the corresponding folders under 'driving' and 'EEG'.

    Result of K-means clustering of the features obtained after removing the final fully connected layer of TranMulti-View Net; the best result is displayed in bold.

    No full text
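
    As a small illustration of the procedure this caption describes, the sketch below (not the paper's code; the stand-in network and cluster count are assumptions) drops a model's final fully connected layer, takes the penultimate-layer embeddings, and clusters them with K-means.

        # Illustrative only: a toy stand-in for TranMulti-View Net.
        import torch
        import torch.nn as nn
        from sklearn.cluster import KMeans

        net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
        backbone = nn.Sequential(*list(net.children())[:-1])  # drop last FC

        feats = torch.randn(200, 32)                  # dummy fused features
        with torch.no_grad():
            emb = backbone(feats).numpy()             # penultimate embeddings
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)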

    Comparison of access control technologies.

    No full text

    PINAT: A Permutation INvariance Augmented Transformer for NAS Predictor

    No full text
    Time-consuming performance evaluation is the bottleneck of traditional Neural Architecture Search (NAS) methods. Predictor-based NAS can speed up performance evaluation by directly predicting performance, rather than training a large number of sub-models and then validating their performance. Most predictor-based NAS approaches use a proxy dataset to train model-based predictors efficiently, but they suffer from performance degradation and generalization problems. We attribute these problems to the poor ability of existing predictors to characterize the sub-models' structure, specifically the topology information extraction and the node feature representation of the input graph data. To address these problems, we propose PINAT, a Transformer-like NAS predictor consisting of a Permutation INvariance Augmentation module that serves as both the token embedding layer and the self-attention head, together with a Laplacian matrix as the positional encoding. Our design produces more representative features of the encoded architecture and outperforms state-of-the-art NAS predictors on six search spaces: NAS-Bench-101, NAS-Bench-201, DARTS, ProxylessNAS, PPI, and ModelNet. The code is available at https://github.com/ShunLu91/PINAT.
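
    To make the positional-encoding idea concrete, here is a minimal sketch of Laplacian positional encodings for a toy NAS cell graph; it is an assumption-level illustration, not the PINAT implementation (that lives in the linked repository). Eigenvectors of the normalized graph Laplacian give each node a position signal that depends on the graph's structure rather than on an arbitrary node ordering, which is what makes the encoding permutation-friendly.

        # Laplacian positional encoding for a toy architecture graph.
        # Illustrative sketch; not the PINAT code.
        import numpy as np

        def laplacian_pe(adj, k):
            """k eigenvectors of the normalized Laplacian (skipping the
            trivial first one): one k-dim position vector per node."""
            deg = adj.sum(axis=1)
            d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
            lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
            _, eigvecs = np.linalg.eigh(lap)         # ascending eigenvalues
            return eigvecs[:, 1:k + 1]

        # Toy 4-node cell with a symmetrized adjacency matrix.
        adj = np.array([[0, 1, 1, 0],
                        [1, 0, 0, 1],
                        [1, 0, 0, 1],
                        [0, 1, 1, 0]], dtype=float)
        pos = laplacian_pe(adj, k=2)                 # shape (4, 2)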