Cybersecurity Strategy against Cyber Attacks towards Smart Grids with PVs
Cyber attacks threaten the security of distribution power grids, such as smart grids. Emerging renewable energy sources such as photovoltaics (PVs) with power-electronics controllers introduce new potential vulnerabilities. Based on electric waveform data measured by waveform sensors in smart grids, we propose a novel cyber attack detection and identification approach. First, we analyze the impacts of cyber attacks (including attacks on the solar inverter that cause unusual harmonics) on electric waveforms in distribution power grids. Then, we propose a novel deep learning based mechanism comprising attack detection and attack diagnosis. By leveraging the structure of the electric waveform sensor data, our approach does not require the training stage needed by typical machine learning/deep learning-based methods for either detection or root-cause diagnosis. For comparison, we evaluated classic data-driven methods, including k-nearest neighbor (KNN), decision tree (DT), support vector machine (SVM), artificial neural network (ANN), and convolutional neural network (CNN). The comparison results verify the performance of the proposed method for detection and diagnosis of various cyber attacks on PV systems.
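As a rough illustration of training-free, waveform-based anomaly detection (not the paper's actual mechanism), the following Python sketch flags unusual harmonic content in an inverter current waveform via an FFT; the sampling rate, harmonic orders, and threshold behaviour are assumptions.

import numpy as np

def harmonic_anomaly_score(waveform, fs=10_000, f0=60, max_harmonic=13):
    """Ratio of non-fundamental harmonic energy to fundamental energy.

    waveform: 1-D array of current samples; fs: sampling rate (Hz);
    f0: grid fundamental frequency (Hz). All values are illustrative.
    """
    spectrum = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)

    def band_energy(f):
        mask = np.abs(freqs - f) < f0 / 2
        return spectrum[mask].sum()

    fundamental = band_energy(f0)
    harmonics = sum(band_energy(k * f0) for k in range(2, max_harmonic + 1))
    return harmonics / (fundamental + 1e-12)

# Example: a score well above that of a clean waveform flags a suspicious one.
t = np.arange(0, 0.2, 1 / 10_000)
clean = np.sin(2 * np.pi * 60 * t)
attacked = clean + 0.3 * np.sin(2 * np.pi * 300 * t)  # injected 5th harmonic
print(harmonic_anomaly_score(clean), harmonic_anomaly_score(attacked))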
Comparing Partial Least Square Approaches in Gene-or Region-based Association Study for Multiple Quantitative Phenotypes
When complex diseases are considered quantitatively, there are at least three statistical strategies for association study: a single SNP on a single trait, a gene or region (with multiple SNPs) on a single trait, and a gene or region on multiple traits. The third is the most general for dissecting the genetic mechanisms underlying complex diseases that involve multiple quantitative traits. Gene- or region-based association methods using partial least squares (PLS) approaches have been shown to have a clear power advantage. However, few methods have been developed for multiple quantitative phenotypes or traits underlying a condition or disease, and the performance of the various PLS approaches used in association studies for multiple quantitative traits has not been assessed. From a regression perspective, we exploit the association between multiple SNPs and multiple phenotypes or traits through exhaustive scan statistics (sliding windows) using PLS and sparse PLS (SPLS) regression. Simulations are conducted to assess the performance of the proposed scan statistics and compare them with an existing method. The proposed methods are applied to 12 regions of GWAS data from the European Prospective Investigation of Cancer (EPIC)-Norfolk study.
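As a simplified sketch of a sliding-window PLS scan (the window size, component count, and R^2 score used here are illustrative assumptions, and sparse PLS is not included), one could fit scikit-learn's PLSRegression per window of SNPs against the trait matrix:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_scan(genotypes, traits, window=10, step=1, n_components=2):
    """Slide a window over SNP columns and score each window by the R^2 of
    a PLS regression of the multiple traits on the window's SNPs.

    genotypes: (n_samples, n_snps) matrix coded 0/1/2;
    traits:    (n_samples, n_traits) quantitative phenotypes.
    """
    scores = []
    for start in range(0, genotypes.shape[1] - window + 1, step):
        X = genotypes[:, start:start + window]
        pls = PLSRegression(n_components=n_components)
        pls.fit(X, traits)
        scores.append((start, pls.score(X, traits)))  # multi-output R^2
    return scores

# Toy example with random data; a real analysis would use permutation
# to turn window scores into p-values.
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(200, 50)).astype(float)
Y = np.column_stack([G[:, 20:25].sum(1) + rng.normal(size=200),
                     rng.normal(size=200)])
print(max(pls_scan(G, Y), key=lambda s: s[1]))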
Utilizing the Double-Precision Floating-Point Computing Power of GPUs for RSA Acceleration
Asymmetric cryptographic algorithms (e.g., RSA and Elliptic Curve Cryptography) have been implemented on Graphics Processing Units (GPUs) for over a decade. The basic idea of most previous work is to exploit the highly parallel GPU architecture and port integer-based algorithms from general-purpose CPUs to GPUs to achieve high performance. However, the cryptographic computing potential of GPUs, especially of their more powerful floating-point instructions, has not been comprehensively investigated. In this paper, we fully exploit the floating-point computing power of GPUs through several designs, including a floating-point-based Montgomery multiplication/exponentiation algorithm and a Chinese Remainder Theorem (CRT) implementation on the GPU. For practical use of the proposed algorithm, a new method converts the input/output between octet strings and floating-point numbers, fully utilizing the GPU and further improving overall performance by about 5%. The performance of RSA-2048/3072/4096 decryption on an NVIDIA GeForce GTX TITAN reaches 42,211/12,151/5,790 operations per second, respectively, which is 13 times the performance of the previous fastest floating-point-based implementation (published at Eurocrypt 2009). The RSA-4096 decryption result exceeds the existing fastest integer-based result by 23%.
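The CRT speed-up mentioned above is standard: instead of one modular exponentiation mod n = p*q, decryption is split into two half-size exponentiations mod p and mod q and then recombined. A minimal pure-Python sketch of that recombination (independent of the paper's GPU floating-point Montgomery arithmetic) is:

def rsa_decrypt_crt(c, d, p, q):
    """Textbook RSA-CRT decryption of ciphertext c with private exponent d
    and primes p, q (n = p*q). No padding handling; illustration only."""
    dp, dq = d % (p - 1), d % (q - 1)          # reduced exponents
    q_inv = pow(q, -1, p)                      # q^{-1} mod p
    m_p = pow(c, dp, p)                        # two half-size exponentiations
    m_q = pow(c, dq, q)
    h = (q_inv * (m_p - m_q)) % p              # Garner recombination
    return m_q + h * q

# Tiny self-check with toy parameters (never use such sizes in practice).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)
m = 42
assert rsa_decrypt_crt(pow(m, e, n), d, p, q) == m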
S3Eval: A Synthetic, Scalable, Systematic Evaluation Suite for Large Language Models
The rapid development of Large Language Models (LLMs) has led to great
strides in model capabilities like reasoning and long-context understanding.
However, as LLMs are able to process longer contexts, it becomes more
challenging to evaluate whether they have acquired certain capabilities, since
the length of text (e.g., 100K tokens) they can process far exceeds what humans
can reliably assess in a reasonable duration. In this paper, we propose using
complex synthetic tasks as a proxy evaluation method, and present S3Eval, a
Synthetic, Scalable, Systematic evaluation suite for LLM evaluation. As a
synthetic benchmark, S3Eval enables the creation of any number of evaluation
examples that are theoretically invisible to LLMs, mitigating the test set
contamination issue. The synthetic nature of S3Eval provides users full control
over the dataset, allowing them to systematically probe LLM capabilities by
scaling text length and varying task difficulty across diverse scenarios. The
strong correlation between S3Eval performance and scores of real-world
benchmarks like Big-Bench Hard (BBH) demonstrates the soundness of using S3Eval
for the evaluation of LLMs. The in-depth analysis also uncovers additional
insights, including a performance drop when the answer is sparsely distributed
or located in the middle of the context, as well as some counter-intuitive
trends in model performance.
MMHQA-ICL: Multimodal In-context Learning for Hybrid Question Answering over Text, Tables and Images
In the real world, knowledge often exists in a multimodal and heterogeneous
form. Question answering over hybrid data types, including text, tables, and
images (MMHQA), is a challenging task. Recently,
with the rise of large language models (LLMs), in-context learning (ICL) has
become the most popular way to solve QA problems. We propose the MMHQA-ICL
framework to address this problem, which includes a stronger heterogeneous
data retriever and an image captioning module. Most importantly, we propose a
Type-specific In-context Learning Strategy for MMHQA, enabling LLMs to leverage
their strong performance on this task. We are the first to use an end-to-end
LLM prompting method for this task. Experimental results demonstrate that our
framework outperforms all baselines and methods trained on the full dataset,
achieving state-of-the-art results under the few-shot setting on the
MultimodalQA dataset.
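As a hedged illustration of what a type-specific in-context strategy might look like in code (the question-type labels, exemplar pools, and prompt layout here are hypothetical, not the paper's exact design), one could route each question to few-shot demonstrations of the same modality type:

# Hypothetical exemplar pools keyed by the modality the question needs.
EXEMPLARS = {
    "text":  ["Q: Who wrote the report?\nContext: ...\nA: Alice"],
    "table": ["Q: Which year had the highest sales?\nTable: ...\nA: 2019"],
    "image": ["Q: What color is the logo?\nCaption: a blue circular logo\nA: blue"],
}

def build_prompt(question, context, question_type, k=1):
    """Assemble a type-specific few-shot prompt for an LLM call."""
    demos = "\n\n".join(EXEMPLARS[question_type][:k])
    return f"{demos}\n\nQ: {question}\nContext: {context}\nA:"

print(build_prompt("Which year had the highest sales?",
                   "Table: year | sales ...", "table"))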
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models
Fine-tuning is often necessary to enhance the adaptability of Large Language
Models (LLMs) to downstream tasks. Nonetheless, the process of updating billions
of parameters demands significant computational resources and training time,
which poses a substantial obstacle to the widespread application of large-scale
models in various scenarios. To address this issue, Parameter-Efficient
Fine-Tuning (PEFT) has emerged as a prominent paradigm in recent research.
However, current PEFT approaches that employ a limited set of global parameters
(such as LoRA, which adds low-rank approximation matrices to all weights) face
challenges in flexibly combining different computational modules in downstream
tasks. In this work, we introduce a novel PEFT method: MoELoRA. We view
LoRA as a Mixture of Experts (MoE), and to mitigate the random routing phenomenon
observed in MoE, we propose the utilization of contrastive learning to
encourage experts to learn distinct features. We conducted experiments on 11
tasks in math reasoning and common-sense reasoning benchmarks. With the same
number of parameters, our approach outperforms LoRA significantly. In math
reasoning, MoELoRA achieved an average performance that was 4.2% higher than
LoRA, and demonstrated competitive performance compared to the 175B GPT-3.5 on
several benchmarks.
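For intuition only, the following PyTorch sketch shows one way to combine several LoRA experts with a learned router and a simple contrastive penalty that pushes expert outputs apart; the rank, expert count, router, and loss form are assumptions rather than the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    """Frozen base linear layer plus a routed mixture of LoRA experts."""

    def __init__(self, base: nn.Linear, n_experts=4, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)               # keep pretrained weights frozen
        d_in, d_out = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(n_experts, d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, rank, d_out))
        self.router = nn.Linear(d_in, n_experts)
        self.scaling = alpha / rank

    def forward(self, x):
        gates = F.softmax(self.router(x), dim=-1)             # (..., E)
        expert_out = torch.einsum("...i,eir,ero->...eo", x, self.A, self.B)
        mixed = (gates.unsqueeze(-1) * expert_out).sum(-2) * self.scaling
        return self.base(x) + mixed, expert_out

def expert_contrast_loss(expert_out):
    """Penalize pairwise cosine similarity between mean expert outputs so that
    experts learn distinct features (a simple stand-in for the paper's loss)."""
    e = F.normalize(expert_out.flatten(0, -3).mean(0), dim=-1)  # (E, d_out)
    sim = e @ e.t()
    off_diag = sim - torch.diag(torch.diag(sim))
    return off_diag.abs().mean()

layer = MoELoRALinear(nn.Linear(64, 64))
y, experts = layer(torch.randn(2, 10, 64))
loss = expert_contrast_loss(experts)  # added to the task loss during training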
HRoT: Hybrid prompt strategy and Retrieval of Thought for Table-Text Hybrid Question Answering
Answering numerical questions over hybrid content from given tables and
text (TextTableQA) is a challenging task. Recently, Large Language Models (LLMs)
have gained significant attention in the NLP community, and In-Context Learning
and Chain-of-Thought prompting have become two particularly popular research
topics in this field. In this paper,
we introduce a new prompting strategy called Hybrid prompt strategy and
Retrieval of Thought for TextTableQA. Through In-Context Learning, we prompt
the model to develop retrieval-style reasoning when dealing with hybrid
data. Our method achieves superior performance compared to the fully supervised
SOTA on the MultiHiertt dataset in the few-shot setting.
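As a loose sketch (the prompt wording and the "retrieve, then reason" instruction below are hypothetical, not HRoT's actual templates), a hybrid table-text prompt could be assembled like this:

def hrot_style_prompt(question, table_rows, passages, demos=()):
    """Assemble a few-shot prompt that asks the model to first retrieve the
    relevant table cells and sentences, then reason to a numeric answer."""
    table_block = "\n".join(" | ".join(map(str, row)) for row in table_rows)
    text_block = "\n".join(passages)
    demo_block = ("\n\n".join(demos) + "\n\n") if demos else ""
    return (
        f"{demo_block}"
        f"Table:\n{table_block}\n\nText:\n{text_block}\n\n"
        f"Question: {question}\n"
        "First list the table cells and sentences needed to answer, "
        "then reason step by step and give the final number.\nAnswer:"
    )

prompt = hrot_style_prompt(
    "What was the total revenue in 2020 and 2021?",
    [("year", "revenue"), (2020, 1.2), (2021, 1.5)],
    ["Revenue is reported in billions of dollars."],
)
print(prompt)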
Spontaneously immortalised bovine mammary epithelial cells exhibit a distinct gene expression pattern from the breast cancer cells
Background: Spontaneous immortalisation of cultured mammary epithelial cells (MECs) is an extremely rare event, and the molecular mechanism behind spontaneous immortalisation of MECs is unclear. Here, we report the establishment of a spontaneously immortalised bovine mammary epithelial cell line (BME65Cs) and the changes in gene expression associated with BME65Cs cells. Results: BME65Cs cells maintain the general characteristics of normal mammary epithelial cells in morphology, karyotype and immunohistochemistry, accompanied by activation of endogenous bTERT (bovine Telomerase Reverse Transcriptase) and stabilisation of the telomere. Currently, BME65Cs cells have been passaged for more than 220 generations, and these cells exhibit non-malignant transformation. The expression of multiple genes was investigated in BME65Cs cells, senescent BMEC (bovine MEC) cells, early-passage BMEC cells and MCF-7 cells (a human breast cancer cell line). In comparison with early-passage BMEC cells, the expression of senescence-relevant, apoptosis-related genes was significantly changed in BME65Cs cells: p16INK4a was downregulated, p53 was expressed at low levels, and the Bax/Bcl-2 ratio was reversed. Moreover, a slight upregulation of the oncogene c-Myc, along with undetectable levels of the breast tumour-related genes Bag-1 and TRPS-1, was observed in BME65Cs cells, whereas these genes are all highly expressed in MCF-7. In addition, DNMT1 is upregulated in BME65Cs. These results suggest that inhibition of both the senescence and the mitochondrial apoptosis signalling pathways contributes to the immortality of BME65Cs cells. The expression of p53 and p16INK4a in BME65Cs was altered in a pattern of down-regulation but not "loss", suggesting that this spontaneous immortalisation is possibly initiated by a mechanism other than gene mutation of p53 or p16INK4a. Conclusions: Spontaneously immortalised BME65Cs cells maintain many characteristics of normal BMEC cells and exhibit non-malignant transformation. Although this cell line displays altered patterns of gene expression, it is clearly distinct from a malignant breast cancer cell line. Co-inhibition of the cellular senescence and mitochondrial apoptosis pathways coordinates BME65Cs cell immortalisation, and mechanisms other than gene mutation are likely to be involved in the regulation of cellular functions. This study provides insight into the relationship between cellular senescence and immortalisation. BME65Cs cells will be useful in future studies of cellular senescence and tumorigenesis.