Processor Microarchitecture Security
As computer systems grow increasingly complex, various optimizations can unintentionally introduce security vulnerabilities that allow user information and data to be compromised or stolen. In particular, the end of both Moore's law and Dennard scaling motivates the design of more exotic microarchitectural optimizations to extract more performance, further exacerbating the security vulnerabilities. These performance optimizations often focus on sharing or re-using hardware components within a processor between different users or programs. Because of this sharing of hardware, unintentional information leakage channels can be created through the shared components. Microarchitectural attacks, such as the high-profile Spectre and Meltdown attacks or the cache covert channels that they leverage, have demonstrated major vulnerabilities of modern computer architectures due to such microarchitectural optimizations.

Key components of processor microarchitectures are the processor caches, used to achieve high memory bandwidth and low latency for frequently accessed data. With frequently accessed data brought into and stored in caches, memory latency can be significantly reduced when data is fetched from the cache rather than from main memory. With limited processor chip area, however, the cache size cannot be very large. Thus, modern processors adopt a cache hierarchy with multiple levels of caches, where caches close to the processor are faster but smaller, and caches farther from the processor are slower but larger. This leads to a fundamental property of modern processors: the latency of accessing data in different cache levels and in main memory is different. As a result, the timing of memory operations when fetching data from different cache levels, e.g., from the closest-to-processor L1 cache vs. from main memory, can reveal secret-dependent information if the attacker is able to observe the timing of these accesses and correlate them with the operation of the victim's code. Further, due to the limited size of the caches, memory accesses by a victim may displace the attacker's data from the cache, and with knowledge, or reverse-engineering, of the cache architecture, the attacker can learn some information about the victim's data based on the modifications to the state of the cache, which can be observed through timing measurements.

Caches are not the only structures in the processor that can suffer from security vulnerabilities. As an essential mechanism for achieving high performance, cache-like structures are used pervasively in various processor components, such as the translation lookaside buffer (TLB) and the processor frontend. Consequently, vulnerabilities due to timing differences when accessing data in caches or cache-like structures affect many components of the processor.

The main goal of this dissertation is the design of high-performance and secure computer architectures. Since sophisticated hardware components such as caches, TLBs, value predictors, and the processor frontend are critical to ensuring high performance, realizing this goal requires developing fundamental techniques to guarantee security in the presence of timing differences between different processor operations. Furthermore, effective defense mechanisms can only be developed once a formal and systematic understanding of all the possible attacks enabled by timing side channels has been established.
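As a concrete illustration of the latency property described above, the following minimal C sketch (not taken from the dissertation; the helper name and structure are purely illustrative) times a single load once while the data is cached and once after flushing it. It assumes an x86-64 processor where the rdtscp and clflush instructions are available.

```c
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

/* Hypothetical helper: time one load with the time-stamp counter. */
static uint64_t time_load(volatile uint8_t *addr) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);   /* timestamp before the access */
    (void)*addr;                       /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);     /* timestamp after the access */
    return end - start;
}

int main(void) {
    static uint8_t buf[4096];

    buf[0] = 1;                              /* bring the line into the cache */
    uint64_t hit_cycles = time_load(buf);    /* expected: small (cache hit) */

    _mm_clflush((void *)buf);                /* evict the line from the cache hierarchy */
    _mm_mfence();                            /* ensure the flush completes first */
    uint64_t miss_cycles = time_load(buf);   /* expected: much larger (DRAM access) */

    printf("cached load:  %llu cycles\n", (unsigned long long)hit_cycles);
    printf("flushed load: %llu cycles\n", (unsigned long long)miss_cycles);
    return 0;
}
```

On typical hardware the flushed load takes on the order of a hundred or more cycles while the cached load takes only a handful; this observable gap is exactly the timing difference that cache side-channel attacks exploit.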
To realize the research goals, the main contributions of this dissertation are:

- Design and evaluation of a novel three-step cache timing model to understand theoretical vulnerabilities in caches.
- Development of a benchmark suite that can test whether processor caches or secure cache designs are vulnerable to certain theoretical vulnerabilities.
- Development of a timing vulnerability model to test TLBs and design of hardware defenses for the TLBs to address newly found vulnerabilities.
- Analysis of value predictor attacks and design of defenses for value predictors.
- Evaluation of vulnerabilities in processor frontends based on timing differences in the operation of the frontends.
- Development of a design-time security verification framework for secure processor architectures, using information flow tracking methods.

This dissertation combines theoretical modeling and practical benchmarking analysis to help evaluate the susceptibility of different architectures and microarchitectures to timing attacks on caches, TLBs, value predictors, and the processor frontend. Although cache timing side-channel attacks have been studied for more than a decade, there is no evidence that the previously known attacks exhaustively cover all possible attacks. One of the initial research directions covered by this dissertation was to develop a model for cache timing attacks that can help lead toward discovering all possible cache timing attacks. The proposed three-step cache timing vulnerability model provides a means to enumerate all possible interactions between a victim and an attacker who share a cache-like structure, producing the complete set of theoretical timing vulnerabilities. This dissertation also covers new theoretical cache timing attacks that were unknown prior to being found by the model. To make the advances in security not only theoretical, this dissertation also covers the design of a benchmarking suite that runs on commodity processors and helps evaluate their caches' susceptibility to attacks, and that can also run on simulators to test potential or future cache designs. As the dissertation later demonstrates, the three-step timing vulnerability model can be naturally applied to any cache-like structure, such as the TLB, and the dissertation encompasses a three-step model for TLBs, the uncovering of new theoretical TLB attacks, and proposals for defenses. Building on the success of analyzing caches and TLBs for new timing attacks, this dissertation then discusses follow-on research on evaluating and uncovering new timing vulnerabilities in processor frontends. Since security analysis should be applied not just to existing processor microarchitectural features, the dissertation further analyzes possible future features such as value predictors. Although not currently in use, value predictors are actively being researched and proposed for addition to future microarchitectures. This dissertation shows, however, that they are vulnerable to attacks. Lastly, based on the findings of security issues with existing and proposed processor features, this dissertation explores how to better design secure processors from the ground up, and presents a design-time security verification framework for secure processor architectures using information flow tracking methods.
Analysis of Secure Caches using a Three-Step Model for Timing-Based Attacks
Many secure cache designs have been proposed in literature with the aim of mitigating different types of cache timing-based attacks. However, there has so far been no systematic analysis of how these secure cache designs can, or cannot, protect
against different types of timing-based attacks. To provide a means of analyzing the caches, this paper presents a novel three-step modeling approach that is used to exhaustively enumerate all the possible cache timing-based vulnerabilities. The model covers not only attacks that leverage cache accesses or flushes from the local processor core, but also attacks that leverage changes in the cache state due to cache coherence protocol actions from remote cores. Moreover, both conventional attacks and speculative execution attacks are considered. With the list of all possible cache timing vulnerabilities derived from the three-step model, this work further manually analyzes each of the existing secure cache designs to show which types of timing-based side-channel vulnerabilities each secure cache can mitigate. Based on this security analysis, the paper further summarizes the different techniques gleaned from the secure cache designs and their ability to help mitigate different types of cache timing-based vulnerabilities.
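To give a flavor of what the exhaustive enumeration looks like, the short C sketch below (a simplified illustration, not the paper's exact step alphabet or analysis) enumerates all ordered triples over a small set of attacker and victim cache events; in the actual model, each candidate triple would then be analyzed to determine whether the timing observed in the final step can depend on the victim's secret.

```c
#include <stdio.h>

int main(void) {
    /* Simplified step alphabet of attacker (A) and victim (V) cache events.
     * The real model's alphabet and equivalence analysis are richer. */
    const char *steps[] = {
        "A_access_known",    /* attacker accesses an address it chose            */
        "A_flush_known",     /* attacker flushes an address it chose             */
        "V_access_secret",   /* victim access whose address depends on a secret  */
        "V_flush_secret",    /* victim flush whose address depends on a secret   */
        "any"                /* wildcard: no relevant event / unknown state      */
    };
    const int n = (int)(sizeof steps / sizeof steps[0]);
    int count = 0;

    /* Enumerate every ordered three-step interleaving. */
    for (int s1 = 0; s1 < n; s1++)
        for (int s2 = 0; s2 < n; s2++)
            for (int s3 = 0; s3 < n; s3++) {
                printf("%-16s -> %-16s -> %s\n", steps[s1], steps[s2], steps[s3]);
                count++;
            }

    printf("%d candidate three-step patterns\n", count);  /* 5^3 = 125 here */
    return 0;
}
```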
Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding
Human gaze data offer cognitive information that reflects natural language comprehension. Indeed, augmenting language models with human scanpaths has proven beneficial for a range of NLP tasks, including language understanding. However, the applicability of this approach is hampered because the abundance of text corpora is contrasted by a scarcity of gaze data. Although models for the generation of human-like scanpaths during reading have been developed, the potential of synthetic gaze data across NLP tasks remains largely unexplored. We develop a model that integrates synthetic scanpath generation with a scanpath-augmented language model, eliminating the need for human gaze data. Since the model’s error gradient can be propagated throughout all parts of the model, the scanpath generator can be fine-tuned to downstream tasks. We find that the proposed model not only outperforms the underlying language model, but achieves a performance that is comparable to a language model augmented with real human gaze data. Our code is publicly available
Reading Does Not Equal Reading: Comparing, Simulating and Exploiting Reading Behavior Across Populations
Eye-tracking-while-reading corpora play a crucial role in the study of human language processing and, more recently, have been leveraged for cognitively enhancing neural language models. A critical limitation of existing corpora is that they often lack diversity, comprising primarily native speakers. In this study, we expand the eye-tracking-while-reading dataset CopCo, which initially included only Danish L1 readers with and without dyslexia, by incorporating a new dataset of non-native readers with diverse L1 backgrounds. Thus, the extended CopCo corpus constitutes the first eye-tracking-while-reading dataset encompassing neurotypical L1 readers, L1 readers with dyslexia, and non-native readers, all reading the same materials. We first provide extensive descriptive statistics of the extended CopCo corpus. Second, we investigate how different degrees of diversity in the training data affect a state-of-the-art generative model of eye movements in reading. Finally, we use this scanpath generation model for gaze-augmented language modeling and investigate the impact of diversity in the training data on the model’s performance on a range of NLP downstream tasks. The code can be found here: https://github.com/norahollenstein/copco-processing
Eyettention: An Attention-based Dual-Sequence Model for Predicting Human Scanpaths during Reading
Eye movements during reading offer insights into both the reader's cognitive
processes and the characteristics of the text that is being read. Hence, the
analysis of scanpaths in reading has attracted increasing attention across
fields, ranging from cognitive science and linguistics to computer science. In
particular, eye-tracking-while-reading data has been argued to bear the
potential to make machine-learning-based language models exhibit a more
human-like linguistic behavior. However, one of the main challenges in modeling
human scanpaths in reading is their dual-sequence nature: the words are ordered
following the grammatical rules of the language, whereas the fixations are
chronologically ordered. As humans do not strictly read from left-to-right, but
rather skip or refixate words and regress to previous words, the alignment of
the linguistic and the temporal sequence is non-trivial. In this paper, we
develop Eyettention, the first dual-sequence model that simultaneously
processes the sequence of words and the chronological sequence of fixations.
The alignment of the two sequences is achieved by a cross-sequence attention
mechanism. We show that Eyettention outperforms state-of-the-art models in
predicting scanpaths. We provide an extensive within- and across-data set
evaluation on different languages. An ablation study and qualitative analysis
support an in-depth understanding of the model's behavior
Case Report: Isolated facial and trigeminal nerve palsy without ataxia in anti-GQ1b antibody syndrome secondary to Mycoplasma pneumonia
The presence of anti-GQ1b antibodies in serum or cerebrospinal fluid is a diagnostic indicator of the Miller–Fisher variant of Guillain–Barré syndrome (GBS), whereas anti-GQ1b antibody syndrome rarely presents as acute bilateral pain in the cheeks and masticatory muscle fatigue without ophthalmoplegia, ataxia, or limb weakness. Here, we report the case of a female patient diagnosed with GBS, characterized only by involvement of the facial and trigeminal nerves, who was positive for serum anti-GQ1b antibodies secondary to Mycoplasma pneumoniae infection. The patient was treated with macrolide antibiotics and neurotrophic drugs, and her symptoms were significantly alleviated after 1 month. This case indicates a new clinical presentation of GBS and anti-GQ1b antibody syndrome, with a differential diagnosis of multiple cranial nerve damage, of which neurological physicians should be aware. Positive anti-GQ1b antibodies secondary to infection were observed in this case, and antibiotic treatment resulted in a favorable prognosis. The specific underlying mechanism requires further investigation.
ZKPoG: Accelerating WitGen-Incorporated End-to-End Zero-Knowledge Proof on GPU
Zero-Knowledge Proof (ZKP) is a cornerstone technology in privacy-preserving computing, addressing critical challenges in domains such as finance and healthcare by ensuring data confidentiality during computation. However, the high computational overhead of ZKP, particularly in proof generation and verification, limits its scalability and usability in real-world applications. Existing efforts to accelerate ZKP primarily focus on specific components, such as polynomial commitment schemes or elliptic curve operations, but fail to deliver an integrated, flexible, and efficient end-to-end solution that includes witness generation.
In this work, we present ZKPoG, a GPU-based ZKP acceleration platform that achieves full end-to-end optimization. ZKPoG addresses three key challenges: (1) designing a witness-generation-incorporated flow for Plonkish circuits, enabling seamless integration of the frontend and backend with GPU acceleration; (2) optimizing memory usage to accommodate large-scale circuits on affordable GPUs with limited memory; and (3) introducing an automated compiler for custom gates, simplifying adaptation to diverse applications. Experimental results on an NVIDIA RTX 4090 GPU show average end-to-end acceleration over state-of-the-art CPU implementations and average speedups over existing GPU-based approaches.
Low-carbohydrate diets reduce cardiovascular risk factor levels in patients with metabolic dysfunction-associated steatotic liver disease: a systematic review and meta-analysis of randomized controlled trials
Background: Low-carbohydrate diets (LCDs) are increasingly advocated for the treatment of metabolic dysfunction-associated steatotic liver disease (MASLD); however, their cardiovascular safety profile remains controversial. This analysis aims to evaluate the effects of LCDs on cardiovascular risk factors in MASLD patients.
Methods: PubMed, Cochrane Library, Web of Science, and Scopus were searched from inception to March 19, 2025. Two reviewers independently conducted data extraction. Meta-analyses were performed using fixed-effects or random-effects models, as determined by the heterogeneity of the included studies. Outcomes included blood pressure, glycemic markers, lipid profiles, and anthropometric indicators. Subgroup analyses explored carbohydrate thresholds (<26% vs. ≥26% of energy) and intervention durations (<24 weeks vs. ≥24 weeks).
Results: Sixteen RCTs comprising 1,056 participants were included. LCDs significantly reduced glycated hemoglobin (HbA1c: SMD, −0.27; 95% CI, −0.47 to −0.07), triglycerides (TG: SMD, −0.20; 95% CI, −0.34 to −0.06), body weight (SMD, −0.19; 95% CI, −0.36 to −0.03), and body mass index (BMI: SMD, −0.28; 95% CI, −0.42 to −0.14). Stricter carbohydrate restriction (<26% of energy) further improved systolic/diastolic blood pressure, homeostatic model assessment of insulin resistance (HOMA-IR), HbA1c, TG, body weight, BMI, and waist circumference. Short-term interventions (<24 weeks) lowered HbA1c, TG, and BMI.
Conclusion: This systematic review and meta-analysis found that LCDs are associated with improvements in cardiometabolic risk factors among patients with MASLD. Furthermore, short-term implementation of a strict carbohydrate-restricted dietary regimen may yield additional clinical benefits. Future research should prioritize standardized nutrient assessment, enhanced adherence strategies, and cardiovascular endpoint trials.
Systematic review registration: PROSPERO CRD42024603432; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024603432
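For reference, the fixed-effects pooling mentioned in the methods is conventionally the inverse-variance weighted mean of the study-level standardized mean differences (a standard textbook formulation, not taken from this paper), with each study weighted by the reciprocal of its variance:

\[
\hat{\theta}_{\mathrm{FE}} = \frac{\sum_{i} w_i\,\hat{\theta}_i}{\sum_{i} w_i},
\qquad w_i = \frac{1}{\widehat{\mathrm{Var}}(\hat{\theta}_i)},
\qquad
95\%\ \mathrm{CI} = \hat{\theta}_{\mathrm{FE}} \pm 1.96\,\sqrt{\frac{1}{\sum_i w_i}}.
\]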
Survey of Approaches and Techniques for Security Verification of Computer Systems
This paper surveys the landscape of security verification approaches and techniques for computer systems at various levels: from a software-application level all the way to the physical hardware level. Different existing projects are compared, based on the tools used and security aspects being examined. Since many systems require both hardware and software components to work together to provide the system's promised security protections, it is not sufficient to verify just the software levels or just the hardware levels in a mutually exclusive fashion. This survey especially highlights system levels that are verified by the different existing projects and presents to the readers the state of the art in hardware and software system security verification. Few approaches come close to providing full-system verification, and there is still much room for improvement.
