1,705 research outputs found
A New Deep-Neural-Network–Based Missing Transverse Momentum Estimator, and its Application to W Recoil
This dissertation presents the first Deep-Neural-Network–based missing transverse momentum (pTmiss) estimator, called “DeepMET”. It utilizes all reconstructed particles in an event as input and assigns an individual weight to each of them. The DeepMET estimator is the negative of the vector sum of the weighted transverse momenta of all input particles. Compared with the pTmiss estimators currently utilized by the CMS Collaboration, DeepMET is found to improve the pTmiss resolution by 10-20% and is more resilient to the effects of additional proton-proton interactions accompanying the interaction of interest. DeepMET is demonstrated to improve the resolution of the W-boson recoil measurement and to reduce the systematic uncertainties on the W mass measurement by a large fraction compared with other pTmiss estimators.
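The estimator definition quoted above lends itself to a compact numerical illustration. The sketch below is not the DeepMET implementation: the per-particle weights simply stand in for the network outputs, and NumPy is assumed.

```python
import numpy as np

def weighted_met(pt, phi, weights):
    """Illustrative weighted missing-transverse-momentum estimate.

    pt, phi, weights: one entry per reconstructed particle. The estimate is the
    negative vector sum of the weighted particle transverse momenta, as stated
    in the abstract; the weights here are placeholders for the network outputs.
    """
    px = weights * pt * np.cos(phi)
    py = weights * pt * np.sin(phi)
    met_x, met_y = -np.sum(px), -np.sum(py)
    return np.hypot(met_x, met_y), np.arctan2(met_y, met_x)
```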
Semi-supervised Graph Neural Networks for Pileup Noise Removal
The high instantaneous luminosity of the CERN Large Hadron Collider leads to
multiple proton-proton interactions in the same or nearby bunch crossings
(pileup). Advanced pileup mitigation algorithms are designed to remove this
noise from pileup particles and improve the performance of crucial physics
observables. This study implements a semi-supervised graph neural network for
particle-level pileup noise removal, by identifying individual particles
produced by pileup interactions. The graph neural network is first trained on
charged particles with known labels, which can be obtained from detector
measurements on data or simulation, and is then applied to neutral particles
for which such labels are missing. This semi-supervised approach does not
depend on ground-truth information from simulation and thus allows training to
be performed directly on experimental data. The performance of this approach is
found to be consistently better than that of widely used domain algorithms and
comparable to fully supervised training using simulation truth information. The
study serves as the first attempt at applying semi-supervised learning
techniques to pileup mitigation, and opens a new direction for fully
data-driven machine-learning pileup mitigation studies.
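As a rough sketch of the semi-supervised setup described above (not the study's actual code), the snippet below assumes PyTorch Geometric, a prebuilt particle graph, and pileup labels available only for charged particles; the masked loss is the part that mirrors the abstract.

```python
# Minimal sketch: supervise only on charged particles, infer on neutrals.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class PileupGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index).squeeze(-1)  # per-particle pileup score

def train_step(model, opt, data, charged_mask):
    """data.x: particle features, data.edge_index: graph edges,
    data.y: pileup labels, assumed known only for charged particles."""
    model.train()
    opt.zero_grad()
    scores = model(data.x, data.edge_index)
    # Only charged particles contribute to the loss; neutrals carry no labels.
    loss = F.binary_cross_entropy_with_logits(scores[charged_mask],
                                              data.y[charged_mask].float())
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def predict_neutrals(model, data, neutral_mask):
    """Inference on the unlabeled neutral particles."""
    model.eval()
    return torch.sigmoid(model(data.x, data.edge_index))[neutral_mask]
```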
How to Choose Interesting Points for Template Attacks?
Template attacks are widely accepted to be the most powerful side-channel attacks from an information-theoretic point of view. For template attacks, many papers have suggested a guideline for choosing interesting points that has not yet been proven: one should choose only one interesting point per clock cycle. Up to now, many different methods of choosing interesting points have been introduced. However, it is still unclear which approach leads to the best classification performance for template attacks. In this paper, we comprehensively evaluate and compare the classification performance of template attacks when different methods of choosing interesting points are used. Evaluation results show that the classification performance of template attacks differs noticeably depending on the method used to choose interesting points, and that the CPA-based and SOST-based methods lead to the best classification performance. Moreover, we find that some methods of choosing interesting points provide the same results under the same circumstances. Finally, we verify that the guideline for choosing interesting points for template attacks is correct by presenting a new way of conducting template attacks.
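Of the selection methods compared above, SOST (the sum of squared pairwise t-statistics) is simple enough to sketch. The snippet below is a generic, illustrative implementation rather than the paper's code; it assumes power traces grouped by an intermediate value and NumPy available.

```python
import numpy as np

def sost(traces, labels):
    """traces: (n_traces, n_samples) array; labels: per-trace class labels.
    Returns a per-sample SOST score; higher scores mark candidate interesting points."""
    classes = np.unique(labels)
    means, variances, counts = [], [], []
    for c in classes:
        group = traces[labels == c]
        means.append(group.mean(axis=0))
        variances.append(group.var(axis=0, ddof=1))
        counts.append(len(group))
    score = np.zeros(traces.shape[1])
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            t = (means[i] - means[j]) / np.sqrt(variances[i] / counts[i] +
                                                variances[j] / counts[j])
            score += t ** 2
    return score

# Following the guideline discussed above, one would then keep a single point
# per clock cycle, e.g. the sample with the highest SOST score in each cycle window.
```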
Towards Optimal Leakage Exploitation Rate in Template Attacks
Under the assumption that one has a reference device identical or similar to the target device, and is thus well capable of characterizing the power leakage of the target device, Template Attacks are widely accepted to be the most powerful side-channel attacks. However, the question of whether Template Attacks are really optimal in terms of the leakage exploitation rate remains open. In this paper, we present a negative answer to this crucial question by introducing a normalization process into classical Template Attacks. Specifically, our contributions are twofold. On the theoretical side, we prove that Normalized Template Attacks are better than Template Attacks in terms of the leakage exploitation rate; on the practical side, we evaluate the key-recovery efficiency of Normalized Template Attacks and Template Attacks in the same attacking scenario. Evaluation results show that, compared with Template Attacks, Normalized Template Attacks are more effective. We note that the computational cost of the normalization process is extremely low, and thus it is very easy to implement in practice. Therefore, the normalization process should be integrated into Template Attacks as a necessary step, so that one can better understand the practical threat posed by Template Attacks.
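The abstract does not describe the normalization itself, so the sketch below is purely illustrative: a classical Gaussian template attack with a placeholder per-sample standardization inserted where a normalization step would sit. The function names and the z-score choice are assumptions, not the paper's method.

```python
import numpy as np
from scipy.stats import multivariate_normal

def build_templates(profiling_traces, labels):
    """Classical templates: per-class mean vector and covariance matrix."""
    return {c: (profiling_traces[labels == c].mean(axis=0),
                np.cov(profiling_traces[labels == c], rowvar=False))
            for c in np.unique(labels)}

def normalize(traces, mu, sigma):
    # Placeholder normalization (per-sample z-score); not the paper's process.
    return (traces - mu) / sigma

def classify(trace, templates):
    """Return the class with the highest template log-likelihood for one attack trace."""
    return max(templates,
               key=lambda c: multivariate_normal.logpdf(trace,
                                                        mean=templates[c][0],
                                                        cov=templates[c][1],
                                                        allow_singular=True))

# Usage sketch: statistics from the profiling set are reused for attack traces.
# mu, sigma = profiling_traces.mean(axis=0), profiling_traces.std(axis=0)
# templates = build_templates(normalize(profiling_traces, mu, sigma), labels)
# guess = classify(normalize(attack_trace, mu, sigma), templates)
```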
Cryptosystems Resilient to Both Continual Key Leakages and Leakages from Hash Functions
Yoneyama et al. introduced the Leaky Random Oracle Model (LROM for short) at ProvSec 2008 in order to discuss the security (or insecurity) of cryptographic schemes that use hash functions as building blocks when leakages of hash-function input/output pairs occur. This kind of leakage occurs through various attacks caused by sloppy usage or implementation. Their results showed that such leakage may threaten the security of some cryptographic schemes. However, an important fact is that such attacks would leak not only input/output pairs of hash functions, but also the secret key. Therefore, LROM is rather limited in the sense that it considers leakages of hash-function input/output pairs alone, instead of also taking into consideration possible leakages of the secret key. On the other hand, many other leakage models mainly concentrate on leakages of the secret key and ignore leakages from hash functions when a cryptographic scheme exploiting hash functions is analyzed in these models. Examples show that these drawbacks of LROM and other leakage models may render insecure some schemes that are secure in either kind of leakage model alone.
In this paper, we present an augmented model of both LROM and such leakage models, in which both the secret key and hash-function input/output pairs can be leaked. Furthermore, the secret key can be leaked continually during the whole life cycle of a cryptographic scheme. Hence, our new model is more universal and stronger than LROM and several existing leakage models (e.g., the only-computation-leaks model and the bounded-memory-leakage model). As an application example, we also present a public-key encryption scheme that is provably IND-CCA secure in our new model.
On the Impacts of Mathematical Realization over Practical Security of Leakage Resilient Cryptographic Schemes
In the real world, transforming an abstract, generic cryptographic scheme into an actual physical implementation usually involves two processes: mathematical realization at the algorithmic level and physical realization at the implementation level. In the former process, the abstract, generic cryptographic scheme is transformed into an exact, specific mathematical scheme, while in the latter the output of the mathematical realization is transformed into a physical cryptographic module that runs as software, hardware, or a combination of both. In the black-box model (i.e., the leakage-free setting), a cryptographic scheme can be mathematically realized without affecting its theoretical security as long as the mathematical components meet the required cryptographic properties. However, up to now, no previous work has formally shown whether one can mathematically realize a leakage-resilient cryptographic scheme in existing ways without affecting its practical security.
Our results give a negative answer to this important question by introducing attacks against several kinds of mathematical realization of a practical leakage-resilient cryptographic scheme. Our results show that there may exist a big gap between the theoretical and practical tolerable leakage rates of the same leakage-resilient cryptographic scheme if the mathematical components in the mathematical realization are not provably secure in the leakage setting. Therefore, on the one hand, we suggest that all (practical) leakage-resilient cryptographic schemes should at least come with a mathematical realization under which their practical security can be guaranteed. On the other hand, our results inspire cryptographers to design advanced leakage-resilient cryptographic schemes whose practical security is independent of the specific details of their mathematical realization.
Impact of cell wall adsorption behaviours on phenolic stability under air drying of blackberry with and without contact ultrasound assistance
The physicochemical properties of blackberry cell walls under air drying with and without contact ultrasonication were analysed, and their ability to bind soluble phenolics was evaluated. Compared to air drying alone, ultrasound promoted cell wall shrinkage and reduced their specific surface area and water-binding capacity. Meanwhile, in-process ultrasound further increased the amount of water-soluble pectin (WSP) and decreased protopectin. After drying, the cell walls of ultrasound-dried samples contained 11.6% less protopectin (PP) than air-dried samples. Pectins in ultrasound-dried samples were also more aggregated, with a reduced branching degree of rhamnogalacturonan-I (RG-I). Most of these ultrasonic modifications of blackberry cell walls hindered their acquisition of phenolics. The equilibrium adsorption capacities of cell walls from blackberries ultrasound-dried for 1 h were 33.5% (for catechin) and 21.8% (for phloretic acid) lower than those of their counterparts from samples air-dried for 8 h. Although the soluble phenolics adsorbed by dried blackberry cell walls were more thermally stable than those adsorbed by fresh blackberry cell walls, the overall protection provided by the cell walls was still attenuated by drying owing to the decline in adsorption ability. Moreover, the higher retention of soluble phenolics in ultrasound-dried samples is believed to be attributable to the shortened thermal-drying time rather than to cell wall–phenolic interactions. These findings provide an in-depth understanding of the effect of ultrasound drying on phenolic stability.
Prospective study on the overuse of blood test-guided antibiotics on patients with acute diarrhea in primary hospitals of China
BACKGROUND: Overuse of antibiotics in the treatment of infectious diseases has become a central focus of public health over the years. The aim of this study was to provide an up-to-date evaluation of blood test-guided antibiotic use in patients with acute diarrhea in primary hospitals of China. MATERIALS AND METHODS: A cross-sectional survey was conducted on 330 patients with acute diarrhea in Shanghai, People’s Republic of China, from March 2013 to February 2016. These patients were treated with or without antibiotics based on the results of their blood tests, including examinations of C-reactive protein (CRP), white blood cells (WBC), and the percentage of neutrophils (Neu%). The infection types, which included bacterial, viral, and combination diarrhea, were determined by microbiological culture methods. Antibiotics used in non-bacterial diarrhea patients were considered misused and overused. RESULTS: There were significant overall differences in clinical characteristics and blood tests between patients with diarrhea caused by a bacterial infection and patients with other types of infection. The patients were divided into four grading groups (0–3) according to the number of positive results among the three blood tests (CRP, WBC, and Neu%). The misuse rates of antibiotics in groups 0–3 were 81.3%, 71.1%, 72.4%, and 64.9%, respectively. CONCLUSION: In this prospective study, the current diagnostic criteria (CRP, WBC, and Neu%) based on blood tests were not reliable for diagnosing bacterial diarrhea or guiding antibiotic use. To limit antibiotic overuse, rapid and accurate differentiation of bacterial diarrhea from other types of diarrhea is pivotal.
Conversational Recommender System and Large Language Model Are Made for Each Other in E-commerce Pre-sales Dialogue
E-commerce pre-sales dialogue aims to understand and elicit user needs and
preferences for the items they are seeking so as to provide appropriate
recommendations. Conversational recommender systems (CRSs) learn user
representations and provide accurate recommendations based on dialogue context,
but rely on external knowledge. Large language models (LLMs) generate responses
that mimic pre-sales dialogues after fine-tuning, but lack domain-specific
knowledge for accurate recommendations. Intuitively, the strengths of LLM and
CRS in E-commerce pre-sales dialogues are complementary, yet no previous work
has explored this. This paper investigates the effectiveness of combining LLM
and CRS in E-commerce pre-sales dialogues, proposing two collaboration methods:
CRS assisting LLM and LLM assisting CRS. We conduct extensive experiments on a
real-world dataset of E-commerce pre-sales dialogues. We analyze the impact of
the two collaborative approaches with two CRSs and two LLMs on four tasks of
E-commerce pre-sales dialogue. We find that collaborations between CRS and LLM
can be very effective in some cases.
Comment: EMNLP 2023 Findings
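As an illustration of what a "CRS assisting LLM" pipeline could look like (the abstract does not specify the mechanism), the sketch below has a CRS rank candidate items from the dialogue context and injects the top items into the LLM prompt. The names crs_rank and llm_generate, and the prompt format, are hypothetical.

```python
from typing import Callable, Sequence

def crs_assisted_response(dialogue: Sequence[str],
                          crs_rank: Callable[[Sequence[str]], list[str]],
                          llm_generate: Callable[[str], str],
                          top_k: int = 3) -> str:
    # CRS: rank candidate items from the dialogue context (assumed interface).
    candidates = crs_rank(dialogue)[:top_k]
    # LLM: generate a fluent pre-sales reply grounded in the CRS candidates.
    prompt = (
        "You are a pre-sales assistant.\n"
        "Dialogue so far:\n" + "\n".join(dialogue) + "\n"
        "Candidate items from the recommender: " + ", ".join(candidates) + "\n"
        "Reply to the user and recommend the most suitable candidate."
    )
    return llm_generate(prompt)
```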
Improving Factual Consistency of Text Summarization by Adversarially Decoupling Comprehension and Embellishment Abilities of LLMs
Despite the recent progress in text summarization made by large language
models (LLMs), they often generate summaries that are factually inconsistent
with original articles, known as "hallucinations" in text generation. Unlike
previous small models (e.g., BART, T5), current LLMs make fewer silly mistakes
but more sophisticated ones, such as imposing cause-and-effect relations,
adding false details, and overgeneralizing. These hallucinations are difficult
to detect with traditional methods, which poses great challenges for improving the
factual consistency of text summarization. In this paper, we propose an
adversarially DEcoupling method to disentangle the Comprehension and
EmbellishmeNT abilities of LLMs (DECENT). Furthermore, we adopt a probing-based
efficient training scheme to compensate for LLMs' insufficient sensitivity to
true versus false statements during training. In this way, LLMs are less prone
to confuse embellishment with comprehension; they can thus follow instructions
more accurately and are better able to distinguish hallucinations. Experimental
results show that DECENT significantly improves the reliability of text
summarization based on LLMs.
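The abstract gives few mechanical details, so the sketch below only illustrates the general idea of probing an LLM's sensitivity to true versus false content: a linear probe over frozen hidden states classifies faithfulness. It is an illustrative stand-in, not the DECENT training procedure, and the labels are assumed to come from external faithfulness annotation.

```python
import torch
import torch.nn as nn

class FaithfulnessProbe(nn.Module):
    """Linear probe on pooled hidden states from a frozen LLM (assumed input)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, hidden_dim) pooled sentence representations
        return self.linear(hidden_states).squeeze(-1)

def probe_loss(probe, hidden_states, faithful_labels):
    # faithful_labels: 1 = faithful summary sentence, 0 = embellished/hallucinated
    logits = probe(hidden_states)
    return nn.functional.binary_cross_entropy_with_logits(logits,
                                                          faithful_labels.float())
```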