Generating large non-singular matrices over an arbitrary field with blocks of full rank
This note describes a technique for generating large non-singular matrices
with blocks of full rank. Our motivation for constructing such matrices arises
in the white-box implementation of cryptographic algorithms with S-boxes.
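The note's actual construction is not reproduced in this abstract, but the underlying check is concrete: a matrix is non-singular over a finite field exactly when it has full rank there. As an illustrative sketch (the helper names `rank_gf2` and `random_invertible_gf2` are hypothetical, and GF(2) is assumed for simplicity), one cheap way to guarantee invertibility is to multiply random unit-triangular matrices, whose determinant is 1:

```python
import numpy as np

def rank_gf2(m):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        # eliminate the column everywhere else (XOR = addition in GF(2))
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

def random_invertible_gf2(n, rng):
    """A product of random unit-triangular matrices is always invertible,
    since each factor has determinant 1 over GF(2)."""
    lower = np.tril(rng.integers(0, 2, (n, n)), -1) + np.eye(n, dtype=int)
    upper = np.triu(rng.integers(0, 2, (n, n)), 1) + np.eye(n, dtype=int)
    return (lower @ upper) % 2

rng = np.random.default_rng(0)
m = random_invertible_gf2(8, rng)  # 8x8, full rank by construction
```

Guaranteeing that every *block* of such a matrix also has full rank is the harder problem the note addresses; the sketch above only covers the non-singularity of the whole matrix.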
Photoelectrocatalytic Degradation of Humic Acids Using Codoped TiO2
Cu/N codoped TiO2 films on Ti substrates were successfully prepared by an electrochemical method with the goal of enhancing the photoelectrocatalytic activity under visible light. The morphology and composition of the Cu/N codoped films were characterized using field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), energy dispersive X-ray spectroscopy (EDX), and UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS). The photocatalytic activities of the Cu/N codoped TiO2 films were evaluated by the degradation of humic acid (HA). Under visible light alone, the Cu/N codoped TiO2 films reached a degradation efficiency of 41.5% after 210 minutes of treatment, showing that Cu2+ and NH4+ codoping significantly improved the photocatalytic efficiency under visible light. When a +5.0 V anodic bias potential and visible light were applied simultaneously, the degradation efficiency of HA over the Cu/N codoped TiO2 films improved markedly to 93.5% after 210 minutes of treatment.
Can GPT models Follow Human Summarization Guidelines? Evaluating ChatGPT and GPT-4 for Dialogue Summarization
This study explores the capabilities of prompt-driven Large Language Models
(LLMs) like ChatGPT and GPT-4 in adhering to human guidelines for dialogue
summarization. Experiments employed DialogSum (English social conversations)
and DECODA (French call center interactions), testing various prompts,
including prompts from existing literature and prompts derived from human
summarization guidelines, as well as a two-step prompt approach. Our findings
indicate that
GPT models often produce lengthy summaries and deviate from human summarization
guidelines. However, using human guidelines as an intermediate step shows
promise, outperforming direct word-length constraint prompts in some cases. The
results reveal that GPT models exhibit unique stylistic tendencies in their
summaries. While BERTScores for GPT outputs did not decrease dramatically,
suggesting semantic similarity to human references and to the outputs of
specialised pre-trained models, ROUGE scores reveal grammatical and lexical
disparities between GPT-generated and human-written summaries. These findings
shed light on the capabilities and limitations of GPT models in following human
instructions for dialogue summarization.
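The lexical gap that ROUGE captures is easy to see in miniature. As a hedged sketch (the study presumably used a full ROUGE implementation; this is only the unigram F1 variant, and the two example summaries are invented), ROUGE-1 rewards exact word overlap, so a paraphrased GPT-style summary scores well below an identical one:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap ROUGE-1 F1 between two summaries."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Invented examples: same meaning, different surface form
human = "caller asks to cancel the contract"
gpt = "the caller would like assistance with cancelling their current contract"
score = rouge1_f(gpt, human)  # well below 1.0 despite semantic similarity
```

This illustrates why BERTScore, which compares contextual embeddings rather than surface tokens, can stay high while ROUGE drops for stylistically divergent GPT summaries.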
Evaluating Emotional Nuances in Dialogue Summarization
Automatic dialogue summarization is a well-established task that aims to
identify the most important content from human conversations to create a short
textual summary. Despite recent progress in the field, we show that most of the
research has focused on summarizing the factual information, leaving aside the
affective content, which can yet convey useful information to analyse, monitor,
or support human interactions. In this paper, we propose and evaluate a set of
measures to quantify how much emotion is preserved in dialogue summaries.
Results show that state-of-the-art summarization models do not preserve the
emotional content well in their summaries. We also show that by reducing the
training set to only emotional dialogues, the emotional content is better
preserved in the generated summaries, while conserving the most salient factual
information.
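The abstract does not spell out the proposed measures, but one simple family of emotion-preservation scores compares the emotion mentions in the source dialogue against those surviving in the summary. As an illustrative sketch only (the toy lexicon, the helper names, and the recall formulation are all assumptions; the paper's measures would rely on a real emotion lexicon or classifier):

```python
from collections import Counter

# Hypothetical toy lexicon mapping words to coarse emotion categories
EMOTION_WORDS = {
    "happy": "joy", "glad": "joy", "angry": "anger",
    "furious": "anger", "sad": "sadness", "worried": "fear",
}

def emotion_profile(text):
    """Count emotion-category mentions in a text."""
    words = text.lower().split()
    return Counter(EMOTION_WORDS[w] for w in words if w in EMOTION_WORDS)

def emotion_recall(dialogue, summary):
    """Fraction of the dialogue's emotion mentions preserved in the summary."""
    d, s = emotion_profile(dialogue), emotion_profile(summary)
    total = sum(d.values())
    if total == 0:
        return 1.0  # nothing emotional to preserve
    kept = sum(min(d[e], s[e]) for e in d)
    return kept / total

dlg = "I am so angry about the delay and worried it will happen again"
summ = "Customer is angry about a delay"
# The summary keeps the anger but drops the worry: recall is 0.5
```

A factual-only summary would score near zero on such a measure even while retaining all the salient content, which is exactly the gap the paper highlights.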
Learn From Zoom: Decoupled Supervised Contrastive Learning For WCE Image Classification
Accurate lesion classification in Wireless Capsule Endoscopy (WCE) images is
vital for early diagnosis and treatment of gastrointestinal (GI) cancers.
However, this task is confronted with challenges like tiny lesions and
background interference. Additionally, WCE images exhibit higher intra-class
variance and inter-class similarities, adding complexity. To tackle these
challenges, we propose Decoupled Supervised Contrastive Learning for WCE image
classification, learning robust representations from zoomed-in WCE images
generated by a Saliency Augmentor. Specifically, we use uniformly down-sampled
WCE images as anchors and WCE images from the same class, especially their
zoomed-in images, as positives. This approach empowers the Feature Extractor to
capture rich representations from various views of the same image, facilitated
by Decoupled Supervised Contrastive Learning. Training a linear Classifier on
these representations within 10 epochs yields an impressive 92.01% overall
accuracy, surpassing the prior state-of-the-art (SOTA) by 0.72% on a blend of
two publicly accessible WCE datasets. Code is available at:
https://github.com/Qiukunpeng/DSCL.
Comment: Accepted by ICASSP202
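The "decoupled" part of decoupled supervised contrastive learning typically refers to removing the positive pairs from the loss denominator, so the pull toward positives and the push from negatives are optimized independently. The authors' exact formulation is in their repository; the following is only a dense numpy sketch of that general idea (function name, loss shape, and temperature are assumptions, and a real implementation would use batched GPU tensors):

```python
import numpy as np

def decoupled_supcon_loss(feats, labels, tau=0.1):
    """Decoupled supervised contrastive loss (sketch): for each anchor,
    pull same-class embeddings together, while the denominator contains
    only anchor-negative similarities (the 'decoupling')."""
    # L2-normalize embeddings so dot products are cosine similarities
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau
    total, count = 0.0, 0
    n = len(labels)
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        neg = [j for j in range(n) if labels[j] != labels[i]]
        if not pos or not neg:
            continue
        # decoupling: positives are excluded from the denominator
        log_denom = np.log(np.sum(np.exp(sim[i, neg])))
        for j in pos:
            total += -sim[i, j] + log_denom
            count += 1
    return total / count

# Well-clustered features (same class aligned, classes orthogonal)
# should score lower than features mixed across classes.
good = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
bad = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
labels = [0, 0, 1, 1]
```

In the paper's setup, the positives for a down-sampled anchor would include the zoomed-in crops of same-class images produced by the Saliency Augmentor, which is what drives the extractor toward scale-robust representations.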
Deep Domain-Adversarial Image Generation for Domain Generalisation
Machine learning models typically suffer from the domain shift problem when
trained on a source dataset and evaluated on a target dataset of different
distribution. To overcome this problem, domain generalisation (DG) methods aim
to leverage data from multiple source domains so that a trained model can
generalise to unseen domains. In this paper, we propose a novel DG approach
based on \emph{Deep Domain-Adversarial Image Generation} (DDAIG). Specifically,
DDAIG consists of three components, namely a label classifier, a domain
classifier and a domain transformation network (DoTNet). The goal for DoTNet is
to map the source training data to unseen domains. This is achieved by having a
learning objective formulated to ensure that the generated data can be
correctly classified by the label classifier while fooling the domain
classifier. By augmenting the source training data with the generated unseen
domain data, we can make the label classifier more robust to unknown domain
changes. Extensive experiments on four DG datasets demonstrate the
effectiveness of our approach.
Comment: 8 pages
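The DoTNet objective described above combines two opposing signals: the perturbed image must still be correctly labeled, yet confuse the domain classifier. As a minimal sketch of that trade-off (function names, the subtraction form, and the weighting factor are assumptions; the paper defines the precise losses and how gradients reach the transformation network):

```python
import numpy as np

def cross_entropy(probs, target):
    """Negative log-likelihood of the target class."""
    return -np.log(probs[target] + 1e-12)

def ddaig_objective(label_probs, domain_probs, y, d, lam=0.3):
    """DDAIG-style objective for the transformation network (sketch):
    minimize label loss on the perturbed image while *maximizing* the
    domain classifier's loss on it (hence the minus sign)."""
    return cross_entropy(label_probs, y) - lam * cross_entropy(domain_probs, d)

# A perturbation that keeps the label confident (0.9 on the true class)
# but fools the domain classifier (0.1 on the true domain) scores better
# (lower) than one that degrades the label while leaving the domain easy.
good = ddaig_objective(np.array([0.9, 0.1]), np.array([0.1, 0.9]), y=0, d=0)
bad = ddaig_objective(np.array([0.5, 0.5]), np.array([0.9, 0.1]), y=0, d=0)
```

Training the label classifier on both the original and the perturbed (pseudo-unseen-domain) images is what gives the robustness to domain shift reported in the experiments.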