A Review: Peanut Fatty Acids Determination Using Hyper Spectroscopy Imaging and Its Significance on Food Quality and Safety
This paper reviews the determination of peanut fatty acids using Hyperspectral Imaging (HSI) as a non-destructive method for monitoring food quality and safety. The key spectral regions are the visible and near-infrared wavelengths. Little has been published on determining peanut fatty acids with HSI as an efficient and effective method for evaluating oil quality and safety. Fortunately, HSI has been observed to perform well in determining food quality and safety (Smith B. 2012), and it has gained wide recognition as a fast, non-destructive analysis and assessment method for a wide range of food products. Because the literature shows that HSI is not yet commonly or widely used, this paper aims to highlight its use for improving the quality and safety of peanut oil and its products through the determination of peanut fatty acids. The authors conclude that, despite current limitations in affordability, maintenance, and the complexity of building optical, imaging, and spectroscopic models for food quality problems, HSI remains the best of the currently existing methods and can indicate how to better meet market and consumer demands for high food quality and safety, and thus for better health. Key words: Hyperspectral imaging, Peanut (Arachis hypogaea), oil, Oleic and linoleic fatty acids, Food quality, Food safety
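As a rough illustration of the kind of pipeline such reviews describe, the sketch below regresses per-kernel mean vis-NIR spectra against reference fatty-acid measurements. Partial least squares (PLS) regression is a common choice in this chemometrics literature, but the review does not prescribe a specific model; all names, shapes, and values here are illustrative assumptions.

```python
# Toy sketch: estimating oleic-acid content from hyperspectral data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_kernels, n_bands = 100, 224                     # e.g. bands spanning ~400-1000 nm
mean_spectra = rng.random((n_kernels, n_bands))   # one mean spectrum per peanut kernel
oleic_pct = rng.uniform(40, 80, n_kernels)        # reference chemistry (toy values)

model = PLSRegression(n_components=10)            # latent components: a tuning choice
model.fit(mean_spectra, oleic_pct)
predicted = model.predict(mean_spectra)           # non-destructive estimate per kernel
```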
DeepDPM: Dynamic Population Mapping via Deep Neural Network
Dynamic high resolution data on human population distribution is of great
importance for a wide spectrum of activities and real-life applications, but is
too difficult and expensive to obtain directly. Therefore, generating
fine-scaled population distributions from coarse population data is of great
significance. However, there are three major challenges: 1) the complexity in
spatial relations between high and low resolution population; 2) the dependence
of population distributions on other external information; 3) the difficulty in
retrieving temporal distribution patterns. In this paper, we first propose the
idea of generating dynamic population distributions in full time series; we
then design dynamic population mapping via deep neural network (DeepDPM), a
model that describes both spatial and temporal patterns using coarse data and
point-of-interest information. In DeepDPM, we utilize a super-resolution
convolutional neural network (SRCNN)-based model to directly map coarse data
into higher-resolution data, and a time-embedded long short-term memory model
to effectively capture periodicity and smooth the finer-scaled results
from the previous static SRCNN model. We perform extensive experiments on a
real-life mobile dataset collected from Shanghai. Our results demonstrate that
DeepDPM outperforms previous state-of-the-art methods and a suite of
frequently used data-mining approaches. Moreover, DeepDPM overcomes the
time-dimension limitation of previous works, so that dynamic predictions can
be obtained for all time slots of the day.
Comment: AAAI201
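As a rough sketch of the SRCNN-style component described above, the PyTorch snippet below upsamples a coarse population grid and refines it with three convolutional layers, concatenating a point-of-interest channel as external information. The layer sizes, the POI input format, and the non-negativity clamp are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PopulationSRCNN(nn.Module):
    def __init__(self, in_channels=2):  # population channel + POI channel (assumed)
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),           # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),            # reconstruction
        )

    def forward(self, coarse_pop, poi, scale=4):
        # Bicubic upsampling to the target resolution, then convolutional refinement.
        up = F.interpolate(coarse_pop, scale_factor=scale,
                           mode="bicubic", align_corners=False)
        x = torch.cat([up, poi], dim=1)
        return F.relu(self.body(x))  # population counts are non-negative

# Example: a 1x1x16x16 coarse grid refined to 64x64 with a matching POI map.
model = PopulationSRCNN()
fine = model(torch.rand(1, 1, 16, 16), torch.rand(1, 1, 64, 64))  # -> (1, 1, 64, 64)
```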
Japanese Legal Scholars and Political Reformation During the Late Qing Dynasty
In this essay, I examine Sino-Japanese relations during the ten years immediately preceding the Xinhai Revolution from three closely related perspectives. The first is the frequency with which the elites of the two countries travelled to the other country. The second is the translation of editorials on the Qing reforms written by the Japanese elite and published in Chinese newspapers. The third is that many of the exchange students in Japan returned to China, where they played important roles in the social reformation occurring at the end of the Qing Dynasty. I also closely examine the influence of Japanese legal scholars on the Qing political reforms, as these scholars were extremely important figures in the cultural exchange between the two countries and furthered the transformation of modern Chinese thought and institutions.
BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection
Graph anomaly detection (GAD) has gained increasing attention in recent years
due to its critical application in a wide range of domains, such as social
networks, financial risk management, and traffic analysis. Existing GAD methods
can be categorized into node and edge anomaly detection models based on the
type of graph objects being detected. However, these methods typically treat
node and edge anomalies as separate tasks, overlooking their associations and
frequent co-occurrences in real-world graphs. As a result, they fail to
leverage the complementary information provided by node and edge anomalies for
mutual detection. Additionally, state-of-the-art GAD methods, such as CoLA and
SL-GAD, heavily rely on negative pair sampling in contrastive learning, which
incurs high computational costs, hindering their scalability to large graphs.
To address these limitations, we propose a novel unified graph anomaly
detection framework based on bootstrapped self-supervised learning (named
BOURNE). We extract a subgraph (graph view) centered on each target node as
node context and transform it into a dual hypergraph (hypergraph view) as edge
context. These views are encoded using graph and hypergraph neural networks to
capture the representations of nodes, edges, and their associated contexts. By
swapping the context embeddings between nodes and edges and measuring the
agreement in the embedding space, we enable the mutual detection of node and
edge anomalies. Furthermore, we adopt a bootstrapped training strategy that
eliminates the need for negative sampling, enabling BOURNE to handle large
graphs efficiently. Extensive experiments conducted on six benchmark datasets
demonstrate the superior effectiveness and efficiency of BOURNE in detecting
both node and edge anomalies.
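To make the negative-free training idea concrete, here is a simplified BYOL-style sketch: an online encoder is trained to agree with a slowly updated (EMA) target encoder, so no negative pairs are ever sampled. The linear encoder, feature sizes, and momentum value are stand-in assumptions; BOURNE itself uses graph and hypergraph neural networks over the swapped node and edge contexts.

```python
import copy
import torch
import torch.nn.functional as F

def bootstrap_loss(online_out, target_out):
    # Negative cosine similarity between L2-normalized embeddings; the target
    # branch is detached so only the online encoder receives gradients.
    return (2 - 2 * F.cosine_similarity(online_out, target_out.detach(), dim=-1)).mean()

@torch.no_grad()
def ema_update(online, target, momentum=0.99):
    # Target parameters track the online parameters via an exponential moving average.
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1 - momentum)

online_enc = torch.nn.Linear(64, 32)      # stands in for the graph/hypergraph encoder
target_enc = copy.deepcopy(online_enc)    # EMA copy, never back-propagated
node_ctx = torch.randn(128, 64)           # node-context features (toy data)

loss = bootstrap_loss(online_enc(node_ctx), target_enc(node_ctx))
loss.backward()
ema_update(online_enc, target_enc)
```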
Model and Data Agreement for Learning with Noisy Labels
Learning with noisy labels is a vital topic for practical deep learning as
models should be robust to noisy open-world datasets in the wild. The
state-of-the-art noisy label learning approach JoCoR fails when faced with a
large ratio of noisy labels. Moreover, selecting small-loss samples can also
cause error accumulation: once noisy samples are mistakenly selected as
small-loss samples, they are more likely to be selected again. In this paper,
we try to deal with error accumulation in noisy label learning from both model
and data perspectives. We introduce mean point ensemble to utilize a more
robust loss function and more information from unselected samples to reduce
error accumulation from the model perspective. Furthermore, as flipped images
have the same semantic meaning as the original images, we select small-loss
samples according to the loss values of the flipped images instead of the
original ones to reduce error accumulation from the data perspective. Extensive
experiments on CIFAR-10, CIFAR-100, and large-scale Clothing1M show that our
method outperforms state-of-the-art noisy label learning methods with different
levels of label noise. Our method can also be seamlessly combined with other
noisy label learning methods to further improve their performance and
generalize well to other tasks. The code is available at
https://github.com/zyh-uaiaaaa/MDA-noisy-label-learning
Comment: Accepted by AAAI 2023 Workshop
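A minimal sketch of the flip-based small-loss selection described above: per-sample losses are computed on horizontally flipped copies of the images, and the lowest-loss fraction of the batch is kept as presumably clean data. The keep ratio and batch-level selection are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def select_small_loss(model, images, labels, keep_ratio=0.7):
    """Select the small-loss subset of a batch using flipped-image losses."""
    with torch.no_grad():
        flipped = torch.flip(images, dims=[3])       # horizontal flip of (N, C, H, W)
        losses = F.cross_entropy(model(flipped), labels, reduction="none")
    k = int(keep_ratio * len(losses))
    keep = torch.topk(-losses, k).indices            # indices of the k smallest losses
    return images[keep], labels[keep]
```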
Towards Personalized Privacy: User-Governed Data Contribution for Federated Recommendation
Federated recommender systems (FedRecs) have gained significant attention for
their potential to protect users' privacy by keeping private user data local
and only communicating model parameters/gradients to the server. Nevertheless,
the existing FedRec architecture assumes that all users have the same
0-privacy budget, i.e., that they upload no data to the server, thus
overlooking users who are less concerned about privacy and are willing to
upload data in exchange for a better recommendation service. To bridge this
gap, this paper explores a user-governed data contribution federated
recommendation architecture in which users are free to control whether they
share data and what proportion of their data they share with the server. To
this end, this paper
presents a cloud-device collaborative graph neural network federated
recommendation model, named CDCGNNFed. It trains user-centric ego graphs
locally and, on the server, high-order graphs built from user-shared data, in
a collaborative manner via contrastive learning. Furthermore, a graph mending
strategy is utilized to predict missing links in the graph on the server, thus
leveraging the capabilities of graph neural networks over high-order graphs.
Extensive experiments were conducted on two public datasets, and the results
demonstrate the effectiveness of the proposed method.
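As a toy illustration of user-governed data contribution, the sketch below lets each user choose a personal share ratio: that fraction of their interactions is uploaded to build the server-side high-order graph, while the remainder stays on-device for the local ego-graph model. The data layout and function name are hypothetical.

```python
import random

def split_contribution(interactions, share_ratio):
    """Split one user's interaction list into (uploaded, kept-local) parts."""
    shuffled = random.sample(interactions, len(interactions))  # shuffled copy
    cut = int(share_ratio * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

# User A shares 50% of their items; user B shares nothing (0-privacy budget).
shared_a, local_a = split_contribution(["i1", "i2", "i3", "i4"], 0.5)
shared_b, local_b = split_contribution(["i5", "i6"], 0.0)
```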
Gradient Attention Balance Network: Mitigating Face Recognition Racial Bias via Gradient Attention
Although face recognition has made impressive progress in recent years, the
racial bias of recognition systems is often overlooked in the pursuit of high
accuracy. Previous work found that face recognition networks focus on
different facial regions for different races, and that the sensitive regions
of darker-skinned people are much smaller. Based on this finding, we propose a
new de-bias method based on gradient attention, called Gradient Attention
Balance Network (GABN). Specifically, we use the gradient attention map (GAM)
of the face recognition network to track the sensitive facial regions and make
the GAMs of different races tend to be consistent through adversarial learning.
This method mitigates the bias by making the network focus on similar facial
regions. In addition, we also use masks to erase the Top-N sensitive facial
regions, forcing the network to allocate its attention to a larger facial
region. This method expands the sensitive regions of darker-skinned people
and further reduces the gap between their GAMs and those of Caucasians.
Extensive experiments show that GABN successfully mitigates racial
bias in face recognition and learns more balanced performance for people of
different races.
Comment: Accepted by CVPR 2023 Workshop
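A minimal sketch of a gradient attention map in the spirit of Grad-CAM, which is one natural reading of the GAM described above: feature-map activations are weighted by the spatially averaged gradients of the recognition score. The target layer, normalization, and hook-free formulation are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def gradient_attention_map(features, score):
    """features: (N, C, H, W) activations captured from an intermediate layer
    (e.g. via a forward hook), with requires_grad; score: scalar recognition score."""
    grads = torch.autograd.grad(score, features, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)     # per-channel importance
    gam = F.relu((weights * features).sum(dim=1))      # (N, H, W) attention map
    return gam / (gam.amax(dim=(1, 2), keepdim=True) + 1e-8)  # normalize to [0, 1]
```

The per-race GAMs produced this way could then be pushed toward agreement with an adversarial loss, as the abstract describes.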
On-Device Recommender Systems: A Comprehensive Survey
Recommender systems have been widely deployed in various real-world
applications to help users identify content of interest from massive amounts of
information. Traditional recommender systems work by collecting user-item
interaction data in a cloud-based data center and training a centralized model
to perform the recommendation service. However, such cloud-based recommender
systems (CloudRSs) inevitably suffer from excessive resource consumption,
response latency, as well as privacy and security risks concerning both data
and models. Recently, driven by the advances in storage, communication, and
computation capabilities of edge devices, there has been a shift of focus from
CloudRSs to on-device recommender systems (DeviceRSs), which leverage the
capabilities of edge devices to minimize centralized data storage requirements,
reduce the response latency caused by communication overheads, and enhance user
privacy and security by localizing data processing and model training. Despite
the rapid rise of DeviceRSs, there is a clear absence of timely literature
reviews that systematically introduce, categorize and contrast these methods.
To bridge this gap, we aim to provide a comprehensive survey of DeviceRSs,
covering three main aspects: (1) the deployment and inference of DeviceRSs;
(2) the training and update of DeviceRSs; and (3) the security and privacy of
DeviceRSs.
Furthermore, we provide a fine-grained and systematic taxonomy of the methods
involved in each aspect, followed by a discussion regarding challenges and
future research directions. This is the first comprehensive survey on DeviceRSs
that covers a spectrum of tasks to fit various needs. We believe this survey
will help readers effectively grasp the current research status in this field,
equip them with relevant technical foundations, and stimulate new research
ideas for developing DeviceRSs.
Single Cells Are Spatial Tokens: Transformers for Spatial Transcriptomic Data Imputation
Spatially resolved transcriptomics brings exciting breakthroughs to
single-cell analysis by providing physical locations along with gene
expression. However, as the cost of this extremely high spatial resolution,
cellular-level spatial transcriptomic data suffer significantly from missing
values. While a standard solution is to perform imputation on the missing
values, most existing methods either overlook spatial information or only
incorporate localized spatial context without the ability to capture long-range
spatial information. Using multi-head self-attention mechanisms and positional
encoding, transformer models can readily grasp the relationship between tokens
and encode location information. In this paper, by treating single cells as
spatial tokens, we study how to leverage transformers to facilitate spatial
transcriptomics imputation. In particular, we investigate two key questions,
and by answering them we present a transformer-based
imputation framework, SpaFormer, for cellular-level spatial transcriptomic
data. Extensive experiments demonstrate that SpaFormer outperforms existing
state-of-the-art imputation algorithms on three large-scale datasets while
maintaining superior computational efficiency.
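A simplified sketch of the "single cells as spatial tokens" idea: each cell's gene-expression vector becomes one token, an embedding of its (x, y) coordinate is added, and a transformer encoder reconstructs the full expression matrix, imputing the missing entries. The learned positional projection, model sizes, and reconstruction head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CellTokenImputer(nn.Module):
    def __init__(self, n_genes=2000, d_model=128):
        super().__init__()
        self.embed = nn.Linear(n_genes, d_model)     # one token per cell
        self.pos = nn.Linear(2, d_model)             # learned encoding of (x, y)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_genes)      # reconstruct full expression

    def forward(self, expr, coords):
        # expr: (B, n_cells, n_genes), zeros at missing entries
        # coords: (B, n_cells, 2), each cell's spatial location
        tokens = self.embed(expr) + self.pos(coords)
        return self.head(self.encoder(tokens))

model = CellTokenImputer()
imputed = model(torch.rand(1, 256, 2000), torch.rand(1, 256, 2))  # (1, 256, 2000)
```

Self-attention over all cell tokens is what lets such a model capture long-range spatial dependencies that localized smoothing methods miss.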