75 research outputs found
Identify a Specified Fish species by the Co-occurrence and Confusion Matrix
Invasive species threatening native species has become a global problem: invasive species may carry pathogenic microorganisms, reduce biodiversity, and even endanger human health. In this study, we therefore propose a co-occurrence matrix method for texture analysis of three fish species. We capture the body pattern of each fish and make a judgment based on the confusion matrix. Simulation results show that the three species can be reasonably distinguished from one another. The 3rd International Conference on Industrial Application Engineering 2015, March 28-31, 2015, Kitakyushu International Conference Center, Kitakyushu, Japan.
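The abstract above names a gray-level co-occurrence matrix for texture analysis; a minimal NumPy sketch of that idea (not the authors' implementation, and with a toy image rather than real fish-pattern data) might look like this, computing the normalized co-occurrence matrix for one pixel offset and two classic Haralick-style descriptors:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to a joint probability

def texture_features(p):
    """Two classic Haralick descriptors often used for texture classification."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)  # penalizes distant gray-level pairs
    energy = np.sum(p ** 2)              # high for uniform textures
    return contrast, energy

# Toy 2-level "body pattern" for illustration only.
img = np.array([[0, 1], [0, 1]])
p = glcm(img, levels=2)
```

In a classification pipeline, such features would be computed per species and per offset, and a confusion matrix would then tally predicted versus true species.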
A bioinformatics approach to the identification of hub genes of Huo Xin Pill (HXP) for the treatment of acute myocardial infarction
Purpose: To apply bioinformatics for the identification of potential genes associated with Huo Xin Pill (HXP), a traditional Chinese medicine (TCM) used for the treatment of acute myocardial infarction (AMI). Methods: The mouse AMI expression profile dataset GSE153485 and the HXP-treated mouse AMI expression profile dataset GSE147365 were downloaded from the GEO database. R software was then used to screen differentially-expressed genes in AMI and in HXP-treated AMI. Gene Ontology (GO) enrichment analysis, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis, Venn diagrams, and protein-protein interaction (PPI) analysis were carried out on the hub genes linked to the effect of HXP on AMI. Results: Six hub genes were identified. Based on the differential analysis of the sham and AMI groups, GSE153485 and GSE147365 had 840 and 2116 differentially-expressed genes, respectively (p < 0.05). The GO and KEGG analyses revealed enrichments in actin filament organization, membrane repolarization, and regulation of the actin cytoskeleton. Differential analysis of the use of HXP on AMI showed that GSE147365 had 380 differentially-expressed genes, comprising 96 up-regulated genes and 284 down-regulated genes (p < 0.05). Thirteen potential target genes were obtained using a Venn diagram, and 6 key acting genes were obtained by final screening. Conclusion: Six hub genes linked to HXP and AMI were identified using bioinformatics: Egr2, Tubb2a, Col4a2, Cnn2, Lmna, and Col4a1. This study provides a partial experimental basis for the use of HXP in the treatment of AMI and offers new potential targets for AMI therapy.
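The screening-plus-Venn step described above (the study itself used R on the GEO datasets) can be sketched in Python with entirely hypothetical toy numbers: filter genes by p-value and fold-change cutoffs in each dataset, then intersect the two sets:

```python
def screen_degs(results, p_cut=0.05, lfc_cut=1.0):
    """Keep genes with p below the cutoff and |log2 fold change| above it."""
    return {g for g, (lfc, p) in results.items()
            if p < p_cut and abs(lfc) >= lfc_cut}

# Toy (gene -> (log2FC, p-value)) tables for illustration only;
# these values are NOT from GSE153485 or GSE147365.
ami = {"Egr2": (2.1, 0.01), "Actb": (0.1, 0.80), "Lmna": (-1.5, 0.03)}
hxp = {"Egr2": (-1.8, 0.02), "Lmna": (1.2, 0.04), "Gapdh": (0.2, 0.60)}

shared = screen_degs(ami) & screen_degs(hxp)  # Venn-style intersection
```

The real analysis would follow this intersection with PPI network construction and hub-gene ranking to arrive at the six reported genes.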
An Analysis of the Principles in Formulation and Implementation of University Constitution from the Perspective of the Spirit of Law
Against the background of university constitution construction, the formulation and implementation of a university constitution involve three areas that need further improvement: its legal effect, its regulatory mechanism, and the power and legal relationships inside and outside the university. This paper analyzes the principles of the university constitution from three aspects: constitution formulation, protection of rights and interests, and conditions for procedural implementation.
Study on Image Segmentation in CT Metal Artifacts
Computed Tomography (CT) is one of the most important means of medical diagnosis, and CT image quality can be seriously degraded by metal artifacts. Using CT image segmentation to extract the region of interest is a classic, difficult problem in this field. According to the principle of CT reconstruction, compensating the projection of the metal part after medical image segmentation can improve image quality. This paper first introduces the causes of metal artifacts and the principle of CT image reconstruction. It then discusses simple and iterative threshold segmentation for handling metal artifacts. Experimental comparison shows that the proposed iterative method achieves a better segmentation effect. Finally, the prospects of medical image segmentation are outlined to indicate future research work. The 2nd International Conference on Intelligent Systems and Image Processing 2014 (ICISIP2014), September 26-29, 2014, Nishinippon Institute of Technology, Kitakyushu, Japan.
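The iterative threshold segmentation mentioned above typically refers to the classic mean-of-means (ISODATA-style) scheme; a minimal NumPy sketch, with a toy 1-D intensity array standing in for a CT slice, could look like this:

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative mean-of-means threshold selection (ISODATA-style).

    Start from the global mean, split pixels into two classes, and
    move the threshold to the midpoint of the two class means until
    it stabilizes.
    """
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        if lo.size == 0 or hi.size == 0:   # degenerate split: stop here
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Toy intensities: soft tissue around 10-12 HU-like units, metal around 200+.
t = iterative_threshold(np.array([10.0, 12.0, 200.0, 210.0]))
```

In the metal-artifact setting, pixels above the converged threshold would be treated as metal and their projections compensated before reconstruction.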
Projection-Based AR for Hearing Parent-Deaf Child Communication
Deaf infants born to hearing parents are at risk of language deprivation due to lack of sign language fluency and subpar parent-child communication. We present a projection-based Augmented Reality (AR) prototype designed to improve parent-child communication and American Sign Language (ASL) acquisition. Our system aims to non-intrusively augment play episodes by projecting just-in-time and context-aware ASL equivalents extracted from nursery rhymes being sung by parents. This paper presents the initial implementation of the prototype.
Renmin University of China at TRECVID 2022: Improving Video Search by Feature Fusion and Negation Understanding
We summarize our TRECVID 2022 Ad-hoc Video Search (AVS) experiments. Our
solution is built with two new techniques, namely Lightweight Attentional
Feature Fusion (LAFF) for combining diverse visual / textual features and
Bidirectional Negation Learning (BNL) for addressing queries that contain
negation cues. In particular, LAFF performs feature fusion at both early and
late stages and at both text and video ends to exploit diverse (off-the-shelf)
features. Compared to multi-head self attention, LAFF is much more compact yet
more effective. Its attentional weights can also be used for selecting fewer
features, with the retrieval performance mostly preserved. BNL trains a
negation-aware video retrieval model by minimizing a bidirectionally
constrained loss per triplet, where a triplet consists of a given training
video, its original description and a partially negated description. For video
feature extraction, we use pre-trained CLIP, BLIP, BEiT, ResNeXt-101 and irCSN.
As for text features, we adopt bag-of-words, word2vec, CLIP and BLIP. Our
training data consists of MSR-VTT, TGIF and VATEX that were used in our
previous participation. In addition, we automatically caption the V3C1
collection for pre-training. The 2022 edition of the TRECVID benchmark has
again been a fruitful participation for the RUCMM team. Our best run, with an
infAP of 0.262, ranks second among all participating teams.
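The attentional feature fusion described above can be illustrated with a small NumPy sketch in the spirit of LAFF (this is not the authors' implementation; the projection matrices and attention vector are random stand-ins for learned parameters, and the toy feature vectors stand in for CLIP/BLIP/ResNeXt embeddings): each feature is projected to a common dimension, scored by a shared attention vector, and fused as a softmax-weighted sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def laff_fuse(feats, proj, w):
    """Attention-weighted fusion of heterogeneous feature vectors.

    feats : list of 1-D arrays with differing dimensions
    proj  : per-feature projection matrices into a common space
    w     : attention vector scoring each projected feature
    """
    z = np.stack([p @ f for f, p in zip(feats, proj)])  # (n, d) common space
    scores = z @ w                                      # one scalar per feature
    a = np.exp(scores - scores.max())
    a /= a.sum()                                        # softmax weights
    return a @ z                                        # weighted sum, shape (d,)

# Toy features of different dimensionalities, fused into a 5-D space.
dims, d = [4, 6, 8], 5
feats = [rng.standard_normal(k) for k in dims]
proj = [rng.standard_normal((d, k)) for k in dims]
fused = laff_fuse(feats, proj, rng.standard_normal(d))
```

The softmax weights make the fusion compact (one scalar per feature) and directly inspectable, which matches the paper's point that attention weights can be reused to prune less useful features.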
Beyond Control: Exploring Novel File System Objects for Data-Only Attacks on Linux Systems
The widespread deployment of control-flow integrity has propelled non-control
data attacks into the mainstream. In the domain of OS kernel exploits, by
corrupting critical non-control data, local attackers can directly gain root
access or privilege escalation without hijacking the control flow. As a result,
OS kernels have been restricting the availability of such non-control data.
This forces attackers to continue to search for more exploitable non-control
data in OS kernels. However, discovering unknown non-control data can be
daunting because they are often tied heavily to semantics and lack universal
patterns.
We make two contributions in this paper: (1) discover critical non-control
objects in the file subsystem and (2) analyze their exploitability. This work
represents the first study, with minimal domain knowledge, to
semi-automatically discover and evaluate exploitable non-control data within
the file subsystem of the Linux kernel. Our solution utilizes a custom analysis
and testing framework that statically and dynamically identifies promising
candidate objects. Furthermore, we categorize these discovered objects into
types that are suitable for various exploit strategies, including a novel
strategy necessary to overcome the defense that isolates many of these objects.
These objects have the advantage of being exploitable without requiring KASLR,
thus making the exploits simpler and more reliable. We use 18 real-world CVEs
to evaluate the exploitability of the file system objects using various exploit
strategies. We develop 10 end-to-end exploits using a subset of CVEs against
the kernel with all state-of-the-art mitigations enabled. Comment: 14 pages, in submission to the 31st ACM Conference on Computer and
Communications Security (CCS), 202
CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial Reading Comprehension
We present a Chinese judicial reading comprehension (CJRC) dataset which
contains approximately 10K documents and almost 50K questions with answers. The
documents come from judgment documents and the questions are annotated by law
experts. The CJRC dataset can help researchers extract elements by reading
comprehension technology. Element extraction is an important task in the legal
field. However, it is difficult to predefine the element types completely due
to the diversity of document types and causes of action. By contrast, machine
reading comprehension technology can quickly extract elements by answering
various questions from the long document. We build two strong baseline models
based on BERT and BiDAF. The experimental results show that there is still
considerable room for improvement compared to human annotators.
TPTU-v2: Boosting Task Planning and Tool Usage of Large Language Model-based Agents in Real-world Systems
Large Language Models (LLMs) have demonstrated proficiency in addressing
tasks that require a blend of task planning and the use of external
tools, such as APIs. However, real-world complex systems present three
prevalent challenges concerning task planning and tool usage: (1) The real
system usually has a vast array of APIs, so it is impossible to feed the
descriptions of all APIs to the prompt of LLMs as the token length is limited;
(2) the real system is designed for handling complex tasks, and the base LLMs
can hardly plan a correct sub-task order and API-calling order for such tasks;
(3) Similar semantics and functionalities among APIs in real systems create
challenges for both LLMs and even humans in distinguishing between them. In
response, this paper introduces a comprehensive framework aimed at enhancing
the Task Planning and Tool Usage (TPTU) abilities of LLM-based agents operating
within real-world systems. Our framework comprises three key components
designed to address these challenges: (1) the API Retriever selects the most
pertinent APIs for the user task among the extensive array available; (2) LLM
Finetuner tunes a base LLM so that the finetuned LLM can be more capable for
task planning and API calling; (3) the Demo Selector adaptively retrieves
different demonstrations related to hard-to-distinguish APIs, which is further
used for in-context learning to boost the final performance. We validate our
methods using a real-world commercial system as well as an open-sourced
academic dataset, and the outcomes clearly showcase the efficacy of each
individual component as well as the integrated framework.
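The API Retriever component described above is commonly realized as embedding-based similarity search; a minimal NumPy sketch of that idea (the API names and embeddings below are hypothetical, not from the paper's system) ranks API description embeddings by cosine similarity to the query embedding and returns the top-k:

```python
import numpy as np

def retrieve_apis(query_vec, api_vecs, k=2):
    """Rank APIs by cosine similarity to the query embedding; return top-k indices."""
    q = query_vec / np.linalg.norm(query_vec)
    A = api_vecs / np.linalg.norm(api_vecs, axis=1, keepdims=True)
    sims = A @ q                       # cosine similarity per API
    return np.argsort(-sims)[:k]      # indices of the k closest APIs

# Toy 2-D embeddings for three hypothetical APIs.
api_vecs = np.array([[1.0, 0.0],      # e.g. "search_orders"
                     [0.0, 1.0],      # e.g. "send_email"
                     [0.9, 0.1]])     # e.g. "list_orders" (near-duplicate)
top = retrieve_apis(np.array([1.0, 0.0]), api_vecs, k=2)
```

Only the descriptions of the retrieved APIs are then placed in the LLM prompt, which is how the framework sidesteps the token-length limit of feeding in every API.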