In Vivo Molecular Imaging in Retinal Disease
There is an urgent need in medicine for early diagnosis, at a stage when effective treatment can still prevent irreversible tissue damage. The special structure of the eye provides a unique opportunity for noninvasive, light-based imaging of the ocular fundus vasculature. To detect endothelial injury at the early and still reversible stage of adhesion molecule upregulation, novel imaging agents that target retinal endothelial molecules have been generated. In vivo molecular imaging has great potential to impact medicine by detecting and screening for disease at early stages, identifying the extent of disease, selecting disease- and patient-specific treatment, applying directed or targeted therapy, and measuring molecule-specific effects of treatment. Current preclinical findings and advances in instrumentation, such as endoscopes and microcatheters, suggest that these molecular imaging modalities have numerous clinical applications and will be translated into clinical use in the near future.
The Therapeutic Effect of Cytokine-Induced Killer Cells on Pancreatic Cancer Enhanced by Dendritic Cells Pulsed with K-Ras Mutant Peptide
Objective. This study investigates the ability of cytokine-induced killer (CIK) cells cocultured with K-ras-pulsed dendritic cells (K-ras-DCs) to kill the pancreatic cancer cell lines PANC-1 (K-ras+) and SW1990 (K-ras−). Methods. CIKs were induced with IFN-γ, IL-2, and anti-CD3 monoclonal antibody; K-ras-DCCIKs were obtained by cocultivating K-ras-DCs with CIKs. Surface markers were examined by FACS. IFN-γ, IL-12, CCL19, and CCL22 were measured by ELISA. Proliferation of the various CIK preparations was tested via 3H-TdR incorporation, and the killing activities of K-ras-DCCIKs and CTLs were examined with 125I-UdR. Results. CD3+CD56+ and CD3+CD8+ cells were highly represented among K-ras-DCCIKs, and IFN-γ, IL-12, CCL19, and CCL22 levels in their supernatant were significantly higher than those of DCCIKs and CIKs. The killing rate of K-ras-DCCIKs was greater than those of CIKs and CTLs, while CTLs induced by K-ras-DCs inhibited only the PANC-1 cells. Conclusions. K-ras-DCs enhance CIK proliferation and increase their killing effect on pancreatic cancer cells. The CTLs induced by K-ras-DCs inhibit only PANC-1 cells. K-ras-DCCIKs likewise show specific inhibition of PANC-1 cells; their tumor suppression is almost the same as that of the CTLs, and their overall tumor-inhibitory efficiency is higher.
Vascular Adhesion Protein 1 in the Eye
Semicarbazide-sensitive amine oxidase/vascular adhesion protein-1 (SSAO/VAP-1), a dual-function molecule with adhesive and enzymatic properties, is expressed on the surface of vascular endothelial cells of mammals. It also exists as a soluble form (sVAP-1), which is implicated in oxidative stress via its enzymatic activity and can serve as a prognostic biomarker. Recent evidence suggests that VAP-1, through its involvement in the recruitment of leukocytes to sites of inflammation, is an important therapeutic target for several inflammation-related ocular diseases, such as uveitis, age-related macular degeneration (AMD), and diabetic retinopathy (DR). Furthermore, VAP-1 plays an important role in the pathogenesis of conjunctival inflammatory diseases such as pyogenic granulomas and in the progression of conjunctival lymphoma, and may therefore be an alternative therapeutic target in ocular diseases. In vivo imaging of inflammation using VAP-1 as a target molecule is a novel approach with potential for early detection and characterization of inflammatory diseases. This paper reviews the critical roles of VAP-1 in ophthalmological diseases, which may provide a novel research direction or a potent therapeutic strategy.
DELAN: Dual-Level Alignment for Vision-and-Language Navigation by Cross-Modal Contrastive Learning
Vision-and-Language Navigation (VLN) requires an agent to navigate in unseen
environments by following natural language instructions. For task completion, the
agent needs to align and integrate various navigation modalities, including
instruction, observation and navigation history. Existing works primarily
concentrate on cross-modal attention at the fusion stage to achieve this
objective. Nevertheless, modality features generated by disparate uni-encoders
reside in their own spaces, leading to a decline in the quality of cross-modal
fusion and decision-making. To address this problem, we propose a Dual-levEL AligNment
(DELAN) framework by cross-modal contrastive learning. This framework is
designed to align various navigation-related modalities before fusion, thereby
enhancing cross-modal interaction and action decision-making. Specifically, we
divide the pre-fusion alignment into dual levels: instruction-history level and
landmark-observation level according to their semantic correlations. We also
reconstruct a dual-level instruction for adaptation to the dual-level
alignment. As the training signals for pre-fusion alignment are extremely
limited, self-supervised contrastive learning strategies are employed to
enforce the matching between different modalities. Our approach seamlessly
integrates with the majority of existing models, resulting in improved
navigation performance on various VLN benchmarks, including R2R, R4R, RxR and
CVDN.
Comment: Accepted by LREC-COLING 2024
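The pre-fusion alignment described above is, at its core, a symmetric cross-modal contrastive objective. Below is a minimal PyTorch sketch of such an InfoNCE loss; the function name, feature shapes, and temperature are illustrative assumptions, not DELAN's actual code:

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(text_feats: torch.Tensor,
                        vis_feats: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired (text, vision) features.

    text_feats, vis_feats: (batch, dim) pooled features from the two
    uni-modal encoders; matching pairs share the same batch index.
    """
    text = F.normalize(text_feats, dim=-1)
    vis = F.normalize(vis_feats, dim=-1)
    logits = text @ vis.t() / temperature           # (batch, batch) similarity matrix
    targets = torch.arange(text.size(0), device=text.device)
    # Pull the diagonal (matched pairs) together and push the off-diagonal
    # entries apart, in both text-to-vision and vision-to-text directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

In a DELAN-style setup, the same loss shape would be applied at both levels (instruction-history and landmark-observation) before the fusion module sees either modality.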
A new strategy for controlling invasive weeds: selecting valuable native plants to defeat them
To explore replacement control of the invasive weed Ipomoea cairica, we studied the competitive effects of two valuable natives, Pueraria lobata and Paederia scandens, on the growth and photosynthetic characteristics of I. cairica in pot and field experiments. When I. cairica was planted in pots with P. lobata or P. scandens, its total biomass decreased by 68.7% and 45.8%, and its stem length by 33.3% and 34.1%, respectively. The two natives depressed growth of the weed through strong effects on its photosynthetic characteristics, including suppression of leaf biomass and of the abundance of the CO2-fixing enzyme RUBISCO. The field experiment demonstrated that sowing seeds of P. lobata or P. scandens in plots where the weed had been largely cleared produced 11.8-fold and 2.5-fold as much leaf biomass of the two natives, respectively, as of the weed. Replacement control by valuable native species is potentially a feasible and sustainable means of suppressing I. cairica.
SoMeLVLM: A Large Vision Language Model for Social Media Processing
The growth of social media, characterized by its multimodal nature, has led
to the emergence of diverse phenomena and challenges, which call for an
effective approach that can uniformly solve automated tasks. Powerful Large
Vision Language Models make it possible to handle a variety of tasks
simultaneously, but even with carefully designed prompting methods, the general
domain models often fall short in aligning with the unique speaking style and
context of social media tasks. In this paper, we introduce a Large Vision
Language Model for Social Media Processing (SoMeLVLM), which is a cognitive
framework equipped with five key capabilities: knowledge & comprehension,
application, analysis, evaluation, and creation. SoMeLVLM is
designed to understand and generate realistic social media behavior. We have
developed a 654k multimodal social media instruction-tuning dataset to support
our cognitive framework and fine-tune our model. Our experiments demonstrate
that SoMeLVLM achieves state-of-the-art performance in multiple social media
tasks. Further analysis shows its significant advantages over baselines in
terms of cognitive abilities.
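For context, multimodal instruction-tuning corpora of this kind are typically stored as (image, instruction, response) records. A hypothetical example record follows; the field names and values are assumptions for illustration, not taken from the SoMeLVLM release:

```python
# One hypothetical record from a multimodal instruction-tuning dataset.
# The schema is illustrative; the actual SoMeLVLM data format may differ.
record = {
    "image": "post_00123.jpg",        # attached social media image
    "capability": "analysis",         # one of the five cognitive capabilities
    "instruction": "Identify the stance this post takes toward the event shown.",
    "response": "The post is supportive: the caption praises the organizers.",
}
```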
GPT-4V(ision) as A Social Media Analysis Engine
Recent research has offered insights into the extraordinary capabilities of
Large Multimodal Models (LMMs) in various general vision and language tasks.
There is growing interest in how LMMs perform in more specialized domains.
Social media content, inherently multimodal, blends text, images, videos, and
sometimes audio. Understanding social multimedia content remains a challenging
problem for contemporary machine learning frameworks. In this paper, we explore
GPT-4V(ision)'s capabilities for social multimedia analysis. We select five
representative tasks, including sentiment analysis, hate speech detection, fake
news identification, demographic inference, and political ideology detection,
to evaluate GPT-4V. Our investigation begins with a preliminary quantitative
analysis for each task using existing benchmark datasets, followed by a careful
review of the results and a selection of qualitative samples that illustrate
GPT-4V's potential in understanding multimodal social media content. GPT-4V
demonstrates remarkable efficacy in these tasks, showcasing strengths such as
joint understanding of image-text pairs, contextual and cultural awareness, and
extensive commonsense knowledge. Despite the overall impressive capacity of
GPT-4V in the social media domain, there remain notable challenges. GPT-4V
struggles with tasks involving multilingual social multimedia comprehension and
has difficulties in generalizing to the latest trends in social media.
Additionally, it exhibits a tendency to generate erroneous information in the
context of evolving celebrity and politician knowledge, reflecting the known
hallucination problem. The insights gleaned from our findings underscore a
promising future for LMMs in enhancing our comprehension of social media
content and its users through the analysis of multimodal information.
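As a concrete illustration of this kind of evaluation, the sketch below sends one image-text post to a vision-capable chat model through the OpenAI Python client; the model name, prompt wording, and single-word output format are assumptions, not the paper's exact protocol:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_sentiment(image_url: str, caption: str) -> str:
    """Ask a vision-capable chat model for the sentiment of a social media post."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model; the name is an assumption
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Post text: {caption}\n"
                         "Classify the overall sentiment of this post "
                         "(positive, negative, or neutral). Answer with one word."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()
```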
Valley: Video Assistant with Large Language model Enhanced abilitY
Large language models (LLMs), with their remarkable conversational
capabilities, have demonstrated impressive performance across various
applications and have emerged as formidable AI assistants. This raises an
intuitive question: can we harness the power of LLMs to build
multimodal AI assistants for visual applications? Recently, several multi-modal
models have been developed for this purpose. They typically pre-train an
adaptation module to align the semantics of the vision encoder and language
model, followed by fine-tuning on instruction-following data. However, despite
the success of this pipeline in image and language understanding, its
effectiveness in joint video and language understanding has not been widely
explored. In this paper, we aim to develop a novel multi-modal foundation model
capable of comprehending video, image, and language within a general framework.
To achieve this goal, we introduce Valley, a Video Assistant with Large
Language model Enhanced abilitY. Valley consists of an LLM, a temporal
modeling module, a visual encoder, and a simple projection module designed to
bridge the visual and textual modalities. To empower Valley with video comprehension and
instruction-following capabilities, we construct a video instruction dataset
and adopt a two-stage tuning procedure to train it. Specifically, we employ
ChatGPT to facilitate the construction of task-oriented conversation data
encompassing various tasks, including multi-shot captions, long video
descriptions, action recognition, causal relationship inference, etc.
Subsequently, we adopt a pre-training-then-instruction-tuning pipeline to align
visual and textual modalities and improve the instruction-following capability
of Valley. Qualitative experiments demonstrate that Valley has the potential to
function as a highly effective video assistant that can make complex video
understanding scenarios easy.
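The described bridge — per-frame visual features passed through a temporal modeling module and then projected into the LLM's embedding space — can be sketched as follows. The module choices and dimensions are assumptions for illustration, not Valley's released code:

```python
import torch
import torch.nn as nn

class ValleyStyleConnector(nn.Module):
    """Sketch of a Valley-style bridge: per-frame visual features are
    temporally aggregated, then projected into the LLM embedding space.
    The transformer encoder and all dimensions are assumptions."""

    def __init__(self, vis_dim: int = 1024, llm_dim: int = 4096, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=vis_dim, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.proj = nn.Linear(vis_dim, llm_dim)  # the "simple projection module"

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, n_frames, vis_dim) from a frozen visual encoder
        fused = self.temporal(frame_feats)       # temporal modeling across frames
        return self.proj(fused)                  # (batch, n_frames, llm_dim) visual tokens
```

The projected outputs would then be prepended to the text token embeddings before being fed to the LLM.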
Asymmetric Co-Training with Explainable Cell Graph Ensembling for Histopathological Image Classification
Convolutional neural networks excel in histopathological image
classification, yet their pixel-level focus hampers explainability. Conversely,
emerging graph convolutional networks (GCNs) spotlight cell-level features and medical
implications. However, limited by their shallowness and suboptimal use of
high-dimensional pixel data, GCNs underperform in multi-class histopathological
image classification. To make full use of pixel-level and cell-level features
dynamically, we propose an asymmetric co-training framework combining a deep
graph convolutional network and a convolutional neural network for multi-class
histopathological image classification. To improve the explainability of the
entire framework by embedding morphological and topological distribution of
cells, we build a 14-layer deep graph convolutional network to handle cell
graph data. To further exploit the dynamic interactions between
pixel-level and cell-level information, we also design a co-training strategy
to integrate the two asymmetric branches. Notably, we collect a private
clinically acquired dataset termed LUAD7C, including seven subtypes of lung
adenocarcinoma, which is rare and more challenging. We evaluated our approach
on the private LUAD7C and public colorectal cancer datasets, showcasing its
superior performance, explainability, and generalizability in multi-class
histopathological image classification.
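To make the co-training idea concrete, the sketch below combines the two branches' predictions with a supervised term plus a symmetric consistency term; the loss form and weighting are assumptions for illustration, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def co_training_step(cnn_logits: torch.Tensor,
                     gcn_logits: torch.Tensor,
                     labels: torch.Tensor,
                     consistency_weight: float = 0.5) -> torch.Tensor:
    """One co-training loss for paired predictions on the same image.

    cnn_logits: (batch, n_classes) from the pixel-level CNN branch.
    gcn_logits: (batch, n_classes) from the cell-graph GCN branch.
    """
    # Each branch is supervised on its own view of the slide.
    supervised = (F.cross_entropy(cnn_logits, labels) +
                  F.cross_entropy(gcn_logits, labels))
    # A symmetric KL term encourages the branches to agree, letting
    # each one teach the other on the features it captures best.
    log_p_cnn = F.log_softmax(cnn_logits, dim=-1)
    log_p_gcn = F.log_softmax(gcn_logits, dim=-1)
    consistency = 0.5 * (F.kl_div(log_p_cnn, log_p_gcn.exp(), reduction="batchmean") +
                         F.kl_div(log_p_gcn, log_p_cnn.exp(), reduction="batchmean"))
    return supervised + consistency_weight * consistency
```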