147 research outputs found
On-chip integrated process-programmable sub-10 nm thick molecular devices switching between photomultiplication and memristive behaviour
Molecular devices constructed from sub-10 nm thick molecular layers are promising candidates for a new generation of integratable nanoelectronic applications. Here, we report integrated molecular devices based on ultrathin copper phthalocyanine/fullerene hybrid layers with microtubular soft-contacts, which exhibit process-programmable functionality switching between photomultiplication and memristive behaviour. The local electric field at the interface between the polymer bottom electrode and the enclosed molecular channels modulates the ionic-electronic charge interaction and hence determines the transition of the device function. When ions are not driven into the molecular channels at a low interface electric field, photogenerated holes are trapped as electronic space charges, resulting in photomultiplication with a high external quantum efficiency. Once mobile ions are polarized and accumulated as ionic space charges in the molecular channels at a high interface electric field, the molecular devices show ferroelectric-like memristive switching with remarkable resistive ON/OFF and rectification ratios.
Indoor 3D NLOS VLP using a binocular camera and a single LED
In this paper, we propose a non-line-of-sight (NLOS) visible light positioning (VLP) system using a binocular camera and a single light-emitting diode (LED) to realize 3D positioning at an arbitrary posture. The proposed system overcomes the challenges of shadowing/blocking of the line-of-sight (LOS) transmission paths between transmitters and receivers (Rxs), and the need for a sufficient number of LEDs to be captured within the limited field of view of the camera-based Rx. We have developed an experimental testbed to evaluate the performance of the proposed system, with results showing that the lowest average error and root mean square error (RMSE) are 26.10 and 31.02 cm, respectively, following an error compensation algorithm. In addition, a label-based enhanced VLP scheme is proposed for the first time, which greatly improves system performance, with average error and RMSE values of 7.31 and 7.74 cm and a 90th percentile accuracy of <11 cm.
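A binocular (stereo) Rx of this kind recovers depth by triangulation from the disparity between the two views. The sketch below illustrates only the standard rectified-stereo relation Z = f·B/d, not the authors' positioning pipeline; the focal length, baseline, and disparity values are illustrative.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1000 px focal length, 10 cm baseline, 40 px disparity
z = stereo_depth(1000.0, 0.10, 40.0)  # -> 2.5 m
```

With depth known for the single LED (and, in the enhanced scheme, for its label), the 3D position can then be resolved without multiple LEDs in view.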
ECPC-IDS: A benchmark endometrial cancer PET/CT image dataset for evaluation of semantic segmentation and detection of hypermetabolic regions
Endometrial cancer is one of the most common tumors in the female reproductive system and the third most common gynecological malignancy causing death, after ovarian and cervical cancer. Early diagnosis can significantly improve the 5-year survival rate of patients. With the development of artificial intelligence, computer-assisted diagnosis plays an increasingly important role in improving the accuracy and objectivity of diagnosis and in reducing the workload of doctors. However, the absence of publicly available endometrial cancer image datasets restricts the application of computer-assisted diagnostic techniques. In this paper, a publicly available Endometrial Cancer PET/CT Image Dataset for Evaluation of Semantic Segmentation and Detection of Hypermetabolic Regions (ECPC-IDS) is published. Specifically, the segmentation section includes PET and CT images, with a total of 7159 images in multiple formats. To prove the effectiveness of segmentation methods on ECPC-IDS, five classical deep learning semantic segmentation methods are selected to test the image segmentation task. The object detection section also includes PET and CT images, with a total of 3579 images and XML files with annotation information. Six deep learning methods are selected for experiments on the detection task. This study conducts extensive experiments using deep learning-based semantic segmentation and object detection methods to demonstrate the differences between various methods on ECPC-IDS. As far as we know, this is the first publicly available dataset of endometrial cancer with a large number of images, including a large amount of information required for image segmentation and object detection. ECPC-IDS can aid researchers in exploring new algorithms to enhance computer-assisted technology, greatly benefiting both clinical doctors and patients.
Comment: 14 pages, 6 figures
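Segmentation methods on a benchmark like this are typically compared with an overlap metric such as the Dice coefficient. The abstract does not name the metric used; the following is an illustrative sketch on toy flattened binary masks, not code from the dataset paper.

```python
def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two flattened binary masks:
    2*|P ∩ T| / (|P| + |T|), with eps guarding the empty-mask case."""
    inter = sum(p & t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

# Toy masks standing in for a predicted and an annotated PET/CT slice
pred = [0, 1, 1, 0, 0, 1, 1, 0]
gt   = [0, 1, 1, 0, 0, 1, 0, 0]
score = dice_score(pred, gt)   # 2*3 / (4+3) ≈ 0.857
```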
A Cortical Folding Pattern-Guided Model of Intrinsic Functional Brain Networks in Emotion Processing
A growing number of studies demonstrate that emotion processing in humans is realized by interaction within or among the large-scale intrinsic functional brain networks. Identifying those meaningful intrinsic functional networks based on task-based functional magnetic resonance imaging (task fMRI) with specific emotional stimuli and responses, and exploring the underlying functional working mechanisms of interregional neural communication within the intrinsic functional networks, are thus of great importance for understanding the neural basis of emotion processing. In this paper, we propose a novel cortical folding pattern-guided model of intrinsic networks in emotion processing: gyri serve as global functional connection centers that perform interregional neural communication among distinct regions via long-distance dense axonal fibers, and sulci serve as local functional units that directly communicate with neighboring gyri via short-distance fibers and indirectly communicate with other distinct regions via the neighboring gyri. We test the proposed model by adopting a computational framework of dictionary learning and sparse representation of emotion task fMRI data of 68 subjects in the publicly released Human Connectome Project. The proposed model provides novel insights into the functional mechanisms of emotion processing.
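The dictionary learning and sparse representation framework mentioned above models each fMRI time series as a sparse combination of learned temporal atoms. The snippet below is a minimal, illustrative sparse-coding step via greedy matching pursuit against a fixed toy dictionary; it is not the authors' pipeline, and the dictionary and signal are synthetic.

```python
import numpy as np

def sparse_code_mp(x, D, n_nonzero=2):
    """Greedy matching pursuit: approximate signal x as a sparse
    combination of the (unit-norm) columns of dictionary D."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        scores = D.T @ residual            # correlation with each atom
        k = int(np.argmax(np.abs(scores)))
        coeffs[k] += scores[k]             # atoms assumed unit-norm
        residual -= scores[k] * D[:, k]    # shrink residual each step
    return coeffs

# Toy dictionary of 3 unit-norm temporal atoms and a 2-sparse signal
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 3))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 0] - 1.0 * D[:, 2]
a = sparse_code_mp(x, D, n_nonzero=4)   # reconstruction D @ a approximates x
```

In the full framework the dictionary itself is also learned from the data; here it is fixed only to keep the sketch short.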
MediViSTA-SAM: Zero-shot Medical Video Analysis with Spatio-temporal SAM Adaptation
In recent years, the Segment Anything Model (SAM) has attracted considerable attention as a foundation model well known for its robust generalization capabilities across various downstream tasks. However, SAM does not exhibit satisfactory performance in medical image analysis. In this study, we introduce MediViSTA-SAM, the first approach to adapt SAM to medical video segmentation. Given video data, the MediViSTA spatio-temporal adapter captures long- and short-range temporal dependencies through a cross-frame attention mechanism, constraining each frame to take the immediately preceding frame as a reference while also modeling spatial information effectively. Additionally, it incorporates multi-scale fusion by employing a U-shaped encoder and a modified mask decoder to handle objects of varying sizes. To evaluate our approach, extensive experiments were conducted against state-of-the-art (SOTA) methods, assessing generalization on multi-vendor in-house echocardiography datasets. The results highlight the accuracy and effectiveness of our network in medical video segmentation.
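Cross-frame attention of the kind described above can be shown in miniature: queries come from the current frame's tokens, while keys and values come from the preceding frame. The NumPy sketch below is a single-head toy without learned projections, illustrating the mechanism only; it is not the MediViSTA-SAM implementation, and the frame tensors are synthetic.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(curr, prev):
    """Each token of the current frame attends to the tokens of the
    previous frame: queries from `curr`, keys/values from `prev`."""
    d = curr.shape[-1]
    scores = curr @ prev.T / np.sqrt(d)       # (n_curr, n_prev)
    return softmax(scores, axis=-1) @ prev    # convex mix of prev tokens

# Two toy frames of 4 tokens with 8 features each
rng = np.random.default_rng(1)
frame_prev = rng.standard_normal((4, 8))
frame_curr = rng.standard_normal((4, 8))
out = cross_frame_attention(frame_curr, frame_prev)
```

Because each output token is a convex combination of the previous frame's tokens, the current frame is explicitly conditioned on its immediate temporal reference.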
MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation
The Segment Anything Model (SAM), a foundation model for general image
segmentation, has demonstrated impressive zero-shot performance across numerous
natural image segmentation tasks. However, SAM's performance significantly
declines when applied to medical images, primarily due to the substantial
disparity between natural and medical image domains. To effectively adapt SAM
to medical images, it is important to incorporate critical third-dimensional
information, i.e., volumetric or temporal knowledge, during fine-tuning.
Simultaneously, we aim to harness SAM's pre-trained weights within its original
2D backbone to the fullest extent. In this paper, we introduce a
modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable
to various volumetric and video medical data. Our method is rooted in a
parameter-efficient fine-tuning strategy that updates only a small portion of
weight increments while preserving the majority of SAM's pre-trained weights.
By injecting a series of 3D adapters into the transformer blocks of the image
encoder, our method enables the pre-trained 2D backbone to extract
third-dimensional information from input data. The effectiveness of our method
has been comprehensively evaluated on four medical image segmentation tasks, by
using 10 public datasets across CT, MRI, and surgical video data. Remarkably,
without using any prompt, our method consistently outperforms various
state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in
Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical
scene segmentation respectively. Our model also demonstrates strong
generalization, and excels in challenging tumor segmentation when prompts are
used. Our code is available at: https://github.com/cchen-cc/MA-SAM
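The parameter-efficient strategy described above, freezing the pre-trained weights and training only small inserted branches, can be sketched with a toy low-rank adapter on a single linear layer. This is illustrative only: MA-SAM's actual adapters are 3D modules inside the image-encoder transformer blocks, and all shapes here are made up.

```python
import numpy as np

def adapter_forward(x, W_frozen, A_down, B_up):
    """Frozen linear layer plus a low-rank adapter branch:
    y = x @ W + (x @ A) @ B, where only A and B would be trained."""
    return x @ W_frozen + (x @ A_down) @ B_up

d, r = 64, 4                        # hidden size, adapter bottleneck rank
rng = np.random.default_rng(2)
W = rng.standard_normal((d, d))     # stands in for a pre-trained weight (frozen)
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, d))                # zero-init: adapter starts as a no-op
x = rng.standard_normal((3, d))
y = adapter_forward(x, W, A, B)     # equals x @ W until B is trained

trainable = A.size + B.size
frozen = W.size
ratio = trainable / (trainable + frozen)   # only a small fraction is updated
```

Zero-initializing the up-projection keeps the adapted model identical to the pre-trained one at the start of fine-tuning, which is a common way such adapters preserve the backbone's behavior.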
Predictive models of resting state networks for assessment of altered functional connectivity in mild cognitive impairment
Due to the difficulties in establishing correspondences between functional regions across individuals and populations, systematic elucidation of functional connectivity alterations in mild cognitive impairment (MCI) in comparison with normal controls (NC) is still a challenging problem. In this paper, we assessed the functional connectivity alterations in MCI via novel, alternative predictive models of resting state networks (RSNs) learned from multimodal resting state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. First, ICA-clustering was used to construct RSNs from R-fMRI data in NC group. Second, since the RSNs in MCI are already altered and can hardly be constructed directly from R-fMRI data, structural landmarks derived from DTI data were employed as the predictive models of RSNs for MCI. Third, given that the landmarks are structurally consistent and correspondent across NC and MCI, functional connectivities in MCI were assessed based on the predicted RSNs and compared with those in NC. Experimental results demonstrated that the predictive models of RSNs based on multimodal R-fMRI and DTI data systematically and comprehensively revealed widespread functional connectivity alterations in MCI in comparison with NC
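At its core, the group comparison described above amounts to comparing region-by-region correlation matrices between cohorts once corresponding regions (here, the DTI-derived landmarks) are fixed. The sketch below uses synthetic time series, not the study's data or pipeline; the injected coupling between two "regions" is purely illustrative.

```python
import numpy as np

def connectivity_matrix(ts):
    """Pearson correlation between regional time series (regions x time)."""
    return np.corrcoef(ts)

rng = np.random.default_rng(3)
n_regions, n_tp = 5, 100
ts_nc  = rng.standard_normal((n_regions, n_tp))   # toy "normal control" group
ts_mci = rng.standard_normal((n_regions, n_tp))   # toy "MCI" group
# Couple regions 0 and 1 in NC only, mimicking a connection weakened in MCI
ts_nc[1] = 0.8 * ts_nc[0] + 0.2 * ts_nc[1]

fc_nc  = connectivity_matrix(ts_nc)
fc_mci = connectivity_matrix(ts_mci)
delta = fc_nc - fc_mci   # candidate map of functional connectivity alterations
```

Large entries of `delta` flag region pairs whose coupling differs between groups, which is the kind of alteration the predicted RSNs make comparable across NC and MCI.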
Evaluating the Potential of Leading Large Language Models in Reasoning Biology Questions
Recent advances in Large Language Models (LLMs) have presented new
opportunities for integrating Artificial General Intelligence (AGI) into
biological research and education. This study evaluated the capabilities of
leading LLMs, including GPT-4, GPT-3.5, PaLM2, Claude2, and SenseNova, in
answering conceptual biology questions. The models were tested on a
108-question multiple-choice exam covering biology topics in molecular biology,
biological techniques, metabolic engineering, and synthetic biology. Among the
models, GPT-4 achieved the highest average score of 90 and demonstrated the
greatest consistency across trials with different prompts. The results
indicated GPT-4's proficiency in logical reasoning and its potential to aid
biology research through capabilities like data analysis, hypothesis
generation, and knowledge integration. However, further development and
validation are still required before the promise of LLMs in accelerating
biological discovery can be realized.
When Brain-inspired AI Meets AGI
Artificial General Intelligence (AGI) has been a long-standing goal of
humanity, with the aim of creating machines capable of performing any
intellectual task that humans can do. To achieve this, AGI researchers draw
inspiration from the human brain and seek to replicate its principles in
intelligent machines. Brain-inspired artificial intelligence is a field that
has emerged from this endeavor, combining insights from neuroscience,
psychology, and computer science to develop more efficient and powerful AI
systems. In this article, we provide a comprehensive overview of brain-inspired
AI from the perspective of AGI. We begin with the current progress in
brain-inspired AI and its extensive connection with AGI. We then cover the
important characteristics of both human intelligence and AGI (e.g., scaling,
multimodality, and reasoning). We discuss important technologies toward
achieving AGI in current AI systems, such as in-context learning and prompt
tuning. We also investigate the evolution of AGI systems from both algorithmic
and infrastructural perspectives. Finally, we explore the limitations and
future of AGI.