Synthesis of Cadmium Selenide Quantum Dots and Their Cytotoxicity
Cadmium selenide (CdSe) nanoparticles (NPs) have applications in biomedicine, biochemistry and bioimaging through methods such as cell labelling and drug delivery (Chapter 1). This study aims to characterise the optical and biological properties of CdSe NPs so that their applications in these areas can be improved in the future.
Three types of CdSe NPs were synthesised using a wet chemical method with Cd:Se molar ratios of 10:1, 4:1 and 1:1. The observed luminescence of the CdSe NPs was strong and stable. The maximum photoluminescence (PL) peak of the CdSe (10:1) nanoparticles was at around 590 nm, and the ultraviolet-visible (UV-Vis) absorption spectrum showed a peak between 530 and 550 nm. The PL peak of CdSe (4:1) was the same as that of CdSe (10:1), and its UV-Vis spectrum showed a peak at about 550 nm. The aging studies indicate that sodium citrate (a stabiliser) can enhance the stability of the CdSe NPs; for example, CdSe NPs with 0.2% sodium citrate were more stable than those with 0.05% (Chapter 3). This property could be used to make more stable encapsulated drugs in the future and thereby improve clinical treatment methods (Chapter 1).
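The reported peak positions can be related to photon energies via E = hc/λ, which is how quantum-dot spectra are commonly interpreted (smaller dots shift peaks toward higher energy). A minimal sketch, not part of the study itself, using only the peak wavelengths quoted above:

```python
# Convert a spectral peak wavelength to photon energy, E = h*c / lambda.
# Constants in SI units (CODATA exact values).
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def peak_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a peak at the given wavelength (nm)."""
    return H * C / (wavelength_nm * 1e-9) / EV

# PL peak of CdSe (10:1) at ~590 nm and absorption peak at ~550 nm:
# the emission lies at lower energy than absorption (a Stokes shift).
print(f"PL  590 nm -> {peak_energy_ev(590):.2f} eV")
print(f"Abs 550 nm -> {peak_energy_ev(550):.2f} eV")
```

The ~0.15 eV gap between the two peaks is consistent with the usual red shift of emission relative to absorption in quantum dots.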
Cell toxicity of the CdSe NPs was evaluated with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. The MTT results show that the more cadmium ions accumulated in HHL-5 cells, the greater the cell toxicity. With the cadmium level held constant, the results also indicate that CdSe NPs with a cadmium-to-selenium ratio of 10:1 (CdSe (10:1)) had the strongest toxicity in HHL-5 cells of the three kinds of CdSe NPs tested, while CdSe (1:1) had the lowest. These results show that the toxicity of cadmium is pronounced, so accumulation of cadmium must be avoided in clinical use. In addition, confocal images of MCF-7 cells also reflect the relative toxicities of the CdSe NPs: the higher the concentration of CdSe NPs in the cells, the greater the observed toxicity.
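MTT toxicity numbers of this kind are normally reduced to percent viability from raw absorbance readings. A minimal sketch of that standard calculation (the absorbance values are illustrative, not the study's data):

```python
def mtt_viability(a_treated: float, a_control: float, a_blank: float = 0.0) -> float:
    """Percent cell viability from MTT absorbance readings (570 nm is typical):
    viability = (A_treated - A_blank) / (A_control - A_blank) * 100
    """
    return (a_treated - a_blank) / (a_control - a_blank) * 100.0

# Illustrative plate readings: untreated wells vs. wells dosed with CdSe NPs.
control = 1.20  # mean absorbance, untreated cells
treated = 0.54  # mean absorbance, NP-treated cells
blank = 0.06    # medium-only background

print(f"{mtt_viability(treated, control, blank):.1f}% viable")
```

Lower viability at a given dose corresponds to the higher toxicity reported for the cadmium-rich CdSe (10:1) particles.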
Moreover, all the experiments in this study (aging, TEM, quantum yield, MTT, confocal) were carried out on the same nine kinds of CdSe NPs with all other parameters held constant, which makes the study novel and unique.
GLoRE: Evaluating Logical Reasoning of Large Language Models
Recently, large language models (LLMs), including notable models such as
GPT-4 and burgeoning community models, have showcased significant general
language understanding abilities. However, there has been a scarcity of
attempts to assess the logical reasoning capacities of these LLMs, an essential
facet of natural language understanding. To encourage further investigation in
this area, we introduce GLoRE, a meticulously assembled General Logical
Reasoning Evaluation benchmark comprising 12 datasets that span three
different types of tasks. Our experimental results show that, compared to
human performance and supervised fine-tuning, the logical reasoning
capabilities of open LLMs need further improvement; ChatGPT and
GPT-4 show a strong capability of logical reasoning, with GPT-4 surpassing
ChatGPT by a large margin. We propose a self-consistency probing method to
enhance the accuracy of ChatGPT and a fine-tuning method to boost the
performance of an open LLM. We release the datasets and evaluation programs to
facilitate future research.
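The abstract does not spell out the self-consistency probing method; one common instantiation of self-consistency is to sample several answers for the same question at nonzero temperature and take a majority vote. A hedged sketch of that voting step (the sampled answers below are made up):

```python
from collections import Counter

def self_consistent_answer(sampled_answers: list[str]) -> tuple[str, float]:
    """Majority vote over answers sampled from the same prompt at T > 0.
    Returns the most frequent answer and its agreement rate."""
    counts = Counter(sampled_answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(sampled_answers)

# Illustrative: five sampled completions for one logical-reasoning question.
samples = ["B", "B", "A", "B", "C"]
answer, agreement = self_consistent_answer(samples)
print(answer, agreement)  # B 0.6
```

A low agreement rate can also serve as a probe: questions where the model disagrees with itself are likely ones where its reasoning is unreliable.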
2D-Shapley: A Framework for Fragmented Data Valuation
Data valuation -- quantifying the contribution of individual data sources to
certain predictive behaviors of a model -- is of great importance to enhancing
the transparency of machine learning and designing incentive systems for data
sharing. Existing work has focused on evaluating data sources with the shared
feature or sample space. How to valuate fragmented data sources, each of which
contains only partial features and samples, remains an open question. We start
by presenting a method to calculate the counterfactual of removing a fragment
from the aggregated data matrix. Based on the counterfactual calculation, we
further propose 2D-Shapley, a theoretical framework for fragmented data
valuation that uniquely satisfies some appealing axioms in the fragmented data
context. 2D-Shapley empowers a range of new use cases, such as selecting useful
data fragments, providing interpretation for sample-wise data values, and
fine-grained data issue diagnosis. Comment: ICML 202
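The counterfactual the abstract describes — the effect of removing one fragment (a block of samples × features) from the aggregated data matrix — can be illustrated with a toy utility function. A minimal sketch, with a stand-in variance-based utility in place of actual model performance:

```python
import numpy as np

def utility(X: np.ndarray) -> float:
    """Toy utility of a data matrix: total variance (a stand-in for a
    trained model's validation performance)."""
    return float(np.var(X))

def fragment_counterfactual(X: np.ndarray, rows: np.ndarray, cols: np.ndarray) -> float:
    """Counterfactual value of the fragment X[rows, cols]: the drop in
    utility when that block is masked out (here, replaced by the column
    means computed from the remaining rows)."""
    X_cf = X.astype(float).copy()
    keep = np.setdiff1d(np.arange(X.shape[0]), rows)
    X_cf[np.ix_(rows, cols)] = X[np.ix_(keep, cols)].mean(axis=0)
    return utility(X) - utility(X_cf)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
X[0:2, 0:2] += 5.0  # make one fragment clearly influential

print(fragment_counterfactual(X, rows=np.arange(0, 2), cols=np.arange(0, 2)))
```

2D-Shapley then aggregates such counterfactuals over orderings in both the sample and feature dimensions; the sketch shows only the single-fragment removal step.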
Excitement Surfeited Turns to Errors: Deep Learning Testing Framework Based on Excitable Neurons
Despite impressive capabilities and outstanding performance, deep neural
networks (DNNs) have attracted increasing public concern about their security
problems due to their frequently occurring erroneous behaviors. Therefore, it
is necessary to conduct systematic testing of DNNs before they are
deployed to real-world applications. Existing testing methods have provided
fine-grained metrics based on neuron coverage and proposed various approaches
to improve such metrics. However, it has been gradually realized that a higher
neuron coverage does \textit{not} necessarily represent better capabilities in
identifying defects that lead to errors. Besides, coverage-guided methods
cannot uncover errors caused by a faulty training procedure, so the robustness
improvement of DNNs achieved by retraining on these testing examples is
unsatisfactory. To address this challenge, we introduce the concept of
excitable neurons based on Shapley value and design a novel white-box testing
framework for DNNs, namely DeepSensor. It is motivated by our observation that
neurons with greater responsibility for model loss changes under small
perturbations are more likely to be related to incorrect corner cases caused
by potential defects. By maximizing the number of excitable neurons concerning
various wrong behaviors of models, DeepSensor can generate testing examples
that effectively trigger more errors due to adversarial inputs, polluted data
and incomplete training. Extensive experiments implemented on both image
classification models and speaker recognition models have demonstrated the
superiority of DeepSensor. Comment: 32 page
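The core quantity — a neuron's Shapley-valued responsibility for the loss change caused by a small input perturbation — can be sketched on a toy network. This is a generic Monte-Carlo Shapley attribution over hidden neurons, not DeepSensor's actual implementation; the network and perturbation are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fixed two-layer network: x -> h = relu(W1 @ x) -> y = w2 . h
W1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=8)

def loss(x, target, mask):
    """Squared error, with a 0/1 mask switching hidden neurons on/off."""
    h = np.maximum(W1 @ x, 0.0) * mask
    return (w2 @ h - target) ** 2

def loss_delta(x, x_pert, target, mask):
    """Loss change caused by perturbing x -> x_pert, under a neuron mask."""
    return loss(x_pert, target, mask) - loss(x, target, mask)

def neuron_shapley(x, x_pert, target, n_perm=200):
    """Monte-Carlo Shapley estimate of each hidden neuron's responsibility
    for the loss change, averaging marginal contributions over random
    neuron orderings."""
    n = W1.shape[0]
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        mask = np.zeros(n)
        prev = loss_delta(x, x_pert, target, mask)
        for i in order:
            mask[i] = 1.0
            cur = loss_delta(x, x_pert, target, mask)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perm

x = rng.normal(size=4)
x_pert = x + 0.1 * rng.normal(size=4)  # small input perturbation
phi = neuron_shapley(x, x_pert, target=1.0)
print("most responsible neurons:", np.argsort(-np.abs(phi))[:3])
```

Neurons with the largest |phi| are the "excitable" ones in the abstract's sense: the test generator would then search for inputs that maximize how many such neurons are activated.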
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
We introduce OpenShape, a method for learning multi-modal joint
representations of text, image, and point clouds. We adopt the commonly used
multi-modal contrastive learning framework for representation alignment, but
with a specific focus on scaling up 3D representations to enable open-world 3D
shape understanding. To achieve this, we scale up training data by ensembling
multiple 3D datasets and propose several strategies to automatically filter and
enrich noisy text descriptions. We also explore and compare strategies for
scaling 3D backbone networks and introduce a novel hard negative mining module
for more efficient training. We evaluate OpenShape on zero-shot 3D
classification benchmarks and demonstrate its superior capabilities for
open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy
of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than
10% for existing methods. OpenShape also achieves an accuracy of 85.3% on
ModelNet40, outperforming previous zero-shot baseline methods by 20% and
performing on par with some fully-supervised methods. Furthermore, we show that
our learned embeddings encode a wide range of visual and semantic concepts
(e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D
and image-3D interactions. Due to their alignment with CLIP embeddings, our
learned shape representations can also be integrated with off-the-shelf
CLIP-based models for various applications, such as point cloud captioning and
point cloud-conditioned image generation. Comment: Project Website: https://colin97.github.io/OpenShape
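The "commonly used multi-modal contrastive learning framework" the abstract builds on is a CLIP-style symmetric InfoNCE objective over paired embeddings. A minimal NumPy sketch of that base objective (OpenShape's hard negative mining and scaling strategies are not shown; the embeddings are random stand-ins):

```python
import numpy as np

def _log_softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for stability
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def contrastive_loss(shape_emb: np.ndarray, text_emb: np.ndarray,
                     temperature: float = 0.07) -> float:
    """Symmetric InfoNCE loss aligning a batch of shape embeddings with
    their paired text embeddings: row i of each batch is a positive pair,
    all other rows are negatives."""
    s = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature  # scaled pairwise cosine similarities
    idx = np.arange(len(s))
    loss_s2t = -_log_softmax(logits)[idx, idx].mean()    # shape -> text
    loss_t2s = -_log_softmax(logits.T)[idx, idx].mean()  # text -> shape
    return float((loss_s2t + loss_t2s) / 2)

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 16))
aligned = contrastive_loss(emb, emb)             # perfectly matched pairs
mismatched = contrastive_loss(emb, emb[::-1].copy())  # shuffled pairings
print(aligned, mismatched)  # aligned loss is far lower
```

Hard negative mining, as mentioned in the abstract, would bias the batch toward near-duplicate shapes so that the off-diagonal negatives are harder to separate.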
Zero123++: a Single Image to Consistent Multi-view Diffusion Base Model
We report Zero123++, an image-conditioned diffusion model for generating
3D-consistent multi-view images from a single input view. To take full
advantage of pretrained 2D generative priors, we develop various conditioning
and training schemes to minimize the effort of finetuning from off-the-shelf
image diffusion models such as Stable Diffusion. Zero123++ excels in producing
high-quality, consistent multi-view images from a single image, overcoming
common issues like texture degradation and geometric misalignment. Furthermore,
we showcase the feasibility of training a ControlNet on Zero123++ for enhanced
control over the generation process. The code is available at
https://github.com/SUDO-AI-3D/zero123plus