The Effect of Tributyltin on Food Intake and Neuropeptide Expression in the Rat Brain
Introduction: Tributyltin (TBT) is a widespread environmental pollutant. Several studies have demonstrated that TBT is involved in the development of obesity. However, few studies addressing the effects of TBT on the brain neuropeptides involved in appetite and body-weight homeostasis have been published. Material and methods: Experiments were carried out on female and male Sprague-Dawley rats. Animals were exposed to TBT (0.5 μg/kg body weight) for 54 days. Hepatic triglyceride and total cholesterol were determined using commercial enzyme kits. NPY, AgRP, POMC and CART mRNA expression in the brain was quantified by real-time PCR. Results: TBT exposure resulted in significant increases in hepatic total cholesterol and triglyceride concentrations in both male and female rats. Interestingly, increases in body weight and fat mass were found only in the TBT-treated male rats. TBT exposure also led to a significant increase in food intake by the female rats, while no change was observed in the male rats. Moreover, neuropeptide expression differed between males and females after TBT exposure: TBT induced brain NPY expression in the female rats and depressed brain POMC, AgRP and CART expression in the males. Conclusions: TBT can increase food intake in female rats, which is associated with disturbance of NPY in the brain. TBT had sex-dependent effects on brain NPY, AgRP, POMC and CART mRNA expression, indicating a complex neuroendocrine mechanism of TBT action. (Endokrynol Pol 2014; 65 (6): 485–490)
Dense Pixel-to-Pixel Harmonization via Continuous Image Representation
High-resolution (HR) image harmonization is of great significance in
real-world applications such as image synthesis and image editing. However, due
to the high memory costs, existing dense pixel-to-pixel harmonization methods
mainly focus on processing low-resolution (LR) images. Some recent works
combine them with color-to-color transformations, but these are either limited
to certain resolutions or depend heavily on hand-crafted image filters. In this
work, we explore leveraging the implicit neural representation (INR) and
propose a novel image Harmonization method based on Implicit neural Networks
(HINet), which, to the best of our knowledge, is the first dense pixel-to-pixel
method applicable to HR images without any hand-crafted filter design. Inspired
by the Retinex theory, we decouple the MLPs into two parts to respectively
capture the content and environment of composite images. A Low-Resolution Image
Prior (LRIP) network is designed to alleviate the Boundary Inconsistency
problem, and we also propose new designs for the training and inference
process. Extensive experiments have demonstrated the effectiveness of our
method compared with state-of-the-art methods. Furthermore, some interesting
and practical applications of the proposed method are explored. Our code is
available at https://github.com/WindVChen/INR-Harmonization. Comment: Accepted by IEEE
Transactions on Circuits and Systems for Video Technology (TCSVT).
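The coordinate-conditioned decoding described in the abstract — MLPs decoupled into content and environment branches whose outputs combine multiplicatively, in the spirit of Retinex theory — can be sketched minimally in NumPy. All layer sizes, the random weights, and the sigmoid "environment" map below are illustrative assumptions, not the paper's actual HINet architecture:

```python
import numpy as np

def mlp(x, weights):
    """Tiny fully connected net: ReLU hidden layers, linear output."""
    h = x
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = weights[-1]
    return h @ W + b

def init_mlp(sizes, rng):
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(0)
content_net = init_mlp([2, 32, 3], rng)  # coords -> content (reflectance-like) term
env_net     = init_mlp([2, 32, 3], rng)  # coords -> environment (illumination-like) term

# Query a dense grid of normalized pixel coordinates; any H, W works,
# which is what makes an INR decoder resolution-agnostic.
H, W = 8, 8
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
coords = np.stack([ys, xs], axis=-1).reshape(-1, 2)

content = mlp(coords, content_net)
envmap = 1.0 / (1.0 + np.exp(-mlp(coords, env_net)))  # squash to (0, 1)
harmonized = (content * envmap).reshape(H, W, 3)      # Retinex-style product
print(harmonized.shape)
```

Because the decoder is queried per coordinate rather than per fixed pixel grid, the same weights can render the harmonized result at any target resolution.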
Group Work Involved in the Practice of Life Education
With the continuous advancement of material civilization, the life values of college students have become alienated. More and more students do not know how to face pressure or seek help; some have even ended their lives to escape their problems. Obviously, life education is an urgent and necessary need in China. Therefore, the author explores the feasibility of life education from the angle of social work practice (such as "group work"). Key words: Life education; Group work; College students
Continuous Cross-resolution Remote Sensing Image Change Detection
Most contemporary supervised Remote Sensing (RS) image Change Detection (CD)
approaches are customized for equal-resolution bitemporal images. Real-world
applications raise the need for cross-resolution change detection, aka, CD
based on bitemporal images with different spatial resolutions. Given training
samples of a fixed bitemporal resolution difference (ratio) between the
high-resolution (HR) image and the low-resolution (LR) one, current
cross-resolution methods may fit a certain ratio but lack adaptation to other
resolution differences. Toward continuous cross-resolution CD, we propose
scale-invariant learning to enforce the model to consistently predict HR
results given synthesized samples of varying resolution differences.
Concretely, we synthesize blurred versions of the HR image by random
downsampled reconstructions to reduce the gap between HR and LR images. We
introduce coordinate-based representations to decode per-pixel predictions by
feeding the coordinate query and corresponding multi-level embedding features
into an MLP that implicitly learns the shape of land cover changes, thereby
aiding the recognition of blurred objects in the LR image. Moreover, considering
that spatial resolution mainly affects the local textures, we apply
local-window self-attention to align bitemporal features during the early
stages of the encoder. Extensive experiments on two synthesized and one
real-world different-resolution CD datasets verify the effectiveness of the
proposed method. Our method significantly outperforms several vanilla CD
methods and two cross-resolution CD methods on the three datasets both in
in-distribution and out-of-distribution settings. The empirical results suggest
that our method could yield relatively consistent HR change predictions
regardless of varying bitemporal resolution ratios. Our code is available at
\url{https://github.com/justchenhao/SILI_CD}. Comment: 21 pages, 11 figures. Accepted article by IEEE TGR
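The scale-invariant synthesis step above — blurring the HR image via random downsampled reconstruction so that one model sees varying resolution gaps — can be sketched as follows. Box-filter downsampling and nearest-neighbour reconstruction are assumed stand-ins for the paper's actual degradation pipeline:

```python
import numpy as np

def degrade(hr, ratio):
    """Blur an HR image by downsampling by `ratio`, then reconstructing to HR size."""
    H, W = hr.shape[:2]
    h, w = max(1, H // ratio), max(1, W // ratio)
    # Box-filter downsample: average each ratio x ratio block.
    lr = hr[:h * ratio, :w * ratio].reshape(h, ratio, w, ratio, -1).mean(axis=(1, 3))
    # Nearest-neighbour reconstruction back onto the HR grid.
    return np.repeat(np.repeat(lr, ratio, axis=0), ratio, axis=1)

rng = np.random.default_rng(0)
hr = rng.random((32, 32, 3))

# During training, the resolution gap would be drawn at random per sample;
# here we simply enumerate a few ratios to show the synthesized variants.
for ratio in (1, 2, 4, 8):
    blurred = degrade(hr, ratio)
    print(ratio, blurred.shape)
```

Each synthesized variant keeps the HR grid size but carries LR-like content, so supervising HR predictions on all of them encourages consistency across bitemporal resolution ratios.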
Continuous Remote Sensing Image Super-Resolution based on Context Interaction in Implicit Function Space
Despite its fruitful applications in remote sensing, image super-resolution
is troublesome to train and deploy as it handles different resolution
magnifications with separate models. Accordingly, we propose a
highly-applicable super-resolution framework called FunSR, which settles
different magnifications with a unified model by exploiting context interaction
within implicit function space. FunSR comprises a functional representor, a
functional interactor, and a functional parser. Specifically, the representor
transforms the low-resolution image from Euclidean space to multi-scale
pixel-wise function maps; the interactor enables pixel-wise function expression
with global dependencies; and the parser, which is parameterized by the
interactor's output, converts the discrete coordinates with additional
attributes to RGB values. Extensive experimental results demonstrate that FunSR
reports state-of-the-art performance in both fixed-magnification and
continuous-magnification settings; meanwhile, it enables many convenient
applications thanks to its unified nature.
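The representor/parser decomposition above can be sketched as a coordinate-queried decoder over a pixel-wise function map. In this NumPy illustration the representor's output is a random feature map, the interactor is omitted, and the parser samples the nearest feature and decodes coordinates plus features to RGB — all simplifying assumptions, not FunSR's actual design:

```python
import numpy as np

def nearest_feature(feat, coords):
    """Sample the per-pixel function map at continuous coords in [0, 1]^2."""
    H, W, C = feat.shape
    ys = np.clip((coords[:, 0] * H).astype(int), 0, H - 1)
    xs = np.clip((coords[:, 1] * W).astype(int), 0, W - 1)
    return feat[ys, xs]

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 16, 8))      # stand-in for the representor's function map
W1 = rng.standard_normal((8 + 2, 32)) * 0.1  # parser conditioned on feature + coordinate
W2 = rng.standard_normal((32, 3)) * 0.1

def parse(coords):
    """Parser: (sampled feature, coordinate) -> RGB."""
    z = np.concatenate([nearest_feature(feat, coords), coords], axis=1)
    return np.maximum(z @ W1, 0.0) @ W2

# One set of weights serves any magnification, including non-integer ones.
for scale in (2, 3.5, 4):
    H = int(16 * scale)
    ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, H), indexing="ij")
    rgb = parse(np.stack([ys, xs], -1).reshape(-1, 2)).reshape(H, H, 3)
    print(scale, rgb.shape)
```

The key property shown is that magnification is just a denser coordinate grid; no per-scale model is needed.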
OvarNet: Towards Open-vocabulary Object Attribute Recognition
In this paper, we consider the problem of simultaneously detecting objects
and inferring their visual attributes in an image, even for those with no
manual annotations provided at the training stage, resembling an
open-vocabulary scenario. To achieve this goal, we make the following
contributions: (i) we start with a naive two-stage approach for open-vocabulary
object detection and attribute classification, termed CLIP-Attr. The candidate
objects are first proposed with an offline RPN and later classified for
semantic category and attributes; (ii) we combine all available datasets and
train with a federated strategy to finetune the CLIP model, aligning the visual
representation with attributes, additionally, we investigate the efficacy of
leveraging freely available online image-caption pairs under weakly supervised
learning; (iii) in pursuit of efficiency, we train a Faster-RCNN type model
end-to-end with knowledge distillation, that performs class-agnostic object
proposals and classification on semantic categories and attributes with
classifiers generated from a text encoder; Finally, (iv) we conduct extensive
experiments on VAW, MS-COCO, LSA, and OVAD datasets, and show that recognition
of semantic category and attributes is complementary for visual scene
understanding, i.e., jointly training object detection and attributes
prediction largely outperforms existing approaches that treat the two tasks
independently, demonstrating strong generalization ability to novel attributes
and categories.
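The "classifiers generated from a text encoder" idea above amounts to scoring region embeddings against attribute-prompt embeddings by cosine similarity, so new attributes need only new text. The embeddings below are random stand-ins for CLIP outputs, and the attribute names and temperature are illustrative assumptions:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
D = 64  # shared vision-language embedding dimension (stand-in)

# Text-encoder outputs for attribute prompts act as the classifier weights;
# extending the vocabulary is just appending rows, no retraining of the head.
attr_names = ["striped", "metallic", "wooden", "furry"]
text_emb = normalize(rng.standard_normal((len(attr_names), D)))

region_emb = normalize(rng.standard_normal((5, D)))  # 5 proposed boxes

logits = region_emb @ text_emb.T / 0.07  # temperature-scaled cosine similarity
probs = 1.0 / (1.0 + np.exp(-logits))    # sigmoid: attributes are multi-label
print(probs.shape)
```

Multi-label sigmoids (rather than a softmax over attributes) matter here because an object can be, say, both striped and furry at once.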
Graph Sampling-based Meta-Learning for Molecular Property Prediction
Molecular property is usually observed with a limited number of samples, and
researchers have considered property prediction as a few-shot problem. One
important fact that has been ignored by prior works is that each molecule can
be recorded with several different properties simultaneously. To effectively
utilize many-to-many correlations of molecules and properties, we propose a
Graph Sampling-based Meta-learning (GS-Meta) framework for few-shot molecular
property prediction. First, we construct a Molecule-Property relation Graph
(MPG): molecules and properties are nodes, while property labels decide edges.
Then, to utilize the topological information of MPG, we reformulate an episode
in meta-learning as a subgraph of the MPG, containing a target property node,
molecule nodes, and auxiliary property nodes. Third, as episodes in the form of
subgraphs are no longer independent of each other, we propose to schedule the
subgraph sampling process with a contrastive loss function, which considers the
consistency and discrimination of subgraphs. Extensive experiments on 5
commonly-used benchmarks show GS-Meta consistently outperforms state-of-the-art
methods by 5.71%-6.93% in ROC-AUC and verify the effectiveness of each proposed
module. Our code is available at https://github.com/HICAI-ZJU/GS-Meta. Comment: Accepted by IJCAI 202
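The episode-as-subgraph idea above can be sketched on a toy molecule-property relation graph: an episode gathers a target property node, the molecules recorded with it, and auxiliary property nodes those molecules also carry. The toy labels, the sampling rule, and the omission of the contrastive scheduling are all simplifying assumptions:

```python
import numpy as np

# Toy MPG as an adjacency dict: labels[molecule][property] in {0, 1};
# a missing key means the property was never recorded for that molecule.
labels = {
    "mol_a": {"toxicity": 1, "solubility": 0},
    "mol_b": {"toxicity": 0, "logP": 1},
    "mol_c": {"solubility": 1, "logP": 0},
}

def sample_episode(target_prop, n_aux, rng):
    """One episode = a subgraph: target property node, its labeled
    molecule nodes, and a random subset of auxiliary property nodes."""
    mols = [m for m, props in labels.items() if target_prop in props]
    aux = {p for m in mols for p in labels[m] if p != target_prop}
    aux = list(rng.permutation(sorted(aux)))[:n_aux]
    return {"target": target_prop, "molecules": mols, "auxiliary": aux}

rng = np.random.default_rng(0)
ep = sample_episode("toxicity", n_aux=1, rng=rng)
print(ep["molecules"])  # ['mol_a', 'mol_b']
```

Because molecules carry several properties at once, the auxiliary nodes let one episode expose many-to-many molecule-property correlations instead of treating each property task in isolation.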
Learning Invariant Molecular Representation in Latent Discrete Space
Molecular representation learning lays the foundation for drug discovery.
However, existing methods suffer from poor out-of-distribution (OOD)
generalization, particularly when data for training and testing originate from
different environments. To address this issue, we propose a new framework for
learning molecular representations that exhibit invariance and robustness
against distribution shifts. Specifically, we propose a strategy called
``first-encoding-then-separation'' to identify invariant molecule features in
the latent space, which deviates from conventional practices. Prior to the
separation step, we introduce a residual vector quantization module that
mitigates the over-fitting to training data distributions while preserving the
expressivity of encoders. Furthermore, we design a task-agnostic
self-supervised learning objective to encourage precise invariance
identification, which makes our method widely applicable to a variety of
tasks, such as regression and multi-label classification. Extensive experiments
on 18 real-world molecular datasets demonstrate that our model achieves
stronger generalization against state-of-the-art baselines in the presence of
various distribution shifts. Our code is available at
https://github.com/HICAI-ZJU/iMoLD
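The residual vector quantization module mentioned above quantizes an encoder output stage by stage, with each codebook capturing what the previous stages left over. A minimal NumPy sketch with untrained random codebooks (the stage count, dimensions, and codebook sizes are illustrative, not the paper's configuration):

```python
import numpy as np

def quantize(x, codebook):
    """One VQ stage: replace each row of x with its nearest codeword."""
    idx = np.argmin(((x[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
D, K = 8, 16
books = [rng.standard_normal((K, D)) for _ in range(3)]  # 3 residual stages

z = rng.standard_normal((4, D))  # encoder outputs for 4 molecules
residual, recon = z.copy(), np.zeros_like(z)
for cb in books:                 # each stage quantizes the remaining residual
    q, _ = quantize(residual, cb)
    recon += q
    residual -= q

# By construction the stages decompose z exactly: recon + residual == z.
print(np.allclose(recon + residual, z))
```

Restricting representations to a small discrete codebook is what limits over-fitting to the training distribution, while stacking residual stages preserves the encoder's expressivity.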