Characterization of p73 and STAT5b genes that are susceptible to manganese exposure in dopaminergic neurons
Manganese (Mn) is an essential trace element found in most living organisms. Chronic exposure to Mn has been linked to the pathogenesis of manganism, which displays neurological abnormalities similar to those associated with Parkinson's disease, resulting from dysfunction of the extrapyramidal motor system within the basal ganglia. However, the exact cellular and molecular mechanisms underlying Mn-induced neurotoxicity have not been defined. Oxidative stress-mediated dopaminergic neuronal apoptosis is considered to be the prime mechanism of Mn neurotoxicity. Thus, we sought to identify genes that are altered during Mn exposure in order to elucidate the mechanisms underlying Mn-induced neurotoxicity. First, we used the Qiagen mouse apoptosis RT2 Profiler™ quantitative PCR array system to identify genes susceptible to Mn exposure. We treated C57 black mice with 10 mg/kg Mn via oral gavage for 30 days. Afterwards, a PCR apoptosis array was performed on substantia nigral tissues for 84 genes associated with apoptotic signaling. Interestingly, we found a significant downregulation of the tumor suppressor gene p73 in Mn-treated substantia nigral tissues. Western blot analyses revealed that the p73 isoform lacking the N-terminal transactivation domain (ΔNp73) was downregulated in substantia nigral tissues of C57 black mice exposed to 30 mg/kg Mn for 30 days via gavage. To further characterize the functional role of Mn-induced p73 downregulation in Mn neurotoxicity, we examined the interrelationships between the effects of Mn on p73 gene expression and apoptotic cell death in the N27 dopaminergic neuronal model. Exposure to 300 μM Mn downregulated ΔNp73 protein in N27 dopaminergic neurons in a time-dependent manner, consistent with our animal study. We further determined that the protein level of ΔNp73 was also reduced in primary striatal cultures in a dose-dependent manner.
Furthermore, overexpression of ΔNp73 conferred modest cellular protection against Mn-induced neurotoxicity. Secondly, we identified the signal transducer and activator of transcription 5b (STAT5b) gene, which was downregulated in both a time-dependent and dose-dependent manner during Mn exposure in N27 dopaminergic neuronal cells over a 12 h span. However, STAT1 was relatively unaffected during Mn treatment, indicating an isoform-specific effect of Mn on STAT5b. Consistent with the N27 dopaminergic neuronal cell model, Mn exposure downregulated STAT5b expression in primary mouse striatal cultures. Quantitative RT-PCR analyses showed that Mn exposure induces downregulation of STAT5b expression at the transcriptional level as well. Moreover, Bcl-2, a well-known downstream target of the STAT5b pathway, was also downregulated concomitantly during Mn exposure. Pretreatment with 20 μM lactacystin failed to prevent downregulation of STAT5b, indicating that STAT5b downregulation was independent of the proteasomal degradation pathway. Pretreatment with N-acetylcysteine (NAC), however, protected against downregulation of STAT5b. In addition, treatment of N27 cells with MPP+ also downregulated STAT5b. These results support the hypothesis that Mn exposure mediates oxidative stress that induces downregulation of STAT5b. Overexpression of STAT5b protected N27 cells against Mn-induced neurotoxicity. Furthermore, overexpression of STAT5b protected mitochondria in N27 cells. Downregulation of STAT5b was recapitulated in the substantia nigra of Mn-treated C57 black mice and in the MitoPark Parkinson's disease model. We also show that human lymphocytes exhibit downregulation of STAT5b during Mn exposure, suggesting STAT5b as a potential therapeutic target for Mn-induced neurotoxicity and Parkinson's disease patients. Furthermore, we show that Mn exposure suppresses the promoter activity of STAT5b in MN9D dopaminergic cells.
To characterize the molecular mechanisms underlying STAT5b downregulation during Mn neurotoxicity, we examined the effects of 300 μM Mn exposure in a promoter analysis of STAT5b expression. We subcloned STAT5b promoter 1 from mouse brain. Analysis of the mouse STAT5b promoter from 2,000 nt upstream to 500 nt downstream indicated that a proximal region near exon 1 contains the regulatory element responsive to Mn exposure. Detailed mutational analyses of putative transcription factor binding sites revealed that Sp1-like transcription factor binding sites near exon 1 may be required for the suppression of STAT5b in Mn-induced neurotoxicity. Two KLF binding sites acted as transcriptional repressors that respond to Mn exposure, whereas one Sp1 binding site acted as a transcriptional activator whose activity is reduced upon Mn exposure. These data suggest that Mn exposure alters the profiles of transcription factors to downregulate anti-apoptotic STAT5b signaling via an Sp1-like transcription factor-dependent mechanism in dopaminergic neurons, which may significantly contribute to Mn neurotoxicity. Taken together, our results suggest that Mn exposure compromises the expression of neuroprotective ΔNp73 and STAT5b in dopaminergic neurons, thereby exacerbating neuronal cell death. (NIH grants ES10586, ES19267, NS74443)
Predict to Detect: Prediction-guided 3D Object Detection using Sequential Images
Recent camera-based 3D object detection methods have introduced sequential
frames to improve the detection performance hoping that multiple frames would
mitigate the large depth estimation error. Despite improved detection
performance, prior works rely on naive fusion methods (e.g., concatenation) or
are limited to static scenes (e.g., temporal stereo), neglecting the importance
of the motion cue of objects. These approaches do not fully exploit the
potential of sequential images and show limited performance improvements. To
address this limitation, we propose a novel 3D object detection model, P2D
(Predict to Detect), that integrates a prediction scheme into a detection
framework to explicitly extract and leverage motion features. P2D predicts
object information in the current frame using solely past frames to learn
temporal motion features. We then introduce a novel temporal feature
aggregation method that attentively exploits Bird's-Eye-View (BEV) features
based on predicted object information, resulting in accurate 3D object
detection. Experimental results demonstrate that P2D improves mAP and NDS by
3.0% and 3.7% compared to the sequential image-based baseline, illustrating
that incorporating a prediction scheme can significantly improve detection
accuracy.
Comment: ICCV 202
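The core idea above — predict the current frame from past frames only, then use the prediction to weight temporal BEV features — can be caricatured in a few lines of NumPy. This is a toy sketch, not P2D's implementation: the learned motion predictor is replaced by a simple temporal mean, and all names (`predict_then_attend`, shapes, etc.) are invented for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict_then_attend(past_bev, current_bev):
    """Toy prediction-guided temporal aggregation over BEV features.

    past_bev:    (T, N, C) flattened BEV features from T past frames
    current_bev: (N, C)    BEV features from the current frame
    Returns fused (N, C) features.
    """
    # "Predict" the current frame from past frames only; a temporal mean
    # stands in here for a learned motion predictor.
    predicted = past_bev.mean(axis=0)                              # (N, C)
    # Use the prediction as per-location queries over past + current frames.
    keys = np.concatenate([past_bev, current_bev[None]], axis=0)   # (T+1, N, C)
    scores = np.einsum('nc,tnc->tn', predicted, keys) / np.sqrt(keys.shape[-1])
    w = softmax(scores, axis=0)                                    # (T+1, N)
    # Attention-weighted sum across time at each BEV location.
    return np.einsum('tn,tnc->nc', w, keys)
```

With identical features in every frame the attention is uniform and the output equals the input, which makes the aggregation easy to sanity-check.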
Effects of Active Participation and Education of Caregivers on Peripheral Intravenous Injections for Their Child
This study aimed to determine the effects of active participation and education of caregivers on the pain experienced by their hospitalized children, the anxiety of the caregivers, and the working efficiency of nurses when administering peripheral intravenous (IV) injections to their children. IV injections have been found to be the procedure most feared by pediatric inpatients. A quasi-experimental design was used in which different types of treatment were given to subjects in three groups. All caregivers received brief verbal information about the peripheral IV injection procedure for their child. Those in the control group then stayed outside the treatment room, those in the first experimental group observed the procedure, and those in the second experimental group participated actively in the procedure after additionally receiving written information about it. The hospitalized children's pain level did not differ among the three study groups (F=1.18, p=.323), whereas the caregivers' anxiety level differed, being lowest in the second experimental group (F=5.98, p=.001). The duration of performing the IV injection was longest in the first experimental group and shortest in the control group (F=5.07, p=.003). This study shows that active participation and education of caregivers decreased the caregivers' anxiety during peripheral IV injections for their children, while the absence of caregivers shortened the duration of performing the IV injection. The outcomes for caregiver anxiety and IV injection duration were worse for caregivers who observed their child without receiving additional education about or participating in the injection.
Educational Needs Associated with the Level of Complication and Comparative Risk Perceptions in People with Type 2 Diabetes
Objectives: This study aimed to identify the educational needs of people with type 2 diabetes according to risk perceptions and the level of severity of complications. Methods: The 177 study participants were outpatients of the internal medicine department at a university hospital in the Republic of Korea who consented to participate in the survey from December 10, 2016 to February 10, 2017. The data were analyzed using descriptive statistics, Pearson correlation, ANOVA with post-hoc comparison, and multiple regression analysis. Type 2 diabetes complications were classified into 3 groups: no complications, common complications, and severe complications. Results: There were statistically significant positive correlations between educational needs and comparative risk perceptions, and between the level of complication and comparative risk perception. Multiple regression analysis revealed that the factor predicting the educational needs of people with type 2 diabetes was their comparative risk perceptions, rather than the severity of diabetes complications or sociodemographic variables. Conclusion: Since risk perception is the factor that indicates the educational needs of people with type 2 diabetes, there is a need to explore factors that increase risk perception in order to meet educational needs. The findings suggest that a more specific and individualized educational program, which focuses on each person's risk perceptions, should be developed
3D Dual-Fusion: Dual-Domain Dual-Query Camera-LiDAR Fusion for 3D Object Detection
Fusing data from cameras and LiDAR sensors is an essential technique to
achieve robust 3D object detection. One key challenge in camera-LiDAR fusion
involves mitigating the large domain gap between the two sensors in terms of
coordinates and data distribution when fusing their features. In this paper, we
propose a novel camera-LiDAR fusion architecture called 3D Dual-Fusion, which
is designed to mitigate the gap between the feature representations of camera
and LiDAR data. The proposed method fuses the features of the camera-view and
3D voxel-view domain and models their interactions through deformable
attention. We redesign the transformer fusion encoder to aggregate the
information from the two domains. Two major changes include 1) dual query-based
deformable attention to fuse the dual-domain features interactively and 2) 3D
local self-attention to encode the voxel-domain queries prior to dual-query
decoding. The results of an experimental evaluation show that the proposed
camera-LiDAR fusion architecture achieved competitive performance on the KITTI
and nuScenes datasets, with state-of-the-art performance in some 3D object
detection benchmark categories.
Comment: 12 pages, 3 figures
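The deformable attention the abstract leans on — a query samples a feature map at a few learned offsets around a reference point and takes an attention-weighted sum — can be sketched minimally as below. This is an illustrative single-head toy, not the paper's code: nearest-neighbor sampling replaces bilinear interpolation, and the offsets and weights are passed in rather than predicted by a network.

```python
import numpy as np

def deformable_attend(query, feat_map, ref_xy, offsets, weights):
    """Toy single-head deformable attention over one feature map.

    query:    (C,)      query feature (camera-view or voxel-domain)
    feat_map: (H, W, C) feature map to sample from
    ref_xy:   (2,)      reference point (x, y) of the query
    offsets:  (K, 2)    sampling offsets around the reference point
    weights:  (K,)      attention weights over the K samples (sum to 1)
    """
    H, W, C = feat_map.shape
    out = np.zeros(C)
    for (dx, dy), w in zip(offsets, weights):
        # Nearest-neighbor sample, clipped to the map boundary.
        x = int(np.clip(round(ref_xy[0] + dx), 0, W - 1))
        y = int(np.clip(round(ref_xy[1] + dy), 0, H - 1))
        out += w * feat_map[y, x]
    return query + out  # residual connection, as in standard transformer blocks
```

In a dual-query setup along these lines, one such call would aggregate camera-view features and another voxel-domain features before the two results are combined.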
CRN: Camera Radar Net for Accurate, Robust, Efficient 3D Perception
Autonomous driving requires an accurate and fast 3D perception system that
includes 3D object detection, tracking, and segmentation. Although recent
low-cost camera-based approaches have shown promising results, they are
susceptible to poor illumination or bad weather conditions and have a large
localization error. Hence, fusing camera with low-cost radar, which provides
precise long-range measurement and operates reliably in all environments, is
promising but has not yet been thoroughly investigated. In this paper, we
propose Camera Radar Net (CRN), a novel camera-radar fusion framework that
generates a semantically rich and spatially accurate bird's-eye-view (BEV)
feature map for various tasks. To overcome the lack of spatial information in
an image, we transform perspective view image features to BEV with the help of
sparse but accurate radar points. We further aggregate image and radar feature
maps in BEV using multi-modal deformable attention designed to tackle the
spatial misalignment between inputs. CRN in the real-time setting operates at 20
FPS while achieving performance comparable to LiDAR detectors on nuScenes, and
even outperforms them at far distances in the 100 m setting. Moreover, CRN in
the offline setting yields 62.4% NDS and 57.5% mAP on the nuScenes test set and
ranks first among all camera and camera-radar 3D object detectors.
Comment: IEEE/CVF International Conference on Computer Vision (ICCV'23)
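The view transformation the abstract describes — lifting perspective image features into BEV with the help of sparse but accurate radar points — can be caricatured as a scatter operation. This is a toy sketch under strong simplifications, not CRN's learned transform: one feature vector per image column, one radar depth bin per column, and no occupancy or attention modeling.

```python
import numpy as np

def image_to_bev_with_radar(img_col_feats, radar_depths, bev_shape):
    """Toy view transform: place each image column's feature into the BEV
    grid at the depth suggested by a radar return for that column.

    img_col_feats: (W, C) one feature vector per image column
    radar_depths:  (W,)   radar depth-bin index per column (-1 = no return)
    bev_shape:     (D, W) BEV grid size (depth bins x columns)
    """
    D, W = bev_shape
    C = img_col_feats.shape[1]
    bev = np.zeros((D, W, C))
    for w in range(W):
        d = radar_depths[w]
        if 0 <= d < D:          # columns without a radar return stay empty
            bev[d, w] = img_col_feats[w]
    return bev
```

The sparsity of the radar is visible directly: columns with no return contribute nothing, which is why CRN additionally aligns the two modalities in BEV with deformable attention rather than relying on the scatter alone.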
RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection
While LiDAR sensors have been successfully applied to 3D object detection, the
affordability of radar and camera sensors has led to a growing interest in
fusing radars and cameras for 3D object detection. However, previous
radar-camera fusion models have not fully utilized radar information, in that
initial 3D proposals were generated from camera features only, with
instance-level fusion conducted subsequently. In this
paper, we propose radar-camera multi-level fusion (RCM-Fusion), which fuses
radar and camera modalities at both the feature-level and instance-level to
fully utilize radar information. At the feature-level, we propose a Radar
Guided BEV Encoder which utilizes radar Bird's-Eye-View (BEV) features to
transform image features into precise BEV representations and then adaptively
combines the radar and camera BEV features. At the instance-level, we propose a
Radar Grid Point Refinement module that reduces localization error by
considering the characteristics of the radar point clouds. The experiments
conducted on the public nuScenes dataset demonstrate that our proposed
RCM-Fusion offers 11.8% performance gain in nuScenes detection score (NDS) over
the camera-only baseline model and achieves state-of-the-art performances among
radar-camera fusion methods in the nuScenes 3D object detection benchmark. Code
will be made publicly available.
Comment: 10 pages, 5 figures
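The instance-level idea — reducing localization error of a camera-derived proposal by consulting nearby radar points — can be illustrated with a deliberately simple rule. This is a hypothetical sketch, not RCM-Fusion's Radar Grid Point Refinement module: the learned refinement is replaced by nudging the predicted center toward the mean of radar returns within a radius, and the function name and parameters are invented.

```python
import numpy as np

def refine_center(pred_center, radar_points, radius=2.0, alpha=0.5):
    """Toy refinement: pull a predicted BEV box center toward nearby radar
    returns, which are sparse but precise in range.

    pred_center:  (2,)   predicted BEV center (x, y)
    radar_points: (N, 2) radar point cloud in BEV coordinates
    radius:       how far to look for supporting radar points
    alpha:        blend factor between prediction and radar evidence
    """
    dists = np.linalg.norm(radar_points - pred_center, axis=1)
    near = radar_points[dists < radius]
    if len(near) == 0:
        return pred_center          # no radar support: keep the prediction
    return (1 - alpha) * pred_center + alpha * near.mean(axis=0)
```

The characteristic this toy captures is that radar helps exactly where cameras are weakest (range), while contributing nothing when no return falls near the proposal.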