Developing a Hyperspectral Remote Sensing-Based Algorithm to Diagnose Potato Moisture for Water-Saving Irrigation
Appropriate water supply is crucial for producing high-yield, high-quality potato tubers. However, potatoes in China are mainly planted in arid and semi-arid regions, where precipitation usually cannot meet the crop's water demand throughout the growth period. Given the actual water shortage in these areas, monitoring the water status of potato plants in a timely and accurate manner, and thereby precisely controlling irrigation, is of great significance for water-saving potato management. Hyperspectral remote sensing has unique advantages in diagnosing crop water stress. In this paper, canopy spectral reflectance and plant water content were measured under five irrigation treatments. Spectral parameters responsive to plant water content were selected, and hyperspectral diagnosis models for the leaf water content (LWC) and aboveground water content (AGWC) of potato plants were established. Potato tuber yield was highest under sufficient irrigation throughout the growth period, and plant water content showed a downward trend as drought intensified. The peak hyperspectral reflectance of potato plant canopies appeared in the red wavelengths, where reflectance varied significantly among water treatments and decreased with decreasing irrigation. Six models based on sensitive bands, first-order derivatives, and moisture spectral indices were established to monitor the water content of potato plants. The R2 values of the partial least squares regression (PLSR), support vector machine (SVM), and BP neural network (BP) models were 0.8418, 0.9020, and 0.8926, respectively, between LWC and the hyperspectral data, and 0.8003, 0.8167, and 0.8671, respectively, between AGWC and the hyperspectral data. All six models can predict the water content of potato plants, but SVM is the best model for predicting LWC. These results are of great significance for guiding the precision irrigation of potato plants at different growth stages.
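To illustrate the kind of regression modeling described above, here is a minimal sketch of fitting PLSR and SVM models to canopy reflectance spectra to predict leaf water content; the synthetic spectra, array shapes, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: predicting leaf water content (LWC) from canopy
# reflectance spectra with PLSR and SVM regression (scikit-learn).
# Data shapes and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_bands = 150, 200            # e.g. 150 canopy spectra, 200 bands
X = rng.random((n_samples, n_bands))     # stand-in for reflectance spectra
y = X[:, 50] * 0.6 + rng.normal(0, 0.05, n_samples)  # stand-in for LWC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
svm = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)

print("PLSR R2:", r2_score(y_te, pls.predict(X_te).ravel()))
print("SVM  R2:", r2_score(y_te, svm.predict(X_te)))
```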
Safety analysis of pemigatinib leveraging the US Food and Drug Administration Adverse Event Reporting System
Background: Cholangiocarcinoma (CCA) is a highly lethal and aggressive epithelial tumor of the hepatobiliary system. A poor prognosis, a propensity for relapse, and low chances of cure and survival are among its hallmarks. Pemigatinib, the first targeted treatment for CCA in the United States, has demonstrated a significant response rate and encouraging survival data in early-phase trials. The adverse events (AEs) of pemigatinib must also be determined. Objective: To understand more deeply the real-world safety of pemigatinib through data-mining of the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS). Methods: Disproportionality analysis was employed in a retrospective pharmacovigilance investigation to identify AEs linked to pemigatinib use as signals. Data were collected from 1 January 2020 to 30 June 2022. Four data-mining methods (reporting odds ratio; proportional reporting ratio; Bayesian confidence propagation neural network information components; empirical Bayes geometric means) were used to calculate disproportionality. Results: A total of 203 cases with pemigatinib as the prime-suspect medication were found in our search, involving 99 preferred terms (PTs). Thirteen signals of pemigatinib-induced AEs across seven System Organ Classes were detected that met all four algorithms simultaneously. Nephrolithiasis was an unexpected significant AE, not listed on the drug label, found in our data-mining. Comparing pemigatinib with platinum drugs across 33 PTs revealed that 13 PTs also met the criteria of all four algorithms. Ten of these PTs were identical to those found in the comparison against all other drugs, and (except for decreased blood phosphorus) their signal values were higher than those against all other drugs tested. However, comparing pemigatinib with infigratinib across the 33 PTs revealed no significant signals under any algorithm. Conclusion: Some significant signals were detected between pemigatinib use and AEs. PTs with apparently strong signals and PTs not mentioned in the label should be taken seriously.
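For readers unfamiliar with disproportionality analysis, the sketch below computes two of the cited measures, the reporting odds ratio (ROR) and the proportional reporting ratio (PRR), from a standard 2×2 contingency table; the report counts are invented for illustration.

```python
# Minimal sketch of FAERS-style disproportionality measures from a 2x2 table.
# a: reports with the drug AND the AE, b: drug without the AE,
# c: other drugs with the AE, d: other drugs without the AE.
# The counts below are invented for illustration only.
import math

a, b, c, d = 15, 188, 4_000, 996_000

ror = (a / b) / (c / d)                        # reporting odds ratio
prr = (a / (a + b)) / (c / (c + d))            # proportional reporting ratio
se_ln_ror = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(ROR)
ci_low = math.exp(math.log(ror) - 1.96 * se_ln_ror)
ci_high = math.exp(math.log(ror) + 1.96 * se_ln_ror)

print(f"ROR = {ror:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), PRR = {prr:.2f}")
```

A signal is typically flagged when such measures exceed predefined thresholds in a sufficient number of reports, which is why the abstract requires PTs to satisfy all four algorithms simultaneously.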
VoxelFormer: Bird's-Eye-View Feature Generation based on Dual-view Attention for Multi-view 3D Object Detection
In recent years, transformer-based detectors have demonstrated remarkable
performance in 2D visual perception tasks. However, their performance in
multi-view 3D object detection remains inferior to that of state-of-the-art
(SOTA) convolutional neural network-based detectors. In this work, we investigate
this issue from the perspective of bird's-eye-view (BEV) feature generation.
Specifically, we examine the BEV feature generation method employed by the
transformer-based SOTA, BEVFormer, and identify its two limitations: (i) it
only generates attention weights from BEV, which precludes the use of lidar
points for supervision, and (ii) it aggregates camera view features to the BEV
through deformable sampling, which only selects a small subset of features and
fails to exploit all information. To overcome these limitations, we propose a
novel BEV feature generation method, dual-view attention, which generates
attention weights from both the BEV and camera view. This method encodes all
camera features into the BEV feature. By combining dual-view attention with the
BEVFormer architecture, we build a new detector named VoxelFormer. Extensive
experiments are conducted on the nuScenes benchmark to verify the superiority
of dual-view attention and VoxelFormer. We observe that even when adopting only
3 encoders and 1 historical frame during training, VoxelFormer still outperforms
BEVFormer significantly. When trained in the same setting, VoxelFormer
surpasses BEVFormer by 4.9% NDS. Code is available at:
https://github.com/Lizhuoling/VoxelFormer-public.git
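As a rough intuition for dual-view attention, the toy sketch below derives attention weights from both BEV queries and camera-view features, so every camera feature contributes to the BEV feature; it is a deliberately simplified illustration, not the VoxelFormer implementation (see the repository above for that).

```python
# Highly simplified sketch of the dual-view attention idea: attention
# weights come from BOTH the BEV queries and the camera-view features,
# and softmax over ALL camera features means no feature is discarded
# by sparse deformable sampling. Illustrative toy, not VoxelFormer code.
import torch
import torch.nn.functional as F

n_bev, n_cam, dim = 100, 500, 64              # toy sizes
bev_queries = torch.randn(n_bev, dim)         # BEV query embeddings
cam_features = torch.randn(n_cam, dim)        # flattened camera-view features

logits = bev_queries @ cam_features.T / dim ** 0.5  # query-key interaction
weights = F.softmax(logits, dim=-1)                 # (n_bev, n_cam)
bev_features = weights @ cam_features               # aggregate ALL camera features

print(bev_features.shape)                     # torch.Size([100, 64])
```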
Linking Artificial Sweetener Intake With Kidney Function: Insights From NHANES 2003-2006 and Findings From Mendelian Randomization Research
BACKGROUND: The current investigation examines the association between artificial sweetener (AS) consumption and the likelihood of developing chronic kidney disease (CKD), along with its impact on kidney function.
METHODS: We utilized data from the National Health and Nutrition Examination Survey from 2003-2006 to conduct covariance analysis and weighted adjusted logistic regression, aiming to assess the association between artificial sweetener intake and CKD risk, as well as kidney function indicators. Subsequently, we employed Mendelian randomization methods to validate the causal relationship between artificial sweetener intake, CKD risk, and kidney function indicators. Inverse-variance weighting and the robust adjusted profile score were the primary instrumental-variable analytical methods employed.
RESULTS: A total of 20,470 participants were included in the study, with 1,257 participants ultimately included in the analysis. In all adjusted logistic regression models, no significant association was found between the intake of artificial sweeteners and CKD risk. Similarly, the summary odds ratio (OR) for CKD per unit change in genetically predicted artificial sweetener intake was 2.14 (95% CI: 0.83, 5.21, …).
CONCLUSION: Our study does not support a causal relationship between artificial sweetener intake and the risk of CKD. However, given the limitations and potential confounding factors, these findings need to be further validated in observational studies with larger sample sizes and in Mendelian randomization analyses.
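As background on the inverse-variance weighting the authors used, the sketch below combines per-SNP Wald ratios into a summary IVW estimate; the SNP effect sizes are fabricated for illustration and are not the study's summary statistics.

```python
# Minimal sketch of an inverse-variance-weighted (IVW) Mendelian
# randomization estimate from per-SNP summary statistics.
# beta_exp / beta_out: SNP effects on the exposure (AS intake) and the
# outcome (CKD). All numbers are fabricated for illustration only.
import numpy as np

beta_exp = np.array([0.12, 0.09, 0.15, 0.08])
beta_out = np.array([0.030, 0.018, 0.041, 0.012])
se_out = np.array([0.015, 0.012, 0.020, 0.010])

wald = beta_out / beta_exp              # per-SNP Wald ratio estimates
se_wald = se_out / np.abs(beta_exp)     # first-order SE of each ratio
w = 1.0 / se_wald**2                    # inverse-variance weights

beta_ivw = np.sum(w * wald) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))
print(f"IVW log-OR = {beta_ivw:.3f} (SE {se_ivw:.3f}), "
      f"OR = {np.exp(beta_ivw):.2f}")
```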
Long-term survival, toxicities, and the role of chrono-chemotherapy with different infusion rates in locally advanced nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy: a retrospective study with a 5-year follow-up
Purpose: This study aimed to evaluate the 5-year outcomes and late toxicity profile of chrono-chemotherapy with different infusion rates in patients with locally advanced nasopharyngeal carcinoma (NPC). Methods and materials: Our retrospective analysis included 70 patients with locally advanced NPC, stages III and IVB (according to the 2010 American Joint Committee on Cancer staging system). Patients were treated at Guizhou Cancer Hospital with two cycles of induction chemotherapy (IC) before concurrent chemoradiotherapy (CCRT). The IC consisted of a docetaxel, cisplatin (DDP), and fluorouracil regimen. Patients were divided into two groups during CCRT. Using a “MELODIE” multi-channel programmed pump, DDP (100 mg/m2) was administered over 12 hours, from 10:00 am to 10:00 pm, repeated every 3 weeks for 2-3 cycles. In the sinusoidal chrono-modulated infusion group (Arm A, n=35), DDP was administered with its peak at 4:00 pm. The patients in Arm B received a constant-rate infusion. Both arms received radiotherapy with the same technique and dose fractionation. Long-term survival and disease progression were observed. Results: After a median follow-up of 82.8 months, the 5-year progression-free survival rate was 81.3% in Arm A and 79.6% in Arm B (P = 0.85). The 5-year overall survival rate was not significantly different between Arm A and Arm B (79.6% vs 85.3%, P = 0.79). The 5-year distant metastasis-free survival rate was 83.6% in Arm A and 84.6% in Arm B (P = 0.75). The 5-year local recurrence-free survival rate was 88.2% in Arm A and 85.3% in Arm B (P = 0.16). There were no grade 3-4 late toxicities in either group. Both groups had grade 1-2 late toxicities, of which dry mouth was the most common, followed by hearing loss and difficulty in swallowing. There was no statistically significant difference between Arm A and Arm B in terms of side effects. Conclusion: Long-term analysis confirmed that, in CCRT, cisplatin administration with a sinusoidal chrono-modulated infusion was not superior to a constant-rate infusion in terms of long-term toxicity and prognosis.
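To make the chrono-modulated schedule concrete, the sketch below constructs one plausible sinusoidal infusion-rate profile over the 10:00 am-10:00 pm window, peaking at 4:00 pm and scaled to deliver the full 100 mg/m2 dose; the exact waveform programmed into the MELODIE pump is not specified in the abstract, so this shape is an assumption.

```python
# One plausible sinusoidal chrono-modulated infusion profile: a half-sine
# over the 12-hour 10:00 am-10:00 pm window, peaking at 4:00 pm, scaled so
# the area under the curve equals the full dose. The true pump waveform is
# not given in the abstract; this shape is an assumption.
import numpy as np

dose = 100.0                                  # mg/m2 of cisplatin (DDP)
t = np.linspace(10.0, 22.0, 721)              # clock time in hours
dt = t[1] - t[0]
profile = np.sin(np.pi * (t - 10.0) / 12.0)   # 0 at 10:00/22:00, max at 16:00
rate = dose * profile / (profile.sum() * dt)  # mg/m2 per hour, AUC = dose

print(f"peak rate {rate.max():.1f} mg/m2/h at t = {t[rate.argmax()]:.1f} h")
print(f"total delivered: {rate.sum() * dt:.1f} mg/m2")
```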
Postoperative myopic shift and visual acuity rehabilitation in patients with bilateral congenital cataracts
Background: This study aimed to explore the postoperative myopic shift and its relationship to visual acuity rehabilitation in patients with bilateral congenital cataracts (CCs). Methods: Bilateral CC patients who underwent cataract extraction and primary intraocular lens implantation before 6 years of age were included and divided into five groups according to surgical age (<2, 2–3, 3–4, 4–5, and 5–6 years). The postoperative myopic shift rates, spherical equivalents (SEs), and best corrected visual acuity (BCVA) were measured and analyzed. Results: A total of 1,137 refractive measurements from 234 patients were included, with a mean follow-up period of 34 months. The postoperative mean SEs at each follow-up in the five groups were fitted linearly with a mean R2 of 0.93 ± 0.03, showing a downtrend of SE with age (linear regression). Among patients with a follow-up of 4 years, the mean postoperative myopic shift rates were 0.84, 0.81, 0.68, 0.24, and 0.28 diopters per year (D/y) in the five age groups (from youngest to oldest), respectively. For patients with a surgical age of <2 years, the BCVA at the 4-year visit was 0.26 (LogMAR) and the mean postoperative myopic shift rate was 0.84 D/y. For patients with a surgical age of 2–6 years, a poorer BCVA at the 4-year visit was found in those with higher postoperative myopic shift rates (r = 0.974, p = 0.026, Pearson’s correlation test). Conclusion: Performing cataract surgery before 2 years of age, and decreasing the postoperative myopic shift rate in those with a surgical age of 2–6 years, may benefit visual acuity rehabilitation.
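The reported myopic shift rate is, in effect, the negated slope of a linear fit of spherical equivalent against age; the sketch below shows such a fit on invented follow-up measurements.

```python
# Minimal sketch: estimating a postoperative myopic shift rate (D/y) as
# the slope of a linear fit of spherical equivalent (SE) against age,
# mirroring the linear fits described in the abstract. The measurements
# below are invented for illustration.
import numpy as np

age_years = np.array([2.1, 2.6, 3.2, 3.9, 4.7, 5.5])
se_diopters = np.array([2.4, 1.9, 1.5, 0.9, 0.3, -0.4])

slope, intercept = np.polyfit(age_years, se_diopters, 1)
pred = slope * age_years + intercept
ss_res = np.sum((se_diopters - pred) ** 2)
ss_tot = np.sum((se_diopters - se_diopters.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"myopic shift rate = {-slope:.2f} D/y, R2 = {r2:.2f}")
```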
Delving into the Pre-training Paradigm of Monocular 3D Object Detection
The labels of monocular 3D object detection (M3OD) are expensive to obtain.
Meanwhile, abundant unlabeled data usually exist in practical
applications, and pre-training is an efficient way of exploiting the knowledge
in unlabeled data. However, the pre-training paradigm for M3OD is hardly
studied. We aim to bridge this gap in this work. To this end, we first draw two
observations: (1) The guideline of devising pre-training tasks is imitating the
representation of the target task. (2) Combining depth estimation and 2D object
detection is a promising M3OD pre-training baseline. Afterwards, following the
guideline, we propose several strategies to further improve this baseline,
which mainly include target guided semi-dense depth estimation, keypoint-aware
2D object detection, and class-level loss adjustment. Combining all the
developed techniques, the obtained pre-training framework produces pre-trained
backbones that improve M3OD performance significantly on both the KITTI-3D and
nuScenes benchmarks. For example, by applying a DLA34 backbone to a naive
center-based M3OD detector, the moderate score of Car on the
KITTI-3D testing set is boosted by 18.71% and the NDS score on the nuScenes
validation set is relatively improved by 40.41%.
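As a schematic of the pre-training baseline described above, the sketch below combines a dense depth-estimation loss with a 2D-detection-style loss on a shared backbone; the toy network, targets, and loss weights are placeholders, not the authors' implementation.

```python
# Schematic sketch of a joint depth-estimation + 2D-detection
# pre-training objective of the kind the abstract describes. The
# backbone, heads, targets, and loss weights are all placeholders.
import torch
import torch.nn as nn

backbone = nn.Conv2d(3, 64, 3, padding=1)       # stand-in for DLA34
depth_head = nn.Conv2d(64, 1, 1)                # dense depth prediction
det_head = nn.Conv2d(64, 4, 1)                  # toy 2D box regression map

images = torch.randn(2, 3, 128, 128)
depth_gt = torch.rand(2, 1, 128, 128)           # stand-in (semi-)dense depth
box_gt = torch.rand(2, 4, 128, 128)             # stand-in 2D box targets

feats = torch.relu(backbone(images))
loss_depth = nn.functional.l1_loss(depth_head(feats), depth_gt)
loss_det = nn.functional.l1_loss(det_head(feats), box_gt)
loss = loss_depth + 0.5 * loss_det              # weight is a placeholder
loss.backward()
print(float(loss))
```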
Metabolomics of Hydrazine-Induced Hepatotoxicity in Rats for Discovering Potential Biomarkers
Metabolic pathway disturbances associated with drug-induced liver injury remain unsatisfactorily characterized. Diagnostic biomarkers for hepatotoxicity have been used to minimize drug-induced liver injury and to increase clinical safety. A metabolomics strategy using rapid-resolution liquid chromatography/tandem mass spectrometry (RRLC-MS/MS) analyses and multivariate statistics was implemented to identify potential biomarkers for hydrazine-induced hepatotoxicity. The global serum and urine metabolomes of 30 hydrazine-treated rats at 24 or 48 h postdosing and of 24 healthy rats were characterized by this metabolomics approach. Multivariate statistical data analyses and receiver operating characteristic (ROC) curves were used to identify the most significantly altered metabolites. The 16 most significant potential biomarkers were found to be closely related to hydrazine-induced liver injury. The combination of these biomarkers had an area under the curve (AUC) > 0.85, with 100% specificity and sensitivity. This high-quality classification panel included amino acids and their derivatives, glutathione metabolites, vitamins, fatty acids, intermediates of pyrimidine metabolism, and lipids. Additionally, metabolic pathway analyses confirmed that phenylalanine, tyrosine, and tryptophan biosynthesis, as well as tyrosine metabolism, were closely associated with hydrazine-induced liver injury in rats. These discriminating metabolites might be useful for understanding the pathogenesis of liver injury and offer good prospects for the clinical diagnosis of drug-induced liver injury.
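To illustrate how a multi-metabolite panel is scored by ROC analysis, the sketch below combines candidate markers with a logistic regression and reports the panel AUC; the data are synthetic stand-ins, not the study's serum or urine metabolites.

```python
# Minimal sketch of ROC/AUC evaluation for a multi-metabolite biomarker
# panel: combine candidate markers with logistic regression and score
# the combination by AUC. Data are synthetic stand-ins only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_markers = 54, 16                        # e.g. 54 rats, 16 candidate markers
y = np.array([1] * 30 + [0] * 24)            # 30 treated vs 24 healthy
X = rng.normal(0, 1, (n, n_markers)) + 0.8 * y[:, None]  # shifted in treated

clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"panel AUC = {auc:.2f}")
```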
The Distribution of Emergency Logistics Centers under the COVID-19 Lockdown: The Case of Yangtze River Delta Area
The regular lockdown policy adopted to control the COVID-19 pandemic has caused logistics disruptions in some areas, greatly affecting residents' living standards and enterprises' production. Given that the construction of emergency logistics centers is an effective solution, this paper takes the Yangtze River Delta Area (YRDA) of China as an example and discusses the site selection and material distribution of emergency logistics centers in the region via a two-stage model. The first stage is the selection of candidate emergency logistics centers in the YRDA. A comprehensive evaluation index system is built, with 4 primary and 15 secondary indexes, to evaluate the logistics infrastructure capacity of the 41 cities in the YRDA. Through a principal component analysis, 12 cities are then selected as candidate construction sites for emergency logistics centers. In the second stage, a bi-objective site selection model with uncertain demand is established and solved via the NSGA-II algorithm. According to the time sensitivity of emergency logistics, six cities are selected from the optimal solution set, namely Hefei, Hangzhou, Xuzhou, Wenzhou, Changzhou, and Shanghai, ensuring that all 41 cities are within their service scope.
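As a toy view of the bi-objective trade-off in the second stage, the sketch below filters candidate site-selection plans to the Pareto-optimal set under two minimized objectives, such as construction cost and worst-case response time; the candidate values are invented, and a real NSGA-II run would evolve such fronts over a population of plans.

```python
# Toy illustration of the bi-objective trade-off behind stage two:
# keep only Pareto-optimal candidate plans under two minimized
# objectives, e.g. (construction cost, worst-case response time).
# Candidate values are invented for illustration.
import numpy as np

# rows: candidate plans, columns: (cost, max response time)
plans = np.array([
    [10.0, 8.0],
    [12.0, 5.0],
    [9.0, 12.0],
    [15.0, 5.5],   # dominated by [12.0, 5.0]
    [11.0, 7.0],
])

def is_dominated(p, others):
    """True if some other plan is <= p in both objectives and < in one."""
    return any(np.all(q <= p) and np.any(q < p) for q in others)

pareto = [p for i, p in enumerate(plans)
          if not is_dominated(p, np.delete(plans, i, axis=0))]
print(np.array(pareto))
```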