Wideband Anti-Jamming Based on Free Space Optical Communication and Photonic Signal Processing
We propose and demonstrate an anti-jamming system to defend against wideband jamming attacks. Free space optical communication is deployed to provide a reference for jamming cancellation. The mixed signal is processed and separated with a photonic signal processing method to achieve large bandwidth. As an analog signal processing method, the cancellation system introduces zero latency. The radio frequency signals are modulated onto optical carriers to achieve a wideband and uniform frequency response. With wide bandwidth and zero latency, the system meets the key requirements of high-speed, real-time communications in transportation systems.
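The cancellation principle can be sketched numerically. This digital simulation is only an illustration: the signal parameters are hypothetical, the assumption that the free-space optical link delivers a clean copy of the jamming waveform is mine, and the actual system performs the subtraction in the analog optical domain.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1e-6, 1e-9)              # 1 microsecond at 1 GS/s (hypothetical)
signal = np.sin(2 * np.pi * 100e6 * t)    # desired 100 MHz tone (hypothetical)
jam = rng.normal(size=t.size)             # wideband jamming waveform
mixed = signal + jam                      # received mixture at the antenna
reference = jam                           # assumed jamming reference via the FSO link
recovered = mixed - reference             # subtraction-based cancellation

print(np.max(np.abs(recovered - signal)))  # residual is at floating-point level
```

In the photonic implementation this subtraction happens on optical carriers, which is what gives the scheme its bandwidth and latency advantages over digital processing.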
Nondegeneracy of positive bubble solutions for generalized energy-critical Hartree equations
We show the nondegeneracy of positive bubble solutions for the generalized
energy-critical Hartree equation (NLH)
\begin{equation*}
-\Delta u(x) - {\bm\alpha}(N,\lambda) \left( \int_{\mathbb{R}^N} \frac{u^{p}(y)}{|x-y|^{\lambda}}\,\mathrm{d}y \right) u^{p-1}(x) = 0, \quad x\in \mathbb{R}^N,
\end{equation*}
where $u$ is a real-valued function and
{\bm\alpha}(N,\lambda) is a constant. This generalizes the results for the
whole range of parameters in \cite{DY2019dcds, GWY2020na, LTX2021, MWX:Hartree}
and confirms an open nondegeneracy problem in \cite{GMYZ2022cvpde}.
Firstly, by the stereographic projection and the sharp Hardy-Littlewood-Sobolev
inequality on the sphere in \cite{FL2012}, we give an alternative proof
of the existence of the extremizer of the sharp Hardy-Littlewood-Sobolev inequality
without use of the rearrangement inequalities in
\cite{lieb2001analysis}, which is related to
the existence of positive bubble solutions of (NLH). Secondly, by making use
of the Green function, we obtain an integral form of the
corresponding linearized equation around positive bubble solutions under a
suitable decay condition, and its equivalent integral form on the sphere
via the stereographic projection. Lastly, together with the key spherical
harmonic decomposition and the Funk-Hecke formula for spherical harmonic
functions in \cite{AH2012, DX2013book, SteinW:Fourier anal}, we obtain the
nondegeneracy of positive bubble solutions for the generalized energy-critical
Hartree equation (NLH), which is inspired by Frank and Lieb
\cite{FL2012am,FL2012}.
Comment: 26 pages. Any comment is welcome
Enhancing Mixed Traffic Flow Safety Via Connected and Autonomous Vehicle Trajectory Planning with a Reinforcement Learning Approach
The longitudinal trajectory planning of connected and autonomous vehicles (CAVs) has been widely studied in the literature to reduce travel time or fuel consumption. The safety impact of CAV trajectory planning on mixed traffic flow with both CAVs and human-driven vehicles (HDVs), however, is not yet well understood. This study presents a reinforcement learning modeling approach, named the Monte Carlo tree search-based autonomous vehicle safety algorithm, or MCTS-AVS, to optimize the safety of mixed traffic flow on a one-lane roadway with signalized intersection control. A crash potential index (CPI) is defined to quantitatively measure the safety performance of the mixed traffic flow. The CAV trajectory planning problem is first formulated as an optimization model; then, a solution procedure based on reinforcement learning is proposed. A tree-expansion determination module and a rollout termination module are developed to identify and reduce unnecessary tree expansion, so as to train the model more efficiently toward the desired direction. The case study results showed that the proposed algorithm reduced the CPI by 76.56% compared with a benchmark model without any intelligence, and by 12.08% compared with another benchmark model that the team developed earlier. These results demonstrate the satisfactory performance of the proposed algorithm in enhancing the safety of mixed traffic flow.
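The search procedure can be sketched with a generic UCT-style Monte Carlo tree search on a toy car-following state. Everything below — the dynamics, the action set, the crash-potential proxy, and the horizon — is a hypothetical placeholder, not the paper's MCTS-AVS formulation or its expansion/termination modules.

```python
import math
import random

ACTIONS = [-1.0, 0.0, 1.0]   # hypothetical acceleration choices (m/s^2)
HORIZON = 5                  # hypothetical planning horizon (steps)

def step(state, a):
    """Toy car-following dynamics: state = (gap to lead vehicle, closing speed)."""
    gap, dv = state
    dv += a
    return (gap + dv, dv)

def crash_potential(state):
    """Toy stand-in for the crash potential index (CPI): small gaps are risky."""
    gap, _ = state
    return max(0.0, 10.0 - gap)

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}           # action -> child Node
        self.visits, self.value = 0, 0.0

def rollout(state, depth):
    """Random rollout to the horizon, accumulating negative crash potential."""
    total = 0.0
    while depth < HORIZON:
        state = step(state, random.choice(ACTIONS))
        total -= crash_potential(state)
        depth += 1
    return total

def mcts(root_state, iters=2000):
    root = Node(root_state, 0)
    for _ in range(iters):
        node, path, reward = root, [root], 0.0
        while node.depth < HORIZON:
            if len(node.children) < len(ACTIONS):
                # expansion: try the next untried action, then roll out
                a = ACTIONS[len(node.children)]
                child = Node(step(node.state, a), node.depth + 1)
                node.children[a] = child
                reward -= crash_potential(child.state)
                reward += rollout(child.state, child.depth)
                path.append(child)
                break
            # selection: UCB1 over fully expanded children
            parent = node
            node = max(node.children.values(),
                       key=lambda c: c.value / c.visits
                       + 1.4 * math.sqrt(math.log(parent.visits) / c.visits))
            reward -= crash_potential(node.state)
            path.append(node)
        for n in path:               # backpropagation
            n.visits += 1
            n.value += reward
    return max(root.children, key=lambda a: root.children[a].visits)

random.seed(1)
print(mcts((5.0, 0.0)))   # most-visited first action from a 5 m gap
```

The paper's contribution sits on top of this skeleton: its tree-expansion determination and rollout termination modules decide which of these expansions and rollouts are worth performing at all.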
CoRide: Joint Order Dispatching and Fleet Management for Multi-Scale Ride-Hailing Platforms
How to optimally dispatch orders to vehicles and how to tradeoff between
immediate and future returns are fundamental questions for a typical
ride-hailing platform. We model ride-hailing as a large-scale parallel ranking
problem and study the joint decision-making task of order dispatching and fleet
management in online ride-hailing platforms. This task brings unique challenges
in the following four aspects. First, to facilitate a huge number of vehicles
to act and learn efficiently and robustly, we treat each region cell as an
agent and build a multi-agent reinforcement learning framework. Second, to
coordinate the agents from different regions to achieve long-term benefits, we
leverage the geographical hierarchy of the region grids to perform hierarchical
reinforcement learning. Third, to deal with the heterogeneous and variant
action space for joint order dispatching and fleet management, we design the
action as the ranking weight vector to rank and select the specific order or
the fleet management destination in a unified formulation. Fourth, to achieve
the multi-scale ride-hailing platform, we conduct the decision-making process
in a hierarchical way where a multi-head attention mechanism is utilized to
incorporate the impacts of neighbor agents and capture the key agent in each
scale. The whole framework is named CoRide. Extensive experiments
based on real-world data from multiple cities, as well as analytic synthetic data,
demonstrate that CoRide provides superior performance in terms of platform
revenue and user experience in the task of city-wide hybrid order dispatching
and fleet management over strong baselines.
Comment: CIKM 2019
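The ranking-weight action in the third point can be sketched as follows. The candidate feature columns and weight values are hypothetical; the point is only that one weight vector scores both orders and repositioning destinations in a unified way.

```python
import numpy as np

# Hypothetical candidates available to one region-cell agent: rows are
# candidate orders or fleet-management (repositioning) destinations,
# columns are shared features (e.g. price, pickup distance, destination value).
candidates = np.array([
    [0.9, 0.2, 0.5],   # order A
    [0.4, 0.8, 0.3],   # order B
    [0.1, 0.1, 0.9],   # reposition to a neighbor grid
])

# The policy's action is a ranking weight vector over the feature columns,
# so order dispatching and fleet management share one action space.
weights = np.array([0.5, 0.1, 0.4])

scores = candidates @ weights          # one score per candidate
best = int(np.argmax(scores))          # candidate selected this step
print(scores, best)
```

Because the action is the weight vector rather than a direct choice, the same policy handles a heterogeneous and variable-size candidate set.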
Polyethylene glycol combined with lactulose has better efficacy than polyethylene glycol alone in bowel preparation before colonoscopy: A meta-analysis
Background: The accuracy of diagnosis and the safety of treatment in colonoscopy depend largely on the quality of bowel cleansing. This study aimed to compare the efficacy and adverse reactions of polyethylene glycol (PEG) combined with lactulose with those of PEG alone in bowel preparation before colonoscopy.
Methods: The authors searched a number of databases including EMBASE, MEDLINE, Cochrane Library, and China Academic Journals Full-text Database. The authors screened according to literature inclusion and exclusion criteria, assessed the quality of the included literature, and extracted the data. The meta-analysis of included literature used RevMan 5.3 and Stata 14.0 software.
Results: A total of 18 studies, including 2274 patients, were enrolled. The meta-analysis showed that PEG combined with lactulose had a better efficacy (OR = 3.87, 95% CI 3.07‒4.87, p = 0.000, and I2 = 36.2% in the efficiency group; WMD = 0.86, 95% CI 0.69‒1.03, p = 0.032 and I2 = 0% in the BBPS score group) in bowel preparation for patients with or without constipation. Moreover, PEG combined with lactulose had fewer adverse reactions, including abdominal pain (OR = 1.42, 95% CI 0.94‒2.14, p = 0.094), nausea (OR = 1.60, 95% CI 1.13‒2.28, p = 0.009) and vomiting (OR = 1.77, 95% CI 1.14‒2.74, p = 0.011), than PEG alone. No significant reduction in the incidence of abdominal distention was observed.
Conclusion: PEG combined with lactulose may be a better choice for bowel preparation before colonoscopy compared with PEG alone.
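The pooling step behind such OR estimates can be illustrated with a minimal inverse-variance fixed-effect sketch. The per-study numbers below are hypothetical, not the trials included in this meta-analysis, and real software such as RevMan or Stata adds heterogeneity handling on top of this.

```python
import math

# Hypothetical per-study (OR, CI lower, CI upper) triples for illustration only.
studies = [(3.2, 1.8, 5.7), (4.5, 2.6, 7.8), (3.9, 2.0, 7.6)]

num = den = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from 95% CI width
    w = 1.0 / se ** 2                                # inverse-variance weight
    num += w * log_or
    den += w

pooled = math.exp(num / den)                         # pooled odds ratio
se_pooled = math.sqrt(1.0 / den)
ci = (math.exp(num / den - 1.96 * se_pooled),
      math.exp(num / den + 1.96 * se_pooled))
print(round(pooled, 2), [round(x, 2) for x in ci])
```

Pooling on the log-OR scale keeps the estimate symmetric, and the combined CI is narrower than any single study's because the weights add.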
Study on the limit of moisture content of smouldering humus during sub-surface fires in the boreal forests of China
A sub-surface forest fire is a kind of fire that spreads slowly, with no flames and lower temperatures, and threatens the ecosystem and human life. The moisture content of humus is considered an important factor in determining whether a fire occurs and is sustained. Humus from Larix gmelinii stands in the Daxing’an Mountains was selected for the experiment, the limiting moisture content for sub-surface forest fires was determined in an experiment simulating smoldering, and a prediction model of the probability of sub-surface forest fire occurrence was established. The results are of great significance for the prevention, monitoring, and fighting of sub-surface forest fires in the boreal forest. The results showed that when the moisture content of the upper humus layer was low, the smoldering process could be self-sustaining at 20% moisture content. At a depth of 18 cm, this limit increased to 30% moisture content, and this was the critical depth for sub-surface fires. A moisture content of 40% was the burning limit, at which smoldering lasted only a short duration before extinguishing. When the moisture content of the humus was 20%, the smoldering temperature was higher and the rate of spread faster, while smoldering was maintained for longer periods at 30% moisture content. The regression prediction model of the highest temperature and vertical rate of spread in a column of humus was correlated with moisture content and depth, and the model significance was good at p < 0.01. Based on moisture content and depth, the occurrence probability prediction model of sub-surface fires has a good correlation (R² = 0.93) and high prediction accuracy (AUC = 0.995). The effect of moisture content (OR = 4.008) on the occurrence probability of sub-surface fires is higher than that of depth (OR = 2.948). The results indicate that it is necessary to prevent and monitor the occurrence of sub-surface fires when the humus moisture content is less than 40%.
To reduce the risk of sub-surface fires, the monitoring time of the fire site should be extended after the fire is extinguished, because sub-surface fires burn slowly. Increasing the moisture content of the humus is an important way to reduce the probability of sub-surface fires and restrain their spread.
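An occurrence-probability model of the logistic form used in the study can be sketched as follows. The coefficients here are invented placeholders chosen only so that probability falls with moisture and rises with depth; they are not the fitted values, ORs, or AUC reported above.

```python
import math

def fire_probability(moisture_pct, depth_cm,
                     b0=4.0, b_moisture=-0.15, b_depth=0.10):
    """Logistic occurrence-probability sketch: P = 1 / (1 + exp(-z)).
    All coefficients are hypothetical, not the study's fitted estimates."""
    z = b0 + b_moisture * moisture_pct + b_depth * depth_cm
    return 1.0 / (1.0 + math.exp(-z))

# Probability should fall as humus moisture rises toward the 40% burning limit.
for m in (20, 30, 40):
    print(m, round(fire_probability(m, depth_cm=9), 3))
```

In a fitted model of this shape, exponentiating a coefficient gives the odds ratio per unit change, which is how figures like OR = 4.008 for moisture arise.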
RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot
A key challenge in robotic manipulation in open domains is how to acquire
diverse and generalizable skills for robots. Recent research in one-shot
imitation learning has shown promise in transferring trained policies to new
tasks based on demonstrations. This feature is attractive for enabling robots
to acquire new skills and improving task and motion planning. However, due to
limitations in the training dataset, the current focus of the community has
mainly been on simple cases, such as push or pick-place tasks, relying solely
on visual guidance. In reality, there are many complex skills, some of which
may even require both visual and tactile perception to solve. This paper aims
to unlock the potential for an agent to generalize to hundreds of real-world
skills with multi-modal perception. To achieve this, we have collected a
dataset comprising over 110,000 contact-rich robot manipulation sequences
across diverse skills, contexts, robots, and camera viewpoints, all collected
in the real world. Each sequence in the dataset includes visual, force, audio,
and action information. Moreover, we also provide a corresponding human
demonstration video and a language description for each robot sequence. We have
invested significant efforts in calibrating all the sensors and ensuring a
high-quality dataset. The dataset is made publicly available at rh20t.github.io.
Comment: RSS 2023 workshop on LTAMP. The project page is at rh20t.github.io
Anti‑inflammatory activities of Gardenia jasminoides extracts in retinal pigment epithelial cells and zebrafish embryos
Modeling a Production Function to Evaluate the Effect of Medical Staffing on Antimicrobial Stewardship Performance in China, 2009–2016: Static and Dynamic Panel Data Analyses
Background: Antimicrobial resistance (AMR) is an international problem. The emergence and spread of AMR are strongly associated with the overuse or inappropriate use of antimicrobials. Antimicrobial stewardship ensures the appropriate use of antimicrobials and is an effective approach to controlling AMR. This study aims to understand the relationship between medical staffing and antimicrobial stewardship performance in China.
Methods: A provincial-level panel dataset from 2009 to 2016 is used. A macro production function is used to quantify the relationship. The output, antimicrobial stewardship performance, is measured by changes in methicillin resistance rates of Staphylococcus aureus (S. aureus) and coagulase-negative staphylococci (CoNS). The labor input is measured by the numbers of infectious diseases physicians, pharmacists, clinical microbiologists, and nurses in hospitals per 100,000 population, whereas the capital input is represented by the number of hospital beds per 100,000 population. Technology is captured by a time index. Both static and dynamic panel data approaches are employed.
Results: An increasing number of clinical microbiologists is a significant predictor of lower resistance of CoNS according to the dynamic models (Coef. = −0.191, −0.351; p = 0.070, 0.004, respectively). However, a larger number of nurses is significantly associated with higher resistance of S. aureus (Coef. = 0.648; p = 0.044). The numbers of the other two groups of medical professionals exhibit no significant associations with stewardship performance.
Conclusions: The study demonstrates the crucial role of clinical microbiologists in antimicrobial stewardship. The predicted increased risk of resistance with a higher number of nurses may be attributable to their lack of related knowledge and their unrecognized functions in antimicrobial stewardship.
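The estimation idea — a log-linear (Cobb-Douglas-style) production function fit by regression — can be sketched on synthetic data. The variables, coefficient values, and noise level below are illustrative only, not the study's panel dataset, its dynamic-panel estimator, or its estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic cross-section standing in for the panel (illustrative units):
# labor   ~ clinical microbiologists per 100,000 population
# capital ~ hospital beds per 100,000 population
labor = rng.uniform(1, 10, n)
capital = rng.uniform(100, 600, n)

# Generate an "output" from a known log-linear relation plus noise,
# then recover the elasticities by ordinary least squares.
true_b = np.array([0.5, -0.2, 0.1])   # intercept, labor and capital elasticities
X = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
y = X @ true_b + rng.normal(scale=0.05, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))
```

In the log-log form the slope coefficients read directly as elasticities, which is why production-function studies report them as the marginal contribution of each staffing input.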
Lumen contour segmentation in IVOCT based on N-type CNN
Automatic segmentation of the lumen contour plays an important role in medical imaging and diagnosis; it is the first step towards evaluating the morphology of the vessels under analysis and identifying possible atherosclerotic lesions. Moreover, quantitative information can only be obtained with segmentation, motivating novel methods that can be successfully applied to intravascular optical coherence tomography (IVOCT) images. This paper proposes a new end-to-end neural network (N-Net) for automatic lumen segmentation of IVOCT images, using a multi-scale-feature-based deep neural network. The architecture of the N-Net contains a multi-scale input layer, an N-type convolutional network layer, and a cross-entropy loss function. The multi-scale input layer in the proposed N-Net is designed to avoid the loss of information caused by pooling in the traditional U-Net and also enriches the detailed information in each layer. The N-type convolutional network is proposed as the framework of the whole deep architecture. Finally, the loss function guarantees the degree of fidelity between the output of the proposed method and the manually labeled output. To enlarge the training set, data augmentation is also introduced. We evaluated our method in terms of loss, accuracy, recall, Dice similarity coefficient, Jaccard similarity coefficient, and specificity. The experimental results presented in this paper demonstrate the superior performance of the proposed N-Net architecture, compared to some existing networks, in enhancing the precision of automatic lumen segmentation and increasing the detailed information at the edges of the vascular lumen.
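Two of the reported overlap metrics, the Dice and Jaccard coefficients, can be sketched for binary masks. The toy 4x4 masks below are hypothetical, not IVOCT data or the paper's outputs.

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def jaccard(pred, target):
    """Jaccard index (intersection over union) between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

# Toy lumen masks: a predicted region and a slightly larger ground truth.
pred   = np.array([[0,1,1,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]], dtype=bool)
target = np.array([[0,1,1,0],[0,1,1,1],[0,1,1,0],[0,0,0,0]], dtype=bool)

print(round(dice(pred, target), 3), round(jaccard(pred, target), 3))
```

Dice weights the intersection twice against the mask sizes, so it is always at least as large as Jaccard for the same pair of masks.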