280 research outputs found
Communicating innovation: An appreciative inquiry investigation into innovation in China
The thesis considers that the intersection of creativity, innovation management, and communication is under-researched in China. It seeks to make a contribution to this area through exploratory research in a range of companies in Wenzhou, a south-east city in Zhejiang province, China. By researching firms across different sectors, and by analysing the companies' experiences of innovation generation and implementation, this thesis offers findings in a range of areas including: the apprehension of successful innovation, innovation and top leadership, and the relationship between innovation and customer value creation. The main findings indicate several aspects of innovation in China. First, Chinese enterprises considered successful innovations to be those that bring profitable growth. Second, top leadership drives innovation and has the greatest influence on corporate innovation in Chinese enterprises. Third, although few companies in Wenzhou have created new products or new markets, an increasing number of customer-oriented innovations have occurred in recent years in Chinese enterprises. In addition, an investigation into Corporate Social Responsibility (CSR) conducted after the field research reveals that Chinese state-owned enterprises may be promoting similar ideas in relation to innovation and CSR. To sum up, this research project provides an insight into recent perceptions of innovation, innovation readiness, and innovation achievement by Chinese enterprises in Wenzhou.
Fixed-Time Gradient Flows for Solving Constrained Optimization: A Unified Approach
Accelerated methods for solving optimization problems have long been an
absorbing topic. Based on the fixed-time (FxT) stability of nonlinear dynamical
systems, we provide a unified approach for designing FxT gradient flows
(FxTGFs). First, a general class of nonlinear functions for designing FxTGFs is
provided. A unified method for designing first-order FxTGFs is shown under the
Polyak-Łojasiewicz inequality assumption, a weaker condition than strong
convexity. When there exist both bounded and vanishing disturbances in the
gradient flow, a specific class of nonsmooth robust FxTGFs with disturbance
rejection is presented. Under the strict convexity assumption, Newton-based
FxTGFs are given and further extended to solve time-varying optimization.
Besides, the proposed FxTGFs are further used for solving equation-constrained
optimization. Moreover, an FxT proximal gradient flow with a wide range of
parameters is provided for solving nonsmooth composite optimization. To show
the effectiveness of various FxTGFs, static regret analyses for several
typical FxTGFs are also provided in detail. Finally, the proposed FxTGFs are
applied to solve two network problems, i.e., the network consensus problem and
solving a system of linear equations, from the perspective of optimization. In
particular, by choosing component-wise sign-preserving functions, these
problems can be solved in a distributed way, which extends existing results.
The accelerated convergence and robustness of the proposed FxTGFs are validated
in several numerical examples stemming from practical applications.
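The first-order flow can be sketched numerically. Below is a minimal illustration, not the paper's exact design: it assumes the common FxT form dx/dt = -(c1·||g||^(p-2) + c2·||g||^(q-2))·g with p > 2 and 1 < q < 2, discretized by forward Euler on a strongly convex quadratic (which satisfies the Polyak-Łojasiewicz inequality); all parameter values are illustrative.

```python
import numpy as np

def fxt_gradient_flow(grad, x0, c1=1.0, c2=1.0, p=3.0, q=1.5,
                      dt=1e-3, steps=20000, tol=1e-5):
    """Euler discretization of a fixed-time gradient flow
    dx/dt = -(c1*||g||^(p-2) + c2*||g||^(q-2)) * g,  with p > 2, 1 < q < 2.
    The q-term keeps the speed bounded away from zero near the optimum,
    which is what yields finite (fixed-time) convergence in continuous time."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        n = np.linalg.norm(g)
        if n < tol:
            break
        x = x - dt * (c1 * n**(p - 2) + c2 * n**(q - 2)) * g
    return x

# Strongly convex quadratic f(x) = 0.5 * x'Ax, so grad f(x) = Ax.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_star = fxt_gradient_flow(lambda x: A @ x, x0=[5.0, -4.0])
```

Note that the explicit Euler step is only a sketch: near the optimum the exponent q - 2 < 0 inflates the step, so a practical discretization needs a small dt or an adaptive rule.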
FusionRCNN: LiDAR-Camera Fusion for Two-stage 3D Object Detection
3D object detection with multiple sensors is essential for an accurate and
reliable perception system in autonomous driving and robotics. Existing 3D
detectors significantly improve accuracy by adopting a two-stage paradigm that
relies solely on LiDAR point clouds for 3D proposal refinement. Though
impressive, the sparsity of point clouds, especially for points far away, makes
it difficult for the LiDAR-only refinement module to accurately recognize and
locate objects. To address this problem, we propose a novel multi-modality
two-stage approach named FusionRCNN, which effectively and efficiently fuses
point clouds and camera images in the Regions of Interest (RoI). FusionRCNN
adaptively integrates both sparse geometry information from LiDAR and dense
texture information from the camera in a unified attention mechanism.
Specifically, it first utilizes RoIPooling to obtain an image set with a
unified size and gets the point set by sampling raw points within proposals in
the RoI extraction step; it then leverages intra-modality self-attention to
enhance the domain-specific features, followed by a well-designed
cross-attention to fuse the information from the two modalities. FusionRCNN is
fundamentally plug-and-play and supports different one-stage methods with
almost no architectural changes. Extensive experiments on the KITTI and Waymo
benchmarks demonstrate that our method significantly boosts the performance of
popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline
by 6.14% mAP on Waymo and outperforms competing two-stage approaches. Code will
be released soon at https://github.com/xxlbigbrother/Fusion-RCNN. Comment: 7
pages, 3 figures
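The cross-attention fusion step can be pictured with a single-head, NumPy-only sketch: point features act as queries against RoI-pooled image tokens. The shapes, random weights, and residual connection below are illustrative assumptions, not the released model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(point_feats, image_feats, Wq, Wk, Wv):
    """Fuse per-proposal point features (queries) with RoI image
    features (keys/values) via scaled dot-product attention."""
    Q = point_feats @ Wq                 # (N_pts, d)
    K = image_feats @ Wk                 # (N_pix, d)
    V = image_feats @ Wv                 # (N_pix, d)
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # (N_pts, N_pix)
    return point_feats + attn @ V        # residual fusion

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 32))          # sampled points in one proposal
pix = rng.normal(size=(49, 32))          # e.g. 7x7 RoI-pooled image tokens
Wq, Wk, Wv = (rng.normal(size=(32, 32)) * 0.1 for _ in range(3))
fused = cross_attention(pts, pix, Wq, Wk, Wv)    # (64, 32)
```

In the full model these projections are learned and preceded by intra-modality self-attention; the sketch only shows why each 3D point can attend to every image token inside its proposal.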
Sample-adaptive Augmentation for Point Cloud Recognition Against Real-world Corruptions
Robust 3D perception under corruption has become an essential task in the
realm of 3D vision. Current data augmentation techniques usually perform random
transformations on all point cloud objects in an offline way and ignore the
structure of the samples, resulting in over- or under-enhancement. In this
work, we propose an alternative that makes sample-adaptive transformations
based on the structure of the sample to cope with potential corruption, via an
auto-augmentation framework named AdaptPoint. Specifically, we leverage an
imitator, consisting of a Deformation Controller and a Mask Controller,
respectively in charge of predicting deformation parameters and producing a
per-point mask based on the intrinsic structural information of the input
point cloud, and then conduct corruption simulations on top. A discriminator
is then utilized to prevent the generation of excessive corruption that
deviates from the original data distribution. In addition, a
perception-guidance feedback mechanism is incorporated to guide the generation
of samples with an appropriate difficulty level. Furthermore, to address the
paucity of real-world corrupted point clouds, we also introduce a new dataset,
ScanObjectNN-C, that exhibits greater similarity to actual data in real-world
environments, especially when contrasted with preceding CAD datasets.
Experiments show that our method achieves state-of-the-art results on multiple
corruption benchmarks, including ModelNet-C, our ScanObjectNN-C, and
ShapeNet-C. Comment: Accepted by ICCV2023; code: https://github.com/Roywangj/AdaptPoin
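A toy version of the sample-adaptive idea can make the deformation/mask split concrete. Here a hand-crafted heuristic stands in for the learned imitator (the real Deformation and Mask Controllers are networks trained with a discriminator and perception feedback); the scaling rule and occlusion mask are purely illustrative.

```python
import numpy as np

def adaptive_augment(points, rng, max_scale=0.2, drop_ratio=0.1):
    """Toy sample-adaptive augmentation: deformation parameters depend
    on the cloud's own per-axis spread, and the per-point mask drops a
    structure-dependent subset (a crude occlusion simulation)."""
    # Deformation: anisotropic scaling proportional to per-axis extent.
    extent = points.max(0) - points.min(0)
    scale = 1.0 + rng.uniform(-max_scale, max_scale, 3) * extent / extent.max()
    deformed = points * scale
    # Mask: drop the points farthest from the centroid.
    dist = np.linalg.norm(points - points.mean(0), axis=1)
    n_drop = int(len(points) * drop_ratio)
    keep = np.argsort(dist)[: len(points) - n_drop]
    return deformed[keep]

rng = np.random.default_rng(1)
cloud = rng.normal(size=(1024, 3))
aug = adaptive_augment(cloud, rng)      # (922, 3): 10% of points masked
```

The point of the contrast: a conventional offline pipeline would apply the same random transform regardless of `extent` or `dist`, which is exactly the over- or under-enhancement the abstract criticizes.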
Occupancy-MAE: Self-supervised Pre-training Large-scale LiDAR Point Clouds with Masked Occupancy Autoencoders
Current perception models in autonomous driving heavily rely on large-scale
labelled 3D data, which is both costly and time-consuming to annotate. This
work proposes a solution to reduce the dependence on labelled 3D training data
by leveraging pre-training on large-scale unlabeled outdoor LiDAR point clouds
using masked autoencoders (MAE). While existing masked point autoencoding
methods mainly focus on small-scale indoor point clouds or pillar-based
large-scale outdoor LiDAR data, our approach introduces a new self-supervised
masked occupancy pre-training method called Occupancy-MAE, specifically
designed for voxel-based large-scale outdoor LiDAR point clouds. Occupancy-MAE
takes advantage of the gradually sparse voxel occupancy structure of outdoor
LiDAR point clouds and incorporates a range-aware random masking strategy and a
pretext task of occupancy prediction. By randomly masking voxels based on their
distance to the LiDAR and predicting the masked occupancy structure of the
entire 3D surrounding scene, Occupancy-MAE encourages the extraction of
high-level semantic information to reconstruct the masked voxels using only a
small number of visible voxels. Extensive experiments demonstrate the
effectiveness of Occupancy-MAE across several downstream tasks. For 3D object
detection, Occupancy-MAE reduces the labelled data required for car detection
on the KITTI dataset by half and improves small object detection by
approximately 2% in AP on the Waymo dataset. For 3D semantic segmentation,
Occupancy-MAE outperforms training from scratch by around 2% in mIoU. For
multi-object tracking, Occupancy-MAE enhances training from scratch by
approximately 1% in terms of AMOTA and AMOTP. Code is publicly available at
https://github.com/chaytonmin/Occupancy-MAE. Comment: Accepted by TI
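The range-aware masking strategy can be sketched as a per-voxel masking probability that varies with distance from the sensor. This is one plausible reading, with illustrative ratios: nearby voxels (where LiDAR returns are dense) are hidden more aggressively than far ones, so enough sparse distant evidence survives for the occupancy-prediction pretext task.

```python
import numpy as np

def range_aware_mask(voxel_coords, voxel_size, base_ratio=0.7,
                     far_ratio=0.3, rng=None):
    """Range-aware random masking over occupied voxels: the masking
    probability interpolates from base_ratio (nearest) to far_ratio
    (farthest). Returns True where a voxel is hidden from the encoder."""
    rng = rng or np.random.default_rng()
    centers = (voxel_coords + 0.5) * voxel_size
    dist = np.linalg.norm(centers[:, :2], axis=1)        # BEV range
    t = (dist - dist.min()) / max(dist.max() - dist.min(), 1e-6)
    mask_prob = base_ratio + (far_ratio - base_ratio) * t
    return rng.random(len(voxel_coords)) < mask_prob

rng = np.random.default_rng(0)
coords = rng.integers(0, 100, size=(5000, 3))            # occupied voxel indices
masked = range_aware_mask(coords, voxel_size=0.2, rng=rng)
```

The encoder then sees only `coords[~masked]`, and the decoder is trained to predict occupancy for the whole grid, including the masked voxels.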
Occ-BEV: Multi-Camera Unified Pre-training via 3D Scene Reconstruction
Multi-camera 3D perception has emerged as a prominent research field in
autonomous driving, offering a viable and cost-effective alternative to
LiDAR-based solutions. However, existing multi-camera algorithms primarily rely
on monocular image pre-training, which overlooks the spatial and temporal
correlations among different camera views. To address this limitation, we
propose a novel multi-camera unified pre-training framework called Occ-BEV,
which involves initially reconstructing the 3D scene as the foundational stage
and subsequently fine-tuning the model on downstream tasks. Specifically, a 3D
decoder is designed to leverage Bird's Eye View (BEV) features from multi-view
images to predict 3D geometry occupancy, enabling the model to capture a more
comprehensive understanding of the 3D environment. One
significant advantage of Occ-BEV is that it can utilize a vast amount of
unlabeled image-LiDAR pairs for pre-training. The proposed multi-camera unified
pre-training framework demonstrates promising results in key tasks such as
multi-camera 3D object detection and semantic scene completion. When compared
to monocular pre-training methods on the nuScenes dataset, Occ-BEV demonstrates
a significant improvement of 2.0% in mAP and 2.0% in NDS for 3D object
detection, as well as a 0.8% increase in mIOU for semantic scene completion.
Code is publicly available at https://github.com/chaytonmin/Occ-BEV. Comment: 8 pages, 5 figures
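One way to picture the pre-training target: a binary 3D occupancy grid built from the paired (unlabeled) LiDAR sweep, which the image-based decoder learns to predict. A minimal sketch, with illustrative grid bounds and voxel size:

```python
import numpy as np

def occupancy_target(points, pc_range, voxel_size):
    """Build the binary occupancy grid used as the reconstruction
    target: 1 wherever at least one LiDAR point falls in a voxel."""
    lo = np.array(pc_range[:3], dtype=float)
    hi = np.array(pc_range[3:], dtype=float)
    shape = np.floor((hi - lo) / voxel_size).astype(int)
    inside = np.all((points >= lo) & (points < hi), axis=1)
    idx = ((points[inside] - lo) / voxel_size).astype(int)
    grid = np.zeros(shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Two points in range, one (x = 60 m) outside the grid.
pts = np.array([[1.0, 2.0, 0.5], [-3.0, 0.1, 1.9], [60.0, 0.0, 0.0]])
grid = occupancy_target(pts, pc_range=[-50, -50, -3, 50, 50, 3],
                        voxel_size=0.5)   # grid shape (200, 200, 12)
```

Because the target comes from raw LiDAR geometry rather than annotations, arbitrarily many image-LiDAR pairs can supervise this stage.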
Research on reconfigurable control for a hovering PVTOL aircraft
This paper presents a novel reconfigurable control method for the planar vertical take-off and landing (PVTOL) aircraft when actuator faults occur. Given the multivariable coupling within the position subsystem and the cascade connection between the position and attitude subsystems, an active disturbance rejection controller (ADRC) is used to counteract the adverse effects of actuator faults. The controller is cascaded and ensures that the input of the controlled system can be tracked accurately. The coordinate transformation method is used for model decoupling due to the severe coupling. In addition, a Taylor differentiator is designed to improve control precision, based on a detailed study of tracking differentiators. The stability and safety of the aircraft are much improved in the event of actuator faults. Finally, simulation results are given to show the effectiveness and performance of the developed method.
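The core of any ADRC loop is an extended state observer (ESO) that lumps model error, coupling, and actuator faults into one "total disturbance" state, which the control law then cancels. A minimal sketch, assuming a first-order plant and the standard bandwidth parameterization, not this paper's cascaded PVTOL design:

```python
import numpy as np

def eso_step(z, y, u, b0, beta1, beta2, dt):
    """One Euler step of a linear extended state observer for a plant
    y' = f + b0*u: z[0] tracks the output y, z[1] estimates the total
    disturbance f (unmodeled dynamics + faults)."""
    z1, z2 = z
    e = y - z1
    z1 += dt * (z2 + b0 * u + beta1 * e)
    z2 += dt * (beta2 * e)
    return np.array([z1, z2])

# Plant y' = d + b0*u with an unknown constant disturbance d = 2.0.
dt, b0, d = 1e-3, 1.0, 2.0
wo = 20.0                           # observer bandwidth (rad/s)
beta1, beta2 = 2 * wo, wo**2        # gains from bandwidth parameterization
y, u = 0.0, 0.0
z = np.array([0.0, 0.0])
for _ in range(5000):               # 5 s of simulation
    y += dt * (d + b0 * u)          # true plant
    u = -z[1] / b0                  # cancel the estimated disturbance
    z = eso_step(z, y, u, b0, beta1, beta2, dt)
# After convergence z[1] ≈ d, so the compensated plant behaves like y' ≈ 0.
```

An actuator fault simply changes the effective disturbance `d` online; the same observer re-estimates it, which is the sense in which ADRC gives reconfigurable, fault-tolerant behavior.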
Urinary paraquat concentration and white blood cell count as prognostic factors in paraquat poisoning
Purpose: To investigate the effect of white blood cell (WBC) count and urinary paraquat (PQ) levels on prognosis in patients exposed to PQ intoxication, using multivariate logistic regression analysis. Methods: A total of 104 subjects intoxicated with PQ between December 2015 and July 2016 were included in this retrospective study. They comprised patients who survived (n = 78) and patients who died (n = 26). Clinical features and prognostic parameters were analyzed in both groups. Multivariate logistic regression analysis was used to establish a prognostic correlation model based on results from single-factor variables. Results: Comparison of demographic and clinical attributes between the two groups, survivors (n = 78) and non-survivors (n = 26), revealed that survivors were younger (33.3 ± 9.9 years) than non-survivors (41.5 ± 12.9 years). In addition, on admission, it was found that survivors had ingested lower amounts of PQ (31.6 ± 13.8 ml) than non-survivors (67.88 ± 31.2 ml). There were significant differences between the two groups with respect to WBC, neutrophil, lymphocyte, lactate dehydrogenase (LDH), creatine kinase (CK), amylase, uric acid (UA), pH, partial pressure of oxygen (PaO2), base excess (BE), lactic acid, and D-dimer levels (p < 0.05). Conclusion: WBC count and urinary PQ concentration correlate strongly with prognosis in PQ poisoning. Keywords: Paraquat intoxication, Dithionite test, Multivariate logistic analysis, Prognosis, Predictor
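The modelling step in this abstract is ordinary multivariate logistic regression. A self-contained sketch fitted by gradient descent; the two predictors are labeled after the study's variables (WBC, urinary PQ) but the data below are synthetic stand-ins, not the patients' records:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Multivariate logistic regression: model P(outcome = 1) as
    sigmoid(w0 + w·x), fitted by batch gradient descent on the
    negative log-likelihood."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient of NLL
    return w

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 2))                     # [WBC, urinary PQ], standardized
true_w = np.array([-1.0, 1.5, 2.0])             # illustrative coefficients
p = 1.0 / (1.0 + np.exp(-(true_w[0] + X @ true_w[1:])))
y = (rng.random(n) < p).astype(float)           # 1 = died, 0 = survived
w_hat = fit_logistic(X, y)
```

The fitted coefficients' signs and magnitudes are what a prognostic model like the study's reports as odds ratios (exp of each coefficient per standardized unit).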
TiG-BEV: Multi-view BEV 3D Object Detection via Target Inner-Geometry Learning
To achieve accurate and low-cost 3D object detection, existing methods
propose to benefit camera-based multi-view detectors with spatial cues provided
by the LiDAR modality, e.g., dense depth supervision and bird's-eye-view (BEV)
feature distillation. However, they directly conduct point-to-point mimicking
from LiDAR to camera, which neglects the inner-geometry of foreground targets
and suffers from the modality gap between 2D and 3D features. In this paper,
we propose a scheme for learning Target Inner-Geometry from the LiDAR modality
into camera-based BEV detectors for both dense depth and BEV features, termed
TiG-BEV. First, we introduce an inner-depth supervision module to learn the
low-level relative depth relations between different foreground pixels. This
enables the camera-based detector to better understand the object-wise spatial
structures. Second, we design an inner-feature BEV distillation module to
imitate the high-level semantics of different keypoints within foreground
targets. To further alleviate the BEV feature gap between two modalities, we
adopt both inter-channel and inter-keypoint distillation for feature-similarity
modeling. With our target inner-geometry distillation, TiG-BEV can effectively
boost BEVDepth by +2.3% NDS and +2.4% mAP, along with BEVDet by +9.1% NDS and
+10.3% mAP on nuScenes val set. Code will be available at
https://github.com/ADLab3Ds/TiG-BEV. Comment: Code link: https://github.com/ADLab3Ds/TiG-BE
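The inner-depth idea can be made concrete with a toy loss: rather than penalizing absolute depth per pixel, penalize depth *relative to a reference foreground pixel*, so a globally biased but internally consistent prediction costs nothing. The L1 form and median-based reference choice here are illustrative assumptions:

```python
import numpy as np

def inner_depth_loss(pred_depth, gt_depth, fg_mask):
    """Toy inner-depth supervision: L1 loss on depths expressed
    relative to a reference foreground pixel, so only the object's
    internal depth structure is penalized."""
    pd, gd = pred_depth[fg_mask], gt_depth[fg_mask]
    # Reference: the foreground pixel closest to the median GT depth.
    ref = np.argmin(np.abs(gd - np.median(gd)))
    rel_pred = pd - pd[ref]
    rel_gt = gd - gd[ref]
    return np.mean(np.abs(rel_pred - rel_gt))

gt = np.array([[10.0, 10.5], [11.0, 30.0]])      # 30.0 is background
pred = gt + 0.3                                   # uniform depth bias
mask = np.array([[True, True], [True, False]])    # foreground pixels
loss = inner_depth_loss(pred, gt, mask)           # bias cancels out
```

A plain per-pixel L1 would charge 0.3 everywhere; the relative loss is (numerically) zero, which is the sense in which this supervision targets object-wise spatial structure rather than absolute depth.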
The effect of inhibiting glycinamide ribonucleotide formyl transferase on the development of neural tube in mice
mRNA gel picture. (TIF 878 kb)