251 research outputs found
Practical Submodule Capacitor Sizing for Modular Multilevel Converter Considering Grid Faults
Submodule (SM) capacitors are key elements in the modular multilevel converter (MMC), and their design influences the performance of the entire system. In practice, SM capacitor sizing must account for abnormal system operation (e.g., grid faults). To establish a clear design boundary for SM capacitors, this paper presents, for the first time, a practical capacitor sizing method that considers the grid-fault ride-through operation of the MMC, the impact of the MMC control system, and the aging mechanisms of capacitors. The SM capacitor rated voltage, capacitance, ESR, thermal resistance, and lifetime can then be determined to ensure reliable operation of the MMC during grid faults. The effectiveness of the proposed method has been verified through experimental tests on a down-scaled MMC system. Published version.
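The abstract does not reproduce the sizing equations, but a common textbook starting point relates SM capacitance to the allowed voltage-ripple ratio. The sketch below uses that standard energy-swing relation only; the paper's full method additionally covers grid faults, control impact, ESR, thermal resistance, and aging, and all numbers here are assumptions for illustration.

```python
# Hedged sketch: textbook SM-capacitance estimate from an allowed
# voltage-ripple ratio (NOT the paper's complete sizing method).

def sm_capacitance(delta_w, v_nom, eps):
    """Capacitance keeping the capacitor voltage within v_nom*(1 +/- eps)
    for a stored-energy swing delta_w (J):
    0.5*C*((1+eps)^2 - (1-eps)^2)*v_nom^2 = delta_w
      ->  C = delta_w / (2*eps*v_nom^2)
    """
    return delta_w / (2.0 * eps * v_nom**2)

# Illustrative (assumed) numbers: 80 J energy swing per SM,
# 1.6 kV nominal SM voltage, +/-10 % ripple.
C = sm_capacitance(delta_w=80.0, v_nom=1600.0, eps=0.10)
print(f"C = {C*1e3:.2f} mF")  # -> C = 0.16 mF
```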
Unsupervised Low Light Image Enhancement Using SNR-Aware Swin Transformer
Images captured under low-light conditions present unpleasant artifacts,
which degrade the performance of feature extraction for many downstream visual
tasks. Low-light image enhancement aims at improving brightness and contrast,
and further reducing the noise that corrupts visual quality. Recently, many
image restoration methods based on the Swin Transformer have been proposed and
achieve impressive performance. On the one hand, however, trivially employing
the Swin Transformer for low-light image enhancement exposes artifacts such as
over-exposure, brightness imbalance, and noise corruption. On the other hand,
it is impractical to capture paired low-light images and their corresponding
ground truth, i.e., well-exposed images of the same visual scene. In this
paper, we propose a dual-branch network based on the Swin Transformer, guided
by a signal-to-noise-ratio prior map that provides spatially varying
information for low-light image enhancement. Moreover, we leverage
unsupervised learning to construct an optimization objective based on the
Retinex model to guide the training of the proposed network. Experimental
results demonstrate that the proposed model is competitive with the baseline
models.
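The abstract does not give the exact definition of the SNR prior map. One common construction in SNR-aware enhancement treats a local mean as the signal estimate and the residual as noise; the sketch below implements that assumed variant (kernel size and epsilon are illustrative choices, not the paper's values).

```python
import numpy as np

def snr_prior_map(img, k=5, eps=1e-6):
    """Hedged sketch of a spatially varying SNR prior map: 'signal' is a
    local box-filtered mean, 'noise' is the absolute residual, so
    SNR ~= mu / |I - mu|.  One common construction; the paper's exact
    definition may differ."""
    img = img.astype(np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    # box filter via sliding windows over the padded image
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(-2, -1))          # local mean, same shape as img
    noise = np.abs(img - mu)              # residual as a noise estimate
    return mu / (noise + eps)

gray = np.random.rand(64, 64)  # stand-in for a low-light grayscale image
snr = snr_prior_map(gray)
print(snr.shape)  # (64, 64)
```

Dark, noisy regions get low SNR values, so a network conditioned on this map can rely more on long-range context there and more on local detail in well-exposed regions.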
FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer
Transformer, as an alternative to CNN, has been proven effective in many
modalities (e.g., texts and images). For 3D point cloud transformers, existing
efforts focus primarily on pushing their accuracy to the state-of-the-art
level. However, their latency lags behind sparse convolution-based models (3x
slower), hindering their usage in resource-constrained, latency-sensitive
applications (such as autonomous driving). This inefficiency comes from point
clouds' sparse and irregular nature, whereas transformers are designed for
dense, regular workloads. This paper presents FlatFormer to close this latency
gap by trading spatial proximity for better computational regularity. We first
flatten the point cloud with window-based sorting and partition points into
groups of equal sizes rather than windows of equal shapes. This effectively
avoids expensive structuring and padding overheads. We then apply
self-attention within groups to extract local features, alternate sorting axis
to gather features from different directions, and shift windows to exchange
features across groups. FlatFormer delivers state-of-the-art accuracy on Waymo
Open Dataset with 4.6x speedup over (transformer-based) SST and 1.4x speedup
over (sparse convolutional) CenterPoint. This is the first point cloud
transformer that achieves real-time performance on edge GPUs and is faster than
sparse convolutional methods while achieving on-par or even superior accuracy
on large-scale benchmarks. Code to reproduce our results will be made publicly
available. Comment: The first two authors contributed equally to this work.
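The core trade described above can be sketched in a few lines: sort points by a window-based key, then cut the sorted sequence into equal-size groups rather than equal-shape spatial windows, so every attention group is dense and needs no padding. The window size, group size, and sort key below are assumptions for illustration, not FlatFormer's actual code.

```python
import numpy as np

def equal_size_groups(points, window=2.0, group_size=4):
    """Hedged sketch of FlatFormer-style grouping: window-based sorting
    followed by equal-SIZE partitioning (trading spatial proximity for
    computational regularity)."""
    # quantize x/y into window indices, then sort by (win_x, win_y, x, y)
    wx = np.floor(points[:, 0] / window).astype(int)
    wy = np.floor(points[:, 1] / window).astype(int)
    order = np.lexsort((points[:, 1], points[:, 0], wy, wx))
    # cut the flattened sequence into groups of exactly group_size points
    n = (len(points) // group_size) * group_size  # drop the tail for simplicity
    return order[:n].reshape(-1, group_size)      # each row: one attention group

pts = np.random.rand(19, 2) * 10.0
groups = equal_size_groups(pts)
print(groups.shape)  # (4, 4): 4 groups of exactly 4 points each
```

Because every group has the same size, self-attention within groups runs as one regular batched operation; alternating the sort axis and shifting windows (as the abstract describes) then mixes information across groups.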
TorchSparse++: Efficient Training and Inference Framework for Sparse Convolution on GPUs
Sparse convolution plays a pivotal role in emerging workloads, including
point cloud processing in AR/VR, autonomous driving, and graph understanding in
recommendation systems. Since the computation pattern is sparse and irregular,
specialized high-performance kernels are required. Existing GPU libraries offer
two dataflow types for sparse convolution. The gather-GEMM-scatter dataflow is
easy to implement but not optimal in performance, while the dataflows with
overlapped computation and memory access (e.g., implicit GEMM) are highly
performant but have very high engineering costs. In this paper, we introduce
TorchSparse++, a new GPU library that achieves the best of both worlds. We
create a highly efficient Sparse Kernel Generator that generates performant
sparse convolution kernels at less than one-tenth of the engineering cost of
the current state-of-the-art system. On top of this, we design the Sparse
Autotuner, which extends the design space of existing sparse convolution
libraries and searches for the best dataflow configurations for training and
inference workloads. Consequently, TorchSparse++ achieves 2.9x, 3.3x, 2.2x and
1.7x measured end-to-end speedup on an NVIDIA A100 GPU over state-of-the-art
MinkowskiEngine, SpConv 1.2, TorchSparse and SpConv v2 in inference; and is
1.2-1.3x faster than SpConv v2 in mixed precision training across seven
representative autonomous driving benchmarks. It also seamlessly supports graph
convolutions, achieving 2.6-7.6x faster inference speed compared with
state-of-the-art graph deep learning libraries. Comment: MICRO 2023; Haotian Tang and Shang Yang contributed equally to this
project.
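The gather-GEMM-scatter dataflow mentioned above is simple enough to sketch: for each kernel offset, gather the input features selected by a precomputed rule map, run one dense GEMM against that offset's weights, and scatter-add the result into the outputs. The rule map here is made up for illustration; real libraries build it from voxel coordinates with a hash table.

```python
import numpy as np

def sparse_conv(feats_in, n_out, weights, rules):
    """Hedged sketch of gather-GEMM-scatter sparse convolution.
    feats_in: (N_in, C_in); weights: {offset: (C_in, C_out)};
    rules: {offset: (in_idx array, out_idx array)}."""
    c_out = next(iter(weights.values())).shape[1]
    feats_out = np.zeros((n_out, c_out))
    for off, (in_idx, out_idx) in rules.items():
        gathered = feats_in[in_idx]             # gather active inputs
        partial = gathered @ weights[off]       # one dense GEMM per offset
        np.add.at(feats_out, out_idx, partial)  # scatter-add into outputs
    return feats_out

feats = np.ones((3, 2))
w = {0: np.eye(2), 1: 2 * np.eye(2)}            # toy weights for two offsets
rules = {0: (np.array([0, 1]), np.array([0, 1])),
         1: (np.array([2]), np.array([0]))}
out = sparse_conv(feats, n_out=2, weights=w, rules=rules)
print(out)  # [[3. 3.], [1. 1.]]
```

This dataflow is easy to implement but, as the abstract notes, leaves performance on the table because the gather and scatter steps are separate memory passes; the overlapped dataflows (e.g., implicit GEMM) fuse them at much higher engineering cost.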
Classical simulation of Quantum Entanglement using Optical Transverse Modes in Multimode Waveguides
We discuss mode-entangled states based on the optical transverse modes of the
optical field propagating in multimode waveguides, which are classical analogs
of quantum entangled states. The analogs are discussed in detail, including
the violation of the Bell inequality and the correlation properties of the
group delays of optical pulses. Research on these analogs may be important,
for it not only provides useful insights into fundamental features of quantum
entanglement but also informs quantum computation and quantum communication.
Comment: RevTeX v4, 17 pages and 4 figures.
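The Bell-inequality violation mentioned above can be checked numerically with the CHSH form of the inequality and the standard cosine correlation E(a, b) = -cos(2(a - b)) produced by maximally entangled states (and, analogously, by the classical mode-entangled states discussed). Local hidden-variable models bound |S| <= 2; this correlation reaches 2*sqrt(2). The analyzer angles below are the standard optimal settings, not values quoted from this paper.

```python
import numpy as np

# CHSH check with the cos-type correlation of maximally entangled states.
def E(a, b):
    return -np.cos(2.0 * (a - b))

a, a2 = 0.0, np.pi / 4            # first party's analyzer settings (rad)
b, b2 = np.pi / 8, 3 * np.pi / 8  # second party's analyzer settings (rad)
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2) > 2: the CHSH bound is violated
```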
Single-cell RNA sequencing reveals cell subpopulations in the tumor microenvironment contributing to hepatocellular carcinoma
Background: Hepatocellular carcinoma (HCC) is among the deadliest cancers worldwide, and advanced HCC is difficult to treat. Identifying specific cell subpopulations in the tumor microenvironment and exploring interactions between the cells and their environment are crucial for understanding the development, prognosis, and treatment of tumors. Methods: In this study, we constructed a tumor ecological landscape of 14 patients with HCC from 43 tumor tissue samples and 14 adjacent control samples. We used bioinformatics analysis to reveal cell subpopulations with potentially specific functions in the tumor microenvironment and to explore the interactions between tumor cells and the tumor microenvironment. Results: Immune cell infiltration was evident in the tumor tissues, and BTG1+RGS1+ central memory T cells (Tcms) interact with tumor cells through the CCL5–SDC4/1 axis. HSPA1B may be associated with remodeling of the tumor ecological niche in HCC. Cancer-associated fibroblasts (CAFs) and tumor-associated macrophages (TAMs) were closely associated with tumor cells. APOC1+SPP1+ TAMs secrete SPP1, which binds to ITGF1 secreted by CAFs to remodel the tumor microenvironment. More interestingly, FAP+ CAFs interact with naïve T cells via the CXCL12–CXCR4 axis, which may lead to resistance to immune checkpoint inhibitor therapy. Conclusion: Our study suggests the presence of tumor cells with drug-resistant potential in the HCC microenvironment. Among non-tumor cells, high NDUFA4L2 expression in fibroblasts may promote tumor progression, while high HSPA1B expression in central memory T cells may exert anti-tumor effects. In addition, the CCL5–SDC4/1 interaction between BTG1+RGS1+ Tcms and tumor cells may promote tumor progression. Focusing on the roles of CAFs and TAMs, which are closely related to tumor cells, would benefit the progress of systemic therapy research.
Pertinence of glioma and single nucleotide polymorphism of TERT, CCDC26, CDKN2A/B and RTEL1 genes in glioma: a meta-analysis
Background: Previous genetic-epidemiological studies considered TERT (rs2736100), CCDC26 (rs4295627), CDKN2A/B (rs4977756), and RTEL1 (rs6010620) gene polymorphisms to be risk factors for glioma. However, the sample sizes of those studies were too modest to determine whether the polymorphisms are definitely associated with glioma. Methods: We systematically searched the PubMed, Embase, Web of Science (WoS), Scopus, Cochrane Library, and Google Scholar databases. A meta-analysis under five genetic models, namely the recessive model (RM), over-dominant model (O-DM), allele model (AM), co-dominant model (C-DM), and dominant model (DM), was conducted to generate odds ratios (ORs) and 95% confidence intervals (CIs), accompanied by subgroup analyses across racial groups. STATA 17.0 MP software was used. Results: 21 articles were collected. In four genetic models (AM, RM, DM, and C-DM), the TERT rs2736100, CCDC26 rs4295627, CDKN2A/B rs4977756, and RTEL1 rs6010620 polymorphisms increased the risk of glioma in Caucasians to different degrees. In Asian populations, the CCDC26 rs4295627 and CDKN2A/B rs4977756 polymorphisms showed no association with glioma risk. These results should be interpreted cautiously because the sample sizes are small. Conclusion: The current meta-analysis suggests that the SNPs of the TERT (rs2736100), CCDC26 (rs4295627), CDKN2A/B (rs4977756), and RTEL1 (rs6010620) genes might increase the risk of glioma, but there are ethnic differences. Further studies evaluating these polymorphisms and glioma risk are warranted.
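The per-study quantity such a meta-analysis pools is the odds ratio with its 95% CI, computed from a 2x2 table of carriers/non-carriers against cases/controls. The sketch below shows that standard calculation; the counts are illustrative, not taken from the cited studies.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table.
    a, b = exposed cases / exposed controls;
    c, d = unexposed cases / unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR) (Woolf)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts: 120/90 allele carriers among cases/controls,
# 880/910 non-carriers.
or_, lo, hi = odds_ratio_ci(120, 90, 880, 910)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# -> OR = 1.38, 95% CI [1.03, 1.84]
```

Since the CI excludes 1, this hypothetical study alone would suggest increased risk; the meta-analysis then combines such per-study estimates under each genetic model (AM, RM, DM, O-DM, C-DM).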