190 research outputs found
Revisiting the Space-Time Gradient Method: A Time-clocking Perspective, High Order Difference Time Discretization and Comparison with the Harmonic Balance Method
This paper revisits the Space-Time Gradient (STG) method, which was developed for efficient analysis of unsteady flows due to rotor–stator interaction, and presents the method from an alternative time-clocking perspective. The STG method requires reordering of blade passages according to their relative clocking positions with respect to the blades of an adjacent blade row. Because the space-clocking is linked to an equivalent time-clocking, the passage reordering can be performed according to the alternative time-clocking. From the time-clocking perspective, unsteady flow solutions from different passages of the same blade row are mapped to flow solutions of the same passage at different time instants or phase angles. Accordingly, the time derivative of the unsteady flow equation is discretized in time directly, which is more natural than transforming the time derivative into a spatial one as in the original STG method. To improve solution accuracy, a ninth-order difference scheme has been investigated for discretizing the time derivative. To achieve a stable solution with the high-order scheme, the implicit Lower-Upper Symmetric Gauss-Seidel/Gauss-Seidel (LU-SGS/GS) solution method has been employed. The NASA Stage 35 compressor and its blade-count-reduced variant are used to demonstrate the validity of the time-clocking-based passage reordering and the advantages of the high-order difference scheme for the STG method. Results from an existing harmonic balance flow solver are also provided to contrast the two methods in terms of solution stability and computational cost.
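The abstract leaves the stencil unspecified, so the following is a minimal Python sketch of the kind of operator involved: a high-order finite-difference approximation of the time derivative on a uniform, periodic phase-angle grid (periodic because the rotor–stator flow repeats over one blade-passing period). An eighth-order central stencil is used as a stand-in; the paper's ninth-order coefficients are not reproduced here, and everything below is illustrative only.

```python
# Sketch: high-order finite-difference d/dt on a periodic phase grid,
# the kind of time-derivative discretization the time-clocking STG
# formulation applies across reordered passages / phase angles.
import numpy as np

# Eighth-order central-difference weights for the first derivative
# (a stand-in for the paper's ninth-order scheme, which is not given here).
OFFSETS = np.array([-4, -3, -2, -1, 1, 2, 3, 4])
WEIGHTS = np.array([1/280, -4/105, 1/5, -4/5, 4/5, -1/5, 4/105, -1/280])

def periodic_time_derivative(u, dt):
    """Approximate du/dt at every phase sample.

    u  : array of shape (n_phases, ...), one flow snapshot per phase angle
    dt : phase spacing = blade-passing period / n_phases
    """
    du = np.zeros_like(u)
    for off, w in zip(OFFSETS, WEIGHTS):
        # np.roll wraps indices, enforcing time-periodicity of the flow.
        du += w * np.roll(u, -off, axis=0)
    return du / dt
```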
GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details
Traditional 3D garment creation is labor-intensive, involving sketching,
modeling, UV mapping, and texturing, which are time-consuming and costly.
Recent advances in diffusion-based generative models have enabled new
possibilities for 3D garment generation from text prompts, images, and videos.
However, existing methods either suffer from inconsistencies among multi-view
images or require additional processes to separate cloth from the underlying
human model. In this paper, we propose GarmentDreamer, a novel method that
leverages 3D Gaussian Splatting (GS) as guidance to generate wearable,
simulation-ready 3D garment meshes from text prompts. In contrast to using
multi-view images directly predicted by generative models as guidance, our 3DGS
guidance ensures consistent optimization in both garment deformation and
texture synthesis. Our method introduces a novel garment augmentation module,
guided by normal and RGBA information, and employs implicit Neural Texture
Fields (NeTF) combined with Score Distillation Sampling (SDS) to generate
diverse geometric and texture details. We validate the effectiveness of our
approach through comprehensive qualitative and quantitative experiments,
showcasing the superior performance of GarmentDreamer over state-of-the-art
alternatives. Our project page is available at:
https://xuan-li.github.io/GarmentDreamerDemo/
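For context, the Score Distillation Sampling (SDS) term mentioned above is commonly written as the gradient below (DreamFusion-style notation; whether GarmentDreamer uses exactly this weighting is not stated in the abstract):

```latex
% x = g(\theta): image rendered from garment/texture parameters \theta;
% \epsilon_\phi: the diffusion model's noise prediction for prompt y at
% timestep t; w(t): a timestep-dependent weight.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\epsilon_\phi(x_t;\, y, t) - \epsilon\bigr)
      \,\frac{\partial x}{\partial \theta}
    \right],
\qquad x_t = \alpha_t\, x + \sigma_t\, \epsilon .
```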
Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
It is generally perceived that Dynamic Sparse Training opens the door to a new era of scalability and efficiency for artificial neural networks, perhaps at some cost in classification accuracy. At the same time, Dense Training is widely accepted as the de facto approach for training artificial neural networks when one wants to maximize their robustness against image corruption. In this paper, we question this general practice. We claim that, contrary to what is commonly thought, Dynamic Sparse Training methods can consistently outperform Dense Training in terms of robustness accuracy, particularly when efficiency is not a main objective (i.e., sparsity levels between 10% and 50%), without adding (or even while reducing) resource cost. We validate our claim on two types of data, images and videos, using several traditional and modern deep learning architectures for computer vision and three widely studied Dynamic Sparse Training algorithms. Our findings reveal a previously unknown benefit of Dynamic Sparse Training and open new possibilities for improving deep learning robustness beyond the current state of the art.
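To make the setting concrete, here is a minimal NumPy sketch of one prune-and-regrow topology update in the style of SET (Sparse Evolutionary Training), a classic Dynamic Sparse Training algorithm; the abstract does not name the three benchmarked methods, so the drop/grow criteria below are assumptions for illustration.

```python
# Sketch: one SET-style Dynamic Sparse Training topology update.
# Drop the weakest active connections, regrow the same number at
# random, so the overall sparsity level stays fixed during training.
import numpy as np

rng = np.random.default_rng(0)

def dst_topology_update(weights, mask, drop_fraction=0.3):
    # Flat views into the (assumed C-contiguous) arrays.
    w_flat, m_flat = weights.ravel(), mask.ravel()
    active = np.flatnonzero(m_flat)
    n_drop = int(drop_fraction * active.size)

    # Drop: deactivate the n_drop active connections smallest in |w|.
    drop_idx = active[np.argsort(np.abs(w_flat[active]))[:n_drop]]
    m_flat[drop_idx] = 0

    # Regrow: activate as many previously inactive positions, chosen
    # uniformly at random and initialized to zero.
    candidates = np.setdiff1d(np.flatnonzero(m_flat == 0), drop_idx)
    grow_idx = rng.choice(candidates, size=n_drop, replace=False)
    m_flat[grow_idx] = 1
    w_flat[grow_idx] = 0.0
    return weights * mask, mask

# Example: a layer kept at ~50% sparsity between gradient steps.
W = rng.standard_normal((128, 128))
M = (rng.random((128, 128)) < 0.5).astype(W.dtype)
W, M = dst_topology_update(W, M)
```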
The Tianlin Mission: a 6m UV/Opt/IR space telescope to explore the habitable worlds and the universe
[Abridged] It is expected that the ongoing and future space-borne planet
survey missions including TESS, PLATO, and Earth 2.0 will detect thousands of
small to medium-sized planets via the transit technique, including over a
hundred habitable terrestrial rocky planets. To conduct a detailed study of
these terrestrial planets, particularly the cool ones with wide orbits, the
exoplanet community has proposed various follow-up missions. The currently
proposed ESA mission ARIEL is capable of characterizing planets down to
warm super-Earths, mainly using transmission spectroscopy. The NASA 6m
UV/Opt/NIR mission proposed in the Astro2020 Decadal Survey may reach further
down to habitable rocky planets, and is expected to launch around 2045. In the
meantime, China is funding a concept study of a 6-m class space telescope
named Tianlin (A UV/Opt/NIR Large Aperture Space Telescope) that aims to start
its operation within the next 10-15 years and last for 5+ years. Tianlin will
be aimed primarily at the discovery and characterization of rocky planets in
the habitable zones (HZ) around nearby stars and to search for potential
biosignatures mainly using the direct imaging method. Transmission and emission
spectroscopy at moderate to high resolution will be carried out as well on a
population of exoplanets to strengthen the understanding of the formation and
evolution of exoplanets. It will also carry out in-depth studies of the cosmic
web and early galaxies, and constrain the nature of dark matter and dark
energy. We briefly describe the primary scientific motivations and main
technical considerations based on our preliminary simulation results. We find
that a monolithic off-axis space telescope with a primary mirror diameter
larger than 6m, equipped with a high-contrast coronagraph, can identify water
in the atmosphere of a habitable-zone Earth-like planet around a Sun-like star.
Comment: 15 pages, 5 figures; accepted for publication in RAA and available online.
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity
Transformers have quickly shone in the computer vision world since the emergence of Vision Transformers (ViTs). The dominant role of convolutional neural networks (CNNs) appears to be challenged by increasingly effective transformer-based models. Very recently, several advanced convolutional models have struck back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. While one of them, RepLKNet, impressively manages to scale the kernel size to 31x31 with improved performance, performance starts to saturate as the kernel size grows further, compared with the scaling trend of advanced ViTs such as Swin Transformer. In this paper, we explore the possibility of training convolutions larger than 31x31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. This study yields a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale up kernels to 61x61 with better performance. Built on this recipe, we propose the Sparse Large Kernel Network (SLaK), a pure CNN architecture equipped with sparse factorized 51x51 kernels that performs on par with or better than state-of-the-art hierarchical Transformers and modern ConvNet architectures such as ConvNeXt and RepLKNet on ImageNet classification as well as a wide range of downstream tasks, including semantic segmentation on ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation on MS COCO.
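As a rough sketch of the factorization behind SLaK's sparse factorized 51x51 kernels, the module below approximates one large square depthwise convolution with two rectangular depthwise convolutions plus a small square branch; the dynamic-sparsity machinery applied to these factors in SLaK is omitted, and the layer sizes here are illustrative assumptions.

```python
# Sketch: large-kernel factorization in the spirit of SLaK. A big x big
# depthwise conv is replaced by (big x small) + (small x big) rectangular
# depthwise convs, with a small x small branch for fine local detail.
import torch
import torch.nn as nn

class FactorizedLargeKernel(nn.Module):
    def __init__(self, channels, big=51, small=5):
        super().__init__()
        pb, ps = big // 2, small // 2
        self.kw = nn.Conv2d(channels, channels, (big, small),
                            padding=(pb, ps), groups=channels)
        self.kh = nn.Conv2d(channels, channels, (small, big),
                            padding=(ps, pb), groups=channels)
        self.sq = nn.Conv2d(channels, channels, small,
                            padding=ps, groups=channels)

    def forward(self, x):
        # Summing the branches keeps the spatial resolution unchanged.
        return self.kw(x) + self.kh(x) + self.sq(x)

x = torch.randn(1, 64, 56, 56)
print(FactorizedLargeKernel(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```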
Pyrenylpyridines: Sky-Blue Emitters for Organic Light-Emitting Diodes
A novel sky-blue-emitting tripyrenylpyridine derivative, 2,4,6-tri(1-pyrenyl)pyridine (2,4,6-TPP), has been synthesized using a Suzuki coupling reaction and compared with three previously reported isomeric dipyrenylpyridine (DPP) analogues (2,4-di(1-pyrenyl)pyridine (2,4-DPP), 2,6-di(1-pyrenyl)pyridine (2,6-DPP), and 3,5-di(1-pyrenyl)pyridine (3,5-DPP)). As revealed by single-crystal X-ray analysis and computational simulations, all compounds possess highly twisted conformations in the solid state with interpyrene torsional angles of 42.3°–57.2°. These solid-state conformations and packing variations of the pyrenylpyridines could be correlated with observed variations in physical characteristics such as photo/thermal stability and spectral properties, but showed only marginal influence on electrochemical properties. The novel derivative, 2,4,6-TPP, exhibited the lowest degree of crystallinity, as revealed by powder X-ray diffraction analysis, and formed amorphous thin films, as verified using grazing-incidence wide-angle X-ray scattering. This compound also showed high thermal/photo stability relative to its disubstituted analogues (DPPs). Thus, a nondoped organic light-emitting diode (OLED) prototype was fabricated using 2,4,6-TPP as the emissive layer, which displayed sky-blue electroluminescence with Commission Internationale de L’Eclairage (CIE) coordinates of (0.18, 0.34). This OLED prototype achieved a maximum external quantum efficiency of 6.0 ± 1.2% at 5 V. The relatively high efficiency of this simple-architecture device reflects a good balance of electron- and hole-transporting ability in 2,4,6-TPP, along with efficient exciton formation in this material, and indicates its promise as an emitting material for the design of blue OLED devices.