Assessment of multi-air emissions: case of particulate matter (dust), SO2, NOx and CO2 from iron and steel industry of China
Industrial activities are generally energy- and emissions-intensive, requiring large inputs of raw materials and fossil fuels and emitting vast quantities of waste gases, including particulate matter (PM, or dust), sulphur dioxide (SO2), nitrogen oxides (NOx), carbon dioxide (CO2), and other substances that severely damage the environment. Many studies have quantified the concentrations of these air emissions, and some have examined the combined effect of multiple emissions. However, a fairer and more comprehensive method for assessing the environmental impact of multiple air emissions is still lacking: one that simultaneously considers the flow rate of waste gases, the availability of the emitting sources, and the concentrations of all emitted substances. In this work, a Total Environmental Impact Score (TEIS) approach is proposed to assess the environmental impact of the main industrial processes of an integrated iron and steel site in northeast China. Besides the concentration of each emitted substance, the TEIS approach incorporates the flow rate of the waste gases and the availability of the emitting sources. The processes ranked in descending order of TEIS are sintering, ironmaking, steelmaking, thermal power, steel rolling, and coking, with scores of 17.57, 16.68, 10.86, 10.43, 9.60 and 9.27, respectively. In addition, a sensitivity analysis shows that this ordering remains nearly unchanged under a 10% variation in the permissible CO2 concentration limit and in the weight assigned to each emission substance. The effects of emitting-source availability and waste-gas flow rate on the TEIS cannot be neglected in environmental impact assessment.
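As a rough illustration of how such a score might combine the three factors named above, here is a minimal Python sketch; the weighting scheme, the multiplicative combination, and all numbers are assumptions for illustration, since the abstract does not give the paper's actual formula.

    from dataclasses import dataclass

    @dataclass
    class Emission:
        name: str
        concentration: float  # measured concentration, mg/m^3
        limit: float          # permissible concentration limit, mg/m^3
        weight: float         # relative weight of the substance (assumed)

    def teis(emissions, flow_rate, availability):
        """Combine normalised substance concentrations with waste-gas flow rate
        and emitting-source availability into one score (illustrative only)."""
        substance_score = sum(e.weight * e.concentration / e.limit for e in emissions)
        return substance_score * flow_rate * availability

    # Toy numbers, not taken from the paper:
    sinter = [Emission("PM", 30, 50, 0.3), Emission("SO2", 120, 200, 0.3),
              Emission("NOx", 240, 300, 0.2), Emission("CO2", 8e4, 1e5, 0.2)]
    print(teis(sinter, flow_rate=1.2, availability=0.9))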
DiffuStereo: High Quality Human Reconstruction via Diffusion-based Stereo Using Sparse Cameras
We propose DiffuStereo, a novel system using only sparse cameras (8 in this
work) for high-quality 3D human reconstruction. At its core is a novel
diffusion-based stereo module, which introduces diffusion models, a powerful
class of generative models, into the iterative stereo matching network. To this
end, we design a new diffusion kernel and additional stereo constraints to
facilitate stereo matching and depth estimation in the network. We further
present a multi-level stereo network architecture to handle high-resolution (up
to 4K) inputs without an unaffordable memory footprint. Given a set of
sparse-view color images of a human, the proposed multi-level diffusion-based
stereo network can produce highly accurate depth maps, which are then converted
into a high-quality 3D human model through an efficient multi-view fusion
strategy. Overall, our method enables automatic reconstruction of human models
with quality on par with that of high-end dense-view camera rigs, achieved
with a much more lightweight hardware setup. Experiments show that our method
outperforms state-of-the-art methods by a large margin both qualitatively and
quantitatively. Comment: Accepted by ECCV 2022.
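As a purely schematic sketch of the coarse-to-fine idea (not the authors' network: the diffusion kernel, stereo constraints, and learned components are all elided behind a placeholder), an iterative multi-level disparity refinement loop might look like this in Python:

    import torch
    import torch.nn.functional as F

    def predict_residual(disparity, step):
        # placeholder for the learned diffusion-based refinement network
        return torch.zeros_like(disparity)

    def refine_disparity(coarse_disparity, levels=3, steps=4):
        d = coarse_disparity
        for level in range(levels):
            # upsample to the next resolution level
            d = F.interpolate(d, scale_factor=2, mode="bilinear", align_corners=False)
            for step in range(steps):
                d = d + predict_residual(d, step)  # iterative residual update
        return d

    coarse = torch.rand(1, 1, 128, 128)      # toy low-resolution disparity map
    print(refine_disparity(coarse).shape)    # torch.Size([1, 1, 1024, 1024])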
Does urbanization have spatial spillover effect on poverty reduction: empirical evidence from rural China
In light of a scarcity of research on the spatial effects of urbanization
on poverty reduction, this study uses panel data on 30 provinces
in China from 2009 to 2019 to construct a system of indices
to assess poverty that spans the four dimensions of the economy,
education, health, and living. We use the spatial autocorrelation
test and the spatial Durbin model (SDM) to analyze the spatial
effects of urbanization on poverty reduction in these different
dimensions. The main conclusions are as follows: (a) China’s
urbanization has the characteristics of spatial aggregation and a
spatial spillover effect. (b) The different dimensions of poverty
exhibited spatial agglomeration, and Moran's index was highest for
the reduction in economic poverty. Under the SDM,
the different dimensions of poverty also showed a significant
positive spatial correlation. (c) Urbanization has a significant effect
on poverty reduction along the dimensions of the economy, education,
and living, but has little effect on reducing health poverty.
It has a spatial spillover effect on poverty reduction in economic
and living contexts. (d) There were spatial differences in the effect
of urbanization on relieving economic and living-related poverty.
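For reference, the spatial Durbin model used above has the standard textbook form (the paper's exact specification, e.g. fixed effects and control variables, is not given in the abstract):

    y = \rho W y + X\beta + W X\theta + \varepsilon

where W is the spatial weight matrix, \rho captures the spatial lag of the poverty outcome, \beta the direct effects of urbanization and the other regressors, and \theta the spillover effects of neighbouring provinces' regressors.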
ITportrait: Image-Text Coupled 3D Portrait Domain Adaptation
Domain adaptation of 3D portraits has attracted increasing attention.
However, the transfer mechanism of existing methods is based mainly on vision
or language alone, which overlooks the potential of combined vision-language
guidance. In this paper, we propose a vision-language coupled 3D portrait
domain adaptation framework, namely Image and Text portrait (ITportrait).
ITportrait
relies on a two-stage alternating training strategy. In the first stage, we
employ a 3D Artistic Paired Transfer (APT) method for image-guided style
transfer. APT constructs paired photo-realistic portraits to obtain accurate
artistic poses, which helps ITportrait to achieve high-quality 3D style
transfer. In the second stage, we propose a 3D Image-Text Embedding (ITE)
approach in the CLIP space. ITE uses a threshold function to adaptively control
the optimization direction of image or text in the CLIP space. Comprehensive
quantitative and qualitative results show that our ITportrait achieves
state-of-the-art (SOTA) results and benefits downstream tasks. All source codes
and pre-trained models will be released to the public.
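A minimal sketch of what threshold-gated image/text guidance in CLIP space could look like; the gating rule, the 0.5 threshold, and the loss form are assumptions, as the abstract only states that a threshold function adaptively controls the optimisation direction:

    import torch
    import torch.nn.functional as F

    def ite_loss(render_emb, image_emb, text_emb, threshold=0.5):
        """All inputs are precomputed, L2-normalised CLIP embeddings."""
        sim_img = F.cosine_similarity(render_emb, image_emb, dim=-1)
        # if the render already matches the reference image well, push toward
        # the text target; otherwise keep pulling toward the image target
        target = torch.where(sim_img.unsqueeze(-1) > threshold, text_emb, image_emb)
        return 1.0 - F.cosine_similarity(render_emb, target, dim=-1).mean()

    r, i, t = (F.normalize(torch.randn(4, 512), dim=-1) for _ in range(3))
    print(ite_loss(r, i, t).item())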
VectorTalker: SVG Talking Face Generation with Progressive Vectorisation
High-fidelity and efficient audio-driven talking head generation has been a
key research topic in computer graphics and computer vision. In this work, we
study vector-image-based audio-driven talking head generation. Compared with
directly animating raster images, as most existing works do, vector images
offer excellent scalability for many applications. There are two main
challenges for vector-image-based talking head generation:
the high-quality vector image reconstruction w.r.t. the source portrait image
and the vivid animation w.r.t. the audio signal. To address these, we propose a
novel scalable vector graphic reconstruction and animation method, dubbed
VectorTalker. Specifically, for high-fidelity reconstruction, VectorTalker
hierarchically reconstructs the vector image in a coarse-to-fine manner. For
the vivid audio-driven facial animation, we use facial landmarks as an
intermediate motion representation and propose an efficient landmark-driven
vector image deformation module. Our approach can handle various styles of
portrait images within a unified framework, including Japanese manga, cartoon,
and photorealistic images. We conduct extensive quantitative and qualitative
evaluations and the experimental results demonstrate the superiority of
VectorTalker in both vector graphic reconstruction and audio-driven animation.
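For intuition, landmark-driven deformation of vector control points can be sketched as a distance-weighted blend of landmark displacements; this Gaussian-weighted scheme is an assumption for illustration, not VectorTalker's actual module:

    import numpy as np

    def deform_control_points(points, src_landmarks, dst_landmarks, sigma=20.0):
        """points: (P, 2) SVG path control points; landmarks: (L, 2) arrays."""
        disp = dst_landmarks - src_landmarks                             # (L, 2)
        d2 = ((points[:, None, :] - src_landmarks[None]) ** 2).sum(-1)  # (P, L)
        w = np.exp(-d2 / (2 * sigma ** 2))
        w = w / (w.sum(-1, keepdims=True) + 1e-8)       # normalise the weights
        return points + w @ disp

    pts = np.random.rand(100, 2) * 256                  # toy SVG control points
    src = np.random.rand(68, 2) * 256                   # source face landmarks
    dst = src + np.random.randn(68, 2) * 2.0            # audio-predicted motion
    print(deform_control_points(pts, src, dst).shape)   # (100, 2)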
DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
We present DreamCraft3D, a hierarchical 3D content generation method that
produces high-fidelity and coherent 3D objects. We tackle the problem by
leveraging a 2D reference image to guide the stages of geometry sculpting and
texture boosting. A central focus of this work is to address the consistency
issue that existing works encounter. To sculpt geometries that render
coherently, we perform score distillation sampling via a view-dependent
diffusion model. This 3D prior, alongside several training strategies,
prioritizes the geometry consistency but compromises the texture fidelity. We
further propose Bootstrapped Score Distillation to specifically boost the
texture. We train a personalized diffusion model, Dreambooth, on the augmented
renderings of the scene, imbuing it with 3D knowledge of the scene being
optimized. The score distillation from this 3D-aware diffusion prior provides
view-consistent guidance for the scene. Notably, through an alternating
optimization of the diffusion prior and 3D scene representation, we achieve
mutually reinforcing improvements: the optimized 3D scene aids in training the
scene-specific diffusion model, which offers increasingly view-consistent
guidance for 3D optimization. The optimization is thus bootstrapped and leads
to substantial texture boosting. With tailored 3D priors throughout the
hierarchical generation, DreamCraft3D generates coherent 3D objects with
photorealistic renderings, advancing the state-of-the-art in 3D content
generation. Code available at https://github.com/deepseek-ai/DreamCraft3D. Comment: Project page: https://mrtornado24.github.io/DreamCraft3D
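For context, score distillation sampling (the technique named above, introduced in DreamFusion) optimises the scene parameters \theta through the gradient

    \nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t; y, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right]

where x = g(\theta) is a rendering, x_t its noised version at timestep t, and \hat{\epsilon}_\phi the diffusion model's noise prediction conditioned on y. This is the standard SDS gradient; DreamCraft3D's bootstrapped variant swaps the scene-personalised diffusion prior in for \hat{\epsilon}_\phi.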
Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and
multi-view-consistent facial images using only collections of single-view 2D
imagery. Towards fine-grained control over facial attributes, recent efforts
incorporate 3D Morphable Face Model (3DMM) to describe deformation in
generative radiance fields either explicitly or implicitly. Explicit methods
provide fine-grained expression control but cannot handle topological changes
caused by hair and accessories, while implicit ones can model varied topologies
but have limited generalization caused by the unconstrained deformation fields.
We propose a novel 3D GAN framework for unsupervised learning of generative,
high-quality and 3D-consistent facial avatars from unstructured 2D images. To
achieve both deformation accuracy and topological flexibility, we propose a 3D
representation called Generative Texture-Rasterized Tri-planes. The proposed
representation learns Generative Neural Textures on top of parametric mesh
templates and then projects them into three orthogonal-viewed feature planes
through rasterization, forming a tri-plane feature representation for volume
rendering. In this way, we combine both fine-grained expression control of
mesh-guided explicit deformation and the flexibility of implicit volumetric
representation. We further propose specific modules for modeling the mouth
interior, which is not taken into account by 3DMM. Our method demonstrates
state-of-the-art 3D-aware synthesis quality and animation ability through
extensive experiments. Furthermore, serving as 3D prior, our animatable 3D
representation boosts multiple applications including one-shot facial avatars
and 3D-aware stylization. Comment: Project page: https://mrtornado24.github.io/Next3D
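For background, tri-plane feature sampling in the style such representations build on (e.g. EG3D) projects each 3D point onto three axis-aligned planes and aggregates the sampled features; the sketch below shows only this generic step and omits the neural-texture rasterisation that distinguishes Next3D:

    import torch
    import torch.nn.functional as F

    def sample_triplane(planes, xyz):
        """planes: (3, C, H, W) features for XY, XZ, YZ; xyz: (N, 3) in [-1, 1]."""
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
        feats = []
        for plane, uv in zip(planes, coords):
            grid = uv.view(1, -1, 1, 2)                                 # (1, N, 1, 2)
            f = F.grid_sample(plane[None], grid, align_corners=False)   # (1, C, N, 1)
            feats.append(f[0, :, :, 0].t())                             # (N, C)
        return sum(feats)  # aggregate the three plane projections

    planes = torch.randn(3, 32, 64, 64)
    pts = torch.rand(1000, 3) * 2 - 1
    print(sample_triplane(planes, pts).shape)  # torch.Size([1000, 32])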
Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor
Recent years have witnessed considerable achievements in editing images with
text instructions. When applying these editors to dynamic scene editing, the
new-style scene tends to be temporally inconsistent due to the frame-by-frame
nature of these 2D editors. To tackle this issue, we propose Control4D, a novel
approach for high-fidelity and temporally consistent 4D portrait editing.
Control4D is built upon an efficient 4D representation with a 2D
diffusion-based editor. Instead of using direct supervision from the editor,
our method learns a 4D GAN from it and avoids the inconsistent supervision
signals. Specifically, we employ a discriminator to learn the generation
distribution based on the edited images and then update the generator with the
discrimination signals. For more stable training, multi-level information is
extracted from the edited images and used to facilitate the learning of the
generator. Experimental results show that Control4D surpasses previous
approaches and achieves more photo-realistic and consistent 4D editing
performance. The link to our project website is
https://control4darxiv.github.io.
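The core training idea can be sketched as a standard GAN step in which diffusion-edited frames play the role of real data and the 4D representation's renders the role of fakes; the toy networks and shapes below are placeholders, not Control4D's architecture:

    import torch
    import torch.nn as nn

    disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))  # toy discriminator
    gen_params = [torch.randn(3, 64, 64, requires_grad=True)]      # stands in for the 4D scene
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
    opt_g = torch.optim.Adam(gen_params, lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    edited = torch.rand(8, 3, 64, 64)   # frames produced by the 2D diffusion editor
    render = gen_params[0].unsqueeze(0).expand(8, -1, -1, -1)  # toy "rendered" batch

    # discriminator step: edited frames are "real", renders are "fake"
    loss_d = (bce(disc(edited), torch.ones(8, 1))
              + bce(disc(render.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # generator step: fool the discriminator instead of matching edits pixelwise
    loss_g = bce(disc(render), torch.ones(8, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()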
Effect of pectin on properties of potato starch after dry heat treatment
Purpose: To evaluate the effect of pectin on the properties of potato starch after dry heat treatment.
Methods: Rapid visco analysis (RVA), differential scanning calorimetry (DSC), texture profile analysis (TPA), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffractometry (XRD) were used to determine the properties of modified potato starch and pectin blends after dry heat treatment.
Results: RVA results showed that the peak viscosity of modified potato starch decreased gradually with increasing pectin concentration, dry heat time, and dry heat temperature, while breakdown decreased and setback increased to varying degrees. The lowest breakdown, 792 cP, occurred at a dry heat temperature of 140 °C. Modified potato starch had broader gelatinization temperature ranges and lower gelatinization enthalpy than raw potato starch. Dry heat treatment improved the hardness, gumminess and chewiness of the gels of modified potato starch and pectin blends. SEM micrographs showed some cluster shapes in the microstructure of starch-pectin blends after dry heat treatment. Infrared spectra revealed that pectin addition and dry heat treatment did not change the starch structure. However, X-ray diffractograms indicated that dry heat treatment weakened the third peak of potato starch.
Conclusion: These results indicate that dry heat treatment effectively alters the properties of potato starch and pectin blends. This finding broadens the applications of modified potato starch in the food and pharmaceutical industries.