Reconciling: How Chinese Village Cadres Solve Land Disputes in Southeast China
Land disputes are a common phenomenon in rural China. This study examines how village cadres resolve such disputes. It finds that the distinctive ownership structure of rural land, together with cultural and historical factors, is the main source of disputes. Village cadres typically draw on the authority of their positions in the village, their personal relationships with villagers, and private financial compensation to settle disputes. Which of these approaches is used depends on the type of village cadre and on the attitude of higher-level departments (township cadres).
Heuristic approaches to solve risk-adjusted and time-adjusted discrete asset allocation problem
Master's thesis, Master of Engineering
CARNet: Compression Artifact Reduction for Point Cloud Attribute
A learning-based adaptive loop filter is developed for the Geometry-based
Point Cloud Compression (G-PCC) standard to reduce attribute compression
artifacts. The proposed method first generates multiple Most-Probable Sample
Offsets (MPSOs) as potential compression distortion approximations, and then
linearly weights them for artifact mitigation. As such, we drive the filtered
reconstruction as close to the uncompressed point cloud attribute (PCA) as
possible. To this end, we
devise a Compression Artifact Reduction Network (CARNet) which consists of two
consecutive processing phases: MPSOs derivation and MPSOs combination. The
MPSOs derivation uses a two-stream network to model local neighborhood
variations from direct spatial embedding and frequency-dependent embedding,
where sparse convolutions are utilized to best aggregate information from
sparsely and irregularly distributed points. The MPSOs combination is guided by
the least square error metric to derive weighting coefficients on the fly to
further capture content dynamics of input PCAs. The CARNet is implemented as an
in-loop filtering tool of the GPCC, where those linear weighting coefficients
are encapsulated into the bitstream with negligible bit rate overhead.
Experimental results demonstrate significant improvement over the latest GPCC
both subjectively and objectively. Comment: 13 pages, 8 figures
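The least-squares weighting step described above can be sketched outside the network. The function below is an illustrative stand-in, not the paper's implementation: given K candidate offsets (MPSOs) and the known encoder-side residual, it solves for the linear weights that would be signaled in the bitstream.

```python
import numpy as np

def ls_combine(mpsos, target):
    """Combine candidate offsets with least-squares weights.

    mpsos:  (K, N) array, K candidate Most-Probable Sample Offsets
            for N point attributes.
    target: (N,) array, residual between the uncompressed and the
            reconstructed attributes (known at the encoder).
    Returns the weighted offset and the K weights to signal.
    """
    A = mpsos.T  # (N, K) design matrix, one column per candidate
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ w, w

# Toy check: if the target is an exact mix of two candidates,
# least squares recovers the mixing weights.
rng = np.random.default_rng(0)
mpsos = rng.normal(size=(2, 100))
target = 0.7 * mpsos[0] + 0.3 * mpsos[1]
filtered, w = ls_combine(mpsos, target)
```

Because only the K weights (not the offsets themselves) need to be transmitted, the per-frame signaling cost stays negligible, which matches the abstract's claim of minimal bit rate overhead.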
Efficient Memory Management for GPU-based Deep Learning Systems
GPU (graphics processing unit) has been used for many data-intensive
applications. Among them, deep learning systems are one of the most important
consumer systems for GPU nowadays. As deep learning applications impose deeper
and larger models in order to achieve higher accuracy, memory management
becomes an important research topic for deep learning systems, given that GPU
has limited memory size. Many approaches have been proposed towards this issue,
e.g., model compression and memory swapping. However, they either degrade the
model accuracy or require a lot of manual intervention. In this paper, we
propose two orthogonal approaches to reduce the memory cost from the system
perspective. Our approaches are transparent to the models, and thus do not
affect the model accuracy. They are achieved by exploiting the iterative nature
of the training algorithm of deep learning to derive the lifetime and
read/write order of all variables. With the lifetime semantics, we are able to
implement a memory pool with minimal fragments. However, the optimization
problem is NP-complete. We propose a heuristic algorithm that reduces up to
13.3% of memory compared with Nvidia's default memory pool with equal time
complexity. With the read/write semantics, the variables that are not in use
can be swapped out from GPU to CPU to reduce the memory footprint. We propose
multiple swapping strategies to automatically decide which variable to swap and
when to swap out (in), which reduces the memory cost by up to 34.2% without
communication overhead.
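The lifetime-based memory pool described above can be illustrated with a small greedy planner. This is a hypothetical sketch, not the paper's heuristic: each tensor is an interval over the training step, and two tensors may share addresses only if their lifetimes do not overlap.

```python
def plan_pool(tensors):
    """Assign pool offsets from tensor lifetimes (greedy sketch).

    tensors: list of (name, size, start, end), where [start, end)
             is the interval of the training step in which the
             tensor is live.
    Returns (offsets, pool_size): a {name: offset} map and the
    total pool size the plan requires.
    """
    placed = []   # (offset, size, start, end) of already-planned tensors
    offsets = {}
    for name, size, s, e in sorted(tensors, key=lambda t: -t[1]):
        # Address ranges blocked by tensors whose lifetimes overlap ours.
        conflicts = sorted((o, sz) for o, sz, ps, pe in placed
                           if s < pe and ps < e)
        offset = 0
        for o, sz in conflicts:
            if offset + size <= o:   # fits in the gap before this block
                break
            offset = max(offset, o + sz)
        placed.append((offset, size, s, e))
        offsets[name] = offset
    pool_size = max((o + sz for o, sz, *_ in placed), default=0)
    return offsets, pool_size

# Tensors "a" and "b" never coexist, so they share offset 0;
# "c" overlaps both and is placed above them.
plan, total = plan_pool([("a", 4, 0, 2), ("b", 4, 2, 4), ("c", 2, 1, 3)])
```

In this toy case the pool needs 6 units instead of the 10 a naive per-tensor allocation would use, which is the kind of fragmentation saving the lifetime semantics enables; the optimal packing problem itself is NP-complete, hence the paper's heuristic.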
Enhancing thermal properties of asphalt materials for heat storage and transfer applications
The paper considers extending the role of asphalt concrete pavements to become solar heat collectors and storage systems. The majority of the construction cost is already procured for such pavements, and only marginal additional costs are likely to be incurred to add the necessary thermal features. Asphalt concrete pavements are therefore designed to incorporate aggregates and additives such as limestone, quartzite, lightweight aggregate, copper slag and copper fibre, making them more conductive, more insulating, or able to store more heat energy. The resulting materials are assessed for both mechanical and thermal properties by laboratory tests and numerical simulations, and recommendations are made regarding the optimum formulations for the purposes considered.
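The heat-storage idea above can be put in numbers with a back-of-envelope calculation. The material values below are typical literature figures for asphalt concrete, not results from the paper: stored heat per unit area is Q = rho * d * c * dT, and thermal diffusivity is alpha = k / (rho * c).

```python
# Back-of-envelope heat-storage sketch for an asphalt collector layer.
# All material values are assumed typical figures, not from the paper.
rho = 2400.0    # density, kg/m^3
c   = 920.0     # specific heat capacity, J/(kg*K)
k   = 1.2       # thermal conductivity, W/(m*K)

depth   = 0.10  # collector layer thickness, m
delta_T = 20.0  # assumed diurnal temperature swing, K

alpha = k / (rho * c)                 # thermal diffusivity, m^2/s
q_stored = rho * depth * c * delta_T  # stored heat per unit area, J/m^2

print(f"diffusivity = {alpha:.2e} m^2/s")
print(f"stored heat = {q_stored / 1e6:.2f} MJ/m^2")
```

With these assumed values a 10 cm layer stores on the order of 4.4 MJ per square metre over a 20 K swing, which is why conductivity-enhancing additives such as copper slag matter: the low diffusivity of plain asphalt limits how quickly that stored heat can be collected.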
IAIFNet: An Illumination-Aware Infrared and Visible Image Fusion Network
Infrared and visible image fusion (IVIF) is used to generate fusion images
with comprehensive features of both images, which is beneficial for downstream
vision tasks. However, current methods rarely consider the illumination
condition in low-light environments, and the targets in the fused images are
often not prominent. To address the above issues, we propose an
Illumination-Aware Infrared and Visible Image Fusion Network, named as IAIFNet.
In our framework, an illumination enhancement network first estimates the
incident illumination maps of input images. Afterwards, with the help of
proposed adaptive differential fusion module (ADFM) and salient target aware
module (STAM), an image fusion network effectively integrates the salient
features of the illumination-enhanced infrared and visible images into a fusion
image of high visual quality. Extensive experimental results verify that our
method outperforms five state-of-the-art methods of fusing infrared and visible
images. Comment: Submitted to IEEE
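The fusion pipeline above can be caricatured in a few lines. This is a schematic stand-in, not the actual network: a Retinex-style division by the estimated illumination map replaces the illumination enhancement network, and a crude brightness-based weight stands in for the learned ADFM/STAM modules.

```python
import numpy as np

def illumination_aware_fuse(ir, vis, illum):
    """Schematic of an illumination-aware fusion step (assumption,
    not the IAIFNet architecture).

    ir, vis: grayscale images in [0, 1].
    illum:   estimated incident-illumination map of `vis` in (0, 1].
    """
    # Retinex-style enhancement: divide out the estimated illumination.
    vis_enh = np.clip(vis / np.maximum(illum, 1e-3), 0.0, 1.0)
    # Crude saliency proxy: favour IR where it is locally bright,
    # standing in for the learned salient-target-aware weighting.
    w = ir / np.maximum(ir + vis_enh, 1e-6)
    return w * ir + (1.0 - w) * vis_enh

# Dim visible frame, bright IR target: the fused result leans on IR.
ir = np.full((4, 4), 0.8)
vis = np.full((4, 4), 0.2)
illum = np.full((4, 4), 0.5)
fused = illumination_aware_fuse(ir, vis, illum)
```

The real method learns both the illumination estimate and the fusion weights end to end; the point of the sketch is only the data flow: enhance the visible image first, then mix in infrared saliency.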