Convex Decomposition of Indoor Scenes
We describe a method to parse a complex, cluttered indoor scene into
primitives which offer a parsimonious abstraction of scene structure. Our
primitives are simple convexes. Our method uses a learned regression procedure
to parse a scene into a fixed number of convexes from RGBD input, and can
optionally accept segmentations to improve the decomposition. The result is
then polished with a descent method which adjusts the convexes to produce a
very good fit, and greedily removes superfluous primitives. Because the entire
scene is parsed, we can evaluate using traditional depth, normal, and
segmentation error metrics. Our evaluation procedure demonstrates that the
error from our primitive representation is comparable to that of predicting
depth from a single image.
Comment: 18 pages, 12 figures
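Since the decomposition is evaluated with depth, normal, and segmentation error metrics, a minimal sketch of two standard depth-error metrics may help make the evaluation concrete. The function name and exact definitions here are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-8):
    """Illustrative depth-error metrics over valid ground-truth pixels.

    Not the paper's exact protocol; abs-rel and RMSE are common choices.
    """
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    valid = gt > eps                        # ignore pixels without ground truth
    p, g = pred[valid], gt[valid]
    abs_rel = np.mean(np.abs(p - g) / g)    # mean absolute relative error
    rmse = np.sqrt(np.mean((p - g) ** 2))   # root-mean-square error
    return {"abs_rel": abs_rel, "rmse": rmse}
```

The same metrics can then be applied both to the depth rendered from the fitted convexes and to a monocular depth prediction, making the comparison in the abstract directly measurable.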
Clutter Detection and Removal in 3D Scenes with View-Consistent Inpainting
Removing clutter from scenes is essential in many applications, ranging from
privacy-concerned content filtering to data augmentation. In this work, we
present an automatic system that removes clutter from 3D scenes and inpaints
with coherent geometry and texture. We propose techniques for its two key
components: 3D segmentation from shared properties and 3D inpainting, both of
which are important problems. The definition of 3D scene clutter
(frequently-moving objects) is not well captured by commonly-studied object
categories in computer vision. To tackle the lack of well-defined clutter
annotations, we group noisy fine-grained labels, leverage virtual rendering,
and impose an instance-level area-sensitive loss. Once clutter is removed, we
inpaint geometry and texture in the resulting holes by merging inpainted RGB-D
images. This requires novel voting and pruning strategies that guarantee
multi-view consistency across individually inpainted images for mesh
reconstruction. Experiments on the ScanNet and Matterport datasets show that
our method outperforms baselines for clutter segmentation and 3D inpainting,
both visually and quantitatively.
Comment: 18 pages. ICCV 2023. Project page:
https://weify627.github.io/clutter
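The voting-and-pruning idea for multi-view consistency can be sketched as follows: a surface point is kept only if enough individually inpainted views observe it and their colors agree, otherwise it is pruned. All names, thresholds, and the agreement test below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def vote_vertex_colors(votes, min_views=2, max_spread=30.0):
    """Toy multi-view voting sketch (hypothetical, not the paper's method).

    votes: dict mapping vertex id -> list of RGB triples, one per
    individually inpainted view that observed the vertex.
    """
    kept = {}
    for v, colors in votes.items():
        c = np.asarray(colors, float)
        if len(c) < min_views:
            continue                            # pruned: too few observations
        if np.max(np.std(c, axis=0)) > max_spread:
            continue                            # pruned: views disagree
        kept[v] = c.mean(axis=0)                # consensus color by averaging
    return kept
```

The point of the sketch is the two failure modes: a vertex can be pruned either for insufficient coverage or for inconsistent evidence across views, which is what keeps the merged mesh reconstruction coherent.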
HMC-Based Accelerator Design For Compressed Deep Neural Networks
Deep Neural Networks (DNNs) offer remarkable performance on classification and regression tasks in many high-dimensional problems and have been widely utilized in real-world cognitive applications. However, the high computational cost of DNNs greatly hinders their deployment in resource-constrained applications, real-time systems, and edge computing platforms. Moreover, the energy consumption and performance cost of moving data between the memory hierarchy and computational units are higher than those of the computation itself. To overcome this memory bottleneck, accelerator designs improve data locality and temporal data reuse. In an attempt to further improve data locality, memory manufacturers have invented 3D-stacked memory, where multiple layers of memory arrays are stacked on top of each other. Building on the concept of Process-In-Memory (PIM), some 3D-stacked memory architectures also include a logic layer that can integrate general-purpose computational logic directly within main memory to take advantage of the high internal bandwidth during computation.
In this dissertation, we investigate hardware/software co-design for neural network accelerators. Specifically, we introduce a two-phase filter pruning framework for model compression and an accelerator tailored for efficient DNN execution on HMC, which can dynamically offload primitives and functions to the PIM logic layer through a latency-aware scheduling controller.
In our compression framework, we formulate the filter pruning process as an optimization problem and propose a filter selection criterion measured by conditional entropy. The key idea of our approach is to establish a quantitative connection between filters and model accuracy. We define this connection as the conditional entropy over filters in a convolutional layer, i.e., the distribution of entropy conditioned on the network loss. Based on this definition, we compare the pruning efficiencies of global and layer-wise pruning strategies and propose a two-phase pruning method. The proposed method achieves an 88% reduction in filters and a 46% reduction in inference time on VGG16 within 2% accuracy degradation.
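The conditional-entropy criterion can be sketched with a simple histogram-based estimator: for each filter, estimate H(loss | filter activation) from samples, and rank filters so that the most loss-informative ones are kept. The estimator, function names, and ranking convention below are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def conditional_entropy(x, y, bins=8):
    """Estimate H(Y|X) in bits from samples via a 2-D histogram (toy estimator)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        # Zero-probability cells produce NaN and are dropped by nansum.
        h = -np.nansum(p_xy * np.log2(p_xy / p_x))
    return h

def rank_filters(filter_stats, losses, bins=8):
    """filter_stats: (n_samples, n_filters) per-filter summary activations.

    Lower H(loss | filter) means the filter carries more information about
    the loss; such filters are ranked first (i.e., kept, not pruned).
    """
    scores = [conditional_entropy(filter_stats[:, j], losses, bins)
              for j in range(filter_stats.shape[1])]
    return np.argsort(scores)  # most informative filters first
```

In a two-phase setting, one could imagine applying such a ranking once globally and once per layer, which is the contrast between the global and layer-wise strategies the abstract compares.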
A graph neural network approach to automated model building in cryo-EM maps
Electron cryo-microscopy (cryo-EM) produces three-dimensional (3D) maps of
the electrostatic potential of biological macromolecules, including proteins.
Along with knowledge about the imaged molecules, cryo-EM maps allow de novo
atomic modelling, which is typically done through a laborious manual process.
Taking inspiration from recent advances in machine learning applications to
protein structure prediction, we propose a graph neural network (GNN) approach
for automated model building of proteins in cryo-EM maps. The GNN acts on a
graph with nodes assigned to individual amino acids and edges representing the
protein chain. Combining information from the voxel-based cryo-EM data, the
amino acid sequence data and prior knowledge about protein geometries, the GNN
refines the geometry of the protein chain and classifies the amino acids for
each of its nodes. Application to 28 test cases shows that our approach
outperforms the state-of-the-art and approximates manual building for cryo-EM
maps with resolutions better than 3.5 Å.
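The graph described above, with one node per amino acid and edges along the protein chain, can be illustrated with a toy message-passing update in which each node mixes its features with those of its chain neighbours. This is a generic GNN sketch over a chain graph, not the paper's architecture:

```python
import numpy as np

def chain_message_pass(node_feats, n_rounds=2):
    """Toy GNN-style update on a chain graph (illustrative only).

    node_feats: (n_nodes, n_features) array, n_nodes >= 2; edges connect
    consecutive amino acids (i, i+1) along the protein chain.
    """
    h = np.asarray(node_feats, float)
    n = len(h)
    for _ in range(n_rounds):
        nbr = np.zeros_like(h)
        deg = np.zeros(n)
        for i in range(n - 1):              # chain edges (i, i+1)
            nbr[i] += h[i + 1]; deg[i] += 1
            nbr[i + 1] += h[i]; deg[i + 1] += 1
        # Mix each node's features with the mean of its neighbours'.
        h = 0.5 * h + 0.5 * nbr / deg[:, None]
    return h
```

In the actual method, the per-node features would combine voxel-based cryo-EM evidence, sequence information, and geometric priors, and the update would also classify each node's amino acid; the sketch only shows the graph connectivity the abstract describes.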
A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations
Modern deep neural networks, particularly recent large language models, come
with massive model sizes that require significant computational and storage
resources. To enable the deployment of modern models on resource-constrained
environments and accelerate inference time, researchers have increasingly
explored pruning techniques as a popular research direction in neural network
compression. However, there is a dearth of up-to-date comprehensive review
papers on pruning. To address this issue, in this survey, we provide a
comprehensive review of existing research works on deep neural network pruning
in a taxonomy of 1) universal/specific speedup, 2) when to prune, 3) how to
prune, and 4) fusion of pruning and other compression techniques. We then
provide a thorough comparative analysis of seven pairs of contrast settings for
pruning (e.g., unstructured/structured) and explore emerging topics, including
post-training pruning, different levels of supervision for pruning, and broader
applications (e.g., adversarial robustness) to shed light on the commonalities
and differences of existing methods and lay the foundation for further method
development. To facilitate future research, we build a curated collection of
datasets, networks, and evaluations on different applications. Finally, we
provide recommendations on selecting pruning methods and highlight promising
research directions. We build a repository at
https://github.com/hrcheng1066/awesome-pruning
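The unstructured/structured contrast that the survey analyzes can be made concrete with two toy magnitude-pruning routines: unstructured pruning zeroes individual weights, while structured pruning zeroes whole output channels. Both routines are illustrative sketches, not taken from any specific surveyed method:

```python
import numpy as np

def unstructured_prune(w, sparsity):
    """Zero the smallest-magnitude individual weights (unstructured pruning)."""
    w = np.asarray(w, float).copy()
    k = int(sparsity * w.size)
    if k:
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        w[np.abs(w) <= thresh] = 0.0
    return w

def structured_prune(w, n_drop):
    """Zero the n_drop output channels (rows) with the smallest L1 norm
    (structured pruning)."""
    w = np.asarray(w, float).copy()
    order = np.argsort(np.abs(w).sum(axis=1))
    w[order[:n_drop]] = 0.0
    return w
```

The practical difference is why the survey treats them as a contrast pair: unstructured sparsity needs specialized kernels or hardware to yield speedup, whereas removing whole channels shrinks the dense matrices directly.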