Dense 3D Object Reconstruction from a Single Depth View
In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs
the complete 3D structure of a given object from a single arbitrary depth view
using generative adversarial networks. Unlike existing work which typically
requires multiple views of the same object or class labels to recover the full
3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation
of a depth view of the object as input, and is able to generate the complete 3D
occupancy grid with a high resolution of 256^3 by recovering the
occluded/missing regions. The key idea is to combine the generative
capabilities of autoencoders and the conditional Generative Adversarial
Networks (GAN) framework, to infer accurate and fine-grained 3D structures of
objects in high-dimensional voxel space. Extensive experiments on large
synthetic datasets and real-world Kinect datasets show that the proposed
3D-RecGAN++ significantly outperforms the state of the art in single view 3D
object reconstruction, and is able to reconstruct unseen types of objects.
Comment: TPAMI 2018. Code and data are available at:
https://github.com/Yang7879/3D-RecGAN-extended. This article extends from
arXiv:1708.0796
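The input representation described above, a voxel occupancy grid derived from a single depth view, can be sketched in a few lines. This is an illustrative NumPy toy, not the paper's code: the resolution (8 rather than 256^3), the depth range, and the pixel-to-voxel mapping are all assumptions made for brevity.

```python
import numpy as np

def depth_to_occupancy(depth, res=32, d_min=0.0, d_max=1.0):
    """Convert a single depth view (H x W, metric depth) into a binary
    voxel occupancy grid, the kind of input 3D-RecGAN++ consumes.
    Resolution and depth range are illustrative assumptions."""
    grid = np.zeros((res, res, res), dtype=np.uint8)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            d = depth[i, j]
            if not (d_min <= d < d_max):
                continue  # invalid / background pixel
            x = int(j / w * res)                          # image column -> voxel x
            y = int(i / h * res)                          # image row    -> voxel y
            z = int((d - d_min) / (d_max - d_min) * res)  # depth value  -> voxel z
            grid[x, y, min(z, res - 1)] = 1
    return grid

# A flat 4x4 depth map at depth 0.5 fills one slab of a small 8^3 grid.
depth = np.full((4, 4), 0.5)
grid = depth_to_occupancy(depth, res=8)
print(grid.shape, int(grid.sum()))  # (8, 8, 8) 16
```

The network's job is then to fill in the occluded voxels of such a grid, which the sketch of course does not attempt.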
Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e. the
very fast-scaling computational burden produced by large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g. 3-D field solvers discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, large number of design or
optimization parameters (e.g. full-chip routing/placement and circuit sizing),
or extensive process variations (e.g. variability/reliability analysis and
design for manufacturability). The computational challenges generated by such
high dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms that are based on matrix and vector
computation. This paper presents "tensor computation" as an alternative general
framework for the development of efficient EDA algorithms and tools. A tensor
is a high-dimensional generalization of a matrix and a vector, and is a natural
choice for both storing and solving efficiently high-dimensional EDA problems.
This paper gives a basic tutorial on tensors, demonstrates some recent examples
of EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where the
use of tensor computation could be of advantage.
Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and
Systems
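The storage argument behind tensor computation can be seen in a few lines: a dense n x n x n tensor needs n^3 numbers, while a rank-1 CP (outer-product) factor stores only 3n, and a rank-R factorisation only R*(3n). A minimal NumPy sketch with illustrative sizes (not drawn from the paper):

```python
import numpy as np

n = 50
rng = np.random.default_rng(0)
a, b, c = (rng.random(n) for _ in range(3))

# Rank-1 CP term: T[i,j,k] = a[i]*b[j]*c[k], built densely via einsum.
full = np.einsum('i,j,k->ijk', a, b, c)

dense_storage = full.size   # n^3 = 125000 numbers stored densely
factored_storage = 3 * n    # 150 numbers in the factored form
print(dense_storage, factored_storage)

# Any entry is recoverable from the factors without forming `full`.
print(np.isclose(full[1, 2, 3], a[1] * b[2] * c[3]))  # True
```

This gap widens with each extra dimension, which is why tensor formats can sidestep the curse of dimensionality that defeats matrix-based algorithms.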
Automatic supervised information extraction of structured web data
The overall purpose of this project is, in short, to create a system able to extract vital
information from product web pages just as a human would: information like the name of the
product, its description, price tag, the company that produces it, and so on. At first glance, this
may not seem extraordinary or technically difficult, since web scraping techniques have existed for
a long time (like the Python library Beautiful Soup, an HTML parser released in 2004). But
let us think for a second about what it actually means to be able to extract desired information from
any given web source: the way information is displayed can vary enormously, not only visually,
but also semantically. For instance, some hotel booking web pages display all prices for
the different room types at once, while websites like Amazon present the
main product in detail and then smaller product recommendations further down the page,
the latter being the preferred way of displaying assets for most retail companies. And each site has its
own styling and search engine. With the above said, the task of mining valuable data from the
web no longer sounds as easy as it first seemed. Hence the purpose of this project is to shine
some light on the Automatic Supervised Information Extraction of Structured Web Data problem.
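As a toy illustration of the rule-based extraction that parsers like the Beautiful Soup library mentioned above enable, here is a sketch using only the Python standard library; the class names (`name`, `price`) are invented, and real pages vary wildly, which is exactly the difficulty the project targets.

```python
from html.parser import HTMLParser

class ProductParser(HTMLParser):
    """Pull the text of elements whose class is 'name' or 'price'.
    The class names are hypothetical; every site uses its own markup,
    which is why hand-written rules like these do not generalise."""
    def __init__(self):
        super().__init__()
        self.fields = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if cls in ("name", "price"):
            self._current = cls

    def handle_data(self, data):
        if self._current:
            self.fields[self._current] = data.strip()
            self._current = None

html = '<div class="name">Action Figure</div><span class="price">$19.99</span>'
p = ProductParser()
p.feed(html)
print(p.fields)  # {'name': 'Action Figure', 'price': '$19.99'}
```

A learned model, by contrast, aims to locate such fields without knowing the markup in advance.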
It is important to ask whether developing such a solution is really valuable at all. An endeavour
of this cost, in both time and computing resources, should lead to a useful end result, at least on paper, to
justify it. The opinion of this author is that it does lead to a potentially valuable result. The
targeted extraction of publicly available, consumer-oriented content at large scale, in
an accurate, reliable and future-proof manner, could provide an incredibly useful and large amount
of data. This data, if kept updated, could create endless opportunities for Business Intelligence,
although exactly which ones is beyond the scope of this work. A simple metaphor explains the
potential value of this work: if an oil company were told where all the oil reserves on the
planet are, it would still need to invest in machinery, workers and time to exploit them successfully,
but half of the job would already have been done.
As the reader will see in this work, the issue is tackled by building a somewhat complex
architecture that ends in an Artificial Neural Network. A quick overview of that architecture is
as follows: first, find the URLs inside a given site that lead to the product pages containing the
desired data (like URLs that lead to "action figure" products on
the site ebay.com); second, for each URL, extract its HTML and take a screenshot of the
page, and store this data in a suitable and scalable fashion; third, label the data that will be fed to
the NN; fourth, prepare the aforementioned data to be input to the NN; fifth, train the NN; and
sixth, deploy the NN to make [hopefully accurate] predictions.
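The six stages above can be sketched as a data-flow skeleton. Every function body here is a stand-in (the real system crawls, screenshots, labels and trains a network); only the flow between the stages is meant to mirror the description, and all names are invented for illustration.

```python
# Stage 1: URL discovery inside a site (stubbed with fabricated URLs).
def find_product_urls(site, query):
    return [f"{site}/item/{i}" for i in range(3)]

# Stage 2: fetch HTML and a screenshot, stored together per page.
def fetch(url):
    return {"url": url, "html": "<html>...</html>", "screenshot": b""}

# Stage 3: attach labels (in reality, human or supervised annotation).
def label(page):
    page["labels"] = {"name": None, "price": None}
    return page

# Stage 4: turn a labelled page into an (input, target) training pair.
def prepare(page):
    return page["html"], page["labels"]

# Stage 5: train the network (stubbed as a constant predictor).
def train(samples):
    return lambda html: {"name": "?", "price": "?"}

# Stage 6: deploy the model against a fresh page.
def deploy(model, url):
    return model(fetch(url)["html"])

samples = [prepare(label(fetch(u)))
           for u in find_product_urls("https://ebay.com", "action figure")]
model = train(samples)
prediction = deploy(model, "https://ebay.com/item/0")
print(len(samples), prediction)
```

The point of the skeleton is that stages 1-4 are pure data plumbing; all of the learning difficulty is concentrated in stage 5.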
3D-PhysNet: Learning the Intuitive Physics of Non-Rigid Object Deformations
The ability to interact and understand the environment is a fundamental
prerequisite for a wide range of applications from robotics to augmented
reality. In particular, predicting how deformable objects will react to applied
forces in real time is a significant challenge. This is further confounded by
the fact that shape information about encountered objects in the real world is
often impaired by occlusions, noise and missing regions e.g. a robot
manipulating an object will only be able to observe a partial view of the
entire solid. In this work we present a framework, 3D-PhysNet, which is able to
predict how a three-dimensional solid will deform under an applied force using
intuitive physics modelling. In particular, we propose a new method to encode
the physical properties of the material and the applied force, enabling
generalisation over materials. The key is to combine deep variational
autoencoders with adversarial training, conditioned on the applied force and
the material properties. We further propose a cascaded architecture that takes
a single 2.5D depth view of the object and predicts its deformation. Training
data is provided by a physics simulator. The network is fast enough to be used
in real-time applications from partial views. Experimental results show the
viability and the generalisation properties of the proposed architecture.
Comment: in IJCAI 2018
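The conditioning idea described above, feeding the applied force and the material parameters into the generator so that one network generalises across materials, can be sketched as follows. The vector sizes, the single linear layer, and the parameter names are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioned_decode(latent, force, material, w):
    """Toy decoder: the shape code is concatenated with the force and
    material condition vectors before decoding, so the same weights
    serve all materials. One tanh layer stands in for the network."""
    code = np.concatenate([latent, force, material])
    return np.tanh(w @ code)  # stand-in for the predicted deformation

latent = rng.standard_normal(16)     # shape encoding of the 2.5D depth view
force = np.array([0.0, 0.0, -9.8])   # applied force vector (assumed 3-D)
material = np.array([0.3, 2.1])      # e.g. stiffness-like parameters (invented)
w = rng.standard_normal((32, 16 + 3 + 2))

out = conditioned_decode(latent, force, material, w)
print(out.shape)  # (32,)
```

Changing only `force` or `material` changes the output while the weights stay fixed, which is the mechanism that lets a single trained model cover many materials.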