3D simulation of complex shading affecting PV systems taking benefit from the power of graphics cards developed for the video game industry
Shading reduces the power output of a photovoltaic (PV) system. The design
engineering of PV systems requires modeling and evaluating shading losses. Some
PV systems are affected by complex shading scenes whose resulting PV energy
losses are very difficult to evaluate with current modeling tools. Several
specialized PV design and simulation software packages include the possibility
of evaluating shading losses. They generally provide a Graphical User Interface
(GUI) through which the user can draw a 3D shading scene, and then evaluate its
corresponding PV energy losses. The complexity of the objects that these tools
can handle is relatively limited. We have created a software solution, 3DPV,
which evaluates the energy losses induced by complex 3D scenes on PV
generators. The 3D objects can be imported from specialized 3D modeling
software or from a 3D object library. The shadows cast by this 3D scene on the
PV generator are then directly evaluated on the Graphics Processing Unit
(GPU). Thanks to the recent development of GPUs for the video game industry,
the shadows can be evaluated with a very high spatial resolution that reaches
well beyond the PV cell level, in very short calculation times. A PV simulation
model then translates the geometrical shading into PV energy output losses.
3DPV has been implemented using WebGL, which allows it to run directly from a
Web browser, without requiring any local installation by the user. This also
allows taking full advantage of the information already available on the
Internet, such as 3D object libraries. This contribution describes, step by
step, the method that allows 3DPV to evaluate the PV energy losses caused by
complex shading. We then illustrate the results of this methodology on several
application cases encountered in the world of PV systems design.
Comment: 5 pages, 9 figures, conference proceedings, 29th European Photovoltaic
Solar Energy Conference and Exhibition, Amsterdam, 201
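The final step the abstract describes, translating a geometric shadow mask into PV energy output losses, can be illustrated with a toy model. This is a hedged sketch, not 3DPV's actual simulation model: the fixed diffuse fraction and the worst-cell-limits-the-string rule are simplifying assumptions, and all names are hypothetical.

```python
import numpy as np

def shading_loss(shadow_mask, diffuse_fraction=0.2):
    """Estimate fractional PV energy loss from a per-cell shadow mask.

    shadow_mask: 2D array in [0, 1], one row per series-connected
        string, giving the fraction of each cell's area that is shaded.
    diffuse_fraction: hypothetical share of irradiance that is diffuse
        and still reaches shaded cell areas.
    """
    # Irradiance reaching each cell: full beam + diffuse when unshaded,
    # only the diffuse component on the shaded area.
    cell_irradiance = (1 - shadow_mask) + shadow_mask * diffuse_fraction
    # Crude series-string approximation without bypass diodes: the
    # worst cell limits the current of the whole string.
    string_irradiance = cell_irradiance.min(axis=1)
    produced = string_irradiance.sum()
    unshaded = shadow_mask.shape[0]  # each string contributes 1 when unshaded
    return 1.0 - produced / unshaded

mask = np.zeros((3, 4))   # 3 strings of 4 cells, initially unshaded
mask[0, 1] = 1.0          # fully shade one cell: its whole string degrades
loss = shading_loss(mask)
```

With one fully shaded cell, the affected string drops to the diffuse level, so the loss is far larger than the shaded area fraction, which is the qualitative effect complex-shading tools must capture.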
3D Neural Embedding Likelihood for Robust Probabilistic Inverse Graphics
The ability to perceive and understand 3D scenes is crucial for many
applications in computer vision and robotics. Inverse graphics is an appealing
approach to 3D scene understanding that aims to infer the 3D scene structure
from 2D images. In this paper, we introduce probabilistic modeling to the
inverse graphics framework to quantify uncertainty and achieve robustness in 6D
pose estimation tasks. Specifically, we propose 3D Neural Embedding Likelihood
(3DNEL) as a unified probabilistic model over RGB-D images, and develop
efficient inference procedures on 3D scene descriptions. 3DNEL effectively
combines learned neural embeddings from RGB with depth information to improve
robustness in sim-to-real 6D object pose estimation from RGB-D images.
Performance on the YCB-Video dataset is on par with state-of-the-art yet is
much more robust in challenging regimes. In contrast to discriminative
approaches, 3DNEL's probabilistic generative formulation jointly models
multi-object scenes, quantifies uncertainty in a principled way, and handles
object pose tracking under heavy occlusion. Finally, 3DNEL provides a
principled framework for incorporating prior knowledge about the scene and
objects, which allows natural extension to additional tasks like camera pose
tracking from video.
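The idea of a unified probabilistic model over RGB-D images can be sketched as a per-pixel mixture likelihood for scoring pose hypotheses. This is an illustrative simplification, not 3DNEL's actual formulation: the Gaussian depth term, cosine embedding term, and uniform outlier component are assumptions, as are all parameter names.

```python
import numpy as np

def pose_log_likelihood(obs_depth, obs_emb, rend_depth, rend_emb,
                        sigma=0.01, p_outlier=0.1):
    """Score a 6D pose hypothesis against an observed RGB-D frame.

    Depth agreement is a per-pixel Gaussian; appearance agreement is
    the cosine similarity of learned embedding vectors (here just
    arrays); a uniform outlier component gives robustness to pixels
    neither term explains.
    """
    depth_term = np.exp(-0.5 * ((obs_depth - rend_depth) / sigma) ** 2)
    # Cosine similarity between embedding vectors, mapped to [0, 1].
    num = (obs_emb * rend_emb).sum(axis=-1)
    den = (np.linalg.norm(obs_emb, axis=-1) *
           np.linalg.norm(rend_emb, axis=-1) + 1e-8)
    emb_term = 0.5 * (1 + num / den)
    per_pixel = (1 - p_outlier) * depth_term * emb_term + p_outlier
    return np.log(per_pixel).sum()
```

Because the score is a proper log-likelihood, hypotheses for multiple objects and poses can be compared or tracked within one generative model, which is the robustness argument the abstract makes against purely discriminative pose estimators.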
Occlusion resistant learning of intuitive physics from videos
To reach human performance on complex tasks, a key ability for artificial
systems is to understand physical interactions between objects, and predict
future outcomes of a situation. This ability, often referred to as intuitive
physics, has recently received attention, and several methods have been
proposed to learn these physical rules from video sequences. Yet most of these methods are
restricted to the case where no, or only limited, occlusions occur. In this
work we propose a probabilistic formulation of learning intuitive physics in 3D
scenes with significant inter-object occlusions. In our formulation, object
positions are modeled as latent variables enabling the reconstruction of the
scene. We then propose a series of approximations that make this problem
tractable. Object proposals are linked across frames using a combination of a
recurrent interaction network, modeling the physics in object space, and a
compositional renderer, modeling the way in which objects project onto pixel
space. We demonstrate significant improvements over state-of-the-art in the
intuitive physics benchmark of IntPhys. We apply our method to a second dataset
with increasing levels of occlusions, showing it realistically predicts
segmentation masks up to 30 frames in the future. Finally, we also show results
on predicting the motion of objects in real videos.
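The key mechanism, latent object positions that evolve under a dynamics model and are only then projected to pixel space, can be shown with a toy stand-in. The constant-velocity update below replaces the paper's learned recurrent interaction network, and the circle rasterizer replaces its compositional renderer; both are illustrative assumptions.

```python
import numpy as np

def predict_masks(positions, velocities, radius, steps, shape=(32, 32)):
    """Roll latent object positions forward and render segmentation masks.

    positions, velocities: lists of (row, col) per object. Because the
    state lives in object space, objects remain tracked and can be
    re-rendered even while occluded in pixel space.
    """
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pos = np.asarray(positions, dtype=float)
    vel = np.asarray(velocities, dtype=float)
    masks = []
    for _ in range(steps):
        pos = pos + vel                      # latent dynamics step
        frame = np.zeros(shape, dtype=int)
        for i, (y, x) in enumerate(pos, start=1):
            inside = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
            frame[inside] = i                # later objects occlude earlier ones
        masks.append(frame)
    return masks
```

Predicting masks 30 frames ahead, as the abstract reports, amounts to iterating the latent dynamics 30 times before rendering, which is why occlusion in intermediate frames does not break the prediction.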
Real-time Spatial Detection and Tracking of Resources in a Construction Environment
Construction accidents involving heavy equipment, as well as poor decision making, can stem from inadequate knowledge of the site environment, and in both cases may lead to work interruptions and costly delays. Supporting the construction environment with three-dimensional (3D) models generated in real time can help prevent accidents as well as support management by modeling infrastructure assets in 3D. Such models can be integrated into the path planning of construction equipment operations for obstacle avoidance, or into a 4D model that simulates construction processes. Detecting and guiding resources, such as personnel, machines, and materials, in and to the right place on time requires methods and technologies that supply information in real time. This paper presents research in real-time 3D laser scanning and modeling using range scanning technology with a high frame update rate. Existing and emerging sensors and techniques in three-dimensional modeling are explained. The presented research successfully developed computational models and algorithms for the real-time detection, tracking, and three-dimensional modeling of static and dynamic construction resources, such as workforce, machines, equipment, and materials, based on a 3D video range camera. In particular, the proposed algorithm for rapidly modeling three-dimensional scenes is explained. Laboratory and outdoor field experiments conducted to validate the algorithm's performance, and their results, are discussed.
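One core step of such real-time range-camera processing, separating new or moving objects from the static site background, can be sketched as coarse-grid depth differencing. This is a hedged illustration, not the paper's algorithm: the background model, cell pooling, and threshold rule are all assumptions.

```python
import numpy as np

def detect_dynamic(depth_frame, background_depth, cell=4, thresh=0.15):
    """Flag occupancy-grid cells where the scene departs from background.

    The depth frame is pooled into coarse grid cells and compared to a
    static background depth model; cells whose average depth is closer
    than the background by more than `thresh` metres are reported as
    candidate (possibly moving) resources.
    """
    h, w = depth_frame.shape
    diff = background_depth - depth_frame     # positive where something new sits
    diff = diff[:h - h % cell, :w - w % cell] # trim to whole cells
    grid = diff.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
    return grid > thresh
```

Pooling to a coarse grid is what keeps such a step cheap enough to run at the camera's frame rate; tracking would then link flagged cells across successive frames.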
RGBD Datasets: Past, Present and Future
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
Comment: 8 pages excluding references (CVPR style)