Home and healthcare. The prospect of home adaptation through a computational design decision-support system
This paper presents ongoing research to define the framework of a computational design approach, based on spatial analysis and spatial synthesis, that implements multi-criteria evaluations and provides evidence of the performance of design alternatives in the specific case of home adaptation for healthcare at home. European health systems place among their priority objectives the strengthening of healthcare provision at home, to guarantee aging in place for elderly people and, at the same time, to limit the unnecessary use of resources. Existing homes must therefore provide adequate safety, comfort, and accessibility features to ensure a high quality of life for care receivers and to facilitate caregivers' tasks. To address the complexity of the requirements to be met, we propose a spatial decision support system (SDSS) that applies multi-criteria assessments to ergonomic design problems at the spatial scale of apartment homes. The system is intended to streamline and assist designers and homeowners in planning home-adaptation interventions for healthcare. Such design problems can be formulated as decision problems with costs and benefits modeled within validity constraints and quality criteria/objectives. In this specific field of study, the system evaluates each design alternative's degree of compliance with the accessibility and visibility quality criteria. Reiterating the evaluation mechanism allows the alternatives to be ranked and supports the selection of satisfactory technical solutions, identified through an informed and well-balanced trade-off between the relevant quality criteria.
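The abstract describes ranking design alternatives by their compliance with quality criteria. A minimal sketch of such a multi-criteria ranking is shown below; the alternative names, criterion scores, and weights are all hypothetical illustrations, not values from the paper.

```python
# Hypothetical multi-criteria ranking of home-adaptation alternatives.
# Per-criterion compliance scores in [0, 1] and the weights are made up.

def score(alternative, weights):
    """Weighted sum of per-criterion compliance scores."""
    return sum(weights[c] * alternative[c] for c in weights)

alternatives = {
    "widen_doorway": {"accessibility": 0.9, "visibility": 0.6},
    "relocate_bath": {"accessibility": 0.7, "visibility": 0.8},
    "install_ramp":  {"accessibility": 0.8, "visibility": 0.5},
}
weights = {"accessibility": 0.6, "visibility": 0.4}

# Rank alternatives best-to-worst by their weighted compliance score.
ranked = sorted(alternatives,
                key=lambda a: score(alternatives[a], weights),
                reverse=True)
print(ranked)  # → ['widen_doorway', 'relocate_bath', 'install_ramp']
```

Reiterating the evaluation with different weights corresponds to exploring different trade-offs between the quality criteria.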
SG-VAE: Scene Grammar Variational Autoencoder to Generate New Indoor Scenes
Deep generative models have been used in recent years to learn coherent latent representations in order to synthesize high-quality images. In this work, we propose a neural network that learns a generative model for sampling consistent indoor scene layouts. Our method learns the co-occurrences, and appearance parameters such as shape and pose, of different object categories through a grammar-based auto-encoder, resulting in a compact and accurate representation of scene layouts. In contrast to existing grammar-based methods with a user-specified grammar, we construct the grammar automatically by extracting a set of production rules from object co-occurrences in the training data. The extracted grammar can represent a scene as an augmented parse tree. The proposed auto-encoder encodes these parse trees to a latent code and decodes the latent code back to a parse tree, thereby ensuring the generated scene is always valid. We experimentally demonstrate that the proposed auto-encoder learns not only to generate valid scenes (i.e., the arrangements and appearances of objects) but also coherent latent representations in which nearby latent samples decode to similar scene outputs. The obtained generative model is applicable to several computer vision tasks, such as 3D pose and layout estimation from RGB-D data.
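The idea of deriving grammar productions from object co-occurrences can be sketched as follows; the toy scene lists, the pair-counting scheme, and the frequency threshold are illustrative assumptions, not the paper's actual extraction procedure.

```python
# Illustrative sketch: derive candidate production rules (Root -> A B)
# from object co-occurrence counts in annotated scenes. The scenes and
# threshold are made up for demonstration.
from collections import Counter
from itertools import combinations

scenes = [
    ["bed", "nightstand", "lamp"],
    ["bed", "nightstand", "wardrobe"],
    ["desk", "chair", "lamp"],
    ["bed", "nightstand", "lamp"],
]

pair_counts = Counter()
for scene in scenes:
    # Count each unordered pair of distinct object categories once per scene.
    for a, b in combinations(sorted(set(scene)), 2):
        pair_counts[(a, b)] += 1

# Keep frequently co-occurring pairs as candidate productions.
rules = [pair for pair, n in pair_counts.items() if n >= 3]
print(rules)  # → [('bed', 'nightstand')]
```

Parse trees built over such rules give the auto-encoder a structured target, which is what guarantees that every decoded scene is grammatically valid.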
Learning 3D Scene Priors with 2D Supervision
Holistic 3D scene understanding entails estimating both the layout configuration and the object geometry in a 3D environment. Recent works have shown advances in 3D scene estimation from various input modalities (e.g., images, 3D scans) by leveraging 3D supervision (e.g., 3D bounding boxes or CAD models), for which collection at scale is expensive and often intractable. To address this shortcoming, we propose a new method to learn 3D scene priors of layout and shape without requiring any 3D ground truth. Instead, we rely on 2D supervision from multi-view RGB images. Our method represents a 3D scene as a latent vector, from which we can progressively decode a sequence of objects characterized by their class categories, 3D bounding boxes, and meshes. With our trained autoregressive decoder representing the scene prior, our method facilitates many downstream applications, including scene synthesis, interpolation, and single-view reconstruction. Experiments on 3D-FRONT and ScanNet show that our method outperforms the state of the art in single-view reconstruction and achieves state-of-the-art results in scene synthesis against baselines that require 3D supervision.
Video: https://youtu.be/YT7MEdygRoY Project: https://yinyunie.github.io/sceneprior-page
Modeling a Social Placement Cost to Extend Navigation Among Movable Obstacles (NAMO) Algorithms
DOI link not yet functional; see IEEE Xplore directly: https://ieeexplore.ieee.org/abstract/document/9340892
Current Navigation Among Movable Obstacles (NAMO) algorithms focus on finding a path for the robot that optimizes only the displacement cost of navigating and moving obstacles out of its way. However, in a human environment, this focus may lead the robot to leave the space in a socially inappropriate state that hampers human activity (e.g., by blocking access to doors, corridors, rooms, or objects of interest). In this paper, we tackle this problem of "social placement choice" by building a social occupation costmap from geometrical information only. We present how existing NAMO algorithms can be extended to exploit this new costmap. We then show the effectiveness of this approach in simulation and provide additional evaluation criteria to assess the social acceptability of plans.
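A geometric social occupation costmap of the kind described can be sketched minimally as follows; the grid, the single doorway cell, and the linear distance decay are illustrative assumptions, not the paper's model.

```python
# Illustrative sketch of a social occupation costmap: cells near a
# doorway get a high placement cost, so an extended NAMO planner would
# avoid leaving moved obstacles there. Grid size, door location, and the
# linear decay are made-up assumptions.
import math

W, H = 10, 6
door = (0, 3)  # doorway cell (x, y)

def social_cost(cell, radius=3.0):
    """1.0 at the doorway, decaying linearly to 0 beyond `radius`."""
    d = math.dist(cell, door)
    return max(0.0, 1.0 - d / radius)

costmap = [[social_cost((x, y)) for x in range(W)] for y in range(H)]

# A planner would add this social cost to the displacement cost of each
# candidate obstacle placement and prefer the lowest total.
best = min(((x, y) for y in range(H) for x in range(W)), key=social_cost)
print(costmap[3][0], best)  # the door cell itself has maximal cost 1.0
```

Because the map is built from geometry alone, it needs no semantic labels beyond knowing where passages such as doorways are.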