Examining the Relationship Between Lignocellulosic Biomass Structural Constituents and Its Flow Behavior
Lignocellulosic biomass, sourced from woody and herbaceous plants, is a promising source of inexpensive, abundant, and potentially carbon-neutral energy. One of the leading limitations of using lignocellulosic biomass as a feedstock for bioenergy products is the flow issues encountered during biomass conveyance in biorefineries. In the biorefining process, the biomass feedstock undergoes flow through a variety of conveyance systems. The inherent variability of the feedstock materials, as evidenced by their complex microstructural composition and non-uniform morphology, coupled with the varying flow conditions in the conveyance systems, gives rise to flow issues such as bridging, ratholing, and clogging. These issues slow down the conveyance process, affect machine life, and potentially lead to partial or even complete shutdown of the biorefinery. Hence, we need to improve our fundamental understanding of biomass feedstock flow physics and mechanics to address the flow issues and improve biorefinery economics.
This dissertation research examines the fundamental relationship between structural constituents of diverse lignocellulosic biomass materials, i.e., cellulose, hemicellulose, and lignin, their morphology, and the impact of the structural composition and morphology on their flow behavior.
First, we prepared and characterized biomass feedstocks of different chemical compositions and morphologies. Then, we conducted our fundamental investigation experimentally, through physical flow characterization tests, and computationally through high-fidelity discrete element modeling. Finally, we statistically analyzed the relative influence of the properties of lignocellulosic biomass assemblies on flow behavior to determine the most critical properties and the optimum values of flow parameters. Our research provides an experimental and computational framework to generalize findings to a wider portfolio of biomass materials. It will help the bioenergy community to design more efficient biorefining machinery and equipment, reduce the risk of failure, and improve the overall commercial viability of the bioenergy industry.
Iterative Superquadric Recomposition of 3D Objects from Multiple Views
Humans are good at recomposing novel objects, i.e. they can identify
commonalities between unknown objects from general structure to finer detail,
an ability difficult to replicate by machines. We propose a framework, ISCO, to
recompose an object using 3D superquadrics as semantic parts directly from 2D
views without training a model that uses 3D supervision. To achieve this, we
optimize the superquadric parameters that compose a specific instance of the
object, comparing its rendered 3D view and 2D image silhouette. Our ISCO
framework iteratively adds new superquadrics wherever the reconstruction error
is high, abstracting first coarse regions and then finer details of the target
object. With this simple coarse-to-fine inductive bias, ISCO provides
consistent superquadrics for related object parts, despite not having any
semantic supervision. Since ISCO does not train any neural network, it is also
inherently robust to out-of-distribution objects. Experiments show that,
compared to recent single instance superquadrics reconstruction approaches,
ISCO provides consistently more accurate 3D reconstructions, even from images
in the wild. Code available at https://github.com/ExplainableML/ISCO. Comment:
Accepted at ICCV 2023.
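Superquadrics of the kind ISCO fits are commonly defined by an implicit inside-outside function with scale parameters (a1, a2, a3) and shape exponents (eps1, eps2). The sketch below shows that standard formulation; it is a generic illustration with names of my own choosing, not the authors' code:

```python
import numpy as np

def superquadric_inside_outside(points, scale, eps):
    """Standard superquadric inside-outside function F(x, y, z).

    F < 1: point lies inside the superquadric; F = 1: on the surface;
    F > 1: outside. `scale` = (a1, a2, a3) controls size along each axis,
    `eps` = (eps1, eps2) controls roundness/squareness.
    """
    x, y, z = (np.abs(points) / np.asarray(scale, dtype=float)).T
    e1, e2 = eps
    xy = (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1)
    return xy + z ** (2.0 / e1)

# A unit sphere is the special case eps1 = eps2 = 1.
pts = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
f = superquadric_inside_outside(pts, scale=(1, 1, 1), eps=(1, 1))
# f[0] < 1 (inside the unit sphere), f[1] > 1 (outside)
```

Optimizing the scale and shape parameters of such functions against rendered silhouettes is the kind of per-instance fitting the abstract describes.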
Model-Free 3D Shape Control of Deformable Objects Using Novel Features Based on Modal Analysis
Shape control of deformable objects is a challenging and important robotic
problem. This paper proposes a model-free controller using novel 3D global
deformation features based on modal analysis. Unlike most existing controllers
using geometric features, our controller employs a physically-based deformation
feature by decoupling 3D global deformation into low-frequency mode shapes.
Although modal analysis is widely adopted in computer vision and simulation, it
has not been used in robotic deformation control. We develop a new model-free
framework for modal-based deformation control under robot manipulation.
Physical interpretation of mode shapes enables us to formulate an analytical
deformation Jacobian matrix mapping the robot manipulation onto changes of the
modal features. In the Jacobian matrix, unknown geometry and physical
properties of the object are treated as low-dimensional modal parameters which
can be used to linearly parameterize the closed-loop system. Thus, an adaptive
controller with proven stability can be designed to deform the object while
online estimating the modal parameters. Simulations and experiments are
conducted using linear, planar, and solid objects under different settings. The
results not only confirm the superior performance of our controller but also
demonstrate its advantages over the baseline method. Comment: Accepted by the
IEEE Transactions on Robotics. IEEE copyright.
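The decomposition of 3D deformation into low-frequency mode shapes rests on standard linear modal analysis: solving the generalized eigenproblem K·phi = omega²·M·phi and keeping the eigenvectors with the smallest eigenvalues. A minimal SciPy sketch of that textbook step (illustrative only, not the paper's controller):

```python
import numpy as np
from scipy.linalg import eigh

def low_frequency_modes(K, M, n_modes):
    """Solve K @ phi = w2 * M @ phi and return the n_modes mode shapes
    with the lowest natural frequencies (smallest eigenvalues w2).
    K: stiffness matrix, M: mass matrix (symmetric positive definite)."""
    w2, phi = eigh(K, M)  # eigenvalues returned in ascending order
    return w2[:n_modes], phi[:, :n_modes]

# Toy example: three unit masses on a chain of unit springs, fixed at
# both ends, giving the classic tridiagonal stiffness matrix.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
M = np.eye(3)
w2, modes = low_frequency_modes(K, M, n_modes=2)
# Lowest eigenvalue of this chain is 2 - sqrt(2)
```

Truncating to a few such modes is what yields the low-dimensional deformation features the abstract refers to.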
Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans
We propose an unsupervised method for parsing large 3D scans of real-world
scenes into interpretable parts. Our goal is to provide a practical tool for
analyzing 3D scenes with unique characteristics in the context of aerial
surveying and mapping, without relying on application-specific user
annotations. Our approach is based on a probabilistic reconstruction model that
decomposes an input 3D point cloud into a small set of learned prototypical
shapes. Our model provides an interpretable reconstruction of complex scenes
and leads to relevant instance and semantic segmentations. To demonstrate the
usefulness of our results, we introduce a novel dataset of seven diverse aerial
LiDAR scans. We show that our method outperforms state-of-the-art unsupervised
methods in terms of decomposition accuracy while remaining visually
interpretable. Our method offers a significant advantage over existing
approaches, as it does not require any manual annotations, making it a
practical and efficient tool for 3D scene analysis. Our code and dataset are
available at https://imagine.enpc.fr/~loiseaur/learnable-earth-parse
Investigating Scene Understanding for Robotic Grasping: From Pose Estimation to Explainable AI
In the rapidly evolving field of robotics, the ability to accurately grasp and manipulate objects—known as robotic grasping—is a cornerstone of autonomous operation. This capability is pivotal across a multitude of applications, from industrial manufacturing automation to supply chain management, and is a key determinant of a robot's ability to interact effectively with its environment. Central to this capability is the concept of scene understanding, a complex task that involves interpreting the robot's environment to facilitate decision-making and action planning. This thesis presents a comprehensive exploration of scene understanding for robotic grasping, with a particular emphasis on pose estimation, a critical aspect of scene understanding.
Pose estimation, the process of determining the position and orientation of objects within the robot's environment, is a crucial component of robotic grasping. It provides the robot with the necessary spatial information about the objects in the scene, enabling it to plan and execute grasping actions effectively. However, many current pose estimation methods provide relative pose compared to a 3D model, which lacks descriptiveness without referencing the 3D model. This thesis explores the use of keypoints and superquadrics as more general and descriptive representations of an object's pose. These novel approaches address the limitations of traditional methods and significantly enhance the generalizability and descriptiveness of pose estimation, thereby improving the overall effectiveness of robotic grasping.
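Once keypoints are matched between an observation and a reference, a rigid pose can be recovered in closed form with the Kabsch algorithm. The sketch below is a generic illustration of keypoint-based pose estimation, not code from the thesis:

```python
import numpy as np

def kabsch_pose(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    estimated from matched 3D keypoints via the Kabsch algorithm."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2, 3])
dst = src @ R_true.T + t_true
R, t = kabsch_pose(src, dst)   # recovers R_true, t_true exactly here
```

This closed-form step is also what makes keypoint representations attractive: the pose is meaningful without access to a full 3D model at run time.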
In addition to pose estimation, this thesis briefly touches upon the importance of uncertainty estimation and explainable AI in the context of robotic grasping. It introduces the concept of multimodal consistency for uncertainty estimation, providing a reliable measure of uncertainty that can enhance decision-making in human-in-the-loop situations. Furthermore, it explores the realm of explainable AI, presenting a method for gaining deeper insights into deep learning models, thereby enhancing their transparency and interpretability.
In summary, this thesis presents a comprehensive approach to scene understanding for robotic grasping, with a particular emphasis on pose estimation. It addresses key challenges and advances the state of the art in this critical area of robotics research. The research is structured around five published papers, each contributing to a unique aspect of the overall study.
Neural Deformable Models for 3D Bi-Ventricular Heart Shape Reconstruction and Modeling from 2D Sparse Cardiac Magnetic Resonance Imaging
We propose a novel neural deformable model (NDM) targeting the
reconstruction and modeling of 3D bi-ventricular shape of the heart from 2D
sparse cardiac magnetic resonance (CMR) imaging data. We model the
bi-ventricular shape using blended deformable superquadrics, which are
parameterized by a set of geometric parameter functions and are capable of
deforming globally and locally. While global geometric parameter functions and
deformations capture gross shape features from visual data, local deformations,
parameterized as neural diffeomorphic point flows, can be learned to recover
the detailed heart shape. Different from iterative optimization methods used in
conventional deformable model formulations, NDMs can be trained to learn such
geometric parameter functions, global and local deformations from a shape
distribution manifold. Our NDM can learn to densify a sparse cardiac point
cloud with arbitrary scales and generate high-quality triangular meshes
automatically. It also enables the implicit learning of dense correspondences
among different heart shape instances for accurate cardiac shape registration.
Furthermore, the parameters of NDM are intuitive, and can be used by a
physician without sophisticated post-processing. Experimental results on a
large CMR dataset demonstrate the improved performance of NDM over conventional
methods. Comment: Accepted by ICCV 2023.
Modal-Graph 3D Shape Servoing of Deformable Objects with Raw Point Clouds
Deformable object manipulation (DOM) with point clouds has great potential as
non-rigid 3D shapes can be measured without detecting and tracking image
features. However, robotic shape control of deformable objects with point
clouds is challenging due to the unknown point-wise correspondences, the noisy
partial observability of raw point clouds, and the difficulty of modeling the
relationship between point clouds and robot motions. To tackle these
challenges, this paper introduces a novel modal-graph framework for the
model-free shape servoing of deformable objects with raw point clouds. Unlike
the existing works studying the object's geometry structure, our method builds
a low-frequency deformation structure for the DOM system, which is robust to
the measurement irregularities. The built modal representation and graph
structure enable us to directly extract low-dimensional deformation features
from raw point clouds. Such extraction requires no extra point processing of
registrations, refinements, and occlusion removal. Moreover, to shape the
object using the extracted features, we design an adaptive robust controller
which is proven to be input-to-state stable (ISS) without offline learning or
identification of the object's physical and geometric models. Extensive
simulations and experiments are conducted to validate the effectiveness of our
method for linear, planar, tubular, and solid objects under different settings.
A Survey of Methods for Converting Unstructured Data to CSG Models
The goal of this document is to survey existing methods for recovering CSG
representations from unstructured data such as 3D point-clouds or polygon
meshes. We review and discuss related topics such as the segmentation and
fitting of the input data. We cover techniques from solid modeling and CAD for
polyhedron-to-CSG and B-rep-to-CSG conversion. We look at approaches from
program synthesis, evolutionary techniques (such as genetic programming and
genetic algorithms), and deep learning. Finally, we conclude with a
discussion of techniques for the generation of computer programs representing
solids (not just CSG models) and higher-level representations (such as, for
example, the ones based on sketch-and-extrude or feature-based operations).
Comment: 29 pages.
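As a toy illustration of the representations the survey targets: solids can be modeled as implicit functions (negative inside, positive outside), and the CSG Boolean operations reduce to min/max combinations of those functions. The helper names below are made up for illustration and are not from the survey:

```python
import numpy as np

# Solids as implicit functions p -> float, negative inside the solid.
def sphere(center, r):
    return lambda p: np.linalg.norm(p - center) - r

def union(a, b):        return lambda p: min(a(p), b(p))
def intersection(a, b): return lambda p: max(a(p), b(p))
def difference(a, b):   return lambda p: max(a(p), -b(p))

# CSG tree: a unit sphere with a smaller sphere carved out of its side.
shape = difference(sphere(np.zeros(3), 1.0),
                   sphere(np.array([1.0, 0.0, 0.0]), 0.8))

# The origin is inside the first sphere and outside the second,
# so it remains inside the difference.
```

Recovering such a tree (primitives plus the Boolean operators combining them) from an unstructured point cloud or mesh is exactly the inverse problem the surveyed methods address.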
Convex Decomposition of Indoor Scenes
We describe a method to parse a complex, cluttered indoor scene into
primitives which offer a parsimonious abstraction of scene structure. Our
primitives are simple convexes. Our method uses a learned regression procedure
to parse a scene into a fixed number of convexes from RGBD input, and can
optionally accept segmentations to improve the decomposition. The result is
then polished with a descent method which adjusts the convexes to produce a
very good fit, and greedily removes superfluous primitives. Because the entire
scene is parsed, we can evaluate using traditional depth, normal, and
segmentation error metrics. Our evaluation procedure demonstrates that the
error from our primitive representation is comparable to that of predicting
depth from a single image. Comment: 18 pages, 12 figures.
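A convex primitive of the kind used here can be written as an intersection of half-spaces, so testing a point against it is a single max over linear functions. A minimal sketch of that representation (generic illustration, not the paper's code):

```python
import numpy as np

def convex_value(p, normals, offsets):
    """A convex as the intersection of half-spaces n_i . p <= d_i.
    The max over the half-space functions is negative exactly when p
    lies strictly inside the convex, zero on its boundary."""
    return np.max(normals @ p - offsets)

# Unit axis-aligned cube centred at the origin: six half-spaces.
normals = np.array([[ 1.0, 0, 0], [-1, 0, 0],
                    [ 0,  1, 0], [ 0, -1, 0],
                    [ 0,  0, 1], [ 0,  0, -1]])
offsets = np.full(6, 0.5)

inside = convex_value(np.zeros(3), normals, offsets)           # negative
outside = convex_value(np.array([1.0, 0, 0]), normals, offsets)  # positive
```

Adjusting the half-space parameters of each convex to fit the observed depth, then greedily dropping convexes that no longer reduce the error, matches the polish-and-prune scheme the abstract outlines.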