5 research outputs found
LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction
Accurate reconstruction of both the geometric and topological details of a 3D
object from a single 2D image embodies a fundamental challenge in computer
vision. Existing explicit/implicit solutions to this problem struggle to
recover self-occluded geometry and/or faithfully reconstruct topological shape
structures. To resolve this dilemma, we introduce LIST, a novel neural
architecture that leverages local and global image features to accurately
reconstruct the geometric and topological structure of a 3D object from a
single image. We utilize global 2D features to predict a coarse shape of the
target object and then use it as a base for higher-resolution reconstruction.
By leveraging both local 2D features from the image and 3D features from the
coarse prediction, we can predict the signed distance between an arbitrary
point and the target surface via an implicit predictor with great accuracy.
Furthermore, our model does not require camera estimation or pixel alignment,
and its reconstruction is not biased toward the input-view direction.
Through qualitative and quantitative analysis, we show the superiority of our
model in reconstructing 3D objects from both synthetic and real-world images
against the state of the art.
Comment: To be published in the 2023 IEEE/CVF International Conference on Computer Vision (ICCV).
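The quantity LIST's implicit predictor regresses — the signed distance from an arbitrary query point to the target surface — can be illustrated with an analytic example. Here a sphere stands in for the learned surface; in the paper, a network conditioned on image features replaces this closed-form formula, and the final mesh is extracted as the zero level set:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from query points to a sphere surface:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Query three points against a unit sphere at the origin.
pts = np.array([[0.0, 0.0, 0.0],   # centre  -> -1 (inside)
                [1.0, 0.0, 0.0],   # surface ->  0
                [2.0, 0.0, 0.0]])  # outside -> +1
print(sphere_sdf(pts, np.zeros(3), 1.0))  # [-1.  0.  1.]
```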
Automated Reconstruction of 3D Open Surfaces from Sparse Point Clouds
Real-world 3D data may contain intricate details defined by salient surface
gaps. Automated reconstruction of these open surfaces (e.g., non-watertight
meshes) is a challenging problem for environment synthesis in mixed reality
applications. Current learning-based implicit techniques can achieve high
fidelity on closed-surface reconstruction. However, their dependence on the
distinction between the inside and outside of a surface makes them incapable of
reconstructing open surfaces. Recently, a new class of implicit functions has
shown promise in reconstructing open surfaces by regressing an unsigned
distance field. Yet, these methods rely on a discretized representation of the
raw data, which loses important surface details and can lead to outliers in the
reconstruction. We propose IPVNet, a learning-based implicit model that
predicts the unsigned distance between a surface and a query point in 3D space
by leveraging both raw point cloud data and its discretized voxel counterpart.
Experiments on synthetic and real-world public datasets demonstrate that
IPVNet outperforms the state of the art while producing far fewer outliers in
the reconstruction.
Comment: To be presented at the 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) Workshop on Photorealistic Image and Environment Synthesis for Mixed Reality (PIES-MR).
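For intuition, the unsigned distance an implicit model like IPVNet regresses can be computed exactly against a raw point cloud as the nearest-neighbour distance — a common way of generating ground-truth supervision. Function and variable names here are illustrative, not the paper's code:

```python
import numpy as np

def unsigned_distance(query, cloud):
    """Unsigned distance from each query point to the nearest point in a
    raw point cloud -- the target quantity an implicit UDF model regresses."""
    # (Q, 1, 3) - (1, N, 3) -> pairwise distances of shape (Q, N)
    d = np.linalg.norm(query[:, None, :] - cloud[None, :, :], axis=-1)
    return d.min(axis=1)

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.array([[0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(unsigned_distance(q, cloud))  # [0.5 1. ]
```

Because the distance is unsigned, no inside/outside decision is needed, which is what lets such models represent open (non-watertight) surfaces.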
Mortality from gastrointestinal congenital anomalies at 264 hospitals in 74 low-income, middle-income, and high-income countries: a multicentre, international, prospective cohort study
Summary
Background Congenital anomalies are the fifth leading cause of mortality in children younger than 5 years globally.
Many gastrointestinal congenital anomalies are fatal without timely access to neonatal surgical care, but few studies
have been done on these conditions in low-income and middle-income countries (LMICs). We compared outcomes of
the seven most common gastrointestinal congenital anomalies in low-income, middle-income, and high-income
countries globally, and identified factors associated with mortality.
Methods We did a multicentre, international prospective cohort study of patients younger than 16 years, presenting to
hospital for the first time with oesophageal atresia, congenital diaphragmatic hernia, intestinal atresia, gastroschisis,
exomphalos, anorectal malformation, and Hirschsprung’s disease. Recruitment was of consecutive patients for a
minimum of 1 month between October, 2018, and April, 2019. We collected data on patient demographics, clinical
status, interventions, and outcomes using the REDCap platform. Patients were followed up for 30 days after primary
intervention, or 30 days after admission if they did not receive an intervention. The primary outcome was all-cause,
in-hospital mortality for all conditions combined and each condition individually, stratified by country income status.
We did a complete case analysis.
Findings We included 3849 patients with 3975 study conditions (560 with oesophageal atresia, 448 with congenital
diaphragmatic hernia, 681 with intestinal atresia, 453 with gastroschisis, 325 with exomphalos, 991 with anorectal
malformation, and 517 with Hirschsprung’s disease) from 264 hospitals (89 in high-income countries, 166 in middle-income
countries, and nine in low-income countries) in 74 countries. Of the 3849 patients, 2231 (58·0%) were male.
Median gestational age at birth was 38 weeks (IQR 36–39) and median bodyweight at presentation was 2·8 kg (2·3–3·3).
Mortality among all patients was 37 (39·8%) of 93 in low-income countries, 583 (20·4%) of 2860 in middle-income
countries, and 50 (5·6%) of 896 in high-income countries (p<0·0001 between all country income groups).
Gastroschisis had the greatest difference in mortality between country income strata (nine [90·0%] of ten in low-income
countries, 97 [31·9%] of 304 in middle-income countries, and two [1·4%] of 139 in high-income countries;
p≤0·0001 between all country income groups). Factors significantly associated with higher mortality for all patients
combined included country income status (low-income vs high-income countries, risk ratio 2·78 [95% CI 1·88–4·11],
p<0·0001; middle-income vs high-income countries, 2·11 [1·59–2·79], p<0·0001), sepsis at presentation (1·20
[1·04–1·40], p=0·016), higher American Society of Anesthesiologists (ASA) score at primary intervention
(ASA 4–5 vs ASA 1–2, 1·82 [1·40–2·35], p<0·0001; ASA 3 vs ASA 1–2, 1·58 [1·30–1·92], p<0·0001), surgical safety
checklist not used (1·39 [1·02–1·90], p=0·035), and ventilation or parenteral nutrition unavailable when needed
(ventilation 1·96 [1·41–2·71], p=0·0001; parenteral nutrition 1·35 [1·05–1·74], p=0·018). Administration of
parenteral nutrition (0·61 [0·47–0·79], p=0·0002) and use of a peripherally inserted central catheter (0·65
[0·50–0·86], p=0·0024) or percutaneous central line (0·69 [0·48–1·00], p=0·049) were associated with lower mortality.
Interpretation Unacceptable differences in mortality exist for gastrointestinal congenital anomalies between low-income,
middle-income, and high-income countries. Improving access to quality neonatal surgical care in LMICs will
be vital to achieve Sustainable Development Goal 3.2 of ending preventable deaths in neonates and children younger
than 5 years by 2030.
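The crude mortality proportions reported in the Findings can be checked with a few lines. Note that the abstract's risk ratios come from the study's adjusted model, so the crude low- vs high-income ratio computed here is larger than the reported 2·78:

```python
# Deaths and totals by country income group, as reported in the Findings.
deaths = {"low": (37, 93), "middle": (583, 2860), "high": (50, 896)}
rates = {group: d / n for group, (d, n) in deaths.items()}
print({g: round(100 * r, 1) for g, r in rates.items()})
# {'low': 39.8, 'middle': 20.4, 'high': 5.6}

# Crude (unadjusted) low- vs high-income risk ratio; the abstract's 2.78
# is adjusted for covariates, so it differs from this raw ratio.
crude_rr = rates["low"] / rates["high"]
print(round(crude_rr, 1))  # 7.1
```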
Synthesizing Dense and Colored 3D Point Clouds for Training Deep Neural Networks
3D point clouds are a compact homogeneous representation that have the ability to capture intricate details of the environment. They are useful for a wide variety of applications. For example, point clouds can be sampled from the mesh of manually designed objects to use as synthetic data for training deep learning networks. However, the geometry and texture of these point clouds are bounded by the resolution of the modeled objects. To facilitate learning with synthetic 3D point clouds, we present a novel conditional generative adversarial network that creates dense point clouds, with color, in an unsupervised manner. The difficulty of capturing intricate details at high resolutions is handled by a point transformer that progressively grows the network through the use of graph convolutions. Every training iteration evolves a point vector into a point cloud. Experimental results show that our network is capable of learning a 3D data distribution and produces colored point clouds with fine details at multiple resolutions.
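A minimal sketch of the kind of neighbourhood aggregation a graph convolution on a point cloud performs — each point mixes its own feature with features gathered from its k nearest neighbours. This is a simplified illustrative operator, not the paper's exact point transformer:

```python
import numpy as np

def knn_graph_conv(points, feats, k, W_self, W_nbr):
    """One illustrative graph-convolution step on a point cloud: combine
    each point's feature with the mean of its k nearest neighbours' features."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self from neighbours
    nbrs = np.argsort(d, axis=1)[:, :k]      # (N, k) neighbour indices
    agg = feats[nbrs].mean(axis=1)           # mean neighbour feature, (N, F)
    return feats @ W_self + agg @ W_nbr      # linear mix (activation omitted)

rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))                # 8 points in 3D
f = rng.normal(size=(8, 4))                  # 4-dim feature per point
out = knn_graph_conv(pts, f, k=3, W_self=np.eye(4), W_nbr=np.eye(4))
print(out.shape)  # (8, 4)
```

Stacking such layers while inserting new points between iterations is one way a network can be grown progressively to higher resolutions.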
3D Scene Generation via Unsupervised Object Synthesis
Understanding the geometric and semantic structure of a scene (scene understanding) is a crucial problem in robotics. Researchers have employed deep learning to address scene understanding problems such as instance segmentation, semantic segmentation, and object recognition. A major impediment to applying deep learning models is the requirement for enormous quantities of labeled data: performance increases in proportion to the amount of training data available. Manually accumulating these annotated datasets is an immense undertaking and not a viable long-term option. Synthetic scene generation is an active area of research at the intersection of computer graphics, computer vision, and robotics. Recent state-of-the-art systems automatically generate configurations of objects from synthetic 3D scene models using heuristic techniques. In contrast, we introduce a framework for unsupervised synthetic scene generation from raw 3D point cloud data. Our architecture builds on autoencoders and generative adversarial networks.
Texas Advanced Computing Center (TACC)