Toward Accurate Camera-based 3D Object Detection via Cascade Depth Estimation and Calibration
Recent camera-based 3D object detection is limited by the precision of
transforming from image to 3D feature spaces, as well as the accuracy of object
localization within the 3D space. This paper aims to address such a fundamental
problem of camera-based 3D object detection: How to effectively learn depth
information for accurate feature lifting and object localization. Different
from previous methods which directly predict depth distributions by using a
supervised estimation model, we propose a cascade framework consisting of two
depth-aware learning paradigms. First, a depth estimation (DE) scheme leverages
relative depth information to realize the effective feature lifting from 2D to
3D spaces. Furthermore, a depth calibration (DC) scheme introduces depth
reconstruction to further adjust the 3D object localization perturbation along
the depth axis. In practice, the DE is explicitly realized by using both the
absolute and relative depth optimization loss to promote the precision of depth
prediction, while the capability of DC is implicitly embedded into the
detection Transformer through a depth denoising mechanism in the training
phase. The entire model is trained in an end-to-end manner. We propose a baseline detector and evaluate the effectiveness of our proposal with +2.2%/+2.7% NDS/mAP improvements on the nuScenes benchmark, achieving competitive performance of 55.9%/45.7% NDS/mAP. Furthermore, we conduct extensive experiments to demonstrate its generality across various detectors, with about +2% NDS improvements. Comment: Accepted to ICRA202
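The DE scheme above combines absolute and relative depth supervision. The abstract does not give the exact loss forms, so the following is a minimal sketch under assumptions: an L1 absolute term, a pairwise relative-difference term, and equal weights are all illustrative choices, not the paper's definitions.

```python
# Hypothetical combined depth loss: absolute accuracy plus relative
# (pairwise-difference) consistency. Loss forms and weights are assumed.

def absolute_depth_loss(pred, target):
    """Mean L1 error between predicted and ground-truth depths."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def relative_depth_loss(pred, target):
    """Penalize errors in pairwise depth differences (relative cues)."""
    n = len(pred)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += abs((pred[i] - pred[j]) - (target[i] - target[j]))
            count += 1
    return total / count if count else 0.0

def depth_estimation_loss(pred, target, w_abs=1.0, w_rel=1.0):
    """Weighted sum of absolute and relative depth terms."""
    return (w_abs * absolute_depth_loss(pred, target)
            + w_rel * relative_depth_loss(pred, target))
```

The relative term is invariant to a constant depth offset, which is why it can sharpen feature lifting even when absolute depth is hard to predict.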
Nonmetric Trait Correlation: A Look at Environmental and Biological Influences on Third Trochanter Formation Among Pre-Contact Upper Midwest Populations
Nonmetric traits of the human skeleton are thought to correlate with genetic and/or environmental influences; however, the extent to which each may affect the presence of nonmetric traits has not been clearly substantiated in the literature. Nonmetric traits as defined by Larsen are "discrete or quasi-continuous anatomical entities often expressed as gradations from absence to full expression" (1997:305). More precisely, nonmetric traits are anomalies that express themselves in the skeleton and are recorded as absent or present. A third trochanter is one of many nonmetric traits present in the femur and is defined by Finnegan as "a rounded tubercle that can be found at the superior end of the gluteal crest of the femur" (1978:25). The third trochanter is considered an enthesopathy as well as a nonmetric trait because it is the insertion point of the gluteus maximus muscle, the most superficial muscle in the gluteal region (Gray 1918:426). Recent studies (Hawkey and Merbs 1995; Knusel 2000) indicate that enthesopathies are closely linked to patterns of subsistence, habitual activities, and geographic location. It should also be noted that enthesopathies have been directly related to pathology, trauma, biological diversity, age, and hormonal and rheumatic conditions (Hawkey and Merbs 1995; Jurmain 1999). This research will examine the correlation between sex, age, pathology, and environmental influences on the presence of third trochanters in pre-contact populations of the Upper Midwest region of the United States.
SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection
In this paper, we propose a novel training strategy called SupFusion, which
provides an auxiliary feature level supervision for effective LiDAR-Camera
fusion and significantly boosts detection performance. Our strategy involves a
data enhancement method named Polar Sampling, which densifies sparse objects
and trains an assistant model to generate high-quality features as the
supervision. These features are then used to train the LiDAR-Camera fusion
model, where the fusion feature is optimized to simulate the generated
high-quality features. Furthermore, we propose a simple yet effective deep fusion module, which consistently gains superior performance compared with previous fusion methods under the SupFusion strategy. In such a manner, our proposal has the following advantages. Firstly, SupFusion introduces auxiliary feature-level supervision which can boost LiDAR-Camera detection performance without introducing extra inference costs. Secondly, the proposed deep fusion can continuously improve the detector's abilities. Our proposed SupFusion strategy and deep fusion module are plug-and-play, and we conduct extensive experiments to demonstrate their effectiveness. Specifically, we gain around 2% 3D mAP improvements on the KITTI benchmark based on multiple LiDAR-Camera 3D detectors. Comment: Accepted to ICCV202
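The feature-level supervision described above amounts to pulling the fusion feature toward the assistant model's "high-quality" feature during training. A rough sketch follows; the distance function (MSE here), names, and weighting are assumptions, since the abstract does not specify them.

```python
# Hedged sketch of auxiliary feature-level supervision: the fused feature
# is trained to mimic a teacher (assistant) feature via an extra MSE term.
# The teacher branch is used only at train time, so inference cost is unchanged.

def feature_mimic_loss(fusion_feat, teacher_feat):
    """Mean squared error between fused and assistant (teacher) features."""
    assert len(fusion_feat) == len(teacher_feat)
    return sum((f - t) ** 2 for f, t in zip(fusion_feat, teacher_feat)) / len(fusion_feat)

def total_training_loss(det_loss, fusion_feat, teacher_feat, w_sup=1.0):
    """Detection loss plus the auxiliary feature-supervision term."""
    return det_loss + w_sup * feature_mimic_loss(fusion_feat, teacher_feat)
```

Because the supervision attaches only an extra loss term, any LiDAR-Camera fusion detector exposing its fused feature can adopt it, which is consistent with the plug-and-play claim.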
Quantitative Bioluminescence Tomography-guided System for Conformal Irradiation In Vivo
Although cone-beam CT (CBCT) has been used to guide irradiation in pre-clinical radiotherapy (RT) research, it is limited in localizing soft-tissue targets, especially in low-imaging-contrast environments. Knowledge of target
shape is a fundamental need for RT. Without such information to guide
radiation, normal tissue can be irradiated unnecessarily, leading to
experimental uncertainties. Recognition of this need led us to develop
quantitative bioluminescence tomography (QBLT), which provides strong imaging
contrast to localize optical targets. We demonstrated its capability of guiding
conformal RT using an orthotopic bioluminescent glioblastoma (GBM) model. With
multi-projection and multi-spectral bioluminescence imaging and a novel
spectral derivative method, our QBLT system is able to reconstruct GBM with
localization accuracy <1mm. An optimal threshold was determined to delineate
QBLT reconstructed gross target volume (GTV_{QBLT}), which provides the best
overlap between the GTV_{QBLT} and CBCT contrast labeled GBM (GTV), used as the
ground truth for the GBM volume. To account for the uncertainty of QBLT in
target localization and volume delineation, we also innovated a margin design;
a 0.5mm margin was determined and added to GTV_{QBLT} to form a planning target
volume (PTV_{QBLT}), which largely improved tumor coverage from 75% (0mm
margin) to 98% and the corresponding variation (n=10) of the tumor coverage was
significantly reduced. Moreover, with a prescribed dose of 5 Gy covering 95% of PTV_{QBLT}, QBLT-guided 7-field conformal RT can irradiate 99.4 ± 1.0% of the GTV vs. 65.5 ± 18.5% with conventional single-field irradiation (n=10). Our QBLT-guided system provides a unique opportunity for researchers to guide irradiation of soft-tissue targets and to increase the rigor and reproducibility of scientific discovery.
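The margin design above (expanding GTV_{QBLT} by 0.5 mm to form PTV_{QBLT}, then measuring tumor coverage) can be illustrated on a simple voxel grid. This is a hypothetical sketch, not the authors' implementation: the cubic dilation, the set-of-voxels representation, and the coverage definition (fraction of ground-truth voxels inside the expanded volume) are illustrative assumptions.

```python
# Illustrative margin expansion and coverage computation on integer voxels.

def expand_with_margin(volume, margin_vox):
    """Dilate a set of (x, y, z) voxel coordinates by a cubic margin."""
    expanded = set()
    for (x, y, z) in volume:
        for dx in range(-margin_vox, margin_vox + 1):
            for dy in range(-margin_vox, margin_vox + 1):
                for dz in range(-margin_vox, margin_vox + 1):
                    expanded.add((x + dx, y + dy, z + dz))
    return expanded

def coverage(gtv, ptv):
    """Fraction of ground-truth tumor voxels contained in the target volume."""
    return len(gtv & ptv) / len(gtv)

# Toy usage: a reconstruction that captures only the tumor center misses the
# edges, but a 1-voxel margin recovers full coverage.
gtv = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}       # ground-truth tumor
reconstructed = {(1, 0, 0)}                    # under-segmented target
ptv = expand_with_margin(reconstructed, 1)     # add a 1-voxel margin
```

This mirrors the reported effect of the 0.5 mm margin: a modest isotropic expansion compensates for localization and delineation uncertainty at the cost of some extra irradiated volume.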
In vivo bioluminescence tomography-guided system for pancreatic cancer radiotherapy research
Recent development of radiotherapy (RT) has heightened the use of radiation in managing pancreatic cancer. Thus, there is a need to investigate pancreatic cancer in a pre-clinical setting to advance our understanding of the role of RT. Widely used cone-beam CT (CBCT) imaging cannot provide sufficient soft-tissue contrast to guide irradiation. The pancreas is also prone to motion, so large collimation is unavoidably used for irradiation, at the cost of normal tissue toxicity. We innovated a bioluminescence tomography (BLT)-guided system to address these needs. We established an orthotopic pancreatic ductal adenocarcinoma (PDAC) mouse model to assess BLT. Mice underwent multi-projection and multi-spectral bioluminescence imaging (BLI), followed by CBCT imaging in an animal irradiator for BLT reconstruction and radiation planning. With optimized absorption coefficients, BLT localized PDAC with 1.25 ± 0.19 mm accuracy. To account for BLT localization uncertainties, we expanded the BLT-reconstructed volume with a margin to form a planning target volume (PTV_{BLT}) for radiation planning, covering 98.7 ± 2.2% of PDAC. The BLT-guided conformal plan can cover 100% of tumors with limited normal tissue involvement across both inter-animal and inter-fraction cases, superior to the 2D BLI-guided conventional plan. BLT offers unique opportunities to localize PDAC for conformal irradiation, minimize normal tissue involvement, and support reproducibility in RT studies.
Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy
Purpose: To achieve effective robot-assisted laparoscopic prostatectomy, integration of a transrectal ultrasound (TRUS) imaging system, the most widely used imaging modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure would significantly interfere with the surgery. Therefore, we propose an image co-registration algorithm based on a photoacoustic marker (PM) method, where the ultrasound / photoacoustic (US/PA) images can be registered to the endoscopic camera images to ultimately enable the TRUS transducer to automatically track the surgical instrument. Methods: An optimization-based algorithm is proposed to
co-register the images from the two different imaging modalities. The
principles of light propagation and the uncertainty in PM detection were incorporated into this algorithm to improve its stability and accuracy. The
algorithm is validated using the previously developed US/PA image-guided system
with a da Vinci surgical robot. Results: The target-registration-error (TRE) is
measured to evaluate the proposed algorithm. In both simulation and experimental demonstrations, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice. The result is also comparable with our previous approach, and the proposed method can be implemented with a normal white-light stereo camera and does not require highly accurate localization of the PM. Conclusion: The proposed frame registration
algorithm enabled simple yet efficient integration of a commercial US/PA imaging system into a laparoscopic surgical setting by leveraging the
characteristic properties of acoustic wave propagation and laser excitation,
contributing to automated US/PA image-guided surgical intervention
applications. Comment: 12 pages, 9 figures
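The target-registration-error (TRE) metric used above measures the residual distance between corresponding target points after applying the estimated registration. A minimal sketch follows, assuming a rigid (rotation + translation) transform and a mean-distance definition for illustration; the paper's optimization itself is not reproduced here.

```python
import math

# Hypothetical TRE computation: apply an estimated rigid transform to the
# moving points and average the Euclidean distances to the fixed points.

def apply_rigid(point, R, t):
    """Apply rotation matrix R (list of rows) and translation t to a point."""
    return tuple(sum(R[i][j] * point[j] for j in range(len(point))) + t[i]
                 for i in range(len(point)))

def tre(moving_points, fixed_points, R, t):
    """Mean Euclidean distance between transformed and reference points."""
    dists = [math.dist(apply_rigid(p, R, t), q)
             for p, q in zip(moving_points, fixed_points)]
    return sum(dists) / len(dists)
```

A registration is typically judged against a clinical tolerance, e.g. a sub-centimeter TRE as reported in the abstract; points used for TRE should be distinct from those used to estimate the transform.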