
    Extrinsic Calibration of 2D Millimetre-Wavelength Radar Pairs Using Ego-Velocity Estimates

    Correct radar data fusion depends on knowledge of the spatial transform between sensor pairs. Current methods for determining this transform operate by aligning identifiable features in different radar scans or by relying on measurements from another, more accurate sensor. Feature-based alignment requires the sensors to have overlapping fields of view or necessitates the construction of an environment map, and several existing techniques require bespoke retroreflective radar targets. These requirements limit both where and how calibration can be performed. In this paper, we take a different approach: instead of attempting to track targets or features, we rely on ego-velocity estimates from each radar to perform calibration. Our method enables calibration of a subset of the transform parameters, including the yaw and the axis of translation between the radar pair, without the need for a shared field of view or for specialized targets. In general, the yaw and the axis of translation are the most important parameters for data fusion, the most likely to vary over time, and the most difficult to calibrate manually. We formulate calibration as a batch optimization problem, show that the radar-radar system is identifiable, and specify the platform excitation requirements. Through simulation studies and real-world experiments, we establish that our method is more reliable and accurate than state-of-the-art methods. Finally, we demonstrate that the full rigid body transform can be recovered if relatively coarse information about the platform rotation rate is available.
    Comment: Accepted to the 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2023), Seattle, Washington, USA, June 27 - July 1, 2023
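    The core idea — aligning per-radar ego-velocity estimates rather than shared features — can be illustrated in a deliberately simplified setting. A minimal sketch, assuming pure-translation motion segments (negligible platform rotation rate), synchronized 2D velocity estimates, and the model v2 = R(yaw)^T v1; the function name is illustrative and this is not the paper's full batch optimization:

    ```python
    import math

    def estimate_yaw(v1_list, v2_list):
        """Closed-form least-squares yaw between two 2D radar frames.

        Assumes each pair satisfies v2 = R(yaw)^T v1 (pure translation,
        so both radars see the same platform velocity, expressed in their
        own frames). This is the 2D orthogonal-Procrustes solution.
        """
        s_dot = sum(x1 * x2 + y1 * y2
                    for (x1, y1), (x2, y2) in zip(v1_list, v2_list))
        s_cross = sum(y1 * x2 - x1 * y2
                      for (x1, y1), (x2, y2) in zip(v1_list, v2_list))
        return math.atan2(s_cross, s_dot)
    ```

    With noise-free data the estimate is exact; with noisy velocities the same formula gives the least-squares yaw, which is one reason velocity-based calibration can work without any field-of-view overlap.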

    Effect of a chemical manufacturing plant on community cancer rates

    BACKGROUND: We conducted a retrospective study to determine if potential past exposure to dioxin had resulted in increased incidence of cancer in people living near a former manufacturing plant in New South Wales, Australia. During operation, from 1928 to 1970, by-products of the manufacturing process, including dioxin and other chemical waste, were dumped into wetlands and mangroves, discharged into a nearby bay and used to reclaim land along the foreshore, leaving a legacy of significant dioxin contamination.
    METHODS: We selected 20 Census Collector Districts within 1.5 kilometres of the former manufacturing plant as the study area. We obtained data on all cases of cancer and deaths from cancer in New South Wales from 1972 to 2001. We also compared rates for some cancer types that have been associated with dioxin exposure. Based on a person's residential address at the time of cancer diagnosis, or at the time of death due to cancer, geo-coding software and processes were used to determine the collector district to which each case or death should be attributed. Age- and sex-specific population data were used to calculate standardised incidence ratios and standardised mortality ratios, comparing the study area to two comparison areas using indirect standardisation.
    RESULTS: During the 30-year study period, 1,106 cases of cancer and 524 deaths due to cancer were identified in the study area. This corresponds to an age-sex standardised rate of 3.2 cases per 1,000 person-years exposed and 1.6 deaths per 1,000 person-years exposed. The study area had a lower rate of cancer and deaths from cancer than the comparison areas. The incidence of, and mortality due to, lung and bronchus carcinomas and haematopoietic cancers did not differ significantly from the comparison areas for the study period. There was no obvious geographical trend in ratios when comparing individual collector districts to New South Wales according to distance from the potential source of dioxin exposure.
    CONCLUSION: This investigation found no evidence that dioxin contamination from this site resulted in increased cancer rates in the potentially exposed population living around the former manufacturing plant.
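    The indirect standardisation used above reduces to a ratio of observed to expected counts, where the expected count applies each reference-population stratum rate to the study area's person-years in that stratum. A toy sketch with hypothetical stratum labels and numbers (not the study's actual data):

    ```python
    # Indirect standardisation: SIR = observed / expected, where
    # expected = sum over age-sex strata of
    #   (study-area person-years in stratum) * (reference rate in stratum).
    def expected_cases(person_years, reference_rates):
        return sum(py * reference_rates[stratum]
                   for stratum, py in person_years.items())

    def standardised_incidence_ratio(observed, person_years, reference_rates):
        return observed / expected_cases(person_years, reference_rates)
    ```

    An SIR above 1 indicates more cases than the reference rates predict for the study area's age-sex structure; an SIR below 1, fewer (consistent with the lower rates this study reports).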

    Nanomaterials for Neural Interfaces

    This review focuses on the application of nanomaterials for neural interfacing. The junction between nanotechnology and neural tissue is particularly worthy of scientific attention for several reasons: (i) neural cells are electroactive, and the electronic properties of nanostructures can be tailored to match the charge transport requirements of electrical cellular interfacing; (ii) the unique mechanical and chemical properties of nanomaterials are critical for integration with neural tissue as long-term implants; (iii) solutions to many critical problems in neural biology/medicine are limited by the availability of specialized materials; and (iv) neuronal stimulation is needed for a variety of common and severe health problems. This confluence of need, accumulated expertise, and potential impact on the well-being of people suggests the potential of nanomaterials to revolutionize the field of neural interfacing. In this review, we begin with foundational topics, such as the current status of neural electrode (NE) technology, the key challenges facing the practical utilization of NEs, and the potential advantages of nanostructures as components of chronic implants. We then give a detailed account of the toxicology and biocompatibility of nanomaterials with respect to neural tissue. Next, we cover a variety of specific applications of nanoengineered devices, including drug delivery, imaging, topographic patterning, electrode design, nanoscale transistors for high-resolution neural interfacing, and photoactivated interfaces. We also critically evaluate the specific properties of particular nanomaterials (including nanoparticles, nanowires, and carbon nanotubes) that can be exploited in neuroprosthetic devices. The most promising future areas of research and practical device engineering are discussed as a conclusion to the review.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/64336/1/3970_ftp.pd

    Certifiably Optimal Monocular Hand-Eye Calibration

    Correct fusion of data from two sensors is not possible without an accurate estimate of their relative pose, which can be determined through the process of extrinsic calibration. When two or more sensors are capable of producing their own egomotion estimates (i.e., measurements of their trajectories through an environment), the 'hand-eye' formulation of extrinsic calibration can be employed. In this paper, we extend our recent work on a convex optimization approach for hand-eye calibration to the case where one of the sensors cannot observe the scale of its translational motion (e.g., a monocular camera observing an unmapped environment). We prove that our technique is able to provide a certifiably globally optimal solution to both the known- and unknown-scale variants of hand-eye calibration, provided that the measurement noise is bounded. Herein, we focus on the theoretical aspects of the problem, show the tightness and stability of our solution, and demonstrate the optimality and speed of our algorithm through experiments with synthetic data.
    Comment: In Proceedings of the IEEE International Conference on Multisensor Fusion and Integration (MFI'20), Karlsruhe, Germany, Sep. 12-16, 2020
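    The underlying hand-eye constraint is A X = X B: each pair of relative motions (A from one sensor, B from the other) must agree through the unknown extrinsic transform X. The paper's certifiably optimal convex solver is not reproduced here; as a simpler illustration of the same constraint, here is a planar, known-scale linear least-squares sketch with hypothetical function names:

    ```python
    import math

    def solve_planar_hand_eye(motions_a, motions_b):
        """Recover X = (phi, t) with A_k X = X B_k for planar (SE(2)) motions.

        motions_*: lists of relative motions (theta, (tx, ty)). The translation
        part of A X = X B is linear in z = [tx, ty, cos(phi), sin(phi)]:
            (R(theta_a) - I) t - R(phi) b = -a
        so we solve the normal equations and renormalise (cos, sin).
        """
        AtA = [[0.0] * 4 for _ in range(4)]
        Atr = [0.0] * 4
        for (tha, (ax, ay)), (_, (bx, by)) in zip(motions_a, motions_b):
            ca, sa = math.cos(tha), math.sin(tha)
            rows = [([ca - 1.0, -sa, -bx, by], -ax),
                    ([sa, ca - 1.0, -by, -bx], -ay)]
            for row, r in rows:
                for i in range(4):
                    Atr[i] += row[i] * r
                    for j in range(4):
                        AtA[i][j] += row[i] * row[j]
        z = _solve4(AtA, Atr)
        phi = math.atan2(z[3], z[2])
        return phi, (z[0], z[1])

    def _solve4(M, b):
        # Gaussian elimination with partial pivoting on a 4x4 system.
        n = 4
        M = [row[:] + [b[i]] for i, row in enumerate(M)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c]
                                  for c in range(r + 1, n))) / M[r][r]
        return x
    ```

    At least two motions with distinct rotations are needed for the 4x4 system to be well-posed; this mirrors, in miniature, the identifiability and excitation conditions the paper analyzes for the full 3D problem.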

    A Self-Supervised, Differentiable Kalman Filter for Uncertainty-Aware Visual-Inertial Odometry

    Visual-inertial odometry (VIO) systems traditionally rely on filtering or optimization-based techniques for egomotion estimation. While these methods are accurate under nominal conditions, they are prone to failure during severe illumination changes, rapid camera motions, or on low-texture image sequences. Learning-based systems have the potential to outperform classical implementations in challenging environments but, currently, do not perform as well as classical methods in nominal settings. Herein, we introduce a framework for training a hybrid VIO system that leverages the advantages of learning and standard filtering-based state estimation. Our approach is built upon a differentiable Kalman filter, with an IMU-driven process model and a robust, neural network-derived relative pose measurement model. The use of the Kalman filter framework enables the principled treatment of uncertainty at training time and at test time. We show that our self-supervised loss formulation outperforms a similar, supervised method, while also enabling online retraining. We evaluate our system on a visually degraded version of the EuRoC dataset and find that our estimator operates without a significant reduction in accuracy in cases where classical estimators consistently diverge. Finally, by properly utilizing the metric information contained in the IMU measurements, our system is able to recover metric scene scale, while other self-supervised monocular VIO approaches cannot.
    Comment: Accepted to the 2022 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM'22)
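    What makes a Kalman filter "differentiable" is simply that every predict/update operation is smooth in its inputs, so gradients of the output state with respect to, e.g., a learned measurement covariance can flow through the filter during training. A minimal scalar sketch (the paper's filter is multivariate with an IMU process model and a network-derived pose measurement; names and the 1D state are illustrative only):

    ```python
    def kf_step(x, P, u, z, q, r):
        """One predict/update cycle of a scalar Kalman filter.

        x, P : prior state estimate and its variance
        u    : control input (e.g. an integrated IMU increment)
        z    : measurement (e.g. a network-predicted relative pose)
        q, r : process and measurement noise variances

        Every operation here is differentiable, so a framework with autodiff
        could backpropagate a loss on x through K, P, and r to train the
        measurement model and its uncertainty end-to-end.
        """
        x_pred = x + u                 # predict: integrate the control/IMU term
        P_pred = P + q                 # predicted variance grows by process noise
        K = P_pred / (P_pred + r)      # Kalman gain: trust measurement vs. prior
        x_new = x_pred + K * (z - x_pred)   # update with the innovation
        P_new = (1.0 - K) * P_pred          # posterior variance shrinks
        return x_new, P_new
    ```

    Note how a large learned r drives K toward 0, making the filter ignore an unreliable measurement; this is the mechanism by which uncertainty-aware training can down-weight degraded images.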