
    Efficient localization methods for a point source and rigid body

    Localization has been an important and fundamental research topic in GPS, radar, sonar, and especially in mobile communications and sensor networks over the past few years. Localization of a signal source is often accomplished using a number of sensors that measure the signal radiated from the source; here we consider range-based measurements, including time of arrival (TOA) and time difference of arrival (TDOA). In such studies, the object is far away or only position information is needed, and we refer to this as point source localization. For some applications, e.g., robotics, spacecraft, and gaming, orientation information is needed in addition to position. Although an inertial measurement unit (IMU) can perform this task once the initial state is available, it suffers from long-term performance drift and requires accurate calibration using additional devices. Here we consider joint position and orientation estimation using distance or angle of arrival (AOA) measurements between sensors fixed on the object and anchors at known locations; this is called rigid body localization. Our research has two strands. First, for point source localization, the original squared-range least squares (SR-LS) formulation admits a globally optimal and computationally efficient solution via the generalized trust region subproblem (GTRS) technique, but with suboptimal accuracy; we therefore introduce proper range weighting (SR-WLS) and investigate the resulting mean squared error (MSE) and bias performance. Its asymptotic efficiency is proven theoretically and validated by simulations. The effects of range weighting on localization performance under different sensor numbers, noise correlations, and localization geometries are examined. We apply similar range weighting to squared-range-difference least squares (SRD-LS and SRD-WLS) under TDOA measurements. In addition, the weighting technique is extended to the scenario where the sensor positions are not exactly known.
The resulting cost function has the same structure as the one without sensor position errors, so existing algebraic or exact solutions to the squared measurements can still be used without requiring a new optimization method. Second, for rigid body localization under distance measurements, the existing method cancels the quadratic term of the sensor position in the squared distance measurement equations, which may cause serious performance degradation. Our proposed estimators are non-iterative and have two steps: a preliminary step and a refinement step. The preliminary step provides a coarse estimate, and the refinement step improves it to yield an accurate solution. When the rigid body is stationary, we are able to locate it with higher accuracy than solutions of comparable complexity in the literature. When the rigid body is moving, we introduce additional Doppler shift measurements and develop an estimator that handles the additional unknowns of angular and translational velocity. Simulations show that the proposed estimators, in both the stationary and moving cases, can approach the Cramer-Rao lower bound (CRLB) performance under Gaussian noise over the small-error region. Under AOA measurements, we solve the 3D scenario, which is seldom considered in the literature, by estimating the object's distances to the landmarks and comparing the landmark positions in the object's local frame and the global frame. Furthermore, we extend the method to the scenario where more than one AOA sensor is on board, which either increases the robustness and accuracy or decreases the minimum number of landmarks required. Separate methods are designed for the 2D and 3D cases. Simulations confirm the effectiveness of the proposed methods.
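The squared-range linearization underlying SR-LS/SR-WLS can be sketched as follows. This is a minimal, hypothetical illustration (the function name and the 1/(2 r σ) weighting are our own shorthand): it drops the constraint tying the position x to the auxiliary unknown ||x||², which the GTRS-based solution enforces exactly.

```python
import numpy as np

def srwls_locate(anchors, ranges, sigma=0.1):
    # Linearize ||x - a_i||^2 = r_i^2 as
    #   -2 a_i^T x + ||x||^2 = r_i^2 - ||a_i||^2,
    # and solve for y = [x; ||x||^2] by weighted least squares.
    A = np.hstack([-2 * anchors, np.ones((len(ranges), 1))])
    b = ranges**2 - np.sum(anchors**2, axis=1)
    # Squaring a range with noise std sigma gives noise std ~ 2 r_i sigma,
    # so each equation is weighted by 1 / (2 r_i sigma).
    w = 1.0 / (2.0 * ranges * sigma)
    y, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return y[:-1]  # estimated source position

# Usage: four anchors in 2D; noise-free ranges recover the source exactly.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - source, axis=1)
print(srwls_locate(anchors, ranges))
```

The weighting captures why distant anchors deserve less trust: squaring a noisy range inflates the noise roughly in proportion to the range itself.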

    Position and Orientation Estimation of a Rigid Body: Rigid Body Localization

    Rigid body localization refers to the problem of estimating the position of a rigid body along with its orientation using anchors. We consider a setup in which a few sensors are mounted on a rigid body. The absolute position of the rigid body is not known, but the relative positions of the sensors, i.e., the topology of the sensors on the rigid body, are known. We express the absolute positions of the sensors as an affine function of the Stiefel manifold and propose a simple least-squares (LS) estimator to jointly estimate the orientation and the position of the rigid body. To account for perturbations of the sensors, we also propose a constrained total least-squares (CTLS) estimator. Analytical closed-form solutions for the proposed estimators are provided. Simulations are used to corroborate and analyze the performance of the proposed estimators. Comment: 4 pages and 1 reference page; 3 figures; in Proc. of ICASSP 201
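A closed-form flavor of the joint orientation-and-position estimate can be sketched with a Kabsch/Procrustes-style alignment; this is related to, but not identical with, the paper's LS and CTLS estimators on the Stiefel manifold (the function name and the noise-free usage below are illustrative):

```python
import numpy as np

def rigid_body_fit(local_pts, world_pts):
    """Estimate rotation Q and translation t with world ≈ Q @ local + t.

    Points are 3 x N arrays: local_pts is the known sensor topology in
    the body frame, world_pts the measured absolute sensor positions.
    """
    mu_l = local_pts.mean(axis=1, keepdims=True)
    mu_w = world_pts.mean(axis=1, keepdims=True)
    H = (world_pts - mu_w) @ (local_pts - mu_l).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    Q = U @ D @ Vt          # proper rotation (det = +1)
    t = mu_w - Q @ mu_l     # rigid-body position
    return Q, t

# Usage: recover a known pose from noise-free sensor positions.
theta = 0.5
Q_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
local = np.array([[0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.]])   # sensor topology (3 x 4)
world = Q_true @ local + t_true       # absolute sensor positions
Q_est, t_est = rigid_body_fit(local, world)
```

The SVD step projects the cross-covariance onto the rotation group, which is what constrains the orientation estimate to be orthogonal.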

    Unsupervised Odometry and Depth Learning for Endoscopic Capsule Robots

    In the last decade, many medical companies and research groups have tried to convert passive capsule endoscopes, an emerging and minimally invasive diagnostic technology, into actively steerable endoscopic capsule robots that will provide more intuitive disease detection, targeted drug delivery, and biopsy-like operations in the gastrointestinal (GI) tract. In this study, we introduce a fully unsupervised, real-time odometry and depth learner for monocular endoscopic capsule robots. We establish the supervision by warping view sequences and assigning the re-projection minimization to the loss function, which we adopt in the multi-view pose estimation and single-view depth estimation networks. Detailed quantitative and qualitative analyses of the proposed framework, performed on non-rigidly deformable ex-vivo porcine stomach datasets, prove the effectiveness of the method in terms of motion estimation and depth recovery. Comment: submitted to IROS 201
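The self-supervision signal described above — warp one view into another using predicted depth and relative pose, then penalize the photometric difference — can be sketched in numpy. This is a hypothetical nearest-neighbour illustration (the intrinsics K and the L1 penalty are assumptions); real pipelines use differentiable bilinear sampling inside the network:

```python
import numpy as np

def reprojection_loss(img_src, img_tgt, depth_tgt, K, R, t):
    """L1 photometric loss after warping the source view into the target.

    img_src, img_tgt: grayscale images (h x w); depth_tgt: per-pixel depth
    of the target view; (R, t): relative camera pose; K: intrinsics.
    """
    h, w = depth_tgt.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    # Back-project target pixels to 3D, move them by (R, t),
    # and project into the source camera.
    pts = np.linalg.inv(K) @ pix * depth_tgt.reshape(1, -1)
    proj = K @ (R @ pts + t.reshape(3, 1))
    uu = np.round(proj[0] / proj[2]).astype(int)
    vv = np.round(proj[1] / proj[2]).astype(int)
    ok = (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
    warped = img_src[vv[ok], uu[ok]]
    return np.mean(np.abs(warped - img_tgt.reshape(-1)[ok]))
```

With the correct depth and pose the warp aligns the two views and the loss vanishes, which is exactly what lets depth and pose networks train without ground truth.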

    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need a notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize that goal. The common goal is typically centralized, in the sense that it involves the states of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at that node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
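The distributed control idea above can be made concrete with a minimal consensus-style formation law, in which each agent moves using only its neighbors' states. This is a generic textbook sketch, not a specific law from this work (the function name, ring graph, and gain are assumptions):

```python
import numpy as np

def formation_step(x, offsets, neighbors, gain=0.1):
    """One Euler step of a distributed formation control law.

    Agent i drives each relative position x_j - x_i toward the desired
    offset d_j - d_i, using only neighbor states; no centralized
    quantity is ever computed.
    """
    dx = np.zeros_like(x)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            dx[i] += (x[j] - x[i]) - (offsets[j] - offsets[i])
    return x + gain * dx

# Usage: four agents on a ring graph converge to a unit-square formation
# (up to a common translation, since the goal is purely relative).
offsets = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([[0.3, 0.1], [0.2, 0.9], [0.8, 0.2], [0.5, 0.5]])
for _ in range(600):
    x = formation_step(x, offsets, neighbors)
```

Because the update only involves neighbor differences, the formation is reached up to a global translation, which is the modularity/robustness property the text highlights.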

    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm. Comment: submitted to IROS 201

    Dial It In: Rotating RF Sensors to Enhance Radio Tomography

    A radio tomographic imaging (RTI) system uses the received signal strength (RSS) measured by RF sensors in a static wireless network to localize people in the deployment area, without requiring them to carry or wear an electronic device. This paper addresses the fact that small-scale changes in the position and orientation of each RF sensor's antenna can dramatically affect the imaging and localization performance of an RTI system. However, the best placement for a sensor is unknown at the time of deployment. Improving performance in a deployed RTI system requires the deployer to iteratively "guess-and-retest", i.e., pick a sensor to move and then re-run a calibration experiment to determine whether the localization performance has improved or degraded. We present an RTI system of servo-nodes, RF sensors equipped with servo motors, which autonomously "dial it in", i.e., change position and orientation to optimize the RSS on the links of the network. By doing so, the localization accuracy of the RTI system is quickly improved, without requiring any calibration experiment from the deployer. Experiments conducted in three indoor environments demonstrate that the servo-node system reduces localization error on average by 32% compared to a standard RTI system composed of static RF sensors. Comment: 9 page
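The RTI forward model can be sketched as a linear system: each link's RSS change is a weighted sum of the attenuation in the voxels lying inside an ellipse with the link's endpoints as foci, and the image is recovered by regularized least squares. This is a generic illustration with assumed parameter values (lam, alpha, the inverse-square-root distance weight), not the paper's calibrated model:

```python
import numpy as np

def build_weights(nodes, links, grid, lam=0.02):
    """Ellipse weight model: voxel g affects link (i, j) when
    |g - node_i| + |g - node_j| exceeds the link length by less than lam."""
    W = np.zeros((len(links), len(grid)))
    for k, (i, j) in enumerate(links):
        d = np.linalg.norm(nodes[i] - nodes[j])
        excess = (np.linalg.norm(grid - nodes[i], axis=1)
                  + np.linalg.norm(grid - nodes[j], axis=1) - d)
        W[k, excess < lam] = 1.0 / np.sqrt(d)
    return W

def rti_image(W, rss_delta, alpha=1.0):
    """Tikhonov-regularized least-squares image from per-link RSS changes."""
    n = W.shape[1]
    return np.linalg.solve(W.T @ W + alpha * np.eye(n), W.T @ rss_delta)
```

A person standing in a voxel attenuates every link whose ellipse covers that voxel, and the reconstructed image peaks at that voxel; servo-nodes improve the conditioning of W by picking antenna poses with stronger, more informative links.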