
    Wing and body motion during flight initiation in Drosophila revealed by automated visual tracking

    The fruit fly Drosophila melanogaster is a widely used model organism in studies of genetics, developmental biology and biomechanics. One limitation for exploiting Drosophila as a model system for behavioral neurobiology is that measuring body kinematics during behavior is labor intensive and subjective. In order to quantify flight kinematics during different types of maneuvers, we have developed a visual tracking system that estimates the posture of the fly from multiple calibrated cameras. An accurate geometric fly model, designed using unit quaternions to capture complex body and wing rotations, is automatically fitted to the images in each time frame. Our approach works across a range of flight behaviors, while also being robust to common environmental clutter. The tracking system is used in this paper to compare wing and body motion during both voluntary and escape take-offs. Using our automated algorithms, we are able to measure stroke amplitude, geometric angle of attack and other parameters important to a mechanistic understanding of flapping flight. When compared with manual tracking methods, the algorithm estimates body position within 4.4±1.3% of the body length, while body orientation is measured within 6.5±1.9 deg. (roll), 3.2±1.3 deg. (pitch) and 3.4±1.6 deg. (yaw) on average across six videos. Similarly, stroke amplitude and deviation are estimated within 3.3 deg. and 2.1 deg., while angle of attack is typically measured within 8.8 deg. compared against a human digitizer. Using our automated tracker, we analyzed a total of eight voluntary and two escape take-offs. These sequences show that Drosophila melanogaster do not utilize clap and fling during take-off and are able to modify their wing kinematics from one wingstroke to the next. Our approach should enable biomechanists and ethologists to process much larger datasets than possible at present and, therefore, accelerate insight into the mechanisms of free-flight maneuvers of flying insects.
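    The abstract's use of unit quaternions avoids the gimbal-lock singularities that Euler angles suffer when composing large roll, pitch and yaw rotations. A minimal sketch of the underlying operation (not the authors' tracker code; Hamilton convention with components (w, x, y, z) is an assumption here):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * q^-1."""
    qv = np.array([0.0, *v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

# A 90-degree yaw about the z-axis maps the body x-axis to the y-axis.
theta = np.pi / 2
q_yaw = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
body_x = rotate(q_yaw, np.array([1.0, 0.0, 0.0]))
```

    Because a quaternion has only one redundant degree of freedom (its norm), fitting it per frame and renormalizing is numerically better behaved than re-fitting three Euler angles near singular configurations.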

    Perception-aware Tag Placement Planning for Robust Localization of UAVs in Indoor Construction Environments

    Tag-based visual-inertial localization is a lightweight method for enabling autonomous data collection missions of low-cost unmanned aerial vehicles (UAVs) in indoor construction environments. However, finding the optimal tag configuration (i.e., number, size, and location) on dynamic construction sites remains challenging. This paper proposes a perception-aware genetic algorithm-based tag placement planner (PGA-TaPP) to determine the optimal tag configuration using 4D-BIM, considering the project progress, safety requirements, and the UAV's localizability. The proposed method provides a 4D plan for tag placement by maximizing the localizability in user-specified regions of interest (ROIs) while limiting the installation costs. Localizability is quantified using the Fisher information matrix (FIM) and encapsulated in navigable grids. The experimental results show the effectiveness of our method in finding an optimal 4D tag placement plan for the robust localization of UAVs on under-construction indoor sites.
    Comment: [Final draft] This material may be downloaded for personal use only. Any other use requires prior permission of the American Society of Civil Engineers and the Journal of Computing in Civil Engineering.
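    The FIM-based localizability idea can be illustrated with a toy model. The sketch below is an assumption-laden simplification, not PGA-TaPP itself: each visible tag contributes information along the viewing direction under isotropic Gaussian noise, and a grid cell is scored by the smallest eigenvalue of the summed FIM (its worst-constrained direction), which is one common way to scalarize the FIM:

```python
import numpy as np

def tag_fim(tag_pos, uav_pos, sigma=0.05):
    """Fisher information contributed by one tag observation under an
    isotropic Gaussian noise model (an illustrative simplification of
    the planner's sensor model). Information falls off with range."""
    d = np.asarray(tag_pos, float) - np.asarray(uav_pos, float)
    r = np.linalg.norm(d)
    u = d / r                                # unit direction to the tag
    return np.outer(u, u) / (sigma * r) ** 2

def localizability(tags, uav_pos):
    """Score a grid cell by the smallest eigenvalue of the summed FIM:
    the worst-constrained direction dominates localization robustness."""
    F = sum(tag_fim(t, uav_pos) for t in tags)
    return float(np.min(np.linalg.eigvalsh(F)))

# Two tags in the same direction leave one axis unconstrained, while
# two tags in orthogonal directions constrain both axes (2-D example).
collinear = localizability([(1, 0), (2, 0)], (0, 0))
spread = localizability([(1, 0), (0, 1)], (0, 0))
```

    A genetic algorithm would then search over candidate tag sets, using a score like this (aggregated over the ROI grid and the 4D-BIM schedule) inside its fitness function.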

    Robot Vision in Industrial Assembly and Quality Control Processes


    Pose Estimation for Robotic Disassembly Using RANSAC with Line Features

    In this thesis, a new technique is proposed to recognize and estimate the pose of a given 3D object from a single real image, given prior knowledge of its approximate structure. Metrics to evaluate the correctness of a calculated pose are presented and analyzed. The traditional and the more recent approaches used in solving this problem are explored, and the various methodologies adopted are discussed. The first step in disassembling a given assembly from its image is to recognize the attitude and translation of each of its constituent components, the fundamental problem addressed in this work. The proposed algorithm does not depend on uniquely identifiable 3D model surface features for its operation, which makes it ideally suited for object recognition for assemblies. The algorithm works well even for low-resolution occluded object images taken under variable illumination conditions and heavy shadows, and performs markedly better when these factors are removed. The algorithm uses a combination of computer vision concepts such as segmentation, corner detection and camera calibration, and subsequently adopts a line-based object pose estimation technique (originally based on the RANSAC algorithm) to settle on the best pose estimate. The novelty of the proposed technique lies in the specific way in which poses are evaluated in the RANSAC-like algorithm. In particular, line-based pose evaluation is adopted, where the line chamfer image is used to evaluate the error distance between the projected model line and the image edges. The correctness of the computed pose is determined from the number of line matches computed using this error distance. As opposed to the RANSAC algorithm, where the search process is pseudo-random, we perform an exhaustive pose search instead. Techniques to reduce the search space substantially are discussed and implemented. The algorithm was used to estimate the pose of 28 objects in 22 images, some of which contain multiple objects. It achieves a 3D mismatch error of less than 2.5 cm in 90% of the cases and less than 1 cm in 53% of the cases in the dataset used.
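    The line-chamfer pose score described above can be sketched as follows. The `project` callable, the sampling density and the match threshold are illustrative assumptions rather than the thesis implementation; a chamfer image (the distance transform of the edge map) gives each pixel its distance to the nearest image edge:

```python
import numpy as np

def score_pose(model_lines, project, chamfer, n_samples=20, tol=3.0):
    """Score a candidate pose by line-chamfer error: project each 3D
    model line into the image, sample points along it, read the
    distance-to-nearest-edge from the chamfer image, and count lines
    whose mean error falls under `tol` pixels."""
    matches = 0
    for p0, p1 in model_lines:
        a, b = project(p0), project(p1)          # endpoints in pixel coords
        ts = np.linspace(0.0, 1.0, n_samples)
        pts = (1 - ts)[:, None] * a + ts[:, None] * b
        rows = np.clip(pts[:, 1].astype(int), 0, chamfer.shape[0] - 1)
        cols = np.clip(pts[:, 0].astype(int), 0, chamfer.shape[1] - 1)
        if chamfer[rows, cols].mean() < tol:
            matches += 1
    return matches  # the candidate pose with the most matched lines wins

# Toy check: an edge lies along image row 5; a model line projecting onto
# that row matches, while a line projecting onto row 0 does not.
chamfer_img = np.tile(np.abs(np.arange(10) - 5)[:, None], (1, 10)).astype(float)
project = lambda p: np.asarray(p, float)[:2]     # orthographic stand-in pose
lines = [((0, 5, 0), (9, 5, 0)), ((0, 0, 0), (9, 0, 0))]
n_matched = score_pose(lines, project, chamfer_img)
```

    An exhaustive search would call a function like this once per discretized pose and keep the maximizer, which is why the thesis's search-space reduction techniques matter.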

    Data-driven Soft Sensors in the Process Industry

    In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring and other tasks related to process control. This paper discusses characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, already demonstrated usefulness and huge, though not yet fully realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance together with their possible solutions.
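    As a concrete illustration of the data-driven idea (not an example taken from the paper), a minimal soft sensor is a regression model fitted on historical plant data that estimates a hard-to-measure quality variable from cheap online measurements; synthetic data stands in for plant records here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic plant history: easy-to-measure inputs (say temperature,
# pressure, flow) and a hard-to-measure quality variable they determine.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -0.8, 0.3])
y = X @ true_w + 0.05 * rng.normal(size=200)

# Fit the soft sensor offline (ordinary least squares here; industrial
# practice often prefers PLS or ridge regression to cope with the
# collinearity that is typical of process data).
Xb = np.column_stack([X, np.ones(len(X))])   # add a bias term
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def soft_sensor(temp, pres, flow):
    """Online inference: estimate the quality variable from live readings."""
    return np.array([temp, pres, flow, 1.0]) @ w
```

    The paper's discussion of drifting operating points and missing values is precisely about why such a model needs maintenance after deployment.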

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With specific focus on unmanned robotic mapping to aid clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degrade to accuracy levels below their design specification. This observation necessitates methods to integrate multi-sensor data that account for sensor conflict, performance degradation and potential failure during operation. Our work in this dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that evolves a belief policy on the sensors within the multi-agent 3D mapping systems to survive and counter failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, minimizes uncertainty during autonomous operation by adaptively deciding whether to fuse all sensors or to rely on the believable ones. We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and in vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, along with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.
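    The belief-policy idea can be illustrated with a deliberately simplified fusion rule; this is a stand-in for the dissertation's information-theoretic framework, not its actual algorithm. Sensors whose readings are inconsistent with the consensus are gated out, and the survivors are combined by inverse-variance weighting:

```python
import numpy as np

def fuse(estimates, variances, gate=3.0):
    """Fuse scalar state estimates from heterogeneous sensors.
    Readings more than `gate` robust standard deviations from the
    median are treated as degraded and excluded; believed sensors
    are combined by inverse-variance weighting."""
    z = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    med = np.median(z)
    mad = np.median(np.abs(z - med)) * 1.4826 + 1e-9   # robust spread
    believed = np.abs(z - med) <= gate * mad
    w = believed / v                    # zero weight for disbelieved sensors
    return float(np.sum(w * z) / np.sum(w)), believed

# Three agreeing sensors and one failed sensor reporting a wild value:
# naive averaging would be catastrophically biased; gating recovers.
est, mask = fuse([10.1, 9.9, 10.0, 55.0], [0.1, 0.1, 0.2, 0.1])
```

    Note that the failed sensor here reports a confident (low-variance) wrong value, which is exactly the case where pure inverse-variance fusion without a belief policy fails worst.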

    Using Cost Simulation and Computer Vision to Inform Probabilistic Cost Estimates

    Cost estimating is a critical task in the construction process. Building cost estimates from historical data on previously performed projects has long been recognized as one of the better methods to generate precise construction bids. However, cost and productivity data are typically gathered at the summary level for cost-control purposes, so the possible ranges of production rates and costs associated with construction activities lack accuracy and comprehensiveness; in turn, the robustness of cost estimates is minimal. Thus, this study explores a range of cost and productivity data to better inform potential outcomes of cost estimates, using probabilistic cost simulation and computer vision techniques for activity production rate analysis. Chapter two employed the Monte Carlo Simulation approach to compute a range of cost outcomes and find the optimal construction methods for large-scale concrete construction. The probabilistic cost simulation approach helps decision-makers better understand the probable cost consequences of different construction methods and make more informed decisions based on the project characteristics. Chapter three experimented with a computer vision-based skeletal pose estimation model and a recurrent neural network to recognize human activities. The activity recognition algorithm was employed to help interpret construction activities into productivity information for automated labor productivity tracking. Chapter four implemented computer vision-based object detection and object tracking algorithms to automatically collect construction productivity data. The productivity data collected were used to inform the probabilistic cost estimates, and the Monte Carlo Simulation was adopted to explore potential cost outcomes and sensitive cost factors in the overall construction project.
    The study demonstrated how computer vision techniques and probabilistic cost simulation improve the reliability of cost estimates to support construction decision-making.
    Advisor: Philip Baruth
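    The Monte Carlo step can be sketched as follows; the quantities and distributions are illustrative placeholders, not the thesis data. Productivity and cost inputs are drawn from triangular distributions (min, mode, max), the ranges that vision-based production tracking would supply, and percentiles of the simulated total summarize the probable cost range:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative inputs for a concrete-placement activity: the quantity is
# fixed, while productivity and unit costs vary over observed ranges.
quantity = 500.0                                      # m^3 of concrete
productivity = rng.triangular(4.0, 6.0, 8.0, N)       # m^3 per crew-hour
crew_rate = rng.triangular(150.0, 180.0, 230.0, N)    # $ per crew-hour
material = rng.triangular(95.0, 110.0, 130.0, N)      # $ per m^3

# Each draw yields one plausible total cost; the empirical distribution
# of all draws is the probabilistic cost estimate.
total_cost = quantity / productivity * crew_rate + quantity * material
p10, p50, p90 = np.percentile(total_cost, [10, 50, 90])
```

    Reporting the p10/p50/p90 band, rather than a single point estimate, is what lets a bidder weigh the cost consequences of alternative construction methods.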