
    Lateral dampers for thrust bearings

    The development of lateral damping schemes for thrust bearings was examined: candidate concepts were ranked by their applicability to various engine classes, the best concept was selected for each class, and an in-depth evaluation was performed. Five major engine classes were considered: large transport, military, small general aviation, turboshaft, and non-man-rated. The damper concepts developed for evaluation were: curved beam, constrained and unconstrained elastomer, hybrid boost bearing, hydraulic thrust piston, conical squeeze film, and rolling element thrust face.

    Virtual-to-Real-World Transfer Learning for Robots on Wilderness Trails

    Robots hold promise in many scenarios involving outdoor use, such as search-and-rescue, wildlife management, and collecting data to improve environment, climate, and weather forecasting. However, autonomous navigation of outdoor trails remains a challenging problem. Recent work has sought to address this issue using deep learning. Although this approach has achieved state-of-the-art results, the deep learning paradigm may be limited by its reliance on large amounts of annotated training data. Collecting and curating training datasets may not be feasible or practical in many situations, especially as trail conditions may change due to seasonal weather variations, storms, and natural erosion. In this paper, we explore an approach to address this issue through virtual-to-real-world transfer learning, using a variety of deep learning models trained to classify the direction of a trail in an image. Our approach utilizes synthetic data gathered from virtual environments for model training, bypassing the need to collect a large number of real images of the outdoors. We validate our approach in three main ways. First, we demonstrate that our models achieve classification accuracies upwards of 95% on our synthetic data set. Next, we use our classification models in the control system of a simulated robot to demonstrate feasibility. Finally, we evaluate our models on real-world trail data and demonstrate the potential of virtual-to-real-world transfer learning. Comment: IROS 201
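    To make the virtual-to-real transfer setup concrete, the sketch below shows the general idea only, not the paper's actual models or data: a small PyTorch CNN with a hypothetical three-way trail-direction head (left / straight / right) is trained on stand-in tensors playing the role of synthetic renders and then scored on stand-in tensors playing the role of real trail images.

```python
# Minimal sketch of virtual-to-real transfer for trail-direction classification.
# Assumptions (not from the paper): a 3-class head, a tiny CNN, and random
# in-memory tensors standing in for synthetic and real image datasets.
import torch
import torch.nn as nn

class TrailNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_on_synthetic(model, images, labels, epochs=20):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()

# Stand-ins for rendered (synthetic) and captured (real) trail images.
synthetic_x, synthetic_y = torch.rand(64, 3, 64, 64), torch.randint(0, 3, (64,))
real_x, real_y = torch.rand(16, 3, 64, 64), torch.randint(0, 3, (16,))

model = TrailNet()
train_on_synthetic(model, synthetic_x, synthetic_y)
with torch.no_grad():
    acc = (model(real_x).argmax(1) == real_y).float().mean()
print(f"real-data accuracy: {acc.item():.1%}")
```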

    Extrinsic Calibration of a Camera-Arm System Through Rotation Identification

    Determining extrinsic calibration parameters is a necessity in any robotic system composed of actuators and cameras. Once a system is outside the lab environment, parameters must be determined without relying on outside artifacts such as calibration targets. We propose a method that relies on structured motion of an observed arm to recover extrinsic calibration parameters. Our method combines known arm kinematics with observations of conics in the image plane to calculate maximum-likelihood estimates for calibration extrinsics. This method is validated in simulation and tested against a real-world model, yielding results consistent with ruler-based estimates. Our method shows promise for estimating the pose of a camera relative to an articulated arm's end effector without requiring tedious measurements or external artifacts. Index Terms: robotics, hand-eye problem, self-calibration, structure from motion.
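    The abstract's method fits extrinsics to conic observations with a maximum-likelihood estimator; as a much simpler illustration of recovering a camera-arm transform from structured arm motion, the sketch below aligns end-effector positions known from kinematics with the same points expressed in the camera frame, using the Kabsch / orthogonal Procrustes solution. The point-correspondence setup, the numpy implementation, and the synthetic check are assumptions for illustration, not the authors' algorithm.

```python
# Recover R, t such that points known in the arm/base frame map to the camera
# frame: p_cam ≈ R @ p_arm + t (a simplified stand-in for the conic-based MLE).
import numpy as np

def rigid_transform(P_arm, P_cam):
    """Kabsch solution for the rigid transform between two N x 3 point sets."""
    ca, cc = P_arm.mean(axis=0), P_cam.mean(axis=0)
    H = (P_arm - ca).T @ (P_cam - cc)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cc - R @ ca
    return R, t

# Synthetic check with a random ground-truth pose and noiseless points.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))           # make it a proper rotation
t_true = rng.normal(size=3)
P_arm = rng.normal(size=(20, 3))                   # end-effector positions from kinematics
P_cam = P_arm @ R_true.T + t_true                  # the same points seen by the camera
R_est, t_est = rigid_transform(P_arm, P_cam)
print("rotation error:", np.linalg.norm(R_est - R_true))
print("translation error:", np.linalg.norm(t_est - t_true))
```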

    Generating Executable Action Plans with Environmentally-Aware Language Models

    Large Language Models (LLMs) trained on massive text datasets have recently shown promise in generating action plans for robotic agents from high-level text queries. However, these models typically do not consider the robot's environment, resulting in generated plans that may not actually be executable due to ambiguities in the planned actions or environmental constraints. In this paper, we propose an approach to generate environmentally-aware action plans that agents are better able to execute. Our approach involves integrating environmental objects and object relations as additional inputs into LLM action plan generation, providing the system with an awareness of its surroundings and resulting in plans where each generated action is mapped to objects present in the scene. We also design a novel scoring function that, along with generating the action steps and associating them with objects, helps the system disambiguate among object instances and take their states into account. We evaluated our approach using the VirtualHome simulator and the ActivityPrograms knowledge base and found that action plans generated by our system had a 310% improvement in executability and a 147% improvement in correctness over prior work. The complete code and a demo of our method are publicly available at https://github.com/hri-ironlab/scene_aware_language_planner.
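    As a toy illustration of the grounding idea only (not the released scene_aware_language_planner code), the sketch below combines a hypothetical language-model log-probability for each candidate step with a crude string-similarity match against the objects present in the scene; the paper's scoring function additionally handles object states and instance disambiguation.

```python
# Toy sketch: pick the candidate action step that is both likely under the
# language model and grounded to an object actually present in the scene.
# The candidate steps, log-probabilities, scene contents, and the additive
# combination are all assumptions made for illustration.
from difflib import SequenceMatcher

scene_objects = {"mug": "dirty", "sink": "empty", "coffee machine": "off"}

def ground(step, lm_logprob, objects):
    """Score a step by LM confidence plus how well it matches a scene object."""
    best_obj, best_sim = None, 0.0
    for obj in objects:
        sim = SequenceMatcher(None, step.lower(), obj).ratio()
        if sim > best_sim:
            best_obj, best_sim = obj, sim
    return lm_logprob + best_sim, best_obj

candidates = [("grab cup", -1.0), ("grab mug", -0.3), ("turn on the stove", -2.5)]
scored = [(step, *ground(step, lp, scene_objects)) for step, lp in candidates]
best_step, best_score, best_obj = max(scored, key=lambda s: s[1])
print(f"selected step: {best_step!r} -> object: {best_obj!r} (score {best_score:.2f})")
```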

    A Neural Network Approach for Analyzing the Illusion of Movement in Static Images

    The purpose of this work is to analyze the illusion of movement that appears when viewing certain static images. This analysis is accomplished using a biologically plausible neural network that learned, in an unsupervised manner, to identify the movement direction of shifting training patterns. Some of the biological features that characterize this neural network are: intrinsic plasticity to adapt firing probability, metaplasticity to regulate synaptic weights, and firing adaptation of simulated pyramidal networks. After analyzing the results, we hypothesize that the illusion arises from cinematographic perception mechanisms in the brain, whereby each visual frame is renewed approximately every 100 ms. Blurring of moving objects in visual frames might be interpreted by the brain as movement, just as when a static blurred object is presented.

    A Survey of Data and Encodings in Word Clouds

    Word clouds are an increasingly popular means of presenting statistical summaries of document collections, appearing frequently in digital humanities literature, newspaper articles, and social media. Despite their ubiquity and intuitive appeal, our ability to read such visualizations accurately is not yet fully understood. Past work has shown that readers perform poorly at certain tasks with word clouds, and that perceptual biases can affect their interpretation. To better understand the potential impacts of these biases, we present a survey of word cloud usage. Drawing from a corpus of literature from the fields of digital humanities, data visualization, and journalism, we record which data encodings are most commonly used (e.g., font size, position), what data is being presented, and what tasks are meant to be supported. We offer design recommendations given the most common tasks and biases, and point to future work to answer standing questions.

    Beyond Forms: An Introduction to Contemporary Thought in Design, the Arts, and Architecture
