
    Machine Design Experiments Using Gears to Foster Discovery Learning

    For the typical undergraduate engineering student, the topic of gears is introduced and discussed in several courses. Early exposure may be in a physics course or in a first dynamics course, where gear pairs are presented as an idealized means to change speed ratios and torque ratios. They are used for mechanical advantage or to achieve a desired speed, and the focus is usually on kinematics. Since gears have inertia, they store kinetic energy and are part of the dynamic equations of motion of mechanisms and machines. For mechanical engineering students, gears are a core component studied in courses such as 'kinematics and dynamics of mechanisms' and 'machine design', where the nomenclature and design equations are developed for various types of gears. There may be exposure to real gears in a mechanical engineering laboratory; more often, students may see gears passed around in class and as part of demonstrations. In this paper we describe new experiments that were designed to provide mechanical engineering students with discovery learning experiences with gears and mechanical systems using gears. The suite of practical experiments presents students with a range of challenges that require them to analyze, measure, design, and fabricate gears. Activities in the experiments include: (1) identifying gear types (spur, helical, bevel, etc.) and appropriate applications (automotive transmissions and differentials, drills, gear-head motors); (2) disassembling and re-assembling a kitchen mixer (with design and manufacturing questions related to its gears); (3) disassembling and re-assembling an automotive HVAC baffle sub-assembly (with measurement of train ratios, and design and manufacturing questions related to its gears); and (4) designing the gear mechanism for driving the minute and hour hands of a gear clock given a known yet arbitrary drive speed, then fabricating the gears of the clock via rapid prototyping (3D printing), assembling the clock, and testing the timing accuracy. In addition to reporting the details of the experiments, we share the experiences of students and teaching assistants in their use and effectiveness. We provide insights into how well students became familiar with the types and nomenclature of gears and understood the applicability of different gears to actual real-world problems. The intent of the experiments is to effectively enhance mechanical engineering students' awareness of gears and expand their knowledge and confidence in the use of gears in machine and mechanism design.
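    The gear-clock activity reduces, at its core, to choosing integer tooth counts whose compound ratio yields the required 12:1 reduction between the minute and hour shafts. A minimal sketch of that ratio arithmetic (the tooth counts below are illustrative choices, not values from the paper):

    ```python
    from fractions import Fraction

    def train_ratio(stages):
        """Overall speed reduction of a compound gear train.

        stages: list of (driver_teeth, driven_teeth) pairs, one per mesh.
        The output shaft turns `train_ratio` times slower than the input.
        """
        ratio = Fraction(1)
        for driver, driven in stages:
            ratio *= Fraction(driven, driver)
        return ratio

    # The hour hand must turn 12x slower than the minute hand.
    # One realization: a 3:1 stage followed by a 4:1 stage.
    assert train_ratio([(10, 30), (12, 48)]) == 12
    ```

    Using exact `Fraction` arithmetic rather than floats keeps the check immune to rounding, which matters when verifying that a candidate train hits the ratio exactly rather than approximately.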

    Many Episode Learning in a Modular Embodied Agent via End-to-End Interaction

    In this work we give a case study of an embodied machine-learning (ML) powered agent that improves itself via interactions with crowd-workers. The agent consists of a set of modules, some of which are learned, and others heuristic. While the agent is not "end-to-end" in the ML sense, end-to-end interaction is a vital part of the agent's learning mechanism. We describe how the design of the agent works together with the design of multiple annotation interfaces to allow crowd-workers to assign credit to module errors from end-to-end interactions, and to label data for individual modules. Over multiple rounds of automated human-agent interaction, credit assignment, data annotation, and model re-training and re-deployment, we demonstrate agent improvement.

    Neural Attention Memory

    We propose a novel perspective of the attention mechanism by reinventing it as a memory architecture for neural networks, namely Neural Attention Memory (NAM). NAM is a memory structure that is both readable and writable via differentiable linear algebra operations. We explore three use cases of NAM: memory-augmented neural network (MANN), few-shot learning, and efficient long-range attention. First, we design two NAM-based MANNs, Long Short-term Attention Memory (LSAM) and the NAM Turing Machine (NAM-TM), that show better computational powers in algorithmic zero-shot generalization tasks compared to other baselines such as the differentiable neural computer (DNC). Next, we apply NAM to the N-way K-shot learning task and show that it is more effective at reducing false positives compared to the baseline cosine classifier. Finally, we implement an efficient Transformer with NAM and evaluate it with long-range arena tasks to show that NAM can be an efficient and effective alternative to scaled dot-product attention. Comment: Preprint. Under review.
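    The central idea of a memory that is both readable and writable through differentiable linear algebra can be illustrated with a simple linear associative memory: writes accumulate key-value outer products into a matrix, and reads project a query through that matrix. This is a generic sketch of the mechanism, not the paper's exact NAM formulation:

    ```python
    import numpy as np

    def write(memory, key, value):
        """Write: add the outer product value * key^T to the memory matrix."""
        return memory + np.outer(value, key)

    def read(memory, query):
        """Read: a differentiable matrix-vector product with the memory."""
        return memory @ query

    d = 4
    memory = np.zeros((d, d))
    key = np.array([1.0, 0.0, 0.0, 0.0])    # unit-norm key
    value = np.array([0.0, 2.0, 0.0, 0.0])

    memory = write(memory, key, value)
    recalled = read(memory, key)            # querying with the key recovers the value
    ```

    Because both operations are plain matrix arithmetic, gradients flow through reads and writes, which is what lets such a memory be trained end-to-end inside a neural network.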