10 research outputs found

    Hot melt adhesive attachment pad

    A hot melt adhesive attachment pad for releasably securing distinct elements together is described, which is particularly useful in the construction industry or in the vacuum environment of space. The attachment pad consists primarily of a cloth selectively impregnated with a charge of hot melt adhesive, a thermo-foil heater, and a thermo-cooler. These components are securely mounted in a mounting assembly. In operation, the operator activates the heating cycle, transforming the hot melt adhesive to a substantially liquid state, positions the pad against the attachment surface, and activates the cooling cycle, solidifying the adhesive and forming a strong, releasable bond.

    Induction heating coupler and annealer

    An induction heating device includes a handle having a hollow interior and two opposite ends, a wrist connected to one end of the handle, a U-shaped pole piece having two spaced apart ends, a tank circuit including an induction coil wrapped around the pole piece and a capacitor connected to the induction coil, a head connected to the wrist and including a housing for receiving the U-shaped pole piece, the two spaced apart ends of the pole piece extending outwardly beyond the housing, and a power source connected to the tank circuit. When the tank circuit is energized and a susceptor is placed in juxtaposition to the ends of the U-shaped pole piece, the susceptor is heated by induction heating due to a magnetic flux passing between the two ends of the pole piece.

    Induction heating coupler

    An induction heating device includes a handle having a hollow interior and two opposite ends, a wrist connected to one end of the handle, a U-shaped pole piece having two spaced apart ends, a tank circuit including an induction coil wrapped around the pole piece and a capacitor connected to the induction coil, a head connected to the wrist and including a housing for receiving the U-shaped pole piece, the two spaced apart ends of the pole piece extending outwardly beyond the housing, and a power source connected to the tank circuit. When the tank circuit is energized and a susceptor is placed in juxtaposition to the ends of the U-shaped pole piece, the susceptor is heated by induction heating due to magnetic flux passing between the two ends of the pole piece.
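
    Neither abstract gives component values. As a general note (not stated in the source), a tank circuit of this kind is typically driven near its resonant frequency, which for an inductance L (the coil wrapped around the pole piece) and capacitance C (the connected capacitor) is

        f_0 = \frac{1}{2\pi\sqrt{LC}}

    Operating near f_0 maximizes the circulating current in the coil, and hence the alternating magnetic flux between the pole-piece ends that heats the susceptor.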

    Democratic population decisions result in robust policy-gradient learning: A parametric study with GPU simulations

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at low cost. However, GPU programming is a non-trivial task and, moreover, architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule perform best. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via a population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult to carry out on a desktop computer without GPU programming. We present the routines developed for this purpose and show a speedup of 5x to 42x over optimised Python code. The higher speedups are achieved when we exploit the parallelism of the GPU to search over learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed to simulate networks of spiking neurons, particularly when multiple parameter configurations are investigated. © 2011 Richmond et al.
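
    The abstract describes the method only at a high level, and the authors' GPU routines are not reproduced here. The following is a minimal illustrative sketch in plain NumPy (not the authors' code and not GPU-accelerated) of the "democratic" readout idea: independent leaky Integrate-and-Fire neurons whose spike counts vote for a direction through a population vector readout. All parameter values and names (n_neurons, simulate_lif, population_vector) are hypothetical, and the policy-gradient learning rule itself is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- assumed, illustrative parameters (not taken from the paper) ---
    n_neurons = 64          # output-layer neurons, one preferred direction each
    dt = 1e-3               # simulation time step (s)
    t_sim = 0.2             # trial duration (s)
    tau_m = 20e-3           # membrane time constant (s)
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

    # Each neuron is assigned a preferred direction on the unit circle.
    preferred = np.linspace(0.0, 2 * np.pi, n_neurons, endpoint=False)

    def simulate_lif(input_current):
        """Simulate independent LIF neurons driven by a constant per-neuron
        input current; return each neuron's spike count."""
        v = np.full(n_neurons, v_rest)
        spikes = np.zeros(n_neurons)
        for _ in range(int(t_sim / dt)):
            noise = rng.normal(0.0, 0.2, n_neurons)
            v += (-(v - v_rest) + input_current + noise) * (dt / tau_m)
            fired = v >= v_thresh
            spikes += fired
            v[fired] = v_reset
        return spikes

    def population_vector(spike_counts):
        """'Democratic' readout: every neuron votes for its preferred direction,
        weighted by its spike count; the decision is the angle of the summed vector."""
        x = np.sum(spike_counts * np.cos(preferred))
        y = np.sum(spike_counts * np.sin(preferred))
        return np.arctan2(y, x)

    # Example trial: a stimulus at ~90 degrees drives neurons whose preferred
    # direction is close to the stimulus more strongly (cosine tuning).
    stimulus = np.pi / 2
    drive = 1.5 + np.cos(preferred - stimulus)
    counts = simulate_lif(drive)
    print("decoded direction (deg):", np.degrees(population_vector(counts)))

    On this toy example the decoded direction should come out close to the 90-degree stimulus; in the paper the analogous readout is what drives action selection in the navigation task.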