JOB BUILDER remote batch processing subsystem
The functions of the JOB BUILDER remote batch processing subsystem are described. Instructions are given for using it as a component of a display system developed by personnel of the System Programming Laboratory, Institute of Space Research, USSR Academy of Sciences.
Optimisation of solvolysis for recycling carbon fibre reinforced composites
Solvolysis processes have been used to degrade the resin of two different varieties of epoxy-based carbon fibre reinforced composite (CFRC) materials. Degradation of up to 98% has been achieved when processing material at 320 °C using a supercritical solvent mixture of acetone and water. Increasing the processing time from 1 to 2 hours increases degradation by only 10%, and there does not appear to be any benefit in processing the material beyond this time. Due to the batch conditions used, it is necessary to rinse the fibres with acetone after processing to remove remaining organic residue. Washing the fibres under supercritical batch conditions, however, does not remove the residue as effectively as a simple hand wash with acetone. Shredding the sample prior to processing also has no significant effect. The process investigated requires 19 MJ kg⁻¹ of fibres recovered and, since the process has not yet been optimised, shows strong potential for future development, especially since it allows for the recovery and reuse of organic resinous products.
Batch-normalized Recurrent Highway Networks
Gradient control plays an important role in feed-forward networks applied to various computer vision tasks. Previous work has shown that Recurrent Highway Networks minimize the problem of vanishing or exploding gradients; they achieve this by setting the eigenvalues of the temporal Jacobian to 1 across the time steps. In this work, batch-normalized recurrent highway networks are proposed to control the gradient flow in an improved way for network convergence. Specifically, the introduced model is formed by batch normalizing the inputs at each recurrence loop. The proposed model is tested on an image captioning task using the MSCOCO dataset. Experimental results indicate that the batch-normalized recurrent highway networks converge faster and perform better compared with traditional LSTM- and RHN-based models.

Comment: 5 pages, 3 figures, published in the 2017 IEEE International Conference on Image Processing (ICIP)
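The batch-normalized recurrence can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes PyTorch, a single recurrence depth, a coupled carry gate (c = 1 − t) as in the original RHN formulation, and hypothetical names such as BNRecurrentHighwayCell; what it shows is batch normalization applied to the input projections at each recurrence loop, as the abstract describes.

```python
import torch
import torch.nn as nn

class BNRecurrentHighwayCell(nn.Module):
    """Sketch of a recurrent highway cell with batch-normalized inputs
    at each recurrence step (hypothetical names; coupled carry gate)."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Input projections for the transform candidate h and the gate t.
        self.w_h = nn.Linear(input_size, hidden_size, bias=False)
        self.w_t = nn.Linear(input_size, hidden_size, bias=False)
        # Recurrent projections acting on the previous state.
        self.r_h = nn.Linear(hidden_size, hidden_size)
        self.r_t = nn.Linear(hidden_size, hidden_size)
        # Batch normalization of the input projections at each recurrence loop.
        self.bn_h = nn.BatchNorm1d(hidden_size)
        self.bn_t = nn.BatchNorm1d(hidden_size)

    def forward(self, x: torch.Tensor, s_prev: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_size), s_prev: (batch, hidden_size)
        h = torch.tanh(self.bn_h(self.w_h(x)) + self.r_h(s_prev))
        t = torch.sigmoid(self.bn_t(self.w_t(x)) + self.r_t(s_prev))
        # Highway update with coupled carry gate: c = 1 - t.
        return h * t + s_prev * (1.0 - t)


# Usage: unroll the cell over a short dummy sequence.
cell = BNRecurrentHighwayCell(input_size=32, hidden_size=64)
x_seq = torch.randn(8, 10, 32)          # (batch, time, features)
s = torch.zeros(8, 64)
for step in range(x_seq.size(1)):
    s = cell(x_seq[:, step, :], s)
```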
Enabling a High Throughput Real Time Data Pipeline for a Large Radio Telescope Array with GPUs
The Murchison Widefield Array (MWA) is a next-generation radio telescope currently under construction in the remote Western Australia Outback. Raw data will be generated continuously at 5 GiB/s, grouped into 8 s cadences. This high throughput motivates the development of on-site, real-time processing and reduction in preference to archiving, transport and off-line processing. Each batch of 8 s of data must be completely reduced before the next batch arrives. Maintaining real-time operation will require a sustained performance of around 2.5 TFLOP/s (including convolutions, FFTs, interpolations and matrix multiplications). We describe a scalable heterogeneous computing pipeline implementation, exploiting both the high computing density and FLOP-per-Watt ratio of modern GPUs. The architecture is highly parallel within and across nodes, with all major processing elements performed by GPUs. Necessary scatter-gather operations along the pipeline are loosely synchronized between the nodes hosting the GPUs. The MWA will be a frontier scientific instrument and a pathfinder for planned peta- and exascale facilities.

Comment: Version accepted by Comp. Phys. Comm.
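The real-time constraint can be made concrete with a back-of-envelope per-batch budget. This is a hedged sketch: the data rate, cadence, and sustained compute figures come from the abstract; the rest is plain arithmetic.

```python
# Back-of-envelope per-batch budget for the MWA real-time pipeline,
# using only figures quoted in the abstract.
DATA_RATE_GIB_S = 5.0        # raw data rate, GiB/s
CADENCE_S = 8.0              # batch cadence, seconds
SUSTAINED_TFLOP_S = 2.5      # required sustained compute, TFLOP/s

data_per_batch_gib = DATA_RATE_GIB_S * CADENCE_S        # 40 GiB ingested per batch
flops_per_batch_tflop = SUSTAINED_TFLOP_S * CADENCE_S   # 20 TFLOP of work per batch

# Each 8 s batch must be fully reduced before the next one arrives,
# so those 20 TFLOP of processing have at most 8 s of wall-clock time.
print(f"Data per batch: {data_per_batch_gib:.0f} GiB")
print(f"Work per batch: {flops_per_batch_tflop:.0f} TFLOP in under {CADENCE_S:.0f} s")
```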
