
    Urban wind energy conversion: the potential of ducted turbines

    The prospects for urban wind power are discussed. A roof-mounted ducted wind turbine, which uses pressure differentials created by wind flow around a building, is proposed as an alternative to more conventional approaches. Outcomes from tests at model and prototype scale are described, and a simple mathematical model is presented. Predictions from this model suggest that a ducted turbine can produce very high specific power outputs, going some way towards offsetting its directional sensitivity. Further predictions using climate files are made to assess annual energy output and seasonal variations, with a conventional small wind turbine and a photovoltaic panel as comparators. It is concluded that ducted turbines have significant potential for retro-fitting to existing buildings, and have clear advantages where visual impact and safety are matters of concern.
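    As an illustration of the climate-file approach described in the abstract, the sketch below estimates annual energy output from a year of hourly wind speeds. The power-curve constants (duct area, power coefficient, generator rating) and the synthetic Weibull wind data are assumptions for the example, not figures from the paper.

```python
# Hypothetical sketch: annual energy estimate from hourly wind-speed data.
# All constants below are illustrative assumptions, not values from the paper.
import numpy as np

RHO = 1.225          # air density, kg/m^3
AREA = 1.0           # duct inlet area, m^2 (assumed)
CP = 0.3             # overall power coefficient (assumed)
RATED_POWER = 500.0  # generator rating, W (assumed)

def turbine_power(v):
    """Instantaneous power (W) at wind speed v (m/s), capped at the rating."""
    p = 0.5 * RHO * AREA * CP * v**3
    return min(p, RATED_POWER)

# One year of hourly wind speeds; a real study would read these from a
# climate file for the site in question rather than sampling a distribution.
hourly_wind = np.random.weibull(2.0, 8760) * 5.0

annual_energy_kwh = sum(turbine_power(v) for v in hourly_wind) / 1000.0
print(f"Estimated annual output: {annual_energy_kwh:.0f} kWh")
```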

    Design of microfluidic networks

    Microfluidics is a relatively new and fast-growing research area in fluid mechanics. The devices in question are thin wafers containing etched or printed interconnecting channels through which fluids are pumped; the fluids can mix and/or react at various nodes to produce an output product. Microfluidic devices have applications in many manufacturing and chemical-detection processes. For example, they can be used to manufacture monodisperse droplets with very well defined properties for pharmaceutical applications, or to form the basis for miniaturised ‘lab-on-a-chip’ sensor arrays for detecting biological substances or toxins.
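    One common way to design such channel networks (a textbook approach, not necessarily the one taken in this work) is to treat each channel as a hydraulic resistor via the Hagen-Poiseuille law and analyse the network as if it were an electrical circuit. A minimal sketch, with assumed channel dimensions:

```python
# Sketch: microfluidic network design via the hydraulic-resistance analogy.
# Channel dimensions and the applied pressure are assumed for illustration.
import math

MU = 1.0e-3  # dynamic viscosity of water, Pa*s

def channel_resistance(length, radius):
    """Hydraulic resistance of a circular channel: R = 8*mu*L / (pi*r^4)."""
    return 8.0 * MU * length / (math.pi * radius**4)

def series(*rs):
    """Channels in series add their resistances."""
    return sum(rs)

def parallel(*rs):
    """Channels in parallel add reciprocals, as resistors do."""
    return 1.0 / sum(1.0 / r for r in rs)

# Example: an inlet channel feeding two branches that rejoin at a mixing node.
r_inlet = channel_resistance(length=0.01, radius=50e-6)
r_branches = parallel(channel_resistance(0.02, 50e-6),
                      channel_resistance(0.02, 25e-6))
r_total = series(r_inlet, r_branches)

dp = 10e3             # applied pressure drop, Pa (assumed)
q = dp / r_total      # total volumetric flow rate, m^3/s
print(f"Total flow rate: {q * 1e9:.2f} nL/s")
```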

    FreezeOut: Accelerate Training by Progressively Freezing Layers

    The early layers of a deep neural net have the fewest parameters, but take up the most computation. In this extended abstract, we propose to train the hidden layers for only a set portion of the training run, freezing them out one-by-one and excluding them from the backward pass. Through experiments on CIFAR, we empirically demonstrate that FreezeOut yields savings of up to 20% wall-clock time during training with 3% loss in accuracy for DenseNets, a 20% speedup without loss of accuracy for ResNets, and no improvement for VGG networks. Our code is publicly available at https://github.com/ajbrock/FreezeOut. Comment: Extended Abstract.
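    A minimal PyTorch sketch of the freezing idea: each layer trains for a fixed fraction of the run, earliest layers finishing first, after which its parameters are excluded from gradient computation. The toy model and the linear freezing schedule are assumptions for illustration; see the linked repository for the authors' implementation.

```python
# Sketch of progressive layer freezing, in the spirit of FreezeOut.
# The model and schedule here are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

total_iters = 10_000
layers = [m for m in model if isinstance(m, nn.Linear)]

# Earliest layers freeze first; the last layer trains for the whole run.
# A simple linear schedule is assumed here for illustration.
freeze_at = [int(total_iters * (0.5 + 0.5 * i / (len(layers) - 1)))
             for i in range(len(layers))]

def apply_freezeout(iteration):
    """Stop gradient computation for layers whose training window has ended."""
    for layer, t_freeze in zip(layers, freeze_at):
        if iteration >= t_freeze:
            for p in layer.parameters():
                p.requires_grad_(False)

# Inside a training loop one would call apply_freezeout(i) each iteration;
# once the earliest unfrozen layer moves up, backprop stops earlier in the net.
```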

    SMASH: One-Shot Model Architecture Search through HyperNetworks

    Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks. Our code is available at https://github.com/ajbrock/SMASH.
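    The toy sketch below illustrates the one-shot idea on a two-layer network: a hypernetwork maps an architecture code to the weights of the main model, so many candidate codes can be scored without retraining. All dimensions and names (including the HyperNet class itself) are hypothetical; the real system generates weights for full convolutional networks defined by its memory read-write mechanism.

```python
# Toy sketch of one-shot architecture scoring with a hypernetwork.
# Dimensions and names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_DIM, HID, OUT_DIM = 32, 16, 10

class HyperNet(nn.Module):
    """Generates main-network weights conditioned on an architecture code."""
    def __init__(self, code_dim=8):
        super().__init__()
        n_weights = HID * IN_DIM + OUT_DIM * HID
        self.gen = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_weights))

    def forward(self, code):
        flat = self.gen(code)
        w1 = flat[:HID * IN_DIM].view(HID, IN_DIM)
        w2 = flat[HID * IN_DIM:].view(OUT_DIM, HID)
        return w1, w2

def main_net(x, w1, w2):
    """Main model evaluated with hypernetwork-generated weights."""
    return F.linear(F.relu(F.linear(x, w1)), w2)

hyper = HyperNet()
code = torch.randn(8)                 # encoding of one candidate architecture
w1, w2 = hyper(code)
logits = main_net(torch.randn(4, IN_DIM), w1, w2)

# At search time one would compare validation loss across many codes, then
# train the most promising architecture from scratch with ordinary weights.
```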

    Generative and Discriminative Voxel Modeling with Convolutional Neural Networks

    When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification. Comment: 9 pages, 5 figures, 2 tables.
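    A minimal PyTorch sketch of a voxel-based classifier operating on 32x32x32 occupancy grids such as those derived from ModelNet. The architecture (the hypothetical VoxelClassifier below) is an assumed illustration, not the network from the paper.

```python
# Sketch: a small 3D CNN over voxel occupancy grids.
# The architecture is an illustrative assumption, not the paper's network.
import torch
import torch.nn as nn

class VoxelClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                              # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                              # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, num_classes)

    def forward(self, voxels):
        # voxels: (batch, 1, 32, 32, 32) occupancy grid with values in {0, 1}
        h = self.features(voxels)
        return self.classifier(h.flatten(1))

model = VoxelClassifier(num_classes=10)      # e.g. ModelNet10
logits = model(torch.rand(2, 1, 32, 32, 32).round())
```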