134 research outputs found

    Evaluation of fasteners and fastener materials for space vehicles

    Testing of fasteners, bolts, rivets, and fastener materials - high-temperature alloys

    Cloud cover typing from environmental satellite imagery. Discriminating cloud structure with Fast Fourier Transforms (FFT)

    The use of two-dimensional Fast Fourier Transforms (FFTs), combined with pattern recognition techniques, for the identification and classification of low-altitude stratus cloud structure from Geostationary Operational Environmental Satellite (GOES) imagery was examined. The development of a scene-independent pattern recognition methodology, unconstrained by conventional cloud morphological classifications, was emphasized. A technique for extracting cloud shape, direction, and size attributes from GOES visual imagery was developed. These attributes were combined with two statistical attributes (cloud mean brightness, cloud standard deviation) and interrogated using unsupervised clustering and maximum likelihood classification techniques. Results indicate that: (1) the key cloud discrimination attributes are mean brightness, direction, shape, and minimum size; (2) cloud structure can be differentiated at given pixel scales; (3) cloud type may be identifiable at coarser scales; (4) there are positive indications of scene independence, which would permit development of a cloud signature bank; (5) edge enhancement of GOES imagery does not appreciably improve cloud classification over the use of raw data; and (6) the GOES imagery must be apodized before generation of FFTs.
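Finding (6) above, that the imagery must be apodized before generating FFTs, can be illustrated with a short NumPy sketch. The Hann window and the 64x64 tile size are illustrative choices, not parameters taken from the study:

```python
import numpy as np

def apodize_and_fft(image):
    """Apply a 2-D Hann (cosine) apodization window to an image tile,
    then return the magnitude of its 2-D FFT.

    Apodization tapers the tile edges to zero so that the spectrum is
    not dominated by spurious edge discontinuities.
    """
    rows, cols = image.shape
    window = np.outer(np.hanning(rows), np.hanning(cols))
    tapered = image * window
    # Shift so the zero-frequency (DC) component sits at the centre.
    spectrum = np.fft.fftshift(np.fft.fft2(tapered))
    return np.abs(spectrum)

# A featureless tile concentrates energy at the DC component, whereas
# striated cloud structure would spread energy along one axis.
tile = np.ones((64, 64))
mag = apodize_and_fft(tile)
```

Shape, direction, and size attributes like those described above would then be read off the distribution of energy in `mag`.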

    Spiking Neural Network Connectivity and its Potential for Temporal Sensory Processing and Variable Binding

    The most biologically inspired artificial neurons are those of the third generation, termed spiking neurons, as individual pulses or spikes are the means by which stimuli are communicated. In essence, a spike is a short-term change in electrical potential and is the basis of communication between biological neurons. Unlike previous generations of artificial neurons, spiking neurons operate in the temporal domain, and exploit time as a resource in their computation. In 1952, Alan Lloyd Hodgkin and Andrew Huxley produced the first model of a spiking neuron; their model describes the complex electro-chemical process that enables spikes to propagate through, and hence be communicated by, spiking neurons. Since then, improvements in experimental procedures in neurobiology, particularly with in vivo experiments, have provided an increasingly detailed understanding of biological neurons. For example, it is now well understood that the propagation of spikes between neurons requires neurotransmitter, which is typically of limited supply. When the supply is exhausted, neurons become unresponsive. The morphology of neurons and the number of receptor sites, amongst many other factors, mean that neurons consume the supply of neurotransmitter at different rates. This in turn produces variations over time in the responsiveness of neurons, yielding various computational capabilities. Such improvements in the understanding of the biological neuron have culminated in a wide range of neuron models, ranging from the computationally efficient to the biologically realistic. These models enable the modelling of neural circuits found in the brain. In recent years, much of the focus in neuron modelling has moved to the study of the connectivity of spiking neural networks. Spiking neural networks provide a vehicle to understand, from a computational perspective, aspects of the brain’s neural circuitry. 
This understanding can then be used to tackle some of the historically intractable issues with artificial neurons, such as scalability and the lack of variable binding. Current knowledge of the feed-forward, lateral, and recurrent connectivity of spiking neurons, and of the interplay between excitatory and inhibitory neurons, is beginning to shed light on these issues through an improved understanding of the temporal processing capabilities and synchronous behaviour of biological neurons. This research topic aims to amalgamate current research tackling these issues.
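The "computationally efficient" end of the model spectrum mentioned above can be sketched with the leaky integrate-and-fire (LIF) neuron, a standard simplification of the Hodgkin-Huxley dynamics. This is a generic sketch with illustrative parameter values, not a model from any particular study:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire (LIF) spiking neuron.

    Integrates dV/dt = (v_rest - V + I) / tau with forward Euler steps;
    when V crosses v_thresh a spike is recorded and V resets, mirroring
    the pulse-based communication described above. All parameter values
    are illustrative.
    """
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (v_rest - v + i_in) / tau   # leaky integration
        if v >= v_thresh:
            spike_times.append(t)             # emit a spike at step t
            v = v_reset                       # membrane potential resets
    return spike_times

# A constant supra-threshold current yields a regular spike train,
# i.e. the stimulus intensity is encoded in spike timing.
spikes = simulate_lif([1.5] * 200)
```

Networks of such units, wired with the feed-forward, lateral, and recurrent connectivity discussed above, are what the amalgamated research studies.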

    Beryllium fastener technology

    A program was conducted to develop, produce, and test optimum-configuration beryllium prestressed and blind fasteners. The program was carried out in four phases: phase 1, feasibility study; phase 2, development; phase 3, evaluation of beryllium alloys; and phase 4, fabrication and testing.

    Evaluation of fasteners and fastener materials for space vehicles Final report, Nov. 1963 - Nov. 1965

    Evaluation of high-strength alloys used as fasteners for space vehicles

    Evaluation of fasteners and fastener materials for space vehicles Annual report, Nov. 1963 - Nov. 1964

    Tensile and double-shear tests at cryogenic and room temperatures, after thermal cycling, of high-strength fasteners and materials used on space vehicles

    Smart Transcription

    The Intelligent Voice SmartTranscript is an interactive HTML5 document that contains the audio, a speech transcription, and the key topics from an audio recording. It is designed to enable quick and efficient review of audio communications by encapsulating the recording with the speech transcript and topics within a single HTML5 file. This paper outlines the rationale for the design of the SmartTranscript user experience. It discusses the difficulties of audio review, the large potential for misinterpretation when transcripts are reviewed in isolation, and how additional diarization and topic-tagging components augment the audio review process.

    An Experimental Analysis of Deep Learning Architectures for Supervised Speech Enhancement

    Recent speech enhancement research has shown that deep learning techniques are very effective in removing background noise. Many deep neural networks have been proposed, showing promising results for improving overall speech perception. The Deep Multilayer Perceptron, Convolutional Neural Networks, and the Denoising Autoencoder are well-established architectures for speech enhancement; however, choosing between different deep learning models has been mainly empirical. Consequently, a comparative analysis is needed between these three architecture types in order to show the factors affecting their performance. In this paper, this analysis is presented by comparing seven deep learning models that belong to these three categories. The comparison includes evaluating the performance in terms of the overall quality of the output speech, using five objective evaluation metrics and a subjective evaluation with 23 listeners; the ability to deal with challenging noise conditions; generalization ability; complexity; and processing time. Further analysis is then provided using two approaches. The first investigates how performance is affected by changing network hyperparameters and the structure of the data, including the Lombard effect; the second interprets the results by visualizing the spectrogram of the output layer of all the investigated models, and the spectrograms of the hidden layers of the convolutional neural network architecture. Finally, a general evaluation is performed for supervised deep learning-based speech enhancement using SWOC analysis, to discuss the technique’s Strengths, Weaknesses, Opportunities, and Challenges. The results of this paper contribute to the understanding of how different deep neural networks perform the speech enhancement task, highlight the strengths and weaknesses of each architecture, and provide recommendations for achieving better performance. This work facilitates the development of better deep neural networks for speech enhancement in the future.
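A common training target in supervised enhancement of the kind compared above is the ideal ratio mask (IRM), which a network (MLP, CNN, or denoising autoencoder) learns to predict from noisy spectrogram features. The sketch below illustrates the idea only; the shapes, the epsilon, and the toy data are illustrative and not taken from the paper:

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag):
    """Ideal ratio mask over spectrogram magnitudes: per time-frequency
    bin, the fraction of energy belonging to clean speech. Values lie
    in [0, 1]; eps avoids division by zero in silent bins.
    """
    eps = 1e-8
    return clean_mag**2 / (clean_mag**2 + noise_mag**2 + eps)

def apply_mask(noisy_mag, mask):
    """Element-wise masking yields an estimate of the clean magnitude."""
    return noisy_mag * mask

# Toy magnitudes (frequency bins x frames); a real system would use
# STFT magnitudes of clean/noise/noisy recordings.
clean = np.abs(np.random.randn(257, 10))
noise = np.abs(np.random.randn(257, 10)) * 0.1
noisy = clean + noise
enhanced = apply_mask(noisy, ideal_ratio_mask(clean, noise))
```

At training time the network sees only `noisy` features and regresses toward the IRM; at inference the predicted mask is applied as in `apply_mask`.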

    A Mixed Reality Approach for dealing with the Video Fatigue of Online Meetings

    Much of the issue with video meetings is the lack of naturalistic cues, together with the feeling of being observed all the time. Video calls take away most body-language cues, but because the person is still visible, the brain still tries to compute that non-verbal language. This means participants are working harder, trying to achieve the impossible, which impacts data retention and can lead to participants feeling unnecessarily tired. This project aims to transform the way online meetings happen by turning off the camera and simplifying the information that our brains need to compute, thus preventing ‘Zoom fatigue’. The immersive solution we are developing, iVXR, consists of cutting-edge augmented reality technology, natural language processing, speech-to-text technologies, and sub-real-time hardware acceleration using high-performance computing.