118 research outputs found

    Multisensor satellite data integration for sea surface wind speed and direction determination

    Techniques to integrate meteorological data from various satellite sensors to yield a global measure of sea surface wind speed and direction for input to the Navy's operational weather forecast models were investigated. The sensors, either already launched or scheduled for launch, were the GOES visible and infrared imaging sensor, the Nimbus-7 SMMR, and the DMSP SSM/I instrument. An algorithm for the extrapolation to the sea surface of wind directions derived from successive GOES cloud images was developed. This wind veering algorithm is relatively simple, accounts for the major physical variables, and appears to represent the best solution achievable with existing data. An algorithm for the interpolation of the scattered observations to a common geographical grid was also implemented. The algorithm is based on a combination of inverse distance weighting and trend surface fitting, and is suited to combining wind data from disparate sources.
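The inverse distance weighting step mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the function name, the power parameter, and the toy wind-speed observations are invented for the example.

```python
import numpy as np

def idw_interpolate(obs_xy, obs_vals, grid_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting: estimate values at grid points
    from scattered observations, weighting nearer points more heavily."""
    # Pairwise distances between each grid point and each observation
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)  # eps avoids division by zero at coincident points
    return (w * obs_vals[None, :]).sum(axis=1) / w.sum(axis=1)

# Three scattered wind-speed observations (x, y) -> m/s
obs_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
obs_vals = np.array([5.0, 7.0, 6.0])
grid_xy = np.array([[0.5, 0.5]])          # one target grid point
print(idw_interpolate(obs_xy, obs_vals, grid_xy))  # [6.] (all three equidistant)
```

A production scheme such as the one described would combine this with trend surface fitting so that large-scale gradients are not flattened by the local averaging.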

    Philosophical Issues in the Addictions

    This is the final version, available from Cambridge University Press via the DOI in this record. The study of addiction throws up a wide range of philosophical issues, connecting with some of the deepest and longest-running debates in ethics, metaphysics, epistemology, and the philosophy of science, to name but a few sub-disciplinary areas. By straddling such a wide range of fields of scientific enquiry, as this Handbook demonstrates, it also throws up numerous conceptual, explanatory, and methodological quandaries between disciplines, of the sort that philosophers have over the years developed many tools to deal with and reconcile. In this chapter, I first summarise some early philosophical treatments of addiction, as well as descriptions of addiction among the ancient philosophers themselves, before considering some of the major philosophical debates with which the study of addiction intersects, and the significance of those debates and intersections for the understanding of addiction in other disciplines.

    A content analysis of job qualifications for business librarians and how they relate to library science curriculums

    This study explores the job qualifications that employers of business librarians seek. Necessary skills, education, and experience are analyzed and discussed. Job titles and locations are quantified. Library science curriculums are analyzed to determine which schools are preparing students with courses most relevant to business library positions. Employers most often list interpersonal skills (41%) as a required or preferred skill in job advertisements, yet this is the area in which schools of information and library science are most often lacking courses. After interpersonal skills, employers look for business-specific skills (20%), searching skills (16%), library skills (14%), and computer/technical skills (9%).

    An Experimental Analysis of Deep Learning Architectures for Supervised Speech Enhancement

    Recent speech enhancement research has shown that deep learning techniques are very effective in removing background noise. Many deep neural networks are being proposed, showing promising results for improving overall speech perception. The Deep Multilayer Perceptron, Convolutional Neural Networks, and the Denoising Autoencoder are well-established architectures for speech enhancement; however, choosing between different deep learning models has been mainly empirical. Consequently, a comparative analysis is needed between these three architecture types in order to show the factors affecting their performance. In this paper, this analysis is presented by comparing seven deep learning models that belong to these three categories. The comparison includes evaluating the performance in terms of the overall quality of the output speech using five objective evaluation metrics and a subjective evaluation with 23 listeners; the ability to deal with challenging noise conditions; generalization ability; complexity; and processing time. Further analysis is then provided using two different approaches. The first approach investigates how performance is affected by changing network hyperparameters and the structure of the data, including the Lombard effect. The second approach interprets the results by visualizing the spectrogram of the output layer of all the investigated models, and the spectrograms of the hidden layers of the convolutional neural network architecture. Finally, a general evaluation of supervised deep learning-based speech enhancement is performed using SWOC analysis, discussing the technique’s Strengths, Weaknesses, Opportunities, and Challenges. The results of this paper contribute to the understanding of how different deep neural networks perform the speech enhancement task, highlight the strengths and weaknesses of each architecture, and provide recommendations for achieving better performance. This work facilitates the development of better deep neural networks for speech enhancement in the future.
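The supervised enhancement systems compared here broadly share one processing skeleton: transform noisy audio to the time-frequency domain, let a network improve the magnitude spectrogram, and resynthesize using the noisy phase. The sketch below shows that skeleton with a placeholder in place of the trained network; the function name and parameters are assumptions, not the paper's code.

```python
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, model, fs=16000, nperseg=512):
    """Generic frequency-domain supervised enhancement skeleton.
    `model` maps noisy magnitude frames to enhanced magnitudes;
    the noisy phase is reused for waveform reconstruction."""
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(Z), np.angle(Z)
    mag_hat = model(mag)                  # a trained DNN would go here
    _, x_hat = istft(mag_hat * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return x_hat

# With an identity "model" the STFT/iSTFT round trip recovers the input,
# confirming the pipeline itself is lossless before any network is added.
x = np.random.default_rng(0).standard_normal(16000)
y = enhance(x, model=lambda m: m)
print(np.allclose(x, y[: len(x)], atol=1e-6))  # True
```

The architectures under comparison differ only in what fills the `model` slot (MLP, CNN, or denoising autoencoder) and in the target it is trained to produce.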

    A Mixed Reality Approach for dealing with the Video Fatigue of Online Meetings

    Much of the issue with video meetings is the lack of naturalistic cues, together with the feeling of being observed all the time. Video calls take away most body language cues, but because the person is still visible, your brain still tries to compute that non-verbal language. This means you are working harder, trying to achieve the impossible. It impacts data retention and can lead to participants feeling unnecessarily tired. This project aims to transform the way online meetings happen by turning off the camera and simplifying the information that our brains need to compute, thus preventing ‘Zoom fatigue’. The immersive solution we are developing, iVXR, consists of cutting-edge augmented reality technology, natural language processing, speech-to-text technologies, and sub-real-time hardware acceleration using high performance computing.

    Mapping and Masking Targets Comparison using Different Deep Learning based Speech Enhancement Architectures

    Mapping and masking targets are both widely used in recent Deep Neural Network (DNN) based supervised speech enhancement. Masking targets have been shown to have a positive impact on the intelligibility of the output speech, while mapping targets are found, in other studies, to generate speech with better quality. However, most of these studies compare the two approaches using the Multilayer Perceptron (MLP) architecture only. With the emergence of new architectures that outperform the MLP, a more generalized comparison is needed between mapping and masking approaches. In this paper, a comprehensive comparison is conducted between mapping and masking targets using four different DNN based speech enhancement architectures, to determine how the performance of the networks changes with the chosen training target. The results show that there is no perfect training target with respect to all the different speech quality evaluation metrics, and that there is a tradeoff between the denoising process and the intelligibility of the output speech. Furthermore, the generalization ability of the networks was evaluated, and it is concluded that the design of the architecture restricts the choice of the training target, because masking targets result in significant performance degradation for the deep convolutional autoencoder architecture.
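The distinction between the two target families can be made concrete on toy spectra. In a mapping setup the network regresses the clean magnitude directly; in a masking setup it predicts a gain between 0 and 1 that is applied to the noisy input. The sketch below uses a simplified magnitude-domain ratio mask under an assumed additive-magnitude model; the commonly used Ideal Ratio Mask (IRM) is instead defined on power spectra, so the shapes here are illustrative only.

```python
import numpy as np

# Toy magnitude spectrograms (257 frequency bins x 10 frames)
rng = np.random.default_rng(1)
clean = np.abs(rng.standard_normal((257, 10)))   # |S|
noise = np.abs(rng.standard_normal((257, 10)))   # |N|
noisy = clean + noise  # simplifying assumption: magnitudes add

# Mapping target: the network learns to output the clean magnitude itself
mapping_target = clean

# Masking target: the network learns a bounded gain; enhancement then
# multiplies the predicted mask with the noisy input
ratio_mask = clean / (clean + noise)             # values in (0, 1)
masked = ratio_mask * noisy

# Under the additive-magnitude assumption the ideal mask recovers clean speech
print(np.allclose(masked, clean))  # True
```

The bounded range of masking targets is one reason they tend to be easier to learn, while mapping targets let the network synthesize energy the mask cannot, which relates to the quality/intelligibility tradeoff reported above.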

    A Comparative Study of Time and Frequency Domain Approaches to Deep Learning based Speech Enhancement

    Deep learning has recently made a breakthrough in the speech enhancement process. Some architectures are based on a time domain representation, while others operate in the frequency domain; however, a systematic comparison of networks working in time and frequency has not been reported in the literature. In this paper, this comparison between time and frequency domain learning for five Deep Neural Network (DNN) based speech enhancement architectures is presented. The comparison covers the evaluation of the output speech using four objective evaluation metrics: PESQ, STOI, LSD, and SSNR increase. Furthermore, the complexity of the five networks was investigated by comparing the number of parameters and processing time for each architecture. Finally, some of the factors that affect learning in time and frequency are discussed. The primary results of this paper show that fully connected based architectures generate speech with low overall perception when learning in the time domain. On the other hand, convolutional based designs give acceptable performance in both frequency and time domains. However, time domain implementations show an inferior generalization ability. Frequency domain based learning was shown to outperform time domain learning when the complex spectrogram is used in the training process. Additionally, feature extraction is also shown to be very effective in DNN based supervised speech enhancement, whether it is performed at the beginning or implicitly by bottleneck layer features. Finally, it was concluded that the choice of the working domain is mainly restricted by the type and design of the architecture used.
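Of the four metrics listed, segmental SNR (SSNR) is the simplest to state: the signal-to-noise ratio is computed per short frame in dB, clipped to a fixed range, and averaged. The sketch below is a common formulation, not the exact implementation used in the paper; the frame length and clipping bounds vary between studies.

```python
import numpy as np

def segmental_snr(clean, enhanced, frame=256, lo=-10.0, hi=35.0):
    """Segmental SNR: frame-wise SNR in dB, clipped to [lo, hi], then averaged.
    Clipping keeps silent or near-perfect frames from dominating the mean."""
    n = (len(clean) // frame) * frame            # drop the ragged tail
    c = clean[:n].reshape(-1, frame)
    e = enhanced[:n].reshape(-1, frame)
    num = (c ** 2).sum(axis=1)                   # signal energy per frame
    den = ((c - e) ** 2).sum(axis=1) + 1e-12     # residual noise energy
    snr = 10.0 * np.log10(num / den + 1e-12)
    return float(np.mean(np.clip(snr, lo, hi)))

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
s_clean = segmental_snr(x, x)                            # identical signals
s_noisy = segmental_snr(x, x + 0.1 * rng.standard_normal(4096))
print(s_clean)   # 35.0 (hits the upper clip)
print(s_noisy)   # roughly 20 dB for this noise level
```

An "SSNR increase" metric, as used above, is then the difference between the enhanced signal's SSNR and the noisy input's SSNR against the same clean reference.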

    DESIGNING PRECINCTS IN THE DENSIFYING CITY – THE ROLE OF PLANNING SUPPORT SYSTEMS

    Australia’s cities face significant social, economic and environmental challenges, driven by population growth and rapid urbanisation. The pressure to increase housing availability will lead to greater levels of high-density and medium-density stock. However, there is enormous political and community pushback against this. One way to address this challenge is to encourage medium-density living solutions through “precinct” scale development. Precinct-scale development has the potential to include additional hard and soft infrastructure that may offset the perceived negatives of higher densities. As part of Australian research into precinct-scale development, and as part of our broader Smart Cities approach, or more specifically City Analytics approach, new digital planning tools – Envision and ESP – have been developed to support scenario planning and design needs. They utilise a data-driven and scenario planning approach underpinned by Geographic Information System (GIS) functionality. We focus on a case study in the City of Blacktown, Western Sydney, New South Wales, Australia. By 2036 Blacktown is forecast to grow to approximately 500,000 people (an increase of over 30%) and 180,000 dwellings. Most new dwellings will be delivered through urban infill. The Blacktown master plan promotes higher density housing, mixed employment uses and continued improvements to the public domain. Our study provides a unique opportunity to implement this broad strategy within a specific case and location. Specifically, this paper provides information on how these digital planning tools supported Blacktown planners in identifying, co-designing and implementing a new approach for precinct level planning. It also presents the results of an evaluation of digital planning tools in the context of the Blacktown case study.

    A Co-design Prototyping Approach for Building a Precinct Planning Tool

    As the world is becoming increasingly urbanized there is a need for more sustainability-oriented planning of our cities. Policy and decision-makers are interested in the use of evidence-based approaches and tools that will support collaborative planning. A number of tools in the domain of spatial planning and decision support systems have been built over the last few decades, but their uptake and use is somewhat limited. In the context of Australia there is significant urban growth occurring across the major cities and a need to provision planners and developers with precinct planning tools to assist in managing infill and the densification of the existing urban fabric in a carbon constrained economy. In this paper we describe the development of a new precinct planning tool known as the Envision Scenario Planner (ESP), which is being applied initially in two cities, Melbourne and Perth, to assist in the urban design and planning of Greyfield sites. To set the scene, we first provide a brief review of the existing state of play of visualization and modelling tools available to urban planners in Australia. The focus of the paper is to introduce an iterative co-design prototyping approach for developing a best practice precinct planning support tool (ESP) from an earlier tool known as ENVISION. The first step of the approach is an exposure workshop with experts to refine the proposed tool workflow and its functionality. Subsequent iterations of the prototype are then exposed to larger audiences for validation and testing. In this paper we describe the process and the preliminary findings in implementing the first phase of this iterative co-design prototyping approach.