
    Real-time interactive speech technology at Threshold Technology, Incorporated

    Basic real-time isolated-word recognition techniques are reviewed. Industrial applications of voice technology are described in chronological order of their development. Future research efforts are also discussed.

    Laboratory Experimentation of Guidance and Control of Spacecraft During On-Orbit Proximity Maneuvers

    The article of record is available from http://www.intechopen.com/books/mechatronic-systems-simulation-modeling-and-control/laboratoryexperimentation-of-guidance-and-control-of-spacecraft-during-on-orbit-proximity-maneuvers

    The traditional spacecraft system is a monolithic structure with a single-mission-focused design and lengthy production and qualification schedules, coupled with enormous cost. Additionally, there is rarely, if ever, any designed preventive maintenance plan or refueling capability. There has been much research in recent years into alternative options. One alternative involves autonomous on-orbit servicing of current or future monolithic spacecraft systems. The U.S. Department of Defense (DoD) embarked on a highly successful venture to prove out such a concept with the Defense Advanced Research Projects Agency's (DARPA's) Orbital Express program. Orbital Express demonstrated all of the enabling technologies required for autonomous on-orbit servicing, including refueling, component transfer, autonomous satellite grappling and berthing, rendezvous, inspection, proximity operations, docking and undocking, and autonomous fault recognition and anomaly handling (Kennedy, 2008). Another potential option involves a paradigm shift from the monolithic spacecraft system to one involving multiple interacting spacecraft that can autonomously assemble and reconfigure. Numerous benefits are associated with autonomous spacecraft assemblies, ranging from the removal of significant intra-modular reliance, which provides for parallel design, fabrication, assembly, and validation processes, to the inherently smaller nature of fractionated systems, which allows each module to be placed into orbit separately on more affordable launch platforms (Mathieu, 2005).

    TDRSS data handling and management system study. Ground station systems for data handling and relay satellite control

    Results of a two-phase study of the Data Handling and Management System (DHMS) are presented. An original baseline DHMS is described, and its estimated costs are presented in detail. The DHMS automates the Tracking and Data Relay Satellite System (TDRSS) ground station's functions and handles both the forward- and return-link user and relay satellite data passing through the station. Direction of the DHMS is effected via a remotely located TDRSS Operations Control Central (OCC). A composite ground station system, a modified DHMS (MDHMS), was conceptually developed. The MDHMS performs both the DHMS and OCC functions. Configurations and costs are presented for systems using minicomputers and midicomputers. It is concluded that an MDHMS should be configured with a combination of the two computer types: the midicomputers provide the system's organizational direction and computational power, and the minicomputers (or interface processors) perform the repetitive data handling functions that relieve the midicomputers of these burdensome tasks.
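
    The split between the two computer classes lends itself to a coordinator/worker sketch. The following Python is a loose illustration under stated assumptions (all names are hypothetical; the study itself specifies no software): a midicomputer-style coordinator hands frames to minicomputer-style interface processors that perform the repetitive per-frame handling.

        # Illustrative sketch only; names are invented for the example.
        from queue import Queue
        from threading import Thread

        frames: Queue = Queue()

        def interface_processor(worker_id: int) -> None:
            # Minicomputer role: repetitive handling (validate, strip, route).
            while True:
                frame = frames.get()
                if frame is None:  # shutdown signal from the coordinator
                    break
                direction, payload = frame
                print(f"IP-{worker_id} handled {direction}-link frame {payload!r}")

        def coordinator(n_workers: int) -> None:
            # Midicomputer role: organizational direction, not per-frame work.
            for i in range(4):
                frames.put(("forward" if i % 2 == 0 else "return", f"data-{i}"))
            for _ in range(n_workers):
                frames.put(None)

        workers = [Thread(target=interface_processor, args=(i,)) for i in range(2)]
        for w in workers:
            w.start()
        coordinator(n_workers=2)
        for w in workers:
            w.join()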

    System configuration and executive requirements specifications for reusable shuttle and space station/base

    System configuration and executive requirements specifications for reusable shuttle and space station/base.

    An Information-Centric Communication Infrastructure for Real-Time State Estimation of Active Distribution Networks

    The evolution toward emerging active distribution networks (ADNs) can be realized via a real-time state estimation (RTSE) application facilitated by the use of phasor measurement units (PMUs). A critical challenge in deploying PMU-based RTSE applications at large scale is the lack of a scalable and flexible communication infrastructure for the timely (i.e., sub-second) delivery of the high volume of synchronized and continuous synchrophasor measurements. We address this challenge by introducing a communication platform called C-DAX, based on the information-centric networking (ICN) concept. With a topic-based publish-subscribe engine that decouples data producers and consumers in time and space, C-DAX enables efficient synchrophasor measurement delivery, as well as flexible and scalable (re)configuration of PMU data communication for seamless full observability of power conditions in complex and dynamic scenarios. Based on the derived set of requirements for supporting PMU-based RTSE in ADNs, we design the ICN-based C-DAX communication platform, together with a jointly optimized physical network resource provisioning strategy, in order to enable agile PMU data communications in near real time. In this paper, C-DAX is validated via a field-trial implementation deployed over a sample feeder in a real distribution network; it is also evaluated through simulation-based experiments using a large set of real medium-voltage grid topologies currently operating live in the Netherlands. This is the first work that applies emerging communication paradigms such as ICN to smart grids while maintaining the required hard real-time data delivery, as demonstrated through field trials at national scale. As such, it aims to become a blueprint for the application of ICN-based general-purpose communication platforms to ADNs.
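
    The topic-based publish-subscribe decoupling at the heart of C-DAX can be illustrated in a few lines of Python. This is a sketch of the general concept under invented names, not the C-DAX API: PMUs publish to a named topic, and state estimators subscribe without either side knowing the other.

        # Minimal topic-based publish-subscribe sketch; not the C-DAX API.
        from collections import defaultdict
        from typing import Callable

        class Broker:
            def __init__(self) -> None:
                self._subs: dict[str, list[Callable]] = defaultdict(list)

            def subscribe(self, topic: str, handler: Callable) -> None:
                self._subs[topic].append(handler)

            def publish(self, topic: str, message: dict) -> None:
                # The producer knows only the topic name, never the
                # subscribers' endpoints: decoupling in space.
                for handler in self._subs[topic]:
                    handler(message)

        broker = Broker()
        # A state estimator subscribes to a feeder's synchrophasor stream.
        broker.subscribe("feeder1/synchrophasors", lambda m: print("RTSE got", m))
        # A PMU publishes one timestamped phasor measurement.
        broker.publish("feeder1/synchrophasors",
                       {"t": 0.02, "v_mag": 1.01, "v_ang": -0.3})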

    Autonomous Vehicle Control

    A practical knowledge base in the emerging field of Robotics was developed and used to create a framework for further experiments. The framework was designed such that modular parts could be replaced, allowing for future development without reinventing the wheel. To prove the framework, a semi-autonomous robot was implemented, including stereo vision sensors, an inertial navigation system, and a simultaneous localization and mapping algorithm.
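
    The replaceable-module design can be sketched as interface-based programming. The Python below is a hypothetical illustration, not the thesis code: two localization modules implement one interface, so the control loop is untouched when one is swapped for the other.

        # Sketch of the replaceable-module idea; classes are invented examples.
        from abc import ABC, abstractmethod

        class Localizer(ABC):
            @abstractmethod
            def pose(self) -> tuple:
                """Return the robot's (x, y, heading) estimate."""

        class InertialLocalizer(Localizer):
            def pose(self) -> tuple:
                return (0.0, 0.0, 0.0)  # stub: integrate IMU readings here

        class SlamLocalizer(Localizer):
            def pose(self) -> tuple:
                return (1.2, 3.4, 0.1)  # stub: pose from map matching here

        def control_step(localizer: Localizer) -> None:
            # The control loop depends only on the interface, so either
            # module can be swapped in without rewriting this code.
            x, y, heading = localizer.pose()
            print(f"at ({x:.1f}, {y:.1f}), heading {heading:.2f} rad")

        control_step(InertialLocalizer())
        control_step(SlamLocalizer())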

    Parallel optimization algorithms for high performance computing : application to thermal systems

    The need for optimization is present in every field of engineering. Moreover, applications requiring a multidisciplinary approach in order to take a step forward are increasing. This leads to the need to solve complex optimization problems that exceed the capacity of the human brain or intuition. A standard way of proceeding is to use evolutionary algorithms, among which genetic algorithms hold a prominent place. These are characterized by their robustness and versatility, as well as their high computational cost and low convergence speed. Many optimization packages are available under free software licenses and are representative of the current state of the art in optimization technology. However, the ability of optimization algorithms to adapt to massively parallel computers while reaching satisfactory efficiency levels is still an open issue. Even packages suited for multilevel parallelism encounter difficulties when dealing with objective functions involving long and variable simulation times. This variability is common in Computational Fluid Dynamics and Heat Transfer (CFD & HT), nonlinear mechanics, etc., and is nowadays a dominant concern for large-scale applications. Current research on improving the performance of evolutionary algorithms is mainly focused on developing new search algorithms. Nevertheless, there is a vast body of well-performing sequential algorithms suitable for implementation on parallel computers. The gap to be covered is efficient parallelization. Moreover, advances in the research of new search algorithms and of efficient parallelization are additive, so the enhancement of current state-of-the-art optimization software can be accelerated if both fronts are tackled simultaneously. The motivation of this Doctoral Thesis is to take a step forward towards the successful integration of Optimization and High Performance Computing capabilities, which has the potential to boost technological development by providing better designs, shortening product development times, and minimizing the required resources. After conducting a thorough state-of-the-art study of the mathematical optimization techniques available to date, a generic mathematical optimization tool has been developed, with a special focus on applying the library to the field of Computational Fluid Dynamics and Heat Transfer (CFD & HT). Then the main shortcomings of the standard parallelization strategies available for genetic algorithms and similar population-based optimization methods have been analyzed. Computational load imbalance has been identified as the key factor degrading the optimization algorithm's scalability (i.e., parallel efficiency) when the average makespan of a batch of individuals is greater than the average time required by the optimizer for performing inter-processor communications. It occurs because processors are often unable to finish the evaluation of their queues of individuals simultaneously and need to be synchronized before the next batch of individuals is created. Consequently, the computational load imbalance is translated into idle time on some processors. Several load balancing algorithms have been proposed and exhaustively tested; they are extendable to any other population-based optimization method that needs to synchronize all processors after the evaluation of each batch of individuals. Finally, a real-world engineering application that consists of optimizing the refrigeration system of a power electronics device has been presented as an illustrative example in which the use of the proposed load balancing algorithms reduces the simulation time required by the optimization tool.
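
    One standard remedy for that idle time can be sketched briefly: assign the longest evaluations first, each to the currently least-loaded processor, so the per-processor queues finish at roughly the same time. The Python below is a generic longest-processing-time (LPT) heuristic written for illustration, not necessarily one of the algorithms proposed in the thesis, and it assumes an estimate of each individual's evaluation time is available.

        # Generic LPT load-balancing sketch for batch evaluation of a population.
        import heapq

        def balance(est_times: list, n_procs: int) -> list:
            """Assign individuals to processors so queue totals stay even."""
            heap = [(0.0, p) for p in range(n_procs)]  # (current load, processor)
            heapq.heapify(heap)
            queues = [[] for _ in range(n_procs)]
            # Place the longest evaluations first, each on the least-loaded one.
            for i in sorted(range(len(est_times)), key=lambda i: -est_times[i]):
                load, p = heapq.heappop(heap)
                queues[p].append(i)
                heapq.heappush(heap, (load + est_times[i], p))
            return queues

        # Variable CFD-like makespans: the two long jobs no longer pile up on
        # one processor while the other sits idle waiting to synchronize.
        print(balance([9.0, 1.0, 1.2, 8.5, 1.1, 1.3], n_procs=2))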

    Nascom System Development Plan: System Description, Capabilities and Plans

    The NASA Communications (Nascom) System Development Plan (NSDP), reissued annually, describes the organization of Nascom, how it obtains communication services, its current systems, its relationship with other NASA centers and International Partner Agencies, some major spaceflight projects that generate significant operational communication support requirements, and major Nascom projects in various stages of development or implementation.

    Space Station data system analysis/architecture study. Task 1: Functional requirements definition, DR-5

    The initial task in the Space Station Data System (SSDS) Analysis/Architecture Study is the definition of the functional and key performance requirements for the SSDS. The SSDS is the set of hardware and software, both on the ground and in space, that provides the basic data management services for Space Station customers and systems. The primary purpose of the requirements development activity was to provide a coordinated, documented requirements set as a basis for the system definition of the SSDS and for other subsequent study activities. These requirements should also prove useful to other Space Station activities in that they indicate the scope of the information services and systems that will be needed in the Space Station program. The major results of the requirements development task are as follows: (1) identification of a conceptual topology and architecture for the end-to-end Space Station Information System (SSIS); (2) development of a complete set of functional requirements and design drivers for the SSIS; (3) development of functional requirements and key performance requirements for the SSDS; and (4) definition of an operating concept for the SSIS. The operating concept was developed from both a Space Station payload customer perspective and an operator perspective in order to allow a requirements practicality assessment.

    Use of multi-GPU systems for large FFTs: with applications in ultrasound simulations

    Ultrasound simulations are applications that are both computation- and communication-intensive. With better performance, implementations of these can be used in designing new ultrasound probes, developing better signal processing techniques, training new ultrasonographers, planning treatments, and many other uses [11]. The pseudo-spectral technique can be used effectively to express the wave-propagation model used in these simulations, and is characterised by its use of the Fast Fourier Transform (FFT). The FFT can account for over half of the time spent by ultrasound simulations, with the remainder consisting of embarrassingly parallel arithmetic [28]. The use of a Graphics Processing Unit (GPU) for general computations like the FFT has become ubiquitous, with favourable performance. The current trend in the design of the Central Processing Unit (CPU) of most systems has seen a shift from single-core to multi-core processing, with these now being assembled into multi-socket configurations. GPUs are already massively multi-core processors, typically with three or four times as many cores; the question remains whether GPUs will follow a similar trend and incorporate multiple devices in individual sockets. The purpose of the work in this thesis is to assess the viability of multi-GPU systems for ultrasound simulations in terms of cost and performance compared to other system designs that offer similar computational resources. Current machine hardware is capable of supporting multiple GPUs through peripheral devices and offers a glimpse of the potential of future machines; however, relatively little work has been reported on the use of such systems for ultrasound simulations and the FFT algorithm. In this thesis, to address this issue, we benchmark and model the device-to-device communication potential of an existing multi-GPU system. Four different methods are considered, namely: via the CPU, pointer swapping, hybrid-staged, and kernel-based. The results reveal that the pointer-swapping and kernel-based methods of managing communication can be up to twice as efficient as the other methods. The methods for communication identified in the benchmarks are then used as the basis for a number of important generic communication functions, which are in turn used to implement a distributed 3D FFT algorithm as required by the ultrasound simulation. The multi-GPU distributed 3D FFT with four GPUs was found to be up to 18% faster than an existing FFT implementation on a six-core CPU. This multi-GPU distributed 3D FFT implementation is then used in an ultrasound simulation as a proof-of-concept case study for the thesis. By overlapping communication and computation between the CPU and GPU resources, a speed-up of 8% is observed.
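
    The structure of such a distributed 3D FFT can be sketched in single-process NumPy under simplifying assumptions: each "device" transforms the x-y planes of its own slab, an exchange gathers the slabs (standing in for the device-to-device communication above), and 1D FFTs along z complete the transform. This illustrates the general slab-decomposition technique, not the thesis implementation.

        # Single-process NumPy sketch of a slab-decomposed 3D FFT; the
        # concatenation stands in for the inter-device exchange.
        import numpy as np

        def slab_fft3(x: np.ndarray, n_dev: int) -> np.ndarray:
            nz = x.shape[2]
            assert nz % n_dev == 0, "z-extent must split evenly across devices"
            sz = nz // n_dev
            # Stage 1: each 'device' FFTs the x-y planes of its own z-slab.
            slabs = [np.fft.fft2(x[:, :, d * sz:(d + 1) * sz], axes=(0, 1))
                     for d in range(n_dev)]
            # Exchange: in a real multi-GPU run this is the all-to-all transfer.
            y = np.concatenate(slabs, axis=2)
            # Stage 2: finish with 1D FFTs along z.
            return np.fft.fft(y, axis=2)

        a = np.random.rand(8, 8, 8)
        assert np.allclose(slab_fft3(a, n_dev=4), np.fft.fftn(a))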