
    Real-time computer data system for the 40- by 80-foot wind tunnel facility at Ames Research Center

    The background material and operational concepts of a computer-based system for an operating wind tunnel are described. An on-line real-time computer system was installed in a wind tunnel facility to gather static and dynamic data. The computer system monitored aerodynamic forces and moments, including periodic and quasi-periodic components, and displayed and plotted computed results in real time. The total system comprises several off-the-shelf, interconnected subsystems that are linked to a large data processing center. The system includes a central processor unit with 32,000 24-bit words of core memory, a number of standard peripherals, and several special processors; namely, a dynamic analysis subsystem, a 256-channel PCM-data subsystem and ground station, a 60-channel high-speed data acquisition subsystem, a communication link, and static force and pressure subsystems. The role of the test engineer as a vital link in the system is also described.
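
    As a rough software analogue of the acquisition pattern described above, the sketch below polls the PCM and high-speed subsystems at a fixed period and leaves a slot for the force/moment reduction. All names are hypothetical, and the actual system performed this work in dedicated hardware processors rather than a polling loop.

```python
# Hypothetical software analogue of the data system's polling behaviour;
# the real facility used dedicated hardware subsystems for this.
import time

SUBSYSTEM_CHANNELS = {
    "pcm": 256,         # 256-channel PCM-data subsystem
    "high_speed": 60,   # 60-channel high-speed data acquisition subsystem
}

def read_channels(subsystem: str, count: int) -> list[float]:
    """Stand-in for a hardware read: one sample per channel."""
    return [0.0] * count

def monitor(period_s: float = 0.1) -> None:
    """Gather one frame from every subsystem each period, then reduce it."""
    while True:
        frame = {name: read_channels(name, n)
                 for name, n in SUBSYSTEM_CHANNELS.items()}
        # Force/moment reduction and real-time display would consume
        # `frame` here; the reduction equations are omitted.
        time.sleep(period_s)
```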

    A high level disc controller

    Includes bibliographical references. Since the emergence of the digital computer in the 1940s, computer architecture has been largely dictated by the requirements of mathematicians and scientists. Trends have thus been towards processing data as quickly and as accurately as possible. Even now, in the age of large-scale integration culminating in the microprocessor, internal structures remain committed to these ideals. This is not surprising, since the main users of computers are involved with data processing and scientific computing. The process control engineer, who turned to the digital computer for the support he required in his ever-increasing drive towards automation, has therefore had to use these generalized computing structures. His basic requirements, however, are somewhat different from those of the data processing manager or the scientific user. He has to contend with the inherent problem of synchronizing the computer to the real-world timing of his plants. He is far more interested in the response time of the computer to an external occurrence than he is in sheer 'number-crunching' power. Despite the trends in process control towards distributed computing, even the most advanced systems require a relatively large central processor. This processor is called upon to carry out a wide variety of different tasks, most of which are 'requested' by external events. Multiprogramming facilities are therefore essential and are normally effected by means of a real-time operating system. One of the prime objectives of such a real-time operating system is to permit the various programs to be run at the required time on some priority basis. In many cases these routines can be large, thus requiring access to backing storage. Traditionally the backing store, implemented by a moving-head disc for example, is under the control of the real-time operating system. This can have serious consequences. If real-time requirements are to be met, transfers to and from the disc must be made as rapidly as possible. Also, in initiating and controlling such transfers, the computer is using time which could otherwise be available for useful, process-orientated work. With the rapid advancement of digital technology, the time is clearly right to examine our present computer architecture. This dissertation explores the problem area previously discussed: the control over the bulk storage device in a real-time process-control computer system. It is proposed that a possible solution lies in the development of an intelligent backing-store controller. This essentially combines the conventional low-level backing store interface with a special-purpose processor which handles all file routines. The dissertation demonstrates how such a structure can be implemented using current technology, and evaluates its inherent advantages.
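
    To make the proposal concrete, here is a minimal sketch of the host side of such an intelligent controller, assuming a hypothetical command-queue interface: the host enqueues a file-level request and returns immediately, leaving seeks, transfers and file bookkeeping to the controller's own processor.

```python
# Host side of a hypothetical intelligent backing-store controller:
# the host posts file-level commands and carries on with process work.
from dataclasses import dataclass
from queue import Queue
from typing import Callable

@dataclass
class FileRequest:
    op: str                              # "read" or "write"
    filename: str
    buffer: bytearray                    # data area shared with the controller
    done: Callable[[bytearray], None]    # completion callback

command_queue: "Queue[FileRequest]" = Queue()

def load_program(name: str, on_ready: Callable[[bytearray], None]) -> None:
    """Issue one high-level request and return immediately; the
    controller's own processor handles seeks, transfers and file routines."""
    buf = bytearray(64 * 1024)
    command_queue.put(FileRequest("read", name, buf, on_ready))
```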

    Advances in characterisation, calibration and data processing speed of optical coherence tomography systems

    This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used for producing OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is a computationally intensive process. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time. These techniques include standard processing methods, which comprise a set of algorithms to process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, an extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the making of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
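
    The "standard processing" chain mentioned above (background subtraction, resampling to linear wavenumber, FFT to a depth profile) is well established for FD-OCT; the NumPy sketch below is a generic illustration of it, not the thesis's code, and swapping numpy for cupy is one common way to move the same steps onto a GPU.

```python
# Generic FD-OCT A-scan pipeline (not the thesis's code); replace numpy
# with cupy to run the identical steps on a GPU.
import numpy as np

def to_ascan(spectrum: np.ndarray, background: np.ndarray,
             k_positions: np.ndarray) -> np.ndarray:
    """spectrum: raw interference fringe from the detector;
    background: reference spectrum; k_positions: fractional sample
    positions giving a uniform wavenumber (k) grid."""
    fringe = spectrum - background                   # remove the DC term
    n = np.arange(len(fringe))
    linear_k = np.interp(k_positions, n, fringe)     # resample to linear k
    windowed = linear_k * np.hanning(len(linear_k))  # suppress sidelobes
    depth = np.fft.fft(windowed)                     # fringe -> depth profile
    half = len(depth) // 2                           # keep positive depths
    return 20 * np.log10(np.abs(depth[:half]) + 1e-12)
```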

    Efficient image processing on embedded systems for unmanned aerial vehicles

    Department of Mechanical Engineering. Unmanned aerial vehicles (UAVs) are widely used in various areas such as exploration, transportation and rescue activity due to their light weight, low cost, high mobility and intelligence. This intelligent system consists of highly integrated and embedded systems along with a microprocessor that performs specific tasks by computing algorithms and processing data. In particular, image processing is one of the main core technologies for handling important tasks such as target tracking, positioning and visual servoing using a visual system. However, it often imposes a heavy computational burden, and a micro PC controller is typically used in addition to the flight computer to process image data. The performance of such a controller is limited by constraints on power, size and weight. Therefore, efficient image processing techniques are needed that consider computing load and hardware resources for real-time operation on embedded systems. The objective of the thesis research is to develop an efficient image processing framework on embedded systems, utilizing neural networks and various optimized computation techniques, to balance computing speed and resource usage against accuracy. Image processing techniques have been proposed and tested for managing computing resources and running high-performance missions on embedded systems. Commercially available graphics processing units (GPUs) can be used for parallel computing to accelerate computation. Multiple cores within the central processing unit (CPU) are used for multi-threading during data uploads and downloads between the CPU and the GPU. In order to minimize computing load, several methods have been proposed. The first method is visualization of a convolutional neural network (CNN) that can perform both localization and detection simultaneously. The second is region proposal for the input area of the CNN through simple image processing, which helps the algorithm avoid full-frame processing. Finally, surplus computing resources can be saved by controlling transient performance, for example by limiting the frame rate (FPS). These optimization methods have been experimentally applied to a ground vehicle and quadrotor UAVs, and it was verified that the developed methods optimize processing in an embedded environment by saving CPU and memory resources. In addition, they can support various tasks such as object detection, path planning and obstacle avoidance. Through these optimizations and algorithms, a number of improvements for the embedded system are shown compared to existing approaches. Considering the characteristics of the system when transplanting the various useful algorithms to an embedded system, the methods developed in this research can be further applied to various practical applications.
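
    Of the three methods, the FPS limitation is the simplest to illustrate. The hypothetical loop below caps the processing rate and sleeps away the surplus budget, freeing CPU/GPU time for other onboard tasks; the capture and detect callables are stand-ins, not the thesis's interfaces.

```python
# Hypothetical FPS limiter: cap the processing rate and sleep away the
# surplus, releasing CPU/GPU time for other onboard tasks.
import time
from typing import Callable

def run_pipeline(capture: Callable, detect: Callable,
                 max_fps: float = 10.0) -> None:
    period = 1.0 / max_fps
    while True:
        start = time.monotonic()
        frame = capture()          # stand-in for the camera driver
        detect(frame)              # e.g. CNN run on a proposed region only
        surplus = period - (time.monotonic() - start)
        if surplus > 0:            # frame finished early: yield the budget
            time.sleep(surplus)
```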

    Household Electrical Power Meter Using Embedded RFID With Wireless Sensor Network Platform

    Smart monitoring of utility electrical power meters plays an important role in energy-awareness schemes. The aim of this study is to develop machine-to-machine (M2M) communication by embedding active RFID technology into a wireless mesh sensor network (WMSN) platform with heterogeneous data transfer to monitor and identify household electrical consumption. A household electrical power meter is designed with Zigbee-Pro as the RF transceiver module, with WSN functionality to communicate wirelessly between the RFID tag and the RFID reader. The development of this project involves three main parts in the proposed RFID communication system: the EPRFID (embedded RFID module with household electrical power meter), the reader, and application software at a workstation. The EPRFID module is designed to take its power supply from a household electric power meter, with a power management circuit developed inside the proposed module. It comprises voltage and current sensors that precisely sense the actual status of the resident's electrical power consumption from the appliance loads. The data signal is transferred directly to the central processing unit (CPU) to precisely calculate the current power consumption. The CPU is the central part that communicates and commands all defined operations. A real-time clock generates the local time; it is combined into the memory module package, which is used to record the total accumulated power. Simultaneously, the display unit shows the monitored data such as local time, current, voltage and power values. Lastly, the application software on a personal computer (PC) was designed using Microsoft Visual C#. It shows the received information from the end meter devices, such as tag ID, send and receive times, delay, accumulated power, RSSI and number of received bytes. In this research, the EPRFID prototype was intensively examined as follows: voltage and current calibration versus distance ranges; transmitted power calibration; energy analysis; energy trade-offs based on measured DC characteristics; anti-collision performance; radiation pattern; maximum read range; tag collection and latency delay time; and throughput evaluation. The experimental results indicated that the proposed EPRFID prototype worked wirelessly with a tolerable power consumption within the range of 1.61 W to 1.69 W. This model facilitates daily-life processes, saves time, and reduces operating cost by lowering manpower requirements and eliminating human error in the information system. Thus, improving M2M communication can provide higher reliability in the communication system, since the current development focuses on local control strategies. In addition, this study can serve as a guideline for electrical power utility companies and consumers on alternatives for electricity consumption billing in the future.
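
    The metering arithmetic described above (sense voltage and current, compute power, accumulate energy for the display and the RFID payload) can be sketched as follows; the sensor read functions are hypothetical stand-ins, and a unity power factor is assumed for simplicity.

```python
# Sketch of the metering arithmetic; read_voltage/read_current are
# hypothetical sensor reads, and unity power factor is assumed.
import time
from typing import Callable

def meter_loop(read_voltage: Callable[[], float],
               read_current: Callable[[], float],
               interval_s: float = 1.0):
    energy_wh = 0.0
    while True:
        v = read_voltage()                       # volts
        i = read_current()                       # amps
        power_w = v * i                          # instantaneous power
        energy_wh += power_w * interval_s / 3600.0
        # This record would feed the display unit and the RFID payload.
        yield {"voltage": v, "current": i,
               "power_w": power_w, "energy_wh": energy_wh}
        time.sleep(interval_s)
```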

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.
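
    Purely as an illustrative reading of that feature list, the hypothetical configuration sketch below restates the proposed features as explicit switches; every field name is invented here, not drawn from the study.

```python
# Hypothetical restatement of the proposed Ground Information System
# features as a configuration object; field names are invented for
# illustration and do not come from the study.
from dataclasses import dataclass

@dataclass
class GroundInfoSystemConfig:
    unified_front_end_to_user: bool = True     # one architecture end to end
    open_systems_standards: bool = True        # interoperability
    dsn_produces_level0: bool = True           # DSN production of level 0 data
    deliver_level0_from_dscc: bool = True      # optional delivery from the complex
    telemetry_processor_per_receiver: bool = True
    access_security: bool = True               # against unauthorized access/errors
    automated_monitor_and_control: bool = True
```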

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called the Mobile Digital Companion, which provides support for handling multimedia applications energy-efficiently. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous autonomous programmable modules, each providing an energy-efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.
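
    As a toy sketch of the two ideas named above (modules that stay powered only while working, and a switch that delivers data module-to-module instead of through the CPU), the hypothetical classes below illustrate the pattern; none of the names reflect the Companion's real interfaces.

```python
# Toy model of the Companion's pattern: modules powered only while busy,
# and a switch delivering data module-to-module. Names are illustrative.
class Module:
    def __init__(self, name: str):
        self.name = name
        self.powered = False

    def handle(self, data: bytes) -> None:
        self.powered = True        # wake only while work is pending
        # ... dedicated, optimised processing of `data` ...
        self.powered = False       # drop back to a low-energy state

class Switch:
    """Routes traffic directly between modules, avoiding CPU-side copies."""
    def __init__(self) -> None:
        self.modules: dict[str, Module] = {}

    def connect(self, module: Module) -> None:
        self.modules[module.name] = module

    def send(self, dst: str, data: bytes) -> None:
        self.modules[dst].handle(data)   # deliver without a CPU relay
```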

    Too far ahead of its time: Barclays, Burroughs and real-time banking

    The historiography of computing has until now considered real-time computing in banking as predicated on the possibilities of networked ATMs in the 1970s. This article reveals a different story. It exposes the failed bid by Barclays and Burroughs to make real time a reality for British banking in the 1960s.

    Operating-system support for distributed multimedia

    Multimedia applications place new demands upon processors, networks and operating systems. While some network designers, through ATM for example, have considered revolutionary approaches to supporting multimedia, the same cannot be said for operating systems designers. Most work is evolutionary in nature, attempting to identify additional features that can be added to existing systems to support multimedia. Here we describe the Pegasus project's attempt to build an integrated hardware and operating system environment from the ground up specifically targeted towards multimedia.