
    Mark 4A DSN receiver-exciter and transmitter subsystems

    The present configuration of the Mark 4A DSN Receiver-Exciter and Transmitter Subsystems is described. Functional requirements and key characteristics are given to show the differences in capability required by the Networks Consolidation task for combined High Earth Orbiter and Deep Space Network tracking support.

    Towards An Efficient Cloud Computing System: Data Management, Resource Allocation and Job Scheduling

    Cloud computing is an emerging technology in distributed computing, and it has proved to be an effective infrastructure for providing services to users. The cloud is developing rapidly and faces many challenges. One challenge is to build a cost-effective data management system that can ensure high data availability while maintaining consistency. Another is efficient resource allocation that ensures high resource utilization together with high SLO (Service Level Objective) availability. Scheduling, referring to the set of policies that control the order of the work performed by a computer system, for high throughput is a third challenge. In this dissertation, we study how to manage data and improve data availability while reducing cost (i.e., consistency maintenance cost and storage cost); how to efficiently manage resources for processing jobs and increase resource utilization with high SLO availability; and how to design an efficient scheduling algorithm that provides high throughput and low overhead while satisfying the demands on job completion times. Replication is a common approach to enhancing data availability in cloud storage systems. Previously proposed replication schemes cannot effectively handle both correlated and non-correlated machine failures while increasing data availability with limited resources. Schemes for correlated machine failures must create a constant number of replicas for each data object, which neglects diverse data popularities and cannot use the available resources to maximize the expected data availability. These schemes also neglect the consistency maintenance cost and the storage cost caused by replication. It is critical for cloud providers to maximize data availability, and hence minimize SLA (Service Level Agreement) violations, while minimizing the cost of replication, in order to maximize revenue.
In this dissertation, we build a nonlinear programming model to maximize data availability under both types of failures and minimize the cost caused by replication. Based on the model's solution for the replication degree of each data object, we propose a low-cost multi-failure resilient replication scheme (MRR). MRR can effectively handle both correlated and non-correlated machine failures, considers data popularities to enhance data availability, and also tries to minimize consistency maintenance and storage costs. In current clouds, providers still need to reserve resources to allow users to scale on demand. The capacity offered by cloud offerings comes in the form of pre-defined virtual machine (VM) configurations. This incurs resource wastage and results in low resource utilization when users actually consume much less resource than the VM capacity. Existing works either reallocate the unused resources with no Service Level Objectives (SLOs) for availability (here, availability refers to the probability that an allocated resource remains operational and accessible during the validity of the contract [CarvalhoCirne14]), or consider SLOs when reallocating the unused resources for long-running service jobs. The latter approach increases the allocated resource whenever it detects an SLO violation in order to achieve the SLO in the long term, neglecting the frequent fluctuations of jobs' resource requirements in real-time applications, especially for short-term jobs that require fast responses and fast resource-allocation decisions. Thus, this approach cannot fully utilize the resources, because it cannot quickly adjust the resource allocation strategy to deal with the fluctuations of jobs' resource requirements.
Moreover, the previous opportunistic resource allocation approach aims at providing long-term availability SLOs with good QoS for long-running jobs, ensuring that jobs finish within weeks or months by providing slightly degraded resources with moderate availability guarantees. However, it ignores deadline constraints when defining Quality of Service (QoS) for short-lived jobs that require online responses in real-time applications, so it cannot truly guarantee QoS and long-term availability SLOs. To overcome the drawbacks of previous works, we explicitly account for the fluctuations of unused resources caused by bursts in jobs' resource demands, and present a cooperative opportunistic resource provisioning (CORP) scheme to dynamically allocate resources to jobs. CORP leverages the complementarity of jobs' requirements on different resource types and uses job packing to reduce resource wastage and increase resource utilization. An increasing number of large-scale data analytics frameworks move towards larger degrees of parallelism, aiming at high throughput. Scheduling, which assigns tasks to workers, and preemption, which suspends low-priority tasks to run high-priority tasks, are two important functions in such frameworks. There are many existing works on scheduling and preemption in the literature that aim at high throughput. However, previous works do not substantially consider task dependency when increasing throughput in scheduling or preemption, and considering dependency is crucial to increasing overall throughput. Besides, extensive task evictions for preemption increase context switches, which may decrease throughput. To address these problems, we propose an efficient scheduling system, Dependency-aware Scheduling and Preemption (DSP), to achieve high throughput in scheduling and preemption.
First, we build a mathematical model that minimizes the makespan while accounting for task dependency, and derive the target workers for tasks that minimize the makespan; second, we use task dependency information to determine tasks' priorities for preemption; finally, we present a probabilistic preemption method to reduce the number of preemptions while satisfying the demands on job completion times. We conduct trace-driven simulations on a real cluster and real-world experiments on Amazon S3/EC2 to demonstrate the efficiency and effectiveness of our proposed system in comparison with other systems. The experimental results show the superior performance of our proposed system. In the future, we will further consider data update frequency to reduce consistency maintenance cost, and we will consider the effects of nodes joining and leaving. We will also consider the energy consumption of machines and design an optimal replication scheme that improves data availability while saving power. For resource allocation, we will consider using a greedy approach for deep learning to reduce the computation overhead caused by the deep neural network. We will additionally consider the heterogeneity of jobs (i.e., short jobs and long jobs), and use a hybrid resource allocation strategy to provide SLO availability customization for different job types while increasing resource utilization. For scheduling, we aim to handle tasks with partial dependency and worker failures, and to make DSP fully distributed to increase its scalability. Finally, we plan to use different workloads and real-world experiments to fully test the performance of our methods and make our preliminary system design more mature.
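As a rough illustration of the trade-off MRR's model navigates between replication degree and availability, consider the simplest case of independent (non-correlated) machine failures. The formula, function names, and failure probability below are illustrative assumptions, not the dissertation's actual nonlinear programming model:

```python
# Sketch: with r replicas and independent per-machine failure probability f,
# a data object survives unless all r replicas fail, so availability = 1 - f**r.
# All numbers here are invented for illustration.

def availability(r: int, f: float) -> float:
    """Probability that at least one of r replicas survives."""
    return 1.0 - f ** r

def min_replicas(target: float, f: float) -> int:
    """Smallest replication degree meeting an availability target."""
    r = 1
    while availability(r, f) < target:
        r += 1
    return r

print(min_replicas(0.999, 0.05))  # 3, since 1 - 0.05**3 = 0.999875 >= 0.999
```

A popularity-aware scheme like MRR would, in effect, solve a constrained version of this for every object at once, giving hot objects higher degrees under a global resource budget.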

    The multifrequency Siberian Radioheliograph

    The 10-antenna prototype of the multifrequency Siberian Radioheliograph is described. The prototype consists of four parts: antennas with broadband front-ends, analog back-ends, digital receivers, and a correlator. The prototype antennas are mounted on the outermost stations of the Siberian Solar Radio Telescope (SSRT) array. The signal from each antenna is transmitted to a workroom over an analog fiber-optic link laid in an underground tunnel. After mixing, all signals are digitized and processed by the digital receivers before the data are transmitted to the correlator. The digital receivers and the correlator are accessible over the LAN. The frequency range of the prototype is 4 to 8 GHz. Currently the frequency-switching observing mode is used. The prototype data include both circular polarizations at a number of frequencies given by a list. This prototype is the first stage of the multifrequency Siberian Radioheliograph development. The radioheliograph is planned to consist of 96 antennas occupying stations of the West-East-South subarray of the SSRT, with construction to be completed in autumn 2012. We plan to reach a brightness temperature sensitivity of about 100 K for a snapshot image, a spatial resolution of up to 13 arcseconds at 8 GHz, and a polarization measurement accuracy of a few percent. First results of observations of solar microwave bursts with the 10-antenna prototype are presented, and the prototype's ability to estimate source sizes and locations at different frequencies is discussed.
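The digitize-then-correlate chain described above can be pictured with a toy zero-lag correlator. The code below is a schematic assumption only (pure Python, synthetic Gaussian signals), not the radioheliograph's actual digital receiver or correlator logic:

```python
# Toy correlator stage: two antennas see the same sky signal plus independent
# receiver noise; their accumulated product stands out against an
# uncorrelated stream. Signal model and noise levels are invented.
import random

def correlate(x, y):
    """Zero-lag cross-correlation (mean accumulated product) of two streams."""
    return sum(a * b for a, b in zip(x, y)) / len(x)

random.seed(0)
sky = [random.gauss(0, 1) for _ in range(4096)]          # common sky signal
ant1 = [s + random.gauss(0, 0.3) for s in sky]           # antenna 1 + noise
ant2 = [s + random.gauss(0, 0.3) for s in sky]           # antenna 2 + noise
noise = [random.gauss(0, 1) for _ in range(4096)]        # unrelated stream

print(correlate(ant1, ant2) > correlate(ant1, noise))    # True
```

A real radio correlator does this for every antenna pair and frequency channel, with the cross-products retaining phase so that images can be synthesized.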

    Modeled vs. Actual Performance of the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS)

    The NASA Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) has been completed as an Engineering Demonstration Unit (EDU) and has recently finished thermal vacuum testing and calibration. The GIFTS EDU was designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS EDU includes a cooled (150 K) imaging FTS designed to provide the radiometric accuracy and atmospheric sounding precision required to meet the next-generation GOES sounder requirements. This paper discusses a GIFTS sensor response model and its validation during thermal vacuum testing and calibration. The GIFTS sensor response model presented here is a component-based simulation written in IDL, with the model component characteristics updated as actual hardware has become available. We discuss our calibration approach, the calibration hardware used, and preliminary system performance, including NESR, spectral radiance responsivity, and instrument line shape. A comparison of the model predictions and hardware performance provides useful insight into the fidelity of the design approach.
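Radiometric calibration of an infrared FTS of this kind is commonly done with two reference blackbody views. The sketch below shows the generic linear two-point calibration; all counts and radiances are invented, and the paper's actual calibration hardware and algorithm may differ:

```python
# Generic two-point (hot/cold blackbody) calibration: a linear map from raw
# instrument counts to scene radiance. Numbers are purely illustrative.

def calibrate(c_scene: float, c_hot: float, c_cold: float,
              l_hot: float, l_cold: float) -> float:
    """Map scene counts to radiance via the hot/cold reference views."""
    gain = (l_hot - l_cold) / (c_hot - c_cold)   # radiance per count
    return l_cold + gain * (c_scene - c_cold)

# Illustrative: hot blackbody view -> 900 counts, cold view -> 300 counts,
# with known radiances 120 and 40 (arbitrary units); scene reads 600 counts.
print(round(calibrate(600.0, 900.0, 300.0, l_hot=120.0, l_cold=40.0), 6))  # 80.0
```

Quantities like NESR are then estimated from the residual noise of repeated calibrated views of a stable reference.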

    Software Defined Radio Implementation of Carrier and Timing Synchronization for Distributed Arrays

    The communication range of wireless networks can be greatly improved by using distributed beamforming from a set of independent radio nodes. One of the key challenges in establishing a beamformed communication link from separate radios is achieving carrier frequency and sample timing synchronization. This paper describes an implementation that addresses both carrier frequency and sample timing synchronization simultaneously using RF signaling between designated master and slave nodes. Using a pilot signal transmitted by the master node, each slave estimates and tracks the frequency and timing offsets and digitally compensates for them. A real-time implementation of the proposed system was developed in GNU Radio and tested with Ettus USRP N210 software defined radios. The measurements show that the distributed array can reach a residual frequency error of 5 Hz and a residual timing offset of 1/16 of the sample duration 70 percent of the time. This performance enables distributed beamforming for range extension applications. Comment: Submitted to the 2019 IEEE Aerospace Conference.
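Pilot-based carrier-frequency-offset estimation of the kind the slaves perform can be sketched as follows. This is a minimal idealized model (noiseless tone, invented sample rate), not the authors' GNU Radio implementation:

```python
# Sketch: estimate a carrier frequency offset from the average phase advance
# between adjacent samples of a received pilot tone. Noise, timing offset,
# and tracking loops are omitted; FS and F_OFF are assumed values.
import cmath
import math

FS = 1e6          # sample rate in Hz (assumed)
F_OFF = 5000.0    # true carrier offset to recover, Hz

# Received complex-baseband pilot: a pure tone at the offset frequency.
r = [cmath.exp(2j * math.pi * F_OFF * n / FS) for n in range(1024)]

# Accumulate r[n+1] * conj(r[n]); its phase is the per-sample phase step.
acc = sum(r[n + 1] * r[n].conjugate() for n in range(len(r) - 1))
f_hat = cmath.phase(acc) * FS / (2 * math.pi)
print(round(f_hat))  # 5000
```

In a real system the same phase-difference statistic is averaged over noisy samples and fed to a tracking loop, and the estimate is limited to offsets below half the pilot sample rate.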

    On the Complexity of Local Search for Weighted Standard Set Problems

    In this paper, we study the complexity of computing locally optimal solutions for weighted versions of standard set problems such as SetCover, SetPacking, and many more. For our investigation, we use the PLS framework as defined in Johnson et al. [JPY88]. We show that for most of these problems, computing a locally optimal solution is already PLS-complete for a simple neighborhood of size one. For the local search versions of weighted SetPacking and SetCover, we derive tight bounds for a simple neighborhood of size two. To the best of our knowledge, these are among the very few PLS results on local search for weighted standard set problems.
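To make the notion of a size-one neighborhood concrete, here is a hypothetical local search for weighted SetPacking in which each move changes the solution by a single set. With positive weights, deleting one set never improves the objective, so only insertions are tried; the instance and weights are invented:

```python
# Hypothetical weighted SetPacking local search with a size-one neighborhood:
# a feasible solution is a collection of pairwise disjoint sets, and a move
# inserts one set while preserving disjointness. Instance is made up.

def local_search(sets, weights):
    """Repeat single-set insertions until no improving move exists."""
    chosen = set()                      # indices of selected sets
    improved = True
    while improved:
        improved = False
        for i, s in enumerate(sets):
            if i in chosen:
                continue
            # Feasible insertion: s must be disjoint from every chosen set.
            if all(s.isdisjoint(sets[j]) for j in chosen):
                chosen.add(i)           # positive weight => always improving
                improved = True
    return chosen, sum(weights[i] for i in chosen)

sets = [{1, 2}, {2, 3}, {4}, {3, 5}]
weights = [3, 5, 2, 4]
print(local_search(sets, weights))      # ({0, 2, 3}, 9)
```

Note the locally optimal packing found here (weight 9) is not globally optimal in general; the PLS-hardness results concern exactly how hard reaching such local optima can be.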

    Meeting the requirements to deploy cloud RAN over optical networks

    Radio access network (RAN) cost savings are expected in future cloud RAN (C-RAN). In contrast to traditional distributed RAN architectures, in C-RAN, remote radio heads (RRHs) from different sites can share baseband processing resources from virtualized baseband unit pools placed in a few central offices (COs). Due to the stringent requirements of the several interfaces needed in C-RAN, optical networks have been proposed to support it. One of the key elements to be considered is the optical transponder. Specifically, sliceable bandwidth-variable transponders (SBVTs) have recently shown many advantages for core optical transport networks. In this paper, we study the connectivity requirements of C-RAN applications and conclude that dynamicity, fine granularity, and elasticity are needed. However, no existing SBVT implementation supports those requirements, and thus we propose and assess an SBVT architecture based on dynamic optical arbitrary generation/measurement. We consider different Long-Term Evolution-Advanced configurations and study the impact of the centralization level in terms of capital and operating expenses. An optimization problem is modeled to decide which COs should be equipped and which equipment, including transponders, needs to be installed. The results show noticeable cost savings from installing the proposed SBVTs compared to installing fixed transponders. Finally, compared to the maximum centralization level, remarkable cost savings are shown when a lower level of centralization is considered. Peer reviewed. Postprint (author's final draft).
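The CO-equipping decision can be pictured with a toy brute-force version of such a placement problem: choose which candidate COs to equip so that every RRH site is served, at minimum equipment cost. All CO names, costs, and coverage sets below are invented, and the paper's actual optimization model is certainly richer (transponder types, interface capacities, centralization level):

```python
# Toy CO placement: minimize equipping cost subject to covering all RRH
# sites. Enumerating all CO subsets is fine at this size; a real model
# would be an ILP with many more constraints. Data is made up.
from itertools import combinations

cos = {"CO-A": 10, "CO-B": 7, "CO-C": 6}                 # cost to equip
covers = {"CO-A": {1, 2, 3}, "CO-B": {2, 4}, "CO-C": {3, 4, 5}}
rrhs = {1, 2, 3, 4, 5}                                   # sites to serve

best = None
for k in range(1, len(cos) + 1):
    for combo in combinations(cos, k):
        served = set().union(*(covers[c] for c in combo))
        if served >= rrhs:                               # all sites covered
            cost = sum(cos[c] for c in combo)
            if best is None or cost < best[0]:
                best = (cost, combo)
print(best)  # (16, ('CO-A', 'CO-C'))
```

The paper's conclusion that sliceable transponders pay off corresponds, in this picture, to the per-CO cost term shrinking when one transponder can serve several RRH flows at once.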

    Design, control and error analysis of a fast tool positioning system for ultra-precision machining of freeform surfaces

    This thesis was previously held under moratorium from 03/12/19 to 03/12/21. Freeform surfaces are widely found in advanced imaging and illumination systems, orthopaedic implants, high-power beam shaping applications, and other high-end scientific instruments. They give designers greater ability to cope with the performance limitations commonly encountered in simple-shape designs. However, the stringent requirements for surface roughness and form accuracy of freeform components pose significant challenges for current machining techniques, especially in the optical and display market, where large surfaces with tens of thousands of micro features are to be machined. Such highly wavy surfaces require the machine tool cutter to move rapidly while keeping following errors small. Manufacturing efficiency has been a bottleneck in these applications. The rapidly changing cutting forces and inertial forces also contribute a great deal to the machining errors. The difficulty of maintaining good surface quality at high operational frequency suggests the need for an error analysis approach that can predict the dynamic errors. The machining requirements also impose great challenges on machine tool design and the control process, and there has been a knowledge gap in how the mechanical structural design affects the achievable positioning stability. The goal of this study was to develop a tool positioning system capable of delivering fast motion with the positioning accuracy and stiffness required for ultra-precision freeform manufacturing. This goal is achieved through deterministic structural design, detailed error analysis, and novel control algorithms. Firstly, a novel stiff-support design was proposed to eliminate the structural and bearing compliances in the structural loop. To implement the concept, a fast positioning device was developed based on a new type of flat voice coil motor.
Flexure bearing, magnet track, and motor coil parameters were designed and calculated in detail. A high-performance digital controller and a power amplifier were also built to meet the servo rate requirement of the closed-loop system. A thorough understanding was established of how signals propagate within the control system, which is fundamentally important in determining the loop performance of high-speed control. A systematic error analysis approach based on a detailed model of the system was proposed and, for the first time, verified to reveal how disturbances contribute to the tool positioning errors. Each source of disturbance was treated as a stochastic process, and these disturbances were synthesised in the frequency domain. The differences between following error and real positioning error were discussed and clarified. The predicted spectrum of following errors agreed with the measured spectrum across the frequency range. It was found that the following errors read from the control software underestimated the real positioning errors at low frequencies and overestimated them at high frequencies. The error analysis approach thus successfully revealed the real tool positioning errors that are mingled with sensor noise. Approaches to suppressing disturbances were discussed from the perspectives of both system design and control. A deterministic controller design approach was developed to preclude the uncertainty associated with controller tuning, resulting in a control law that minimizes positioning errors. The influences of mechanical parameters such as mass, damping, and stiffness were investigated within the closed-loop framework. Under a given disturbance condition, the optimal bearing stiffness and optimal damping coefficients were found. Experimental positioning tests showed that a larger moving mass helped to combat all disturbances except sensor noise. Because of power limits, the inertia of the fast tool positioning system could not be high.
A control algorithm with an additional acceleration-feedback loop was then studied to enhance the dynamic stiffness of the cutting system without the need for large inertia. An analytical model of the dynamic stiffness of the system with acceleration feedback was established. The dynamic stiffness was tested by frequency response tests as well as by intermittent diamond-turning experiments. The following errors and the form errors of the machined surfaces were compared with the estimates provided by the model. It was found that the dynamic stiffness within the acceleration sensor bandwidth was proportionally improved. The additional acceleration sensor brought a new error source into the loop, and its error contribution increased with a larger acceleration gain. At a certain point, the error caused by the increased acceleration gain surpassed the other disturbances and started to dominate, representing the practical upper limit of the acceleration gain. Finally, the developed positioning system was used to cut some typical freeform surfaces. A surface roughness of 1.2 nm (Ra) was achieved on a NiP alloy substrate in flat cutting experiments. Freeform surfaces, including a beam integrator surface, a sinusoidal surface, and an arbitrary freeform surface, were successfully machined with optical-grade quality. Ideas for future improvements are proposed at the end of this thesis.
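The reported proportional improvement of dynamic stiffness within the sensor bandwidth is consistent with the common view of acceleration feedback as added virtual mass. The sketch below uses an assumed single-mass model with invented parameters, not the thesis's actual plant values:

```python
# Sketch: for a mass-spring-damper positioning stage, acceleration feedback
# with gain g_a scales the effective mass to m*(1 + g_a), so disturbance
# rejection above resonance improves by roughly (1 + g_a). Parameters are
# illustrative, not from the thesis.
import math

m, c, k = 2.0, 50.0, 1e5        # moving mass [kg], damping [N s/m], stiffness [N/m]

def dyn_stiffness(w: float, g_a: float = 0.0) -> float:
    """|F/x| at angular frequency w with acceleration-feedback gain g_a."""
    re = k - m * (1.0 + g_a) * w ** 2    # elastic minus (virtual) inertial term
    im = c * w                           # damping term
    return math.hypot(re, im)

w = 2 * math.pi * 500                    # a 500 Hz disturbance, above resonance
ratio = dyn_stiffness(w, 3.0) / dyn_stiffness(w, 0.0)
print(round(ratio, 1))                   # ~4, i.e. roughly (1 + g_a)
```

This also illustrates the limit the thesis identifies: the benefit holds only within the accelerometer's bandwidth, and raising g_a amplifies the sensor's own noise until that new error source dominates.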