
    Data Compression for Space Missions

    For many space missions, the ability of spacecraft sensors to acquire meaningful data may considerably surpass the ability of the telemetry system to transmit those data to Earth. It is often possible, however, to receive at Earth a large portion of the sensed data by preprocessing or compressing the data before transmission to remove redundancy or useless information. This paper discusses data compression as applied to space missions. Since the usefulness and form of data compression depend to some extent on the particular space mission under consideration, certain general classifications of space missions are considered in light of their amenability to data compression. Some basic compression techniques are applied to example sets of data, and the results show that a rather small increase in onboard data processing can yield a severalfold increase in the amount of data transmitted to Earth. The compression procedures used are limited to those easily implemented by the unsophisticated but highly reliable data processing equipment likely to be present on future spacecraft. Curves are developed showing the compression ratio of various techniques as a function of allowable approximation error and complexity of mechanization. Data compression relationships as functions of reliability are also presented, where reliability is related to the loss of data per bit error in transmission. This analysis shows that certain tradeoffs exist since, in general, higher compression ratios are obtained at the expense of less accurate data representation, more complex implementation, and higher loss of data per bit error in transmission.
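    As a concrete illustration of the kind of simple, easily mechanized scheme the paper has in mind, here is a minimal zero-order-predictor sketch: a sample is transmitted only when it drifts outside the allowed approximation error. The sample data and tolerance are invented, not taken from the paper.

```python
def zop_compress(samples, tolerance):
    """Zero-order predictor: transmit a sample only when it differs from
    the last transmitted value by more than the allowed approximation
    error; runs of near-constant data collapse to a single sample."""
    kept = []
    last = None
    for i, s in enumerate(samples):
        if last is None or abs(s - last) > tolerance:
            kept.append((i, s))
            last = s
    return kept

def zop_reconstruct(kept, n):
    """Hold the last transmitted value until the next one arrives."""
    out, j, last = [], 0, None
    for i in range(n):
        if j < len(kept) and kept[j][0] == i:
            last = kept[j][1]
            j += 1
        out.append(last)
    return out

data = [10.0, 10.1, 10.05, 10.2, 12.0, 12.1, 12.05, 14.0]  # invented telemetry
kept = zop_compress(data, tolerance=0.5)
ratio = len(data) / len(kept)   # compression ratio grows with the tolerance
```

    Loosening the tolerance raises the compression ratio but also raises the worst-case reconstruction error, which is exactly the tradeoff the paper's curves quantify.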

    Efficient algorithm for solving semi-infinite programming problems and their applications to nonuniform filter bank designs

    An efficient algorithm for solving semi-infinite programming problems is proposed in this paper. The index set is constructed by adding, at each iteration, only the single most violated point in a refined set of grid points. By applying this algorithm to the optimum nonuniform symmetric/antisymmetric linear-phase finite-impulse-response (FIR) filter bank design problems, the time required to obtain a globally optimal solution is much reduced compared with that of the previously proposed algorithm.
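    The one-point refinement rule can be sketched on a toy semi-infinite program (not the paper's filter bank design problem). Here the finite subproblem is trivial, and `solve_sip`, `f`, and the grid are illustrative names, not the paper's notation.

```python
import math

def solve_sip(f, grid, tol=1e-9):
    """Exchange-type sketch of semi-infinite programming: minimize x
    subject to x >= f(t) for every t in a densely gridded index set.
    Each iteration solves the finite subproblem over the current index
    set, then adds only the single most violated grid point."""
    index_set = [grid[0]]
    while True:
        x = max(f(t) for t in index_set)            # finite subproblem
        t_star = max(grid, key=lambda t: f(t) - x)  # most violated point
        if f(t_star) - x <= tol:
            return x
        index_set.append(t_star)

grid = [i / 1000 for i in range(1001)]
x_opt = solve_sip(lambda t: math.sin(math.pi * t), grid)
```

    Because only one constraint is appended per iteration, the finite subproblems stay small, which is the source of the speedup the abstract reports.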

    Robust Optimization in Simulation: Taguchi and Krige Combined

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a 'robust' methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world, but replaces his statistical techniques by Kriging. We illustrate the resulting methodology through classic Economic Order Quantity (EOQ) inventory models. Our results suggest that robust optimization requires order quantities that differ from the classic EOQ. We also compare our latest results with our previous results, which use Response Surface Methodology (RSM) instead of Kriging.
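    A minimal sketch of the classic EOQ model and of Taguchi's view of demand as an uncontrollable noise factor, judged by both mean and spread of cost. The numbers are invented, and the sketch does not reproduce the paper's Kriging metamodel.

```python
import math

def eoq(demand, setup_cost, holding_cost):
    """Classic Economic Order Quantity: Q* = sqrt(2*D*K / h)."""
    return math.sqrt(2 * demand * setup_cost / holding_cost)

def total_cost(q, demand, setup_cost, holding_cost):
    """Annual cost: ordering cost (D/Q)*K plus holding cost (Q/2)*h."""
    return demand / q * setup_cost + q / 2 * holding_cost

# Illustrative numbers (assumed, not taken from the paper)
D, K, h = 1000.0, 50.0, 2.0
q_star = eoq(D, K, h)  # deterministic optimum for known demand

# Taguchi's view: demand is a noise factor; evaluate an order quantity
# by the mean AND the spread of its cost over demand scenarios
scenarios = [600.0, 1000.0, 1400.0]

def mean_std_cost(q):
    costs = [total_cost(q, d, K, h) for d in scenarios]
    m = sum(costs) / len(costs)
    s = (sum((c - m) ** 2 for c in costs) / len(costs)) ** 0.5
    return m, s
```

    Trading off the mean against the spread is what can push the robust order quantity away from the classic EOQ; the paper fits Kriging metamodels to both responses rather than evaluating them exhaustively.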

    Sampling from a system-theoretic viewpoint: Part I - Concepts and tools

    This paper is the first in a series studying a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms. In this paper we present the paradigm and review the underlying technical tools, such as the lifting technique and some topics in operator theory. This material facilitates a systematic and unified treatment of a wide range of sampling and reconstruction problems, recovering many solutions hitherto considered distinct and leading to new results. Some of these applications are discussed in the second part.

    Reconfiguration and tool path planning of hexapod machine tools

    Hexapod machine tools have the potential to achieve increased accuracy, speed, acceleration and rigidity over conventional machines, and are regarded by many researchers as the machine tools of the next generation. However, their small and complex workspace often limits the range of tasks they can perform, and their parallel structure raises many new issues preventing the direct use of conventional tool path planning methods. This dissertation presents an investigation of new reconfiguration and tool path planning methods for enhancing the ability of hexapods to adapt to workspace changes and assisting them in being integrated into the current manufacturing environments. A reconfiguration method which includes the consideration of foot-placement space (FPS) determination and placement parameter identification has been developed. Based on the desired workspace of a hexapod and the motion range of its leg modules, the FPS of a hexapod machine is defined and a construction method of the FPS is presented. An implementation algorithm for the construction method is developed. The equations for identifying the position and orientation of the base joints for the hexapod at a new location are formulated. For the position identification problem, an algorithm based on Dialytic Elimination is derived. Through examples, it is shown that the FPS determination method can provide feasible locations for the feet of the legs to realize the required workspace. It is also shown that these identification equations can be solved through a numerical approach or through Dialytic Elimination using symbolic manipulation. Three dissimilarities between hexapods and five-axis machines are identified and studied to enhance the basic understanding of tool path planning for hexapods. The first significant difference is the existence of an extra degree of freedom (γ angle). The second dissimilarity is that a hexapod has a widely varying inverse Jacobian over the workspace. 
This leads to the result that a hexapod usually has a nonlinear path when following a straight-line segment over two sampled poses. These factors indicate that the traditional path planning methods should not be used for hexapods without modification. A kinematics-based tool path planning method for hexapod machine tools is proposed to guide the part placement and the determination of the γ angle. The algorithms to search for the feasible part locations and γ sets are presented. Three local planning methods for the γ angle are described. It is demonstrated that the method is feasible and effective in enhancing the performance of the hexapod machine. As the nonlinear error is computationally expensive to evaluate in real time, the measure of total leg length error is proposed; this measure is shown to be effective in controlling the nonlinear error.
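    The leg-length computation underlying such kinematics-based planning can be sketched as a standard hexapod inverse-kinematics step: each leg length is the distance from a fixed base joint to the corresponding platform joint under the commanded pose. The symmetric geometry below is an assumed example, not the dissertation's machine.

```python
import math

def rot_z(gamma):
    """Rotation about the tool axis by the redundant angle gamma."""
    c, s = math.cos(gamma), math.sin(gamma)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def leg_lengths(base_joints, platform_joints, position, gamma):
    """Hexapod inverse kinematics: rotate each platform joint by gamma,
    translate it to the commanded position, and measure the distance to
    its base joint; those six distances are the commanded leg lengths."""
    R = rot_z(gamma)
    lengths = []
    for b, p in zip(base_joints, platform_joints):
        world = [sum(R[i][j] * p[j] for j in range(3)) + position[i]
                 for i in range(3)]
        lengths.append(math.dist(world, b))
    return lengths

# Assumed symmetric geometry: base radius 2, platform radius 1
angles = [i * math.pi / 3 for i in range(6)]
base = [[2 * math.cos(a), 2 * math.sin(a), 0.0] for a in angles]
plat = [[math.cos(a), math.sin(a), 0.0] for a in angles]
lens = leg_lengths(base, plat, position=[0.0, 0.0, 3.0], gamma=0.0)
```

    Summing the changes in these six lengths between two sampled poses gives the total leg length measure the dissertation proposes as a cheap proxy for the nonlinear path error.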

    Sampling from a system-theoretic viewpoint

    This paper studies a system-theoretic approach to the problem of reconstructing an analog signal from its samples. The idea, borrowed from earlier treatments in the control literature, is to address the problem as a hybrid model-matching problem in which performance is measured by system norms.

    The paper is split into three parts. In Part I we present the paradigm and review the lifting technique, which is our main technical tool. In Part II optimal samplers and holds are designed for various analog signal reconstruction problems. In some cases one component is fixed while the remaining ones are designed; in other cases all three components are designed simultaneously. No causality requirements are imposed in Part II, which allows the use of frequency-domain arguments, in particular the lifted frequency response introduced in Part I. In Part III the main emphasis is placed on a systematic incorporation of causality constraints into the optimal design of reconstructors. We consider reconstruction problems in which the sampling (acquisition) device is given and the performance is measured by the $L^2$-norm of the reconstruction error. The problem is solved under the constraint that the optimal reconstructor is $l$-causal for a given $l \geq 0$, i.e., that its impulse response is zero on the time interval $(-\infty, -lh)$, where $h$ is the sampling period. We derive a closed-form state-space solution of the problem, based on the spectral factorization of a rational transfer function.
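    The l-causality constraint can be illustrated on a discrete FIR reconstructor: taps that look more than l samples into the future are forced to zero, a discrete analogue of requiring the impulse response to vanish on (-inf, -l*h). This is only a sketch of the constraint itself, not the paper's optimal state-space design; the filter and data are invented.

```python
def l_causal_filter(samples, taps, l):
    """Apply an FIR reconstructor under an l-causality constraint:
    taps with lookahead deeper than l samples are dropped, so the
    output at index i uses input samples no later than i + l."""
    n = len(samples)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in taps.items():       # k > 0 means k samples of lookahead
            if k > l or not 0 <= i + k < n:
                continue
            acc += w * samples[i + k]
        out.append(acc)
    return out

samples = [0.0, 1.0, 2.0, 3.0, 4.0]
taps = {-1: 0.25, 0: 0.5, 1: 0.25}             # symmetric 3-tap smoother
relaxed = l_causal_filter(samples, taps, l=1)  # one sample of lookahead allowed
strict  = l_causal_filter(samples, taps, l=0)  # fully causal: future tap dropped
```

    Larger l admits better reconstructors at the price of l samples of delay, which is the tradeoff Part III optimizes over.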

    Use of orthogonal arrays, quasi-Monte Carlo sampling and kriging response models for reservoir simulation with many varying factors

    Asset development teams may adjust simulation model parameters using experimental design to reveal which factors have the greatest impact on reservoir performance. Response surfaces and experimental design make sensitivity analysis less expensive and more accurate, helping to optimize recovery under geological and economic uncertainties. In this thesis, experimental designs including orthogonal arrays, factorial designs, Latin hypercubes and Hammersley sequences are compared and analyzed. These methods are demonstrated on a gas well with a water-coning problem to illustrate the efficiency of orthogonal arrays. Eleven geologic factors are varied while optimizing three engineering factors (fourteen factors in total). The objective is to optimize completion length, tubing head pressure, and tubing diameter for a partially penetrating well with uncertain reservoir properties. A nearly orthogonal array was specified with three levels for eight factors and four levels for the remaining six geologic and engineering factors. This design requires only 36 simulations, compared to 26,873,856 runs for a full factorial design. Hyperkriging surfaces are an alternative model form for large numbers of factors. Hyperkriging uses the maximum likelihood variogram model parameters to minimize prediction errors. Kriging is compared to conventional polynomial response models. The robustness of the response surfaces generated by kriging and polynomial regression is compared using jackknifing and bootstrapping. Sensitivity analysis and uncertainty analysis can be performed inexpensively and efficiently using response surfaces. The proposed design approach requires fewer simulations and provides accurate response models, efficient optimization, and flexible sensitivity and uncertainty assessment.
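    A space-filling design of the size discussed above (36 runs, 14 factors) can be sketched with the standard Latin hypercube construction. This is a generic textbook implementation, not the thesis's nearly orthogonal array.

```python
import random

def latin_hypercube(n_runs, n_factors, seed=0):
    """Latin hypercube design on [0, 1)^n_factors: each factor's range is
    split into n_runs equal strata and every stratum is sampled exactly
    once, so every one-dimensional projection is evenly covered."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_factors):
        # one random point per stratum, then shuffle the stratum order
        col = [(i + rng.random()) / n_runs for i in range(n_runs)]
        rng.shuffle(col)                  # random pairing across factors
        columns.append(col)
    return [list(point) for point in zip(*columns)]  # rows are runs

design = latin_hypercube(36, 14)   # 36 runs across 14 factors
```

    Each row of `design` is one simulation run; the responses from these runs would then be fitted with a kriging or polynomial surface for sensitivity and uncertainty analysis.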

    An Introduction To Numerical Relativity And Simulations Of Binary Neutron Stars

    The theory of general relativity is currently the best description of gravity. However, the equations of general relativity are highly nonlinear, and only the simplest cases can be solved analytically. As a result, the field of numerical relativity was created to address these issues and to model more complicated dynamical situations. This thesis sets out to give the reader a basic understanding of general relativity and numerical relativity, as well as of some of the programs used in numerical relativity research, such as Lorene and the Einstein Toolkit, and concludes with a brief set of simulations of binary neutron stars with various masses.