
    MIRAGE: The data acquisition, analysis, and display system

    Developed for the NASA Johnson Space Center and Life Sciences Directorate by GE Government Services, the Microcomputer Integrated Real-time Acquisition Ground Equipment (MIRAGE) system is a portable ground support system for Spacelab life sciences experiments. The MIRAGE system can acquire digital or analog data. Digital data may be NRZ-formatted telemetry packets or packets from a network interface. Analog signals are digitized and stored in experiment packet format. Data packets from any acquisition source are archived to disk as they are received. Meta-parameters are generated from the data packet parameters by applying mathematical and logical operators. Parameters are displayed in text and graphical form or output to analog devices. Experiment data packets may be retransmitted through the network interface. Data stream definitions, experiment parameter formats, parameter displays, and other variables are configured using a spreadsheet database. A database can be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control. The MIRAGE system can be integrated with other workstations to perform a variety of functions. The generic capabilities, adaptability, and ease of use make MIRAGE a cost-effective solution to many experiment data processing requirements.
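    As a rough sketch of the meta-parameter idea described above (the packet layout, parameter names, scale factors, and operators below are hypothetical, not MIRAGE's actual formats):

```python
import struct

# Hypothetical experiment packet: two 16-bit counts followed by a status byte
PACKET_FORMAT = ">HHB"

def decode_packet(raw: bytes) -> dict:
    heart_rate_raw, pressure_raw, status = struct.unpack(PACKET_FORMAT, raw)
    return {"heart_rate_raw": heart_rate_raw, "pressure_raw": pressure_raw, "status": status}

def derive_meta(params: dict) -> dict:
    """Meta-parameters: mathematical and logical operators applied to packet parameters."""
    return {
        "heart_rate_bpm": params["heart_rate_raw"] * 0.25,      # scale raw counts to bpm
        "pressure_kpa": params["pressure_raw"] * 0.01 + 80.0,   # linear calibration
        "alarm": params["status"] & 0x01 == 1,                  # logical operator on a flag bit
    }

packet = struct.pack(PACKET_FORMAT, 288, 2150, 1)   # simulated telemetry packet
print(derive_meta(decode_packet(packet)))
```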

    Solving Ill-posed Problems Using Data Assimilation. Application to optical flow estimation

    Data Assimilation is a mathematical framework used in environmental sciences to improve forecasts produced by meteorological, oceanographic or air quality simulation models. Data Assimilation techniques require the resolution of a system with three components: one describing the temporal evolution of a state vector, one coupling the observations and the state vector, and one defining the initial condition. In this article, we use this framework to study a class of ill-posed Image Processing problems that are usually solved by spatial and temporal regularization techniques. A generic approach is defined to rewrite an ill-posed Image Processing problem as a Data Assimilation system. The method is illustrated on the determination of optical flow from a sequence of images. The resulting software has two advantages: a quality criterion on the input data is used to weight their contribution in the computation of the solution, and a dynamic model is proposed to ensure a significant temporal regularity of the solution.
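    For reference, the three components described above can be written in a generic weak-constraint form (the notation and error terms are illustrative, not the exact system used in the paper):

```latex
\begin{align}
  \frac{\partial X}{\partial t}(x,t) + \mathbb{M}(X)(x,t) &= E_m(x,t)
    && \text{(temporal evolution of the state vector } X\text{)} \\
  \mathbb{H}(X)(x,t) &= Y(x,t) + E_o(x,t)
    && \text{(coupling of observations } Y \text{ and state)} \\
  X(x,0) &= X_b(x) + E_b(x)
    && \text{(initial condition)}
\end{align}
```

    Here $\mathbb{M}$ is the evolution model, $\mathbb{H}$ the observation operator, and $E_m$, $E_o$, $E_b$ are model, observation and background errors whose covariances weight the contribution of the data in the computed solution.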

    Transportation Management in a Distributed Logistic Consumption System Under Uncertainty Conditions

    The problem of supply management in a supplier-to-consumer logistics transport system has been formulated and solved. The novelty of the formulation lies in the integrated accounting of costs in the logistic system, which simultaneously takes into account the cost of transporting products from suppliers to consumers, the cost for each consumer of storing the unsold product, and the losses due to possible shortages. The resulting optimization problem is no longer a standard linear programming problem. In addition, the work assumes that the solution should be sought taking into account the fact that the initial data of the problem are not deterministic. Traditional methods of describing the uncertainty of the source data are analyzed. It is concluded that, given the rapidly changing conditions under which the delivery process is implemented in a distributed supplier-to-consumer system, it is advisable to move from a probability-theoretic representation of the source data to their description in terms of fuzzy mathematics. In particular, the fuzzy values of the demand for the delivered product are specified for each consumer by their membership functions.

    The distribution of supplies in the system is described by solving a mathematical programming problem with a nonlinear objective function and a set of linear constraints of the transport type. In forming the criterion, a technique is used to transform the membership functions of the fuzzy parameters of the problem into their probability-theoretic counterparts, namely density distributions of the demand values. The task is reduced to finding, for each consumer, the value of the ordered product that minimizes the average total cost of storing the unsold product and the losses from shortages. The initial problem is thereby reduced to a set of integral equations that are, in general, solved numerically. It is shown that in particular cases that are important in practice this solution can be obtained analytically.

    The paper notes the insufficient adequacy of the traditionally used mathematical models for describing the fuzzy parameters of the problem, in particular the demand. Statistical processing of real demand data shows that the parameters of the membership functions of the corresponding fuzzy numbers are themselves fuzzy numbers. Suitable mathematical models of the corresponding fuzzy numbers are formulated in terms of bifuzzy mathematics, and the relations describing the membership functions of the bifuzzy numbers are given. A formula is obtained for calculating the total losses due to storage and shortages, taking into account the bifuzziness of demand. In this case, the initial task is reduced to finding the distribution of supplies at which the maximum value of the total losses does not exceed a permissible value.
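    The per-consumer subproblem described above is essentially a newsvendor-style trade-off between storage and shortage costs once the fuzzy demand has been converted into a density. The sketch below is a minimal illustration of that idea, assuming a triangular membership function is reinterpreted as a normalized triangular density; the cost coefficients and demand parameters are hypothetical and not taken from the paper.

```python
from scipy import integrate, optimize

# Hypothetical per-unit costs: h = storage of unsold product, p = losses from shortage
h, p = 2.0, 5.0

# Triangular demand density on [a, c] with mode b, obtained by normalizing a
# triangular membership function (one simple fuzzy-to-probabilistic transform)
a, b, c = 80.0, 100.0, 130.0

def f(x):
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 2.0 * (x - a) / ((c - a) * (b - a))
    return 2.0 * (c - x) / ((c - a) * (c - b))

def expected_cost(q):
    """Average total cost of ordering q units: storage of leftovers plus shortage losses."""
    storage = integrate.quad(lambda x: h * (q - x) * f(x), a, q)[0]
    shortage = integrate.quad(lambda x: p * (x - q) * f(x), q, c)[0]
    return storage + shortage

res = optimize.minimize_scalar(expected_cost, bounds=(a, c), method="bounded")
print(f"order quantity ~ {res.x:.1f}, expected cost ~ {res.fun:.2f}")
```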

    A fully digital model for Kalman filters

    The Kalman filter is a mathematical method whose purpose is to process noisy measurements in order to obtain an estimate of some relevant parameters of a system. It represents a valuable tool in the GNSS area, with some of its main applications related to the computation of the user PVT solution and to the integration of GNSS receivers with INS or other sensors. The Kalman filter is based on a state-space representation, which describes the analyzed system as a set of differential equations establishing the connections between the inputs, the outputs and the state variables of the analyzed system. In the continuous-time domain there exists a large class of physical processes whose time evolution is well described by means of stochastic differential equations. A typical problem is the need for an equivalent system in discrete time, due to the discrete nature of the data to be processed. In the literature, it is quite common to solve this problem in the continuous-time domain and to approximate the solution using a Taylor series expansion, to obtain an approximate discrete-time version of the continuous-time problem. However, other methods exist, based on the possibility of transforming a continuous-time system into a discrete-time system by means of transformations from the Laplace complex plane to the z plane. These methods are widely used in the digital signal processing community, for example, to design digital filters from their analog counterparts. The main advantage of this approach is that it is very easily implemented by applying some mechanical rules. Moreover, the nature of the approximation introduced by the Laplace-z transformation is known a priori and clearly readable in the frequency domain. In the following, the classical methods based on the Taylor approximation and on the Laplace-z transformations are analyzed and compared.
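    As a rough illustration of the two routes to a discrete-time model, the sketch below discretizes a simple continuous-time state-space system both with a first-order Taylor (forward Euler) approximation and with the bilinear (Tustin) Laplace-to-z mapping; the system matrices are made up for the example and are not tied to any particular GNSS dynamics model.

```python
import numpy as np
from scipy.signal import cont2discrete

# Toy continuous-time model: position/velocity, with velocity driven by the input
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
dt = 0.1  # sampling interval in seconds

# Route 1: first-order Taylor approximation (forward Euler), Ad ~ I + A*dt
Ad_euler, Bd_euler, *_ = cont2discrete((A, B, C, D), dt, method="euler")

# Route 2: bilinear (Tustin) transform, a Laplace-plane to z-plane mapping
Ad_tustin, Bd_tustin, *_ = cont2discrete((A, B, C, D), dt, method="bilinear")

print("Euler Ad:\n", Ad_euler)
print("Tustin Ad:\n", Ad_tustin)
```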

    Last-mile delivery optimization using GPS data: a case study

    The development of GPS data analysis and processing is contributing to new solutions in urban logistics, such as route characterization or client detection. The city of Quito, Ecuador, has problems regarding freight transportation. Reducing the magnitude of these problems, through the implementation of a responsible enterprise logistics system, can contribute to better urban and economic development of this Latin-American capital city. This study proposes and analyses a solution based on GPS data manipulation methodologies applied to urban freight distribution. The reliability of traditional routing software methods and of truck drivers' empirical knowledge is evaluated by comparing them to mathematical optimization algorithms that consider the city's transportation network, modeled as an Asymmetric Traveling Salesperson Problem (ATSP). Tools used include Python for data manipulation and optimization, CartoDB for Geographic Information Systems (GIS), and Compass (a logistics application developed by MIT) for the generation of route indicators. The results of this study provide a better understanding of solutions to last-mile delivery operations in Quito, and suggest that mathematical optimization is a reliable way to develop freight transportation routes.
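    For context, the combinatorial model referenced above can be illustrated on a toy instance: an asymmetric travel-time matrix (times differ by direction, as on one-way streets) solved exactly by enumeration for a handful of stops. The matrix below is invented for the example and is unrelated to Quito's actual network; realistic instances would use a heuristic or a MIP solver rather than brute force.

```python
from itertools import permutations

# Hypothetical asymmetric travel times (minutes) between a depot (0) and 4 clients.
# t[i][j] is the time from i to j; note that t[i][j] != t[j][i] in general.
t = [
    [ 0, 12,  9, 14, 20],
    [10,  0, 11,  7, 16],
    [ 8, 13,  0, 10, 12],
    [15,  6, 11,  0,  9],
    [19, 17, 13,  8,  0],
]

n = len(t)
best_cost, best_route = float("inf"), None

# Exact ATSP by enumerating all orderings of clients 1..n-1, starting/ending at the depot
for perm in permutations(range(1, n)):
    route = (0, *perm, 0)
    cost = sum(t[a][b] for a, b in zip(route, route[1:]))
    if cost < best_cost:
        best_cost, best_route = cost, route

print("best route:", best_route, "total time:", best_cost)
```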

    A room acoustics measurement system using non-invasive microphone arrays

    This thesis summarises research into adaptive room correction for small rooms and pre-recorded material, for example music or films. A measurement system that predicts the sound at a remote location within a room, without a microphone at that location, was investigated. This would allow the sound within a room to be adaptively manipulated to ensure that all listeners receive optimum sound, therefore increasing their enjoyment. The solution presented uses small microphone arrays mounted on the room's walls. A unique geometry and processing system was designed, incorporating three processing stages: temporal, spatial and spectral. The temporal processing identifies individual reflection arrival times from the recorded data. The spatial processing estimates the angles of arrival of the reflections so that the three-dimensional coordinates of the reflections' origins can be calculated. The spectral processing then estimates the frequency response of each reflection. These estimates allow a mathematical model of the room to be calculated, based on the acoustic measurements made in the actual room. The model can then be used to predict the sound at different locations within the room. A simulated model of a room was produced to allow fast development of algorithms. Measurements in real rooms were then conducted and analysed to verify the theoretical models developed and to aid further development of the system. Results from these measurements and simulations are presented for each processing stage.
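    As a rough illustration of the temporal stage only, the sketch below estimates the delay of a single reflection by cross-correlating a microphone signal with an assumed-known direct sound; the signals are synthetic and the single-reflection model is a simplification of what a real array measurement would contain.

```python
import numpy as np

fs = 48_000                      # sample rate (Hz)
rng = np.random.default_rng(0)

# Synthetic "direct sound": a short burst of noise
direct = rng.standard_normal(1024)

# Synthetic microphone signal: direct sound plus one attenuated, delayed reflection
delay_samples = 240              # ground truth: 240 samples = 5 ms
mic = np.concatenate([direct, np.zeros(delay_samples)])
mic[delay_samples:] += 0.4 * direct
mic += 0.01 * rng.standard_normal(mic.size)   # measurement noise

# Cross-correlate and look for the secondary peak away from zero lag
corr = np.correlate(mic, direct, mode="full")
lags = np.arange(-direct.size + 1, mic.size)
positive = lags > 50                           # ignore the direct-path peak near lag 0
est = lags[positive][np.argmax(corr[positive])]

print(f"estimated reflection delay: {est} samples ({1000 * est / fs:.2f} ms)")
```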

    Higher-order nonlinear priors for surface reconstruction

    For surface reconstruction problems with noisy and incomplete range data, a Bayesian estimation approach can improve the overall quality of the surfaces. The Bayesian approach to surface estimation relies on a likelihood term, which ties the surface estimate to the input data, and on a prior, which ensures surface smoothness or continuity. This paper introduces a new high-order, nonlinear prior for surface reconstruction. The proposed prior can smooth complex, noisy surfaces while preserving sharp geometric features, and it is a natural generalization of edge-preserving methods in image processing, such as anisotropic diffusion. An exact solution would require solving a fourth-order partial differential equation (PDE), which can be difficult with conventional numerical techniques. Our approach is to solve a cascade system of two second-order PDEs, which resembles the original fourth-order system. This strategy is based on the observation that the generalization of image processing to surfaces entails filtering the surface normals. We solve one PDE for processing the normals and one for refitting the surface to the normals. Furthermore, we implement the associated surface deformations using level sets. Hence, the algorithm can accommodate very complex shapes with arbitrary and changing topologies. This paper gives the mathematical formulation and describes the numerical algorithms. We also show results using range and medical data.
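    For readers unfamiliar with the edge-preserving baseline that the prior generalizes, the sketch below implements classical Perona-Malik anisotropic diffusion on a 2D image, i.e. the image-processing analogue mentioned in the abstract, not the paper's surface-normal method; parameter values are arbitrary and boundary handling is simplified to periodic wrap.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=50, kappa=0.1, dt=0.2):
    """Perona-Malik diffusion: smooth within regions while preserving strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # neighbor differences (periodic wrap via np.roll keeps the example short)
        d_n = np.roll(u, -1, axis=0) - u
        d_s = np.roll(u, 1, axis=0) - u
        d_e = np.roll(u, -1, axis=1) - u
        d_w = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance g(|grad u|) = exp(-(|grad u| / kappa)^2)
        c_n, c_s = np.exp(-(d_n / kappa) ** 2), np.exp(-(d_s / kappa) ** 2)
        c_e, c_w = np.exp(-(d_e / kappa) ** 2), np.exp(-(d_w / kappa) ** 2)
        u += dt * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
    return u

# Toy usage: a noisy step edge survives the smoothing
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)
smoothed = anisotropic_diffusion(img)
```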

    NASA Johnson Space Center Life Sciences Data System

    The Life Sciences Project Division (LSPD) at JSC, which manages human life sciences flight experiments for the NASA Life Sciences Division, augmented its Life Sciences Data System (LSDS) in support of the Spacelab Life Sciences-2 (SLS-2) mission, October 1993. The LSDS is a portable ground system supporting Shuttle, Spacelab, and Mir-based life sciences experiments. The LSDS supports acquisition, processing, display, and storage of real-time experiment telemetry in a workstation environment. The system may acquire digital or analog data, storing the data in experiment packet format. Data packets from any acquisition source are archived and meta-parameters are derived through the application of mathematical and logical operators. Parameters may be displayed in text and/or graphical form, or output to analog devices. Experiment data packets may be retransmitted through the network interface, and database applications may be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control, and the LSDS system can be integrated with other workstations to perform a variety of functions. The generic capabilities, adaptability, and ease of use make the LSDS a cost-effective solution to many experiment data processing requirements. The same system is used for experiment systems functional and integration tests, flight crew training sessions, and mission simulations. In addition, the system has provided the infrastructure for the development of the JSC Life Sciences Data Archive System scheduled for completion in December 1994.

    Efficient Online Scheduling in Distributed Stream Data Processing Systems

    General-purpose Distributed Stream Data Processing Systems (DSDPSs) have attracted extensive attention from industry and academia in recent years. They are capable of processing unbounded big streams of continuous data in a distributed and real (or near-real) time manner. A fundamental problem in a DSDPS is the scheduling problem, i.e., assigning threads (carrying workload) to workers/machines with the objective of minimizing the average end-to-end tuple processing time (or simply tuple processing time). A widely used solution is to distribute workload over the machines in the cluster in a round-robin manner, which is clearly not efficient due to the lack of consideration for communication delay among processes/machines. A scheduling solution has a significant impact on the average tuple processing time, but the relationship between the two is very subtle and complicated; it does not even seem possible to formulate a mathematical program for the scheduling problem whose objective directly minimizes the average tuple processing time.

    In this dissertation, we first propose a model-based approach that accurately models the correlation between a scheduling solution and its objective value (i.e., average tuple processing time) according to the topology of the application graph and runtime statistics. A predictive scheduling algorithm is then presented, which assigns tasks (threads) to machines under the guidance of the proposed model. This approach achieves an average of 24.9% improvement over Storm's default scheduler.

    However, the model-based approach still has its limitations: the model may not be able to fully capture the features of a DSDPS; prediction may not be accurate enough; and a large amount of high-dimensional data may lead to high overhead. To address these limitations, we develop a model-free approach that can learn to control a DSDPS from its experience rather than relying on accurate and mathematically solvable system models, just as a human learns a skill (such as cooking, driving, or swimming). The recent breakthrough of Deep Reinforcement Learning (DRL) provides a promising approach for enabling effective model-free control. The proposed DRL-based model-free approach minimizes the average end-to-end tuple processing time by jointly learning the system environment from very limited runtime statistics and making decisions under the guidance of powerful Deep Neural Networks (DNNs). This approach achieves great performance improvement over the current practice and the state-of-the-art model-based approach.

    Moreover, there is still room for improvement in the above model-free approach: as in most existing methods, a user specifies the number of threads for an application in advance, without knowing much about runtime needs, and that number remains unchanged during runtime. This could severely affect the performance of a DSDPS. Therefore, we further develop another model-free DRL approach, EXTRA, which enables the dynamic use of a variable number of threads at runtime. Extensive experimental results show that, by adding this new feature, EXTRA achieves further performance improvement and greater flexibility in scheduling.
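    To make the scheduling problem concrete, the sketch below contrasts the round-robin baseline with a placement that accounts for inter-thread traffic and an assumed communication delay matrix; it is only an illustration of the problem setup, not the dissertation's predictive model or its DRL agent, and all numbers are invented.

```python
import itertools

# Hypothetical setup: 4 threads, traffic[i][j] = tuples/s exchanged between threads i and j,
# delay[a][b] = communication delay (ms) between machines a and b (0 on the same machine).
traffic = [[0, 50, 0, 10],
           [50, 0, 40, 0],
           [0, 40, 0, 30],
           [10, 0, 30, 0]]
delay = [[0.0, 2.0],
         [2.0, 0.0]]
machines = [0, 1]
CAP = 2  # hypothetical capacity: at most 2 threads per machine

def comm_cost(assign):
    """Total traffic-weighted communication delay for a thread -> machine assignment."""
    return sum(traffic[i][j] * delay[assign[i]][assign[j]]
               for i in range(len(assign)) for j in range(i + 1, len(assign)))

def feasible(assign):
    return all(list(assign).count(m) <= CAP for m in machines)

# Round-robin baseline: thread k goes to machine k mod M
rr = [k % len(machines) for k in range(len(traffic))]

# Brute-force search over feasible assignments (stand-in for a smarter scheduler on toy sizes)
best = min((a for a in itertools.product(machines, repeat=len(traffic)) if feasible(a)),
           key=comm_cost)

print("round-robin cost:", comm_cost(rr))
print("best cost:", comm_cost(best), "assignment:", best)
```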

    Big data, modeling, simulation, computational platform and holistic approaches for the fourth industrial revolution

    Naturally, the mathematical process starts from proving the existence and uniqueness of the solution by means of theorems, corollaries, lemmas and propositions, dealing with simple, non-complex models. Such existence and uniqueness proofs govern the infinite set of possible solutions and are limited to the implementation of small-scale simulations on a single desktop CPU, where accuracy, consistency and stability are easily controlled at a small data scale. In the fourth industrial revolution, however, the mathematical process can be described as the advent of cyber-physical systems involving entirely new capabilities for researchers and machines (Xing, 2017). From a numerical perspective, the fourth industrial revolution (4iR) requires the transition from uncomplicated models and small-scale simulation to complex models and big data in order to visualize real-world applications as digital, dialectical and exciting opportunities. Big data analytics and its classification therefore offer a way past these limitations. Applications of 4iR extend the requirements in terms of models, derivatives and discretization, the dimensions of space and time, the behavior of initial and boundary conditions, grid generation, data extraction, numerical methods, and image processing with high-resolution features. In statistics, big data is characterized by data growth; from a numerical perspective, however, a few classification strategies are investigated together with specific classifier tools. This paper investigates a conceptual framework for big data classification that governs the mathematical modeling, selects a suitable numerical method, handles large sparse simulations and investigates parallel computing on high performance computing (HPC) platforms. The conceptual framework helps big data providers, algorithm providers and system analyzers to classify and recommend specific strategies for generating, handling and analyzing big data. All of these perspectives take a holistic view of technology, and in the current research the conceptual framework is described in holistic terms: 4iR has the ability to take a holistic approach to explaining the importance of big data, complex modeling, large sparse simulation and high performance computing platforms. Numerical analysis and parallel performance evaluation are the indicators used to investigate the performance of the classification strategies. This research helps to obtain accurate decisions, predictions and trending practice on how to obtain approximate solutions for science and engineering applications. In conclusion, the classification strategies support generating fine granular meshes and identifying the root causes of failures and issues in real-time solutions. Furthermore, big data-driven and data-transfer evolution towards high-speed technology transfer can boost economic and social development in the 4iR (Xing, 2017; Marwala et al., 2017).
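    As a small, self-contained illustration of the "large sparse simulation" step in the framework (not taken from the paper), the sketch below assembles a sparse 2D Laplacian from a finite-difference discretization and solves it with an iterative method; the grid size and right-hand side are arbitrary.

```python
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import cg

# 2D Poisson problem discretized on an n x n grid: a typical large, sparse system A u = b
n = 200
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
T = diags([off, main, off], offsets=[-1, 0, 1])     # 1D Laplacian (Dirichlet boundaries)
I = identity(n)
A = (kron(I, T) + kron(T, I)).tocsr()               # 2D Laplacian via Kronecker sums

b = np.ones(n * n)                                  # arbitrary right-hand side
u, info = cg(A, b)                                  # conjugate gradient (A is symmetric positive definite)
print("converged" if info == 0 else f"cg returned info={info}",
      "| residual norm:", np.linalg.norm(A @ u - b))
```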