Strategies for automatic planning: A collection of ideas
The main goal of the Jet Propulsion Laboratory (JPL) is to obtain science return from interplanetary probes. The uplink process is concerned with communicating commands to a spacecraft in order to achieve science objectives. There are two main parts to the development of the command file that is sent to a spacecraft. First, the activity planning process integrates the science requests for utilization of spacecraft time into a feasible sequence; then the command generation process converts the sequence into a set of commands. The development of a feasible sequence plan is an expensive and labor-intensive process requiring many months of effort, so automation of parts of the uplink process is desired in order to save time and manpower. There is an ongoing effort to develop automatic planning systems. This has met with some success, but has also been informative about the nature of the task: it is now clear that innovative techniques and state-of-the-art technology will be required to produce a system that can provide automatic sequence planning. As part of this effort, a survey of the literature was conducted, looking for known techniques that may be applicable to this work. Descriptions of and references for these methods are given, together with ideas for applying the techniques to automatic planning.
Discrete tomography and joint inversion for loosely connected or unconnected physical properties: application to crosshole seismic and georadar data sets
Tomographic inversions of geophysical data generally include an underdetermined component. To compensate for this shortcoming, assumptions or a priori knowledge need to be incorporated in the inversion process. A possible option for a broad class of problems is to restrict the range of values within which the unknown model parameters must lie. Typical examples of such problems include cavity detection or the delineation of isolated ore bodies in the subsurface. In cavity detection, the physical properties of the cavity can be narrowed down to those of air and/or water, and the physical properties of the host rock either are known to within a narrow band of values or can be established from simple experiments. Discrete tomography techniques allow such information to be included as constraints on the inversions. We have developed a discrete tomography method that is based on mixed-integer linear programming. An important feature of our method is the ability to invert jointly different types of data, for which the key physical properties are only loosely connected or unconnected. Joint inversions reduce the ambiguity in tomographic studies. The performance of our new algorithm is demonstrated on several synthetic data sets. In particular, we show how the complementary nature of seismic and georadar data can be exploited to locate air- or water-filled cavities.
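To make the approach concrete, here is a minimal sketch of a discrete travel-time tomography problem posed as a mixed-integer linear program, in the spirit of the abstract. It uses Python with the PuLP modeling library; the straight-ray geometry, the two candidate slownesses (host rock vs. air-filled cavity), and all names and values are illustrative assumptions, not details from the paper. A joint inversion would add a second, georadar-based misfit term sharing the same binary selectors.

```python
import pulp
import numpy as np

n_cells, n_rays = 9, 12
values = [0.25e-3, 2.9e-3]    # candidate slownesses in s/m: host rock vs. air (assumed)
rng = np.random.default_rng(0)
L = rng.uniform(0.0, 2.0, (n_rays, n_cells))   # ray-path length per cell (toy geometry)
t_obs = L @ np.full(n_cells, values[0])        # synthetic "observed" travel times

prob = pulp.LpProblem("discrete_tomography", pulp.LpMinimize)
# z[i][k] == 1 iff cell i takes discrete slowness values[k]
z = [[pulp.LpVariable(f"z_{i}_{k}", cat="Binary") for k in range(len(values))]
     for i in range(n_cells)]
# auxiliary variables for the L1 travel-time misfit
r = [pulp.LpVariable(f"r_{j}", lowBound=0) for j in range(n_rays)]

for i in range(n_cells):                       # each cell picks exactly one value
    prob += pulp.lpSum(z[i]) == 1
for j in range(n_rays):                        # linearize |t_pred - t_obs| <= r_j
    t_pred = pulp.lpSum(L[j, i] * values[k] * z[i][k]
                        for i in range(n_cells) for k in range(len(values)))
    prob += t_pred - t_obs[j] <= r[j]
    prob += t_obs[j] - t_pred <= r[j]

prob += pulp.lpSum(r)                          # objective: total misfit
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([max(range(len(values)), key=lambda k: z[i][k].value()) for i in range(n_cells)])
```

Because every cell is forced onto one of the few admissible values, the inversion returns a model that is physically plausible by construction, which is the point of the discrete formulation.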
A Framework for Megascale Agent Based Model Simulations on Graphics Processing Units
Agent-based modeling is a technique for modeling dynamic systems from the bottom up. Individual elements of the system are represented computationally as agents. The system-level behaviors emerge from the micro-level interactions of the agents. Contemporary state-of-the-art agent-based modeling toolkits are essentially discrete-event simulators designed to execute serially on the Central Processing Unit (CPU). They simulate Agent-Based Models (ABMs) by executing agent actions one at a time. In addition to imposing an unnatural execution order, these toolkits have limited scalability. In this article, we investigate data-parallel computer architectures such as Graphics Processing Units (GPUs) to simulate large-scale ABMs. We have developed a series of efficient, data-parallel algorithms for handling environment updates, various agent interactions, agent death and replication, and gathering statistics. We present three fundamental innovations that provide unprecedented scalability. The first is a novel stochastic memory allocator which enables parallel agent replication in O(1) average time. The second is a technique for resolving precedence constraints for agent actions in parallel. The third is a method that uses specialized graphics hardware to gather and process statistical measures. These techniques have been implemented on a modern-day GPU, resulting in a substantial performance increase. We believe that our system is the first completely GPU-based agent simulation framework. Although GPUs are the focus of our current implementations, our techniques can easily be adapted to other data-parallel architectures. We have benchmarked our framework against contemporary toolkits using two popular ABMs, namely SugarScape and StupidModel.
Keywords: GPGPU, Agent-Based Modeling, Data-Parallel Algorithms, Stochastic Simulations
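The flavor of the data-parallel agent operations can be sketched in a few lines. The NumPy version below is only a CPU stand-in for the GPU kernels the article describes: the stochastic allocator is approximated by a single round of random probing (on a GPU each probe would be an atomic compare-and-swap), and every name, threshold, and parameter is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                  # fixed-size agent pool (GPU-friendly layout)
alive = rng.random(N) < 0.5             # which slots hold live agents
energy = rng.uniform(0.0, 10.0, N)      # per-agent state, one array per field

# 1. Parallel death: agents below an energy threshold die (a pure map).
alive &= energy > 1.0

# 2. Parallel replication via stochastic allocation: each replicating
#    agent probes one random slot and claims it only if the slot is free
#    and no other parent probed it first; losers simply retry on the
#    next step, which keeps the average cost O(1) while the pool is
#    not too full.
parents = np.flatnonzero(alive & (energy > 8.0))
probes = rng.integers(0, N, parents.size)
free = ~alive[probes]                              # probed slot must be dead
uniq_idx = np.unique(probes, return_index=True)[1] # first prober of each slot wins
winners = parents[uniq_idx][free[uniq_idx]]
slots = probes[uniq_idx][free[uniq_idx]]
alive[slots] = True
energy[slots] = energy[winners] / 2.0              # child gets half the energy
energy[winners] /= 2.0                             # parent keeps the other half
print(int(alive.sum()), "agents alive after one step")
```

Keeping the pool as fixed-size parallel arrays, rather than a dynamic list of agent objects, is what makes every step above a map or gather that a GPU can execute across thousands of threads.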
Optimization Modulo Theories with Linear Rational Costs
In the contexts of automated reasoning (AR) and formal verification (FV), important decision problems are effectively encoded into Satisfiability Modulo Theories (SMT). In the last decade, efficient SMT solvers have been developed for several theories of practical interest (e.g., linear arithmetic, arrays, bit-vectors). Surprisingly, little work has been done to extend SMT to deal with optimization problems; in particular, we are not aware of any previous work on SMT solvers able to produce solutions which minimize cost functions over arithmetical variables. This is unfortunate, since some problems of interest require this functionality. In the work described in this paper we start filling this gap. We present and discuss two general procedures for leveraging SMT to handle the minimization of linear rational cost functions, combining SMT with standard minimization techniques. We have implemented the procedures within the MathSAT SMT solver. Due to the absence of competitors in the AR, FV and SMT domains, we have experimentally evaluated our implementation against state-of-the-art tools for the domain of linear generalized disjunctive programming (LGDP), which is closest in spirit to our domain, on sets of problems which have been previously proposed as benchmarks for the latter tools. The results show that our tool is very competitive with, and often outperforms, these tools on these problems, clearly demonstrating the potential of the approach.
Comment: Submitted in January 2014 to ACM Transactions on Computational Logic; currently under revision.
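As a flavor of what such a procedure looks like, the sketch below runs one of the standard strategies the abstract alludes to, a binary search on the cost bound, on top of an off-the-shelf SMT solver. Z3's Python API stands in for MathSAT here, the toy constraints are invented, and the fixed stopping tolerance is a simplification: the paper's procedures compute exact rational optima.

```python
from fractions import Fraction
from z3 import Solver, Real, sat

x, y = Real("x"), Real("y")
cost = x + 2 * y                         # linear rational cost function
constraints = [x + y >= 3, x >= 0, y >= 0, x <= 10, y <= 10]

lo, hi = Fraction(0), Fraction(30)       # a priori bounds on the cost
while hi - lo > Fraction(1, 1000):
    mid = (lo + hi) / 2
    s = Solver()
    s.add(constraints)
    s.add(cost <= float(mid))            # can we do at least this well?
    if s.check() == sat:
        hi = mid                         # yes: tighten the upper bound
    else:
        lo = mid                         # no: the optimum lies above mid
print("minimum cost is in", (float(lo), float(hi)))   # brackets the optimum, 3
```

Each iteration is an ordinary SMT query, which is exactly the sense in which these procedures "leverage SMT": the minimization logic sits entirely outside the solver.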
Electrical capacitance tomography for flow imaging: System model for development of image reconstruction algorithms and design of primary sensors
A software tool that facilitates the development of image reconstruction algorithms and the design of optimal capacitance sensors for a capacitance-based 12-electrode tomographic flow imaging system is described. The core of this software tool is the finite element (FE) model of the sensor, which is implemented in the OCCAM-2 language and runs on Inmos T800 transputers. Using the system model, in-depth study of the capacitance sensing fields and generation of flow model data are made possible, which assists, in a systematic approach, the design of an improved image-reconstruction algorithm. This algorithm is implemented on a network of transputers to achieve real-time performance. It is found that the selection of the geometric parameters of a 12-electrode sensor has significant effects on the sensitivity distributions of the capacitance fields and on the linearity of the capacitance data; as a consequence, the fidelity of the reconstructed images is affected. Optimal sensor designs can therefore be provided by accommodating these effects.
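For orientation, linear back-projection (LBP) is the classic single-step reconstruction used with 12-electrode capacitance systems of this kind, and a toy version fits in a few lines of Python. The sensitivity maps below are random placeholders for what the FE model would produce, and the synthetic capacitances and grid size are assumptions.

```python
import numpy as np

n_meas = 12 * 11 // 2                 # independent electrode-pair capacitances
n_pix = 32 * 32                       # reconstruction grid (assumed size)
rng = np.random.default_rng(2)

S = rng.random((n_meas, n_pix))       # sensitivity maps; really from the FE model
C_low = rng.uniform(1.0, 2.0, n_meas)            # empty-pipe calibration
C_high = C_low + rng.uniform(0.5, 1.0, n_meas)   # full-pipe calibration
C = C_low + 0.3 * (C_high - C_low)               # synthetic measurement

# Normalize each capacitance between its empty- and full-pipe values.
lam = (C - C_low) / (C_high - C_low)

# LBP: back-project the normalized capacitances through the sensitivity
# maps, normalizing so a uniform distribution yields a uniform image.
g = (S.T @ lam) / (S.T @ np.ones(n_meas))
image = np.clip(g, 0.0, 1.0).reshape(32, 32)
print(image.mean())                   # ~0.3 for this uniform synthetic flow
```

The abstract's point about geometry follows directly from this formulation: the electrode dimensions shape the rows of S, so a poorly chosen geometry yields flat or strongly non-uniform sensitivity maps and correspondingly degraded images.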
Optimal control methodologies for the optimisation of maintenance scheduling and production in processes using decaying catalysts
In this thesis, optimal control methodologies are developed for solving problems involving the optimisation of maintenance scheduling and production in processes using decaying catalysts. Previously, such problems were solved using mixed-integer optimisation techniques, a category of methods that involve making decisions of a discrete as well as a continuous nature. However, these techniques are combinatorial in nature and can handle differential equations only by approximating them as collections of steady-state equality constraints, features that can cause difficulties in obtaining optimal and accurate solutions for these problems. The goal behind developing optimal control methodologies is to solve these problems effectively while overcoming the drawbacks that mixed-integer optimisation techniques face, or would face, in solving them.
First, an optimal control methodology is developed to optimise maintenance scheduling and production in a process containing a reactor using decaying catalysts. This methodology involves using a multistage mixed-integer optimal control problem (MSMIOCP) formulation and obtaining solutions as a standard nonlinear optimisation problem, without using mixed-integer optimisation techniques. Two different solution implementations are required, each of which has its own relative advantages. The methodology using the second procedure is particularly successful in effectively obtaining solutions within the stipulated tolerances. Further, the methodology possesses features of robustness, because it enables a relatively small problem size; reliability, because it solves differential equations using state-of-the-art integrators; and efficiency, because it is not combinatorial in nature. These features indicate the methodology’s success in overcoming the drawbacks of using mixed-integer optimisation techniques to solve this problem.
Next, the abovementioned methodology is extended to form an optimal control methodology to optimise maintenance scheduling and production in a process containing parallel lines of reactors using decaying catalysts. This methodology, when applied to a case study of such a process, is also able to effectively obtain solutions within the stipulated tolerances. Further, the solutions obtained, once again, possess features of robustness, reliability and efficiency, which indicate that the methodology can overcome the drawbacks that mixed-integer optimisation techniques would face, if used to solve such problems.
Lastly, an optimal control methodology is developed for considering uncertainties in kinetic parameters in the optimisation of maintenance scheduling and production in a process containing a reactor using decaying catalysts. The methodology involves using a multiple-scenario approach to consider parametric uncertainties and formulating a stochastic MSMIOCP, which is solved as a standard nonlinear optimisation problem as per the previously developed procedure. The results obtained provide insights into the effects of parametric uncertainties and of the number of scenarios generated on the optimal operations, and indicate that the methodology is capable of solving this problem. Further, the robust, reliable and efficient nature of the results obtained suggests that the methodology can overcome the disadvantages that mixed-integer methods would introduce into the conventional methodologies, if such methodologies were used to solve such problems.
Funding: Cambridge India Ramanujan Scholarship awarded by the Cambridge Trust and the Science and Engineering Research Board of India.
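As a toy illustration of the thesis's central move, treating the discrete maintenance decision as a continuous stage boundary so that the whole problem becomes a standard nonlinear optimisation, the following sketch sizes a single catalyst replacement on a fixed horizon. The first-order decay kinetics, downtime, and horizon are invented numbers, and scipy stands in for the state-of-the-art integrators and NLP solvers the thesis refers to.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

k_d, T, tau = 0.5, 10.0, 1.0     # decay rate, horizon, replacement downtime (assumed)

def stage_production(duration):
    """Integrate production r = a(t) over one stage, with activity a' = -k_d * a."""
    if duration <= 0.0:
        return 0.0
    sol = solve_ivp(lambda t, y: [-k_d * y[0], y[0]],   # y = [activity, production]
                    (0.0, duration), [1.0, 0.0], rtol=1e-8)
    return sol.y[1, -1]

def neg_total_production(t_rep):
    # Stage 1 runs until the replacement; stage 2 restarts with fresh
    # catalyst after the downtime tau and runs to the end of the horizon.
    return -(stage_production(t_rep) + stage_production(T - t_rep - tau))

res = minimize_scalar(neg_total_production, bounds=(0.0, T - tau), method="bounded")
print(f"replace catalyst at t = {res.x:.2f}, total production = {-res.fun:.3f}")
```

Because the replacement time enters as a smooth stage length rather than a binary schedule, no combinatorial search is needed, which is the advantage over mixed-integer formulations that the thesis emphasises.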