702 research outputs found
Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs
The study of graph products is a major research topic and typically concerns
the term f(G * H), e.g., to show that f(G * H) = f(G) f(H). In this paper, we
study graph products in a non-standard form f(R[G * H]), where R is a
"reduction", a transformation of any graph into an instance of an intended
optimization problem. We resolve some open problems as applications.
(1) A tight n^(1 - epsilon)-approximation hardness for the minimum
consistent deterministic finite automaton (DFA) problem, where n is the
sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this
implies the hardness of properly learning DFAs assuming NP != RP (the
weakest possible assumption).
(2) A tight n^(1/2 - epsilon) hardness for the edge-disjoint paths (EDP)
problem on directed acyclic graphs (DAGs), where n denotes the number of
vertices.
(3) A tight hardness of packing vertex-disjoint k-cycles for large k.
(4) An alternative (and perhaps simpler) proof for the hardness of properly
learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004
and J. Comput. Syst. Sci. 2008].
Parallel language constructs for tensor product computations on loosely coupled architectures
Distributed memory architectures offer high levels of performance and flexibility but have proven awkward to program. Current languages for nonshared-memory architectures provide a relatively low-level programming environment and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first, and it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.
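The kernel-then-compose structure described above can be sketched in ordinary Python with NumPy (a hypothetical illustration, not the paper's proposed language primitives): a serial Thomas solver plays the role of the 1-D kernel, and applying it along every axis of an array yields a tensor-product algorithm, here solving the Kronecker-product system (T ⊗ T) x = d for a symmetric tridiagonal T.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a, b, c and right-hand side d via the Thomas algorithm (1-D kernel)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tensor_product_apply(kernel, rhs):
    """Apply a 1-D kernel along every axis of a multi-dimensional array:
    the tensor-product composition the abstract describes."""
    out = rhs.copy()
    for axis in range(out.ndim):
        out = np.apply_along_axis(kernel, axis, out)
    return out
```

In the language primitives the paper proposes, the per-axis application would be distributed across processors; the sequential sketch only shows the algebraic structure being composed.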
A Stochastic Complexity Perspective of Induction in Economics and Inference in Dynamics
Rissanen's fertile and pioneering minimum description length principle (MDL) has been viewed from the standpoints of statistical estimation theory and information theory, as stochastic complexity theory (i.e., a computable approximation to Kolmogorov complexity), as Solomonoff's recursion-theoretic induction principle, or as analogous to Kolmogorov's sufficient statistics. All these interpretations, and many more, are valid, interesting and fertile. In this paper I view it from two points of view: those of an algorithmic economist and a dynamical system theorist. From these points of view I suggest, first, a recasting of Jevons's sceptical vision of induction in the light of MDL, and, second, a complexity interpretation of an undecidable question in dynamics.
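As a concrete (and deliberately crude) illustration of the MDL idea discussed above, the following sketch scores polynomial models of data by a two-part code length, parameter cost plus Gaussian residual cost, and selects the degree minimizing the total. The scoring formula is a standard textbook approximation, not anything from this paper.

```python
import numpy as np

def mdl_score(x, y, degree):
    """Crude two-part MDL score in bits: L(model) + L(data | model).
    L(model) uses the usual (k/2) log2 n precision argument;
    L(data | model) is the Gaussian code length of the residuals."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    rss = max(float(np.sum((np.polyval(coeffs, x) - y) ** 2)), 1e-12)
    k = degree + 1                     # number of fitted parameters
    return 0.5 * k * np.log2(n) + 0.5 * n * np.log2(rss / n)

def select_degree(x, y, max_degree=6):
    """Induction by MDL: prefer the model with the shortest total description."""
    scores = [mdl_score(x, y, d) for d in range(max_degree + 1)]
    return int(np.argmin(scores))
```

Minimizing a two-part code length trades goodness of fit against model complexity, which is the sceptical, anti-overfitting stance the abstract connects to Jevons's view of induction.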
Parallelisation of algorithms
Most numerical software involves performing an extremely large volume of algebraic computations. This is both costly and time-consuming in terms of computer resources, and for large problems supercomputer power is often required for results to be obtained in a reasonable amount of time. One method whereby both the cost and time can be reduced is to use the principle "Many hands make light work": allow several computers to operate simultaneously on the code, working towards a common goal, and thereby obtain the required results in a fraction of the time and cost normally used. This can be achieved by modifying the costly, time-consuming code, breaking it up into separate individual code segments which may be executed concurrently on different processors. This is termed parallelisation of code. This document describes communication between sequential processes, protocols, message routing and parallelisation of algorithms. In particular, it deals with these aspects with reference to the Transputer as developed by INMOS and includes two parallelisation examples, namely parallelisation of code to study airflow and of code to determine far-field patterns of antennas. This document also reports on practical experiences with programming in parallel.
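"Many hands make light work" in its simplest form: split the data into independent chunks, hand each chunk to a separate worker process, and combine the partial results. The sketch below uses Python's multiprocessing module rather than transputers and Occam; the function names are illustrative.

```python
from multiprocessing import Pool

def expensive_step(chunk):
    """Stand-in for a costly algebraic computation on one code segment."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    """Break the work into independent segments, run them concurrently
    on separate processes, then combine the partial results."""
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(expensive_step, chunks)
    return sum(partials)
```

The decomposition only pays off when the segments are genuinely independent; the communication, protocol and routing issues the document goes on to discuss arise precisely when they are not.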
Effective interprocess communication (IPC) in a real-time transputer network
The thesis describes the design and implementation of an interprocess communication (IPC)
mechanism within a real-time distributed operating system kernel (RT-DOS) which is
designed for a transputer-based network. The requirements of real-time operating systems
are examined and existing design and implementation strategies are described. Particular
attention is paid to one of the object-oriented techniques although it is concluded that these
techniques are not feasible for the chosen implementation platform. Studies of a number of
existing operating systems are reported. The choices for various aspects of operating system
design and their influence on the IPC mechanism to be used are elucidated. The actual design
choices are related to the real-time requirements and the implementation that has been
adopted is described. [Continues.]
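The basic shape of such an IPC mechanism, a mailbox with a bounded-wait (timeout) receive, which a real-time kernel must provide so that no task blocks indefinitely, can be sketched with Python threads and a FIFO queue. This is an illustrative analogue, not RT-DOS code, and the task names are invented.

```python
import queue
import threading

def sensor_task(mailbox):
    """Producer task: posts messages to the kernel-managed mailbox."""
    for i in range(3):
        mailbox.put(("sensor", i))

def control_task(mailbox, results, deadline=0.5):
    """Consumer task: blocking receive with a timeout, the primitive a
    real-time IPC mechanism must bound so deadlines can be guaranteed."""
    while True:
        try:
            msg = mailbox.get(timeout=deadline)
        except queue.Empty:     # no message within the deadline: give up
            break
        results.append(msg)
```

A real transputer implementation would map the mailbox onto hardware channels and the timeout onto the kernel's timer services, but the interface seen by the communicating processes is the same send/receive pair.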
Probing the neutrino mass hierarchy with CMB weak lensing
We forecast constraints on cosmological parameters with primary CMB
anisotropy information and weak lensing reconstruction with a future
post-Planck CMB experiment, the Cosmic Origins Explorer (COrE), using
oscillation data on the neutrino mass splittings as prior information. Our MCMC
simulations in flat models with a non-evolving equation-of-state of dark energy
w give typical 68% upper bounds on the total neutrino mass of 0.136 eV and
0.098 eV for the inverted and normal hierarchies respectively, assuming the
total summed mass is close to the minimum allowed by the oscillation data for
the respective hierarchies (0.10 eV and 0.06 eV). Including information from
future baryon acoustic oscillation measurements with the complete BOSS, Type 1a
supernovae distance moduli from WFIRST, and a realistic prior on the Hubble
constant, these upper limits shrink to 0.118 eV and 0.080 eV for the inverted
and normal hierarchies, respectively. Addition of these distance priors also
yields percent-level constraints on w. We find tension between our MCMC results
and the results of a Fisher matrix analysis, most likely due to a strong
geometric degeneracy between the total neutrino mass, the Hubble constant, and
w in the unlensed CMB power spectra. If the minimal-mass, normal hierarchy were
realised in nature, the inverted hierarchy should be disfavoured by the full
data combination at typically greater than the 2-sigma level. For the
minimal-mass inverted hierarchy, we compute the Bayes factor between the two
hierarchies for various combinations of our forecast datasets, and find that
the future probes considered here should be able to provide `strong' evidence
(odds ratio 12:1) for the inverted hierarchy. Finally, we consider potential
biases of the other cosmological parameters from assuming the wrong hierarchy
and find that all biases on the parameters are below their 1-sigma marginalised
errors.
Comment: 16 pages, 13 figures; minor changes to match the published version, references added.
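The minimal summed masses quoted for the two hierarchies (0.06 eV and 0.10 eV) follow directly from the oscillation splittings by setting the lightest mass eigenstate to zero. A small sketch, using representative splitting values rather than the priors actually adopted in the paper:

```python
import numpy as np

# Representative oscillation splittings in eV^2 (indicative values,
# not taken from the paper).
DM21_SQ = 7.5e-5   # solar splitting
DM31_SQ = 2.5e-3   # atmospheric splitting (magnitude)

def minimal_summed_mass(hierarchy):
    """Minimum total neutrino mass consistent with the splittings,
    obtained by setting the lightest state to zero mass."""
    if hierarchy == "normal":      # m1 = 0 < m2 < m3
        return np.sqrt(DM21_SQ) + np.sqrt(DM31_SQ)
    if hierarchy == "inverted":    # m3 = 0 < m1 < m2
        return np.sqrt(DM31_SQ) + np.sqrt(DM31_SQ + DM21_SQ)
    raise ValueError(hierarchy)
```

This is why the inverted hierarchy has the larger floor (about 0.10 eV versus 0.06 eV), and hence why an upper bound below roughly 0.10 eV begins to disfavour it, as the abstract's 2-sigma statement reflects.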
Hypersonic Research Vehicle (HRV) real-time flight test support feasibility and requirements study. Part 2: Remote computation support for flight systems functions
The requirements are assessed for the use of remote computation to support HRV flight testing. First, remote computational requirements were developed to support functions that will eventually be performed onboard operational vehicles of this type. These functions which either cannot be performed onboard in the time frame of initial HRV flight test programs because the technology of airborne computers will not be sufficiently advanced to support the computational loads required, or it is not desirable to perform the functions onboard in the flight test program for other reasons. Second, remote computational support either required or highly desirable to conduct flight testing itself was addressed. The use is proposed of an Automated Flight Management System which is described in conceptual detail. Third, autonomous operations is discussed and finally, unmanned operations
Fast solution of Cahn-Hilliard variational inequalities using implicit time discretization and finite elements
We consider the efficient solution of the Cahn-Hilliard variational inequality using an implicit time discretization, which is formulated as an optimal control problem with pointwise constraints on the control. By applying a semi-smooth Newton method combined with a Moreau-Yosida regularization technique for handling the control constraints we show superlinear convergence in function space. At the heart of this method lies the solution of large and sparse linear systems, for which we propose the use of preconditioned Krylov subspace solvers using an effective Schur complement approximation. Numerical results illustrate the competitiveness of this approach.
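The Moreau-Yosida plus semi-smooth Newton combination can be illustrated on a toy finite-dimensional analogue: a bound-constrained quadratic programme whose constraint is penalized and then attacked with Newton steps built from the generalized derivative of max(0, ·). This dense NumPy sketch stands in for the paper's function-space method; it omits the finite elements, the optimal-control structure and the preconditioned Krylov solvers entirely.

```python
import numpy as np

def semismooth_newton(A, b, psi, gamma=1e6, tol=1e-8, max_iter=50):
    """Semi-smooth Newton for the Moreau-Yosida penalized obstacle problem
    min 0.5 x'Ax - b'x  subject to  x <= psi, i.e. find a root of
    F(x) = Ax - b + gamma * max(0, x - psi)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        F = A @ x - b + gamma * np.maximum(0.0, x - psi)
        if np.linalg.norm(F) < tol:
            break
        # Generalized (Newton) derivative of max(0, .): indicator of the
        # active set, entering the Jacobian as a diagonal penalty term.
        active = (x - psi) > 0
        J = A + gamma * np.diag(active.astype(float))
        x = x - np.linalg.solve(J, F)
    return x
```

In the paper's setting the Newton systems are large and sparse, which is exactly where the proposed Schur complement preconditioning replaces the dense direct solve used here.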