
    On sequential and parallel solution of initial value problems

    We deal with the solution of systems z′(x) = f(x, z(x)), x ∈ [0, 1], z(0) = η, where the function f : [0, 1] × R^s → R^s has r continuous bounded partial derivatives. We assume that the available information about the problem consists of evaluations of n linear functionals at f. If an adaptive choice of these functionals is allowed (which suits sequential processing), then the minimal error of an algorithm is of order n^{-(r+1)}, for any dimension s. We show that if nonadaptive information (well suited for parallel computation) is used, then the minimal error cannot be essentially less than n^{-(r+1)/(s+1)}. Thus, adaption is significantly better, and its advantage grows with s. It follows that the ε-complexity in sequential computation is smaller for adaptive information. For parallel computation, nonadaptive information is more efficient only if the number of processors is very large, depending exponentially on the dimension s. We conclude that exploiting parallelism by computing the information nonadaptively is not feasible.
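    A minimal sketch (not the paper's algorithm) of the two information patterns the abstract contrasts, on a scalar test problem: an adaptive Euler method, whose evaluation points depend sequentially on earlier results, versus Picard iteration on a fixed grid, where all n evaluations within a sweep use only the previous iterate and are therefore mutually independent and parallelizable. The function and step counts are illustrative assumptions.

        def euler_adaptive(f, eta, n):
            # Adaptive/sequential: each evaluation point (x_i, z_i) depends on
            # all previous evaluations, so the n calls to f cannot run in parallel.
            h, z = 1.0 / n, eta
            for i in range(n):
                z = z + h * f(i * h, z)
            return z

        def picard_parallel(f, eta, n, sweeps=8):
            # Nonadaptive within each sweep: all n+1 evaluations of f use the
            # previous iterate, so they could run on n+1 processors at once;
            # adaptivity is traded for repeated sweeps.
            xs = [i / n for i in range(n + 1)]
            z_prev = [eta] * (n + 1)
            for _ in range(sweeps):
                vals = [f(x, zp) for x, zp in zip(xs, z_prev)]  # parallelizable
                z_new, acc = [eta], eta
                for i in range(n):
                    acc += 0.5 * (vals[i] + vals[i + 1]) / n    # trapezoid rule
                    z_new.append(acc)
                z_prev = z_new
            return z_prev[-1]

        # Example: z' = z, z(0) = 1, exact z(1) = e ~ 2.71828.
        print(euler_adaptive(lambda x, z: z, 1.0, 1000))
        print(picard_parallel(lambda x, z: z, 1.0, 1000))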

    Efficient Optimization of Performance Measures by Classifier Adaptation

    In practical applications, machine learning algorithms often need to learn classifiers that optimize domain-specific performance measures. Previous research has focused on learning the needed classifier in isolation, yet learning a nonlinear classifier for nonlinear and nonsmooth performance measures remains hard. In this paper, rather than learning the needed classifier by optimizing the specific performance measure directly, we circumvent this problem with a novel two-step approach called CAPO: first train nonlinear auxiliary classifiers with existing learning methods, and then adapt the auxiliary classifiers for the specific performance measure. In the first step, auxiliary classifiers can be obtained efficiently with off-the-shelf learning algorithms. For the second step, we show that the classifier adaptation problem can be reduced to a quadratic programming problem, which is similar to linear SVMperf and can be solved efficiently. By exploiting nonlinear auxiliary classifiers, CAPO can generate nonlinear classifiers that optimize a large variety of performance measures, including all performance measures based on the contingency table as well as AUC, while keeping high computational efficiency. Empirical studies show that CAPO is effective and computationally efficient, and it is even more efficient than linear SVMperf.
    Comment: 30 pages, 5 figures, to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence, 201
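    A simplified sketch of the two-step structure described above. It is not the paper's method: the real second step solves an SVMperf-style QP for the chosen performance measure, whereas here a plain linear SVM trained on the auxiliary scores stands in for that adaptation step, and the dataset and auxiliary learners are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC, LinearSVC
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Step 1: train nonlinear auxiliary classifiers with off-the-shelf methods.
        aux = [RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
               SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)]
        scores = lambda Z: np.column_stack([c.predict_proba(Z)[:, 1] for c in aux])

        # Step 2 (stand-in): adapt by learning a linear model over the auxiliary
        # scores; the paper reduces this step to a QP akin to linear SVMperf.
        adapter = LinearSVC().fit(scores(X_tr), y_tr)
        print("F1:", f1_score(y_te, adapter.predict(scores(X_te))))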

    A survey of information-based complexity

    We survey some recent results in information-based complexity. We focus on the worst case setting and also indicate some average case results.

    Heterogeneous thin films: Combining homogenization and dimension reduction with directors

    We analyze the asymptotic behavior of a multiscale problem given by a sequence of integral functionals subject to differential constraints conveyed by a constant-rank operator with two characteristic length scales, namely the film thickness and the period of the oscillating microstructures, by means of Γ-convergence. On a technical level, this requires a subtle merging of homogenization tools, such as multiscale convergence methods, with dimension reduction techniques for functionals subject to differential constraints. One observes that the results depend critically on the relative magnitude of the two scales. Interestingly, this even concerns the fundamental question of locality of the limit model and, in particular, leads to new findings also in the gradient case.
    Comment: 28 pages
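    For reference, the standard sequential characterization of Γ-convergence that results of this type rely on (notation assumed, not taken from the paper): F_ε Γ-converges to F if and only if

        % liminf inequality: no sequence can do asymptotically better than F
        \forall\, u_\varepsilon \to u:\quad
            F(u) \le \liminf_{\varepsilon \to 0} F_\varepsilon(u_\varepsilon),
        % recovery sequence: the bound F is attained in the limit
        \forall\, u\ \exists\, u_\varepsilon \to u:\quad
            F(u) \ge \limsup_{\varepsilon \to 0} F_\varepsilon(u_\varepsilon).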

    RIACS

    Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock-capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; and the supercomputer consolidation project.

    Multi-Carrier NOMA-Empowered Wireless Federated Learning with Optimal Power and Bandwidth Allocation

    Wireless federated learning (WFL) suffers from a communication bottleneck in the uplink, limiting the number of users that can upload their local models in each global aggregation round. This paper presents a new multi-carrier non-orthogonal multiple-access (MC-NOMA)-empowered WFL system under an adaptive learning setting of Flexible Aggregation. Since a WFL round accommodates both local model training and uploading for each user, Flexible Aggregation allows the users to train different numbers of iterations per round, adapting to their channel conditions and computing resources. The key idea is to use MC-NOMA to upload the users' local models concurrently, thereby extending the local model training times and increasing the number of participating users. A new metric, the Weighted Global Proportion of Trained Mini-batches (WGPTM), is analytically established to measure the convergence of the new system. We maximize the WGPTM to harness the convergence of the new system by jointly optimizing the transmit powers and subchannel bandwidths. This nonconvex problem is converted equivalently into a tractable convex problem and solved efficiently using variable substitution and Cauchy's inequality. As corroborated experimentally using a convolutional neural network and an 18-layer residual network, the proposed MC-NOMA WFL can efficiently reduce communication delay, increase local model training times, and accelerate convergence by over 40% compared with its existing alternative.
    Comment: 33 pages, 16 figures
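    A hedged sketch of the kind of joint power-and-bandwidth allocation the abstract describes, not the paper's WGPTM formulation: a concave sum-rate proxy is maximized as a convex program, with the per-user rate b·log(1 + g·p/(b·n0)) expressed through cvxpy's relative-entropy atom (the perspective trick). The channel gains, weights, and budgets are illustrative assumptions.

        import numpy as np
        import cvxpy as cp

        K, P, B, n0 = 4, 1.0, 1.0, 0.1        # users, power/bandwidth budgets, noise density
        g = np.array([1.0, 0.6, 0.3, 0.2])    # assumed channel gains
        w = np.ones(K)                        # assumed aggregation weights

        p = cp.Variable(K, nonneg=True)       # transmit powers
        b = cp.Variable(K, nonneg=True)       # subchannel bandwidths
        # b_k * log(1 + g_k p_k / (b_k n0)) via -rel_entr(b, b + g p / n0),
        # which is jointly concave in (b, p).
        rate = -cp.rel_entr(b, b + cp.multiply(g, p) / n0)
        prob = cp.Problem(cp.Maximize(w @ rate),
                          [cp.sum(p) <= P, cp.sum(b) <= B])
        prob.solve()
        print("powers:", p.value, "bandwidths:", b.value)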