
    Evolutionary and variable step size strategies for multichannel filtered-x affine projection algorithms

    This study addresses the need to improve the performance of the affine projection (AP) algorithm for active noise control (ANC) applications. Different strategies recently applied to the AP algorithm for channel identification are proposed here for multichannel ANC, based either on a variable step size, on an evolving projection order, or on a combination of both. The proposed algorithms are evaluated in terms of their steady-state behaviour, their convergence speed and their computational complexity. The developed efficient versions of the AP algorithm use the modified filtered-x structure, which exhibits faster convergence than other filtering schemes. Simulation results show that the proposed approaches outperform the conventional AP algorithm and represent a meaningful choice for practical multichannel ANC applications.

    This work was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI (project PN-II-ID-PCE-2011-3-0097), the Spanish Ministerio de Ciencia e Innovación (TEC2009-13741) and Generalitat Valenciana (PROMETEO 2009/2013).

    Gonzalez, A.; Albu, F.; Ferrer Contreras, M.; Diego Antón, M. D. (2013). Evolutionary and variable step size strategies for multichannel filtered-x affine projection algorithms. IET Signal Processing, 7(6), 471-476. https://doi.org/10.1049/iet-spr.2012.0213
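The conventional AP update that these strategies build on can be sketched in a few lines of numpy. This is a minimal fixed-parameter sketch (the function name and the system-identification setup are illustrative, not from the paper); a variable-step-size variant would adapt `mu` at run time, and an evolving-order variant would adapt `P`:

```python
import numpy as np

def affine_projection(x, d, L=8, P=4, mu=0.5, delta=1e-4):
    """Conventional affine projection (AP) adaptive filter.

    x: input signal, d: desired signal, L: filter length,
    P: projection order, mu: step size, delta: regularization.
    Returns the final weight vector w.
    """
    w = np.zeros(L)
    for n in range(L + P - 1, len(x)):
        # X holds the last P regressor vectors as columns (newest first)
        X = np.column_stack(
            [x[n - p - L + 1 : n - p + 1][::-1] for p in range(P)]
        )
        # A priori errors for the P most recent samples
        e = d[n - P + 1 : n + 1][::-1] - X.T @ w
        # Regularized AP weight update
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w
```

In a variable-step-size scheme, `mu` starts large for fast convergence and shrinks as the error power drops; in an evolving-order scheme, `P` is kept large during the transient and reduced at steady state to cut the per-update cost of forming and inverting the P-by-P correlation matrix.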

    On Characterizing the Data Access Complexity of Programs

    Technology trends will cause data movement to account for the majority of energy expenditure and execution time on emerging computers. Therefore, computational complexity will no longer be a sufficient metric for comparing algorithms, and a fundamental characterization of data access complexity will be increasingly important. The problem of developing lower bounds for data access complexity has been modeled using the formalism of Hong & Kung's red/blue pebble game for computational directed acyclic graphs (CDAGs). However, previously developed approaches to lower bounds analysis for the red/blue pebble game are very limited in effectiveness when applied to CDAGs of real programs, whose computations comprise multiple sub-computations with differing DAG structures. We address this problem by developing an approach for effectively composing lower bounds based on graph decomposition. We also develop a static analysis algorithm to derive the asymptotic data-access lower bounds of programs, as a function of the problem size and cache size.
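As a classical point of reference (Hong & Kung's own canonical example, restated here for illustration, not a result of this paper): for the standard n-by-n matrix-multiply CDAG executed with a fast memory (cache) of size S, the red/blue pebble game yields the data-movement lower bound

```latex
% Hong-Kung I/O lower bound for the standard n x n matrix-multiply CDAG
% with a fast memory of size S: any valid pebbling must transfer
Q(n, S) \;=\; \Omega\!\left( \frac{n^{3}}{\sqrt{S}} \right)
% words between slow and fast memory.
```

Blocked (tiled) matrix multiplication attains this bound up to a constant factor; composing such per-subcomputation bounds over a decomposed CDAG is the goal of the approach described above.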

    Numerical Verification of Affine Systems with up to a Billion Dimensions

    Affine systems reachability is the basis of many verification methods. With further computation, methods exist to reason about richer models with inputs, nonlinear differential equations, and hybrid dynamics. As such, the scalability of affine systems verification is a prerequisite to scalable analysis for more complex systems. In this paper, we improve the scalability of affine systems verification, in terms of the number of dimensions (variables) in the system. The reachable states of affine systems can be written in terms of the matrix exponential, and safety checking can be performed at specific time steps with linear programming. Unfortunately, for large systems with many state variables, this direct approach requires intractable amounts of both memory and computation time. We overcome these challenges by combining several methods that leverage common problem structure. Memory is reduced by exploiting initial states that are not full-dimensional and safety properties (outputs) defined over a few linear projections of the state variables. Computation time is saved by using numerical simulations to compute only the projections of the matrix exponential relevant for the verification problem. Since large systems often have sparse dynamics, we use Krylov-subspace simulation approaches based on the Arnoldi or Lanczos iterations. Our method produces accurate counter-examples when properties are violated and, in the extreme case with sufficient problem structure, can analyze a system with one billion real-valued state variables.
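The Krylov step can be illustrated with a minimal Arnoldi sketch (illustrative code, not the paper's implementation): to evaluate exp(A)v for a large, sparse A, project A onto the Krylov subspace span{v, Av, ..., A^(m-1)v} and exponentiate only the small m-by-m Hessenberg matrix:

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(A, v, m=30):
    """Approximate exp(A) @ v via an m-step Arnoldi iteration."""
    n = len(v)
    m = min(m, n)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))          # orthonormal Krylov basis
    H = np.zeros((m + 1, m))          # upper Hessenberg projection of A
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:       # happy breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # Only the small matrix exponential is computed explicitly
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)
```

Only the m-dimensional exponential is formed densely; the large matrix A is touched solely through matrix-vector products, which is what makes the approach viable for sparse systems with very many state variables.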

    Some advances in the polyhedral model

    Department Head: L. Darrell Whitley. 2010 Summer. Includes bibliographical references. The polyhedral model is a mathematical formalism and a framework for the analysis and transformation of regular computations. It provides a unified approach to the optimization of computations from different application domains. It is now gaining wide use in optimizing compilers and automatic parallelization. In its purest form, it is based on a declarative model where computations are specified as equations over domains defined by "polyhedral sets". This dissertation presents two results. First is an analysis and optimization technique that enables us to simplify---reduce the asymptotic complexity---of such equations. The second is an extension of the model to richer domains called Ƶ-Polyhedra. Many equational specifications in the polyhedral model have reductions---application of an associative and commutative operator to collections of values to produce a collection of answers. Moreover, expressions in such equations may also exhibit reuse where intermediate values that are computed or used at different index points are identical. We develop various compiler transformations to automatically exploit this reuse and simplify the computational complexity of the specification. In general, there is an infinite set of applicable simplification transformations. Unfortunately, different choices may result in equivalent specifications with different asymptotic complexity. We present an algorithm for the optimal application of simplification transformations resulting in a final specification with minimum complexity. This dissertation also presents the Ƶ-Polyhedral model, an extension to the polyhedral model to more general sets, thereby providing a transformation framework for a larger set of regular computations. 
For this, we present a novel representation and interpretation of Ƶ-Polyhedra and prove a number of properties of the family of unions of Ƶ-Polyhedra that are required to extend the polyhedral model. Finally, we present value based dependence analysis and scheduling analysis for specifications in the Ƶ-Polyhedral model. These are direct extensions of the corresponding analyses of specifications in the polyhedral model. One of the benefits of our results in the Ƶ-Polyhedral model is that our abstraction allows the reuse of previously developed tools in the polyhedral model with straightforward pre- and post-processing.
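The reuse-based simplification can be made concrete with the canonical prefix-sum example (an illustrative sketch, not taken from the dissertation): evaluating each Y[i] = sum of X[0..i] independently costs O(n^2), but Y[i] and Y[i-1] share all but one term, so the transformation yields an O(n) recurrence:

```python
import numpy as np

def naive_prefix(X):
    # Each Y[i] = sum_{j <= i} X[j], evaluated from scratch: O(n^2) work
    return np.array([X[: i + 1].sum() for i in range(len(X))])

def simplified_prefix(X):
    # Exploit reuse of intermediate values: Y[i] = Y[i-1] + X[i], O(n) work
    Y = np.empty(len(X))
    acc = 0.0
    for i, xi in enumerate(X):
        acc += xi
        Y[i] = acc
    return Y
```

Both functions compute the same specification; the simplification transformations described above perform this kind of complexity reduction automatically, and choose among the many applicable rewrites to minimize the final asymptotic cost.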

    Simplifying Dependent Reductions in the Polyhedral Model

    A reduction -- an accumulation over a set of values, using an associative and commutative operator -- is a common computation in many numerical applications, including scientific computing, machine learning, computer vision, and financial analytics. Contemporary polyhedral-based compilation techniques make it possible to optimize reductions, such as prefix sums, in which each component of the reduction's output potentially shares computation with another component in the reduction. An optimizing compiler can therefore identify the computation shared between multiple components and generate code that computes the shared computation only once. These techniques, however, do not support reductions that -- when phrased in the language of the polyhedral model -- span multiple dependent statements. In such cases, existing approaches can generate incorrect code that violates the data dependences of the original, unoptimized program. In this work, we identify and formalize the optimization of dependent reductions as an integer bilinear program. We present a heuristic optimization algorithm that uses an affine sequential schedule of the program to determine how to simplify reductions yet still preserve the program's dependences. We demonstrate that the algorithm provides optimal complexity for a set of benchmark programs from the literature on probabilistic inference algorithms, whose performance critically relies on simplifying these reductions. The complexities for 10 of the 11 programs improve significantly, by factors of at least the sizes of the input data, which are in the range of 10^4 to 10^6 for typical real application inputs. We also confirm the significance of the improvement by showing speedups in wall-clock time that range from 1.1x to over 10^6x.
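A minimal illustration of a dependent reduction (a hypothetical two-statement example, not one of the paper's benchmarks): statement B is a prefix sum of A, and statement C is a prefix sum of B, so C's reduction consumes the output of B's. A correct simplification must reuse partial results while still honoring that dependence:

```python
import numpy as np

def naive_chain(A):
    # Two dependent reductions, each evaluated from scratch: O(n^2) total
    n = len(A)
    B = np.array([A[: i + 1].sum() for i in range(n)])  # B[i] = sum_{j<=i} A[j]
    C = np.array([B[: i + 1].sum() for i in range(n)])  # C[i] = sum_{j<=i} B[j]
    return C

def simplified_chain(A):
    # Simplified to O(n): accumulators are updated in an order that
    # respects the dependence of C[i] on B[i]
    C = np.empty(len(A))
    b = c = 0.0
    for i, ai in enumerate(A):
        b += ai      # B[i] = B[i-1] + A[i]
        c += b       # C[i] = C[i-1] + B[i]
        C[i] = c
    return C
```

A transformation that simplified C's reduction without interleaving it correctly with B's updates would read stale values of B; choosing an interleaving that preserves such dependences is exactly what the affine-schedule-guided heuristic above decides.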

    An Output-sensitive Algorithm for Computing Projections of Resultant Polytopes

    We develop an incremental algorithm to compute the Newton polytope of the resultant, aka the resultant polytope, or its projection along a given direction. The resultant is fundamental in algebraic elimination and in implicitization of parametric hypersurfaces. Our algorithm exactly computes vertex- and halfspace-representations of the desired polytope using an oracle producing resultant vertices in a given direction. It is output-sensitive as it uses one oracle call per vertex. We overcome the bottleneck of determinantal predicates by hashing, thus accelerating execution by 18 to 100 times. We implement our algorithm using the experimental CGAL package triangulation. A variant of the algorithm computes successively tighter inner and outer approximations: when these polytopes have, respectively, 90% and 105% of the true volume, runtime is reduced by up to 25 times. Our method computes instances of 5-, 6- or 7-dimensional polytopes with 35K, 23K or 500 vertices, respectively, within 2 hr. Compared to tropical geometry software, ours is faster up to dimension 5 or 6, and competitive in higher dimensions.
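The oracle-driven, output-sensitive idea can be conveyed with a toy 2-D analogue (a sketch under simplifying assumptions, not the paper's d-dimensional algorithm): given only a support oracle that returns an extreme point in a query direction, the polytope's vertices are discovered by probing the outward normal of each tentative edge:

```python
import numpy as np

def support_oracle(points):
    """Wrap a point set as a support oracle: direction -> extreme point."""
    pts = np.asarray(points, dtype=float)
    return lambda d: pts[np.argmax(pts @ d)]

def _expand(p, q, oracle, tol):
    # Probe the outward normal of tentative edge p->q; recurse if the
    # oracle finds a point strictly beyond it
    d = q - p
    n = np.array([d[1], -d[0]])          # outward normal for CCW orientation
    r = oracle(n)
    if n @ r > n @ p + tol:              # r is a new vertex beyond the edge
        return _expand(p, r, oracle, tol) + [r] + _expand(r, q, oracle, tol)
    return []                            # edge certified: part of the hull

def hull_via_oracle(oracle, tol=1e-9):
    """Vertices of the 2-D convex hull in CCW order."""
    a = oracle(np.array([1.0, 0.0]))     # rightmost vertex
    b = oracle(np.array([-1.0, 0.0]))    # leftmost vertex
    return [a] + _expand(a, b, oracle, tol) + [b] + _expand(b, a, oracle, tol)
```

Each probe either certifies an edge or discovers a new vertex, so the number of oracle calls grows with the output size rather than the input size, mirroring the output-sensitivity of the resultant-polytope algorithm.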