
    A comparison of multiprocessor scheduling methods for iterative data flow architectures

    A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.

    Memorizing Schroder's Method as an Efficient Strategy for Estimating Roots of Unknown Multiplicity

    [EN] In this paper, we propose, to the best of our knowledge, the first iterative scheme with memory for finding roots whose multiplicity is unknown. It improves the efficiency of a similar procedure without memory due to Schroder and can be considered a seed for generating higher-order methods with similar characteristics. Once its order of convergence is established, its stability is analyzed, showing its good properties, and it is compared numerically, in terms of basins of attraction, with similar memoryless schemes for finding multiple roots. This research was partially supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE). Cordero Barbero, A.; Neta, B.; Torregrosa Sánchez, J. R. (2021). Memorizing Schroder's Method as an Efficient Strategy for Estimating Roots of Unknown Multiplicity. Mathematics 9(20):1-13. https://doi.org/10.3390/math9202570
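    A hedged sketch of the classical memoryless Schröder iteration that the paper improves upon (not the authors' new scheme with memory): applying Newton's method to f/f' yields an iteration that converges quadratically to a root regardless of its multiplicity. The triple-root test function is an illustrative choice, not from the paper.

    ```python
    def schroder(f, df, d2f, x0, tol=1e-12, max_iter=50):
        """Schröder's method: Newton applied to f/f', which restores
        quadratic convergence at roots of any (unknown) multiplicity."""
        x = x0
        for _ in range(max_iter):
            fx, dfx, d2fx = f(x), df(x), d2f(x)
            denom = dfx * dfx - fx * d2fx
            if denom == 0:
                break  # converged or degenerate point
            step = fx * dfx / denom
            x -= step
            if abs(step) < tol:
                break
        return x

    # Triple root at x = 1: plain Newton is only linearly convergent here,
    # while Schröder's iteration is not slowed by the multiplicity.
    root = schroder(lambda x: (x - 1) ** 3,
                    lambda x: 3 * (x - 1) ** 2,
                    lambda x: 6 * (x - 1),
                    x0=2.0)
    ```

    Because the iteration targets f/f', no prior knowledge of the multiplicity is needed, which is the property the paper's scheme with memory then accelerates.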

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors) where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey is focused mainly on the networks of Hopfield, Willshaw and Potts, that have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory, but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high dimensional vectors. Comment: 31 pages
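    A minimal sketch of the Hopfield-style memory the survey covers, assuming Hebbian (outer-product) learning and synchronous sign-threshold dynamics; the pattern sizes and corruption level are illustrative assumptions, not figures from the survey.

    ```python
    import numpy as np

    def train_hopfield(patterns):
        """Hebbian local learning: W is the sum of outer products of the
        stored ±1 patterns, with the diagonal zeroed (no self-connections)."""
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, x, n_steps=10):
        """Iterative retrieval: each neuron thresholds its local field,
        stopping once the state is a fixed point of the dynamics."""
        for _ in range(n_steps):
            x_new = np.sign(W @ x)
            x_new[x_new == 0] = 1.0  # break ties consistently
            if np.array_equal(x_new, x):
                break
            x = x_new
        return x

    rng = np.random.default_rng(0)
    patterns = rng.choice([-1.0, 1.0], size=(3, 64))  # 3 random ±1 patterns
    W = train_hopfield(patterns)

    probe = patterns[0].copy()
    probe[:6] *= -1                 # corrupt 6 of 64 bits
    restored = recall(W, probe)
    ```

    With only 3 patterns in 64 neurons the load is far below the Hopfield capacity (roughly 0.14 patterns per neuron), so the corrupted probe is attracted back to the stored item, which is the content-addressable retrieval the survey contrasts with index-based similarity search.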

    The evolution of structural calculation methods during the 20th century: from graphical methods to the advent of computers

    The calculation methods used in engineering, and specifically in structural design, underwent great development throughout the 20th century. From manual methodologies to present-day systems, based mainly on computer calculation, they have increased calculation capacity, as well as precision, reliability and speed, to a previously unimaginable degree. The methods of the early century, which followed the earlier graphical methods and long coexisted with them, gave rise to iterative systems for solving equations. These methodologies evolved with the appearance of the first computers, and the growth in calculation capacity, memory and speed led to increasingly sophisticated and complex methods. This article presents the basic characteristics of the evolution of structural analysis methods and their implications for professional practice and education. This evolution is exemplified through three significant methods: the graphical methods, the Cross method and the Finite Element Method.
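    As an illustration of the "iterative systems for solving equations" mentioned above (a generic sketch, not a method taken from the article), a Gauss–Seidel sweep over a small diagonally dominant system of the kind produced by structural stiffness equations:

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
        """Gauss-Seidel iteration: sweep the unknowns, updating each one
        in place from the most recent values of the others."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i] @ x - A[i, i] * x[i]   # off-diagonal contribution
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, ord=np.inf) < tol:
                break
        return x

    # Illustrative diagonally dominant system (convergence is guaranteed
    # for such matrices, which stiffness assemblies typically yield)
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([15.0, 10.0, 10.0])
    x = gauss_seidel(A, b)
    ```

    Hand-relaxation schemes of this family, including the Cross moment-distribution method, were practical precisely because each sweep needs only local arithmetic, which is also why they mapped naturally onto the first computers.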

    Iterative pre-distortion of the non-linear satellite channel

    Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) is the current European standard for satellite broadcast and broadband communications. It relies on high order modulations up to 32-amplitude/phase-shift-keying (APSK) in order to increase the system spectral efficiency. Unfortunately, as the modulation order increases, the receiver becomes more sensitive to physical layer impairments, and notably to the distortions induced by the power amplifier and the channelizing filters aboard the satellite. Pre-distortion of the non-linear satellite channel has been studied for many years. However, the performance of existing pre-distortion algorithms generally becomes poor when high-order modulations are used on a non-linear channel with a long memory. In this paper, we investigate a new iterative method that pre-distorts blocks of transmitted symbols so as to minimize the Euclidean distance between the transmitted and received symbols. We also propose approximations to relax the pre-distorter complexity while keeping its performance acceptable.
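    Not the paper's block algorithm, but a toy fixed-point sketch of the same underlying idea: iteratively adjust the transmitted symbols so that the channel output approaches the intended constellation points in Euclidean distance. The memoryless AM/AM nonlinearity and the step size mu are illustrative assumptions standing in for the real satellite channel with memory.

    ```python
    import numpy as np

    def channel(x):
        """Toy memoryless amplifier nonlinearity (AM/AM compression),
        preserving phase: a stand-in for the satellite channel."""
        r = np.abs(x)
        return (2.0 * r / (1.0 + r ** 2)) * np.exp(1j * np.angle(x))

    def predistort(symbols, n_iter=50, mu=0.5):
        """Fixed-point iteration: nudge the transmitted block so the
        channel output moves toward the intended symbols, i.e. it
        iteratively reduces the Euclidean error |channel(p) - symbols|."""
        p = symbols.copy()
        for _ in range(n_iter):
            p = p + mu * (symbols - channel(p))
        return p

    # A few illustrative complex symbols inside the amplifier's linear-ish range
    s = np.array([0.3 + 0.3j, 0.5 - 0.2j, -0.4 + 0.1j])
    p = predistort(s)
    err = np.max(np.abs(channel(p) - s))
    ```

    For a channel with memory, as in the paper, the same principle applies to whole blocks of symbols at once, since each received symbol then depends on its neighbours through the channel's impulse response.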