1,197 research outputs found

    Particle Swarm Optimization

    Particle swarm optimization (PSO) is a population-based stochastic optimization technique inspired by the social behavior of bird flocking or fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. This book collects the contributions of top researchers in this field and will serve as a valuable tool for professionals in this interdisciplinary field.
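    As a quick illustration of the particle dynamics described above, here is a minimal PSO sketch; the inertia and acceleration coefficients are typical textbook values, not taken from the book:

```python
import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    # Initialize random positions and zero velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia plus pulls toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function in 3 dimensions.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```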

    Computer vision algorithms on reconfigurable logic arrays


    Optimal processor assignment for pipeline computations

    The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput can be found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
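    To make the flavor of such assignment algorithms concrete, here is a minimal dynamic-programming sketch for the linear-chain special case, assuming the pipeline's response time is the sum of per-stage times and that throughput is limited by the slowest stage; the measured resp table and the O(np²) loop structure follow the abstract, but everything else is an assumption, not the paper's algorithm:

```python
import math

def assign_processors(resp, p, throughput):
    """resp[i][k-1]: measured response time of task i on k processors (k = 1..p).
    Returns per-task processor counts minimizing total response time while each
    stage keeps up with the required throughput, or None if infeasible."""
    n = len(resp)
    max_stage_time = 1.0 / throughput
    INF = math.inf
    dp = [0.0] + [INF] * p           # dp[q]: best total response using exactly q processors
    choice = [[0] * (p + 1) for _ in range(n)]
    for i in range(n):
        ndp = [INF] * (p + 1)
        for q in range(1, p + 1):
            for k in range(1, q + 1):            # give task i exactly k processors
                t = resp[i][k - 1]
                if t <= max_stage_time and dp[q - k] + t < ndp[q]:
                    ndp[q] = dp[q - k] + t
                    choice[i][q] = k
        dp = ndp
    q = min(range(1, p + 1), key=lambda x: dp[x])
    if dp[q] == INF:
        return None
    counts = []
    for i in reversed(range(n)):                 # backtrack the recorded choices
        counts.append(choice[i][q])
        q -= choice[i][q]
    return counts[::-1]
```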

    Advanced Technique and Future Perspective for Next Generation Optical Fiber Communications

    The optical fiber communication industry has gained unprecedented opportunities and achieved rapid progress in recent years. However, with growing data transmission volumes and rising transmission demands, the field still needs upgrades to meet the challenges of future development. Artificial intelligence in optical communication and optical networking is still in its infancy, but existing achievements show great application potential. In the future, with the further development of artificial intelligence technology, AI algorithms that combine channel characteristics and physical properties will shine in optical communication. This reprint introduces some recent advances in optical fiber communication and optical networks, and offers alternative directions for the development of next-generation optical fiber communication technology.

    SCALABLE TECHNIQUES FOR SCHEDULING AND MAPPING DSP APPLICATIONS ONTO EMBEDDED MULTIPROCESSOR PLATFORMS

    A variety of multiprocessor architectures has proliferated, even for off-the-shelf computing platforms. To make use of these platforms, traditional implementation frameworks focus on implementing Digital Signal Processing (DSP) applications using special platform features to achieve high performance. However, due to the fast evolution of the underlying architectures, solution redevelopment is error prone and the reusability of existing solutions and libraries is limited. In this thesis, we facilitate an efficient migration of DSP systems to multiprocessor platforms while systematically leveraging previous investment in optimized library kernels using dataflow design frameworks. We make these library elements, which are typically tailored to specialized architectures, more amenable to extensive analysis and optimization using an efficient and systematic process. This thesis provides techniques to allow such migration through four basic contributions:
    1. We propose and develop a framework to explore efficient utilization of Single Instruction Multiple Data (SIMD) cores and accelerators available in heterogeneous multiprocessor platforms consisting of General Purpose Processors (GPPs) and Graphics Processing Units (GPUs). We also propose new scheduling techniques by applying extensive block processing in conjunction with appropriate task mapping and task ordering methods that match efficiently with the underlying architecture. The approach gives the developer the ability to prototype a GPU-accelerated application and explore its design space efficiently and effectively.
    2. We introduce the concept of Partial Expansion Graphs (PEGs) as an implementation model and an associated class of scheduling strategies. PEGs are designed to help realize DSP systems in terms of forms and granularities of parallelism that are well matched to the given applications and targeted platforms. PEGs also facilitate derivation of both static and dynamic scheduling techniques, depending on the amount of variability in task execution times and other operating conditions. We show how to implement efficient PEG-based scheduling methods using real-time operating systems, and how to reuse pre-optimized libraries of DSP components within such implementations.
    3. We develop new algorithms for scheduling and mapping systems implemented using PEGs. Collectively, these algorithms operate in three steps. First, the amount of data parallelism in the application graph is tuned systematically over many iterations to profit from the available cores in the target platform. Then a mapping algorithm that uses graph analysis distributes data- and task-parallel instances over different cores while trying to balance the load of all processing units to exploit pipeline parallelism (a greedy sketch of such a mapping step is given after this list). Finally, we use a novel technique for performance evaluation by implementing the scheduler and a customizable solution on the programmable platform. This allows accurate fitness functions to be measured and used to drive runtime adaptation of schedules.
    4. In addition to providing scheduling techniques for the mentioned applications and platforms, we also show how to integrate the resulting solution in the underlying environment. This is achieved by leveraging existing libraries and applying the GPP-GPU scheduling framework to augment a popular existing Software Defined Radio (SDR) development environment -- GNU Radio -- with a dataflow foundation and a stand-alone GPU-accelerated library. We also show how to realize the PEG model on real-time operating system libraries, such as the Texas Instruments DSP/BIOS. A code generator that accepts a manually designed solution as well as automatically configured solutions is provided to complete the design flow from application model to running system.
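    As a loose illustration of the load-balancing flavor of the mapping step in contribution 3, the following is a minimal greedy sketch; it is hypothetical (the thesis' actual algorithm uses graph analysis and iterative tuning), and the instance names and cost values are invented for illustration:

```python
import heapq

def map_instances(instances, n_cores):
    """instances: list of (name, estimated_cost). Returns core id -> [names]."""
    # Greedy longest-processing-time rule: assign the heaviest remaining
    # instance to the currently least-loaded core.
    cores = [(0.0, c, []) for c in range(n_cores)]   # (load, core id, assigned names)
    heapq.heapify(cores)
    for name, cost in sorted(instances, key=lambda x: -x[1]):
        load, c, assigned = heapq.heappop(cores)
        assigned.append(name)
        heapq.heappush(cores, (load + cost, c, assigned))
    return {c: assigned for _, c, assigned in cores}

# e.g. map_instances([("fir.0", 5.0), ("fir.1", 5.0), ("fft", 8.0)], n_cores=2)
```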

    Control techniques for thermal-aware energy-efficient real time multiprocessor scheduling

    The use of multicore microprocessors is not only attractive to industry but, in many domains, the only option. Real-time scheduling on these platforms is far more complex than on uniprocessors and generally worsens the over-provisioning problem, leading to the use of many more processors/cores than necessary. Algorithms based on fluid scheduling have been proposed that optimize processor utilization, but so far they generally present drawbacks that keep them from practical application, not least the high number of context switches and migrations. This thesis starts from the hypothesis that it is possible to design algorithms based on fluid scheduling that optimize processor utilization while meeting timing, thermal, and energy constraints, with a low number of context switches and migrations, and that are compatible both with the offline generation of cyclic executives attractive to industry and with schedulers that integrate runtime control techniques, allowing the efficient management of aperiodic tasks as well as of parametric deviations or small perturbations. In this respect, this thesis contributes several solutions. First, it improves a modeling methodology that represents all dimensions of the problem under a single formalism (Timed Continuous Petri Nets). Second, it proposes a method for generating a cyclic executive, computed in processor cycles, for a set of hard real-time tasks on multiprocessors; the method optimizes the utilization of the processing cores while also respecting thermal and energy constraints, on the basis of a fluid schedule. Accounting for the overhead of context switches and migrations in a cyclic executive poses a causality dilemma: the number of context switches (and hence their overhead) is not known until the cyclic executive is generated, but that number cannot be minimized until it has been computed. The thesis proposes a solution to this dilemma through an iterative method with proven convergence that minimizes the aforementioned overhead. In short, the thesis exploits the idea of fluid scheduling to maximize utilization (a major concern in industry) while generating a simple cyclic executive with minimal overhead (since overhead is a major problem of schedulers based on fluid scheduling). Finally, a method is proposed to use the offline schedule references established in the cyclic executive for tracking by an online frequency controller, so that small perturbations and parametric variations can be handled, integrating the management of aperiodic (soft real-time) tasks while ensuring the integrity of the execution of the hard real-time task set. These contributions constitute a novelty in the field, endorsed by the publications derived from this thesis work.
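    As background for the fluid-scheduling idea the thesis builds on, here is a minimal sketch of tracking a fluid reference on a multiprocessor, assuming implicit deadlines and total utilization at most the core count; it deliberately ignores the thesis' thermal/energy constraints, Petri-net formalism, and overhead minimization, and the quantum-based discretization is an illustrative assumption:

```python
from math import gcd
from functools import reduce

def fluid_cyclic_executive(tasks, n_cores, quantum=1):
    """tasks: list of (name, wcet C, period T) with sum(C/T) <= n_cores.
    Builds a slot table over one hyperperiod: each slot runs the tasks that are
    furthest behind the fluid reference u_i * t, i.e. the amount each task
    should ideally have executed by time t at its constant rate u_i = C_i/T_i."""
    H = reduce(lambda a, b: a * b // gcd(a, b), (T for _, _, T in tasks))
    rates = {name: C / T for name, C, T in tasks}
    done = {name: 0.0 for name, _, _ in tasks}
    table = []
    for t in range(0, H, quantum):
        # Lag of each task w.r.t. its fluid reference at the end of this slot.
        lag = {n: rates[n] * (t + quantum) - done[n] for n in rates}
        running = sorted(lag, key=lag.get, reverse=True)[:n_cores]
        for n in running:
            done[n] += quantum
        table.append(running)
    return table

# e.g. fluid_cyclic_executive([("a", 2, 4), ("b", 3, 6), ("c", 4, 12)], n_cores=2)
```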

    Using Imprecise Computing for Improved Real-Time Scheduling

    Conventional hard real-time scheduling is often overly pessimistic due to worst-case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing in applications where occasional small errors are acceptable. This leverage has been investigated in a few previous works, which are restricted to preemptive cases. We study how to make use of imprecise computing in uniprocessor non-preemptive real-time scheduling, which is known to be more difficult than its preemptive counterpart. Several heuristic algorithms are developed for periodic tasks with independent or cumulative errors due to imprecision. Simulation results show that the proposed techniques can significantly improve task schedulability and achieve the desired accuracy-schedulability tradeoff. The benefit of considering imprecise computing is further confirmed by a prototype implementation in a Linux system.
    The mixed-criticality system is a popular model for reducing pessimism in real-time scheduling while guaranteeing critical tasks in the presence of unexpected overruns. However, it is controversial due to some drawbacks. First, all low-criticality tasks are dropped in high-criticality mode, although they are still needed. Second, a single high-criticality job overrun leads to the pessimistic high-criticality mode for all high-criticality tasks, and consequently resource utilization becomes inefficient. We attempt to tackle these two limitations simultaneously in multiprocessor scheduling, whereas several recent works address them mostly in uniprocessor scheduling. We study how to achieve graceful degradation of low-criticality tasks by continuing their execution with imprecise computing, or even precise computing if there is sufficient utilization slack. Schedulability conditions under this Variable-Precision Mixed-Criticality (VPMC) system model are investigated for partitioned scheduling and global fpEDF-VD scheduling, and a deferred switching protocol is introduced so that the chance of switching to high-criticality mode is significantly reduced. Moreover, we develop a precision optimization approach that maximizes precise computing of low-criticality tasks through a 0-1 knapsack formulation. Experiments are performed through both software simulations and Linux prototyping, with consideration of overhead. Schedulability of the proposed methods is studied so that the quality of service for low-criticality tasks is improved while all deadline constraints are guaranteed to be satisfied. The proposed precision optimization can largely reduce computing errors compared to constantly executing low-criticality tasks with imprecise computing in high-criticality mode.
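    The abstract names a 0-1 knapsack formulation for the precision optimization step. A minimal sketch under assumed inputs (each low-criticality task has a hypothetical extra utilization cost for running precisely and an error-avoided value; the thesis' exact weights and values may differ) might look like this:

```python
def select_precise(tasks, slack, scale=1000):
    """tasks: list of (name, extra_utilization, error_avoided).
    slack: spare utilization left after guaranteeing high-criticality tasks.
    Utilizations are scaled to integers so a standard knapsack DP applies."""
    cap = int(slack * scale)
    best = [0.0] * (cap + 1)            # best[c] = max error avoided with capacity c
    keep = [set() for _ in range(cap + 1)]
    for name, du, val in tasks:
        w = int(du * scale + 0.5)
        for c in range(cap, w - 1, -1):  # iterate downward: 0-1 (not unbounded) knapsack
            if best[c - w] + val > best[c]:
                best[c] = best[c - w] + val
                keep[c] = keep[c - w] | {name}
    return keep[cap]                     # the tasks to execute precisely

# e.g. select_precise([("t1", 0.10, 3.0), ("t2", 0.25, 5.0), ("t3", 0.15, 2.5)], slack=0.3)
```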

    230702

    This article presents a novel centrality-driven gateway designation framework for improved real-time performance of low-power wireless sensor networks (WSNs) at system design time. We target time-synchronized channel hopping (TSCH) WSNs with centralized network management and multiple gateways, with the objective of enhancing traffic schedulability by design. To this aim, we propose a novel network centrality metric, termed minimal-overlap centrality, that characterizes the overall number of path overlaps between all the active flows in the network when a given node is selected as gateway. The metric is used as a gateway designation criterion to elect as gateway the node leading to the minimal number of overlaps. The method is then extended to multiple gateways with the aid of the unsupervised learning method of spectral clustering. Concretely, after a given number of clusters are identified, we use the new metric within each cluster to designate as cluster gateway the node with the least overall number of overlaps. Extensive simulations with random topologies under centralized earliest-deadline-first (EDF) scheduling and shortest-path routing suggest our approach is dominant over traditional centrality metrics from social network analysis, namely eigenvector, closeness, betweenness, and degree. Notably, our approach reduces by up to 40% the worst-case end-to-end deadline misses achieved by classical centrality-driven gateway designation methods.
    This work was partially supported by National Funds through FCT/MCTES (Portuguese Foundation for Science and Technology), within the CISTER Research Unit (UIDB/04234/2020); by the Operational Competitiveness Programme and Internationalization (COMPETE 2020) under the PT2020 Agreement, through the European Regional Development Fund (ERDF); and by FCT and the ESF (European Social Fund) through the Regional Operational Programme (ROP) Norte 2020, under PhD grant 2020.06685.BD.
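    A minimal sketch of the minimal-overlap centrality idea for a single gateway, assuming shortest-path routing on a connected topology; the article's exact overlap counting may differ in detail, and the extension to multiple gateways via spectral clustering is omitted here:

```python
import itertools
import networkx as nx

def minimal_overlap_gateway(G, sources):
    """G: network graph; sources: nodes originating flows toward the gateway.
    For each candidate gateway g, route every flow on its shortest path to g,
    count edge overlaps between all pairs of flows, and elect the node that
    minimizes the total overlap count."""
    def overlaps(g):
        paths = []
        for s in sources:
            if s == g:
                continue
            p = nx.shortest_path(G, s, g)
            paths.append({frozenset(e) for e in zip(p, p[1:])})  # undirected edges
        return sum(len(a & b) for a, b in itertools.combinations(paths, 2))
    return min(G.nodes, key=overlaps)

# e.g. minimal_overlap_gateway(nx.random_geometric_graph(20, 0.4), sources=[0, 3, 7])
```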

    Parallel Architectures and Parallel Algorithms for Integrated Vision Systems

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high-level application (e.g., object recognition). An IVS normally involves algorithms from low-level, intermediate-level, and high-level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.