
    The Interplay of Reward and Energy in Real-Time Systems

    This work contends that three constraints need to be addressed in the context of power-aware real-time systems: energy, time, and task rewards/values. These issues are studied for two types of systems: first, embedded systems running applications with temporal requirements (e.g., audio and video); second, servers and server clusters with timing constraints and Quality of Service (QoS) requirements implied by the application being executed (e.g., signal processing, audio/video streams, web pages). Furthermore, many future real-time systems will rely on different software versions to achieve a variety of QoS-aware tradeoffs, each with different reward, time, and energy requirements.

    For hard real-time systems, solutions are proposed that maximize the system reward/profit without exceeding deadlines and without depleting the energy budget (in portable systems the energy budget is determined by the battery charge, while in server farms it depends on the server architecture and heat/cooling constraints). Both continuous and discrete reward and power models are studied, and the reward/energy analysis is extended with multiple task versions, optional/mandatory tasks, and long-term reward maximization policies.

    For soft real-time systems, the reward model is relaxed into a QoS constraint, and stochastic schemes are first presented for power management of systems with unpredictable workloads. Then, load distribution and power management policies are addressed in the context of servers and homogeneous server farms. Finally, the work is extended with QoS-aware local and global policies for the general case of heterogeneous systems.
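
    To make the discrete reward/energy tradeoff concrete, the sketch below treats version selection as a multiple-choice knapsack: each task offers several versions with different energy costs and rewards, and one version per task is chosen without depleting the energy budget. This is an illustrative simplification, not the thesis's actual algorithm; the task data and integer energy units are assumptions.

```python
# Minimal sketch (illustrative, not the thesis's algorithm): pick one version
# per task to maximize total reward under a shared energy budget, via
# dynamic programming over the budget (a multiple-choice knapsack).

def max_reward(tasks, energy_budget):
    """tasks: list of tasks, each a list of (energy, reward) versions.
    Every task is mandatory; an optional task can carry a (0, 0.0) version."""
    NEG = float("-inf")
    best = [NEG] * (energy_budget + 1)   # best[e] = max reward using energy e
    best[0] = 0.0
    for versions in tasks:
        nxt = [NEG] * (energy_budget + 1)
        for e in range(energy_budget + 1):
            if best[e] == NEG:
                continue
            for cost, reward in versions:
                if e + cost <= energy_budget:
                    nxt[e + cost] = max(nxt[e + cost], best[e] + reward)
        best = nxt
    return max(best)                     # -inf if no feasible selection exists

# Example: two tasks, each with a cheap low-reward and a costly high-reward version.
tasks = [[(2, 3.0), (5, 7.0)], [(3, 4.0), (6, 9.0)]]
print(max_reward(tasks, energy_budget=8))   # 12.0: cheap version 1 + costly version 2
```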

    Approximation algorithms for packing and buffering problems

    This thesis studies online and offline approximation algorithms for packing and buffering problems. In the second chapter of this thesis, we study the problem of packing linear programs online. In this problem, the online algorithm may only increase the values of the variables of the linear program, and its goal is to maximize the value of the objective function. The online algorithm initially has full knowledge of all parameters of the linear program, except for the right-hand sides of the constraints, which are gradually revealed to it by the adversary. This online problem was introduced by Ochel et al. [2012]. Our contribution (Englert et al. [2014]) is to provide improved upper bounds for the competitiveness of both deterministic and randomized online algorithms for this problem, as well as an optimal deterministic online algorithm for the special case of linear programs involving two variables.

    In the third chapter we study the offline COLORFUL BIN PACKING problem. This problem is a variant of the BIN PACKING problem, where each item is associated with a color and where there is the additional restriction that two items packed consecutively into the same bin cannot share the same color. The COLORFUL BIN PACKING problem has been studied mainly from an online perspective and was introduced as a generalization of the BLACK AND WHITE BIN PACKING problem (Balogh et al. [2012]), i.e., the special case of this problem for two colors. We provide (joint work with Matthias Englert) a 2-approximate algorithm for the COLORFUL BIN PACKING problem.

    In the fourth chapter we study the Longest Queue Drop (LQD) online algorithm for shared-memory switches with three and two output ports. The Longest Queue Drop algorithm is a well-known online algorithm used to direct the packet flow of shared-memory switches. According to LQD, when the buffer of the switch becomes full, a packet is preempted from the longest queue in the buffer to free buffer space for the newly arriving packet, which is accepted. We show (Matsakis [2016], to appear) that the Longest Queue Drop algorithm is (3/2)-competitive for three-port switches, improving the previously best upper bound of 5/3 (Kobayashi et al. [2007]). Additionally, we show that this algorithm is exactly (4/3)-competitive for two-port switches, correcting a previously published result claiming a tight upper bound of (4M-4)/(3M-2) < 4/3, where M ∈ Z+ denotes the buffer size.
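
    For illustration, here is a minimal sketch of the Longest Queue Drop admission rule described above, assuming unit-size packets and a shared buffer of M slots; the tie-breaking choice (rejecting the arrival when its own queue is already among the longest) is ours, not necessarily the one analyzed in the thesis.

```python
# Minimal sketch of the LQD rule: accept while there is space; once the
# shared buffer is full, preempt from the longest queue if it is strictly
# longer than the arriving packet's queue, otherwise drop the arrival.

def lqd_arrival(queues, port, M):
    """Try to accept one unit packet for output `port`; buffer holds M packets."""
    total = sum(len(q) for q in queues.values())
    if total < M:                          # free space: always accept
        queues[port].append("pkt")
        return True
    longest = max(queues, key=lambda p: len(queues[p]))
    if len(queues[longest]) > len(queues[port]):
        queues[longest].pop()              # preempt from the longest queue...
        queues[port].append("pkt")         # ...and accept the new packet
        return True
    return False                           # the arriving packet is the one dropped

# Two-port switch with a buffer of M = 4 packets.
queues = {0: ["pkt"] * 3, 1: ["pkt"]}
print(lqd_arrival(queues, port=1, M=4))    # True: preempts from queue 0
print(len(queues[0]), len(queues[1]))      # 2 2
```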

    Optimization and Communication in UAV Networks

    UAVs are becoming a reality and attract increasing attention. They can be remotely controlled or completely autonomous, used alone or as a fleet, and deployed in a large set of applications. They are constrained by hardware, since they cannot be too heavy and must rely on batteries. Their use still raises a large set of exciting new challenges in terms of trajectory optimization and positioning when they are used alone or in cooperation, and communication when they operate in a swarm, to name but a few examples. This book presents new original contributions regarding UAV and UAV swarm optimization and communication aspects.

    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data.” That project ran from October 2014 to March 2020 in Japan. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear, sublinear, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project is organized into three teams that focus on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I, which consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; Part V presents application results. The information presented here will inspire the researchers who work in the field of modern algorithms.
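
    As a toy illustration of the paradigm (not an example from the book), the following estimates the fraction of 1s in a huge 0/1 array by random sampling; by a Hoeffding bound, the number of samples depends only on the desired accuracy and confidence, not on the input size, so the running time is constant in n.

```python
# Constant-time (in n) estimation by sampling: read ~1/eps^2 random cells
# instead of scanning a petabyte-scale array.

import math
import random

def estimate_fraction(data, eps=0.05, delta=0.01):
    """Estimate the fraction of 1s to within +/-eps with prob. >= 1 - delta."""
    # Hoeffding bound: m >= ln(2/delta) / (2 * eps**2) samples suffice.
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(data[random.randrange(len(data))] for _ in range(m))
    return hits / m

data = [0] * 900_000 + [1] * 100_000      # true fraction: 0.1
print(estimate_fraction(data))            # ~0.1, after reading only ~1060 cells
```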

    Reduced-order electro-thermal models for computationally efficient thermal analysis of power electronics modules

    Silicon and Silicon Carbide-based power modules are common in power electronic systems used in a wide range of applications, including renewable energy, industrial drives, and transportation. The reliability of power electronics converters is very important in many applications, and it is well known that the reliability, and ultimately the lifetime, of power modules is affected by the running temperature during power cycles. Although accurate thermal models of power electronics assemblies are widely available, based e.g. on computational fluid dynamics (CFD) solvers, their computational complexity hinders their application in real-time temperature monitoring. In this thesis, geometry-based numerical thermal models and compact thermal models are developed to address fast thermal simulation in the electronic design process and real-time temperature monitoring, respectively.

    Accurate geometry-based mathematical models for dynamic thermal analyses can be established with the help of finite difference methods (FDM). However, the computational complexity resulting from the fine mesh and the large dimension of the ordinary differential equation (ODE) system matrix is a drawback for parametric studies. This thesis proposes a novel multi-parameter order reduction technique that can significantly improve simulation efficiency without a significant impact on prediction accuracy. Based on the block Arnoldi method, the technique is illustrated on a multi-chip power module connected to a forced-air cooling system with a plate-fin heatsink.

    In real-time temperature monitoring, more compact tools may be preferable, especially if operating and boundary conditions such as losses and cooling are not known accurately, as is often the case in practical applications. Compared with geometry-based models, which are better suited to the design of power modules, lumped-parameter compact thermal models are simpler and can be applied to real-time temperature prediction during the power cycles of power modules. This thesis proposes a reduced-order state-space observer to minimize the error caused by air temperature and air flow rate. Additionally, a novel feedback mechanism for disturbance estimation is introduced to compensate for the effects resulting from errors in the input power loss, air flow, and other nonlinearities.
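
    The following sketch illustrates the general idea of Krylov-subspace model order reduction behind block-Arnoldi-style approaches, on a toy 1-D heat-conduction system; it is a simplified stand-in, not the thesis's multi-parameter method, and the system matrices are illustrative.

```python
# Minimal sketch of Krylov-subspace model order reduction for a thermal
# ODE system dx/dt = A x + B u, y = C x: build an orthonormal basis V of
# span{B, AB, A^2 B, ...} and project the system onto it.

import numpy as np

def krylov_reduce(A, B, C, order):
    """Project (A, B, C) onto the block Krylov subspace of dimension `order`."""
    blocks = [B]
    while sum(b.shape[1] for b in blocks) < order:
        blocks.append(A @ blocks[-1])       # next block of the Krylov sequence
    K = np.hstack(blocks)[:, :order]
    V, _ = np.linalg.qr(K)                  # orthonormal basis of the subspace
    return V.T @ A @ V, V.T @ B, C @ V      # reduced (Ar, Br, Cr)

# Toy example: a 200-state 1-D heat-diffusion stencil reduced to 10 states.
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # tridiagonal Laplacian
B = np.zeros((n, 1)); B[0, 0] = 1.0                      # heat input at node 0
C = np.zeros((1, n)); C[0, -1] = 1.0                     # read temperature at node n-1
Ar, Br, Cr = krylov_reduce(A, B, C, order=10)
print(Ar.shape, Br.shape, Cr.shape)         # (10, 10) (10, 1) (1, 10)
```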

    A Framework for Approximate Optimization of BoT Application Deployment in Hybrid Cloud Environment

    We adopt a systematic approach to investigate the efficiency of near-optimal deployment of large-scale, CPU-intensive Bag-of-Tasks (BoT) applications running on cloud resources with non-proportional cost-to-performance ratios. Our analytical solutions apply whether the running time of the given application is known or unknown, and they optimize the user's utility by choosing the most desirable tradeoff between the makespan and the total incurred expense. We propose a scheme that provides a near-optimal deployment of a BoT application with respect to the user's preferences: the user is presented with a set of Pareto-optimal solutions and may select one of the possible scheduling points based on her internal utility function. Our framework can also cope with uncertainty in task execution times, using two methods. First, an estimation method based on Monte Carlo sampling, called the AA algorithm, is presented; it uses the minimum possible number of samples to predict the average task running time. Second, assuming access to code analyzers, code profiling, or estimation tools, a hybrid method is presented that evaluates the accuracy of each estimation tool over certain time intervals to improve resource allocation decisions. We propose approximate deployment strategies that run on a hybrid cloud. In essence, the proposed strategies first determine either an estimated or an exact optimal schema based on the information provided by the user and environmental parameters. Then, dynamic methods assign tasks to resources so as to approach the optimal schema as closely as possible, using two methods: a fast yet simple method based on the First Fit Decreasing algorithm, and a more complex approach based on an approximate solution of the problem transformed into a subset sum problem. Extensive experimental results on a hybrid cloud platform confirm that our framework can deliver a near-optimal solution respecting the user's utility function.
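
    As an illustration of the simpler of the two assignment methods, here is a minimal First Fit Decreasing sketch that packs task running times onto machines of fixed capacity; the single-capacity machine model is an assumed simplification of the framework's actual cost model.

```python
# Minimal First Fit Decreasing sketch: sort tasks by running time in
# decreasing order, then place each on the first machine with room,
# opening a new machine (e.g., a new VM lease) only when none fits.

def first_fit_decreasing(task_times, capacity):
    """Return a list of machines, each a list of assigned task times."""
    machines = []
    for t in sorted(task_times, reverse=True):
        for m in machines:
            if sum(m) + t <= capacity:
                m.append(t)
                break
        else:
            machines.append([t])            # no machine fits: open a new one

    return machines

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
# [[8, 2], [4, 4, 1, 1]] -> two machine leases instead of three
```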

    Wireless Sensor Network Deployment

    Wireless Sensor Networks (WSNs) are widely used for various civilian and military applications, and have thus attracted significant interest in recent years. This work investigates the important problem of optimal deployment of WSNs in terms of coverage and energy consumption. Five deployment algorithms are developed for maximal sensing range and minimal energy consumption, in order to provide optimal sensing coverage and maximum lifetime. All developed algorithms include self-healing capabilities that restore the operation of a WSN after a number of nodes have become inoperative.

    Two centralized optimization algorithms are developed, one based on Genetic Algorithms (GAs) and one based on Particle Swarm Optimization (PSO). Both use powerful central nodes to compute globally optimal outcomes. The GA is used to determine the optimal tradeoff between network coverage and the overall distance travelled by fixed-range sensors. The PSO algorithm is used to ensure 100% network coverage and minimize the energy consumed by mobile, range-adjustable sensors. The developed optimization algorithms provide energy savings of 30% to 90% in different scenarios, thereby extending sensor lifetime by a factor of 1.4 to 10.

    Three distributed optimization algorithms are also developed to relocate sensors and optimize the coverage of networks with more stringent design and cost constraints. Each algorithm is executed cooperatively by all sensors to achieve better coverage. Two of our algorithms use the relative positions between sensors to optimize coverage and energy savings, providing 20% to 25% more energy savings than existing solutions. Our third algorithm is developed for networks without self-localization capabilities and supports the optimal deployment of such networks without requiring expensive geolocation hardware or energy-consuming localization algorithms. This is important for indoor monitoring applications, since current localization algorithms cannot provide good accuracy for sensor relocation in such environments; moreover, no sensor redeployment algorithms that can operate without self-localization systems had been developed before this work.
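
    For illustration, the following is a generic PSO loop of the kind used by the centralized algorithm, with a toy objective standing in for the coverage/energy metric; all hyperparameters and the objective are illustrative assumptions, not those of the developed algorithm.

```python
# Generic particle swarm optimization loop: each particle tracks its own
# best position (pbest), the swarm tracks a global best (gbest), and
# velocities blend inertia with attraction toward both bests.

import random

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

# Toy objective: move two sensors toward target positions (1, 2) and (3, 4).
target = [1.0, 2.0, 3.0, 4.0]
best = pso(lambda x: sum((a - b) ** 2 for a, b in zip(x, target)), dim=4)
print([round(v, 2) for v in best])      # approximately [1.0, 2.0, 3.0, 4.0]
```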