    Distributed Clustering in Cognitive Radio Ad Hoc Networks Using Soft-Constraint Affinity Propagation

    Absence of network infrastructure and heterogeneous spectrum availability in cognitive radio ad hoc networks (CRAHNs) necessitate the self-organization of cognitive radio users (CRs) for efficient spectrum coordination. A cluster-based structure is known to be effective both in guaranteeing system performance and in reducing communication overhead in a variable network environment. In this paper, we propose a distributed clustering algorithm based on the soft-constraint affinity propagation message-passing model (DCSCAP). Without depending on a predefined common control channel (CCC), DCSCAP relies on distributed message passing among CRs over their available channels, making the algorithm applicable to large-scale networks. Unlike the original soft-constraint affinity propagation algorithm, the maximum number of message-passing iterations is kept relatively small to accommodate the dynamic environment of CRAHNs. Based on the evidence accumulated during message passing, clusters are formed with the objective of grouping CRs with similar spectrum availability into a small number of clusters while guaranteeing at least one CCC in each cluster. Extensive simulation results demonstrate the advantage of DCSCAP over existing algorithms in both the efficiency and the robustness of the resulting clusters.
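    To make the clustering idea concrete, the sketch below runs standard affinity propagation message passing with the iteration count capped at a small value, in the spirit of DCSCAP. It is a centralized stand-in for illustration only: the similarity metric (Jaccard overlap of available channel sets), the preference setting, and all parameter values are assumptions, and the actual algorithm exchanges these messages among CRs over their available channels rather than inside one process.

    import numpy as np

    def jaccard(a, b):
        """Similarity between two CRs: overlap of their available channel sets."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def affinity_propagation(S, max_iter=15, damping=0.7):
        """Plain AP message passing, stopped after a small fixed number of
        iterations (DCSCAP similarly caps message passing for dynamic CRAHNs)."""
        n = S.shape[0]
        R = np.zeros((n, n))   # responsibility messages
        A = np.zeros((n, n))   # availability messages
        for _ in range(max_iter):
            # responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
            AS = A + S
            idx = np.argmax(AS, axis=1)
            first_max = AS[np.arange(n), idx]
            AS[np.arange(n), idx] = -np.inf
            second_max = AS.max(axis=1)
            R_new = S - first_max[:, None]
            R_new[np.arange(n), idx] = S[np.arange(n), idx] - second_max
            R = damping * R + (1 - damping) * R_new
            # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, np.diag(R))
            A_new = np.minimum(0, Rp.sum(axis=0)[None, :] - Rp)
            np.fill_diagonal(A_new, Rp.sum(axis=0) - np.diag(Rp))
            A = damping * A + (1 - damping) * A_new
        exemplars = np.where(np.diag(R + A) > 0)[0]
        if len(exemplars) == 0:                      # fall back to the best candidate
            exemplars = np.array([int(np.argmax(np.diag(R + A)))])
        labels = exemplars[np.argmax(S[:, exemplars] + A[:, exemplars], axis=1)]
        return exemplars, labels

    # Toy example: five CRs with overlapping available channel sets.
    channels = [{1, 2, 3}, {2, 3}, {2, 3, 4}, {7, 8}, {7, 8, 9}]
    S = np.array([[jaccard(a, b) for b in channels] for a in channels])
    np.fill_diagonal(S, np.median(S))   # self-similarity acts as the exemplar preference
    heads, labels = affinity_propagation(S)
    print("cluster heads:", heads, "membership:", labels)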

    Optimal Orchestration of Virtual Network Functions

    The emergence of Network Functions Virtualization (NFV) is bringing a set of novel algorithmic challenges to the operation of communication networks. NFV introduces volatility into the management of network functions, which can be dynamically orchestrated, i.e., placed, resized, etc. Virtual Network Functions (VNFs) can belong to VNF chains, where nodes in a chain can serve multiple demands coming from the network edges. In this paper, we formally define the VNF placement and routing (VNF-PR) problem, proposing a versatile linear programming formulation that is able to accommodate specific features and constraints of NFV infrastructures, and that is substantially different from existing virtual network embedding formulations in the state of the art. We also design a math-heuristic able to scale to multiple objectives and large instances. Through extensive simulations, we draw conclusions on the trade-off achievable between classical traffic engineering (TE) and NFV infrastructure efficiency goals, evaluating both Internet access and Virtual Private Network (VPN) demands. We also quantitatively compare the performance of our VNF-PR heuristic with the classical Virtual Network Embedding (VNE) approach proposed for NFV orchestration, showing the computational differences and how our approach can provide a more stable and closer-to-optimum solution.
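    For a flavor of what an ILP-based placement formulation looks like, the toy model below places a handful of VNFs on capacitated NFV nodes while minimizing the number of active nodes. It is a deliberately simplified sketch, not the VNF-PR formulation from the paper: the node capacities, VNF demands, objective, and the use of the PuLP package (with its bundled CBC solver) are all illustrative assumptions, and routing, chaining, and TE objectives are omitted.

    import pulp

    node_cap = {"n1": 10, "n2": 8, "n3": 6}            # CPU capacity per NFV node
    vnf_dem  = {"fw": 4, "nat": 3, "dpi": 5, "lb": 2}  # CPU demand per VNF

    prob = pulp.LpProblem("toy_vnf_placement", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("place", (list(vnf_dem), list(node_cap)), cat="Binary")
    y = pulp.LpVariable.dicts("open", list(node_cap), cat="Binary")

    # Objective: activate as few NFV nodes as possible.
    prob += pulp.lpSum(y[n] for n in node_cap)

    # Each VNF is placed exactly once.
    for f in vnf_dem:
        prob += pulp.lpSum(x[f][n] for n in node_cap) == 1

    # Respect node capacity, and only place VNFs on activated nodes.
    for n in node_cap:
        prob += pulp.lpSum(vnf_dem[f] * x[f][n] for f in vnf_dem) <= node_cap[n] * y[n]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for f in vnf_dem:
        for n in node_cap:
            if x[f][n].value() > 0.5:
                print(f"{f} -> {n}")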

    From supply chains to demand networks. Agents in retailing: the electrical bazaar

    A paradigm shift is taking place in logistics. The focus is changing from operational effectiveness to adaptation. Supply chains will develop into networks that adapt to consumer demand in almost real time. Time to market, capacity for adaptation, and enrichment of the customer experience seem to be the key elements of this new paradigm. In this environment, emerging technologies such as RFID (Radio Frequency ID), intelligent products, and the Internet are triggering a reconsideration of methods, procedures, and goals. We present a Multiagent System framework specialized in retail that addresses these changes with the use of rational agents and takes advantage of the new market opportunities. As in an old bazaar, agents able to learn, cooperate, exploit gossip, and distinguish between collaborators and competitors can adapt and react to a changing environment better than any other structure. Keywords: Supply Chains, Distributed Artificial Intelligence, Multiagent System.
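    The sketch below illustrates, in miniature, the kind of adaptive behaviour such agents could exhibit: an agent smooths its own sales signal together with a hint ("gossip") received from collaborating agents and derives a replenishment quantity from the resulting forecast. The replenishment rule, the weighting of the gossip signal, and all numbers are illustrative assumptions, not the paper's agent design.

    class RetailAgent:
        """A toy demand-network agent that adapts its orders to observed demand."""

        def __init__(self, name, stock=50, alpha=0.3):
            self.name = name
            self.stock = stock
            self.forecast = 0.0    # exponentially smoothed demand estimate
            self.alpha = alpha     # smoothing factor

        def observe_demand(self, sold, gossip=0.0):
            """Blend own sales with a demand hint shared by collaborating agents."""
            signal = 0.8 * sold + 0.2 * gossip
            self.forecast = self.alpha * signal + (1 - self.alpha) * self.forecast
            self.stock -= sold

        def reorder_quantity(self, lead_time=2, safety=5):
            """Order enough to cover forecast demand over the lead time plus a safety stock."""
            target = self.forecast * lead_time + safety
            return max(0, round(target - self.stock))

    agent = RetailAgent("store_42")
    for sold, gossip in [(12, 10), (15, 14), (9, 11)]:
        agent.observe_demand(sold, gossip)
    print("suggested reorder:", agent.reorder_quantity())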

    Static and Dynamic Scheduling for Effective Use of Multicore Systems

    Multicore systems have increasingly gained importance in high performance computers. Compared to traditional microarchitectures, multicore architectures have a simpler design, a higher performance-to-area ratio, and improved power efficiency. Although the multicore architecture has various advantages, traditional parallel programming techniques do not apply to the new architecture efficiently. This dissertation addresses how to determine optimized thread schedules to improve data reuse on shared-memory multicore systems and how to design scalable parallel software on both shared-memory and distributed-memory multicore systems. We propose an analytical cache model to predict the number of cache misses in the time-shared L2 cache on a multicore processor. The model provides insight into the impact of cache sharing and cache contention between threads. Inspired by the model, we build an affinity-based thread scheduling framework to determine optimized thread schedules that improve data reuse at all levels of a complex memory hierarchy. The framework includes a model to estimate the cost of a thread schedule, which consists of three submodels: an affinity graph submodel, a memory hierarchy submodel, and a cost submodel. Based on this model, we design a hierarchical graph partitioning algorithm to determine near-optimal solutions, and we extend the algorithm to support threads with data dependences. The algorithms are implemented and incorporated into a feedback-directed optimization prototype system. The prototype system builds upon a binary instrumentation tool and can greatly improve program performance on shared-memory multicore architectures. We also study a dynamic, data-availability-driven scheduling approach to designing new parallel software on distributed-memory multicore architectures, for which we have implemented a decentralized dynamic runtime system. The design of the runtime system focuses on scalability: at any time, only a small portion of the task graph exists in memory. We propose an algorithm to resolve data dependences in a distributed manner, without process cooperation. Our experimental results demonstrate the scalability and practicality of the approach for both shared-memory and distributed-memory multicore systems. Finally, we present a scalable non-blocking topology-aware multicast scheme for distributed DAG scheduling applications.
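    The sketch below captures the core of the affinity-based idea in a few lines: threads are nodes of an affinity graph whose edge weights approximate how much data each pair shares, and the scheduler greedily co-locates the most heavily sharing pairs on the same core. The greedy grouping is a simple stand-in for the hierarchical graph partitioning described above, and the affinity weights, core count, and per-core capacity are made-up illustrative values.

    # affinity[(i, j)] ~ amount of data (e.g., cache lines) shared by threads i and j
    affinity = {
        (0, 1): 90, (0, 2): 10, (0, 3): 5,
        (1, 2): 15, (1, 3): 8,  (2, 3): 85,
    }

    def schedule(num_threads, cores, capacity):
        """Greedily place the most data-sharing thread pairs on the same core."""
        assignment = {}                       # thread -> core
        load = {c: 0 for c in range(cores)}   # threads currently on each core
        for (a, b), _w in sorted(affinity.items(), key=lambda kv: -kv[1]):
            for t in (a, b):
                if t in assignment:
                    continue
                partner = b if t == a else a
                core = assignment.get(partner)         # prefer the partner's core
                if core is None or load[core] >= capacity:
                    core = min(load, key=load.get)     # otherwise the least-loaded core
                assignment[t] = core
                load[core] += 1
        for t in range(num_threads):                   # threads that share nothing
            if t not in assignment:
                core = min(load, key=load.get)
                assignment[t] = core
                load[core] += 1
        return assignment

    print(schedule(num_threads=4, cores=2, capacity=2))   # e.g. {0: 0, 1: 0, 2: 1, 3: 1}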

    Optimal configuration of active and backup servers for augmented reality cooperative games

    Interactive applications such as online games, together with mobile devices, have become more and more popular in recent years. From their combination, new and interesting cooperative services could be generated. For instance, gamers equipped with Augmented Reality (AR) visors, connected as wireless nodes in an ad-hoc network, can interact with each other while immersed in the game. To enable this vision, we discuss here a hybrid architecture that enables game play in ad-hoc mode instead of the traditional client-server setting. In our architecture, one of the player nodes also acts as the server of the game, whereas other backup server nodes stand ready to become the active server in case of network disconnection, e.g., due to a low energy level of the currently active server. This allows for a longer gaming session before incurring disconnections or energy exhaustion. In this context, choosing a server election strategy that maximizes network lifetime is not straightforward. We have therefore analyzed this issue through a Mixed Integer Linear Programming (MILP) model, and both numerical and simulation-based analyses show that the backup-server solution fulfills its design objective.
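    A minimal way to see the server-election trade-off is the brute-force sketch below: for each candidate node, it computes how long the session lasts until the first battery dies if that node acts as the game server, and elects the candidate that maximizes this lifetime. The energy-drain model (a fixed per-slot cost that is higher for the server) and the numbers are illustrative assumptions, far simpler than the MILP model of the paper.

    battery = {"A": 900.0, "B": 650.0, "C": 780.0, "D": 500.0}  # residual energy (J)
    drain_client = 1.0   # J per time slot as a normal player node
    drain_server = 3.0   # J per time slot as the active game server

    def lifetime_if_server(server):
        """Time slots until the first node exhausts its battery with `server` active."""
        return min(
            battery[n] / (drain_server if n == server else drain_client)
            for n in battery
        )

    elected = max(battery, key=lifetime_if_server)
    print("elected server:", elected, "network lifetime:", lifetime_if_server(elected))

    Re-running this election as batteries drain gives the backup-server behaviour described above: when the current server's residual energy makes it no longer the best choice, a backup node takes over as the active server.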

    Improving network connection locality on multicore systems

    Incoming and outgoing processing for a given TCP connection often execute on different cores: an incoming packet is typically processed on the core that receives the interrupt, while outgoing data processing occurs on the core running the relevant user code. As a result, accesses to read/write connection state (such as TCP control blocks) often involve cache invalidations and data movement between cores' caches. These can take hundreds of processor cycles, enough to significantly reduce performance. We present a new design, called Affinity-Accept, that causes all processing for a given TCP connection to occur on the same core. Affinity-Accept arranges for the network interface to determine the core on which application processing for each new connection occurs, in a lightweight way; it adjusts the card's choices only in response to imbalances in CPU scheduling. Measurements show that for the Apache web server serving static files on a 48-core AMD system, Affinity-Accept reduces time spent in the TCP stack by 30% and improves overall throughput by 24%.
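    The user-level sketch below conveys the affinity idea by analogy: each connection's 4-tuple is hashed to one "core" (here, a single-threaded worker), and every packet of that connection is processed by that worker, so the per-connection state never bounces between caches. This is not the in-kernel Affinity-Accept mechanism; the hash, worker pool, and state table are illustrative assumptions.

    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    NUM_CORES = 4
    # One single-threaded worker per "core": all work for a given connection is
    # submitted to the same worker, so its connection state stays local to it.
    workers = [ThreadPoolExecutor(max_workers=1) for _ in range(NUM_CORES)]
    conn_state = [dict() for _ in range(NUM_CORES)]   # per-core connection tables

    def core_for(src_ip, src_port, dst_ip, dst_port):
        """Map a TCP 4-tuple to a core, like a receive-side steering hash."""
        key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
        return int.from_bytes(hashlib.blake2b(key, digest_size=2).digest(), "big") % NUM_CORES

    def handle_packet(four_tuple, payload_len):
        core = core_for(*four_tuple)
        def work():
            state = conn_state[core].setdefault(four_tuple, {"bytes": 0})
            state["bytes"] += payload_len      # connection state touched on one core only
        workers[core].submit(work).result()

    handle_packet(("10.0.0.1", 40000, "10.0.0.2", 80), 1460)
    handle_packet(("10.0.0.1", 40000, "10.0.0.2", 80), 512)
    print(conn_state)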