16 research outputs found

    Exascale machines require new programming paradigms and runtimes

    Extreme-scale parallel computing systems will have tens of thousands of optionally accelerator-equipped nodes with hundreds of cores each, as well as deep memory hierarchies and complex interconnect topologies. Such Exascale systems will provide hardware parallelism at multiple levels and will be energy constrained. Their extreme scale and the rapidly deteriorating reliability of their hardware components mean that Exascale systems will exhibit low mean-time-between-failure values. Furthermore, existing programming models already require heroic programming and optimisation efforts to achieve high efficiency on current supercomputers. Invariably, these efforts are platform-specific and non-portable. In this paper we explore the shortcomings of existing programming models and runtime systems for large-scale computing systems. We then propose and discuss important features of programming paradigms and runtime systems for large-scale computing systems, with a special focus on data-intensive applications and resilience. Finally, we discuss code sustainability issues and propose several software metrics that are of paramount importance for code development for large-scale computing systems
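    The resilience argument rests on a simple scaling relation: if node failures are independent, the system-level mean time between failures shrinks inversely with the number of nodes. A minimal illustration of that relation (the formula's failure model and the sample numbers are our assumptions, not figures from the paper):

```python
def system_mtbf(node_mtbf_hours: float, num_nodes: int) -> float:
    """System-level MTBF assuming independent, exponentially
    distributed node failures: failure rates add, so MTBF divides."""
    return node_mtbf_hours / num_nodes

# A generous per-node MTBF of ~5 years (43,800 h) across 100,000 nodes
# still leaves well under an hour between system-level failures.
print(system_mtbf(43_800, 100_000))  # → 0.438
```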

    Overhead in available bandwidth estimation tools: Evaluation and analysis

    Current Available Bandwidth Estimation Tools (ABET) insert probing packets into the network to perform a single estimation. These packets make ABET intrusive and prone to errors, since they consume part of the available bandwidth they are measuring. This paper presents a comparative analysis of the Overhead of Estimation Tools (OET) for representative ABET: Abing, Diettopp, Pathload, PathChirp, Traceband, IGI, PTR, Assolo, and Wbest. Using real Internet traffic, the study shows that the insertion of probing packets affects two metrics associated with the estimation. First, accuracy degrades in proportion to the amount of probing traffic. Second, the Estimation Time (ET) increases on highly congested end-to-end links when self-induced-congestion tools are used
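    The intrusiveness the paper quantifies follows from how these tools work: several of them (e.g. IGI, Abing) rely on a probe-gap model, where the dilation of the spacing between a probe-packet pair at the receiver, relative to the sender spacing, reveals the cross-traffic rate. A minimal sketch of that model (function name, variable names, and the sample figures are ours, for illustration only):

```python
def probe_gap_estimate(capacity_mbps: float, gap_in_us: float,
                       gap_out_us: float) -> float:
    """Available bandwidth via the probe-gap model.

    Cross traffic queued between a probe pair dilates its gap:
        cross = capacity * (gap_out - gap_in) / gap_in
        avail = capacity - cross
    """
    cross = capacity_mbps * (gap_out_us - gap_in_us) / gap_in_us
    return max(0.0, capacity_mbps - cross)

# 100 Mb/s bottleneck; probe gaps dilated from 120 us to 150 us.
print(probe_gap_estimate(100.0, 120.0, 150.0))  # → 75.0
```

The probes themselves consume bandwidth while the measurement runs, which is exactly the overhead (OET) the paper evaluates.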

    Overhead in Available Bandwidth Estimation Tools: Evaluation and Analysis

    To perform an estimation, current Available Bandwidth Estimation Tools (ABETs) insert probe packets into the network. These packets make ABETs intrusive: they consume part of the very bandwidth being measured, and this overhead, known as the Overhead of Estimation Tools (OET), can distort the measurements the ABETs produce. This paper presents a complete comparative analysis of the available bandwidth (av_bw) behaviour of the most representative ABETs: Abing, Diettopp, Pathload, PathChirp, Traceband, IGI, PTR, Assolo and Wbest. A study with real Internet traffic shows that the percentage of probe packets affects two main aspects of the estimation. First, accuracy: the relative error (RE) grows in direct proportion to the OET, reaching up to 70% in the evaluated tools when there is more than 30% cross-traffic (CT). Second, the technique used to send probe packets strongly influences the Estimation Time (ET): tools based on self-induced congestion take up to 240 s to converge when there is 60% CT in the network, because on a highly congested channel this technique generates the most OET, resulting in measurement inaccuracies

    Recent Trends in Communication Networks

    In recent years there have been many developments in communication technology, which have greatly enhanced the computing power of small, handheld, resource-constrained mobile devices. Different generations of communication technology have evolved. This has led to new research on communicating large volumes of data over different transmission media and on the design of different communication protocols. Another direction of research concerns secure and error-free communication between sender and receiver despite the possible presence of an eavesdropper. To meet the communication requirements of large amounts of multimedia streaming data, a great deal of research has been carried out on the design of proper overlay networks. The book addresses new research techniques that have evolved to handle these challenges

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations of researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS enables innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, who present their ongoing research activities and foster research relations. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering

    Agent organization in the KP

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 181-191). In designing and building a network like the Internet, we continue to face the problems of scale and distribution. With the dramatic expansion in scale and heterogeneity of the Internet, network management has become an increasingly difficult task. Furthermore, network applications often need to maintain efficient organization among the participants by collecting information from the underlying networks. Such individual information collection activities lead to duplicate efforts and contention for network resources. The Knowledge Plane (KP) is a new common construct that provides knowledge and expertise to meet the functional, policy and scaling requirements of network management, as well as to create synergy and exploit commonality among many network applications. To achieve these goals, we face many challenging problems, including widely distributed data collection, efficient processing of that data, wide availability of the expertise, etc. In this thesis, to provide better support for network management and large-scale network applications, I propose a knowledge plane architecture that consists of a network knowledge plane (NetKP) at the network layer, and on top of it, multiple specialized KPs (spec-KPs). The NetKP organizes agents to provide valuable knowledge and facilities about the Internet to the spec-KPs. Each spec-KP is specialized in its own area of interest. In both the NetKP and the spec-KPs, agents are organized into regions based on different sets of constraints. 
    I focus on two key design issues in the NetKP: (1) a region-based architecture for agent organization, in which I design an efficient and non-intrusive organization among regions that combines network topology and a distributed hash table; (2) request and knowledge dissemination, in which I design a robust and efficient broadcast and aggregation mechanism using a tree structure among regions. In the spec-KPs, I build two examples: experiment management on the PlanetLab testbed and distributed intrusion detection on the DETER testbed. The experiment results suggest that a common approach driven by the design principles of the Internet and more specialized constraints can derive productive organization for network management and applications. by Ji Li. Ph.D.
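    The broadcast-and-aggregation mechanism among regions can be pictured as a tree rooted at the requesting region: a request flows down the tree, and each region folds its children's replies into its own before answering upward. A schematic sketch of that pattern (the data structure and the per-region value are our illustration, not the thesis's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    local_value: int                       # e.g. agents this region knows about
    children: list["Region"] = field(default_factory=list)

def aggregate(region: Region) -> int:
    """Disseminate a request down the region tree and aggregate replies upward."""
    return region.local_value + sum(aggregate(child) for child in region.children)

root = Region("r0", 3, [Region("r1", 5),
                        Region("r2", 2, [Region("r3", 4)])])
print(aggregate(root))  # → 14
```

The tree keeps both dissemination and aggregation at O(n) messages for n regions, with each region contacting only its parent and children.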

    Supporting Internet Access and Quality of Service in Distributed Wireless Ad Hoc Networks

    In this era of wireless hysteria, with continuous technological advances in wireless communication and new wireless technologies becoming standardized at a fast rate, we can expect increased interest in wireless networks, such as ad hoc and mesh networks. These networks operate in a distributed manner, independent of any centralized device. In order to realize the practical benefits of ad hoc networks, two challenges (among others) need to be considered: distributed QoS guarantees and multi-hop Internet access. In this thesis we present conceivable solutions to both of these problems. An autonomous, stand-alone ad hoc network is useful in many cases, such as search and rescue operations and meetings where participants wish to quickly share information. However, an ad hoc network connected to the Internet is even more desirable, because the Internet plays an important role in the daily life of many people by offering a broad range of services. In this thesis we present AODV+, which is our solution to achieve this network interconnection between a wireless ad hoc network and the wired Internet. Providing QoS in distributed wireless networks is another challenging yet important task, mainly because there is no central device controlling the medium access. In this thesis we propose EDCA with Resource Reservation (EDCA/RR), which is a fully distributed MAC scheme that provides QoS guarantees by allowing applications with strict QoS requirements to reserve transmission time for contention-free medium access. Our scheme is compatible with existing standards and provides both parameterized and prioritized QoS. In addition, we present the Distributed Deterministic Channel Access (DDCA) scheme, which is a multi-hop extension of EDCA/RR and can be used in wireless mesh networks. Finally, we have complemented our simulation studies with real-world ad hoc and mesh network experiments. 
With the experience from these experiments, we obtained a clear insight into the limitations of wireless channels. We could conclude that a wise design of the network architecture that limits the number of consecutive wireless hops may result in a wireless mesh network that is able to satisfy users’ needs. Moreover, by using QoS mechanisms like EDCA/RR or DDCA we are able to provide different priorities to traffic flows and reserve resources for the most time-critical applications
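    At the heart of a reservation-based MAC scheme such as EDCA/RR is admission control: a flow asks for a periodic slice of contention-free transmission time, and the request is granted only if the time already reserved in each service period leaves room. A deliberately simplified admission check (the period length, units, and function shape are our assumptions, not the EDCA/RR specification):

```python
def admit(reserved_us: list[int], request_us: int,
          period_us: int = 100_000) -> bool:
    """Grant a new reservation only if the total reserved transmission
    time still fits within one service period."""
    return sum(reserved_us) + request_us <= period_us

schedule = [20_000, 30_000]      # 50 ms of a 100 ms period already reserved
print(admit(schedule, 40_000))   # → True  (90 ms fits in 100 ms)
print(admit(schedule, 60_000))   # → False (110 ms exceeds the period)
```

A real scheme must also propagate granted reservations to neighbors so all contenders defer during reserved slots; this sketch covers only the local arithmetic of the admission decision.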

    Agent Organization in the Knowledge Plane

    In designing and building a network like the Internet, we continue to face the problems of scale and distribution. With the dramatic expansion in scale and heterogeneity of the Internet, network management has become an increasingly difficult task. Furthermore, network applications often need to maintain efficient organization among the participants by collecting information from the underlying networks. Such individual information collection activities lead to duplicate efforts and contention for network resources. The Knowledge Plane (KP) is a new common construct that provides knowledge and expertise to meet the functional, policy and scaling requirements of network management, as well as to create synergy and exploit commonality among many network applications. To achieve these goals, we face many challenging problems, including widely distributed data collection, efficient processing of that data, wide availability of the expertise, etc. In this thesis, to provide better support for network management and large-scale network applications, I propose a knowledge plane architecture that consists of a network knowledge plane (NetKP) at the network layer, and on top of it, multiple specialized KPs (spec-KPs). The NetKP organizes agents to provide valuable knowledge and facilities about the Internet to the spec-KPs. Each spec-KP is specialized in its own area of interest. In both the NetKP and the spec-KPs, agents are organized into regions based on different sets of constraints. I focus on two key design issues in the NetKP: (1) a region-based architecture for agent organization, in which I design an efficient and non-intrusive organization among regions that combines network topology and a distributed hash table; (2) request and knowledge dissemination, in which I design a robust and efficient broadcast and aggregation mechanism using a tree structure among regions. 
    In the spec-KPs, I build two examples: experiment management on the PlanetLab testbed and distributed intrusion detection on the DETER testbed. The experiment results suggest that a common approach driven by the design principles of the Internet and more specialized constraints can derive productive organization for network management and applications