    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter comprises thousands of servers connected by a large network and is usually managed by a single operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, and the general challenges and objectives of traffic control in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to give readers a broad view of the options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. (Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.)
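
    To make the class-based prioritization discussed above concrete, here is a minimal sketch of a strict-priority packet scheduler in Python. The three traffic classes mirror those named in the abstract, but the numeric ranks and the choice of strict priority (rather than, say, weighted fair queuing) are illustrative assumptions, not the paper's design.

        import heapq
        from itertools import count

        # Hypothetical priority ranks for the three traffic classes named in
        # the abstract; lower rank is served first (an assumption for
        # illustration, not taken from the paper).
        PRIORITY = {"interactive": 0, "deadline": 1, "long_running": 2}

        class StrictPriorityScheduler:
            """Dequeue packets from the highest-priority non-empty class first."""

            def __init__(self):
                self._heap = []
                self._seq = count()  # FIFO tie-breaker within a class

            def enqueue(self, traffic_class, packet):
                heapq.heappush(self._heap,
                               (PRIORITY[traffic_class], next(self._seq), packet))

            def dequeue(self):
                if not self._heap:
                    return None
                _, _, packet = heapq.heappop(self._heap)
                return packet

        # Usage: a queued interactive request is served before an earlier
        # long-running transfer.
        sched = StrictPriorityScheduler()
        sched.enqueue("long_running", "backup-chunk-1")
        sched.enqueue("interactive", "search-query")
        assert sched.dequeue() == "search-query"

    One caveat worth noting: strict priority can starve the long-running class under sustained interactive load, which is one reason datacenter schedulers often combine prioritization with rate limiting or weighted sharing.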

    Bridging the biodiversity data gaps: Recommendations to meet users’ data needs

    A strong case has been made for freely available, high-quality data on species occurrence in order to track changes in biodiversity. However, one of the main issues surrounding the provision of such data is that sources vary in quality, scope, and accuracy. Publishers of such data must therefore face the challenge of maximizing quality, utility, and breadth of coverage in order to make the data useful to users. Here, we report a number of recommendations that stem from a content needs assessment survey conducted by the Global Biodiversity Information Facility (GBIF). Through this survey, we aimed to distil the main user needs regarding biodiversity data. The survey respondents offered a broad range of recommendations, principally concerning data quality, bias, coverage, and ease of access. We recommend a candidate set of actions for GBIF that fall into three classes: 1) addressing data gaps, data volume, and data quality; 2) aggregating new kinds of data for new applications; and 3) promoting ease of use and providing incentives for wider use. Addressing the challenge of providing high-quality primary biodiversity data can potentially serve the needs of many international biodiversity initiatives, including the new 2020 biodiversity targets of the Convention on Biological Diversity, the emerging global biodiversity observation network (GEO BON), and the new Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

    A Logically Centralized Approach for Control and Management of Large Computer Networks

    Management of large enterprise and Internet Service Provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these limitations, the networking research community has been pursuing the vision of simplifying the functional role of a router to its primary task of packet forwarding. This enables centralizing network control at a decision plane where network-wide state can be maintained and network control can be centrally and consistently enforced. However, scalability and fault-tolerance concerns with physical centralization motivate the need for a more flexible and customizable approach. This dissertation is an attempt at bridging the gap between the extremes of distribution and centralization of network control. We present a logically centralized approach for the design of a network decision plane that can be realized using a set of physically distributed controllers in a network. This approach is aimed at giving network designers the ability to customize the level of control and management centralization according to the scalability, fault-tolerance, and responsiveness requirements of their networks. Our thesis is that logical centralization provides a robust, reliable, and efficient paradigm for management of large networks, and we present several contributions to prove this thesis. For network planning, we describe techniques for optimizing the placement of network controllers and provide guidance on the physical design of logically centralized networks. For network operation, we present algorithms for maintaining dynamic associations between the decision plane and network devices, along with a protocol that allows a set of network controllers to coordinate their decisions and present a unified interface to the managed network devices. Furthermore, we study the trade-offs in decision plane application design and provide guidance on application state and logic distribution. Finally, we present results of extensive numerical and simulative analysis of the feasibility and performance of our approach. The results show that logical centralization can provide better scalability and fault tolerance while maintaining performance comparable to the traditional distributed approach.
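
    As one way to picture the controller placement problem the dissertation mentions, the sketch below greedily picks k controller sites to minimize total switch-to-controller latency (a k-median-style heuristic). The greedy heuristic, the latency matrix, and k are assumptions for illustration; the dissertation's actual optimization techniques may differ.

        def greedy_controller_placement(latency, k):
            """latency[i][j] is the latency from switch i to candidate site j.
            Greedily choose k sites minimizing the total latency from every
            switch to its nearest chosen controller."""
            n_switches = len(latency)
            n_sites = len(latency[0])
            chosen = []
            for _ in range(k):
                best_site, best_cost = None, float("inf")
                for s in range(n_sites):
                    if s in chosen:
                        continue
                    trial = chosen + [s]
                    cost = sum(min(latency[i][j] for j in trial)
                               for i in range(n_switches))
                    if cost < best_cost:
                        best_site, best_cost = s, cost
                chosen.append(best_site)
            return chosen

        # Toy 4-switch, 4-site latency matrix (milliseconds, made up).
        latency = [
            [1, 5, 9, 4],
            [5, 1, 4, 9],
            [9, 4, 1, 5],
            [4, 9, 5, 1],
        ]
        print(greedy_controller_placement(latency, 2))  # -> [0, 1]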

    Frictionless Authentication Systems: Emerging Trends, Research Challenges and Opportunities

    Authentication and authorization are critical security layers protecting a wide range of online systems, services, and content. However, the increasing prevalence of wearable and mobile devices, the expectation of a frictionless experience, and the diversity of user environments will challenge the way users are authenticated. Consumers demand secure and privacy-aware access from any device, whenever and wherever they are, without any obstacles. This paper reviews emerging trends and challenges in frictionless authentication systems and identifies opportunities for further research related to the enrollment of users, the usability of authentication schemes, and the security and privacy trade-offs of mobile and wearable continuous authentication systems. (Comment: Published at the 11th International Conference on Emerging Security Information, Systems and Technologies (SECURWARE 2017).)
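
    To illustrate what the decision logic of a continuous authentication system might look like, the sketch below maintains a confidence score that decays over time and is refreshed by behavioral match signals; when it drops below a threshold, the user must re-authenticate explicitly. The exponential-decay model, blend weights, half-life, and threshold are all illustrative assumptions, not a scheme from the paper.

        import time

        class ContinuousAuthenticator:
            def __init__(self, half_life_s=300.0, threshold=0.5):
                self.half_life_s = half_life_s  # confidence halves every 5 min
                self.threshold = threshold      # below this, re-authenticate
                self.confidence = 1.0           # assume a fresh explicit login
                self.last_update = time.monotonic()

            def _decay(self):
                now = time.monotonic()
                elapsed = now - self.last_update
                self.confidence *= 0.5 ** (elapsed / self.half_life_s)
                self.last_update = now

            def observe(self, match_score):
                """Blend in a behavioral match score in [0, 1], e.g. from a
                gait or keystroke-dynamics classifier (hypothetical input)."""
                self._decay()
                self.confidence = 0.7 * self.confidence + 0.3 * match_score

            def is_authenticated(self):
                self._decay()
                return self.confidence >= self.threshold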

    Balancing measures or a balanced accounting of improvement impact: a qualitative analysis of individual and focus group interviews with improvement experts in Scotland

    Background: As quality improvement (QI) programmes have become progressively larger in scale, the risk that implementation will have unintended consequences is increasingly recognised. More routine use of balancing measures to monitor unintended consequences has been proposed as a way to evaluate overall effectiveness, but in practice published improvement interventions hardly ever report the identification or measurement of consequences other than the intended goals of improvement.
    Methods: We conducted 15 semistructured interviews and two focus groups with 24 improvement experts to explore the current understanding of balancing measures in QI and to inform a more balanced accounting of the overall impact of improvement interventions. Data were analysed iteratively using the framework approach.
    Results: Participants described the consequences of improvement in terms of their desirability/undesirability and the extent to which they were expected/unexpected when planning the improvement. Four types of consequences were defined: expected desirable consequences (goals); expected undesirable consequences (trade-offs); unexpected undesirable consequences (unpleasant surprises); and unexpected desirable consequences (pleasant surprises). Unexpected consequences were considered important but are rarely measured in existing programmes; an improvement pause to take stock after implementation would allow them to be more actively identified and managed. A balanced accounting of all consequences of improvement interventions can facilitate staff engagement and reduce resistance to change, but has to be offset against the cost of additional data collection.
    Conclusion: Improvement measurement is usually focused on intended goals, with minimal use of balancing measures, which, when used, typically monitor trade-offs expected before implementation. This paper proposes that improvers and leaders should seek a balanced accounting of all consequences of improvement across the life of an improvement programme, including deliberately pausing after implementation to identify and evaluate, quantitatively or qualitatively, any pleasant or unpleasant surprises.
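
    The four consequence types form a simple two-by-two classification over expected/unexpected and desirable/undesirable, which the toy function below makes explicit; the label strings are taken directly from the abstract.

        def classify_consequence(expected: bool, desirable: bool) -> str:
            if expected and desirable:
                return "goal"                 # expected desirable
            if expected and not desirable:
                return "trade-off"            # expected undesirable
            if not expected and not desirable:
                return "unpleasant surprise"  # unexpected undesirable
            return "pleasant surprise"        # unexpected desirable

        assert classify_consequence(expected=False, desirable=True) == \
            "pleasant surprise"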

    Games for a new climate: experiencing the complexity of future risks

    This repository item contains a single issue of the Pardee Center Task Force Reports, a publication series begun in 2009 by the Boston University Frederick S. Pardee Center for the Study of the Longer-Range Future. This report is a product of the Pardee Center Task Force on Games for a New Climate, which met at Pardee House at Boston University in March 2012. The 12-member Task Force was convened on behalf of the Pardee Center by Visiting Research Fellow Pablo Suarez in collaboration with the Red Cross/Red Crescent Climate Centre to “explore the potential of participatory, game-based processes for accelerating learning, fostering dialogue, and promoting action through real-world decisions affecting the longer-range future, with an emphasis on humanitarian and development work, particularly involving climate risk management.” Compiled and edited by Janot Mendler de Suarez, Pablo Suarez, and Carina Bachofen, the report includes contributions from all of the Task Force members and provides a detailed exploration of the current and potential ways in which games can be used to help a variety of stakeholders – including subsistence farmers, humanitarian workers, scientists, policymakers, and donors – both understand and experience the difficulty and risks involved in decision-making in a complex and uncertain future. The dozen Task Force experts who contributed to the report represent academic institutions, humanitarian organizations, other non-governmental organizations, and game design firms, with backgrounds ranging from climate modeling and anthropology to community-level disaster management and national and global policymaking, as well as game design.