
    Long-Range Correlations and Memory in the Dynamics of Internet Interdomain Routing

    Data transfer is one of the main functions of the Internet. The Internet consists of a large number of interconnected subnetworks or domains, known as Autonomous Systems. Due to privacy and other reasons, the information about which route to use to reach devices within other Autonomous Systems is not readily available to any given Autonomous System. The Border Gateway Protocol is responsible for discovering and distributing this reachability information to all Autonomous Systems. Since the topology of the Internet is highly dynamic, all Autonomous Systems constantly exchange and update this reachability information in small chunks, known as routing control packets or Border Gateway Protocol updates. Motivated by scalability and predictability issues with the dynamics of these updates in the quickly growing Internet, we conduct a systematic time series analysis of Border Gateway Protocol update rates. We find that Border Gateway Protocol update time series are extremely volatile and exhibit long-term correlations and memory effects, similar to seismic time series or temperature and stock market price fluctuations. The presented statistical characterization of Border Gateway Protocol update dynamics could serve as a ground truth for validating existing models of Internet interdomain routing and for developing better ones.
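The kind of autocorrelation analysis the abstract describes can be sketched in a few lines. The snippet below is an illustration only: it uses a synthetic AR(1) series as a stand-in for a BGP update-rate series (a true long-range-dependent process would need a fractional model), and shows how correlation at increasing lags reveals memory in the signal.

```python
import random
import statistics

def autocorr(x, lag):
    """Sample autocorrelation of series x at a given lag."""
    n = len(x)
    mean = statistics.fmean(x)
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# Synthetic stand-in for an update-rate series, with memory:
# an AR(1) process, x_t = 0.9 * x_{t-1} + noise.
random.seed(1)
series, x = [], 0.0
for _ in range(5000):
    x = 0.9 * x + random.gauss(0, 1)
    series.append(x)

# A series with memory keeps sizeable correlation at lag 1 and decays
# slowly, whereas white noise would drop to ~0 almost immediately.
print(autocorr(series, 1), autocorr(series, 20))
```

For real BGP data one would replace the synthetic series with observed per-interval update counts and examine how slowly the autocorrelation decays with lag.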

    The Origins of ccTLD Policymaking

    Extract: A long time ago in a galaxy not so far away, there was a decentralized global network of computers. These computers shared information with each other regardless of how far apart they were and whether there was any direct line of communication between them. In the very beginning, this network was used exclusively by government and military agencies, educational and research institutions, government contractors, scientists, and technology specialists. Instead of the domain names we use today, such as “www.amazon.com,” users typed in numeric addresses, such as “123.45.67.89,” and, later, host names to send information to other computers. This network soon expanded, and domain names became a practical necessity. There are at least two reasons for this. First, alphanumeric texts are generally easier for humans to remember than numeric addresses. Second, as Internet traffic increases and computer systems are reconfigured, the computer server used for a particular Web site may change from time to time. In fact, some busy Web sites might use multiple servers, requiring them to take turns answering requests directed to a single domain name. While the Web site owner (or his or her technical staff) might know internally to which numeric address the Web site corresponds at a particular moment, the general public does not. Domain names are therefore needed for identification purposes.
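The "multiple servers taking turns" arrangement the extract describes is essentially round-robin name resolution. A minimal sketch, with entirely hypothetical names and addresses, shows how one memorable name can front several numeric addresses:

```python
from itertools import cycle

# Hypothetical mapping: one domain name, several backing servers.
dns_table = {
    "www.example.com": ["123.45.67.89", "123.45.67.90", "123.45.67.91"],
}

# Each name gets an iterator that cycles through its addresses forever.
rotations = {name: cycle(addrs) for name, addrs in dns_table.items()}

def resolve(name):
    """Return the next numeric address for `name`, round-robin style."""
    return next(rotations[name])

# Four successive lookups: the fourth wraps back to the first address.
print([resolve("www.example.com") for _ in range(4)])
```

The public only ever types the name; which address answers at any given moment is an internal detail, exactly as the extract argues.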

    Web Tracking: Mechanisms, Implications, and Defenses

    This article surveys the existing literature on the methods web services currently use to track users online, as well as their purposes, implications, and possible user defenses. A significant majority of the reviewed articles and web resources are from the years 2012-2014. Privacy seems to be the Achilles' heel of today's web. Web services make continuous efforts to obtain as much information as they can about the things we search, the sites we visit, the people we contact, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, or other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly creative techniques. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is being used and its possible implications for users (price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft). For each of the tracking methods, we present possible defenses. Apart from describing the methods and tools used for keeping personal data from being tracked, we also present several tools that were used for research purposes - their main goal is to discover how and by which entity users are being tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way. Finally, we present currently proposed future approaches to tracking the user and show that they can potentially pose significant threats to users' privacy.
    Comment: 29 pages, 212 references
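Of the method groups the survey covers, fingerprinting is the easiest to sketch: a tracker combines many individually innocuous browser attributes into an identifier that is stable across visits without storing anything on the client. The attribute names and values below are hypothetical, and real fingerprinters use far more signals, but the core idea reduces to hashing a canonical encoding of the attributes:

```python
import hashlib

def fingerprint(attrs):
    """Hash a canonically ordered attribute dict into a short identifier."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical browser attributes of the kind fingerprinters combine.
browser_a = {"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080",
             "timezone": "UTC+1", "fonts": "Arial,Verdana"}
browser_b = dict(browser_a, screen="1366x768")  # one attribute differs

# Same attributes always yield the same identifier; a single differing
# attribute yields a different one.
print(fingerprint(browser_a) == fingerprint(browser_b))  # → False
```

This also illustrates why fingerprinting is hard to defend against: unlike cookies, there is nothing on the client to delete, which is why the defenses the survey discusses focus on reducing or randomizing the attributes a browser exposes.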

    The economics of Information Technologies Standards &

    This research investigates the problem of Information Technologies Standards, or Recommendations, from an economic point of view. In our competitive economy, most enterprises have adopted standardization processes, following the recommendations of specialized organisations such as ISO (International Organisation for Standardization), the W3C (World Wide Web Consortium) and ISOC (Internet Society), in order to reassure their customers. But with the development of new and open Internet standards, enterprises from the same sector decided to develop their own IT standards for their activities. We hypothesize that the development of a professional IT standard requires not only a network of enterprises but also financial support, a particular organizational form, and a precisely described activity. In order to demonstrate this hypothesis and understand how professionals organise themselves to develop and finance IT standards, we take financial IT standards as an example. After a short general presentation of IT standards for the financial market, based on XML technologies, we describe how professional IT standards can be created (nearly 10 professional norms or recommendations appeared at the beginning of this century). We examine why these standards are developed outside the classical circles of standardisation organisations, and what the “key factors of success” could be for the best IT standards in finance. We use a descriptive and analytical method to evaluate the financial support involved and to understand these actors' strategies and the various economic models behind them. We then explain why and how these standards have emerged and developed. We conclude this paper with a prospective view on the future development of standards and recommendations.
    Keywords: information technologies, financial standards, development of standards, evaluation of the economic costs of standards

    IPv6 Network Mobility

    Network Authentication, Authorization, and Accounting has been used since before the days of the Internet as we know it today. Authentication asks the question, “Who or what are you?” Authorization asks, “What are you allowed to do?” And finally, accounting wants to know, “What did you do?” These fundamental security building blocks are being used in expanded ways today. The first part of this two-part series focused on the overall concepts of AAA, the elements involved in AAA communications, and high-level approaches to achieving specific AAA goals. It was published in IPJ Volume 10, No. 1[0]. This second part of the series discusses the protocols involved, specific applications of AAA, and considerations for the future of AAA.
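The three questions in the abstract map directly onto three checks that any AAA deployment performs in sequence. The sketch below is a toy model with invented users and rules, not any real AAA protocol such as RADIUS or Diameter, but it shows how the steps chain together:

```python
# Toy AAA model; users, passwords, and permissions are invented for
# illustration and do not reflect any real protocol's data structures.
users = {"alice": "s3cret"}
permissions = {"alice": {"read"}}
audit_log = []

def authenticate(user, password):
    """Authentication: "Who or what are you?" """
    return users.get(user) == password

def authorize(user, action):
    """Authorization: "What are you allowed to do?" """
    return action in permissions.get(user, set())

def account(user, action):
    """Accounting: "What did you do?" """
    audit_log.append((user, action))

# Only an authenticated AND authorized action is performed and recorded.
if authenticate("alice", "s3cret") and authorize("alice", "read"):
    account("alice", "read")

print(audit_log)  # → [('alice', 'read')]
```

Real AAA protocols distribute these checks between a network access server and a central AAA server, but the logical ordering (authenticate, then authorize, then account) is the same.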

    Building on Progress - Expanding the Research Infrastructure for the Social, Economic, and Behavioral Sciences. Vol. 1

    The publication provides a comprehensive compendium of the current state of Germany's research infrastructure in the social, economic, and behavioral sciences. In addition, the book presents detailed discussions of the current needs of empirical researchers in these fields and opportunities for future development. The book contains 68 advisory reports by more than 100 internationally recognized authors from a wide range of fields, and recommendations by the German Data Forum (RatSWD) on how to improve the research infrastructure so as to create conditions ideal for making Germany's social, economic, and behavioral sciences more innovative and internationally competitive. The German Data Forum (RatSWD) has discussed the broad spectrum of issues covered by these advisory reports extensively, and has developed general recommendations on how to expand the research infrastructure to meet the needs of scholars in the social and economic sciences.

    Interactive Food and Beverage Marketing: Targeting Children and Youth in the Digital Age

    Looks at the practices of food and beverage industry marketers in reaching youth via digital videos, cell phones, interactive games, and social networking sites. Recommends imposing governmental regulations on marketing to children and adolescents.