66 research outputs found

    Shortcuts through Colocation Facilities

    Full text link
    Network overlays, running on top of the existing Internet substrate, are of perennial value to Internet end-users in the context of, e.g., real-time applications. Such overlays can employ traffic relays to yield path latencies lower than those of the direct paths, a phenomenon known as Triangle Inequality Violation (TIV). Past studies identify the opportunities for reducing latency using TIVs. However, they do not investigate the gains of strategically selecting relays in Colocation Facilities (Colos). In this work, we answer the following questions: (i) how do Colo-hosted relays compare with other relays, as well as with the direct Internet paths, in terms of latency (RTT) reductions; (ii) what are the best locations for placing the relays to yield these reductions. To this end, we conduct a large-scale one-month measurement of inter-domain paths between RIPE Atlas (RA) nodes as endpoints, located at eyeball networks. We employ as relays PlanetLab nodes, other RA nodes, and machines in Colos. We examine the RTTs of the overlay paths obtained via the selected relays, as well as the direct paths. We find that Colo-based relays perform the best and can achieve latency reductions against direct paths, ranging from a few to 100s of milliseconds, in 76% of the total cases; 75% of these reductions (58% of total cases) require only 10 relays in 6 large Colos. Comment: In Proceedings of the ACM Internet Measurement Conference (IMC '17), London, GB, 201
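    The TIV opportunity described above reduces to a simple check over measured RTTs: a one-hop overlay path through relay r beats the direct path whenever RTT(src, r) + RTT(r, dst) < RTT(src, dst). A minimal sketch of that relay-selection check (hypothetical RTT values in milliseconds; not the paper's measurement code):

```python
def best_relay(rtt_direct, rtt_src_to_relay, rtt_relay_to_dst):
    """Return (relay, overlay_rtt) for the relay yielding the lowest one-hop
    overlay RTT that beats the direct path, or None if no TIV exists."""
    best = None
    for relay, up in rtt_src_to_relay.items():
        overlay = up + rtt_relay_to_dst[relay]
        if overlay < rtt_direct and (best is None or overlay < best[1]):
            best = (relay, overlay)
    return best

# Hypothetical measurements between two eyeball endpoints (ms)
direct = 120.0
to_relay = {"colo-relay": 30.0, "planetlab-relay": 70.0}
from_relay = {"colo-relay": 40.0, "planetlab-relay": 45.0}
print(best_relay(direct, to_relay, from_relay))  # ('colo-relay', 70.0)
```

    Here the Colo-hosted relay yields a 50 ms reduction over the direct path, the kind of gain the study quantifies at scale.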

    On the latency and routing impacts of remote peering to the Internet

    Get PDF
    Remote peering (RP) has crucially altered the Internet topology and its economics. Increasingly popular thanks to its lower costs and simplicity, RP has shifted the member base of Internet eXchange Points (IXPs) from strictly local to include ASes located anywhere in the world. While the popularity of RP is well understood, its implications on Internet routing and performance are not. In this thesis, we perform a comprehensive measurement study of RP in the wild, based on a representative set of IXPs (including some of the largest ones in the world, covering the five continents). We first identify the challenges of inferring remote peering and the limitations of the existing methodologies. Next, we perform active measurements to identify the deployment of remote IXP interfaces and announced prefixes in these IXPs, including a longitudinal analysis to observe RP growth over one and a half years. We use the RP inferences on IXPs to investigate whether RP routes announced at IXPs tend to be preferred over local ones, and what their latency and latency-variability impacts are when using different interconnection methods (remote peering, local peering, and transit) to deliver traffic. Next, we assess the RP latency impact when using a remote connection to international IXPs and reaching prefix destinations announced by their members. We perform measurements leveraging the infrastructure of a large Latin American RP reseller and compare the latency to reach IXP prefixes via RP and four transit providers. Finally, we glimpse some of the RP implications on Internet routing. We evaluate how RP can considerably affect IXP members' connection stability, potentially introduce routing detours caused by prefix announcement malpractices, and be the target of traffic engineering by ASes using BGP communities.

    Securities Fraud Embedded in the Market Structure Crisis: High-Frequency Traders as Primary Violators

    Full text link
    This Article analyzes approaches to attaching liability for securities fraud to high-frequency traders as primary violators in connection with the current market structure crisis. One of the manifestations of this crisis pertains to inadequate disclosure of advanced functionalities offered by trading venues, as exemplified by the order type controversy. The Article's analysis is applied to secret arrangements between trading venues and preferred traders, glitches and gaming, and the reach of the doctrine of market manipulation, and several relevant issues are also viewed from the standpoint of the integrity of the trading process. The Article concludes by arguing for a balanced approach to catching certain problematic practices of high-frequency traders as securities fraud.

    Performance Analysis of Multipath BGP

    Get PDF
    Multipath BGP (M-BGP) allows a BGP router to install multiple 'equally-good' paths, via parallel inter-domain border links, to a destination prefix. M-BGP differs from other multipath routing techniques in many ways; e.g., M-BGP is only implemented at border routers of Autonomous Systems (ASes), and while it shares traffic to different IP addresses in a destination prefix via different border links, any traffic to a given destination IP always follows the same border link. Recently we studied Looking Glass data and reported the wide deployment of M-BGP in the Internet; in particular, Hurricane Electric (AS6939) has implemented over 1,000 cases of M-BGP to hundreds of its peering ASes. In this paper, we analyzed the performance of M-BGP. We used RIPE Atlas to send traceroute probes to a series of destination prefixes through Hurricane Electric's border routers implementing M-BGP. We examined the distribution of Round-Trip Time (RTT) to each probed IP address in a destination prefix and their variation during the measurement. We observed that the deployment of M-BGP can guarantee stable routing between ASes and enhance a network's resilience to traffic changes. Our work provides insights into the unique characteristics of M-BGP as an effective technique for load balancing. Comment: IEEE Global Internet (GI) Symposium 202
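    The per-destination-IP behaviour described above (different IPs in a prefix spread across parallel links, but any single IP always takes the same link) is typically realised by hashing packet header fields onto the set of parallel links. A minimal sketch of that idea (hypothetical helper, not the routers' actual forwarding implementation, which hashes more fields in hardware):

```python
import hashlib

def pick_border_link(dst_ip, links):
    """Deterministically map a destination IP to one of several parallel
    border links: the same IP always gets the same link, while different
    IPs in the prefix spread across all links."""
    digest = hashlib.sha256(dst_ip.encode()).digest()
    return links[int.from_bytes(digest[:8], "big") % len(links)]

links = ["link-A", "link-B", "link-C"]
# Stable: repeated lookups for one IP never change the link chosen
assert pick_border_link("203.0.113.7", links) == pick_border_link("203.0.113.7", links)
# Spread: many IPs in the prefix cover all parallel links
used = {pick_border_link(f"203.0.113.{i}", links) for i in range(256)}
print(len(used))  # 3
```

    This determinism is what lets traceroutes to a fixed IP measure a stable path, while probing many IPs in the prefix exposes the parallel links.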

    Inter-brain synchronization occurs without physical co-presence during cooperative online gaming

    Get PDF
    Inter-brain synchronization during social interaction has been linked with several positive phenomena, including closeness, cooperation, prosociality, and team performance. However, the temporal dynamics of inter-brain synchronization during collaboration are not yet fully understood. Furthermore, with collaboration increasingly happening online, the dependence of inter-brain phase synchronization of oscillatory activity on physical presence is an important but understudied question. In this study, physically isolated participants performed a collaborative coordination task in the form of a cooperative multiplayer game. We measured EEG from 42 subjects working together as pairs in the task. During the measurement, the only interaction between the participants happened through the on-screen movement of a racing car, controlled by button presses of both participants working in distinct roles, controlling either the speed or the direction of the car. Pairs working together in the task were found to have elevated neural coupling in the alpha, beta, and gamma frequency bands, compared to performance-matched false pairs. Higher gamma synchrony was associated with better momentary performance within dyads, and higher alpha synchrony was associated with better mean performance across dyads. These results are in line with previous findings of increased inter-brain synchrony during interaction, and show that phase synchronization of oscillatory activity occurs during online real-time joint coordination without any physical co-presence or video and audio connection. Synchrony decreased during a playing session, but was found to be higher during the second session compared to the first. The novel paradigm, developed for the measurement of real-time collaborative performance, demonstrates that changes in inter-brain EEG phase synchrony can be observed continuously during interaction. Peer reviewed
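    Inter-brain phase synchronization of the kind reported here is commonly quantified with the phase-locking value (PLV): the magnitude of the time-averaged unit phasor of the phase difference between two signals in a given frequency band. A minimal sketch (operating directly on extracted phase time series; band-pass filtering and phase extraction, e.g. via the Hilbert transform, are assumed to have been done already, and the abstract does not specify this exact metric):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value between two phase time series (radians).
    1.0 = perfectly locked phase difference, ~0 = no phase relationship."""
    diff = np.asarray(phase_a) - np.asarray(phase_b)
    return float(np.abs(np.mean(np.exp(1j * diff))))

t = np.linspace(0.0, 1.0, 500)
locked = plv(2 * np.pi * 10 * t, 2 * np.pi * 10 * t + 0.4)  # constant lag
print(round(locked, 3))  # 1.0

rng = np.random.default_rng(0)
unlocked = plv(rng.uniform(0, 2 * np.pi, 50000), rng.uniform(0, 2 * np.pi, 50000))
print(unlocked < 0.05)  # True
```

    A constant phase lag still gives PLV = 1, which is why the metric captures locking rather than identical activity; comparing real pairs against performance-matched false pairs, as in the study, controls for task-driven coincidental locking.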

    Seven years in the life of Hypergiants’ off-nets

    Get PDF

    Integrated project delivery (IPD) in Norwegian construction projects : Sharing of risk and opportunities aiming at better cooperation and project achievement

    Get PDF
    Master's thesis, Industrial Economics and Technology Management IND590 - University of Agder, 2018. In the architecture, engineering and construction (AEC) industry, projects are often temporary in nature and are often characterised as project-based organisations (PBOs). Each project chooses a number of strategies which together make up a project development model. Traditionally, models have focused on transactional contracting between actors in the value chain, which opens the way for sub-optimisation and risk aversion as common challenges. In such circumstances, each actor has a different perception of the aim and the success of the project, and so tries to find ways to take the lowest risk and gain the most profit while optimising their own interests. Based on the highlighted problem, the aim of the research is to examine Integrated Project Delivery (IPD) in Norwegian construction projects: sharing of risk and opportunities, aiming at better collaboration and project achievement. The thesis aims at testing the differences between IPD and alternative implementation models. Hence this thesis attempts to answer the following propositions:
    • Proposition one states that IPD provides less scope for sub-optimisation and opportunistic behaviour between companies in the value chain.
    • Proposition two states that IPD provides better conditions for unified solutions (swapping) than traditional contracts.
    • Proposition three states that IPD safeguards quality and customer value in a better way than alternative implementation models while maintaining constructability.
    • Proposition four states that IPD, in combination with TVD, provides better framework conditions for continuous improvement and innovation compared to a Design-Build model.
    Furthermore, this case study answers the propositions by collecting data composed mainly of documents, a survey and interviews, which were analysed using the framework method. Based on the analysed content, the findings show that IPD helps to alleviate the characteristic problems in the AEC industry, which stem from a lack of quality, lack of collaboration, opportunism, lack of customer value and lack of innovation, and how these have changed using IPD. By using a critical realist perspective, one can identify the influencing mechanisms.

    Algorithmic trading, market quality and information : a dual -process account

    Get PDF
    One of the primary challenges encountered when conducting theoretical research on the subject of algorithmic trading is the wide array of strategies employed by practitioners. Current theoretical models treat algorithmic traders as a homogeneous trader group, resulting in a gap between theoretical discourse and empirical evidence on algorithmic trading practices. In order to address this, the current study introduces an organisational framework from which to conceptualise and synthesise the vast array of algorithmic trading strategies. More precisely, using the principles of contemporary cognitive science, it is argued that the dual-process paradigm - the most prevalent contemporary interpretation of the nature and function of human decision making - lends itself well to a novel taxonomy of algorithmic trading. This taxonomy serves primarily as a heuristic to inform a theoretical market microstructure model of algorithmic trading. Accordingly, this thesis presents the first unified, all-inclusive theoretical model of algorithmic trading, the overall aim of which is to determine the evolving nature of financial market quality as a consequence of this practice. In accordance with the literature on both cognitive science and algorithmic trading, this thesis espouses that there exist two distinct types of algorithmic trader: one (System 1) having fast processing characteristics, and the other (System 2) having slower, more analytic or reflective processing characteristics. Concomitantly, the current microstructure literature suggests that a trader can be superiorly informed as a result of either (1) their superior speed in accessing or exploiting information, or (2) their superior ability to more accurately forecast future variables. To date, microstructure models focus on either one aspect, but not both. This common modelling assumption is also evident in theoretical models of algorithmic trading.
    Theoretical papers on the topic have coalesced around the idea that algorithmic traders possess a comparative advantage relative to their human counterparts. However, the literature is yet to reach consensus as to what this advantage entails, or its subsequent effects on financial market quality. Notably, the key assumptions underlying the dual-process taxonomy of algorithmic trading suggest that two distinct informational advantages underlie algorithmic trading. The possibility then follows that System 1 algorithmic traders possess an inherent speed advantage and System 2 algorithmic traders an inherent accuracy advantage. Inevitably, the various strategies associated with algorithmic trading correspond to their own respective system and, by implication, informational advantage. Incorporating both types of informational advantage is a challenging problem in the context of a microstructure model of trade. Models typically eschew this issue entirely by restricting themselves to the analysis of one type of information variable in isolation. This is done solely for the sake of tractability and simplicity (models can in theory include both variables). Thus, including both types of private information within a single microstructure model serves to enhance the novel contribution of this work. To prepare for the final theoretical model of this thesis, the present study will first conjecture and verify a benchmark model with only one type/system of algorithmic trader. More formally, a System 2 algorithmic trader will be introduced into Kyle's (1985) static Bayesian Nash Equilibrium (BNE) model. The behavioral and informational characteristics of this agent emanate from the key assumptions reflected in the taxonomy.
    The final dual-process microstructure model, presented in the concluding chapter of this thesis, extends the benchmark model (which builds on Kyle (1985)) by introducing the System 1 algorithmic trader, thereby incorporating both algorithmic trader systems. As noted above, the benchmark model nests the Kyle (1985) model: in a limiting case of the benchmark model, where the System 2 algorithmic trader does not have access to this particular form of private information, the equilibrium reduces to that of the static model of Kyle (1985). Likewise, in the final model, when the System 1 algorithmic trader's information is negligible, the model collapses to the benchmark model. Interestingly, this thesis was able to determine how the strategic interplay between two differentially informed algorithmic traders impacts market quality over time. The results indicate that a disparity exists between each distinctive algorithmic trading system and its relative impact on financial market quality. The unique findings of this thesis are addressed in the concluding chapter. Empirical implications of the final model will also be discussed.
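    For reference, the static Kyle (1985) model that the benchmark nests has a textbook closed-form equilibrium: with fundamental value v ~ N(p0, Σ0) and noise-trader order flow u ~ N(0, σu²), the insider trades x = β(v − p0) with β = σu/√Σ0, and the market maker sets price impact λ = √Σ0/(2σu). A sketch of these standard quantities (this is the classic result only, not the thesis's extended dual-process model):

```python
import math

def kyle_static_equilibrium(sigma0_sq, sigma_u):
    """One-period Kyle (1985) equilibrium.
    sigma0_sq: prior variance of the asset value v.
    sigma_u:   std. dev. of noise-trader order flow u.
    Returns (beta, lam): insider trading intensity and price impact."""
    beta = sigma_u / math.sqrt(sigma0_sq)
    lam = math.sqrt(sigma0_sq) / (2 * sigma_u)
    return beta, lam

beta, lam = kyle_static_equilibrium(sigma0_sq=4.0, sigma_u=1.0)
print(beta, lam)  # 0.5 1.0  (note beta * lam = 1/2 for any parameters)
```

    The invariant beta * lam = 1/2 is a useful sanity check when verifying that a nesting model collapses correctly to this limiting case.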

    ACUTA Journal of Telecommunications in Higher Education

    Get PDF
    In This Issue: President's Message; From the Editor; IT-Style Alphabet Soup; Software-Defined WAN (SD-WAN): Moving Beyond MPLS; IoT: The Internet of Things; Is the LPWAN in Your Future?; Ingredient for Wireless Success: DAS; Hot Issues in Communications Technology Law; Institutional Excellence Award: CSU Fullerton's Shared Cloud Services; DIDs for ELINs?; ISE... ERP... KnowBe
