    Deliverable DJRA1.2. Solutions and protocols proposal for the network control, management and monitoring in a virtualized network context

    This deliverable presents several research proposals for the FEDERICA network on different subjects, such as monitoring, routing, signalling, resource discovery, and isolation. For each topic, one or more possible solutions are elaborated, explaining the background, functioning, and implications of the proposed solutions. The deliverable then goes further into the research aspects of FEDERICA. First, the architecture of the control plane for the FEDERICA infrastructure is defined. Several possibilities could be implemented, using the basic FEDERICA infrastructure as a starting point. The focus of this document is the intra-domain aspects of the control plane and their properties, although some inter-domain aspects are also addressed. The main objective of this deliverable is to emphasise the creation and implementation of a prototype/tool for the FEDERICA slice-oriented control system using an appropriate framework. The deliverable goes deeply into the definition of the containers exchanged between entities and their syntax, preparing this tool for the future implementation of any kind of control-plane algorithm, whether applying UPB policies or configuring the system by hand. We opt for an open solution despite the real-time limitations it may entail (for instance, opening web-service connections or applying fast recovery mechanisms). The application being developed is the central element of the control plane, and additional features must be added to it. From a functionality point of view, this control plane is composed of several procedures that provide a reliable application and that include mechanisms or algorithms for discovering and assigning resources to the user. To achieve this, several topics must be researched in order to propose new protocols for the virtual infrastructure. The topics and necessary features covered in this document include resource discovery, resource allocation, signalling, routing, isolation, and monitoring, all of which must be researched in order to find a good solution for the FEDERICA network. Some of these algorithms have already begun to be analyzed and will be expanded upon in the next deliverable. Current standardization efforts and existing solutions have been investigated in the search for a good fit for FEDERICA. Resource discovery is an important issue within the FEDERICA network, as manual resource discovery is not an option due to scalability requirements; furthermore, no standardization exists, so knowledge must be drawn from related work. Ideally, the proposed solutions for these topics should not only be adequate for this specific infrastructure but should also be applicable to other virtualized networks.
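    As a sketch of what a slice-oriented control-plane container might look like, the following Python is purely illustrative: the field names, structure, and JSON serialization are assumptions made for exposition, not the container syntax actually defined in the deliverable.

        import json
        from dataclasses import dataclass, field, asdict

        @dataclass
        class VirtualLink:
            endpoint_a: str
            endpoint_b: str
            bandwidth_mbps: int

        @dataclass
        class ResourceRequest:
            # Hypothetical container for a slice resource request exchanged
            # between control-plane entities; all field names are assumptions.
            slice_id: str
            node_type: str                              # e.g. "virtual-router"
            cpu_cores: int
            memory_mb: int
            links: list = field(default_factory=list)   # requested virtual links

        req = ResourceRequest(
            slice_id="slice-42",
            node_type="virtual-router",
            cpu_cores=2,
            memory_mb=1024,
            links=[VirtualLink("nodeA", "nodeB", 100)],
        )

        # A web-service call between control-plane entities would carry this
        # container as its payload, here serialized to JSON for illustration.
        print(json.dumps(asdict(req), indent=2))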

    Wrong Turn in Cyberspace: Using ICANN to Route Around the APA and the Constitution

    The Internet relies on an underlying centralized hierarchy built into the domain name system (DNS) to control the routing for the vast majority of Internet traffic. At its heart is a single data file, known as the root. Control of the root provides singular power in cyberspace. This Article first describes how the United States government found itself in control of the root. It then describes how, in an attempt to meet concerns that the United States could so dominate an Internet chokepoint, the U.S. Department of Commerce (DoC) summoned into being the Internet Corporation for Assigned Names and Numbers (ICANN), a formally private nonprofit California corporation. DoC then signed contracts with ICANN in order to clothe it with most of the U.S. government's power over the DNS, and convinced other parties to recognize ICANN's authority. ICANN then took regulatory actions that the U.S. Department of Commerce was unable or unwilling to take itself, including the imposition on all registrants of Internet addresses of an idiosyncratic set of arbitration rules and procedures that benefit third-party trademark holders. Professor Froomkin then argues that the use of ICANN to regulate in the stead of an executive agency violates fundamental values and policies designed to ensure democratic control over the use of government power, and sets a precedent that risks being expanded into other regulatory activities. He argues that DoC's use of ICANN to make rules either violates the APA's requirement for notice and comment in rulemaking and judicial review, or it violates the Constitution's nondelegation doctrine. Professor Froomkin reviews possible alternatives to ICANN, and ultimately proposes a decentralized structure in which the namespace of the DNS is spread out over a transnational group of policy partners with DoC.

    MementoMap: A Web Archive Profiling Framework for Efficient Memento Routing

    With the proliferation of public web archives, it is becoming more important to better profile their contents, both to understand their immense holdings and to support routing of requests in Memento aggregators. A memento is a past version of a web page, and a Memento aggregator is a tool or service that aggregates mementos from many different web archives. To save resources, the Memento aggregator should poll only the archives that are likely to have a copy of the requested Uniform Resource Identifier (URI). Using the Crawler Index (CDX), we generate profiles of the archives that summarize their holdings and use them to inform routing of the Memento aggregator's URI requests. Additionally, we use full-text search (when available) or sample URI lookups to build an understanding of an archive's holdings. Previous work in profiling ranged from using full URIs (no false positives, but with large profiles) to using only top-level domains (TLDs) (smaller profiles, but with many false positives). This work explores strategies between these two extremes.

    For evaluation we used CDX files from Archive-It, UK Web Archive, Stanford Web Archive Portal, and Arquivo.pt. Moreover, we used web server access log files from the Internet Archive's Wayback Machine, UK Web Archive, Arquivo.pt, LANL's Memento Proxy, and ODU's MemGator server. In addition, we utilized a historical dataset of URIs from DMOZ. In early experiments with various URI-based static profiling policies, we successfully identified about 78% of the URIs that were not present in the archive with less than 1% relative cost as compared to the complete knowledge profile, and 94% of such URIs with less than 10% relative cost, without any false negatives. In another experiment we found that we can correctly route 80% of the requests while maintaining about 0.9 recall by discovering only 10% of the archive holdings and generating a profile that costs less than 1% of the complete knowledge profile.

    We created MementoMap, a framework that allows web archives and third parties to express the holdings and/or voids of an archive of any size with varying levels of detail to fulfil various application needs. Our archive profiling framework enables tools and services to predict and rank archives where mementos of a requested URI are likely to be present. In static profiling policies we predefined the maximum depth of host and path segments of URIs for each policy, which are used as URI keys. This gave us a good baseline for evaluation, but was not suitable for merging profiles with different policies. Later, we introduced a more flexible means to represent URI keys that uses wildcard characters to indicate whether a URI key was truncated. Moreover, we developed an algorithm to roll up URI keys dynamically at arbitrary depths when sufficient archiving activity is detected under certain URI prefixes. In an experiment with dynamic profiling of archival holdings, we found that a MementoMap of less than 1.5% relative cost can correctly identify the presence or absence of 60% of the lookup URIs in the corresponding archive without any false negatives (i.e., 100% recall). In addition, we separately evaluated archival voids based on the most frequently accessed resources in the access logs and found that we could have avoided more than 8% of the false positives without introducing any false negatives.

    We defined a routing score that can be used for Memento routing. Using a cut-off threshold on our routing score, we achieved over 96% accuracy if we accept about 89% recall, and for a recall of 99% we achieved about 68% accuracy, which translates to about 72% savings in wasted lookup requests in our Memento aggregator. Moreover, when using top-k archives based on our routing score and choosing only the topmost archive, we missed only about 8% of the sample URIs that are present in at least one archive; when we selected the top-2 archives, we missed less than 2% of these URIs. We also evaluated a machine learning-based routing approach, which resulted in overall better accuracy, but poorer recall due to the low prevalence of the sample lookup URI dataset in different web archives.

    We contributed various algorithms, such as a space- and time-efficient approach to ingest large lists of URIs to generate MementoMaps, and a Random Searcher Model to discover samples of the holdings of web archives. We contributed numerous tools to support various aspects of web archiving and replay, such as MemGator (a Memento aggregator), InterPlanetary Wayback (a novel archival replay system), Reconstructive (a client-side request-rerouting ServiceWorker), and AccessLog Parser. Moreover, this work yielded a file format specification draft called Unified Key Value Store (UKVS) that we use for serialization and dissemination of MementoMaps. It is a flexible and extensible file format that allows easy interaction with Unix text processing tools and can be used in many applications beyond MementoMaps.
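    To make the static profiling policies concrete, the following Python sketch (an illustration only, not the MementoMap implementation itself; the exact key format and depth parameters are assumptions) reduces URIs to SURT-style keys truncated at a fixed host and path depth, appends a wildcard to mark truncated keys, and counts an archive's holdings per key:

        from urllib.parse import urlsplit
        from collections import Counter

        def uri_key(uri, host_depth=2, path_depth=1):
            # Reduce a URI to a SURT-style profile key: reverse the host into
            # comma-separated labels, keep at most host_depth of them, and keep
            # at most path_depth path segments; a trailing '*' marks truncation.
            parts = urlsplit(uri)
            labels = list(reversed(parts.hostname.lower().split(".")))
            segments = [s for s in parts.path.split("/") if s]
            truncated = len(labels) > host_depth or len(segments) > path_depth
            key = ",".join(labels[:host_depth]) + ")/" + "/".join(segments[:path_depth])
            return (key + "*") if truncated else key

        def build_profile(cdx_uris):
            # Count holdings per key over an archive's CDX listing; the counts
            # can later drive rollup of keys under heavily archived prefixes.
            return Counter(uri_key(u) for u in cdx_uris)

        profile = build_profile([
            "https://www.example.com/a/b/c.html",
            "https://www.example.com/a/d.html",
            "https://news.example.org/world/2020/item",
        ])
        print(profile)   # Counter({'com,example)/a*': 2, 'org,example)/world*': 1})

    Shallower depths yield smaller profiles at the cost of more false positives, which is exactly the trade-off between the full-URI and TLD-only extremes described above.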

    Federal Regulatory Barriers to Grid-Deployed Energy Storage

    Until recently, the most advanced form of grid-deployed energy storage involved pumping water up a hill. But “newer storage technologies like flywheels and chemical batteries have recently achieved technological maturity and are well into successful pilot stages and, in some cases, commercial operation”. If widely adopted, these new energy storage technologies will fundamentally alter the operation of our electricity system.

    Probabilistic route discovery for Wireless Mobile Ad Hoc Networks (MANETs)

    Mobile wireless ad hoc networks (MANETs) have become of increasing interest in view of their promise to extend connectivity beyond traditional fixed infrastructure networks. In MANETs, the task of routing is distributed among network nodes, which act as both end points and routers in a wireless multi-hop network environment. To discover a route to a specific destination node, existing on-demand routing protocols employ a broadcast scheme referred to as simple flooding, whereby a route request packet (RREQ) originating from a source node is blindly disseminated to the rest of the network nodes. This can lead to excessive redundant retransmissions, causing high channel contention and packet collisions in the network, a phenomenon called a broadcast storm. To reduce the deleterious impact of flooding RREQ packets, a number of route discovery algorithms have been suggested over the past few years based on, for example, location, zoning, or clustering. Most such approaches, however, involve considerably increased complexity, requiring additional hardware or the maintenance of complex state information. This research argues that such requirements can be largely alleviated, without sacrificing performance gains, through the use of probabilistic broadcast methods, where an intermediate node rebroadcasts RREQ packets based on some suitable forwarding probability rather than in the traditional deterministic manner.

    Although several probabilistic broadcast algorithms have been suggested for MANETs in the past, most have focused on “pure” broadcast scenarios, with relatively little investigation of the performance impact on specific applications such as route discovery. As a consequence, there has so far been very little study of the performance of probabilistic route discovery applied to the well-established MANET routing protocols. In an effort to fill this gap, the first part of this thesis evaluates the performance of the routing protocols Ad hoc On-demand Distance Vector (AODV) and Dynamic Source Routing (DSR) augmented with probabilistic route discovery, taking into account parameters such as network density, traffic density, and nodal mobility. The results reveal encouraging benefits in overall routing control overhead, but also show that network operating conditions have a critical impact on the optimality of the forwarding probabilities.

    In most existing probabilistic broadcast algorithms, including the one used here for preliminary investigations, each forwarding node is allowed to rebroadcast a received packet with a fixed forwarding probability, regardless of its location relative to the source and destination pair. However, in a route discovery operation, if the location of the destination node is known, the dissemination of the RREQ packets can be directed towards that location. Motivated by this, the second part of the research proposes a probabilistic route discovery approach that aims to further reduce the routing overhead by limiting the dissemination of the RREQ packets towards the anticipated location of the destination. This approach combines elements of the fixed probabilistic and flooding-based route discovery approaches. The results indicate that in a relatively dense network these combined effects can reduce the routing overhead very significantly compared with fixed probabilistic route discovery.

    Typically, a MANET contains regions of varying node density. Under such conditions, fixed probabilistic route discovery can suffer from a degree of inflexibility, since every node is assigned the same forwarding probability regardless of local conditions. Ideally, the forwarding probability should be high for a node located in a sparse region of the network and relatively lower for a node located in a denser region. As a result, it can be helpful to identify and categorise mobile nodes in the various regions of the network and adjust their forwarding probabilities appropriately. To this end, the research examines probabilistic route discovery methods that dynamically adjust the forwarding probability at a node based on local node density, estimated using the number of neighbours as a parameter. Results from this study show significantly superior performance compared with the fixed probabilistic variants.

    Although the probabilistic route discovery methods suggested above can significantly reduce the routing control overhead without degrading overall network throughput, there remains the problem of how to efficiently select forwarding probabilities that will optimize the performance of a broadcast under any given conditions. In an attempt to address this issue, the final part of this thesis proposes and evaluates the feasibility of a node estimating its own forwarding probability dynamically, based on locally collected information. The technique examined involves each node piggybacking a list of its 1-hop neighbours in its transmitted RREQ packets. Based on this list, relay nodes can determine how many of their neighbours have already been covered by a broadcast and thus compute the forwarding probabilities best suited to their individual circumstances.
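    As a concrete illustration of the adaptive schemes described above, the following Python sketch combines a density-adaptive forwarding probability with the coverage check enabled by piggybacking the sender's 1-hop neighbour list in the RREQ. It is illustrative only, not the thesis's exact algorithm: AVG_DEGREE, P_BASE, and the clamping bounds are assumed values that would in practice be tuned per scenario.

        import random

        AVG_DEGREE = 8   # assumed network-wide average neighbour count (illustrative)
        P_BASE = 0.6     # assumed baseline forwarding probability (illustrative)

        def forwarding_probability(num_neighbours):
            # Density-adaptive probability: nodes in sparse regions forward
            # with higher probability, nodes in dense regions with lower,
            # clamped to [0.1, 1.0] so every node retains some chance.
            p = P_BASE * (AVG_DEGREE / max(num_neighbours, 1))
            return min(1.0, max(0.1, p))

        def should_rebroadcast(my_neighbours, sender_neighbours):
            # The RREQ piggybacks the sender's 1-hop neighbour list, so a
            # relay can scale its forwarding probability by the fraction of
            # its own neighbours not already covered by the sender's broadcast.
            uncovered = my_neighbours - sender_neighbours
            if not uncovered:
                return False  # all of our neighbours already heard this RREQ
            p = forwarding_probability(len(my_neighbours))
            p *= len(uncovered) / len(my_neighbours)
            return random.random() < p

        # A node with six neighbours, four already covered by the sender:
        print(should_rebroadcast({1, 2, 3, 4, 5, 6}, {1, 2, 3, 4, 9}))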

    Building Better Bailouts: The Case for a Long-Term Investment Approach

    The Article seeks to fill a crucial gap in the Dodd-Frank Wall Street Reform and Consumer Protection Act: the failure to create a framework for dealing with future financial bailouts. It argues that the federal government's ad hoc, “break even” approach to the recent bailouts not only shortchanged taxpayers but, more importantly, failed to provide deterrence against the type of reckless risk-taking that led to the financial crisis. This Article argues that the key to legitimizing future bailouts and limiting moral hazard is to institutionalize a long-term, investment-oriented approach that delineates clear contours and conditions for aid. It calls for establishing an independent agency, the Federal Government Investment Corporation (FGIC), to serve as an investor of last resort, which would make bailout monies contingent on beneficiaries sharing both risks and long-term returns with taxpayers. The FGIC would establish express, ex ante conditions for providing aid that would temper corporate risk-taking, protect taxpayers, and establish bounds on bailouts. Tying government bailouts to shared sacrifices by managers, shareholders, and creditors of beneficiaries, proportional profit sharing with taxpayers, and corporate governance reforms would help ensure that future bailouts serve a productive purpose.