    The Design of an Output Data Collection Framework for ns-3

    An important design decision in the construction of a simulator is how to enable users to access the data generated in each run of a simulation experiment. As the simulator executes, the samples of performance metrics it generates need to be exposed, either in their raw state or after undergoing mathematical processing. Also of concern is the format this data assumes when externalized to mass storage, since it determines how easily the data can be processed by other applications or interpreted by the user. In this paper, we present a framework for the ns-3 network simulator for capturing data from inside an experiment, subjecting it to mathematical transformations, and ultimately marshaling it into various output formats. The application of this functionality is illustrated and analyzed via a study of common use cases. Although the implementation of our approach is specific to ns-3, the design offers lessons transferable to other platforms.
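
    The pipeline described above (capture samples inside an experiment, apply mathematical transformations, marshal the results to an output format) can be sketched as follows. This is a hedged illustration in Python; the names Probe, MeanCollector and CsvAggregator are assumptions made for the example and are not the actual ns-3 data collection API.

```python
# Minimal sketch of a capture -> transform -> externalize pipeline.
# Class names are illustrative assumptions, not the ns-3 API.
import statistics


class Probe:
    """Captures raw metric samples emitted from inside a running experiment."""
    def __init__(self):
        self.samples = []

    def record(self, value):
        self.samples.append(value)


class MeanCollector:
    """Applies a mathematical transformation (here, the mean) to raw samples."""
    def process(self, samples):
        return statistics.mean(samples) if samples else float("nan")


class CsvAggregator:
    """Marshals processed metrics into an external output format (CSV)."""
    def write(self, path, name, value):
        with open(path, "a", encoding="utf-8") as f:
            f.write(f"{name},{value}\n")


# Wire the pipeline: capture -> transform -> externalize.
probe = Probe()
for delay in (0.012, 0.015, 0.011):          # e.g. per-packet delays in seconds
    probe.record(delay)

mean_delay = MeanCollector().process(probe.samples)
CsvAggregator().write("results.csv", "mean_delay_s", mean_delay)
```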

    Tracking Adaptation and Measuring Development in Kenya

    Tracking Adaptation and Measuring Development (TAMD) is a twin-track framework that evaluates adaptation success as a combination of how widely and how well countries or institutions manage climate risks (Track 1) and how successful adaptation interventions are in reducing climate vulnerability and in keeping development on course (Track 2). With this twin-track approach, TAMD can be used to assess whether climate change adaptation leads to effective development, and also how development interventions can boost communities' capacity to adapt to climate change. Importantly, TAMD offers a flexible framework that can be used to generate bespoke frameworks for individual countries, tailored to specific contexts and usable at different scales. This report compiles the results of the TAMD feasibility-testing phase in Kenya.

    Automated Network Service Scaling in NFV: Concepts, Mechanisms and Scaling Workflow

    Next-generation systems are anticipated to be digital platforms supporting innovative services with rapidly changing traffic patterns. To cope with this dynamicity in a cost-efficient manner, operators need advanced service management capabilities such as those provided by NFV. NFV enables operators to scale network services with higher granularity and agility than today. To this end, automation is key. In search of this automation, the European Telecommunications Standards Institute (ETSI) has defined a reference NFV framework that makes use of model-driven templates called Network Service Descriptors (NSDs) to operate network services through their lifecycle. For the scaling operation, an NSD defines a discrete set of instantiation levels among which a network service instance can be resized throughout its lifecycle; the design of these levels is therefore key to effective scaling. In this article, we provide an overview of the automation of the network service scaling operation in NFV, addressing the options and boundaries introduced by the ETSI normative specifications. We start by describing the NSD structure, focusing on how instantiation levels are constructed. For illustrative purposes, we propose an NSD for a representative NS; this NSD includes different instantiation levels that enable different ways to automatically scale the NS. We then show the different scaling procedures available in the NFV framework, and how their triggering may be automated. Finally, we propose an ETSI-compliant workflow describing a representative scaling procedure in detail. This workflow clarifies the interactions and information exchanges between the functional blocks in the NFV framework when performing the scaling operation.
    Comment: This work has been accepted for publication in the IEEE Communications Magazine.
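
    The role of the discrete instantiation levels can be illustrated with a small, hypothetical model. The field names below are simplified assumptions and do not follow the ETSI NSD information model; they only show how a scaling trigger moves a network service instance between predefined levels.

```python
# Hypothetical, simplified model of an NSD with discrete instantiation levels.
# Field names are illustrative assumptions, not the ETSI-defined schema.
from dataclasses import dataclass


@dataclass
class InstantiationLevel:
    level_id: str
    vnf_instances: dict          # VNF name -> number of instances at this level


@dataclass
class NetworkServiceDescriptor:
    ns_id: str
    levels: list                 # ordered InstantiationLevels, smallest to largest

    def scale(self, current_level_id: str, direction: str) -> InstantiationLevel:
        """Return the next discrete level when scaling out ('out') or in ('in')."""
        ids = [lvl.level_id for lvl in self.levels]
        idx = ids.index(current_level_id)
        idx = min(idx + 1, len(self.levels) - 1) if direction == "out" else max(idx - 1, 0)
        return self.levels[idx]


nsd = NetworkServiceDescriptor(
    ns_id="example-ns",
    levels=[
        InstantiationLevel("small",  {"fw": 1, "lb": 1}),
        InstantiationLevel("medium", {"fw": 2, "lb": 1}),
        InstantiationLevel("large",  {"fw": 4, "lb": 2}),
    ],
)

# A monitoring-driven trigger would request, for example, one scale-out step:
print(nsd.scale("small", "out").vnf_instances)   # {'fw': 2, 'lb': 1}
```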

    ATLAS Data Challenge 1

    In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are the validation of the Computing Model, of the complete software suite, and of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required to produce large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a world-wide distributed activity. The first phase of DC1 ran during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71,000 CPU-days were used to produce 30 TB of data in about 35,000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~4000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and how the participating sites were certified. These productions were already partly performed using different flavours of Grid middleware at ~20 sites.
    Comment: 10 pages; 3 figures; CHEP03 Conference, San Diego; Reference MOCT00

    Interoperability and FAIRness through a novel combination of Web technologies

    Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories, ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
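
    As a hedged illustration of cell-level interoperability, the sketch below dereferences a hypothetical cell-level identifier over HTTP and uses content negotiation to ask for a machine-readable representation. The URL and media type are placeholder assumptions, not the identifiers or formats used in the paper.

```python
# Hedged sketch: dereferencing a hypothetical cell-level identifier with
# HTTP content negotiation. The URL and media type are placeholders and
# will not resolve to real data.
from urllib.request import Request, urlopen

cell_uri = "https://example.org/dataset/42/row/7/column/temperature"   # placeholder
req = Request(cell_uri, headers={"Accept": "text/turtle"})             # request an RDF form

with urlopen(req) as resp:
    print(resp.headers.get("Content-Type"))
    print(resp.read().decode("utf-8"))
```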

    Belle II Technical Design Report

    The Belle detector at the KEKB electron-positron collider has collected almost 1 billion Y(4S) events in its decade of operation. Super-KEKB, an upgrade of KEKB, is under construction during a three-year shutdown to increase the luminosity by two orders of magnitude, with an ultimate goal of 8 x 10^35 cm^-2 s^-1. To exploit the increased luminosity, an upgrade of the Belle detector has been proposed, and a new international collaboration, Belle II, is being formed. The Technical Design Report presents the physics motivation, the basic methods of the accelerator upgrade, and the key improvements of the detector.
    Comment: Edited by Z. Doležal and S. Un

    Pixel detector R&D for the Compact Linear Collider

    The physics aims of the proposed future CLIC high-energy linear e+e- collider pose challenging demands on the performance of the detector system. In particular the vertex and tracking detectors have to combine precision measurements with robustness against the expected high rates of beam-induced backgrounds. A spatial resolution of a few microns and a material budget down to 0.2% of a radiation length per vertex-detector layer have to be achieved together with a few nanoseconds of time-stamping accuracy. These requirements are addressed with innovative technologies in an ambitious detector R&D programme, comprising hardware developments as well as detailed device and Monte Carlo simulations based on TCAD, Geant4 and Allpix-Squared. Various fine-pitch hybrid silicon pixel detector technologies are under investigation for the CLIC vertex detector. The CLICpix and CLICpix2 readout ASICs with 25 μm pixel pitch have been produced in a 65 nm commercial CMOS process and bump-bonded to planar active-edge sensors as well as capacitively coupled to High-Voltage (HV) CMOS sensors. Monolithic silicon tracking detectors are foreseen for the large-surface (≈140 m²) CLIC tracker. Fully monolithic prototypes are currently under development in High-Resistivity (HR) CMOS, HV-CMOS and Silicon-on-Insulator (SOI) technologies. The laboratory and beam tests of all recent prototypes profit from the development of the CaRIBou universal readout system. This talk presents an overview of the CLIC pixel-detector R&D programme, focusing on recent test-beam and simulation results.
    Comment: On behalf of the CLICdp collaboration; conference proceedings for PIXEL201

    Independent Evaluation of the Water and Sanitation Hibah Program Indonesia

    This evaluation assesses the effectiveness of Indonesia's Water and Sanitation Hibah Program pilot and identifies lessons for applying this mechanism more broadly. The program, which operates by paying an agreed amount for verified household water or sanitation service connections installed by local water and sanitation utilities, takes advantage of excess capacity of water companies to bypass the need for infrastructure investment. It was evaluated through a document review, key informant interviews, a key stakeholder workshop, field work, and a beneficiary survey, along with quantitative data from existing sources.

    Data generator for evaluating ETL process quality

    Obtaining the right set of data for evaluating the fulfillment of different quality factors in the extract-transform-load (ETL) process design is rather challenging. First, the real data might be out of reach due to different privacy constraints, while manually providing a synthetic set of data is known to be a labor-intensive task that needs to take various combinations of process parameters into account. More importantly, a single dataset usually does not represent the evolution of data throughout the complete process lifespan, hence missing a plethora of possible test cases. To facilitate this demanding task, in this paper we propose an automatic data generator (i.e., Bijoux). Starting from a given ETL process model, Bijoux extracts the semantics of data transformations, analyzes the constraints they imply over input data, and automatically generates testing datasets. Bijoux is highly modular and configurable, enabling end-users to generate datasets for a variety of interesting test scenarios (e.g., evaluating specific parts of an input ETL process design, with different input dataset sizes, different distributions of data, and different operation selectivities). We have developed a running prototype that implements the functionality of our data generation framework, and here we report experimental findings showing the effectiveness and scalability of our approach.
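
    The core idea (deriving input data from the constraints an ETL operation imposes, e.g. a target selectivity for a filter) can be illustrated with a small sketch. The function below is a hypothetical stand-in for illustration only and is not Bijoux's actual interface.

```python
# Hypothetical sketch of constraint-driven test-data generation for a filter
# operation 'amount > threshold'; this is not Bijoux's actual interface.
import random


def generate_rows(n_rows, selectivity, threshold=100.0):
    """Generate rows so that roughly `selectivity` of them pass the filter."""
    rows = []
    for _ in range(n_rows):
        if random.random() < selectivity:
            amount = random.uniform(threshold + 1, threshold * 10)   # passes the filter
        else:
            amount = random.uniform(0, threshold)                    # filtered out
        rows.append({"amount": round(amount, 2)})
    return rows


dataset = generate_rows(n_rows=1000, selectivity=0.25)
passed = sum(1 for row in dataset if row["amount"] > 100.0)
print(f"{passed} of {len(dataset)} rows satisfy the filter (expected ~25%)")
```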