MONROE-Nettest: A Configurable Tool for Dissecting Speed Measurements in Mobile Broadband Networks
As the demand for mobile connectivity continues to grow, there is a strong
need to evaluate the performance of Mobile Broadband (MBB) networks. In
recent years, mobile "speed", most commonly quantified by data rate, has
become the widely accepted metric for describing their performance.
However, there is a lack of consensus on how mobile speed should be measured.
In this paper, we design and implement MONROE-Nettest to dissect mobile speed
measurements, and investigate the effect of different factors on speed
measurements in the complex mobile ecosystem. MONROE-Nettest is built as an
Experiment as a Service (EaaS) on top of the MONROE platform, an open dedicated
platform for experimentation in operational MBB networks. Using MONROE-Nettest,
we conduct a large scale measurement campaign and quantify the effects of
measurement duration, number of TCP flows, and server location on measured
downlink data rate in 6 operational MBB networks in Europe. Our results
indicate that differences in parameter configuration can significantly affect
the measurement results. We provide the complete MONROE-Nettest toolset as open
source and our measurements as open data. Comment: 6 pages, 3 figures, submitted to INFOCOM CNERT Workshop 201
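The abstract's point that parameter configuration changes the measured speed can be illustrated with a minimal sketch. The helper below is hypothetical (not part of MONROE-Nettest): it computes a downlink data rate from per-second byte counts aggregated over the parallel TCP flows, and shows how the chosen measurement window shifts the reported number.

```python
def rate_mbps(samples_bytes, exclude_first=0):
    """Mean downlink rate in Mbit/s from per-second byte counts of the
    aggregated TCP flows, optionally excluding the initial seconds
    (TCP slow-start ramp-up)."""
    window = samples_bytes[exclude_first:]
    if not window:
        raise ValueError("measurement window is empty")
    return sum(window) * 8 / len(window) / 1e6

# Per-second bytes received, summed over all parallel flows: the first
# seconds are dominated by slow start, so the configured duration and
# warm-up cutoff change the reported "speed".
samples = [125_000, 625_000, 1_250_000, 1_250_000, 1_250_000]
print(rate_mbps(samples))                   # 7.2  (full 5 s window)
print(rate_mbps(samples, exclude_first=2))  # 10.0 (steady state only)
```

The same trace yields 7.2 or 10.0 Mbit/s depending purely on the window configuration, which is the kind of effect the measurement campaign quantifies.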
On the User Perception of Configurable Reference Process Models - Initial Insights
Enterprise Systems potentially lead to significant efficiency gains but require a well-conducted configuration process. A configurable reference modelling language based on the widely used EPC notation, which can be used to specify Configurable EPCs (C-EPCs), has been developed to support the task of Enterprise Systems configuration. This paper presents a laboratory experiment on C-EPCs and discusses empirical data comparing C-EPCs to regular EPCs. Using the Method Adoption Model, we report on modellers' perceptions of the usefulness and ease of use of C-EPCs, concluding that C-EPCs provide sufficient yet improvable conceptual support for reference model configuration.
Business Process Configuration According to Data Dependency Specification
Configuration techniques have been used in several fields, such as the design of business
process models. These models sometimes depend on data dependencies, making it easier to describe
what has to be done rather than how. Configuration models enable a declarative representation
of business processes, deciding the most appropriate workflow in each case. Unfortunately,
the data dependencies among activities, and how they can affect the correct execution of the process,
have been overlooked in the declarative specifications and configurable systems found in the literature.
In order to find the best process configuration for optimizing the execution time of processes according
to data dependencies, we propose the use of the Constraint Programming paradigm, with the aim of
obtaining an adaptable imperative model as a function of the data dependencies of the activities
described declaratively. Ministerio de Ciencia y Tecnología TIN2015-63502-C3-2-R. Fondo Europeo de Desarrollo Regional
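The optimization goal above can be sketched with a toy stand-in for the paper's Constraint Programming model (the activities, durations, and dependencies below are hypothetical): when every activity whose input data is ready may run in parallel, the minimal execution time of the process is the critical-path length induced by the data dependencies.

```python
# Hypothetical process: activity -> duration, and the data dependencies
# (an activity may start only after its producers have finished).
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def earliest_finish(act: str) -> int:
    """Earliest time `act` can finish: its own duration plus the latest
    finish time among the activities producing its input data."""
    return durations[act] + max((earliest_finish(d) for d in deps[act]),
                                default=0)

# Minimal execution time of the whole process under the dependencies.
makespan = max(earliest_finish(a) for a in durations)
print(makespan)  # 8: A (3) then C (4) then D (1); B overlaps with C
```

A real CP model would additionally encode resource limits and configuration choices as constraints and let a solver minimize the makespan; this sketch only shows how data dependencies bound the execution time.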
Techniques for the dynamic randomization of network attributes
Critical infrastructure control systems continue to foster predictable communication paths and static configurations that allow easy access to our networked critical infrastructure around the world. This makes them attractive and easy targets for cyber-attack. We have developed technologies that address these attack vectors by automatically reconfiguring network settings. Applying these protective measures converts control systems into "moving targets" that proactively defend themselves against attack. This "Moving Target Defense" (MTD) revolves around the movement of network reconfiguration: securely communicating reconfiguration specifications to other network nodes as required, and ensuring that connectivity between nodes is uninterrupted. Software-Defined Networking (SDN) is leveraged to meet many of these goals. Our MTD approach prevents adversaries from targeting known static attributes of network devices and systems, and consists of the following three techniques: (1) Network Randomization for TCP/UDP Ports; (2) Network Randomization for IP Addresses; (3) Network Randomization for Network Paths. In this paper, we describe the implementation of the aforementioned technologies. We also discuss the individual and collective successes of the techniques, challenges for deployment, constraints and assumptions, and the performance implications of each technique.
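One way to realize port randomization without coordination traffic, sketched here as an illustration rather than the paper's SDN-based implementation, is a keyed mapping: every node holding a shared secret derives the same ephemeral port for a service in each time epoch, so the port hops while connectivity is preserved. The function name and parameters are assumptions for this sketch.

```python
import hashlib
import hmac
import struct

def randomized_port(secret: bytes, service: str, epoch: int,
                    low: int = 10000, high: int = 60000) -> int:
    """Derive the ephemeral port for `service` in a given time epoch
    from a shared secret. All nodes holding the secret compute the same
    mapping, so the port can change each epoch with no reconfiguration
    messages on the wire."""
    msg = service.encode() + struct.pack(">Q", epoch)
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return low + int.from_bytes(digest[:4], "big") % (high - low)

# Both endpoints agree on the port for this epoch; next epoch it moves.
print(randomized_port(b"shared-key", "modbus", epoch=1))
```

An adversary who scanned the service's port in one epoch finds it gone in the next, which is the "moving target" property the three randomization techniques aim for.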
MiceTrap: scalable traffic engineering of datacenter mice flows using OpenFlow
Datacenter network topologies are inherently built with enough redundancy to offer multiple paths between pairs of end hosts for increased flexibility and resilience. On top, traffic engineering (TE) methods are needed to utilize the abundance of bisection bandwidth efficiently. Previously proposed TE approaches differentiate between long-lived flows (elephant flows) and short-lived flows (mice flows), using dedicated traffic management techniques to handle elephant flows while treating mice flows with baseline routing methods. We show through an example that such an approach can cause congestion for short-lived (but not necessarily less critical) flows. To overcome this, we propose MiceTrap, an OpenFlow-based TE approach targeting datacenter mice flows. MiceTrap achieves scalability with respect to the number of mice flows through flow aggregation, together with a software-configurable weighted routing algorithm that offers improved load balancing for mice flows.
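The combination of aggregation and weighted routing can be sketched as follows; this is an illustrative WCMP-style hash scheme, not MiceTrap's actual OpenFlow group-table mechanism, and the keys and weights are assumptions. Hashing an aggregate key keeps all flows of one aggregate on a single path (avoiding reordering), while the configured weights spread different aggregates across paths in proportion.

```python
import hashlib
from bisect import bisect
from itertools import accumulate

def route_aggregate(agg_key: str, paths, weights):
    """Map an aggregated group of mice flows onto one of the available
    paths in proportion to software-configured weights, by hashing the
    aggregate key into the cumulative weight range."""
    cum = list(accumulate(weights))  # e.g. weights [3, 1, 1] -> [3, 4, 5]
    h = int(hashlib.sha256(agg_key.encode()).hexdigest(), 16) % cum[-1]
    return paths[bisect(cum, h)]     # pick the bucket the hash falls into

# Every flow in the same aggregate takes the same path; across many
# aggregates the three paths receive traffic roughly 3:1:1.
print(route_aggregate("dst-rack-7", ["p0", "p1", "p2"], [3, 1, 1]))
```

Because the weights live in software, a controller can rebalance mice traffic by rewriting them, without touching per-flow state, which is where the scalability claim comes from.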