Merlin: A Language for Provisioning Network Resources
This paper presents Merlin, a new framework for managing resources in
software-defined networks. With Merlin, administrators express high-level
policies using programs in a declarative language. The language includes
logical predicates to identify sets of packets, regular expressions to encode
forwarding paths, and arithmetic formulas to specify bandwidth constraints. The
Merlin compiler uses a combination of advanced techniques, including a constraint
solver that allocates bandwidth using parameterizable heuristics, to translate
these policies into code that can be executed on network elements. To
facilitate dynamic adaptation, Merlin provides mechanisms for delegating
control of sub-policies and for verifying that modifications made to
sub-policies do not violate global constraints. Experiments demonstrate the
expressiveness and scalability of Merlin on real-world topologies and
applications. Overall, Merlin simplifies network administration by providing
high-level abstractions for specifying network policies and scalable
infrastructure for enforcing them.
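The three policy ingredients the abstract names (logical packet predicates, path regular expressions, bandwidth constraints) can be illustrated as plain data. This is a hypothetical Python analogy of a single policy, not Merlin's actual syntax or compiler:

```python
# Illustrative sketch (not Merlin's syntax): a policy pairs a packet
# predicate with a forwarding-path regex and a bandwidth constraint.
import re

policy = {
    "predicate": lambda pkt: pkt["dst_port"] == 80,      # identifies a set of packets
    "path": re.compile(r"ingress (middlebox )*egress"),  # allowed forwarding paths
    "min_bw_mbps": 100,                                  # bandwidth constraint
}

pkt = {"dst_port": 80, "src": "10.0.0.1"}
route = "ingress middlebox egress"

# A packet is governed by the policy if the predicate selects it and its
# route is in the language of the path expression.
matches = policy["predicate"](pkt) and bool(policy["path"].fullmatch(route))
print(matches)  # True
```

In Merlin itself these pieces are compiled together, with the bandwidth terms handed to the constraint solver; the dictionary above only shows how the three abstractions partition a policy.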
Shared Arrangements: practical inter-query sharing for streaming dataflows
Current systems for data-parallel, incremental processing and view
maintenance over high-rate streams isolate the execution of independent
queries. This creates unwanted redundancy and overhead in the presence of
concurrent incrementally maintained queries: each query must independently
maintain the same indexed state over the same input streams, and new queries
must build this state from scratch before they can begin to emit their first
results. This paper introduces shared arrangements: indexed views of maintained
state that allow concurrent queries to reuse the same in-memory state without
compromising data-parallel performance and scaling. We implement shared
arrangements in a modern stream processor and show order-of-magnitude
improvements in query response time and resource consumption for interactive
queries against high-throughput streams, while also significantly improving
performance in other domains including business analytics, graph processing,
and program analysis.
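The core idea, one indexed view maintained once and read by many concurrent queries, can be sketched in a few lines. The names and structure here are an illustrative stand-in, not the paper's implementation:

```python
from collections import defaultdict

# Minimal sketch of a shared arrangement: a single in-memory index is
# maintained incrementally, and independent queries reuse it instead of
# each building and maintaining a private copy of the same state.
arrangement = defaultdict(list)  # key -> values, the shared indexed view

def ingest(key, value):
    arrangement[key].append(value)  # one maintenance path for all queries

# Two independent "queries" read the same shared index.
def query_count(key):
    return len(arrangement[key])

def query_latest(key):
    return arrangement[key][-1] if arrangement[key] else None

for k, v in [("a", 1), ("a", 2), ("b", 3)]:
    ingest(k, v)

print(query_count("a"), query_latest("b"))  # 2 3
```

The savings in the paper come from exactly this asymmetry: maintenance work and indexed state are paid for once, while each additional query only adds read-side work.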
Orthrus: A Framework for Implementing Efficient Collective I/O in Multi-core Clusters
Optimization of access patterns using collective I/O imposes the overhead of exchanging data between processes. In a multi-core-based cluster, the costs of inter-node and intra-node data communication differ vastly, and this heterogeneity in data-exchange efficiency poses both a challenge and an opportunity for implementing efficient collective I/O. The opportunity is to exploit fast intra-node communication by improving communication locality. However, such an effort is at odds with improving access locality for I/O efficiency, which can also be critical to collective-I/O performance. To address this issue, we propose a framework, Orthrus, that can accommodate multiple collective-I/O implementations, each optimized for certain performance aspects, and dynamically select the best-performing one according to the current workload and system conditions. We have implemented Orthrus in the ROMIO library. Our experimental results with representative MPI-IO benchmarks on both a small dedicated cluster and a large production HPC system show that Orthrus can significantly improve collective-I/O performance under various workloads and system scenarios.
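The framework's selection idea can be caricatured as a registry of interchangeable implementations plus a runtime chooser. The strategy names and the selection heuristic below are hypothetical stand-ins, not Orthrus's actual policies:

```python
# Hypothetical sketch of dynamic strategy selection: several collective-I/O
# implementations coexist, and one is chosen per workload at runtime.
def locality_first(requests):
    return sorted(requests)  # reorder to favor contiguous file access

def comm_first(requests):
    return list(requests)    # keep order that favors intra-node exchange

STRATEGIES = {"locality": locality_first, "communication": comm_first}

def select_strategy(workload):
    # Stand-in heuristic: prefer access locality when most requests are
    # non-contiguous, otherwise favor cheap intra-node communication.
    return "locality" if workload["noncontig_ratio"] > 0.5 else "communication"

workload = {"noncontig_ratio": 0.8}
chosen = select_strategy(workload)
print(chosen, STRATEGIES[chosen]([3, 1, 2]))  # locality [1, 2, 3]
```

The point of the sketch is the architecture, not the heuristic: because all implementations share one interface, the chooser can be swapped or tuned without touching the I/O code paths.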
Assessing the Performance of Virtualization Technologies for NFV: a Preliminary Benchmarking
The NFV paradigm transforms applications that have run for decades on dedicated appliances into software images to be consolidated on standard servers.
Although NFV is implemented through cloud computing technologies (e.g., virtual machines, virtual switches), the network traffic that such components must handle in NFV differs from the traffic they process in a typical cloud computing scenario.
This paper therefore provides a (preliminary) benchmarking of widespread virtualization technologies when used in NFV, that is, when they are exploited to run so-called virtual network functions and to chain them in order to create complex services.
VThreads: A novel VLIW chip multiprocessor with hardware-assisted PThreads
We discuss VThreads, a novel VLIW CMP with hardware-assisted shared-memory thread support. VThreads supports instruction-level parallelism via static multiple issue and thread-level parallelism via hardware-assisted POSIX Threads, along with extensive customization. It allows the instantiation of tightly-coupled streaming accelerators and supports up to 7-address Multiple-Input, Multiple-Output instruction extensions. VThreads is designed in technology-independent Register-Transfer-Level VHDL and prototyped on 40 nm and 28 nm Field-Programmable Gate Arrays. It was evaluated against a PThreads-based multiprocessor based on the SPARC V8 ISA. On a 65 nm ASIC implementation, VThreads achieves up to a 7.2x performance increase on synthetic benchmarks, 5x on a parallel Mandelbrot implementation, 66% better on a threaded JPEG implementation, 79% better on an edge-detection benchmark, and ~13% improvement on DES compared to the Leon3MP CMP. In the range of 2 to 8 cores, VThreads demonstrates a post-route (statistical) power reduction of 57%-65% at an area increase of 1.2%-10%, compared to a similarly configured Leon3MP CMP. This combination of micro-architectural features, scalability, extensibility, hardware support for low-latency PThreads, power efficiency, and area makes the processor an attractive proposition for low-power, deeply-embedded applications requiring minimal OS support.
The Analysis of a Link between a Remote Local Area Network and its Server Resources
As the Air Force transitions to an expeditionary force, the service's ability to provide computer capabilities at remote locations becomes increasingly important. One way to provide this support is to create a Local Area Network (LAN) in which the workstations are positioned at the deployed location while the servers are maintained at a Main Operating Base (MOB). This saves the military money because it eliminates the need to purchase and deploy server equipment, as well as the need to deploy personnel to set up and maintain the servers. There is, however, a tradeoff. As the number of personnel at the deployed location increases and their computing requirements change, the link between the deployed location and the MOB can become saturated, causing degraded performance. This research examines how the number of personnel at the deployed location and the types of applications they use affect the link and the overall system performance. It also examines the effects of adding a server to the deployed location. The results of this study show that the network as configured can support up to 30 users. With the addition of an FTP server at the deployed location, the system can handle 50 users. The system was only able to handle 70 users under the lightest application loads. If the network must support more than 50 users, more bandwidth is needed between the deployed location and the MOB.
User adoption of a CRM-based information system within a financial services organisation: An empirical analysis
Financial services firms require processes and systems that can support and maintain customer-related information for the purposes of core business-focussed activity. Specifically within the investment banking sector, the importance and criticality of such customer information underpin the firm's ability to transact sales, trading, and other advisory-based services in an efficient and relevant manner. The design and development of Customer Relationship Management (CRM) systems to address the given external vs. internal customer-information touchpoints therefore provides a vital link between financial services professionals, client data, and business processes. In doing so, the input of CRM user requirements is a key step in deriving benefit from such a technology solution. This paper therefore identifies and details user requirements and experiences of such an information system within a case-study company and highlights pertinent issues for the adoption of such systems within the given sector.
Analytical Modeling Framework to Assess the Economic and Environmental Impacts of Residential Deliveries, and Evaluate Sustainable Last-Mile Strategies
In the last decade, e-commerce has grown substantially, increasing business-to-business, business-to-consumer, and consumer-to-consumer transactions. While this has brought prosperity for the e-retailers, the ever-increasing consumer demand has brought more trucks to residential areas, bringing along externalities such as congestion, air and noise pollution, and energy consumption. To cope with this, different logistics strategies, such as the introduction of micro-hubs, alternative delivery points, and the use of cargo bikes and zero-emission vehicles for the last mile, have been introduced and, in some cases, implemented. This project therefore aims to develop an analytical framework to model urban last-mile delivery. In particular, this study will build upon previously developed econometric behavior models that capture e-commerce demand. Then, based on continuous approximation techniques, the authors will model the last-mile delivery operations. Finally, using the cost-based sustainability assessment model developed in this study, the authors will estimate the economic and environmental impacts of residential deliveries under different city logistics strategies.
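Continuous-approximation models of the kind the abstract mentions commonly estimate delivery tour length from stop density, on the order of k * sqrt(n * A) for n stops over an area A. The sketch below uses that standard form with illustrative coefficients that are not taken from the study:

```python
import math

# Hedged sketch of a continuous-approximation impact estimate.
# The coefficient k and the per-km cost/emission factors are illustrative
# placeholders, not values from the project described above.
def tour_length_km(n_stops, area_km2, k=0.57):
    # Classic square-root tour-length approximation for n stops in area A.
    return k * math.sqrt(n_stops * area_km2)

def delivery_impacts(n_stops, area_km2, cost_per_km=0.8, kg_co2_per_km=0.25):
    dist = tour_length_km(n_stops, area_km2)
    return {
        "distance_km": dist,
        "cost": dist * cost_per_km,    # economic impact of the tour
        "co2_kg": dist * kg_co2_per_km # environmental impact of the tour
    }

impacts = delivery_impacts(n_stops=100, area_km2=25)
print(round(impacts["distance_km"], 1))  # 28.5
```

A framework of this shape makes it easy to compare strategies: a micro-hub or cargo-bike scenario simply changes the area served per vehicle or the per-km factors, and the same formulas re-estimate cost and emissions.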
Secure Cloud Connectivity for Scientific Applications
Cloud computing improves utilization and flexibility in allocating computing resources while reducing infrastructural costs. However, in many cases cloud technology is still proprietary and tainted by security issues rooted in multi-user and hybrid cloud environments. A lack of secure connectivity in a hybrid cloud environment hinders the adoption of clouds by scientific communities that require scaling out of the local infrastructure using publicly available resources for large-scale experiments. In this article, we present a case study of the DII-HEP secure cloud infrastructure and propose an approach to securely scale out a private cloud deployment to public clouds in order to support hybrid cloud scenarios. A challenge in such scenarios is that cloud vendors may offer varying and possibly incompatible ways to isolate and interconnect virtual machines located in different cloud networks. Our approach is tenant-driven in the sense that the tenant provides its connectivity mechanism. We provide a qualitative and quantitative analysis of a number of alternatives to solve this problem. We have chosen one of the standardized alternatives, the Host Identity Protocol, for further experimentation in a production system because it supports legacy applications in a topologically independent and secure way.