CERN openlab Whitepaper on Future IT Challenges in Scientific Research
This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.
Re-designing Dynamic Content Delivery in the Light of a Virtualized Infrastructure
We explore the opportunities and design options enabled by novel SDN and NFV technologies by re-designing a dynamic Content Delivery Network (CDN) service. Our system, named MOSTO, provides performance levels comparable to those of a regular CDN, but does not require the deployment of a large distributed infrastructure. In the process of designing the system, we identify relevant functions that could be integrated into the future Internet infrastructure. Such functions greatly simplify the design, and increase the effectiveness, of services such as MOSTO. We demonstrate our system using a mixture of simulation, emulation, and testbed experiments, and by realizing a proof-of-concept deployment in a planet-wide commercial cloud system.

Comment: Extended version of the paper accepted for publication in the JSAC special issue on Emerging Technologies in Software-Driven Communication - November 201
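The abstract does not detail which functions MOSTO would delegate to the infrastructure, so the following is only an illustrative sketch of the general idea behind SDN-assisted content delivery: a controller steers each client request to a content replica instead of relying on a fixed, pre-deployed CDN edge. The function name and the least-loaded policy are assumptions for illustration, not MOSTO's actual design.

```python
def pick_replica(client_ip: str, replicas: list[str], load: dict[str, float]) -> str:
    """Toy redirection policy an SDN controller might install as flow rules:
    steer the client to the currently least-loaded content replica."""
    return min(replicas, key=lambda r: load[r])

# Example: three cloud-hosted replicas standing in for a distributed CDN edge.
print(pick_replica("203.0.113.7",
                   ["replica-eu", "replica-us", "replica-asia"],
                   {"replica-eu": 0.7, "replica-us": 0.2, "replica-asia": 0.5}))
```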
CloudJet4BigData: Streamlining Big Data via an Accelerated Socket Interface
Big data applications must feed users with fresh processing results, and cloud platforms can be used to speed them up. This paper describes a new data communication protocol (CloudJet) for long-distance, large-volume big data access operations, designed to alleviate the large latencies encountered in sharing big data resources in the clouds. It encapsulates a dynamic multi-stream/multi-path engine at the socket level, which conforms to the Portable Operating System Interface (POSIX) and can thereby accelerate any POSIX-compatible application across IP-based networks. It was demonstrated that CloudJet accelerates typical big data applications such as very large databases (VLDB), data mining, media streaming and office applications by up to tenfold in real-world tests.
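The paper's abstract does not publish CloudJet's API, so the following is only a minimal sketch of the multi-stream idea it describes: striping one logical transfer across several parallel TCP connections to better fill a long, high-latency path. The names open_streams and send_striped are hypothetical, and the receiver-side framing needed to reassemble chunks in order is omitted.

```python
import socket

def open_streams(host: str, port: int, n_streams: int = 4) -> list[socket.socket]:
    """Open several parallel TCP connections to the same endpoint."""
    return [socket.create_connection((host, port)) for _ in range(n_streams)]

def send_striped(streams: list[socket.socket], payload: bytes, chunk: int = 64 * 1024) -> None:
    """Stripe a byte payload round-robin across the open streams.

    A real engine would add sequence numbers so the receiver can
    reassemble in order; that framing is left out for brevity.
    """
    for i, off in enumerate(range(0, len(payload), chunk)):
        streams[i % len(streams)].sendall(payload[off:off + chunk])
```

Placing such an engine behind the standard socket calls is what lets any POSIX-compatible application benefit without modification.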
Data as a Service (DaaS) for sharing and processing of large data collections in the cloud
Data as a Service (DaaS) is among the latest kinds of services being investigated in the Cloud computing community. The main aim of DaaS is to overcome limitations of state-of-the-art approaches in data technologies, in which data is stored in and accessed from repositories whose location is known and is relevant for sharing and processing. Besides limiting data sharing, current approaches also fail to fully decouple software services from data, and thus impose limitations on interoperability. In this paper we propose a DaaS approach for intelligent sharing and processing of large data collections, with the aim of abstracting the data location (making it irrelevant to the needs of sharing and accessing) and of fully decoupling the data from its processing. The aim of our approach is to build a Cloud computing platform offering DaaS to support large communities of users that need to share, access, and process data in order to collectively build knowledge from it. We exemplify the approach with large data collections from the health and biology domains.
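The paper does not give its platform's interface, so the sketch below only illustrates the location-abstraction principle it argues for: callers name a logical dataset, never a repository URL, so processing code stays decoupled from where the data currently lives. The class name, catalog structure, and replica-failover policy are assumptions.

```python
import urllib.request

class DataService:
    """Hypothetical location-transparent accessor for a DaaS platform."""

    def __init__(self, catalog: dict[str, list[str]]):
        # catalog maps a logical dataset name to its current replica URLs;
        # the platform, not the caller, keeps this mapping up to date.
        self.catalog = catalog

    def open(self, name: str):
        # Try each replica in turn; the caller never learns which one served it.
        for url in self.catalog.get(name, []):
            try:
                return urllib.request.urlopen(url)
            except OSError:
                continue
        raise IOError(f"no reachable replica for {name}")
```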
Electronic Security Implications of NEC: A Tactical Battlefield Scenario
In [1] three principal themes are identified by the UK MoD (Ministry of Defence) in order to deliver the vision of NEC (Network Enabled Capability): Networks, People and Information. It is the security of information that is discussed in this article. The drive towards NEC is due to many factors; one defining factor is to increase operational tempo, in effect placing one force ahead of its enemy by acting within the enemy's OODA (Observe, Orient, Decide, Act) loop. However, as technical and procedural systems are advanced to achieve the vision of NEC, what impact does this have on the traditional information security triangle of preserving the confidentiality, integrity and availability of information? And how does this influence current security engineering and accreditation practices, particularly in light of the proliferation problem? This article describes research conducted to answer these questions, building upon the findings of the NITEworks® [2] ISTAR (Intelligence, Surveillance, Target Acquisition and Reconnaissance) Theme studies and focusing on a tactical battlefield scenario. This scenario relates to the IFPA (Indirect Fire Precision Attack) [3] project, where the efficient synchronisation of potentially numerous sources of information is required to provide real-time decisions and delivery of effects, in accordance with the requirements of NEC. It is envisaged that the IFPA systems will consist of numerous sub-systems, each providing a unique effecting capability to the UK army with differing levels of speed, accuracy and range.
The Motivation, Architecture and Demonstration of Ultralight Network Testbed
In this paper we describe progress in the NSF-funded Ultralight project and a recent demonstration of Ultralight technologies at SuperComputing 2005 (SC|05). The goal of the Ultralight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Ultralight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the Ultralight project. We then cover early results in the various working areas of the project. The remainder of the paper describes our experiences with the Ultralight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the Ultralight backbone network. The exercise highlighted the benefits of Ultralight's research and development efforts, which are enabling new and advanced methods of distributed scientific data analysis.
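The abstract mentions kernel setup and application tuning but does not give the settings used at SC|05, so the sketch below shows only one standard ingredient of such tuning as an assumption: requesting large socket buffers so a single TCP flow can fill a high bandwidth-delay-product path. The 16 MiB figure is illustrative, not a value from the paper.

```python
import socket

BUF = 16 * 1024 * 1024  # 16 MiB; illustrative, not the SC|05 value

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request large send/receive buffers; the kernel silently caps these at
# net.core.wmem_max / net.core.rmem_max, which must themselves be raised
# (the "kernel setup" part of the tuning) for the request to take effect.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
```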
The Ultralight project: the network as an integrated and managed resource for data-intensive science
This paper looks at the UltraLight project, which treats the network interconnecting globally distributed data sets as a dynamic, configurable, and closely monitored resource, in order to construct a next-generation system that can meet the high-energy physics community's data-processing, distribution, access, and analysis needs.