Integrating information and knowledge for enterprise innovation
It has been widely accepted that enterprise integration can be a source of socio-technical and cultural problems within organisations wishing to provide a focussed end-to-end business service. This can lead to the "straitjacketing" of business process architectures, suppressing responsive business re-engineering and competitive advantage for some companies. Accordingly, the current typology and emergent forms of Enterprise Resource Planning (ERP) and Enterprise Application Integration (EAI) technologies are set in the context of understanding information and knowledge integration philosophies. Key influences and trends in emerging IS integration choices, for end-to-end, cost-effective and flexible knowledge integration, are examined. As touch points across and outside organisations proliferate, via workflow- and relationship-management-driven value innovation, aspects of knowledge refinement and knowledge integration pose challenges to maximising the potential of innovation and sustainable success within enterprises, in terms of the increasing propensity for data fragmentation and the lack of effective information management in the light of information overload. Furthermore, the nature of IS mediation inherent within decision-making and workflow-based business processes provides the basis for evaluating the effects of information and knowledge integration. Hence, the authors propose a conceptual, holistic evaluation framework which encompasses these ideas. It is thus argued that such trends, and their implications regarding enterprise IS integration to engender sustainable competitive advantage, require fundamental re-thinking.
DISCO: Distributed Multi-domain SDN Controllers
Modern multi-domain networks now span datacenter networks, enterprise
networks, customer sites and mobile entities. Such networks are critical and,
thus, must be resilient, scalable and easily extensible. The emergence of
Software-Defined Networking (SDN) protocols, which enable the data plane to be
decoupled from the control plane and the network to be programmed dynamically,
opens up new ways to architect such networks. In this paper, we propose DISCO,
an open and extensible DIstributed SDN COntrol plane able to cope with the
distributed and heterogeneous nature of modern overlay networks and wide area
networks. DISCO controllers manage their own network domain and communicate
with each other to provide end-to-end network services. This communication is
based on a unique, lightweight and highly manageable control channel used by
agents to self-adaptively share aggregated network-wide information. We
implemented DISCO on top of the Floodlight OpenFlow controller and the AMQP
protocol. We demonstrate how DISCO's control plane dynamically adapts to
heterogeneous network topologies while being resilient enough to survive
disruptions and attacks, and while providing classic functionalities such as
end-point migration and network-wide traffic engineering. The experimental
results we present are organized around three use cases: inter-domain topology
disruption, end-to-end priority service requests and virtual machine migration.
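As a rough illustration of the idea of a lightweight control channel carrying only aggregated network-wide information, the sketch below shows a per-domain summary being built and serialized for publication. This is not code from the DISCO paper; the function names, metric fields, and payload format are all hypothetical, and the actual AMQP publishing step is omitted.

```python
import json

def aggregate_domain_view(domain_id, links):
    """Summarize intra-domain link metrics into a compact advertisement.

    `links` is a list of dicts with 'latency_ms' and 'free_bw_mbps' keys.
    Only the aggregate (worst-case latency, bottleneck bandwidth) crosses
    the inter-domain channel, keeping the advertisement lightweight.
    """
    return {
        "domain": domain_id,
        "max_latency_ms": max(link["latency_ms"] for link in links),
        "bottleneck_bw_mbps": min(link["free_bw_mbps"] for link in links),
    }

def encode_advert(view):
    # Serialized payload an agent could publish on an AMQP control channel.
    return json.dumps(view, sort_keys=True).encode()
```

Keeping only the aggregate, rather than the full intra-domain topology, is what makes it cheap for every controller to hold a network-wide view of all peer domains.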
Architecting the cyberinfrastructure for National Science Foundation Ocean Observatories Initiative (OOI)
The NSF Ocean Observatories Initiative (OOI) is a networked ocean
research observatory with arrays of instrumented water column moorings and
buoys, profilers, gliders and autonomous underwater vehicles (AUV) within different
open ocean and coastal regions. OOI infrastructure also includes a cabled
array of instrumented seafloor platforms and water column moorings on the
Juan de Fuca tectonic plate. This networked system of instruments, moored and
mobile platforms, and arrays will provide ocean scientists, educators and the
public the means to collect sustained, time-series data sets that will enable examination
of complex, interlinked physical, chemical, biological, and geological
processes operating throughout the coastal regions and open ocean. The seven
arrays built and deployed during construction support the core set of OOI multidisciplinary
scientific instruments that are integrated into a networked software
system that will process, distribute, and store all acquired data. The OOI
has been built with an expectation of operation for 25 years. Peer Reviewed
H2O: An Autonomic, Resource-Aware Distributed Database System
This paper presents the design of an autonomic, resource-aware distributed
database which enables data to be backed up and shared without complex manual
administration. The database, H2O, is designed to make use of unused resources
on workstation machines. Creating and maintaining highly available, replicated
database systems can be difficult for untrained users, and costly for IT
departments. H2O reduces the need for manual administration by autonomically
replicating data and load-balancing across machines in an enterprise.
Provisioning hardware to run a database system can be unnecessarily costly as
most organizations already possess large quantities of idle resources in
workstation machines. H2O is designed to utilize this unused capacity by using
resource availability information to place data and plan queries over
workstation machines that are already being used for other tasks. This paper
discusses the requirements for such a system and presents the design and
implementation of H2O. Comment: Presented at SICSA PhD Conference 2010 (http://www.sicsaconf.org/)
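The core placement idea, using resource availability information to decide where replicas go, can be sketched as a greedy assignment. This is only an illustration under assumed inputs, not H2O's actual algorithm: the function name, the single idle-capacity score per machine, and the unit cost per replica are all hypothetical.

```python
def place_replicas(tables, machines, copies=2):
    """Greedy resource-aware placement: each table gets `copies` replicas
    on the machines currently reporting the most idle capacity.

    `machines` maps machine name -> idle capacity score; scores are
    decremented as replicas are assigned so that load spreads out rather
    than piling onto one idle workstation.
    """
    idle = dict(machines)
    placement = {}
    for table in tables:
        chosen = sorted(idle, key=idle.get, reverse=True)[:copies]
        placement[table] = chosen
        for m in chosen:
            idle[m] -= 1  # hypothetical unit cost per hosted replica
    return placement
```

In a real system the capacity score would be refreshed from live resource monitoring, since workstation load changes as users come and go.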
Container network functions: bringing NFV to the network edge
In order to cope with the increasing network utilization driven by new mobile clients, and to satisfy demand for new network services and performance guarantees, telecommunication service providers are exploiting virtualization over their networks by implementing network services in virtual machines, decoupled from legacy hardware-accelerated appliances. This effort, known as Network Function Virtualization (NFV), reduces OPEX and creates new business opportunities. At the same time, next-generation mobile, enterprise, and IoT networks are introducing the concept of computing capabilities being pushed to the network edge, in close proximity to the users. However, the heavy footprint of today's NFV platforms prevents them from operating at the network edge. In this article, we identify the opportunities of virtualization at the network edge and present Glasgow Network Functions (GNF), a container-based NFV platform that runs and orchestrates lightweight container VNFs, saving core network utilization and providing lower latency. Finally, we demonstrate three useful examples of the platform: IoT DDoS remediation, on-demand troubleshooting for telco networks, and supporting roaming of network functions.
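The latency benefit of edge placement comes down to a simple scheduling decision: run the VNF on the closest edge node that still has room for it. The sketch below is a generic illustration of that decision, not GNF's orchestrator; the function name, latency map, and capacity model are assumptions.

```python
def pick_edge_node(user_latency_ms, capacity, vnf_cost=1):
    """Choose an edge host for a container VNF.

    `user_latency_ms` maps edge node -> measured latency to the user;
    `capacity` maps edge node -> free slots. Returns the lowest-latency
    node that can still host the function, or None if the VNF must fall
    back to the core.
    """
    candidates = [n for n, free in capacity.items() if free >= vnf_cost]
    if not candidates:
        return None  # no edge capacity: traffic would traverse the core
    return min(candidates, key=lambda n: user_latency_ms[n])
```

Because container VNFs are lightweight, each edge node can host many such functions, which is what makes this per-user placement practical where heavyweight VM-based platforms would not fit.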
ANCHOR: logically-centralized security for Software-Defined Networks
While the centralization of SDN brought advantages such as a faster pace of
innovation, it also disrupted some of the natural defenses of traditional
architectures against different threats. The literature on SDN has mostly been
concerned with the functional side, despite some specific works concerning
non-functional properties like 'security' or 'dependability'. Though addressing
the latter in an ad-hoc, piecemeal way may work, it will most likely lead to
efficiency and effectiveness problems. We claim that the enforcement of
non-functional properties as a pillar of SDN robustness calls for a systemic
approach. As a general concept, we propose ANCHOR, a subsystem architecture
that promotes the logical centralization of non-functional properties. To show
the effectiveness of the concept, we focus on 'security' in this paper: we
identify the current security gaps in SDNs and we populate the architecture
middleware with the appropriate security mechanisms, in a global and consistent
manner. Essential security mechanisms provided by ANCHOR include reliable
manner. Essential security mechanisms provided by anchor include reliable
entropy and resilient pseudo-random generators, and protocols for secure
registration and association of SDN devices. We claim and justify in the paper
that centralizing such mechanisms is key for their effectiveness, by allowing
us to: define and enforce global policies for those properties; reduce the
complexity of controllers and forwarding devices; ensure higher levels of
robustness for critical services; foster interoperability of the non-functional
property enforcement mechanisms; and promote the security and resilience of the
architecture itself. We discuss design and implementation aspects, and we prove
and evaluate our algorithms and mechanisms, including the formalisation of the
main protocols and the verification of their core security properties using the
Tamarin prover. Comment: 42 pages, 4 figures, 3 tables, 5 algorithms, 139 references
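To make the "secure registration and association of SDN devices" idea concrete, the sketch below shows a generic nonce-based challenge-response using an HMAC over a pre-shared key. This is explicitly not ANCHOR's protocol (those are formalized and verified in Tamarin); the function names and message format are hypothetical, and only standard-library primitives are used.

```python
import hmac
import hashlib
import secrets

def make_challenge():
    # Fresh 16-byte nonce from the OS CSPRNG; ANCHOR-style designs stress
    # that such reliable entropy must be centrally assured.
    return secrets.token_bytes(16)

def respond(psk, nonce, device_id):
    # The device proves knowledge of the pre-shared key, bound to this
    # nonce and its identity, without ever sending the key itself.
    return hmac.new(psk, nonce + device_id.encode(), hashlib.sha256).digest()

def verify(psk, nonce, device_id, tag):
    # Constant-time comparison avoids leaking the expected tag byte by byte.
    expected = hmac.new(psk, nonce + device_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

Centralizing this logic in one anchor-like subsystem, rather than re-implementing it in every controller and switch, is the paper's argument for logically-centralized enforcement of non-functional properties.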