Why (and How) Networks Should Run Themselves
The proliferation of networked devices, systems, and applications that we
depend on every day makes managing networks more important than ever. The
growing security, availability, and performance demands of these applications
require that increasingly difficult network management problems be solved in
real time, across a complex web of interacting protocols and systems. Alas,
just as the importance of network management has increased,
the network has grown so complex that it is seemingly unmanageable. In this new
era, network management requires a fundamentally new approach. Instead of
optimizations based on closed-form analysis of individual protocols, network
operators need data-driven, machine-learning-based models of end-to-end and
application performance based on high-level policy goals and a holistic view of
the underlying components. Instead of anomaly detection algorithms that operate
on offline analysis of network traces, operators need classification and
detection algorithms that can make real-time, closed-loop decisions. Networks
should learn to drive themselves. This paper explores this concept, discussing
how we might attain this ambitious goal by more closely coupling measurement
with real-time control and by relying on learning for inference and prediction
about a networked application or system, as opposed to closed-form analysis of
individual protocols.
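The closed-loop coupling of measurement with real-time control described above can be sketched in a few lines. This is a minimal toy, not the paper's method: the class name, warm-up length, deviation threshold, and the "reroute" action are all illustrative assumptions.

```python
from collections import deque

class ClosedLoopController:
    """Toy sketch of measurement-driven, closed-loop control: keep a
    sliding window of recent measurements, learn a baseline from it,
    and act immediately when a new sample deviates."""

    def __init__(self, window=10, warmup=5, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.warmup = warmup
        self.threshold = threshold

    def observe(self, latency_ms):
        """Ingest one latency measurement; return a control decision."""
        decision = "steady"
        if len(self.samples) >= self.warmup:
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = max(var ** 0.5, 1e-9)  # guard against a zero-variance baseline
            if abs(latency_ms - mean) > self.threshold * std:
                decision = "reroute"  # closed loop: act now, not in offline analysis
        self.samples.append(latency_ms)
        return decision

ctl = ClosedLoopController()
decisions = [ctl.observe(x) for x in [10, 11, 10, 12, 11, 95]]
print(decisions)  # the final outlier triggers "reroute"
```

The point of the sketch is the shape of the loop, not the detector: a real system would replace the running mean/deviation baseline with a learned model of end-to-end performance.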
Beyond Power over Ethernet: the development of Digital Energy Networks for buildings
Alternating current power distribution using analogue control and safety devices has been the dominant process of power distribution within our buildings since the electricity industry began in the late 19th century. However, with advances in digital technology, the seeds of change have been growing over the last decade. Now, with the simultaneous dramatic fall in the power requirements of digital devices and corresponding rise in the capability of Power over Ethernet, an entire desktop environment can be powered by a single direct current (dc) Ethernet cable. Going beyond this, it will soon be possible to power entire office buildings using dc networks. This means the "one-size-fits-all" logic of the existing ac system is no longer relevant; instead, there is an opportunity to redesign the power topology to suit different applications, devices and end-users throughout the building. This paper proposes a 3-tier classification system for the topology of direct current microgrids in commercial buildings, called a Digital Energy Network or DEN. The first tier is power distribution at a full building level (otherwise known as the microgrid); the second tier is power distribution at a room level (the nanogrid); and the third tier is power distribution at a desktop or appliance level (the picogrid). An important aspect of this classification system is how the design focus changes for each grid. For example, a key driver of the picogrid is the usability of the network (high data rates and low power requirements), whereas in the microgrid the main driver is high power and efficiency at low cost.
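The 3-tier classification can be made concrete as a small data model. The class and field names below are illustrative; the driver strings paraphrase the abstract, and the nanogrid's design driver is not stated there, so it is marked as such.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GridTier:
    name: str           # microgrid / nanogrid / picogrid
    scope: str          # the level at which power is distributed
    design_driver: str  # the design focus highlighted for this tier

# The DEN's three tiers, from building scale down to the desktop.
DEN_TIERS = [
    GridTier("microgrid", "full building",
             "high power and efficiency at low cost"),
    GridTier("nanogrid", "room",
             "(not specified in the abstract)"),
    GridTier("picogrid", "desktop or appliance",
             "usability: high data rates, low power requirements"),
]

tier_for_scope = {t.scope: t.name for t in DEN_TIERS}
print(tier_for_scope["room"])  # nanogrid
```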
HyperCell
This research posits that understanding the relationship between Interactive Architecture and the principles of biology will become a mainstream research area in future architectural design. Aiming towards the goal of "making architecture as organic bodies", almost all the current digital techniques in architectural design are executed using computational simulation, digital fabrication technologies and physical computing. Based on its main biological inspiration, Evolutionary Developmental Biology (Evo-Devo), this research proposes a novel bio-inspired design thinking wherein architecture becomes analogous to the growing process of living organisms (Figure 6.1). Instead of being born from the static optimization results that most architecture seems content to aim for nowadays, this research looks towards designing dynamic architectural bodies which can adapt to constantly changing environments and thus seek optimization in real time. In other words, architecture should come "alive" as a living creature in order to actively optimize itself with respect to dynamic environmental conditions and users' behavioral requirements in real time. Following the notion of "architecture as organic bodies", six major topics were derived from the publication "New Wombs: Electric Bodies and Architectural Disorders" (Palumbo, 2000). These topics are aimed at initiating critical discussions between body and space, and are used here to re-interpret six main traits of being an interactive architecture: Dis-measurement, Uprooting, Fluidity, Visceral Nature, Virtuality, and Sensitivity.
These six topics merge diverse key points from the aforementioned chapters, including the vision of actively interacting architecture, the transformation of human bodies under digital culture, the profound biological inspiration from Evo-Devo, and the fundamental componential notion of swarm, which together lead to the ultimate notion of embodying organic, body-like interactive Bio-architecture.
Dis-measurement: Acknowledging the premise of "architecture (technology) as an extension of human bodies" proposed by Marshall McLuhan (McLuhan, Understanding Media: The Extensions of Man, 1964), it is still difficult to explicitly define the boundary of a space, especially in the context of a borderless cyberspace (the Internet). Space in such a context expands more than ever before and thus makes traditional measurement techniques unfeasible. With cyberspace, people can be virtually present in different places at the same time, thus breaking the existing physical boundaries of a space. From another point of view, space, as an extension of our bodies constantly adapting to environmental conditions and user demands, creates an intimate linkage between physical bodies and spatial bodies. Interaction in such instances can be seen from the micro-scale (between biological cells and intelligent architectural components) to the macro-scale (between physical organic bodies and spatial bodies/architectural space).
Uprooting: Apart from further extending the "Dis-measurement" idea by directly plugging into cyberspace (the Internet), "Uprooting" is also interpreted as adaptation devoid of any site/location constraints. In other words, the idea of "Uprooting" implies generating an architecture that can adjust/modify itself in accordance with its existing surroundings through interactions between its smallest intelligent components, like cells in a body searching for dynamic equilibrium. In this case, architecture has no particular reason to be designed as "rooted" on a site.
Fluidity: With the neural system inside the body, most messages can be transmitted, received and sent within less than a millionth of a second. For architecture to be envisioned as an information processor able to react to dynamic environmental conditions and user demands, efficient information protocols must be built into such an organic architectural body to create seamless exterior/interior transformations.
Visceral Nature: Visceral can be interpreted in the form of an embodied organ, which implies envisioning architecture in the form of a living entity. It is no longer a case of mimicking a natural form and thus claiming a building to be organic; rather, one is instigated to look deeper into the principles of a natural form's morphogenesis and to apply these to generate a truly organic space. Through the study of Evo-Devo, several such principles will be applied to generate an interactive organic Bio-architecture. It is thus not an organic-looking shape that matters, but the principles behind the shape. For instance, principles of self-organization, self-assembly, and self-adaptation provide possibilities for making body-like architectures with multi-directional and multi-modal communications both inside-out and outside-in. An intelligent architecture should "live" in the environment just as the body lives with its Visceral Nature.
Virtuality: It is impossible to talk about physical space without mentioning virtual space nowadays. From cyberspace and augmented reality to virtual reality, "Virtuality" has been related to "interaction" from the beginning and has gradually become an inevitable aspect of our daily lives. In fact, virtual space still has to use constraints from the physical world to enhance experiential aspects. The ultimate goal of virtual reality here is not to end up wearing a VR helmet and being constantly stimulated by electronic messages, but to bring the physical to the virtual and, in the process, to search for a dynamic balance between the virtual and the real by merging them together. With the assistance of virtual reality, novel, otherwise unrealizable spaces can still be realized as creative, tangible, immersive and fascinating spaces, which was not possible earlier.
Sensitivity: The notion that "architecture is an extension of human bodies" is crucial to embrace if we consider enhancing the sensing abilities of the space as a body, not only externally but also internally. In a digital space, active sensing can be achieved by attaching specific devices. In an interactive space, like an organic body, the sensing capabilities of the space have to be fast, accurate, intuitive, and predictive. The sensing system should thus not only work externally to sense the surrounding environment but also internally in order to fulfill the users' demands in time. With such a connection between human bodies and spatial bodies, it should become possible for the space to understand the requirements of its users from hand gestures instead of verbal cues. Sensitivity, in this case, should rely on local information distribution as a bottom-up system rather than a top-down, centralized command structure.
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions pertaining to the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy; to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
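As a minimal illustration of the kind of network-data analysis the survey covers, the sketch below labels optical links as healthy or degraded from two signal-quality indicators using a nearest-centroid classifier. This is not a method from the paper: the indicator choice (OSNR, log pre-FEC BER) and all data points are invented for illustration.

```python
# Toy nearest-centroid classifier over synthetic signal-quality data.

def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest to the sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# (OSNR in dB, log10 of pre-FEC BER) -- invented training points.
train = {
    "healthy":  [(22.0, -6.0), (21.0, -5.5), (23.0, -6.5)],
    "degraded": [(14.0, -3.0), (13.0, -2.5), (15.0, -3.5)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

print(nearest_centroid((20.5, -5.0), centroids))  # prints "healthy"
```

In practice one would use a proper ML library and real monitoring data; the sketch only shows the supervised-classification shape of the problem.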
The HSS/SNiC: a conceptual framework for collapsing security down to the physical layer
This work details the concept of a novel network security model called the Super NIC (SNIC) and a Hybrid Super Switch (HSS). The design will ultimately incorporate deep packet inspection (DPI), intrusion detection and prevention (IDS/IPS) functions, as well as network access control technologies, thereby making all end-point network devices inherently secure. The SNIC and HSS functions are modelled using a transparent GNU/Linux bridge with the Netfilter framework.
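The per-frame decision such a transparent bridge makes can be modelled as a toy verdict function combining access control with a DPI-style payload match. The verdict names mirror Netfilter's ACCEPT/DROP; the MAC allow-list and payload signatures below are invented for illustration and stand in for real IDS/IPS rules.

```python
# Toy model of the inline verdict an SNIC/HSS-style bridge could
# issue for each frame crossing it. All addresses and signatures
# here are illustrative, not real rules.

ALLOWED_MACS = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}  # access control list
BAD_SIGNATURES = [b"\x90\x90\x90\x90", b"evil-payload"]    # stand-in DPI rules

def bridge_verdict(src_mac: str, payload: bytes) -> str:
    """Return "ACCEPT" or "DROP" for one frame, Netfilter-style."""
    if src_mac not in ALLOWED_MACS:                       # network access control
        return "DROP"
    if any(sig in payload for sig in BAD_SIGNATURES):     # DPI-style payload match
        return "DROP"
    return "ACCEPT"

print(bridge_verdict("aa:bb:cc:00:00:01", b"hello"))         # ACCEPT
print(bridge_verdict("aa:bb:cc:00:00:01", b"evil-payload"))  # DROP (DPI)
print(bridge_verdict("de:ad:be:ef:00:00", b"hello"))         # DROP (access control)
```

In the actual design these checks would run inside the kernel's bridge/Netfilter hooks rather than in user-space Python; the sketch only shows the decision logic collapsed to the physical layer.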
CYCLONE Unified Deployment and Management of Federated, Multi-Cloud Applications
Various Cloud layers have to work in concert in order to manage and deploy
complex multi-cloud applications, executing sophisticated workflows for Cloud
resource deployment, activation, adjustment, interaction, and monitoring. While
there are ample solutions for managing individual Cloud aspects (e.g. network
controllers, deployment tools, and application security software), there are no
well-integrated suites for managing an entire multi-cloud environment with
multiple providers and deployment models. This paper presents the CYCLONE
architecture that integrates a number of existing solutions to create an open,
unified, holistic Cloud management platform for multi-cloud applications,
tailored to the needs of research organizations and SMEs. It discusses major
challenges in providing a network and security infrastructure for the
Intercloud and concludes with a demonstration of how the architecture is
implemented in a real-life bioinformatics use case.
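The workflow stages the abstract names (deployment, activation, adjustment, monitoring) can be sketched as a minimal multi-provider pipeline. The function, stage names, and provider names below are illustrative assumptions, not CYCLONE's actual APIs.

```python
# Toy sketch of running a multi-cloud application workflow: execute
# each lifecycle stage against each provider and collect an event log.
# Stage and provider names are illustrative only.

STAGES = ["deploy", "activate", "adjust", "monitor"]

def run_workflow(app: str, providers: list) -> list:
    """Run every stage for every provider; return the event log."""
    log = []
    for provider in providers:
        for stage in STAGES:
            log.append(f"{stage}:{app}@{provider}")
    return log

events = run_workflow("bioinfo-pipeline", ["cloud-a", "cloud-b"])
print(len(events))  # 8 events: 4 stages x 2 providers
```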