Monkeys, typewriters and networks: the internet in the light of the theory of accidental excellence
Viewed in the light of the theory of accidental excellence, there is much to suggest that the success of the Internet and its various protocols derives from a communications technology accident, or better, a series of accidents. In the early 1990s, many experts still saw the Internet as an academic toy that would soon vanish into thin air again. The Internet probably gained its reputation as an academic toy largely because it violated the basic principles of traditional communications networks. The quarrel about paradigms that erupted in the 1970s between the telephony world and the newly emerging Internet community was not, however, only about transmission technology doctrines. It was also about the question, still unresolved today, of who actually governs the flow of information: the operators or the users of the network? The paper first describes various network architectures in relation to the communication cultures expressed in their make-up. It then examines the creative environment found at the nodes of the network, whose coincidental importance for the Internet boom must not be forgotten. Finally, the example of Usenet is taken to look at the kind of regulatory practices that have emerged in the communications services provided within the framework of a decentralised network architecture.
Internet... the final frontier: an ethnographic account: exploring the cultural space of the Net from the inside
The research project 'The Internet as a space for interaction', which completed its mission in Autumn 1998, studied the constitutive features of network culture and network organisation. Special emphasis was given to the dynamic interplay of technical and social conventions in both the Net's organisation and its change. The ethnographic perspective chosen studied the Internet from the inside. Research concentrated on three fields of study: the hegemonic operating technology of net nodes (UNIX), the network's basic transmission technology (the Internet Protocol, IP), and a popular communication service (Usenet). The project's final report includes the results of the three branches explored. Drawing upon the developments in these three fields, it is shown that the changes that come about on the Net are neither anarchic nor arbitrary. Instead, the decentrally organised Internet is based upon technically and organisationally distributed forms of coordination within which individual preferences collectively attain the power of developing into definitive standards.
Virtual integration platform for computational fluid dynamics
Computational Fluid Dynamics (CFD) tools used in the shipbuilding industry involve multiple disciplines, such as resistance, manoeuvring, and cavitation. Traditionally, the analysis was performed separately and sequentially in each discipline, which often resulted in conflicting and inconsistent hydrodynamic predictions. In an effort to solve such problems for future CFD computations, a Virtual Integration Platform (VIP) has been developed at the University of Strathclyde within two EU FP6 projects, VIRTUE and SAFEDOR. The VIP provides a holistic collaborative environment for designers, with features such as Project/Process Management, Distributed Tools Integration, Global Optimisation, Version Management, and Knowledge Management. These features enhance collaboration among customers, ship design companies, shipyards, and consultancies, not least because they bring together the best expertise and resources from around the world. The platform has been tested in seven European ship design companies, including consultancies. Its main functionalities, along with recent advances, are presented in this paper with two industrial applications.
CamFlow: Managed Data-sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, while many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen as of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage, as well as flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when a government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within 'Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing.
These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows; this becomes a crucial issue once data has passed out of its owner's control into cloud-hosted applications and cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as for safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows.
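The flow-control idea the abstract describes can be illustrated with a minimal sketch of an IFC label check. It assumes the common two-component label model (secrecy and integrity tag sets) often used in IFC systems; the tag names and the dictionary representation are illustrative, not CamFlow's actual API.

```python
# Minimal sketch of an Information Flow Control (IFC) check, assuming a
# label model with secrecy and integrity tag sets (names hypothetical).

def can_flow(source, target):
    """A flow from source to target is safe only if the target holds at
    least the source's secrecy tags (no secrets leak) and the source
    holds at least the target's integrity tags (no untrusted input)."""
    return (source["secrecy"] <= target["secrecy"] and
            target["integrity"] <= source["integrity"])

app_a  = {"secrecy": {"medical"},            "integrity": set()}
shared = {"secrecy": {"medical", "billing"}, "integrity": set()}
public = {"secrecy": set(),                  "integrity": set()}

print(can_flow(app_a, shared))  # True: 'medical' tag is preserved
print(can_flow(app_a, public))  # False: the secrecy tag would be lost
```

Because every attempted flow passes through one check like this, logging each decision yields exactly the kind of system-wide audit trail the abstract mentions.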
Multi-Agent System Control and Coordination of an Electrical Network
Multi-Agent Systems (MAS) have the potential to solve Active Network Management (ANM) problems arising from an increase in Distributed Energy Resources (DER). The aim of this research is to integrate a MAS into an electrical network emulation for the purpose of implementing ANM. Initially, an overview of agents and MAS, and of how their characteristics can be used to control and coordinate an electrical network, is presented. An electrical network comprising a real-time emulated transmission network connected to a live DER network controlled and coordinated by a MAS is then constructed. The MAS is then used to solve a simple ANM problem: the control and coordination of an electrical network in order to maintain frequency within operational limits. The research concludes that a MAS is successful in solving this ANM problem and that the developed MAS can in future be applied to other ANM problems. © 2012 IEEE
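The frequency-maintenance task described above can be sketched as a single DER agent applying a proportional (droop-style) response; the class name, gain, and limits below are illustrative assumptions, not the paper's actual agent design.

```python
# Hypothetical sketch of one DER agent's control action: raise output
# when frequency sags below nominal, lower it when frequency rises.
NOMINAL_HZ = 50.0
LIMITS_HZ = (49.8, 50.2)  # assumed operational limits

class DERAgent:
    def __init__(self, output_kw, droop_kw_per_hz=100.0):
        self.output_kw = output_kw
        self.droop = droop_kw_per_hz  # proportional gain (illustrative)

    def act(self, measured_hz):
        """Adjust output in proportion to the frequency error."""
        error = NOMINAL_HZ - measured_hz
        self.output_kw = max(0.0, self.output_kw + self.droop * error)
        return self.output_kw

agent = DERAgent(output_kw=500.0)
agent.act(49.9)  # frequency low: output rises by ~10 kW
agent.act(50.1)  # frequency high: output falls back by ~10 kW
```

In a MAS, many such agents would run concurrently and coordinate via messages (e.g. a contract-net or auction protocol) rather than each reacting in isolation; the local control law is the simplest piece of that picture.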
Distributed Mathematical Model Simulation on a Parallel Architecture
The aim of this article is to discuss the design of distributed mathematical models and a suitable parallel computer architecture. The paper summarises the author's experience with the mathematical modelling of the decomposed information systems of a simulator. The conclusions are based on the theory of the design of computer control systems. The author describes the computers that make up the distributed computer system of a flight simulator. The timing precision of the mathematical model of the simulator system's speed is modelled with descriptive equations. The quality of the models depends on the architecture of the computer systems. Some functions of other sections of POSIX are also analysed, including semaphores and scheduling functions. An important part of this article is the implementation of the aircraft speed computation on a multicore processor architecture.
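The semaphore coordination the abstract mentions can be illustrated with a small sketch. It uses Python's `threading.Semaphore` as a stand-in for the POSIX `sem_wait`/`sem_post` pair; the worker function and step values are hypothetical, not taken from the paper's simulator.

```python
# Illustrative producer/consumer coordination: model threads signal a
# counting semaphore as each simulation step completes, and the main
# thread blocks until all steps are available (sem_wait/sem_post analogue).
import threading

results = []
lock = threading.Lock()
ready = threading.Semaphore(0)   # counts completed simulation steps

def model_step(step):
    value = step * step          # stand-in for a flight-model computation
    with lock:
        results.append(value)
    ready.release()              # sem_post: one more step is available

threads = [threading.Thread(target=model_step, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for _ in range(4):
    ready.acquire()              # sem_wait: block until a step completes
for t in threads:
    t.join()
print(sorted(results))           # [0, 1, 4, 9]
```

In a real-time simulator the same pattern, combined with a priority-based scheduling policy, bounds how long the consuming node can wait on a model node, which is the timing-precision concern the paper analyses.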