Quality of Service Controlled Multimedia Transport Protocol
PhD. This research looks at the design of an open transport protocol that supports a range of services, including multimedia, over low data-rate networks. Low data-rate multimedia applications require a system that provides quality of service (QoS) assurance and flexibility. One promising field is content-based coding. Content-based systems use an array of protocols to select the optimum set of coding algorithms. A content-based transport protocol integrates a content-based application with a transmission network.
General transport protocols form a bottleneck in low data-rate multimedia communication by limiting throughput or by not maintaining timing requirements. This work presents an original model of a transport protocol that eliminates the bottleneck by introducing a flexible yet efficient algorithm that uses an open approach to flexibility and a holistic architecture to promote QoS. The flexibility and transparency come in the form of a fixed syntax that provides a set of transport protocol semantics. The media QoS is maintained by defining a generic descriptor. Overall, the structure of the protocol is based on a single adaptable algorithm that supports application independence, network independence and quality of service.
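The generic media QoS descriptor mentioned above can be pictured as a small record of timing, throughput and loss bounds against which an offered network service is checked. The following is a hypothetical sketch only; the field names and the admission check are illustrative and are not the thesis's actual fixed syntax.

```python
# Hypothetical sketch of a generic media QoS descriptor; names are
# illustrative, not the thesis's protocol syntax.
from dataclasses import dataclass

@dataclass(frozen=True)
class QoSDescriptor:
    max_delay_ms: float    # end-to-end timing bound
    min_rate_kbps: float   # minimum sustained throughput
    max_loss: float        # tolerable loss fraction (0.0-1.0)

    def satisfied_by(self, delay_ms: float, rate_kbps: float, loss: float) -> bool:
        # Does an offered network service meet the media's requirements?
        return (delay_ms <= self.max_delay_ms
                and rate_kbps >= self.min_rate_kbps
                and loss <= self.max_loss)

# Example: a low-rate audio stream tolerating 200 ms delay and 1% loss.
audio = QoSDescriptor(max_delay_ms=200.0, min_rate_kbps=64.0, max_loss=0.01)
```

A descriptor of this shape lets a single adaptable multiplexing algorithm stay application- and network-independent: the algorithm only consults the bounds, not the media type.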
The transport protocol was evaluated through a set of assessments: off-line; off-line for a specific application; and on-line for a specific application. Application contexts used MPEG-4 test material, where the on-line assessment used a modified MPEG-4 player. The performance of the QoS controlled transport protocol is often better than other schemes when appropriate QoS controlled management algorithms are selected. This is shown first for an off-line assessment where the performance is compared between the QoS controlled multiplexer, an emulated MPEG-4 FlexMux multiplexer scheme, and the target requirements. The performance is also shown to be better in a real environment when the QoS controlled multiplexer is compared with the real MPEG-4 FlexMux scheme.
A Tool to Check Status of All Replicas in the FreeIPA Infrastructure
This master's thesis deals with the possibilities of checking the status of all replicas in a FreeIPA infrastructure. At the beginning of the thesis, important terms such as FreeIPA, FreeIPA infrastructure and replica are explained. A FreeIPA server is composed of several components, which are described in more detail. The tool designed in this thesis uses SNMP to track the status of services running on a FreeIPA server. The two main parts of the tool are the SNMP agent's configuration and the user interface.
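Tracking FreeIPA service status over SNMP can be done with a standard Net-SNMP agent, whose `proc` directives monitor named processes and expose failures through the UCD-SNMP-MIB process table. The abstract does not publish the thesis's actual configuration; the fragment below is one plausible snmpd.conf sketch, and the service process names listed are those of a typical FreeIPA server deployment.

```
# snmpd.conf fragment (illustrative): monitor core FreeIPA services.
# Each "proc" line makes snmpd flag an error in UCD-SNMP-MIB::prTable
# when no process with that name is running.
proc ns-slapd        # 389 Directory Server
proc krb5kdc         # Kerberos KDC
proc kadmind         # Kerberos admin server
proc httpd           # Web UI / API
proc named           # integrated DNS (if deployed)

rocommunity public 127.0.0.1   # read-only access for the monitoring UI
```

A user interface can then poll each replica's agent (e.g. with `snmpwalk` on the prTable) and aggregate the per-service status across the whole infrastructure.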
Research into the design of distributed directory services
Distributed, computer based communication is becoming established within many working environments. Furthermore, the near future is likely to see an increase in the scale, complexity and usage of telecommunications services and distributed applications. As a result, there is a critical need for a global Directory service to store and manage communication information and therefore support the emerging world-wide telecommunications environment.
This thesis describes research into the design of distributed Directory services. It addresses a number of Directory issues ranging from the abstract structure of information to the concrete implementation of a prototype system. In particular, it examines a number of management related issues concerning the management of communication information and the management of the Directory service itself.
The following work develops models describing different aspects of Directory services. These include data access control and data integrity control models concerning the abstract structure and management of information as well as knowledge management, distributed operation and replication models concerning the realisation of the Directory as a distributed system.
In order to clarify the relationships between these models, a layered directory architecture is proposed. This architecture provides a framework for the discussion of directory issues and defines the overall structure of this thesis.
This thesis also describes the implementation of a prototype Directory service, supported by software tools typical of those currently available within many environments. It should be noted that, although this thesis emphasises the design of abstract directory models, development of the prototype consumed a large amount of time and effort and prototyping activities accounted for a substantial portion of this research.
Finally, this thesis reaches a number of conclusions which are applied to the emerging ISO/CCITT X.500 standard for Directory services, resulting in possible input for the 1988-92 study period.
Standards as interdependent artifacts: the case of the Internet
Thesis (Ph.D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2008. Includes bibliographical references. This thesis has explored a new idea: viewing standards as interdependent artifacts and studying them with network analysis tools. Using the set of Internet standards as an example, the research of this thesis includes the citation network, the author affiliation network, and the co-author network of the Internet standards over the period 1989 to 2004. The major network analysis tools used include cohesive subgroup decomposition (using the algorithm by Newman and Girvan), regular equivalence class decomposition (using the REGE algorithm and the method developed in this thesis), nodal prestige and acquaintance (both calculated with Kleinberg's technique), and some social network analysis tools. Qualitative analyses of the historical and technical context of the standards, as well as statistical analyses of various kinds, are also used in this research. A major finding of this thesis is that, for the understanding of the Internet, it is beneficial to consider its standards as interdependent artifacts. Because the basic mission of the Internet (i.e. to be an interoperable system that enables various services and applications) is enabled not by one or a few, but by a great number of standards developed upon each other, studying the standards only as stand-alone specifications cannot really produce meaningful understanding of a workable system. Therefore, the general approach and methodologies introduced in this thesis, which we label a systems approach, are a necessary addition to the existing approaches. A key finding of this thesis is that the citation network of the Internet standards can be decomposed into functionally coherent subgroups by using the Newman-Girvan algorithm. This result shows that the (normative) citations among the standards can meaningfully be used to help us better manage and monitor the standards system.
The results in this thesis indicate that organizing the development efforts of the Internet standards into (now) 121 Working Groups was done in a manner reasonably consistent with achieving a modular (and thus more evolvable) standards system. A second decomposition of the standards network was achieved by employing the REGE algorithm together with a new method developed in this thesis (see the Appendix) for identifying regular equivalence classes. Five meaningful subgroups of the Internet standards were identified, and each of them occupies a specific position and plays a specific role in the network. The five positions are reflected in the names we have assigned to them: the Foundations, the Established, the Transients, the Newcomers, and the Stand-alones. The life cycle among these positions was uncovered and is one of the insights that the systems approach gives relative to the evolution of the overall standards system. Another insight concerning the evolution of the standards system is the development of a predictive model for the promotion of standards to a new status (i.e. Proposed, Draft and Internet Standards as the three ascending statuses). This model also has practical potential for managers of standards-setting organizations and for firms (and individuals) interested in participating efficiently in standards-setting processes. The model's prediction is based on assessing the implicit social influence of the standards (based upon the social network metric of betweenness centrality of the standards' authors) and the apparent importance of the standard to the network (based upon calculating the standard's prestige from the citation network). A deeper understanding of the factors that go into this model was also developed through the analysis of the factors that can predict increased prestige over time for a standard.
The overall systems approach and the tools developed and demonstrated in this thesis for the study of the Internet standards can be applied to other standards systems. Application (and extension) to the World Wide Web, the electric power system, mobile communication, and others would, we believe, lead to important improvements in our practical and scholarly understanding of these systems. by Mo-Han Hsieh. Ph.D.
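The Newman-Girvan decomposition used above works by repeatedly removing the edge with the highest betweenness (the edge lying on the most shortest paths) until the network splits into communities. The following is a minimal, brute-force sketch of that idea on a toy citation graph; the RFC labels and edges are purely illustrative, and the thesis applied the algorithm to the full Internet-standards citation network, not this miniature.

```python
# Minimal brute-force Girvan-Newman sketch: remove the highest-betweenness
# edge until the graph splits, then report the resulting communities.
from collections import defaultdict, deque
from itertools import combinations

def shortest_paths(graph, src, dst):
    """Enumerate all shortest simple paths from src to dst via BFS."""
    paths, best = [], None
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        node = path[-1]
        if node == dst:
            best = len(path)
            paths.append(path)
            continue
        for nb in graph[node]:
            if nb not in path:
                queue.append(path + [nb])
    return paths

def edge_betweenness(graph):
    """Credit each edge 1/(number of shortest paths) per node pair."""
    score = defaultdict(float)
    for s, t in combinations(list(graph), 2):
        paths = shortest_paths(graph, s, t)
        for path in paths:
            for u, v in zip(path, path[1:]):
                score[frozenset((u, v))] += 1 / len(paths)
    return score

def components(graph):
    """Connected components by depth-first search."""
    seen, comps = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(graph[n])
        seen |= comp
        comps.append(comp)
    return comps

def girvan_newman_split(edges):
    """Remove highest-betweenness edges until the graph first splits."""
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)
        graph[v].add(u)
    while len(components(graph)) == 1:
        bw = edge_betweenness(graph)
        u, v = max(bw, key=bw.get)   # the "bridge" edge goes first
        graph[u].discard(v)
        graph[v].discard(u)
    return components(graph)
```

On a toy graph of two citation clusters joined by a single cross-citation, the bridge edge has the highest betweenness and is removed first, yielding the two clusters as communities.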
An examination and confirmation of a macro theory of conversations through a realization of the protologic Lp by microscopic simulation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Conversation Theory is a theory of interaction. From interaction (the theory asserts) arise all individuals and all concepts. Interaction, if it is to allow for evolution, must perforce contain conflict and, if concepts and individuals are to endure, resolution of conflict. Conversation Theory as developed by Pask led to the protologic called Lp, which describes the interaction of conceptual entities. Lp contains injunctions as to how entities can and may interact, including how they may conflict and how their conflict may be resolved. Unlike existing software implementations based on Conversation Theory, Lp in its pure form is a logic of process as well as of coherence and distinction. The hypothesis is that a low-level simulation of Lp, one at an internal and microscopic level in which topics are influenced by "forces" exerted by the topology of the conceptual space, would, in its activation as a dynamic process of appropriate dimension, produce as a result (and hence be a confirmation of) the macroscopically observed behaviour of the system manifest as conflict and resolution of conflict. Without this confirmation, the relationships between Conversation Theory and Lp remain only proposed; with it, their mutual consistency and validity as a model of cognition are affirmed. The background of Conversation Theory and Lp necessary to support the thesis is presented along with a comparison of other software approaches to related problems. A description of THOUGHTSTICKER, a current embodiment of Lp at the macro level, provides a detailed sense of the Lp operations. Then a computer program (developed to provide a proof by demonstration of the thesis) is described, in which a microscopic simulation of Lp processes confirms the macroscopic behaviour predicted by Conversation Theory.
Conversation Theory thereby gains support for its use as a valid observer's language for every-day experience, owing to this confirmation, and for its protologic as a basis for psychological phenomena in the interaction of conceptual entities of mind.
Delay tolerant network for Navy scenarios: quality-based approach
Master's in Electronics and Telecommunications Engineering. Navy operations involve several participants that work together towards common objectives, usually under challenging communication conditions. There are natural constraints imposed by the operational environment, e.g. hilly terrain. There are also artificial constraints created by enemy elements, which force conditions that affect the navy operation (or other military forces), e.g. intentional jamming. The military often uses proprietary devices for communication. Despite the effectiveness of these devices, they are expensive and usually offer a limited range of services. However, recent technological advances have allowed the proliferation of mobile devices with wireless communication capabilities and the ability to easily add new features, but these devices are still not prepared for military networks in terms of communication. Thus, this dissertation proposes to use Delay Tolerant Networks (DTNs) with a new routing protocol, Quality-PRoPHET (Q-PRoPHET), able to measure the quality of the wireless links and route the information over the connections with the best quality, where the probability of successful transmission is higher. Q-PRoPHET uses a quality function to evaluate the quality of the connections and a transitive property to route through multiple hops. This algorithm was implemented in IBR-DTN and evaluated in three scenarios that emulate situations observed during navy tactical operations. Two of these scenarios were tested inside a building and the last one was tested in an external environment using real mobility of the nodes.
The obtained results show that Q-PRoPHET performs better than PRoPHET in terms of delivery ratio, end-to-end delay and packet transmission, which are critical parameters for communication in navy operations.
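Q-PRoPHET builds on PRoPHET's delivery-predictability bookkeeping (RFC 6693): a node raises its predictability for a peer on each encounter, learns about further destinations transitively through that peer, and ages all values between encounters. The sketch below shows only this baseline PRoPHET mechanism; the quality function that Q-PRoPHET adds on top of it is the thesis's own contribution and is not reproduced here. For brevity, the update is shown for one side of an encounter only (the full protocol updates both peers).

```python
# Minimal sketch of PRoPHET delivery-predictability updates (RFC 6693).
# Q-PRoPHET additionally weights routing decisions by link quality,
# which is not modelled here.
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98   # typical constants from RFC 6693

class ProphetNode:
    def __init__(self, name: str):
        self.name = name
        self.pred = {}   # delivery predictability per destination node

    def encounter(self, other: "ProphetNode") -> None:
        # Direct update: meeting a node raises its predictability.
        p = self.pred.get(other.name, 0.0)
        self.pred[other.name] = p + (1 - p) * P_INIT
        # Transitive update: learn about destinations the peer can reach.
        p_ab = self.pred[other.name]
        for dest, p_bc in other.pred.items():
            if dest == self.name:
                continue
            self.pred[dest] = max(self.pred.get(dest, 0.0), p_ab * p_bc * BETA)

    def age(self, intervals: int = 1) -> None:
        # Predictabilities decay while nodes do not meet.
        for dest in self.pred:
            self.pred[dest] *= GAMMA ** intervals
```

A message is then forwarded to a neighbour only if that neighbour's predictability for the destination exceeds the carrier's own; Q-PRoPHET additionally requires the link to the neighbour to be of sufficient quality.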