
    Internet Standardization: A Participant Analysis

    This thesis examines standards-setting in the Information and Communications Technology (ICT) industry, with special attention given to Internet standardization. Previous research suggests that standards lay the ground for compatibility, interoperability, and the interchange of data in the ICT field. Standards thus function as enablers and accelerators with both economic and technological benefits. Previous research also suggests that participating in standards development and influencing the outcome by contributing to the standardization process have become core strategic choices of many leading players. Participating in the development of a winning standard can be critical to later business success. In this thesis we therefore aim to clarify what the benchmark for success in Internet standardization is. We also compare selected organizations' standardization activities to figures measuring their success in the marketplace. The standardization achievements of the Finnish ICT cluster are also given particular attention. Our literature study elaborates on how, why, and where ICT standards are developed. The relationship between Research and Development (R&D) and ICT standardization is clarified, and we also establish motivations for participating in the standards development process. As part of this thesis we design and create a database that enables us to retrieve and process all working documents related to the Internet Engineering Task Force (IETF) standardization process. Using the database and custom tools created for this task allows us to measure and analyze several aspects of the IETF standardization process and the participants active therein. The results suggest that the Finnish ICT cluster has performed comparatively well within the IETF, that Cisco's achievements can be considered the benchmark for success in virtually all aspects of IETF standardization, and that there is a linkage between participants' success in standardization and their merits in the marketplace.
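
    As an illustration of the kind of measurement this abstract describes, the sketch below counts Internet-Draft co-authorship per organization from draft metadata. It is a minimal sketch: the field names and sample records are hypothetical, not the thesis's actual database schema.

```python
from collections import Counter

# Hypothetical draft metadata records; the thesis's database is built
# from real IETF working documents (Internet-Drafts and RFCs).
drafts = [
    {"name": "draft-example-foo-00", "author_orgs": ["Cisco", "Nokia"]},
    {"name": "draft-example-bar-01", "author_orgs": ["Ericsson"]},
    {"name": "draft-example-baz-02", "author_orgs": ["Cisco"]},
]

def authorship_by_org(records):
    """Count how many drafts each organization has co-authored."""
    counts = Counter()
    for record in records:
        # Count each organization at most once per draft.
        for org in set(record["author_orgs"]):
            counts[org] += 1
    return counts

print(authorship_by_org(drafts).most_common())
# [('Cisco', 2), ('Nokia', 1), ('Ericsson', 1)]
```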

    Engineering a semantic web trust infrastructure

    The ability to judge the trustworthiness of information is an important and challenging problem in the field of Semantic Web research. In this thesis, we take an end-to-end look at the challenges posed by trust on the Semantic Web, and present contributions in three areas: a Semantic Web identity vocabulary, a system for bootstrapping trust environments, and a framework for trust-aware information management. Typically, Semantic Web agents, which consume and produce information, are not described with sufficient information to permit those interacting with them to make good judgements of trustworthiness. A descriptive vocabulary for agent identity is required to enable effective inter-agent discourse and the growth of trust and reputation within the Semantic Web; we therefore present such a foundational identity ontology for describing web-based agents. It is anticipated that the Semantic Web will suffer from a trust network bootstrapping problem. In this thesis, we propose a novel approach which harnesses open data to bootstrap trust in new trust environments. This approach brings together public records published by a range of trusted institutions in order to encourage trust in identities within new environments. Information integrity and provenance are both critical prerequisites for well-founded judgements of information trustworthiness. We propose a modification to the RDF Named Graph data model in order to address serious representational limitations of the named graph proposal, which affect the ability to cleanly represent claims and provenance records. Next, we propose a novel graph-based approach for recording the provenance of derived information. This approach offers computational and memory savings while maintaining the ability to answer graph-level provenance questions. In addition, it allows new optimisations such as strategies to avoid needless repeat computation, and a delta-based storage strategy which avoids data duplication.
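
    To make the named-graph idea concrete, here is a minimal sketch, in plain Python rather than an RDF library, of triples grouped into named graphs with a provenance record attached to each graph. The graph names and statements are invented for illustration and do not reproduce the thesis's actual data model.

```python
# RDF-style triples grouped into named graphs, each carrying a
# provenance record so graph-level provenance questions can be answered.
dataset = {
    "http://example.org/graphs/claim-1": {
        "triples": [("ex:alice", "foaf:knows", "ex:bob")],
        "provenance": {
            "assertedBy": "ex:alice",
            "derivedFrom": [],          # source graphs, if any
        },
    },
}

def derive(dataset, name, triples, sources):
    """Record a derived graph together with the graphs it came from."""
    dataset[name] = {
        "triples": triples,
        "provenance": {"assertedBy": "ex:reasoner", "derivedFrom": sources},
    }

derive(dataset,
       "http://example.org/graphs/derived-1",
       [("ex:bob", "foaf:knows", "ex:alice")],
       ["http://example.org/graphs/claim-1"])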

    Procedures for Protocol Extensions and Variations


    Curating E-Mails: A life-cycle approach to the management and preservation of e-mail messages

    E-mail forms the backbone of communications in many modern institutions and organisations, and is a valuable type of organisational, cultural, and historical record. Successful management and preservation of valuable e-mail messages and collections is therefore vital if organisational accountability is to be achieved and historical or cultural memory retained for the future. This requires attention by all stakeholders across the entire life-cycle of the e-mail records. This instalment of the Digital Curation Manual reports on the issues involved in managing and curating e-mail messages for both current and future use. Although there is no 'one-size-fits-all' solution, this instalment outlines a generic framework for e-mail curation and preservation, provides a summary of current approaches, and addresses the technical, organisational and cultural challenges to successful e-mail management and longer-term curation.
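
    As one concrete life-cycle step, capturing preservation metadata at ingest, the sketch below walks an mbox file with Python's standard-library mailbox module and extracts the headers an archive might record. The file path is hypothetical and the header set is one plausible choice, not a prescription from the manual itself.

```python
import mailbox

# Hypothetical mbox file; the header set is an assumed, typical choice
# of preservation metadata for an archival index.
ARCHIVE = "inbox.mbox"
HEADERS = ("Message-ID", "Date", "From", "To", "Subject")

def preservation_metadata(path):
    """Yield one metadata record per message in the mbox."""
    for message in mailbox.mbox(path):
        yield {header: message.get(header, "") for header in HEADERS}

for record in preservation_metadata(ARCHIVE):
    print(record["Message-ID"], record["Date"], record["Subject"])
```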

    Optimizing complex queries with multiple relational instances

    Ph.D. (Doctor of Philosophy)

    Messenger Visual, a pictogram-based instant messaging service for individuals with cognitive disability

    Throughout history, disabled individuals have suffered from social exclusion due to the limitations posed by their condition. For instance, deaf people are not able to watch television programs because of their sensory limitation. Although this situation has improved thanks to efforts in adapting the different services (today the majority of television programs offer subtitles or simultaneous translation into sign language), the arrival of the Internet, as well as the rest of the information and communication technologies, poses new risks to the inclusion of disabled individuals. Taking into account the present digital exclusion of disabled individuals, this project presents Messenger Visual, an Instant Messaging service based on pictograms for individuals with cognitive disability. Messenger Visual is composed of two parts. On the one hand, the Instant Messaging service has been designed considering the requirements of communication based on pictograms. On the other hand, the Instant Messaging client has been designed taking into account the user interface usability requirements of individuals with cognitive disability. Finally, the project presents the methodology that we have used to evaluate Messenger Visual with a group of individuals with cognitive disability, as well as the results we have obtained. The evaluation process lasted six months, with one-hour fortnightly sessions held with two groups of individuals from Fundació El Maresme with different cognitive disability profiles. These sessions have allowed us to gain a better understanding of the user interface accessibility requirements, as well as to learn how individuals with cognitive disability communicate using pictograms.

    Intelligent Network Infrastructures: New Functional Perspectives on Leveraging Future Internet Services

    The Internet experience of the 21st century is very different from that of the early '80s. The Internet has adapted itself to become what it really is today, a very successful business platform of global scale. Like every highly successful technology, the Internet has suffered from a natural process of ossification. Over the last 30 years, the technical solutions adopted to leverage emerging applications can be divided into two categories: first, the addition of new functionalities, either by patching existing protocols or by adding new upper layers; second, accommodating traffic growth with higher-bandwidth links. Unfortunately, this approach is not suitable to provide the proper ground for a wide gamut of new applications. To be deployed, these future Internet applications require from the network layer advanced capabilities that the TCP/IP stack and its derived protocols cannot provide by design in a robust, scalable fashion. NGNs (Next Generation Networks) on top of intelligent telecommunication infrastructures are envisioned to support future Internet services. This thesis contributes three proposals towards this ambitious goal. The first proposal presents a preliminary architecture to allow NGNs to seamlessly request advanced services from layer 1 transport networks, such as QoS-guaranteed point-to-multipoint circuits. This architecture is based on virtualization techniques applied to layer 1 networks, and hides from NGNs all the complexities of interdomain provisioning. Moreover, the economic aspects involved were also considered, making the architecture attractive to carriers. The second contribution is a framework for developing DiffServ-MPLS capable networks based exclusively on open-source software and commodity PCs. The DiffServ-MPLS flexible software router was designed to allow prototyping of NGNs that use pseudo virtual circuits and assured QoS as a starting point of development. The third proposal presents a state-of-the-art routing and wavelength assignment algorithm for photonic networks. This algorithm considers physical-layer impairments to fully guarantee the requested QoS profile, even in the case of single network failures. A number of novel techniques were applied to offer lower blocking probability compared with recently proposed algorithms, without impacting setup delay time.
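
    For context, routing and wavelength assignment (RWA) must pick both a path and a single wavelength that is free on every link of that path (the wavelength-continuity constraint). Below is a minimal first-fit sketch over a hypothetical two-link topology; the thesis's actual algorithm additionally models physical-layer impairments, which this sketch deliberately omits.

```python
# Minimal first-fit RWA sketch. free[link] is the set of wavelengths
# still available on that link; a lightpath needs one wavelength free
# on every link of its route (wavelength continuity). The topology is
# hypothetical, and physical-layer impairments are omitted.
NUM_WAVELENGTHS = 4
free = {
    ("A", "B"): {0, 1, 2, 3},
    ("B", "C"): {0, 2, 3},
}

def assign_first_fit(route):
    """Return the lowest wavelength free on all links of route, else None."""
    common = set(range(NUM_WAVELENGTHS))
    for link in route:
        common &= free[link]
    if not common:
        return None  # blocked: no continuous wavelength available
    chosen = min(common)
    for link in route:
        free[link].discard(chosen)  # reserve it on every link
    return chosen

print(assign_first_fit([("A", "B"), ("B", "C")]))  # 0
print(assign_first_fit([("A", "B"), ("B", "C")]))  # 2 (1 not free on B-C)
```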

    Digital evidence bags

    This thesis analyses the traditional approach and methodology used to conduct digital forensic information capture, analysis and investigation. The predominant toolsets and utilities that are used, and the features that they provide, are reviewed. This review is used to highlight the difficulties that are encountered due to both technological advances and the methodologies employed. It is suggested that these difficulties are compounded by the archaic methods and proprietary formats that are used. An alternative framework for the capture and storage of information used in digital forensics is defined, named the 'Digital Evidence Bag' (DEB). A DEB is a universal extensible container for the storage of digital information acquired from any digital source, whose format can be adapted to meet the requirements of the particular information that is to be stored. The format definition is extensible, thereby allowing it to encompass new sources of data, cryptographic and compression algorithms, and protocols as they are developed, whilst also providing the flexibility for some degree of backwards compatibility as the format develops. The DEB framework uses terminology for its various components that is analogous to the evidence bags, tags and seals used for traditional physical evidence storage and continuity. This is crucial for ensuring that the functionality provided by each component is comprehensible to the general public, judiciary and law enforcement personnel without detracting from or obscuring the evidential information contained within. Furthermore, information can be acquired from a dynamic or more traditional static environment, and from a disparate range of digital devices. The flexibility of the DEB framework permits selective and/or intelligent acquisition methods to be employed, together with enhanced provenance and continuity audit trails. Evidential integrity is assured using accepted cryptographic techniques and algorithms. The DEB framework is implemented in a number of tool demonstrators and applied to a number of typical scenarios that illustrate the flexibility of the DEB framework and format. The DEB framework has also formed the basis of a patent application.
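
    As a rough illustration of the container idea (not the actual DEB format, whose definition is given in the thesis), the sketch below wraps acquired data in a bag with a 'tag' of provenance metadata and a SHA-256 'seal' for integrity. All field names here are hypothetical.

```python
import hashlib, json, time

def make_evidence_bag(source, data: bytes):
    """Wrap acquired data with a provenance 'tag' and an integrity 'seal'.
    Field names are hypothetical, not the actual DEB format definition."""
    return {
        "tag": {                  # analogous to a physical evidence tag
            "source": source,
            "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        },
        "payload": data.hex(),    # the acquired digital information
        "seal": hashlib.sha256(data).hexdigest(),  # tamper-evidence
    }

def verify_seal(bag):
    """Re-hash the payload and compare it with the recorded seal."""
    payload = bytes.fromhex(bag["payload"])
    return hashlib.sha256(payload).hexdigest() == bag["seal"]

bag = make_evidence_bag("usb-stick-17", b"raw image bytes")
print(json.dumps(bag["tag"]), verify_seal(bag))  # ... True
```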

    Towards Automated Network Configuration Management

    Modern networks are designed to satisfy a wide variety of competing goals related to network operation requirements such as reachability, security, performance, reliability and availability. These high-level goals are realized through a complex chain of low-level configuration commands performed on network devices. As networks become larger, more complex and more heterogeneous, human errors become the most significant threat to network operation and the main cause of network outages. In addition, the gap between high-level requirements and low-level configuration data is continuously increasing and difficult to close. Although many solutions have been introduced to reduce the complexity of configuration management, network changes, in most cases, are still performed manually via low-level command line interfaces (CLIs). The Internet Engineering Task Force (IETF) has introduced the NETwork CONFiguration (NETCONF) protocol along with its associated data-modeling language, YANG, which together significantly reduce network configuration complexity. However, NETCONF is limited to the interaction between managers and agents, and it has weak support for compliance with high-level management functionalities. We design and develop a network configuration management system called AutoConf that addresses the aforementioned problems. AutoConf is a distributed system that manages, validates, and automates the configuration of IP networks. We propose a new framework to augment the NETCONF/YANG framework. This framework includes a Configuration Semantic Model (CSM), which provides a formal representation of the domain knowledge needed to deploy a successful management system. Along with CSM, we develop a domain-specific language called Structured Configuration Language (SCL) to specify configuration tasks as well as high-level requirements. CSM/SCL together with NETCONF/YANG make a powerful management system that supports network-wide configuration. AutoConf supports two levels of verification: consistency verification and behavioral verification. We apply a set of logical formalizations to verify the consistency and dependency of configuration parameters. In behavioral verification, we present a set of formal models and algorithms based on Binary Decision Diagrams (BDDs) to capture the behaviors of forwarding control lists that are deployed in firewalls, routers, and NAT devices. We also adopt an enhanced version of the Dyna-Q algorithm to support dynamic adaptation of network configuration in response to changes that occur during network operation. This adaptation approach maintains a coherent relationship between high-level requirements and low-level device configuration. We evaluate AutoConf by running several configuration scenarios, such as interface, RIP, OSPF and MPLS configuration, and by running several simulation models to demonstrate the effectiveness and scalability of handling large-scale networks.
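
    To ground the NETCONF/YANG layer that the abstract builds on, here is a hedged sketch of pushing a single low-level change with the ncclient Python library. The device address, credentials, and interface payload are hypothetical; AutoConf's CSM/SCL layer would sit above this kind of call.

```python
# Hedged sketch: one NETCONF <edit-config> via ncclient. Host,
# credentials, and the config payload are hypothetical; a real
# deployment would validate the payload against a YANG model first.
from ncclient import manager

CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>uplink to core</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    # Merge the snippet into the device's running datastore.
    reply = m.edit_config(target="running", config=CONFIG)
    print(reply.ok)
```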

    QoS related admission control for Web services

    In military tactical networks, bandwidth is often scarce, and Web services lack a standardized approach to providing Quality of Service. In this thesis, a broker that performs access control on bandwidth is created to provide role-based admission control. The goal is to achieve high client satisfaction, with the importance of the client's role as the central admission criterion.
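
    To illustrate the broker idea, here is a minimal sketch of role-based bandwidth admission with priority preemption. The roles, priorities, and capacity figure are invented for illustration; the thesis's actual broker and policies may differ.

```python
# Minimal role-based bandwidth admission broker with preemption.
# Roles, priorities, and the capacity figure are illustrative only.
CAPACITY_KBPS = 512          # scarce tactical link
ROLE_PRIORITY = {"commander": 3, "medic": 2, "soldier": 1}

class Broker:
    def __init__(self, capacity):
        self.capacity = capacity
        self.granted = []    # (priority, client, kbps)

    def admit(self, client, role, kbps):
        """Admit if capacity allows, preempting lower-priority flows."""
        priority = ROLE_PRIORITY[role]
        used = sum(g[2] for g in self.granted)
        # Evict strictly lower-priority grants until the request fits.
        while used + kbps > self.capacity:
            victims = [g for g in self.granted if g[0] < priority]
            if not victims:
                return False      # reject: nothing lower-priority to evict
            victim = min(victims)  # lowest priority first
            self.granted.remove(victim)
            used -= victim[2]
        self.granted.append((priority, client, kbps))
        return True

broker = Broker(CAPACITY_KBPS)
print(broker.admit("c1", "soldier", 400))    # True
print(broker.admit("c2", "commander", 300))  # True (soldier preempted)
```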