10 research outputs found

    Development and Implementation of Network Protocols: A Textbook

    Get PDF
    The development and implementation of network protocols is an important part of the modern body of knowledge, necessary for interconnecting the layers and diverse technologies of any local or global network. Network protocols are based on international standards that ensure high-quality interaction between different innovative technologies and network elements. They form a seven-layer structure that addresses engineering and technical issues and requires constant updating and improvement, as well as the development of new protocols, as the rules of interaction among all components of the global network. The development and implementation of network protocols therefore demand continuous evolution and refinement in order to provide subscribers with highly reliable services and high-speed data transmission.
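The layered interaction described above can be illustrated with a toy encapsulation sketch (hypothetical header labels, not real Ethernet/IP/TCP formats): each layer prepends its own header on the way down the stack, and the receiving stack strips them in reverse order.

```python
# Toy encapsulation down a simplified three-layer stack (illustrative
# labels only, not real Ethernet/IP/TCP header formats): each layer
# prepends its own header to the payload handed down from above.
def encapsulate(payload: bytes) -> bytes:
    segment = b"TCP|" + payload     # transport layer adds its header
    packet = b"IP|" + segment       # network layer wraps the segment
    frame = b"ETH|" + packet        # link layer wraps the packet
    return frame

def decapsulate(frame: bytes) -> bytes:
    # The receiving stack strips the headers in the reverse order.
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame

frame = encapsulate(b"GET /")
print(frame)  # b'ETH|IP|TCP|GET /'
assert decapsulate(frame) == b"GET /"
```

This mirrors why the standards matter: each layer only needs to agree with its peer on its own header, independently of the layers above and below.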

    Security and Performance Verification of Distributed Authentication and Authorization Tools

    Get PDF
    Parallel distributed systems are widely used for dealing with massive data sets and high-performance computing. Securing parallel distributed systems is problematic: centralized security tools are likely to cause bottlenecks and introduce a single point of failure. In this paper, we introduce existing distributed authentication and authorization tools and evaluate their quality by verifying their security and performance. For verification, we use process calculi and mathematical modeling languages: Casper, Communicating Sequential Processes (CSP) and Failures-Divergences Refinement (FDR) are used to test for security vulnerabilities, while Petri nets and Karp-Miller trees are used to find performance issues of distributed authentication and authorization methods. Kerberos, PERMIS and Shibboleth are evaluated: Kerberos is a ticket-based distributed authentication service, PERMIS is a role- and attribute-based distributed authorization service, and Shibboleth is an integration solution for federated single sign-on authentication. We find no critical security or performance issues.
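The Petri-net side of such verification amounts to exploring the markings a net can reach. A minimal sketch, using a hypothetical toy model of a ticket-granting exchange (not the nets from the paper; Karp-Miller trees additionally handle unbounded nets, which this plain BFS does not):

```python
from collections import deque

# Toy Petri net of a ticket-granting exchange (hypothetical model, loosely
# inspired by Kerberos-style request/ticket/session phases).
# Places are indices into a marking tuple:
PLACES = ("client_idle", "request_sent", "ticket_issued", "session_open")

# transition name -> (pre, post): place index -> tokens consumed / produced
TRANSITIONS = {
    "send_request":  ({0: 1}, {1: 1}),
    "issue_ticket":  ({1: 1}, {2: 1}),
    "open_session":  ({2: 1}, {3: 1}),
    "close_session": ({3: 1}, {0: 1}),
}

def reachable(m0, limit=10_000):
    """Breadth-first search over the markings reachable from m0."""
    seen, queue = {m0}, deque([m0])
    while queue:
        m = queue.popleft()
        for pre, post in TRANSITIONS.values():
            if all(m[p] >= n for p, n in pre.items()):  # transition enabled?
                nxt = list(m)
                for p, n in pre.items():
                    nxt[p] -= n
                for p, n in post.items():
                    nxt[p] += n
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        if len(seen) > limit:
            raise RuntimeError("state space too large; the net may be unbounded")
    return seen

markings = reachable((1, 0, 0, 0))
print(len(markings))  # 4: one token circulates through the four places
assert all(sum(m) == 1 for m in markings)  # the toy net is 1-bounded (safe)
```

Boundedness checks of this kind are what flag performance issues such as unbounded queues of pending authentication requests.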

    SIP based IP-telephony network security analysis

    Get PDF
    Master's thesis in Information and Communication Technology, 2004 - Høgskolen i Agder (Agder University College), Grimstad. This thesis evaluates the SIP protocol implementation used in the Voice over IP (VoIP) solution on the fibre/DSL network of Èlla Kommunikasjon AS. The evaluation focuses on security in the telephony service and is performed from the perspective of an attacker trying to find weaknesses in the network. For each type of attempt by the malicious attacker, we examined the security level and possible solutions to flaws in the system. The conclusion of this analysis is that the VoIP service is exploitable and that serious improvements are needed to achieve a satisfactory level of security for the system.

    Secure VoIP Performance Measurement

    Get PDF
    This project presents a mechanism for instrumentation of secure VoIP calls. The experiments were run under different network conditions and security systems. VoIP services such as Google Talk, Express Talk and Skype were under test. The project allowed analysis of the voice quality of the VoIP services based on the Mean Opinion Score (MOS) values generated by Perceptual Evaluation of Speech Quality (PESQ). The audio streams produced were subject to end-to-end delay, jitter, packet loss and extra processing in the networking hardware and end devices due to Internetworking Layer or Transport Layer security implementations. The MOS values were mapped to Perceptual Evaluation of Speech Quality for wideband (PESQ-WB) scores. From these PESQ-WB scores, graphs of the mean of 10 runs and box-and-whisker plots for each parameter were drawn. Analysis of the graphs was performed in order to deduce the quality of each VoIP service. The E-model was used to predict network readiness, and the Common Vulnerability Scoring System (CVSS) was used to predict network vulnerabilities. The project also provided a mechanism to measure the throughput for each test case. The overall performance of each VoIP service was determined by PESQ-WB scores, CVSS scores and throughput. The experiment demonstrated the relationship among VoIP performance, VoIP security and VoIP service type. The experiment also suggested that, compared to an unsecure IPIP tunnel, Internetworking Layer security such as IPSec ESP or Transport Layer security such as OpenVPN TLS would improve VoIP security by reducing the vulnerabilities of the media part of the VoIP signal. Moreover, adding a security layer has little impact on VoIP voice quality.
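The E-model prediction mentioned above rests on ITU-T G.107's mapping from the overall rating factor R (which aggregates delay, loss and equipment impairments) to an estimated MOS. A minimal sketch of that mapping:

```python
def r_to_mos(r: float) -> float:
    """Map an E-model rating factor R to an estimated MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# The G.107 default narrowband parameter set yields R of about 93.2,
# which maps to a MOS of about 4.41 -- the narrowband ceiling.
print(round(r_to_mos(93.2), 2))  # 4.41
```

Each impairment (delay, loss, added security processing) lowers R, so the curve directly quantifies how much voice quality a given security layer costs.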

    A method for securing online community service: A study of selected Western Australian councils

    Get PDF
    Since the Internet was made publicly accessible, it has become increasingly popular, and its deployment has been broad and global, facilitating a range of online services such as Electronic Mail (email), news or bulletins, Internet Relay Chat (IRC) and the World Wide Web (WWW). Progressively, other online services such as telephony, video conferencing, video on demand, Interactive Television (ITV) and Geographic Information Systems (GIS) have been integrated with the Internet and become publicly available. Presently, Internet broadband communication services incorporating both wired and wireless network technologies have seen the emergence of the concept of a digital community, which has been growing and expanding rapidly around the world. The Internet and its ever-expanding online services to the wider digital community have raised the issue of the security of these services during usage. Most local councils throughout Western Australia have resorted to delivering online services such as library access, online payments and email accessibility. The provision and usage of these services have inherent security risks. Consequently, this study investigated the concept of a secure digital community in the secure provision and usage of these online services in selected local councils in Western Australia (WA). After an extensive review of the existing literature, information security frameworks were derived from the adaptation of various resources, with the OSSTMM 2.2 Section C: Internet Technology Security benchmark used as the main template. In addition, this template was enhanced into a framework model by incorporating other benchmarks such as NIST, CIS and ISSAF, as well as other sources of information. These included information-security-related books, ICT network and security websites such as CERT, CheckPoint, Cisco, GFI, Juniper, MS, NESSUS and NMAP, together with journals and personal interviews.
The proposed information security frameworks were developed to enhance the security strength of the email and online web systems, as well as to increase the level of confidence in system security within the selected local councils in WA. All the investigative studies were based upon the available data of the selected local councils and the associated analyses of the results obtained from the testing software. In addition, interpretive multiple-case-study principles were used during the investigation to fulfil the purpose of this study. The findings from this study were then abstracted into a framework and made available as a model for possible adaptation and implementation by other similarly structured councils or organisations. As a result, the study confirmed that the proposed information security frameworks have the capability and potential to improve the level of security strength. In addition, the satisfaction and confidence of council staff of the selected local councils in WA in system security would also be increased by the application of these frameworks. Although these information security frameworks may be recommended as practical and supporting tools for local councils, the findings from this study are specific to the selected local councils used in this study. Further research using other councils may be necessary for the information security frameworks to be adopted within a wider range of councils or organisations in WA or elsewhere.

    A reference model for Server Based Computing infrastructures and its application for the capacity management

    Get PDF
    A rapidly increasing worldwide need to support users with powerful IT systems leads to an equally growing demand for technologies that enable organizations to provide desktop environments and applications to their end users in an efficient and effective way. In terms of both ecological and economic aspects, the resulting requirement is to size existing hardware and software platforms as suitably as possible for present and future needs, and to allow for optimum utilization of system capacities. Protocols for accessing server resources on Microsoft Windows operating systems by means of remote presentation were first implemented around 1995. Since then, Server Based Computing (SBC), with terminal servers and later virtual desktops, has reached a technical maturity that is not inferior to the operating model of distributed execution and data storage on conventional personal computers. Accordingly, energy- and resource-saving thin clients have established themselves as an alternative to conventional desktop computers and local data processing. Their performance, however, depends significantly on the capacity of the server infrastructure located in the data center.
The present thesis takes up this subject and outlines a reference model for the capacity management of Server Based Computing infrastructures, with the intention of planning newly designed systems and developing both these and existing ones by means of an iterative process. The underlying approach builds upon methods and languages of reference modeling. Initially, a global view of a Server Based Computing infrastructure consisting of five layers is developed. From this reference model, more precise information models are derived following a methodological approach and are stated using the language elements of the Fundamental Modeling Concepts (FMC). Such a model can subsequently be used within the scope of a simulation or an analytical approach, aiming to investigate and evaluate various alternative courses of action regarding the capacity of resources already during the conception phase. The reference model and its methodology are evaluated using an exemplary scenario with different groups of users and workstation devices on the client side and several profiles of applications on the server side. This shows clearly that the model-based approach can make a valuable contribution to capacity management, without requiring the actual implementation of a new IT infrastructure by building a physical prototype and simulating workloads within it.
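Analytical capacity evaluation of the kind described above is commonly done with queueing formulas. As an illustrative sketch (not the thesis' actual FMC-based models), an Erlang-C calculation can size the number of server slots needed to keep the probability that a session request has to wait below a target:

```python
from math import factorial

def erlang_c(servers: int, offered_load: float) -> float:
    """Erlang-C: probability a request must wait in an M/M/c queue,
    given c servers and an offered load a = arrival_rate / service_rate."""
    a, c = offered_load, servers
    if a >= c:
        return 1.0  # overloaded: the queue grows without bound
    top = a ** c / factorial(c) * c / (c - a)
    return top / (sum(a ** k / factorial(k) for k in range(c)) + top)

def servers_needed(offered_load: float, max_wait_prob: float) -> int:
    """Smallest server count keeping the waiting probability under target."""
    c = max(1, int(offered_load) + 1)
    while erlang_c(c, offered_load) > max_wait_prob:
        c += 1
    return c

# Sanity check: for M/M/1 the waiting probability equals the utilisation.
assert abs(erlang_c(1, 0.5) - 0.5) < 1e-12
# Hypothetical scenario: sessions arriving at 200/hour, each occupying a
# server slot for 6 minutes on average, offer 200 * 0.1 = 20 Erlangs:
print(servers_needed(20, 0.05))
```

The same iterative refinement the thesis proposes (model, evaluate, adjust) maps onto re-running such a calculation as user groups and application profiles change.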

    Virtualisation and Thin Client : A Survey of Virtual Desktop environments

    Get PDF
    This survey examines some of the leading commercial Virtualisation and Thin Client technologies. Reference is made to a number of academic research sources and to prominent industry specialists and commentators. A basic virtualisation laboratory model is assembled to demonstrate fundamental Thin Client operations and to clarify potential problem areas.

    Global connectivity architecture of mobile personal devices

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 193-207). The Internet's architecture, designed in the days of large, stationary computers tended by technically savvy and accountable administrators, fails to meet the demands of the emerging ubiquitous computing era. Nontechnical users now routinely own multiple personal devices, many of them mobile, and need to share information securely among them using interactive, delay-sensitive applications. Unmanaged Internet Architecture (UIA) is a novel, incrementally deployable network architecture for modern personal devices, which reconsiders three architectural cornerstones: naming, routing, and transport. UIA augments the Internet's global name system with a personal name system, enabling users to build personal administrative groups easily and intuitively, to establish secure bindings between their own devices and with other users' devices, and to name their devices and their friends much like using a cell phone's address book. To connect personal devices reliably, even while mobile, behind NATs or firewalls, or connected via isolated ad hoc networks, UIA gives each device a persistent, location-independent identity, and builds an overlay routing service atop IP to resolve and route among these identities. Finally, to support today's interactive applications built using concurrent transactions and delay-sensitive media streams, UIA introduces a new structured stream transport abstraction, which solves the efficiency and responsiveness problems of TCP streams and the functionality limitations of UDP datagrams. Preliminary protocol designs and implementations demonstrate UIA's features and benefits.
A personal naming prototype supports easy and portable group management, allowing use of personal names alongside global names in unmodified Internet applications. A prototype overlay router leverages the naming layer's social network to provide efficient ad hoc connectivity in restricted but important common-case scenarios. Simulations of more general routing protocols--one inspired by distributed hash tables, one based on recent compact routing theory--explore promising generalizations to UIA's overlay routing. A library-based prototype of UIA's structured stream transport enables incremental deployment in either OS infrastructure or applications, and demonstrates the responsiveness benefits of the new transport abstraction via dynamic prioritization of interactive web downloads. Finally, an exposition and experimental evaluation of NAT traversal techniques provides insight into routing optimizations useful in UIA and elsewhere. By Bryan Alexander Ford. Ph.D.

    Efficient Asymmetric IPsec for Secure iSCSI

    No full text