Network security mechanisms and implementations for the next generation reliable fast data transfer protocol - UDT
University of Technology, Sydney. Faculty of Engineering and Information Technology.
TCP protocol variants (such as FAST, BiC, XCP, Scalable TCP and HighSpeed TCP) have
demonstrated improved performance in simulation and in several limited
network experiments. However, practical use of these protocols remains very
limited because of implementation and installation difficulties. Users who
need to transfer bulk data (e.g., in Cloud/Grid computing) usually turn to
application-level solutions, where these variants do not fare well. Among the
protocols considered at the application level are User Datagram Protocol (UDP)-based
protocols, such as UDT (UDP-based Data Transport Protocol). UDT is one of the
most recently developed transport protocols with configurable congestion control
algorithms. It was developed to support next-generation high-speed networks,
including wide-area optical networks. It is considered a state-of-the-art protocol,
addressing infrastructure requirements for transmitting data in high-speed
networks. Its development, however, creates new vulnerabilities because, like
many other protocols, it relies solely on the existing security mechanisms of
current protocols such as the Transmission Control Protocol (TCP) and UDP.
Certainly, both UDT and the decades-old TCP/UDP lack a well-thought-out
security architecture that addresses problems in today’s networks. In this
dissertation, we focus on investigating UDT security issues and offer important
contributions to the field of network security. The choice of UDT is significant for
several reasons. As a newly designed next-generation protocol, UDT is considered
one of the most promising and fastest protocols that operate on top of
UDP. It is a reliable, UDP-based, application-level data-transport
protocol intended for data-intensive applications distributed over wide-area
high-speed networks. It transfers data within a highly configurable framework and
can accommodate various congestion control algorithms. Its proven success at
transferring terabytes of astronomical data across long distances is
a testament to its significant commercial promise. In this work, our objective is to
examine a range of security methods used on existing mature protocols such as
TCP and UDP and evaluate their viability for UDT. We highlight the security
limitations of UDT and determine the threshold of feasible security schemes
within the constraints under which UDT was designed and developed.
Subsequently, we provide ways of securing applications and traffic that use the
UDT protocol, and offer recommendations for securing UDT. We create security
mechanisms tailored for UDT and propose a new security architecture that can
assist network designers, security investigators, and users who want to
incorporate security when implementing UDT across wide area networks.
We then conduct practical experiments on UDT using our security mechanisms
and explore the use of other existing security mechanisms used on TCP/UDP for
UDT. To analyse these security mechanisms, we carry out a formal proof of
correctness using Protocol Composition Logic (PCL), which helps us determine
their applicability. This approach is modular, comprising a separate proof
for each protocol section and providing insight into the network environment in
which each section can be reliably employed. Moreover, the proof holds for a
variety of failure-recovery strategies and other implementation and configuration
options. We derive our technique from the published PCL analyses of TLS and
Kerberos. Our work remains novel for UDT, however, particularly
the newly developed mechanisms UDT-AO, UDT-DTLS and UDT-Kerberos
(GSS-API), designed specifically for UDT, which together form our proposed UDT
security architecture.
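The keyed-authentication idea behind a TCP-AO-style mechanism can be sketched as per-datagram message authentication. The framing, names and tag length below are illustrative assumptions, not the dissertation's actual UDT-AO design:

```python
import hmac
import hashlib
import os

# Hypothetical sketch: append a truncated HMAC tag to each UDT-like datagram,
# covering the sequence number and payload, in the spirit of TCP-AO.

KEY = os.urandom(32)          # shared traffic key (established out of band)
TAG_LEN = 16                  # truncated HMAC-SHA256 tag length in bytes

def protect(payload: bytes, seq: int) -> bytes:
    """Build header || payload || tag, with the tag over header and payload."""
    header = seq.to_bytes(4, "big")
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return header + payload + tag

def verify(packet: bytes):
    """Return the payload if the tag checks out, else None (drop the packet)."""
    header, payload, tag = packet[:4], packet[4:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload if hmac.compare_digest(tag, expected) else None

pkt = protect(b"file-chunk-0001", seq=1)
assert verify(pkt) == b"file-chunk-0001"
# Flipping one bit of the tag makes verification fail deterministically.
assert verify(pkt[:-1] + bytes([pkt[-1] ^ 1])) is None
```

Binding the tag to the sequence number also gives basic replay resistance, which is one reason authentication options cover protocol headers rather than payload alone.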
We further analyse this architecture using rewrite systems and automata. We
outline and apply a symbolic-analysis approach to verify the proposed
architecture effectively. This approach allows dataflow replication in the
implementation of selected mechanisms integrated into the architecture. The
approach is effective because the properties of the rewrite systems let us
represent specific flows within the architecture, yielding a theoretically
sound and reliable method of analysis. We introduce abstract representations of
the components that compose the architecture and conduct our investigation
through structural, semantic and query analyses.
The result of this work, the first of its kind in the literature, is a robust
theoretical and practical representation of a security architecture for UDT,
one that is also viable for other high-speed network protocols.
Modeling of Transport and Logistics Schemes for Freight Transportation under Global Risks
Because of the growing number of factors that determine the efficiency of transport flows, methods of building mathematical models based on general laws have proved ineffective. A promising alternative is therefore experimental identification methods based on formalising observation results and analysing the arrival of new information about the changing situation, supported by new digital technologies.
The article shows that in the near future road connections, together with water transport, will be of key importance, and therefore the task of mathematically supporting the management and preservation of traffic flows under global risks will remain relevant. The aim of this work is to model the preservation of traffic flows under global risks.
A solution is proposed to the problem of maintaining the dynamics of traffic flows disrupted by pandemic, military actions and extreme situations. Based on graph theory and the Ford-Fulkerson and Dinitz algorithms, a modified algorithm for determining the structure of transportation was developed. A feature of the algorithm is the synchronisation of the capacity of transport flows with the moments when transport restrictions are lifted and introduced. The novelty of the proposed algorithm lies in the possibility of adjusting transport routes, and in synchronising the methodology for determining the throughput capacity of the branches carrying transport flows with the moments when restrictions are introduced and lifted due to unforeseen situations and global risks. The modified algorithm for determining traffic flows under unforeseen situations and global risks, based on the Ford-Fulkerson and Dinitz maximum-flow algorithms, minimises carriers' losses and maximises traffic flow. Implementation of the algorithm ensures maximum traffic flow under extreme conditions and global risks.
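The core computation underlying such an approach, recomputing a maximum flow each time restrictions are introduced or lifted, can be sketched with a minimal Edmonds-Karp (BFS-based Ford-Fulkerson) routine. The toy network and capacities below are hypothetical, not taken from the article:

```python
from collections import deque

def max_flow(cap, s, t):
    """cap: dict[u][v] -> capacity. Returns the maximum s-t flow value."""
    flow = 0
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck along the path, then augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= b
            res[v][u] += b
        flow += b

network = {"s": {"a": 10, "b": 5}, "a": {"t": 8}, "b": {"t": 7}, "t": {}}
print(max_flow(network, "s", "t"))   # 13 (8 via branch a, 5 via branch b)
network["a"]["t"] = 3                # a restriction lowers one branch capacity
print(max_flow(network, "s", "t"))   # 8
```

Re-running the solver after each capacity change is the simplest way to keep routes synchronised with the lifting and introduction of restrictions; incremental variants avoid recomputing from scratch.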
Digital Image Scrambling by Decomposition and Recombination with Logistic Mapping
Some digital images require privacy and confidentiality, such as medical images, remote medical diagnosis images, secret images sent over internet communication, or secret military images. One way to secure the information in a digital image is scrambling. This study scrambles digital image pixel values by converting each pixel value from the decimal number system to base four (quaternary), decomposing the four quaternary digits, permuting the four digit positions according to random numbers generated by the logistic mapping algorithm, and then recombining the scrambled digits to produce a new pixel value. Logistic mapping is a random number generator capable of producing a random number sequence based on a key value µ (3.569945 < µ < 4) and an initial value x0 (0 < x0 < 1). The result of this study is a method for scrambling digital images through decomposition and recombination of pixel values based on random values generated by the logistic mapping algorithm. Test results show that key pair 1 (µ1, x1) has the highest sensitivity in scrambling the image, followed by key pair 2 (µ2, x2), key pair 3 (µ3, x3) and key pair 4 (µ4, x4).
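The per-pixel decompose-permute-recombine step can be sketched as follows. Parameter names follow the abstract (µ, x0); the way the chaotic values are turned into a permutation is an illustrative assumption, not the paper's exact scheme:

```python
# Hypothetical sketch: an 8-bit pixel is split into four base-4 (quaternary)
# digits, the digit positions are permuted using values from a logistic map,
# and the digits are recombined into a new pixel value.

def logistic_sequence(mu, x0, n):
    """Generate n values of the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1 - x)
        xs.append(x)
    return xs

def scramble_pixel(pixel, keys):
    """Permute the four quaternary digits of an 8-bit pixel value."""
    digits = [(pixel >> (2 * i)) & 0b11 for i in range(4)]    # decompose
    order = sorted(range(4), key=lambda i: keys[i])           # chaotic ranking
    shuffled = [digits[i] for i in order]                     # permute
    return sum(d << (2 * i) for i, d in enumerate(shuffled))  # recombine

def unscramble_pixel(pixel, keys):
    """Invert the permutation, restoring the original pixel value."""
    digits = [(pixel >> (2 * i)) & 0b11 for i in range(4)]
    order = sorted(range(4), key=lambda i: keys[i])
    restored = [0] * 4
    for pos, i in enumerate(order):
        restored[i] = digits[pos]
    return sum(d << (2 * i) for i, d in enumerate(restored))

keys = logistic_sequence(mu=3.99, x0=0.7, n=4)   # key pair (mu, x0)
p = 0b11010010                                   # sample pixel value 210
assert unscramble_pixel(scramble_pixel(p, keys), keys) == p
```

Because the logistic map is chaotic in this parameter range, a tiny change in µ or x0 produces a different digit ordering, which is the key sensitivity the test results describe.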
A comparison of techniques to detect similarities in cloud virtual machines
Scalability in monitoring and management of cloud data centres may be improved through the clustering of virtual machines (VMs) exhibiting similar behaviour. However, available solutions for automatic VM clustering present some important drawbacks that hinder their applicability to real cloud scenarios. For example, existing solutions show a clear trade-off between the accuracy of the VM clustering and the computational cost of the automatic process; moreover, their performance depends strongly on technique-specific parameters. To overcome these issues, we propose a novel approach for VM clustering that uses Mixtures of Gaussians (MoGs) together with the Kullback-Leibler divergence to model similarity between VMs. Furthermore, we provide a thorough experimental evaluation of our proposal and of existing techniques to identify the most suitable solution for different workload scenarios.
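The similarity measure this approach builds on can be illustrated with the closed-form Kullback-Leibler divergence between two univariate Gaussians. (MoG-to-MoG divergence has no closed form and is typically approximated, e.g. by matching components; the function names and values below are hypothetical, not the paper's implementation.)

```python
import math

def kl_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) )."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

def symmetric_kl(p, q):
    """Symmetrised divergence, usable as a clustering distance."""
    return kl_gaussian(*p, *q) + kl_gaussian(*q, *p)

# Identical distributions have zero divergence; it grows with separation,
# so VMs whose resource-usage distributions overlap end up close together.
assert kl_gaussian(0.0, 1.0, 0.0, 1.0) == 0.0
assert kl_gaussian(0.0, 1.0, 3.0, 1.0) > kl_gaussian(0.0, 1.0, 1.0, 1.0)
```

A standard distance-based clustering algorithm (e.g. agglomerative clustering) can then run on the pairwise `symmetric_kl` matrix.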
Modern approaches to modeling user requirements on resource and task allocation in hierarchical computational grids
Peer Reviewed. Postprint (published version).
Business Model Innovation to Support Smart Manufacturing
In today’s fast-changing and hyper-competitive business environments such as the automotive industry, Business Model Innovation (BMI) has emerged as a promising approach to achieving competitive advantage. At the same time, however, BMI entails high levels of uncertainty and financial risk. In order to reduce the cost and risk involved, product and process innovation as well as manufacturing, and particularly smart manufacturing, have become increasingly open and collaborative in the recent past. The aim of this paper is to investigate the role of open and collaborative innovation practices in BMI as a basis for competitive manufacturing ecosystems, and to provide a comprehensive review of the available literature in this field. For this purpose, a systematic analysis of literature at the intersection of BMI and Open Innovation has been performed. Furthermore, the role of supply chain partners (suppliers, customers and research institutions) in open BMI processes within manufacturing ecosystems has been investigated.
Data Replication Strategies with Performance Objective in Data Grid Systems: A Survey
Replication for performance constitutes an important issue in large-scale data management systems. In this context, a significant number of replication strategies have been proposed for data grid systems. Some works classify these strategies as static vs. dynamic, centralised vs. decentralised, or client- vs. server-initiated. Very few works deal with a replication-strategy classification based on the role these strategies play when building a replica management system. In this paper, we propose a new classification of replication strategies based on their objective functions. Each replication strategy is also discussed in relation to the data grid topology for which it was proposed. We point out the impact of the topology on replication performance, although most of these strategies have been proposed for a hierarchical grid topology. We also study the impact of several factors on the performance of these strategies, e.g. access pattern, bandwidth consumption and storage capacity.