
    Optimizing Flow Routing Using Network Performance Analysis

    Relevant conferences were attended, at which the work was presented, and several papers were published in the course of this project:
    • Muna Al-Saadi, Bogdan V. Ghita, Stavros Shiaeles, Panagiotis Sarigiannidis. A novel approach for performance-based clustering and management of network traffic flows, IWCMC, ©2019 IEEE.
    • M. Al-Saadi, A. Khan, V. Kelefouras, D. J. Walker, and B. Al-Saadi: Unsupervised Machine Learning-Based Elephant and Mice Flow Identification, Computing Conference 2021.
    • M. Al-Saadi, A. Khan, V. Kelefouras, D. J. Walker, and B. Al-Saadi: SDN-Based Routing Framework for Elephant and Mice Flows Using Unsupervised Machine Learning, Network, 3(1), pp. 218-238, 2023.
    The main task of a network is to hold and transfer data between its nodes. To achieve this, the network needs to find the optimal route for the data to travel by employing a routing system. This system examines each possible path and chooses a suitable one, transmitting the data packets to their destination as quickly as possible. It also contributes to network performance, since an optimal routing algorithm helps the network run efficiently. The clearest performance advantage provided by routing procedures is faster data access: the routing algorithm determines the best route based on where the data is stored and which destination device is requesting it. A network can handle many types of traffic simultaneously, but it cannot exceed the allowed bandwidth, i.e., the maximum data rate the network can transmit. Overloading therefore remains a real problem; to avoid it, the network chooses routes based on the available bandwidth. One serious problem in networks is link congestion and uneven load caused by elephant flows.
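The idea of selecting a route by available bandwidth can be sketched as a widest-path (maximum bottleneck bandwidth) search, a variant of Dijkstra's algorithm. This is an illustrative sketch only, not the routing system used in the thesis; the topology and link capacities below are invented.

```python
import heapq

def widest_path(graph, src, dst):
    """Return (bottleneck_bandwidth, path) maximizing the minimum
    link bandwidth along the path. graph[u] = {v: bandwidth_mbps}."""
    best = {src: float("inf")}
    # Max-heap on bottleneck bandwidth (negated for heapq's min-heap).
    heap = [(-float("inf"), src, [src])]
    while heap:
        neg_bw, node, path = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:
            return bw, path
        for nbr, link_bw in graph[node].items():
            cand = min(bw, link_bw)          # bottleneck so far
            if cand > best.get(nbr, 0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr, path + [nbr]))
    return 0, []

# Hypothetical topology: link capacities in Mbit/s.
net = {
    "A": {"B": 100, "C": 40},
    "B": {"A": 100, "D": 30},
    "C": {"A": 40, "D": 80},
    "D": {"B": 30, "C": 80},
}
bw, route = widest_path(net, "A", "D")
```

Here the direct-looking path via B is rejected because its 30 Mbit/s link is the tighter bottleneck; the path via C sustains 40 Mbit/s.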
    When elephant flows are forwarded, network links become congested with data packets, causing transmission collisions, network congestion, and transmission delay. Consequently, there is not enough bandwidth for mice flows, which then also suffer transmission delay. Traffic engineering (TE) is a network application concerned with measuring and managing network traffic and designing feasible routing mechanisms to guide traffic so as to improve the utilization of network resources. The main function of traffic engineering is finding an explicit route that satisfies the bandwidth requirements of the network, consequently optimizing network performance [1]. Routing optimization plays a key role in traffic engineering by finding efficient routes that achieve the desired network performance [2]. Furthermore, routing optimization can be considered one of the primary goals in the networking field. In particular, this goal is directly related to traffic engineering, as it rests on one central idea: ensuring that traffic is routed according to accurate traffic requirements [3]. Traffic engineering can therefore be seen as one of several applications of routing optimization; routing can also be optimized based on other factors, not just traffic requirements. These traffic requirements vary depending on whether the analyzed dataset consists of data or traffic control. In this regard, the logically centralized view of the Software-Defined Networking (SDN) controller facilitates many aspects compared with traditional routing. The main challenge in all network types is performance optimization, but the situation differs in SDN because the technique changes from a distributed approach to a centralized one. SDN characteristics such as centralized control and programmability make it possible to perform routing not only in the traditional distributed manner but also in a centralized manner.
    The first advantage of centralized routing using SDN is the existence of a path for exchanging information between the controller and infrastructure devices. Because the controller holds information about the entire network, flexible routing can be achieved. The second advantage is dynamic control of routing, owing to the capability of each device to change its configuration based on controller commands [4]. This thesis begins with a wide-ranging review of the importance of network performance analysis, its role in understanding network behavior, and how it contributes to improving network performance. Furthermore, it surveys existing solutions for network performance optimization using machine learning (ML) techniques in traditional networks and SDN environments. In addition, it highlights recent and ongoing studies of the unfair use of network resources by a particular flow type (elephant flows) and possible solutions to this problem. Existing solutions are predominantly flow-routing-based and do not consider the relationship between network performance analysis and flow characterization, or how to exploit that relationship to optimize flow routing by finding a suitable path for each type of flow. Therefore, attention is given to finding a method that can describe flows based on network performance analysis, to using this method to manage network performance efficiently, and to identifying possible integration with traffic control in SDN. To this end, the characterization of network flows is identified as a mechanism that may give insight into the diversity of flow features based on performance metrics and provide the possibility of enhancing traffic engineering in an SDN environment. Two different feature sets with respect to network performance metrics are employed to characterize network traffic.
    Unsupervised machine learning techniques, including Principal Component Analysis (PCA) and k-means cluster analysis, are applied to derive a traffic performance-based clustering model. Afterward, a thresholding-based flow identification paradigm is built using pre-defined parameters and thresholds. Finally, the resulting data clusters are integrated within a unified SDN architectural solution, which improves network management by finding the best route for each flow based on its type, and this solution is evaluated against a number of traffic data sources in different performance experiments. The novel framework is validated through a performance comparison between the SDN Ryu controller and the proposed SDN external application on three factors: throughput, bandwidth, and data transfer rate, across two experiments. Furthermore, the proposed method is validated on different Data Centre Network (DCN) topologies to demonstrate the effectiveness of the traffic management solution. The overall validation metrics show real gains: the results show that the proposed approach achieves higher performance with different flows 70% of the time. The proposed SDN traffic-engineering routing paradigm therefore dynamically provisions network resources among different flow types.
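A minimal sketch of the performance-based clustering pipeline described above (standardize flow features, project onto the first principal component, cluster with k-means) might look as follows. The feature values are invented stand-ins for real flow measurements, and the pipeline is deliberately simplified to one principal component and two clusters, unlike the thesis's actual feature sets and model.

```python
import math

# Hypothetical per-flow features: [duration_s, bytes, packets, mean_rate_bps]
flows = [
    [0.2, 3e3, 5.0, 15e3], [0.3, 4e3, 6.0, 13e3], [0.1, 2e3, 4.0, 20e3],  # mice-like
    [90.0, 8e8, 6e5, 9e6], [120.0, 1e9, 8e5, 8e6],                        # elephant-like
]

def standardize(data):
    """Zero-mean, unit-variance scaling per feature column."""
    cols = list(zip(*data))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)] for row in data]

def first_pc(data, iters=100):
    """First principal component via power iteration on X^T X."""
    d = len(data[0])
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(x * vi for x, vi in zip(row, v)) for row in data]
        w = [sum(p * row[j] for p, row in zip(proj, data)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def kmeans_1d(scores, k=2, iters=20):
    """Tiny k-means on 1-D PCA scores; returns a cluster label per flow."""
    centers = [min(scores), max(scores)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in scores:
            groups[min(range(k), key=lambda i: abs(s - centers[i]))].append(s)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return [min(range(k), key=lambda i: abs(s - centers[i])) for s in scores]

X = standardize(flows)
pc = first_pc(X)
scores = [sum(x * v for x, v in zip(row, pc)) for row in X]
labels = kmeans_1d(scores)
```

With features this well separated, the short flows fall into one cluster and the long, heavy flows into the other, which is the intuition behind thresholding-based elephant/mice identification.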

    Surveillance Graphs: Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures

    Information is power, and that power has been largely enclosed by a handful of information conglomerates. The logic of the surveillance-driven information economy demands systems for handling mass quantities of heterogeneous data, increasingly in the form of knowledge graphs. An archaeology of knowledge graphs and their mutation from the liberatory aspirations of the semantic web gives us an underexplored lens to understand contemporary information systems. I explore how the ideology of cloud systems steers two projects from the NIH and NSF intended to build information infrastructures for the public good toward inevitable corporate capture, facilitating the development of a new kind of multilayered public/private surveillance system in the process. I argue that understanding technologies like large language models as interfaces to knowledge graphs is critical to understanding their role in a larger project of informational enclosure and concentration of power. I draw from multiple histories of liberatory information technologies to develop Vulgar Linked Data as an alternative to the Cloud Orthodoxy, resisting the colonial urge for universality in favor of vernacular expression in peer-to-peer systems.

    Improving Data Availability in Decentralized Storage Systems

    PhD thesis in Information Technology. Preserving knowledge for future generations has been a primary concern for humanity since the dawn of civilization. State-of-the-art methods have included stone carvings, papyrus scrolls, and paper books. With each advance in technology, it has become easier to record knowledge. In the current digital age, humanity may preserve enormous amounts of knowledge on hard drives with the click of a button. The aggregation of several hard drives into a computer forms the basis for a storage system. Traditionally, large storage systems have comprised many distinct computers operated by a single administrative entity. With the rise in popularity of blockchain and cryptocurrencies, a new type of storage system has emerged. This new type of storage system is fully decentralized and comprises a network of untrusted peers cooperating to act as a single storage system. During upload, files are split into chunks and distributed across a network of peers. These storage systems encode files using Merkle trees, a hierarchical data structure that provides integrity verification and lookup services. While decentralized storage systems are popular and have a user base in the millions, many technical aspects are still in their infancy. As such, they have yet to prove themselves viable alternatives to traditional centralized storage systems. In this thesis, we contribute to the technical aspects of decentralized storage systems by proposing novel techniques and protocols. We make significant contributions with the design of three practical protocols that each improve data availability in different ways. Our first contribution is Snarl and entangled Merkle trees. Entangled Merkle trees are resilient data structures that decrease the impact hierarchical dependencies have on data availability. Whenever a chunk loss is detected, Snarl uses the entangled Merkle trees to find parity chunks to repair the lost chunk.
    Our results show that by encoding data as an entangled Merkle tree and using Snarl’s repair algorithm, the storage utilization in current systems could be improved by over five times, with improved data availability. Second, we propose SNIPS, a protocol that efficiently synchronizes the data stored on peers to ensure that all peers have the same data. We designed a Proof of Storage-like construction using a Minimal Perfect Hash Function. Each peer uses the PoS-like construction to create a storage proof for those chunks it wants to synchronize. Peers exchange storage proofs and use them to efficiently determine which chunks they are missing. The evaluation shows that by using SNIPS, the amount of synchronization data can be reduced by three orders of magnitude in current systems. Lastly, in our third contribution, we propose SUP, a protocol that uses cryptographic proofs to check if a chunk is already stored in the network before doing wasteful uploads. We show that SUP may reduce the amount of data transferred by up to 94% in current systems. The protocols may be deployed independently or in combination to create a decentralized storage system that is more robust to major outages. Each of the protocols has been implemented and evaluated on a large cluster of 1,000 peers.
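The Merkle-tree encoding these systems use for integrity verification can be sketched as follows. This is a plain binary Merkle tree with inclusion proofs, not Snarl's entangled variant, and the chunk contents are invented.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Build a binary Merkle tree bottom-up and return its root hash."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(chunks, index):
    """Sibling hashes needed to verify chunks[index] against the root."""
    level, proof = [h(c) for c in chunks], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (hash, sibling_is_left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(chunk, proof, root):
    """Recompute the path from a single chunk to the root."""
    node = h(chunk)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root

chunks = [b"chunk-%d" % i for i in range(5)]
root = merkle_root(chunks)
```

A peer holding only the root can thus check any single chunk with a logarithmic number of hashes, which is also why the loss of an interior node in a plain tree endangers every chunk beneath it, the dependency problem entangled Merkle trees address.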

    Measuring knowledge sharing processes through social network analysis within construction organisations

    The construction industry is a knowledge-intensive and information-dependent industry. Organisations risk losing valuable knowledge when employees leave. Therefore, construction organisations need to nurture opportunities to disseminate knowledge by strengthening knowledge-sharing networks. This study aimed to evaluate the formal and informal knowledge-sharing methods in social networks within Australian construction organisations and to identify how knowledge sharing could be improved. Data were collected from two estimating teams in two case studies. The data, collected through semi-structured interviews, were analysed using UCINET, a Social Network Analysis (SNA) tool, and SNA measures. The findings revealed that one case study consisted of influencers, while the other demonstrated an optimal knowledge-sharing structure in both formal and informal knowledge-sharing methods. Social networks can vary based on the organisation as well as individuals’ behaviour. Identifying networks with specific issues and taking steps to strengthen them will enable organisations to achieve optimal knowledge-sharing processes. This research offers good knowledge-sharing practices for construction organisations to optimise their knowledge-sharing processes.

    Estudo do IPFS como protocolo de distribuição de conteúdos em redes veiculares

    Over the last few years, vehicular ad-hoc networks (VANETs) have seen great progress due to the interest in autonomous vehicles and in distributing content, not only between vehicles but also to the Cloud. Performing a download/upload to/from a vehicle typically requires a cellular connection, but the costs associated with mobile data transfers in hundreds or thousands of vehicles quickly become prohibitive. A VANET allows these costs to be several orders of magnitude lower, while carrying the same large volumes of data, because it is strongly based on communication between vehicles (the nodes of the network) and the infrastructure. The InterPlanetary File System (IPFS) is a protocol for storing and distributing content in which information is addressed by its content instead of its location. Created in 2014, it seeks to connect all computing devices with the same system of files, comparable to a BitTorrent swarm exchanging Git objects. It has been tested and deployed in wired networks, but never in an environment where nodes have intermittent connectivity, such as a VANET. This work focuses on understanding IPFS, determining how and whether it can be applied to the vehicular network context, and comparing it with other content distribution protocols. In this dissertation, IPFS was first tested in a small, controlled network to assess its applicability to VANETs. Issues such as long neighbor discovery times and poor hashing performance were addressed. To compare IPFS with other protocols (such as Veniam’s proprietary solution or BitTorrent) in a relevant way and at scale, an emulation platform was created. The tests in this emulator were performed at different times of the day, with a variable number of files and file sizes. Emulated results show that IPFS is on par with Veniam’s custom protocol built specifically for V2V, and greatly outperforms BitTorrent in neighbor discovery and data transfers.
    An analysis of IPFS’ performance in a real scenario was also conducted, using a subset of STCP’s vehicular network in Oporto with the support of Veniam. Results from these tests show that IPFS can be used as a content dissemination protocol, proving itself up to the challenge posed by a constantly changing network topology and achieving throughputs of up to 2.8 MB/s, similar to or in some cases better than those of Veniam’s proprietary solution. (Master’s dissertation in Computer and Telematics Engineering.)
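IPFS's central idea, addressing content by its hash rather than by its location, can be illustrated with a toy content-addressed store. Real IPFS CIDs use multihash and multibase encodings, which this sketch omits, and the stored payload is invented.

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: the key *is* the hash of the value."""

    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()   # stand-in for a real CID
        self._blocks[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        data = self._blocks[cid]
        # Self-verifying: recomputing the hash detects any tampering.
        assert hashlib.sha256(data).hexdigest() == cid
        return data

store = ContentStore()
cid = store.put(b"road-condition update")
```

Because identical content always maps to the same identifier, any peer holding a copy can serve it, which is what makes the scheme attractive in a network whose topology changes constantly.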

    Music and Digital Media

    Anthropology has neglected the study of music. Music and Digital Media shows how and why this should be redressed. It does so by enabling music to expand the horizons of digital anthropology, demonstrating how the field can build interdisciplinary links to music and sound studies, digital/media studies, and science and technology studies. Music and Digital Media is the first comparative ethnographic study of the impact of digital media on music worldwide. It offers a radical and lucid new theoretical framework for understanding digital media through music, showing that music is today where the promises and problems of the digital assume clamouring audibility. The book contains ten chapters, eight of which present comprehensive original ethnographies; they are bookended by an authoritative introduction and a comparative postlude. Five chapters address popular, folk, art and crossover musics in the global South and North, including Kenya, Argentina, India, Canada and the UK. Three chapters bring the digital experimentally to the fore, presenting pioneering ethnographies of an extra-legal peer-to-peer site and the streaming platform Spotify, a series of prominent internet-mediated music genres, and the first ethnography of a global software package, the interactive music platform Max. The book is unique in bringing ethnographic research on popular, folk, art and crossover musics from the global North and South into a comparative framework on a large scale, and creates an innovative new paradigm for comparative anthropology. It shows how music enlarges anthropology while demanding to be understood with reference to classic themes of anthropological theory. Praise for Music and Digital Media: ‘Music and Digital Media is a groundbreaking update to our understandings of sound, media, digitization, and music.
    Truly transdisciplinary and transnational in scope, it innovates methodologically through new models for collaboration, multi-sited ethnography, and comparative work. It also offers an important defense of—and advancement of—theories of mediation.’ Jonathan Sterne, Communication Studies and Art History, McGill University 'Music and Digital Media is a nuanced exploration of the burgeoning digital music scene across both the global North and the global South. Ethnographically rich and theoretically sophisticated, this collection will become the new standard for this field.' Anna Tsing, Anthropology, University of California at Santa Cruz 'The global drama of music's digitisation elicits extreme responses – from catastrophe to piratical opportunism – but between them lie more nuanced perspectives. This timely, absolutely necessary collection applies anthropological understanding to a deliriously immersive field, bringing welcome clarity to complex processes whose impact is felt far beyond what we call music.' David Toop, London College of Communication, musician and writer ‘Spanning continents and academic disciplines, the rich ethnographies contained in Music and Digital Media make it obligatory reading for anyone wishing to understand the complex, contradictory, and momentous effects that digitization is having on musical cultures.’ Eric Drott, Music, University of Texas, Austin ‘This superb collection, with an authoritative overview as its introduction, represents the state of the art in studies of the digitalisation of music. It is also a testament to what anthropology at its reflexive best can offer the rest of the social sciences and humanities.’ David Hesmondhalgh, Media and Communication, University of Leeds ‘This exciting volume forges new ground in the study of local conditions, institutions, and sounds of digital music in the Global South and North.
    The book’s planetary scope and its commitment to the “messiness” of ethnographic sites and concepts amplify emergent configurations and meanings of music, the digital, and the aesthetic.’ Marina Peterson, Anthropology, University of Texas, Austin.

    Towards practicalization of blockchain-based decentralized applications

    Blockchain can be defined as an immutable ledger for recording transactions, maintained in a distributed network of mutually untrusting peers. Blockchain technology has been widely applied to various fields beyond its initial use in cryptocurrency. However, blockchain by itself is insufficient to meet all the desired security and efficiency requirements of diversified application scenarios. This dissertation focuses on two core functionalities that blockchain provides, i.e., robust storage and reliable computation. Three concrete application scenarios, Internet of Things (IoT), cybersecurity management (CSM), and peer-to-peer (P2P) content delivery networks (CDN), are used to elaborate the general design principles for these two functionalities. Among them, the IoT and CSM applications involve the design of blockchain-based robust storage and management, while the P2P CDN requires reliable computation. Such general design principles, derived from disparate application scenarios, have the potential to practicalize many other blockchain-enabled decentralized applications. In the IoT application, blockchain-based decentralized data management is capable of handling faulty nodes, as designed in the cybersecurity application. An important issue, however, lies in the interaction between the external network and the blockchain network: external clients must rely on a relay node to communicate with the full nodes in the blockchain. Compromise of such relay nodes may result in a security breach and even a blockage of IoT sensors from the network. Therefore, a censorship-resistant blockchain-based decentralized IoT management system is proposed. Experimental results from a proof-of-concept implementation deployed in a real distributed environment show its feasibility and effectiveness in achieving censorship resistance.
    The CSM application incorporates blockchain to provide robust storage of historical cybersecurity data so that, with a certain level of cyber intelligence, a defender can determine whether a network has been compromised and to what extent. The CSM functions can be categorized into three classes: network-centric (N-CSM), tools-centric (T-CSM), and application-centric (A-CSM). The cyber intelligence identifies new attackers, victims, or defense capabilities. Moreover, a decentralized storage network (DSN) is integrated to reduce on-chain storage costs without undermining robustness. Experiments with the prototype implementation and real-world cyber datasets show that the blockchain-based CSM solution is effective and efficient. The P2P CDN application explores the functionality of reliable computation that blockchain empowers. In particular, P2P CDNs promise benefits including cost savings and scalable peak-demand handling compared with centralized CDNs. However, reliable P2P delivery requires proper enforcement of delivery fairness. Unfortunately, most existing studies on delivery fairness are based on non-cooperative game-theoretic assumptions that are arguably unrealistic in the ad-hoc P2P setting. To address this issue, an expressive security requirement for fair P2P content delivery is defined, and two efficient blockchain-based approaches for P2P downloading and P2P streaming are proposed. The proposed system guarantees fairness for each party even when all others collude to misbehave arbitrarily, and achieves asymptotically optimal on-chain costs and optimal delivery communication.
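A common building block for enforcing delivery fairness, verifying each received chunk against a commitment published in advance (for example on-chain) before paying for the next one, can be sketched as follows. This illustrates only the general idea, not the dissertation's specific protocol; all names and data are hypothetical.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit(chunks):
    """Seller publishes per-chunk hashes up front (e.g., on-chain)."""
    return [sha(c) for c in chunks]

def fair_download(commitment, fetch_chunk):
    """Buyer fetches chunks one at a time, accepting (and paying for)
    only chunks that match the published commitment."""
    received, paid = [], 0
    for i, expected in enumerate(commitment):
        chunk = fetch_chunk(i)
        if sha(chunk) != expected:
            break                     # misbehavior detected: stop paying
        received.append(chunk)
        paid += 1                     # a micropayment would fire here
    return b"".join(received), paid

content = [b"aa", b"bb", b"cc"]
com = commit(content)
honest = lambda i: content[i]
cheat = lambda i: content[i] if i < 2 else b"xx"
```

The buyer's loss is bounded to at most one chunk's payment regardless of how the seller misbehaves, which is the intuition behind pairing per-chunk verification with incremental payment.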

    Music and Digital Media: A planetary anthropology

    Anthropology has neglected the study of music. Music and Digital Media shows how and why this should be redressed. It does so by enabling music to expand the horizons of digital anthropology, demonstrating how the field can build interdisciplinary links to music and sound studies, digital/media studies, and science and technology studies. Music and Digital Media is the first comparative ethnographic study of the impact of digital media on music worldwide. It offers a radical and lucid new theoretical framework for understanding digital media through music, showing that music is today where the promises and problems of the digital assume clamouring audibility. The book contains ten chapters, eight of which present comprehensive original ethnographies; they are bookended by an authoritative introduction and a comparative postlude. Five chapters address popular, folk, art and crossover musics in the global South and North, including Kenya, Argentina, India, Canada and the UK. Three chapters bring the digital experimentally to the fore, presenting pioneering ethnographies of an extra-legal peer-to-peer site and the streaming platform Spotify, a series of prominent internet-mediated music genres, and the first ethnography of a global software package, the interactive music platform Max. The book is unique in bringing ethnographic research on popular, folk, art and crossover musics from the global North and South into a comparative framework on a large scale, and creates an innovative new paradigm for comparative anthropology. It shows how music enlarges anthropology while demanding to be understood with reference to classic themes of anthropological theory

    The 45th Australasian Universities Building Education Association Conference: Global Challenges in a Disrupted World: Smart, Sustainable and Resilient Approaches in the Built Environment, Conference Proceedings, 23 - 25 November 2022, Western Sydney University, Kingswood Campus, Sydney, Australia

    This is the proceedings of the 45th Australasian Universities Building Education Association (AUBEA) conference, hosted by Western Sydney University in November 2022. The conference is organised by the School of Engineering, Design, and Built Environment in collaboration with the Centre for Smart Modern Construction, Western Sydney University. This year’s conference theme is “Global Challenges in a Disrupted World: Smart, Sustainable and Resilient Approaches in the Built Environment”, and the proceedings are expected to include over a hundred double-blind peer-reviewed papers.

    NFT as a proof of Digital Ownership-reward system integrated to a Secure Distributed Computing Blockchain Framework

    Today, the global economy is dependent on the Internet and computational resources. Although they are tightly interconnected, it is difficult to evaluate their degree of interdependence. Keeping up with the pace of technology can be a challenging task, mainly when updating the hardware and software infrastructure. Every day, corporations and governments are faced with this issue; most have been victims of cyber attacks, security breaches, and data leaks. The consequences are significant in monetary losses; damage remediation is difficult, and in certain circumstances impossible. The repercussions may include reputational damage, legal liability, and threats to national security (when attacks are carried out against critical infrastructures to control the resources of a country), to name a few. Similarly, data has become such an integral part of many industries that it is one of the most critical targets for attackers; it is often encrypted by ransomware, stolen, or corrupted. Without data, many companies would not be able to continue operating as they do. The combination of all these factors complicates the ability of organizations to cooperate, trust, and share information in efforts to research and develop solutions for industry and government. A promising technology can assist in significantly reducing the damage caused by the security threats outlined above: blockchain has proven to be one of the most promising inventions of the twenty-first century for transmitting and protecting information, offering high reliability and availability, low exposure to attacks, and encrypted data accessible to the entities willing to participate. Blockchain enables the embedding of immutable data and compiled source code, known as a ‘smart contract’, in which rules can be programmed to create business workflows.
    This thesis proposes a blockchain-based infrastructure solution, built on Hyperledger Fabric, for companies to securely transmit and share information using up-to-date encryption and data storage technologies, operating on the model of distributed systems and smart contracts. By representing unique digital assets as Non-Fungible Tokens (NFTs), the infrastructure can trust the integrity of the data while protecting it from counterfeiting. Through the use of the distributed file storage system IPFS, and by connecting all the relevant elements through a web-based application, it is possible to demonstrate that the implementation of such systems is feasible, highly scalable, and a useful tool that many organizations can utilize to create new work systems and workflows for digital asset management.