11 research outputs found

    Challenges in the capture and dissemination of measurements from high-speed networks

    The production of a large-scale monitoring system for a high-speed network leads to a number of challenges. These challenges are not purely technical but also socio-political and legal. The number of stakeholders in such a monitoring activity is large, including the network operators, the users, the equipment manufacturers and, of course, the monitoring researchers. The MASTS project (measurement at all scales in time and space) was created to instrument the high-speed JANET Lightpath network and has been extended to incorporate other paths supported by JANET(UK). Challenges the project has faced include: simple access to the network; legal issues involved in the storage and dissemination of the captured information, which may be personal; and the volume of data captured and the rate at which these data arrive at the store. To this end, the MASTS system will establish four monitoring points, each capturing packets on a high-speed link. Traffic header data will be continuously collected, anonymised, indexed, stored and made available to the research community. A legal framework for the capture and storage of network measurement data has been developed which allows the anonymised IP traces to be used for research purposes.
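
    As a rough illustration of the anonymisation step described above, the sketch below maps the IP addresses in captured header records to stable pseudonyms with a keyed hash and discards everything except header fields. The field names and the HMAC-based scheme are assumptions for illustration, not the method actually used by MASTS.

```python
# Minimal sketch: deterministic, keyed anonymisation of IP addresses in
# captured header records before they are indexed and stored.
# The record fields and the HMAC-based scheme are illustrative assumptions.
import hmac
import hashlib
import ipaddress

SECRET_KEY = b"site-local-secret"  # held by the capture site, never published

def anonymise_ip(addr: str) -> str:
    """Map a real IPv4 address to a stable pseudonymous one."""
    digest = hmac.new(SECRET_KEY, addr.encode(), hashlib.sha256).digest()
    # Use 4 bytes of the digest as a synthetic IPv4 address.
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

def anonymise_header(record: dict) -> dict:
    """Keep only header fields; replace addresses, drop any payload."""
    return {
        "ts": record["ts"],
        "src": anonymise_ip(record["src"]),
        "dst": anonymise_ip(record["dst"]),
        "proto": record["proto"],
        "sport": record.get("sport"),
        "dport": record.get("dport"),
        "length": record["length"],
    }

if __name__ == "__main__":
    pkt = {"ts": 1.23, "src": "192.0.2.1", "dst": "198.51.100.7",
           "proto": "TCP", "sport": 443, "dport": 55012, "length": 1500}
    print(anonymise_header(pkt))
```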

    A novel node selection method for wireless distributed edge storage based on SDN and a maldistributed decision model

    In distributed edge storage, data is allocated to network edge devices to achieve low latency, high security, and flexibility. However, traditional distributed edge storage systems consider only individual factors, such as node capacity, while overlooking the network status and the load states of the storage nodes, which impacts the system's read and write performance. Moreover, these systems exhibit inadequate scalability in widely adopted wireless terminal application scenarios. To overcome these challenges, this paper introduces a software-defined edge storage model and a distributed edge storage architecture grounded in software-defined networking (SDN) and the Server Message Block (SMB) protocol. A data storage node selection and distribution algorithm is formulated based on a maldistributed decision model that comprehensively considers the network and storage node load states. A system prototype is implemented in combination with 5G wireless communication technology. The experimental results demonstrate that, in comparison to conventional distributed edge storage systems, the proposed wireless distributed edge storage system exhibits significantly enhanced performance under high load conditions, demonstrating superior scalability and adaptability. This approach effectively addresses the scalability limitation, rendering it suitable for edge scenarios in mobile applications and reducing hardware deployment costs.
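
    The abstract does not give the decision model itself, but the following sketch shows the general shape of a node selection step that weighs network state and storage node load together. The attributes, weights and scoring formula are illustrative assumptions, not the paper's maldistributed decision model.

```python
# Illustrative sketch of a storage-node selection step that weighs both
# network state and node load. Attribute names, weights and the scoring
# formula are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float     # e.g. measured by the SDN controller
    bandwidth_mbps: float
    cpu_load: float       # 0.0 .. 1.0
    disk_used: float      # 0.0 .. 1.0

def score(node: Node, w=(0.35, 0.25, 0.2, 0.2)) -> float:
    """Higher is better: low latency, high bandwidth, low CPU and disk load."""
    return (w[0] * (1.0 / (1.0 + node.latency_ms))
            + w[1] * (node.bandwidth_mbps / 1000.0)
            + w[2] * (1.0 - node.cpu_load)
            + w[3] * (1.0 - node.disk_used))

def select_nodes(candidates, replicas=2):
    """Pick the top-scoring nodes to hold the data replicas."""
    return sorted(candidates, key=score, reverse=True)[:replicas]

nodes = [Node("edge-a", 4.0, 900, 0.7, 0.5),
         Node("edge-b", 9.0, 450, 0.2, 0.3),
         Node("edge-c", 2.5, 300, 0.9, 0.8)]
print([n.name for n in select_nodes(nodes)])
```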

    Topological characteristics of IP networks

    Topological analysis of the Internet is needed for developments in network planning, optimal routing algorithms, failure detection measures, and understanding business models. Accurate measurement, inference and modelling techniques are fundamental to Internet topology research. A requirement towards achieving such goals is the measurement of network topologies at different levels of granularity. In this work, I start by studying techniques for inferring, modelling, and generating Internet topologies at both the router and administrative levels. I also compare the mathematical models that are used to characterise various topologies and the generation tools based on them. Many topological models have been proposed to generate Internet Autonomous System (AS) topologies. I use an extensive set of measures and innovative methodologies to compare AS topology generation models with several observed AS topologies. This analysis shows that the existing AS topology generation models fail to capture important characteristics, such as the complexity of the local interconnection structure between ASes. Furthermore, I use routing data from multiple vantage points to show that using additional measurement points significantly affects our observations about local structural properties, such as clustering and node centrality. Degree-based properties, however, are not notably affected by additional measurement locations. The shortcomings of AS topology generation models stem from an underestimation of the complexity of the connectivity in the Internet and biases of measurement techniques. An increasing number of synthetic topology generators are available, each claiming to produce representative Internet topologies. Every generator has its own parameters, allowing the user to generate topologies with different characteristics. However, there exist no clear guidelines on tuning the values of these parameters in order to obtain a topology with specific characteristics. I propose a method which allows optimal parameters of a model to be estimated for a given target topology. The optimisation is performed using the weighted spectral distribution metric, which simultaneously takes into account many properties of a graph. In order to understand the dynamics of the Internet, I study the evolution of the AS topology over a period of seven years. To understand the structural changes in the topology, I use the weighted spectral distribution, as this metric reveals differences in the hierarchical structure of two graphs. The results indicate that the Internet is changing from a strongly customer-provider oriented, disassortative network, to a soft-hierarchical, peering-oriented, assortative network. This change is indicative of evolving business relationships amongst organisations.
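
    The weighted spectral distribution mentioned above compares graphs through the binned eigenvalue distribution of their normalized Laplacians, weighting each bin so that the extremes of the spectrum are damped. The sketch below follows that general definition; the bin count, the exponent N = 4 and the example graphs are assumptions, not the thesis's exact parameter choices.

```python
# Sketch of a weighted spectral distribution (WSD) comparison between two
# graphs, based on the eigenvalues of the normalized Laplacian.
import numpy as np
import networkx as nx

def wsd_distance(g1: nx.Graph, g2: nx.Graph, bins: int = 50, N: int = 4) -> float:
    """Weighted distance between the binned normalized-Laplacian spectra."""
    dists = []
    for g in (g1, g2):
        eig = nx.normalized_laplacian_spectrum(g)              # eigenvalues in [0, 2]
        hist, edges = np.histogram(eig, bins=bins, range=(0.0, 2.0))
        dists.append(hist / hist.sum())                        # eigenvalue distribution
    centres = 0.5 * (edges[:-1] + edges[1:])
    weights = (1.0 - centres) ** N   # damp the spectrum extremes, as in the WSD
    return float(np.sum(weights * (dists[0] - dists[1]) ** 2))

# Two synthetic example topologies, purely for demonstration.
g_a = nx.barabasi_albert_graph(500, 2, seed=1)
g_b = nx.gnp_random_graph(500, 0.008, seed=1)
print(wsd_distance(g_a, g_b))
```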

    A Comprehensive Review on Adaptability of Network Forensics Frameworks for Mobile Cloud Computing

    Network forensics enables the investigation and identification of network attacks through retrieved digital content. The proliferation of smartphones and cost-effective universal data access through the cloud have made Mobile Cloud Computing (MCC) an attractive target for network attacks. However, the constraints on carrying out forensics in MCC are bound up with the autonomous cloud hosting companies and their policies of restricted access to the digital content on the back-end cloud platforms. This implies that existing Network Forensic Frameworks (NFFs) have limited impact in the MCC paradigm. To this end, we qualitatively analyze the adaptability of existing NFFs when applied to MCC. Explicitly, the fundamental mechanisms of NFFs are highlighted and then analyzed using the most relevant parameters. A classification is proposed to help understand the anatomy of existing NFFs. Subsequently, a comparison is given that explores the functional similarities and deviations among NFFs. The paper concludes by discussing research challenges for progressive network forensics in MCC.

    Inferring malicious network events in commercial ISP networks using traffic summarisation

    With the recent increases in bandwidth available to home users, traffic rates for commercial national networks have also been increasing rapidly. This presents a problem for any network monitoring tool, as the traffic rate it is expected to monitor is rising on a monthly basis. Security within these networks is paramount as they are now an accepted home of trade and commerce. Core networks have been demonstrably and repeatedly open to attack; these events have had significant material costs to high profile targets. Network monitoring is an important part of network security, providing information about potential security breaches and aiding understanding of their impact. Monitoring at high data rates is a significant problem, both in terms of processing the information at line rates and in terms of presenting the relevant information to the appropriate persons or systems. This thesis suggests that the use of summary statistics, gathered over a number of packets, is a sensible and effective way of coping with high data rates. A methodology for discovering which metrics are appropriate for classifying significant network events using statistical summaries is presented. It is shown that the statistical measures found with this methodology can be used effectively as a metric for defining periods of significant anomaly, and further for classifying these anomalies as legitimate or otherwise. In a laboratory environment, these metrics were used to detect DoS traffic representing as little as 0.1% of the overall network traffic. The metrics discovered were then analysed to demonstrate that they are appropriate and rational metrics for the detection of network level anomalies. These metrics were shown to have distinctive characteristics during DoS by the analysis of live network observations taken during DoS events. This work was implemented and operated within a live system, at multiple sites within the core of a commercial ISP network. The statistical summaries are generated at city-based points of presence and gathered centrally to allow for spatial and topological correlation of security events. The architecture chosen was shown to be flexible in its application. The system was used to detect the level of VoIP traffic present on the network through the implementation of packet size distribution analysis in a multi-gigabit environment. It was also used to detect unsolicited SMTP generators injecting messages into the core. Monitoring in a commercial network environment is subject to data protection legislation. Accordingly, the system presented processed only network and transport layer headers, all other data being discarded at the capture interface. The system described in this thesis was operational for a period of 6 months, during which a set of over 140 network anomalies, both malicious and benign, were observed over a range of localities. The system design, example anomalies and metric analysis form the majority of this thesis.
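
    A minimal sketch of the summary statistics idea follows: per-interval statistics are computed over packet headers and an interval is flagged when a statistic deviates strongly from its recent baseline. The particular statistics (packet count, mean packet size, SYN fraction) and the z-score threshold are assumptions for illustration, not the metric set derived in the thesis.

```python
# Per-interval summary statistics over packet headers, with a simple
# deviation check against a running baseline. Chosen statistics and the
# threshold are illustrative assumptions.
from collections import deque
from statistics import mean, pstdev

def summarise(packets):
    """packets: iterable of (size_bytes, is_syn) seen in one interval."""
    sizes = [s for s, _ in packets]
    syns = sum(1 for _, syn in packets if syn)
    n = len(sizes)
    return {"pkts": n,
            "mean_size": mean(sizes) if n else 0.0,
            "syn_frac": syns / n if n else 0.0}

class AnomalyDetector:
    def __init__(self, history=60, threshold=3.0):
        self.history = {k: deque(maxlen=history)
                        for k in ("pkts", "mean_size", "syn_frac")}
        self.threshold = threshold

    def check(self, summary):
        """Return the names of statistics that deviate from the baseline."""
        anomalous = []
        for key, value in summary.items():
            past = self.history[key]
            if len(past) >= 10:
                mu, sigma = mean(past), pstdev(past)
                if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                    anomalous.append(key)
            past.append(value)
        return anomalous
```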

    Ontological interpretation of network monitoring data

    Interpreting measurement and monitoring data from networks in general, and the Internet in particular, is a challenge. The motivation for this work has been to investigate new ways to bridge the gap between the kind of data which are available and the more developed information which is needed by network stakeholders to support decision making and network management. Specific problems of syntax, semantics, conflicting data and modelling domain-specific knowledge have been identified. The methods developed and tested have used the Resource Description Framework (RDF) and the ontology languages of the Semantic Web to bring together data from disparate sources into unified knowledgebases in two discrete case studies, both using real network data. Those knowledgebases have then been demonstrated to be usable and valuable sources of information about the networks concerned. Some success has been achieved in overcoming each of the identified problems using these techniques, proving the thesis that taking an ontological approach to the processing of network monitoring data can be a very useful technique for overcoming problems of interpretation and for making information available to those who need it.
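
    As a small illustration of the ontological approach, the sketch below loads flow records into an RDF graph with rdflib and queries it with SPARQL. The namespace and property names are invented for the example; the thesis defines its own ontologies for its two case studies.

```python
# Load network monitoring records into an RDF graph and query it.
# The namespace and property names are invented for illustration.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

NET = Namespace("http://example.org/netmon#")

g = Graph()
g.bind("net", NET)

flows = [("10.0.0.1", "10.0.0.9", 443, 120000),
         ("10.0.0.2", "10.0.0.9", 25, 800)]

for i, (src, dst, dport, nbytes) in enumerate(flows):
    f = URIRef(f"http://example.org/netmon#flow{i}")
    g.add((f, RDF.type, NET.Flow))
    g.add((f, NET.srcAddress, Literal(src)))
    g.add((f, NET.dstAddress, Literal(dst)))
    g.add((f, NET.dstPort, Literal(dport, datatype=XSD.integer)))
    g.add((f, NET.byteCount, Literal(nbytes, datatype=XSD.integer)))

# Which flows sent more than 1 kB to port 443?
q = """
SELECT ?src ?bytes WHERE {
  ?f a net:Flow ;
     net:srcAddress ?src ;
     net:dstPort ?port ;
     net:byteCount ?bytes .
  FILTER(?port = 443 && ?bytes > 1000)
}
"""
for row in g.query(q, initNs={"net": NET}):
    print(row.src, row.bytes)
```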

    Mobile Internet Usage - Network Traffic Measurements

    Fundamental transformations are taking place in the telecommunication domain as the Internet and mobile industries converge. Mobile phones are developing into multimedia computers and laptops are getting smaller and increasingly include cellular connectivity, increasing the number of mobile-Internet-capable devices. Furthermore, as mobile broadband prices have decreased and offered bandwidths have increased, the usage of the mobile Internet has also been increasing rapidly during the past couple of years. In search of new revenue sources, various industry stakeholders are interested in measurements that can help in understanding mobile Internet usage patterns. This thesis focuses on mobile network traffic measurements and studies their applicability for providing market understanding for the different stakeholders. First, measurements from operational Finnish mobile networks are analyzed to provide factual statistics on the usage patterns of the Finnish market. Second, the properties of the existing measurement organization are analyzed, possible measurement design and development areas are classified, and recommendations are provided for further development of the measurements. The factual statistics showed that most of the Finnish mobile Internet traffic volume is generated by computers, whereas the share of traffic generated by mobile handsets is less than one percent. The Symbian operating system dominates the web-oriented mobile handset usage. Traditional Finnish media houses, social media sites, and Nokia are among the most popular content providers for web usage. In addition, web traffic classes other than web browsing, such as email and synchronization, were observed to be used by mobile handsets. As different measurement points in a mobile network provide different data granularity, the choices related to the measurement have to be made according to the objectives of the measurement. If advanced analysis is needed, it is recommended that the measurements be conducted at a point in the mobile network where user identification is possible, whereas total traffic level patterns from IP traffic are adequate for general market description. From a mobile operator viewpoint, automated and continuous data collection and analysis could enable utilization of the results in multiple corporate functions. In general, the possibilities of traffic measurements are vast; on the other hand, they may require significant resources to reach their full potential. Nevertheless, mobile network traffic measurements can provide intelligence and support for operators in their decision making and business development.
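
    A toy example of the kind of aggregation behind the reported statistics is shown below: traffic volume share by device class, assuming flow records have already been labelled with a device class (for example via terminal-type or user-agent lookups). The field names and figures are illustrative only.

```python
# Aggregate traffic volume share by device class from labelled flow records.
# Records and labels are invented for illustration.
from collections import defaultdict

records = [
    {"device": "computer", "bytes": 5_000_000},
    {"device": "handset",  "bytes": 40_000},
    {"device": "computer", "bytes": 2_500_000},
    {"device": "handset",  "bytes": 10_000},
]

totals = defaultdict(int)
for r in records:
    totals[r["device"]] += r["bytes"]

grand_total = sum(totals.values())
for device, nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{device}: {100.0 * nbytes / grand_total:.1f}% of traffic volume")
```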

    A Digital Forensic Readiness Approach for e-Supply Chain Systems

    The internet has had a major impact on how information is shared within supply chains, and in commerce in general. This has resulted in the establishment of information systems such as e-supply chains (eSCs), amongst others, which integrate the internet and other information and communications technology (ICT) with traditional business processes for the swift transmission of information between trading partners. Many organisations have reaped the benefits that come from adopting the eSC model, but have also faced the challenges with which it comes. One such major challenge is information security. With the current state of cybercrime, system developers are challenged with the task of developing cutting-edge digital forensic readiness (DFR) systems that can keep up with current technological advancements, such as eSCs. Hence, this research highlights the lack of a well-formulated eSC-DFR approach that can assist system developers in the development of e-supply chain digital forensic readiness systems. The main objective of such a system is to provide law enforcement and digital forensic investigators that operate on eSC platforms with forensically sound and readily available potential digital evidence that can expedite and support digital forensic incident-response processes. This approach, if implemented, can also prepare trading partners for security incidents that might take place, if not prevent them from occurring. Therefore, the work presented in this research is aimed at providing a procedural approach, based on digital forensic principles, for eSC system architects and eSC network service providers to follow in the design of eSC-DFR tools. The author proposes an eSC-DFR process model and eSC-DFR system architectural design that was implemented as part of this research, illustrating the concepts of evidence collection, evidence pre-analysis, evidence preservation and system usability alongside other digital forensic principles and techniques. It is the view of the author that the conclusions drawn from this research can spearhead the development of cutting-edge eSC-DFR systems that are intelligent, effective, user friendly and compliant with international standards. Dissertation (MEng), University of Pretoria, 2019.
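
    As one hedged illustration of the evidence preservation concept named above, the sketch below appends collected eSC events to a hash-chained log so that later tampering is detectable. This is a generic technique chosen for illustration, not the specific eSC-DFR architecture proposed in the dissertation.

```python
# Hash-chained evidence log: each entry commits to the previous entry's hash,
# so modifying or reordering stored evidence breaks verification.
# Event contents are invented for illustration.
import hashlib
import json
import time

class EvidenceLog:
    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        entry = {"ts": time.time(), "event": event, "prev": self.prev_hash}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.prev_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered or dropped."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = EvidenceLog()
log.append({"type": "order_placed", "order_id": "A-1001", "partner": "supplier-x"})
log.append({"type": "payment_authorised", "order_id": "A-1001"})
print(log.verify())
```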