
    Is Content Publishing in BitTorrent Altruistic or Profit-Driven?

    BitTorrent is the most popular P2P content delivery application, where individual users share various types of content with tens of thousands of other users. The growing popularity of BitTorrent is primarily due to the availability of valuable content at no cost to consumers. However, apart from the resources required, publishing (sharing) valuable (and often copyrighted) content has serious legal implications for the users who publish the material (the publishers). This raises the question of whether (at least the major) content publishers behave altruistically or have other incentives, such as financial gain. In this study, we identify the content publishers of more than 55K torrents in two major BitTorrent portals and examine their behavior. We demonstrate that a small fraction of publishers is responsible for 66% of the published content and 75% of the downloads. Our investigation reveals that these major publishers fit two different profiles. On one hand, antipiracy agencies and malicious publishers publish a large number of fake files, to protect copyrighted content and to spread malware respectively. On the other hand, content publishing in BitTorrent is largely driven by companies with a financial incentive. Therefore, if these companies lose interest or become unable to publish content, BitTorrent portals may disappear, or at least their associated traffic will be significantly reduced.
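    The concentration claim is easy to operationalize. Below is a minimal sketch, assuming hypothetical per-torrent records of publisher name and download count (the field names and example data are illustrative, not the paper's dataset), that computes the share of content and downloads attributable to the top publishers:

```python
# Minimal sketch: measuring how concentrated content publishing is,
# given hypothetical per-torrent (publisher, downloads) records.
# The example data below is illustrative, not the paper's dataset.
from collections import Counter

torrents = [
    {"publisher": "eztv", "downloads": 120_000},
    {"publisher": "eztv", "downloads": 95_000},
    {"publisher": "axxo", "downloads": 80_000},
    {"publisher": "random_user_1", "downloads": 150},
    {"publisher": "random_user_2", "downloads": 90},
]

published = Counter()   # torrents published per publisher
downloads = Counter()   # downloads attracted per publisher
for t in torrents:
    published[t["publisher"]] += 1
    downloads[t["publisher"]] += t["downloads"]

def top_share(counter, fraction=0.1):
    """Share of the total accounted for by the top `fraction` of publishers."""
    ranked = sorted(counter.values(), reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return sum(ranked[:k]) / sum(ranked)

print(f"top 10% of publishers: {top_share(published):.0%} of torrents, "
      f"{top_share(downloads):.0%} of downloads")
```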

    Storytelling Security: User-Intention Based Traffic Sanitization

    Malicious software (malware) with a decentralized communication infrastructure, such as peer-to-peer botnets, is difficult to detect. In this paper, we describe a traffic-sanitization method for identifying malware-triggered outbound connections from a personal computer. Our solution correlates user activities with the content of outbound traffic. Our key observation is that user-initiated outbound traffic typically has corresponding human inputs, i.e., keystrokes or mouse clicks. Our analysis of the causal relations between user inputs and packet payloads enables the efficient enforcement of inter-packet dependencies at the application level. We formalize our approach within the framework of a protocol state machine and define new application-level traffic-sanitization policies that enforce the inter-packet dependencies. The dependencies are derived from the transitions among protocol states that involve both user actions and network events. We refer to our methodology as storytelling security. We demonstrate a concrete realization of our methodology in the context of a peer-to-peer file-sharing application and describe its use in blocking the traffic of P2P bots on a host. We implement and evaluate our prototype on the Windows operating system in both online and offline deployment settings. Our experimental evaluation, along with case studies of real-world P2P applications, demonstrates the feasibility of verifying inter-packet dependencies. Our deep packet inspection incurs overhead on the outbound network flow. Our solution can also be used as an offline collect-and-analyze tool.
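    The core mechanism can be illustrated with a toy state check. The following is a minimal sketch, not the paper's implementation: the event hooks (on_user_input, on_outbound_request) and the time window are assumptions introduced for illustration. It permits an outbound request only when a recent human input could plausibly have triggered it:

```python
# Minimal sketch of the "user-intention" idea: permit an outbound
# application-level request only if a recent human input (keystroke or
# mouse click) could have triggered it. Event names, the time window,
# and the policy are illustrative assumptions, not the paper's exact
# protocol state machine.
import time

USER_INPUT_WINDOW = 2.0  # seconds: max gap between input and request

class SanitizationPolicy:
    def __init__(self):
        self.last_input_ts = None

    def on_user_input(self, event):          # e.g. "click", "keystroke"
        self.last_input_ts = time.monotonic()

    def on_outbound_request(self, request):  # e.g. a P2P search message
        now = time.monotonic()
        if self.last_input_ts is not None and now - self.last_input_ts <= USER_INPUT_WINDOW:
            return "ALLOW"   # request is plausibly user-initiated
        return "BLOCK"       # no supporting human input: possible bot traffic

policy = SanitizationPolicy()
policy.on_user_input("click")                              # user clicks "search"
print(policy.on_outbound_request("SEARCH 'ubuntu.iso'"))   # ALLOW
time.sleep(2.1)                                            # no input since
print(policy.on_outbound_request("SEARCH 'anything'"))     # BLOCK
```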

    Anomaly detection using network traffic characterization

    Thesis (Master)--Izmir Institute of Technology, Computer Engineering, Izmir, 2009. Includes bibliographical references (leaves: 63-66). Text in English; abstract in Turkish and English. ix, 80 leaves.

    Detecting suspicious traffic and anomaly sources is a general tendency in traffic analysis. Given the necessity of detecting anomalies, different approaches and their software implementations have been developed. Either event-based or signature-based anomaly detection mechanisms can be applied to analyze network traffic. Signature-based approaches require the signatures of past anomalies, whereas event-based approaches offer more flexibility, making it possible to define application-level anomalies. Both approaches focus on defining and detecting abnormal traffic. The underlying problem is that there is no common definition of an anomaly across all protocols or malicious attacks. This thesis aims to define non-malicious traffic and extract it, so that the remainder can be marked as suspicious traffic for further analysis. To achieve this, a method and its software implementation for identifying IP sessions, based on statistical metrics of the packet flows, are presented. An adaptive network-flow knowledge base is derived, constructed from calculated flow attributes. A method to define known traffic using these attributes is presented: each analyzed flow is categorized as a known application-level protocol. A mathematical model to analyze the remaining undefined traffic and expose network traffic anomalies is also explained. The model is based on principal component analysis applied to the origin-destination pair flows. Using metric-based traffic characterization and principal component analysis, it is observed that network traffic can be analyzed and some anomalies can be detected.
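    The PCA stage described above can be sketched as follows. This is a minimal illustration on synthetic flow metrics; the feature columns, the number of retained components, and the 3-sigma threshold are assumptions, not the thesis's exact parameters. Flows that reconstruct poorly from the top principal components are flagged as anomalous:

```python
# Minimal sketch: principal component analysis over per-flow metrics,
# flagging flows whose reconstruction from the top components is poor.
# Features, component count, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
# rows = flows; columns = e.g. [packets, bytes, mean inter-arrival, duration]
flows = rng.normal(loc=[100, 80_000, 0.05, 30], scale=[10, 5_000, 0.01, 3],
                   size=(500, 4))
flows[0] = [5_000, 90_000, 0.001, 2]     # an injected anomalous flow

X = (flows - flows.mean(axis=0)) / flows.std(axis=0)   # standardize
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                                   # keep top-2 components
X_hat = X @ Vt[:k].T @ Vt[:k]                           # project and reconstruct
residual = np.linalg.norm(X - X_hat, axis=1)            # per-flow residual norm

threshold = residual.mean() + 3 * residual.std()        # simple 3-sigma cutoff
print("anomalous flow indices:", np.where(residual > threshold)[0])
```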

    Network Traffic Measurements, Applications to Internet Services and Security

    Over the years, the Internet has become a pervasive network interconnecting billions of users, and it now acts as a collector for a multitude of tasks, ranging from professional activities to personal interactions. From a technical standpoint, novel architectures, e.g., cloud-based services and content delivery networks; innovative devices, e.g., smartphones and connected wearables; and security threats, e.g., DDoS attacks, are posing new challenges to understanding network dynamics. In such a complex scenario, network measurements play a central role in guiding traffic management, improving network design, and evaluating application requirements. In addition, increasing importance is devoted to the quality of experience provided to final users, which requires thorough investigations of both the transport network and the design of Internet services. In this thesis, we stress the centrality of users by focusing on the traffic they exchange with the network. To do so, we design methodologies complementing passive and active measurements, as well as post-processing techniques from the machine learning and statistics domains. Traffic exchanged by Internet users can be classified into three macro-groups: (i) outbound, produced by users’ devices and pushed to the network; (ii) unsolicited, part of malicious attacks threatening users’ security; and (iii) inbound, directed to users’ devices and retrieved from remote servers. For each of these categories, we address a specific research topic: the benchmarking of personal cloud storage services, the automatic identification of Internet threats, and the assessment of quality of experience in the Web domain, respectively. The results comprise several contributions in the scope of each research topic. In short, they shed light on (i) the interplay among design choices of cloud storage services, which severely impact the performance provided to end users; (ii) the feasibility of designing a general-purpose classifier to detect malicious attacks without chasing threat specificities; and (iii) the relevance of appropriate means to evaluate the perceived quality of Web page delivery, strengthening the need for user feedback in a factual assessment.
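    As one concrete illustration of the second topic, a general-purpose attack detector can be prototyped as an ordinary supervised classifier over coarse flow features. The sketch below uses synthetic stand-in data and arbitrary feature columns, not the thesis's measurements or its actual model:

```python
# Minimal sketch: a generic supervised classifier separating unsolicited
# (attack) traffic from benign flows using coarse flow features.
# Features and labels are synthetic stand-ins, not the thesis's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1_000
# columns: [pkts/s, mean pkt size, distinct dst ports, SYN ratio]
benign = rng.normal([50, 800, 3, 0.1], [10, 100, 1, 0.05], size=(n, 4))
attack = rng.normal([400, 60, 200, 0.9], [80, 20, 50, 0.05], size=(n, 4))
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)   # 0 = benign, 1 = unsolicited

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```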

    Design and optimization of VoIP PBX infrastructure

    In the recent decade, communication has shifted from old wired media such as the public switched telephone network (PSTN) to the Internet. At present, Voice over Internet Protocol (VoIP) technology is used for communication over the Internet by means of packet switching. Several years ago, an Internet Protocol (IP) based system known as the Private Branch Exchange (PBX) was introduced as a substitute for conventional PSTN systems. For free communication, one may have to be content with domestic calls; in many cases, however, VoIP services can considerably reduce monthly phone bills. For instance, for someone who makes frequent international calls, a VoIP service offers real savings that cannot be achieved with a regular switched phone, and it can help trim down phone bills for users who make many long-distance (international) as well as domestic calls. However, along with VoIP's success, threats and challenges remain. In this dissertation, penetration testing is used to show how to find network vulnerabilities and how they can be attacked to exploit the network for malicious activities, and security techniques for protecting a network are also presented. The results achieved through penetration testing serve as a proof of concept and help raise the level of network security, so that a more secure network can be built in the future.
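    A benign first step in such a penetration test is simply confirming that the PBX's SIP service answers. The sketch below sends a SIP OPTIONS probe over UDP; the target address and header values are placeholders, and it should only be run against a PBX one is authorized to test:

```python
# Minimal sketch: probe a PBX with a SIP OPTIONS request over UDP and
# report whether the service responds. Host and header values are
# placeholders; use only against systems you are authorized to test.
import socket
import uuid

HOST, PORT = "192.0.2.10", 5060   # placeholder PBX address (TEST-NET)

request = (
    f"OPTIONS sip:{HOST} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP 0.0.0.0:5060;branch=z9hG4bK{uuid.uuid4().hex[:10]}\r\n"
    f"From: <sip:probe@pentest.invalid>;tag={uuid.uuid4().hex[:8]}\r\n"
    f"To: <sip:{HOST}>\r\n"
    f"Call-ID: {uuid.uuid4().hex}\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Max-Forwards: 70\r\n"
    "Content-Length: 0\r\n\r\n"
)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(3)
    sock.sendto(request.encode(), (HOST, PORT))
    try:
        data, _ = sock.recvfrom(4096)
        print(data.decode(errors="replace").splitlines()[0])  # e.g. "SIP/2.0 200 OK"
    except socket.timeout:
        print("no SIP response: port filtered or service down")
```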

    Using deep learning to classify community network traffic

    Traffic classification is an important aspect of network management. It improves quality of service, traffic engineering, bandwidth management, and Internet security. Traffic classification methods continue to evolve due to the ever-changing dynamics of modern computer networks and the traffic they generate. Numerous studies on traffic classification make use of Machine Learning (ML) and single Deep Learning (DL) models. ML classification models are effective to a certain degree; however, studies have shown that they record low prediction and accuracy scores. In contrast, the proliferation of various deep learning techniques has yielded higher accuracy in traffic classification, and DL models have been successful in identifying encrypted network traffic. Furthermore, DL learns new features without the need for much feature engineering compared to ML or traditional methods. Traditional methods are inefficient in meeting the ever-changing requirements of networks and network applications, and they are unfeasible and costly to maintain because they need constant updates to preserve their accuracy. In this study, we carry out a comparative analysis by adopting an ML model (Support Vector Machine) against DL models (Convolutional Neural Networks (CNN), Gated Recurrent Unit (GRU), and a hybrid model, CNNGRU) to classify encrypted internet traffic collected from a community network. The results show that the DL models tend to generalise better with the dataset in comparison to ML. Among the deep learning models, the hybrid model outperformed all the others in terms of accuracy score. However, the model with the best accuracy rate was not necessarily the one with the shortest prediction time, considering that it was more complex. Support Vector Machines outperformed the deep learning models in terms of prediction speed.
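    For concreteness, the CNNGRU hybrid compared in the study can be sketched roughly as follows; this is a PyTorch illustration in which the payload length, layer sizes, and number of traffic classes are assumptions, since the abstract does not specify them. A 1-D convolution extracts local byte patterns from a packet payload, and a GRU models their sequence:

```python
# Minimal sketch of a CNN-GRU hybrid traffic classifier: a 1-D conv
# layer extracts local byte features, a GRU models their sequence.
# Input length, layer sizes, and class count are illustrative.
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2),  # local byte features
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, 1, payload_len)
        h = self.conv(x)             # (batch, 32, payload_len / 2)
        h = h.transpose(1, 2)        # (batch, time, features) for the GRU
        _, last = self.gru(h)        # last hidden state: (1, batch, 64)
        return self.fc(last.squeeze(0))

model = CNNGRU()
payloads = torch.rand(8, 1, 784)     # 8 packets of 784 normalized bytes
print(model(payloads).shape)         # torch.Size([8, 10])
```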