
    Herding Vulnerable Cats: A Statistical Approach to Disentangle Joint Responsibility for Web Security in Shared Hosting

    Full text link
    Hosting providers play a key role in fighting web compromise, but their ability to prevent abuse is constrained by the security practices of their own customers. Shared hosting offers a unique perspective, since customers operate under restricted privileges and providers retain more control over configurations. We present the first empirical analysis of the distribution of web security features and software patching practices among shared hosting providers, the influence of providers on these security practices, and their impact on web compromise rates. We construct provider-level features for the global shared hosting market -- comprising 1,259 providers -- by gathering indicators from 442,684 domains. Exploratory factor analysis of 15 indicators identifies four main latent factors that capture security efforts: content security, webmaster security, web infrastructure security, and web application security. We confirm, via a fixed-effect regression model, that providers exert significant influence over the latter two factors, which are both related to the software stack in their hosting environment. Finally, by means of GLM regression analysis of these factors on phishing and malware abuse, we show that the four security and software patching factors explain between 10% and 19% of the variance in abuse at providers, after controlling for size. For web application security, for instance, we find that a provider moving from the bottom 10% to the best-performing 10% would experience 4 times fewer phishing incidents. We show that providers have influence over patch levels -- even higher in the stack, where CMSes can run as client-side software -- and that this influence is tied to a substantial reduction in abuse levels.
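    The analysis pipeline described in this abstract (latent factors extracted from indicators, then count regression of abuse on those factors while controlling for size) can be illustrated compactly. The snippet below is a minimal sketch of that style of analysis, assuming a hypothetical provider-level table providers.csv with indicator columns prefixed ind_, a phishing_count column, and an n_domains size measure; none of these names, nor the negative binomial family, are taken from the paper.

```python
# Sketch of a factor-analysis + GLM pipeline in the spirit of the abstract.
# File name, column names, and model family are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

# Hypothetical provider-level table: security indicators, an abuse count,
# and a size measure (e.g., number of hosted domains).
df = pd.read_csv("providers.csv")
indicator_cols = [c for c in df.columns if c.startswith("ind_")]

# Reduce the raw indicators to four latent security factors.
fa = FactorAnalysis(n_components=4, random_state=0)
factors = pd.DataFrame(
    fa.fit_transform(df[indicator_cols]),
    columns=["content", "webmaster", "infrastructure", "application"],
)

# Count regression of abuse on the factors, controlling for provider size.
X = sm.add_constant(factors.assign(log_size=np.log(df["n_domains"])))
model = sm.GLM(df["phishing_count"], X, family=sm.families.NegativeBinomial())
print(model.fit().summary())
```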

    Foundations and Technological Landscape of Cloud Computing

    Get PDF
    The cloud computing paradigm has brought the benefits of utility computing to a global scale and has gained paramount attention in recent years. Companies are seriously considering adopting this new paradigm and expect to receive significant benefits. In fact, the concept of cloud computing is not a revolution in terms of technology; it has been established on the solid ground of virtualization, distributed systems, and web services. To comprehend cloud computing, its foundations and technological landscape need to be adequately understood. This paper provides a comprehensive review of the building blocks of cloud computing and the relevant technological aspects. It focuses on four key areas: architecture, virtualization, data management, and security issues.

    Endpoints and Interdependencies in Internet of Things Residual Artifacts: Measurements, Analyses, and Insights into Defenses

    Get PDF
    The usage of Internet of Things (IoT) devices is growing fast. Moreover, the lack of security measures on IoT devices and their persistent online connections give adversaries an opportunity to exploit them in multiple types of attacks, such as distributed denial-of-service (DDoS). To understand the risks posed by IoT devices, we analyze IoT malware from an endpoint standpoint. We investigate the relationship between endpoints infected and attacked by IoT malware, and gain insights into the underlying dynamics of the malware ecosystem. We observe affinities and differing patterns among endpoints. Towards this, we reverse-engineer 2,423 IoT malware samples and extract IP addresses from them. We further gather information about these endpoints from Internet-wide scans. For masked IP addresses, we examine their network distribution, with networks accumulating more than 100 million endpoints. Moreover, we conduct a network penetration analysis, leveraging information such as active ports, vulnerabilities, and organizations. We discover the possibility of ports being an entry point of attack and observe a low presence of vulnerable services in dropzones. Our analysis shows the tolerance of organizations towards endpoints with malicious intent. To understand the dependencies among malware, we highlight dropzone characteristics, including spatial, network, and organizational affinities. Towards the analysis of dropzones' interdependencies and dynamics, we identify dropzone chains. In particular, we identify 56 unique chains, which unveil coordination among different malware families. Our further analysis of chains suggests a centrality-based defense and monitoring mechanism to limit malware propagation. Finally, we propose a defense based on the observed measures, such as blocked/blacklisted IP addresses or ports. In particular, we investigate network-level and country-level defenses by blocking a list of ports that are not commonly used by benign applications, and study the underlying issues and possible solutions of such a defense.
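    Two of the measurement steps above lend themselves to a compact illustration: harvesting IPv4-looking strings from unpacked samples, and ranking dropzones in a chain graph by centrality as a basis for the centrality-based defense mentioned. The sketch below is illustrative only; the directory layout, the regular expression, and the networkx-based betweenness ranking are assumptions rather than the authors' actual tooling.

```python
# Sketch: extract IPv4-like strings from unpacked malware samples and rank
# dropzones in a chain graph by betweenness centrality. Paths, the regex,
# and the ranking method are illustrative assumptions.
import re
from pathlib import Path

import networkx as nx

IPV4 = re.compile(rb"(?:\d{1,3}\.){3}\d{1,3}")

def extract_ips(sample_path: Path) -> set[str]:
    """Return candidate IPv4 endpoints embedded in a binary sample."""
    return {m.decode() for m in IPV4.findall(sample_path.read_bytes())}

# Directed chain graph: an edge A -> B means a sample hosted on dropzone A
# references dropzone B as an endpoint (file naming is hypothetical).
chains = nx.DiGraph()
for sample in Path("samples/").glob("*.bin"):
    source = sample.stem  # hypothetical: file named after its dropzone IP
    for ip in extract_ips(sample):
        if ip != source:
            chains.add_edge(source, ip)

# Dropzones with the highest betweenness are the most attractive
# monitoring/blocking points for limiting propagation.
ranking = sorted(nx.betweenness_centrality(chains).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking[:10])
```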

    Efficient algorithms for passive network measurement

    Get PDF
    Network monitoring has become a necessity to aid in the management and operation of large networks. Passive network monitoring consists of extracting metrics (or any information of interest) by analyzing the traffic that traverses one or more network links. Extracting information from a high-speed network link is challenging, given the large data volumes and short packet inter-arrival times. These difficulties can be alleviated by using extremely efficient algorithms or by sampling the incoming traffic. This work improves the state of the art in both approaches. For one-way packet delay measurement, we propose a series of improvements over a recently proposed technique called the Lossy Difference Aggregator. A main limitation of this technique is that it does not provide per-flow measurements. We propose a data structure called the Lossy Difference Sketch that is capable of providing such per-flow delay measurements and, unlike recent related works, does not rely on any model of packet delays. For the problem of collecting measurements under the sliding-window model, we focus on estimating the number of active flows and on traffic filtering. Using a common approach, we propose one algorithm for each problem that achieves high accuracy with significant resource savings. In the area of traffic sampling, the selection of the sampling rate is a crucial aspect. The most sensible approach involves dynamically adjusting sampling rates according to network traffic conditions, which is known as adaptive sampling. We propose an algorithm called Cuckoo Sampling that can operate with a fixed memory budget and perform adaptive flow-wise packet sampling. It is based on a very simple data structure and is computationally extremely lightweight. The techniques presented in this work are thoroughly evaluated through a combination of theoretical and experimental analysis.
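    To make the one-way delay setting concrete, the following toy version captures the idea behind the Lossy Difference Aggregator that this work builds on: sender and receiver hash each packet into matching banks of (timestamp-sum, packet-count) buckets, and only buckets whose counts agree on both sides (i.e., saw no loss) contribute to the average delay estimate. Bucket count, hash choice, and the interface are simplifications, not the thesis's implementation.

```python
# Toy Lossy Difference Aggregator (LDA): estimate average one-way delay from
# buckets whose packet counts match at sender and receiver; buckets affected
# by loss are discarded. Simplified for illustration.
import zlib

class LDA:
    def __init__(self, num_buckets: int = 64):
        self.sums = [0.0] * num_buckets    # per-bucket timestamp sums
        self.counts = [0] * num_buckets    # per-bucket packet counts

    def record(self, packet_id: bytes, timestamp: float) -> None:
        b = zlib.crc32(packet_id) % len(self.counts)
        self.sums[b] += timestamp
        self.counts[b] += 1

def average_delay(sender: LDA, receiver: LDA) -> float:
    """Average one-way delay over buckets unaffected by loss."""
    delay_sum, packets = 0.0, 0
    for s_sum, s_cnt, r_sum, r_cnt in zip(
        sender.sums, sender.counts, receiver.sums, receiver.counts
    ):
        if s_cnt == r_cnt and s_cnt > 0:  # no loss observed in this bucket
            delay_sum += r_sum - s_sum
            packets += s_cnt
    return delay_sum / packets if packets else float("nan")
```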

    Investigação e desenho de uma solução empresarial de alojamento web para Angola

    Get PDF
    Technology infrastructures are indispensable pillars of technological development, supporting the economic growth of a country or a company as well as its productive and entrepreneurial capacity. One of the most important activities in this technological change is supplying access services to websites and Internet servers. While developed countries enjoy a wide and varied offer of such services, in other countries the offer is still scarce and not very versatile. One of the barriers to increasing the capacity to offer these Internet services is the complexity of designing a solution for them, as it comprises a wide set of requirements, ranging from the physical, such as the network infrastructure, to the logical, such as virtualization technologies. A study was carried out based on the current situation of the Angolan market regarding providers of Internet-based services, the promotion of innovative solutions, and the infrastructures that support these services. A clear need was identified for strong investment in technology infrastructures and for the integration of new technologies into the market and its companies. This dissertation proposes a structure that fills the identified gaps, contributing to greater compatibility in the deployment of technologies in this area, and provides a solution that is ready to respond to the needs of innovation and growth of the Angolan market in the coming years.

    Impact of Location on Content Delivery

    Get PDF
    The increasing number of users and their demand for more and richer content have led to exponential growth of Internet traffic for more than 15 years. In addition, new applications and use cases have changed the nature of the traffic. For example, social networking enables users to publish their own content; this user-generated content is often published on popular sites such as YouTube, Twitter, and Facebook. Other examples are the interactive and multimedia offerings of content providers, e.g., Google Maps or IPTV services. With the introduction of peer-to-peer (P2P) protocols in 1998, an even more radical change emerged, because P2P protocols allow users to exchange large amounts of content directly: the peers transfer data without the need for an intermediary and often centralized server. However, as recent studies show, Internet traffic is again dominated by HTTP, mostly at the expense of P2P. This traffic growth increases the demands on the infrastructure components that form the Internet, e.g., servers and routers. Moreover, most of the traffic is generated by a few very popular services. The enormous demand for such popular content can no longer be satisfied by the traditional hosting model, in which content is located on a single server. Instead, content providers need to scale up their delivery infrastructure, e.g., by replicating it in large data centers or by buying service from content delivery infrastructures such as Akamai or Limelight. Moreover, content providers are not the only ones that have to cope with the demand: the network infrastructure also needs to be constantly upgraded to keep up with the growing demand for content. In this thesis we characterize the impact of content delivery on the network. We utilize data sets from both active and passive measurements, which allows us to cover a wide range of abstraction levels, from a detailed protocol-level view of several content delivery mechanisms to the high-level picture of identifying and mapping the content infrastructures that host the most popular content. We find that caching content is still hard and that the user's choice of DNS resolver has a profound impact on the server selection mechanism of content distribution infrastructures. We propose Web content cartography to infer how content distribution infrastructures are deployed and what roles different organizations play in the Internet. We conclude by putting our findings in the context of contemporary work and give recommendations on how to improve content delivery for all parties involved: users, Internet service providers, and content distribution infrastructures.
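    The finding that the choice of DNS resolver shapes CDN server selection can be probed with a very simple active measurement: resolve the same CDN-hosted name through several public resolvers and compare the returned A records. The sketch below uses dnspython; the target name and the resolver list are illustrative choices, not the measurement setup of the thesis.

```python
# Compare the A records returned for one name by different public resolvers,
# illustrating resolver-dependent CDN server selection. Requires dnspython
# (pip install dnspython); target and resolver list are illustrative.
import dns.resolver

TARGET = "www.example.com"  # replace with a CDN-hosted name of interest
RESOLVERS = {
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
}

for label, address in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [address]
    try:
        answers = resolver.resolve(TARGET, "A")
        ips = sorted(rr.address for rr in answers)
    except Exception as exc:  # timeouts, NXDOMAIN, etc.
        ips = [f"error: {exc}"]
    print(f"{label:11s} -> {', '.join(ips)}")
```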