
    Information Leakage Attacks and Countermeasures

    The scientific community has been consistently working on the pervasive problem of information leakage, uncovering numerous attack vectors and proposing various countermeasures. Despite these efforts, leakage incidents remain prevalent as the complexity of systems and protocols increases and sophisticated modeling methods become more accessible to adversaries. This work studies how information leakages manifest in and impact interconnected systems and their users. We first focus on online communications and investigate leakages in the Transport Layer Security (TLS) protocol. Using modern machine learning models, we show that an eavesdropping adversary can efficiently exploit meta-information (e.g., packet size) not protected by TLS encryption to launch fingerprinting attacks at an unprecedented scale, even under non-optimal conditions. We then turn our attention to ultrasonic communications and discuss their security shortcomings and how adversaries could exploit them to compromise users of anonymity networks (even though these networks aim to offer a greater level of privacy than TLS). Building on these findings, we delve into physical-layer leakages that concern a wide array of (networked) systems such as servers, embedded nodes, Tor relays, and hardware cryptocurrency wallets. We revisit location-based side-channel attacks and develop an exploitation neural network. Our model demonstrates the capabilities of a modern adversary, but also provides an inexpensive tool that auditors can use to detect such leakages early in the development cycle. Subsequently, we investigate techniques that further minimize the impact of leakages found in production components. Our proposed system design distributes both the custody of secrets and the execution of cryptographic operations across several components, thus making the exploitation of leaks difficult.
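    The abstract does not disclose the exact models or features used; as a rough illustration of the general technique (fingerprinting encrypted traffic from metadata that TLS does not hide, such as packet sizes and directions), the following Python sketch trains a toy classifier on hypothetical traces. The signed-packet-size trace format, histogram features, and random-forest model are assumptions made here for illustration, not the thesis's method.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def featurize(trace, n_bins=32, max_size=1500):
        """Map a sequence of signed packet sizes (+ = outgoing, - = incoming)
        to a fixed-length histogram feature vector."""
        outgoing = [s for s in trace if s > 0]
        incoming = [-s for s in trace if s < 0]
        h_out, _ = np.histogram(outgoing, bins=n_bins, range=(0, max_size))
        h_in, _ = np.histogram(incoming, bins=n_bins, range=(0, max_size))
        return np.concatenate([h_out, h_in, [len(trace)]])

    def train_fingerprinter(traces, labels):
        """traces: list of signed-packet-size sequences; labels: visited-site IDs."""
        X = np.array([featurize(t) for t in traces])
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))
        return clf

    Even such a simple feature set can separate a handful of sites, which is the intuition behind fingerprinting attacks on encrypted channels; the thesis's claim is that modern models push this well beyond such toy settings.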

    On the evolution of digital evidence: novel approaches for cyber investigation

    Nowadays, the Internet is the fulcrum of our world, and the World Wide Web is the key to access it. We develop relationships on social networks and entrust sensitive documents to online services. Desktop applications are being replaced by fully-fledged web applications that can be accessed from any device. This is possible thanks to new web technologies that are being introduced at a very fast pace. However, these advances come at a price. Today, the web is the principal means used by cyber-criminals to perform attacks against people and organizations. In a context where information is extremely dynamic and volatile, the fight against cyber-crime is becoming more and more difficult. This work is divided into two main parts, both aimed at fueling research against cybercrime. The first part takes a forensic perspective and exposes serious limitations of current investigation approaches when dealing with modern digital information. In particular, it shows how it is possible to leverage common Internet services in order to forge digital evidence, which can be exploited by a cyber-criminal to claim an alibi. Subsequently, a novel technique to track cyber-criminal activities on the Internet is proposed, aimed at the acquisition and analysis of information from highly dynamic services such as online social networks. The second part is concerned with the investigation of criminal activities on the web. Aiming at raising awareness of upcoming threats, novel techniques for the obfuscation of web-based attacks are presented. These attacks leverage the same cutting-edge technology used nowadays to build pleasant and fully-featured web applications. Finally, a comprehensive study of today's top menaces on the web, namely exploit kits, is presented. The result of this study has been the design of new techniques and tools that can be employed by modern honeyclients to better identify and analyze these menaces in the wild.

    Using Context to Improve Network-based Exploit Kit Detection

    Today, our computers are routinely compromised while performing seemingly innocuous activities like reading articles on trusted websites (e.g., the NY Times). These compromises are perpetrated via complex interactions involving the advertising networks that monetize these sites. Web-based compromises such as exploit kits are similar to any other scam -- the attacker wants to lure an unsuspecting client into a trap to steal private information or resources -- generating tens of millions of dollars annually. Exploit kits are web-based services specifically designed to capitalize on vulnerabilities in unsuspecting client computers in order to install malware without a user's knowledge. Sadly, it only takes a single successful infection to ruin a user's financial life, or to lead to corporate breaches that result in millions of dollars of expense and loss of customer trust. Exploit kits use a myriad of techniques to obfuscate each attack instance, making current network-based defenses such as signature-based network intrusion detection systems far less effective than in years past. Dynamic analysis, or honeyclient analysis, of these exploits plays a key role in identifying new attacks for signature generation, but provides no means of inspecting end-user traffic on the network to identify attacks in real time. As a result, defenses designed to stop such malfeasance often arrive too late, or not at all, resulting in high false positive and false negative (error) rates. To deal with these drawbacks, three new detection approaches are presented. To address the issue of high error rates, a new technique for detecting exploit kit interactions on a network is proposed. The technique capitalizes on the fact that an exploit kit leads its potential victim through a process of exploitation by forcing the browser to download multiple web resources from malicious servers. This process has an inherent structure that can be captured in HTTP traffic and used to significantly reduce error rates. The approach organizes HTTP traffic into tree-like data structures and, using a scalable index of exploit kit traces as samples, models the detection process as a subtree similarity search problem. The technique is evaluated on 3,800 hours of web traffic on a large enterprise network, and results show that it reduces false positive rates by four orders of magnitude over current state-of-the-art approaches. While utilizing structure can vastly improve detection rates over current approaches, it does not go far enough in helping defenders detect new, previously unseen attacks. As a result, a new framework that applies dynamic honeyclient analysis directly to network traffic at scale is proposed. The framework captures and stores a configurable window of reassembled HTTP objects network-wide, uses lightweight content rendering to establish the chain of requests leading up to a suspicious event, then serves the initial response content back to the honeyclient in an isolated network. The framework is evaluated on a diverse collection of exploit kits as they evolve over a one-year period. The empirical evaluation suggests that the approach offers significant operational value, and that a single honeyclient can support a campus deployment of thousands of users. While the above approaches attempt to detect exploit kits before they have a chance to infect the client, they cannot protect a client that has already been infected. The final technique detects signs of post-infection behavior by intrusions that abuse the domain name system (DNS) to make contact with an attacker. Contemporary detection approaches utilize the structure of a domain name and require hundreds of DNS messages to detect such malware. As a result, these detection mechanisms cannot detect malware in a timely manner and are susceptible to high error rates. The final technique, based on sequential hypothesis testing, uses the DNS message patterns of a subset of DNS traffic to detect malware in as little as four DNS messages, and with orders of magnitude reduction in error rates. The results of this work can make a significant operational impact on network security analysis, and open several exciting future directions for network security research.
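    The abstract does not spell out its sequential hypothesis test, but the underlying idea (Wald's sequential probability ratio test accumulating evidence per DNS message until an error-bounded decision threshold is crossed) can be sketched as follows. The per-message probabilities and the notion of a "suspicious" message are placeholder assumptions, not values from the dissertation.

    import math

    # Assumed per-message model (illustrative values only):
    P_SUS_MALWARE = 0.8   # P(message looks suspicious | host is infected)
    P_SUS_BENIGN  = 0.1   # P(message looks suspicious | host is benign)
    ALPHA, BETA = 0.001, 0.001            # target false positive / false negative rates

    UPPER = math.log((1 - BETA) / ALPHA)  # crossing this accepts "infected"
    LOWER = math.log(BETA / (1 - ALPHA))  # crossing this accepts "benign"

    def sprt(observations):
        """observations: iterable of booleans, True if a DNS message looks suspicious.
        Returns a decision and the number of messages needed to reach it."""
        llr = 0.0
        for i, suspicious in enumerate(observations, start=1):
            if suspicious:
                llr += math.log(P_SUS_MALWARE / P_SUS_BENIGN)
            else:
                llr += math.log((1 - P_SUS_MALWARE) / (1 - P_SUS_BENIGN))
            if llr >= UPPER:
                return "infected", i
            if llr <= LOWER:
                return "benign", i
        return "undecided", len(observations)

    print(sprt([True, True, True, True]))

    With these toy parameters the test flags an infected host after four consecutive suspicious messages, which mirrors the "as little as four DNS messages" figure in the abstract, although the dissertation's actual per-message features and probabilities will differ.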

    Cyber Law and Espionage Law as Communicating Vessels

    Professor Lubin's contribution is Cyber Law and Espionage Law as Communicating Vessels, pp. 203-225. Existing legal literature would have us assume that espionage operations and “below-the-threshold” cyber operations are doctrinally distinct. Whereas one is subject to the scant, amorphous, and under-developed legal framework of espionage law, the other is subject to an emerging, ever-evolving body of legal rules known cumulatively as cyber law. This dichotomy, however, is erroneous and misleading. In practice, espionage and cyber law function as communicating vessels, and so are better conceived as two elements of a complex system, Information Warfare (IW). This paper therefore first draws attention to the similarities between the practices – the fact that the actors, technologies, and targets are interchangeable, as are the knee-jerk legal reactions of the international community. In light of the convergence between peacetime Low-Intensity Cyber Operations (LICOs) and peacetime Espionage Operations (EOs), the two should be subjected to a single regulatory framework, one which recognizes the role intelligence plays in our public world order and which adopts a contextual and consequential method of inquiry. The paper proceeds in the following order: Part 2 provides a descriptive account of the unique symbiotic relationship between espionage and cyber law, and further explains the reasons for this dynamic. Part 3 places the discussion surrounding this relationship within the broader discourse on IW, making the claim that the convergence between EOs and LICOs, as described in Part 2, could further be explained by an even larger convergence across all the various elements of the informational environment. Parts 2 and 3 then serve as the backdrop for Part 4, which details the attempt of the drafters of the Tallinn Manual 2.0 to compartmentalize espionage law and cyber law, and the deficits of their approach. The paper concludes by proposing an alternative holistic understanding of espionage law, grounded in general principles of law, which is more practically transferable to the cyber realm.

    Information flows at OS level unmask sophisticated Android malware

    The detection of new Android malware is far from trivial. Each day, new Android malware appears on the market, and it remains difficult to identify it quickly. Unfortunately, users still pay the price for the lack of truly efficient tools able to detect zero-day malware that has no known signature. The difficulty is that most existing approaches rely on static analysis, which is undermined by the ability of malware to hide its malicious code. Thus, we believe that it should be easier to study what malware does instead of what it contains. In this article, we propose to unmask Android malware hidden among benign applications using the information flows observed at the OS level. To achieve this goal, we introduce a simple characterization of all the accountable information flows of a standard benign application. With such a model for benign apps, we conducted experiments showing that malware deviates from the expected normal behavior. The experiments show that our model recognizes most of the 3,206 tested benign applications and spots most of the tested sophisticated malware (ransomware, rootkits, bootkits).
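    The exact flow characterization is not given in this abstract. As a hedged illustration of the general idea, the sketch below represents each app as a set of hypothetical (source, sink) flow pairs observed at the OS level, builds a frequency model from benign apps, and scores an app by how many of its flows are rare or unseen among benign ones. The representation and thresholds are assumptions for illustration, not the paper's model.

    from collections import Counter

    def build_benign_model(benign_apps):
        """benign_apps: iterable of sets of (source, sink) flow pairs observed at the OS level."""
        counts = Counter()
        for flows in benign_apps:
            counts.update(flows)
        return counts

    def deviation_score(flows, model, n_benign_apps):
        """Fraction of an app's flows that appear in fewer than 1% of benign apps."""
        rare = sum(1 for f in flows if model[f] / n_benign_apps < 0.01)
        return rare / max(len(flows), 1)

    def looks_like_malware(flows, model, n_benign_apps, threshold=0.3):
        """Flag the app if too many of its flows deviate from the benign model."""
        return deviation_score(flows, model, n_benign_apps) > threshold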

    Secret texts and cipherballots: secret suffrage and remote electronic voting

    One of the key concerns about remote electronic voting is how to preserve secret suffrage. The list of authors who claim that Internet voting is incompatible with the secrecy of the vote is actually quite long. Even if later studies that analysed the actual implementation of remote electronic voting in public political elections had more nuanced findings, concerns about secret suffrage and remote electronic voting remain. Addressing these concerns becomes an inescapable obligation. In this context, our research is quite novel. First and foremost, our starting point is not based on pre-existing legal definitions that are accepted as given. Drawing on the universalist approach to comparative constitutional law, we have understood that the principle of secret suffrage transcends the culture-bound opinions and conventions of particular political communities. This core understanding has been translated into three standards: individuality, confidentiality, and anonymity. These standards should apply to any voting channel. Second, we have taken a wider approach to the enforcement of this principle. We have shown that secret suffrage may be enforced through law, code, norms, and even the market. Current regulations tend to be constrained because they resort to analogies with paper-based voting channels and fail to acknowledge the specificities of remote electronic voting. In contrast, we have examined the role played by (and the limitations of) asymmetric encryption, anonymization based on mix-nets or homomorphic tallying, and multiple voting to enforce secret suffrage.
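    One of the safeguards mentioned above is homomorphic tallying: ballots are encrypted individually and only their aggregate is ever decrypted. As a hedged illustration (not the scheme examined in the thesis), the Python sketch below uses a textbook additively homomorphic Paillier cryptosystem with deliberately tiny, insecure parameters to show how encrypted yes/no ballots can be multiplied together and only the sum decrypted, without revealing any individual vote.

    # Requires Python 3.9+ (math.lcm, modular inverse via pow). Toy parameters only.
    import math
    import secrets

    def keygen(p=10007, q=10009):      # toy primes; real deployments use ~1024-bit primes
        n = p * q
        lam = math.lcm(p - 1, q - 1)
        mu = pow(lam, -1, n)           # valid because the generator is g = n + 1
        return (n,), (n, lam, mu)      # (public key, private key)

    def encrypt(pk, m):
        (n,) = pk
        n2 = n * n
        r = secrets.randbelow(n - 2) + 2            # random blinding factor (gcd check omitted)
        return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

    def decrypt(sk, c):
        n, lam, mu = sk
        n2 = n * n
        l = (pow(c, lam, n2) - 1) // n              # L(x) = (x - 1) / n
        return (l * mu) % n

    pk, sk = keygen()
    ballots = [1, 0, 1, 1, 0]                       # 1 = "yes", 0 = "no"
    ciphertexts = [encrypt(pk, b) for b in ballots]

    n2 = pk[0] * pk[0]
    tally_ct = 1
    for c in ciphertexts:                            # multiplying ciphertexts adds the plaintexts
        tally_ct = (tally_ct * c) % n2

    print("decrypted tally:", decrypt(sk, tally_ct))  # prints 3; individual ballots stay encrypted

    This illustrates why homomorphic tallying supports secret suffrage: the tallier learns only the total, so individual ballots remain confidential, provided the private key is properly guarded (in real systems it is typically threshold-shared among several trustees).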

    Constructing and restraining the societies of surveillance: Accountability, from the rise of intelligence services to the expansion of personal data networks in Spain and Brazil (1975-2020)

    The objective of this study is to examine the development of socio-technical accountability mechanisms in order to: a) preserve and increase the autonomy of individuals subjected to surveillance and b) redress the asymmetry of power between those who watch and those who are watched. To do so, we address two surveillance realms: intelligence services and personal data networks. The cases studied are Spain and Brazil, from the beginning of the political transitions in the 1970s (in the realm of intelligence), and from the expansion of Internet digital networks in the 1990s (in the realm of personal data), to the present time. The examination of accountability thus comprises a holistic evolution of institutions, regulations, and market strategies, as well as resistance tactics. The conclusion summarizes the accountability mechanisms and proposes universal principles to improve the legitimacy of authority in surveillance and in politics in a broad sense.

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations by researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS provides a platform for innovators to identify new opportunities for the social and economic benefit of society. The conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies, and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad categories: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.