23 research outputs found

    SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities

    Algorithmic complexity vulnerabilities occur when the worst-case time/space complexity of an application is significantly higher than the respective average case for particular user-controlled inputs. When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior. Such attacks have been known to have serious effects on production systems, take down entire websites, or lead to bypasses of Web Application Firewalls. Unfortunately, existing detection mechanisms for algorithmic complexity vulnerabilities are domain-specific and often require significant manual effort. In this paper, we design, implement, and evaluate SlowFuzz, a domain-independent framework for automatically finding algorithmic complexity vulnerabilities. SlowFuzz uses resource-usage-guided evolutionary search to automatically find inputs that trigger worst-case algorithmic behavior and maximize computational resource utilization in the tested binary. Comment: ACM CCS '17, October 30-November 3, 2017, Dallas, TX, US
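
    A minimal sketch of the resource-usage-guided evolutionary search idea, not the SlowFuzz tool itself (which instruments compiled binaries and counts executed instructions). Here the "resource" is the comparison count of an insertion sort, and all names and parameters (insertion_sort_cost, slow_input_search, the population and generation sizes) are illustrative choices:

    import random

    def insertion_sort_cost(data):
        # Count comparisons performed by insertion sort on `data`.
        a, cost = list(data), 0
        for i in range(1, len(a)):
            j = i
            while j > 0:
                cost += 1
                if a[j - 1] <= a[j]:
                    break
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
        return cost

    def mutate(seed):
        # Randomly overwrite one byte of the input.
        b = bytearray(seed)
        b[random.randrange(len(b))] = random.randrange(256)
        return bytes(b)

    def slow_input_search(length=32, population=16, generations=300):
        pool = [bytes(random.randrange(256) for _ in range(length))
                for _ in range(population)]
        for _ in range(generations):
            child = mutate(random.choice(pool))
            worst = min(pool, key=insertion_sort_cost)
            # Keep the mutant only if it increases resource usage (cost-guided).
            if insertion_sort_cost(child) > insertion_sort_cost(worst):
                pool[pool.index(worst)] = child
        return max(pool, key=insertion_sort_cost)

    print("cost of evolved slow input:", insertion_sort_cost(slow_input_search()))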

    Fast Deterministic Selection

    The Median of Medians (also known as BFPRT) algorithm, although a landmark theoretical achievement, is seldom used in practice because it and its variants are slower than simple approaches based on sampling. The main contribution of this paper is a fast linear-time deterministic selection algorithm, QuickselectAdaptive, based on a refined definition of MedianOfMedians. The algorithm's performance brings deterministic selection---along with its desirable properties of reproducible runs, predictable run times, and immunity to pathological inputs---into the range of practicality. We demonstrate results on independent and identically distributed random inputs and on normally-distributed inputs. Measurements show that QuickselectAdaptive is faster than state-of-the-art baselines. Comment: Pre-publication draft
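
    For context, a brief sketch of the textbook median-of-medians selection (groups of five) that QuickselectAdaptive refines; this is not the paper's algorithm, only the classic O(n) idea it builds on, and the function name select is illustrative:

    import random

    def select(a, k):
        # Return the k-th smallest element (0-indexed) of list a in worst-case O(n).
        if len(a) <= 5:
            return sorted(a)[k]
        # Median of each group of five, then recurse to choose a good pivot.
        medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
                   for i in range(0, len(a), 5)]
        pivot = select(medians, len(medians) // 2)
        lo = [x for x in a if x < pivot]
        hi = [x for x in a if x > pivot]
        eq = len(a) - len(lo) - len(hi)   # elements equal to the pivot
        if k < len(lo):
            return select(lo, k)
        if k < len(lo) + eq:
            return pivot
        return select(hi, k - len(lo) - eq)

    xs = list(range(101))
    random.shuffle(xs)
    print(select(xs, 50))   # prints 50, the median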

    Defending Hash Tables from Subterfuge with Depth Charge

    We consider the problem of defending a hash table against a Byzantine attacker that is trying to degrade the performance of query, insertion and deletion operations. Our defense makes use of resource burning (RB) -- the verifiable expenditure of network resources -- where the issuer of a request incurs some RB cost. Our algorithm, Depth Charge, charges RB costs for operations based on the depth of the appropriate object in the list that the object hashes to in the table. By appropriately setting the RB costs, our algorithm mitigates the impact of an attacker on the hash table's performance. In particular, in the presence of a significant attack, our algorithm incurs a cost which is asymptotically less than the attacker's cost.
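
    An illustrative sketch of the charging idea, assuming a simple chained hash table and a hypothetical linear cost schedule (unit_cost per list position); the paper's actual cost functions and analysis are not reproduced here:

    class RBChainedHashTable:
        # Chained hash table whose operations return a hypothetical RB charge
        # proportional to the depth reached in the bucket's list.
        def __init__(self, buckets=64, unit_cost=1):
            self.buckets = [[] for _ in range(buckets)]
            self.unit_cost = unit_cost

        def _bucket(self, key):
            return self.buckets[hash(key) % len(self.buckets)]

        def insert(self, key, value):
            chain = self._bucket(key)
            chain.append((key, value))
            # Deeper insertions burn more resources, which deters flooding.
            return self.unit_cost * len(chain)

        def query(self, key):
            chain = self._bucket(key)
            for depth, (k, v) in enumerate(chain, start=1):
                if k == key:
                    return v, self.unit_cost * depth
            return None, self.unit_cost * len(chain)

    t = RBChainedHashTable()
    print(t.insert("a", 1))   # RB cost of the insertion
    print(t.query("a"))       # (value, RB cost of the lookup)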

    Search-based energy optimization of some ubiquitous algorithms

    Reducing computational energy consumption is of growing importance, particularly at the extremes (i.e. mobile devices and datacentres). Despite the ubiquity of the Java™ Virtual Machine (JVM), very little work has been done to apply Search Based Software Engineering (SBSE) to minimize the energy consumption of programs that run on it. We describe OPACITOR, a tool for measuring the energy consumption of JVM programs using a bytecode-level model of energy cost. This has several advantages over time-based energy approximations or hardware measurements: it is deterministic, it is unaffected by the rest of the computational environment, and it can detect small changes in execution profile, making it highly amenable to metaheuristic search, which requires locality of representation. We show how generic SBSE approaches coupled with OPACITOR achieve substantial energy savings for three widely-used software components. Multi-Layer Perceptron implementations minimising both energy and error were found, and energy reductions of up to 70% and 39.85% were obtained over the original code for Quicksort and Object-Oriented container classes respectively. These highlight three important considerations for automatically reducing computational energy: tuning software to particular distributions of data; trading off energy use against functional properties; and handling internal dependencies within software that render simple sweeps over program variants sub-optimal. Against these, global search greatly simplifies the developer's job, freeing development time for other tasks.
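
    A rough sketch of the bytecode-level energy model idea: estimated energy is a sum of per-opcode costs over an executed-instruction trace. The opcode costs and names below are made-up placeholders, not OPACITOR's calibrated JVM model, and a real trace would come from instrumentation:

    from collections import Counter

    # nJ per executed bytecode: illustrative placeholders, not calibrated values.
    HYPOTHETICAL_OPCODE_COST = {
        "iload": 1.0, "istore": 1.1, "iadd": 1.2, "if_icmpge": 1.5, "goto": 0.9,
    }

    def estimate_energy(trace):
        # Deterministic estimate: sum per-opcode cost over the executed trace,
        # so identical traces always yield identical energy figures.
        counts = Counter(trace)
        return sum(HYPOTHETICAL_OPCODE_COST.get(op, 1.0) * n
                   for op, n in counts.items())

    # Behaviourally equivalent program variants can be compared purely by their
    # traces, which is what makes the measure amenable to metaheuristic search.
    loop_trace = ["iload", "if_icmpge", "iload", "iadd", "istore", "goto"] * 1000
    print(estimate_energy(loop_trace))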

    (Un)Safe Browsing

    Users often accidentally or inadvertently click malicious phishing or malware website links, and in doing so they sacrifice secret information and sometimes even fully compromise their devices. These URLs are intelligently scripted to remain inconspicuous over the Internet. In light of the ever-increasing number of such URLs, new ingenious strategies have been invented to detect them and inform the end user when he is tempted to access such a link. The Safe Browsing technique provides an exemplary service to identify unsafe websites and notify users and webmasters, allowing them to protect themselves from harm. In this work, we show how to turn the Google Safe Browsing service against itself and its users. We propose several Distributed Denial-of-Service attacks that simultaneously affect both the Google Safe Browsing server and the end user. Our attacks leverage the false positive probability of the data structures used for malicious URL detection. This probability exists because a trade-off was made between Google's server load and the client's memory consumption. Our attack is based on the forgery of malicious URLs to increase the false positive probability. Finally, we show how a Bloom filter combined with universal hash functions and prefix lengthening can fix the problem.
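
    A minimal Bloom filter sketch illustrating the false-positive trade-off the attacks exploit; Google Safe Browsing's actual client-side data structures are not reproduced, and the sizes and hash construction here are illustrative only:

    import hashlib

    class BloomFilter:
        def __init__(self, m=1 << 16, k=4):
            self.m, self.k = m, k          # m bits, k hash functions
            self.bits = bytearray(m // 8)

        def _positions(self, item):
            # Derive k bit positions by hashing the item with k different salts.
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item):
            # True may be a false positive; False is always correct.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add("http://malicious.example/")
    print(bf.might_contain("http://malicious.example/"))   # True
    print(bf.might_contain("http://benign.example/"))      # False with high probability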

    Hyper-quicksort: energy efficient sorting via the Templar framework for Template Method Hyper-heuristics

    Scalability remains an issue for program synthesis:
    - We don't yet know how to generate sizeable algorithms from scratch.
    - Generative approaches such as GP still work best at the scale of expressions (though there are some recent promising results).
    - Formal approaches require a strong mathematical background.
    - ... but human ingenuity already provides a vast repertoire of specialized algorithms, usually with known asymptotic behaviour.
    Given these limitations, how can we best use generative hyper-heuristics to improve upon human-designed algorithms?
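
    A small sketch of the Template Method idea behind such hyper-heuristics: keep the human-designed quicksort skeleton fixed and expose a variation point (pivot selection) that a search procedure can fill with generated alternatives. The Templar framework's actual API is not shown; all names here are illustrative:

    import random

    def quicksort(a, choose_pivot):
        # Fixed human-designed skeleton; `choose_pivot` is the searchable variation point.
        if len(a) <= 1:
            return a
        pivot = choose_pivot(a)
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less, choose_pivot) + equal + quicksort(greater, choose_pivot)

    # Candidate operators a hyper-heuristic might evaluate for energy or run time.
    pivot_strategies = {
        "first": lambda a: a[0],
        "random": lambda a: random.choice(a),
        "median3": lambda a: sorted([a[0], a[len(a) // 2], a[-1]])[1],
    }

    data = random.sample(range(1000), 200)
    for name, strategy in pivot_strategies.items():
        assert quicksort(data, strategy) == sorted(data)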