114 research outputs found

    Effective Bug Finding

    Get PDF

    Security and Privacy for IoT Ecosystems

    Get PDF
    Smart devices have become an integral part of our everyday life. In contrast to smartphones and laptops, Internet of Things (IoT) devices are typically managed by the vendor. They allow little or no user-driven customization. Users need to use and trust IoT devices as they are, including the ecosystems involved in the processing and sharing of personal data. Ensuring that an IoT device does not leak private data is imperative. This thesis analyzes security practices in popular IoT ecosystems across several price segments. Our results show a gap between real-world implementations and state-of-the-art security measures. The process of responsible disclosure with the vendors revealed further practical challenges. Do they want to support backward compatibility with the same app and infrastructure over multiple IoT device generations? To what extent can they trust their supply chains in rolling out keys? Mature vendors have a budget for security and are aware of its demands. Despite this goodwill, developers sometimes fail at securing the concrete implementations in those complex ecosystems. Our analysis of real-world products reveals the actual efforts made by vendors to secure their products. Our responsible disclosure processes and publications of design recommendations not only increase security in existing products but also help connected ecosystem manufacturers to develop secure products. Moreover, we enable users to take control of their connected devices with firmware binary patching. If a vendor decides to no longer offer cloud services, bootstrapping a vendor-independent ecosystem is the only way to revive bricked devices. Binary patching is not only useful in the IoT context but also opens up these devices as research platforms. We are the first to publish tools for Bluetooth firmware and lower-layer analysis, and we uncover a security issue in Broadcom chips affecting hundreds of millions of devices manufactured by Apple, Samsung, Google, and more. Although we informed Broadcom and customers of their technologies of the weaknesses identified, some of these devices no longer receive official updates. For these, our binary patching framework is capable of building vendor-independent patches and retrofitting security. Connected device vendors depend on standards; they rarely implement lower-layer communication schemes from scratch. Standards enable communication between devices of different vendors, which is crucial in many IoT setups. Secure standards help make products secure by design and, thus, need to be analyzed as early as possible. One possibility to integrate security into a lower-layer standard is Physical-Layer Security (PLS). PLS establishes security on the Physical Layer (PHY) of wireless transmissions. With new wireless technologies emerging, physical properties change. We analyze how suitable PLS techniques are in the domain of mmWave and Visible Light Communication (VLC). Despite VLC being commonly believed to be very secure due to its limited range, we show that applying PLS to VLC is less secure than applying it to Radio Frequency (RF) communication. The work in this thesis is applied to mature products as well as upcoming standards. We consider security for the whole product life cycle to make connected devices and IoT ecosystems more secure in the long term.

    The Construction of a Static Source Code Scanner Focused on SQL Injection Vulnerabilities in Java

    Get PDF
    SQL injection attacks are a significant threat to web application security, allowing attackers to execute arbitrary SQL commands and gain unauthorized access to sensitive data. Static source code analysis is a widely used technique to identify security vulnerabilities in software, including SQL injection attacks. However, existing static source code scanners often produce false positives and require a high level of expertise to use effectively. This thesis presents the design and implementation of a static source code scanner for SQL injection vulnerabilities in Java queries. The scanner uses a combination of pattern matching and data flow analysis to detect SQL injection vulnerabilities in code. The scanner identifies vulnerable code by analyzing method calls, expressions, and variable declarations to detect potential vulnerabilities. To evaluate the scanner, malicious SQL code is manually injected into queries to test the scanner's ability to detect vulnerabilities. The results showed that the scanner could identify a high percentage of SQL injection vulnerabilities. The limitations of the scanner include the inability to detect runtime user input validation and the reliance on predefined patterns and heuristics to identify vulnerabilities. Despite these limitations, the scanner provides a useful tool for junior developers to identify and address SQL injection vulnerabilities in their code. This thesis presents a static source code scanner that can effectively detect SQL injection vulnerabilities in Java web applications. The scanner's design and implementation provide a useful contribution to the field of software security, and future work could focus on improving the scanner's precision and addressing its limitations.
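    The pattern-matching half of such a scanner can be sketched in a few lines. The snippet below is an illustrative toy, not the thesis's scanner: it flags Java statements that build a SQL query by concatenating a string literal with a variable inside an `execute*` call (a classic injection pattern), while parameterized `PreparedStatement` usage does not match. The regex, the sample Java code, and the function names are all hypothetical; a real scanner would add data flow analysis on top of this.

    ```python
    import re

    # Illustrative pattern: an execute call whose argument is a string literal
    # concatenated with an identifier, e.g. executeQuery("... " + userInput).
    CONCAT_SQL = re.compile(
        r'(executeQuery|executeUpdate|execute)\s*\(\s*".*"\s*\+\s*\w+',
        re.IGNORECASE,
    )

    def scan_java_source(source: str) -> list[tuple[int, str]]:
        """Return (line_number, line) pairs matching the concatenation pattern."""
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            if CONCAT_SQL.search(line):
                findings.append((lineno, line.strip()))
        return findings

    # Hypothetical Java input: one concatenated query, one parameterized query.
    java_snippet = '''
    String q = "SELECT * FROM users WHERE name = '" + name + "'";
    stmt.executeQuery("SELECT * FROM users WHERE id = " + userId);
    PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE id = ?");
    '''
    print(scan_java_source(java_snippet))
    ```

    Note that line-by-line regex matching is exactly why such scanners produce false negatives when the concatenation happens on a different line than the execute call; this is the gap that the data flow analysis component is meant to close.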

    Doctor of Philosophy

    Get PDF
    Successful molecular diagnosis using an exome sequence hinges on accurate association of damaging variants to the patient's phenotype. Unfortunately, many clinical scenarios (e.g., single affected or small nuclear families) have little power to confidently identify damaging alleles using sequence data alone. Today's diagnostic tools are simply underpowered for accurate diagnosis in these situations, limiting successful diagnoses. In response, clinical genetics relies on candidate-gene and variant lists to limit the search space. Despite their practical utility, these lists suffer from inherent and significant limitations. The impact of false negatives on diagnostic accuracy is considerable because candidate-gene and variant lists are assembled ad hoc, choosing alleles based upon expert knowledge. Alleles not in the list are not considered, ending hope for novel discoveries. Rational alternatives to ad hoc assemblages of candidate lists are thus badly needed. In response, I created Phevor, the Phenotype Driven Variant Ontological Re-ranking tool. Phevor works by combining knowledge resident in biomedical ontologies, like the human phenotype and gene ontologies, with the outputs of variant-interpretation tools such as SIFT, GERP+, Annovar and VAAST. Phevor can then accurately prioritize candidates identified by third-party variant-interpretation tools in light of knowledge found in the ontologies, effectively bypassing the need for candidate-gene and variant lists. Phevor differs from tools such as Phenomizer and Exomiser, as it does not postulate a set of fixed associations between genes and phenotypes. Rather, Phevor dynamically integrates knowledge resident in multiple bio-ontologies into the prioritization process. This enables Phevor to improve diagnostic accuracy for established diseases and previously undescribed or atypical phenotypes. Phevor was benchmarked by inserting known disease alleles into otherwise healthy exomes. Using the phenotype of the known disease and the variant interpretation tool VAAST (Variant Annotation, Analysis and Search Tool), Phevor can rank 100% of the known alleles in the top 10 and 80% as the top candidate. Phevor is currently part of the pipeline used to diagnose cases as part of the Utah Genome Project. Successful diagnoses of several phenotypes have proven Phevor to be a reliable diagnostic tool that can improve the analysis of any disease-gene search.
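    The re-ranking idea described above can be sketched as combining two score sources per gene: a damage score from a variant-interpretation tool and a phenotype-relevance score derived from ontology annotations. The sketch below is purely illustrative — the gene names, scores, default prior, and multiplicative combination rule are assumptions for demonstration, not Phevor's actual model (which propagates knowledge over multiple bio-ontologies).

    ```python
    # Toy ontology-informed re-ranking: multiply each gene's variant-tool score
    # (e.g., VAAST-like output) by a phenotype-relevance score. Genes without
    # ontology support get a small prior so they are down-weighted, not erased.
    def rerank(variant_scores: dict[str, float],
               phenotype_relevance: dict[str, float]) -> list[tuple[str, float]]:
        combined = {
            gene: score * phenotype_relevance.get(gene, 0.05)
            for gene, score in variant_scores.items()
        }
        return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

    variant_scores = {"GENE_A": 0.9, "GENE_B": 0.8, "GENE_C": 0.4}   # hypothetical tool output
    phenotype_relevance = {"GENE_B": 0.9, "GENE_C": 0.7}             # hypothetical ontology scores
    print(rerank(variant_scores, phenotype_relevance))
    ```

    Here the top-scoring variant by sequence evidence alone (GENE_A) drops below genes with phenotype support, which is the effect re-ranking is meant to achieve.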

    Expert Finding in Disparate Environments

    Get PDF
    Providing knowledge workers with access to experts and communities-of-practice is central to expertise sharing, and crucial to effective organizational performance, adaptation, and even survival. However, in complex work environments, it is difficult to know who knows what across heterogeneous groups, disparate locations, and asynchronous work. As such, where expert finding has traditionally been a manual operation, there is increasing interest in policy and technical infrastructure that makes work visible and supports automated tools for locating expertise. Expert finding is a multidisciplinary problem that cross-cuts knowledge management, organizational analysis, and information retrieval. Recently, a number of expert finders have emerged; however, many tools are limited in that they are extensions of traditional information retrieval systems and exploit artifact information primarily. This thesis explores a new class of expert finders that use organizational context as a basis for assessing expertise and for conferring trust in the system. The hypothesis here is that expertise can be inferred through assessments of work behavior and work derivatives (e.g., artifacts). The Expert Locator, developed within a live organizational environment, is a model-based prototype that exploits organizational work context. The system associates expertise ratings with experts’ signaling behavior and is extensible so that signaling behavior from multiple activity space contexts can be fused into aggregate retrieval scores. Post-retrieval analysis supports evidence review and personal network browsing, aiding users in both detection and selection. During operational evaluation, the prototype generated high-precision searches across a range of topics, and was sensitive to organizational role, ranking true experts (i.e., authorities) higher than brokers providing referrals. Precision increased with the number of activity spaces used in the model, but varied across queries. The highest performing queries are characterized by high-specificity terms and low organizational diffusion amongst retrieved experts; essentially, the highest rated experts are situated within organizational niches.
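    The fusion of signaling behavior from multiple activity spaces into an aggregate retrieval score can be sketched as a weighted sum per person. The snippet below is an assumption-laden illustration — the activity-space names, per-space scores, weights, and the linear combination are hypothetical, not the Expert Locator's actual model.

    ```python
    # Toy evidence fusion: each activity space (e.g., document repository,
    # mailing list) contributes a per-person signaling score; spaces are
    # weighted and summed into an aggregate expertise score, then ranked.
    def aggregate_expertise(evidence: dict[str, dict[str, float]],
                            weights: dict[str, float]) -> list[tuple[str, float]]:
        totals: dict[str, float] = {}
        for space, person_scores in evidence.items():
            w = weights.get(space, 1.0)  # unweighted spaces count fully
            for person, score in person_scores.items():
                totals[person] = totals.get(person, 0.0) + w * score
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    evidence = {
        "documents":    {"alice": 0.8, "bob": 0.3},   # hypothetical scores
        "mailing_list": {"alice": 0.2, "carol": 0.9},
    }
    weights = {"documents": 0.6, "mailing_list": 0.4}
    print(aggregate_expertise(evidence, weights))
    ```

    Adding activity spaces gives each person more chances to accumulate evidence, which is consistent with the observation above that precision increased with the number of activity spaces in the model.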

    Recent development of antiSMASH and other computational approaches to mine secondary metabolite biosynthetic gene clusters

    Get PDF
    Many drugs are derived from small molecules produced by microorganisms and plants, so-called natural products. Natural products have diverse chemical structures, but the biosynthetic pathways producing those compounds are often organized as biosynthetic gene clusters (BGCs) and follow a highly conserved biosynthetic logic. This allows for the identification of core biosynthetic enzymes using genome mining strategies that are based on the sequence similarity of the involved enzymes/genes. However, mining for a variety of BGCs quickly approaches a complexity level where manual analyses are no longer possible and require the use of automated genome mining pipelines, such as the antiSMASH software. In this review, we discuss the principles underlying the predictions of antiSMASH and other tools and provide practical advice for their application. Furthermore, we discuss important caveats such as rule-based BGC detection, sequence and annotation quality, and cluster boundary prediction, which all have to be considered when planning, performing, and analyzing the results of genome mining studies.
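    Rule-based BGC detection of the kind mentioned above can be pictured as rules that fire when required protein domains co-occur within a window of neighboring genes. The sketch below is a toy illustration only: the domain labels, the single rule, and the gene-count window are assumptions for demonstration and do not reflect antiSMASH's actual rule set or HMM-based domain detection.

    ```python
    # Toy rule-based cluster detection: a rule fires when all required domains
    # co-occur within `window` consecutive genes; returns gene-index ranges.
    def find_clusters(genes: list[tuple[str, set[str]]],
                      required: set[str], window: int = 5) -> list[tuple[int, int]]:
        hits = []
        for i in range(len(genes)):
            seen: set[str] = set()
            for j in range(i, min(i + window, len(genes))):
                seen |= genes[j][1]          # accumulate domains in the window
                if required <= seen:         # rule satisfied: record the range
                    hits.append((i, j))
                    break
        return hits

    # Hypothetical gene list with NRPS-style domain annotations.
    genes = [
        ("geneA", {"Condensation"}),
        ("geneB", {"AMP-binding"}),
        ("geneC", {"PP-binding"}),
        ("geneD", set()),
    ]
    print(find_clusters(genes, {"Condensation", "AMP-binding", "PP-binding"}))
    ```

    The window parameter is one place where the cluster-boundary caveat discussed above shows up: too small a window fragments clusters, too large a window merges neighbors.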