
    InternalBlue - Bluetooth Binary Patching and Experimentation Framework

    Bluetooth is one of the most established technologies for short-range digital wireless data transmission. With the advent of wearables and the Internet of Things (IoT), Bluetooth has again gained importance, which makes security research and protocol optimizations imperative. Surprisingly, there is a lack of openly available tools and experimental platforms for scrutinizing Bluetooth. In particular, system aspects and protocol layers close to the hardware are mostly uncovered. We reverse engineer multiple Broadcom Bluetooth chipsets that are widespread in off-the-shelf devices, offering deep insights into the internal architecture of a popular commercial family of Bluetooth controllers used in smartphones, wearables, and IoT platforms. Reverse-engineered functions can then be altered with our InternalBlue Python framework, outperforming evaluation kits, which are limited to documented, vendor-defined functions. The modified Bluetooth stack remains fully functional and high-performance, providing a portable, low-cost research platform. InternalBlue is a versatile framework, and we demonstrate its abilities by implementing tests and demos for known Bluetooth vulnerabilities. Moreover, we discover a novel critical security issue affecting a large selection of Broadcom chipsets that allows executing code within the attacked Bluetooth firmware. We further show how to use our framework to fix bugs in chipsets out of vendor support and how to add new security features to Bluetooth firmware.
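    As a rough illustration of the kind of low-level access the framework builds on, the sketch below sends Broadcom's vendor-specific Write_RAM HCI command (as described in the paper) from Python over a raw Linux HCI socket. This is not InternalBlue's actual API; the target address and payload are invented, and root privileges plus a Broadcom controller on hci0 are assumed.

        import socket
        import struct

        # Broadcom vendor-specific command (OGF 0x3f): Write_RAM, which
        # overwrites bytes in the controller's RAM (per the paper).
        OCF_WRITE_RAM = 0x4C

        def hci_cmd(ocf: int, params: bytes) -> bytes:
            """Build a raw HCI command packet (type 0x01, little-endian opcode)."""
            opcode = (0x3F << 10) | ocf
            return struct.pack('<BHB', 0x01, opcode, len(params)) + params

        # Hypothetical patch: write four bytes (two Thumb NOPs) to a made-up
        # RAM address 0x200400. Real patches come from reverse engineering.
        payload = struct.pack('<I', 0x200400) + b'\x00\xbf\x00\xbf'
        pkt = hci_cmd(OCF_WRITE_RAM, payload)

        # Raw HCI socket on Linux (BlueZ); device id 0 = hci0, requires root.
        s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)
        s.bind((0,))
        s.send(pkt)
        s.close()

    The paper describes companion Read_RAM and Launch_RAM commands following the same pattern, which is what lets a host-side Python framework read, modify, and execute firmware without vendor tooling.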

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzé, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    Enabling Richer Insight Into Runtime Executions Of Systems

    Systems software of very large scale is heavily used today in important scenarios such as online retail, banking, content services, web search, and social networks. As the functionality and complexity of this software grow, managing the implementations becomes a considerable challenge for developers, designers, and maintainers. Software needs to be constantly monitored and tuned for optimal efficiency and user satisfaction. At large scale, these systems incorporate significant degrees of asynchrony, parallelism, and distributed execution, reducing the manageability of the software, including performance management. Adding to the complexity, developers are torn between developing new functionality for customers and maintaining existing programs. This dissertation argues that the manual effort currently required to manage the performance of these systems is very high, and that it can be automated both to reduce the likelihood of problems and to fix them quickly once identified. The execution logs from these systems are easily available and provide rich information about the internals at runtime for diagnosis purposes, but the volume of logs is simply too large for today's techniques. Developers hence spend many hours observing and investigating executions of their systems during development and diagnosis. This dissertation proposes applying machine learning techniques to automatically analyze execution logs for challenging tasks in different phases of the software lifecycle. It shows that the careful application of statistical techniques to features extracted from instrumentation can distill the rich log data into forms easily comprehensible to developers.
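    To make the log-analysis idea concrete, here is a minimal sketch, not the dissertation's actual pipeline: collapse log lines into event templates, count templates per window, and flag anomalous windows with an off-the-shelf statistical model. The log file name, windowing scheme, and model choice are illustrative assumptions.

        import re
        from collections import Counter

        from sklearn.ensemble import IsolationForest
        from sklearn.feature_extraction import DictVectorizer

        def to_template(line: str) -> str:
            """Collapse variable parts (numbers, hex ids) so lines with
            the same shape map to the same event template."""
            return re.sub(r'0x[0-9a-f]+|\d+', '<*>', line.lower())

        def window_features(lines, size=100):
            """Count event templates in fixed-size windows of log lines."""
            return [Counter(to_template(l) for l in lines[i:i + size])
                    for i in range(0, len(lines), size)]

        lines = open('app.log').read().splitlines()   # hypothetical log file
        vec = DictVectorizer(sparse=False)
        X = vec.fit_transform(window_features(lines))

        # Unsupervised outlier detection over the per-window feature vectors;
        # -1 marks windows whose event mix deviates from the norm.
        flags = IsolationForest(contamination=0.05, random_state=0).fit_predict(X)
        print([i for i, f in enumerate(flags) if f == -1])

    The point is the shape of the approach: instrumentation output becomes feature vectors, and a statistical model reduces thousands of lines to a handful of windows worth a developer's attention.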

    Integrated Toolset for WSN Application Planning, Development, Commissioning and Maintenance: The WSN-DPCM ARTEMIS-JU Project

    In this article we present the main results obtained in the ARTEMIS-JU WSN-DPCM project between October 2011 and September 2015. The first objective of the project was the development of an integrated toolset for wireless sensor network (WSN) application planning, development, commissioning, and maintenance, which aims to support application domain experts with limited WSN expertise in efficiently developing WSN applications from planning to lifetime maintenance. The toolset comprises three main tools: one for planning, one for application development and simulation (which can include hardware nodes), and one for network commissioning and lifetime maintenance. The tools are integrated in a single platform that promotes software reuse by automatically selecting suitable library components for application synthesis and by abstracting the underlying architecture through a middleware layer. The second objective of the project was to test the effectiveness of the toolset by developing two case studies in different domains: one for detecting the occupancy state of parking lots and one for monitoring the air concentration of harmful gases near an industrial site.
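    The automatic component selection the platform performs can be illustrated with a toy sketch; the component metadata, names, and greedy matching rule below are invented for illustration and are not the project's actual algorithm.

        # Hypothetical library of reusable WSN components with the services
        # they provide and their RAM footprint.
        COMPONENT_LIBRARY = [
            {"name": "tmp102_driver", "provides": {"temperature"}, "ram": 256},
            {"name": "sht21_driver", "provides": {"temperature", "humidity"}, "ram": 512},
            {"name": "co_gas_driver", "provides": {"co_level"}, "ram": 384},
        ]

        def select_components(required: set, ram_budget: int):
            """Greedily cover the required services, preferring components
            with the smallest RAM footprint that still fit the budget."""
            chosen, covered = [], set()
            for comp in sorted(COMPONENT_LIBRARY, key=lambda c: c["ram"]):
                if comp["ram"] > ram_budget:
                    continue
                if required & (comp["provides"] - covered):
                    chosen.append(comp["name"])
                    covered |= comp["provides"]
                    ram_budget -= comp["ram"]
                if required <= covered:
                    return chosen, ram_budget
            raise ValueError(f"unmet requirements: {required - covered}")

        # Requirements in the spirit of the air-quality case study.
        print(select_components({"temperature", "co_level"}, ram_budget=2048))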

    Diagnosing interoperability problems and debugging models by enhancing constraint satisfaction with case-based reasoning

    Modeling, Diagnosis, and Model Debugging are the three main areas presented in this dissertation to automate the process of interoperability testing of networking protocols. The dissertation proposes a framework that uses the Constraint Satisfaction Problem (CSP) paradigm to define a modeling language and problem-solving mechanism for interoperability testing, and uses Case-Based Reasoning (CBR) for debugging interoperability test cases. The dissertation makes three primary contributions: (1) Definition of a new modeling language using CSP and object-oriented programming. This language is simple, declarative, and transparent, and gives testers a tool for implementing models of interoperability test cases. The dissertation introduces the notions of metavariables, metavalues, and optional metavariables to improve the modeling language's capabilities. It proposes modeling test cases from the test suite specifications that testers usually follow when performing interoperability testing manually. Test suite specifications are written by organizations or individuals and break the testing down into modules of test cases, which makes diagnosis of problems more meaningful to testers. (2) Diagnosis of interoperability problems using search supplemented by consistency inference methods in a CSP context to support explanations of the problem-solving behavior. These methods are adapted to the object-oriented CSP context. Testers can then generate reports for individual test cases and for test groups from a test suite specification. (3) Detection and debugging of incompleteness and incorrectness in CSP models of interoperability test cases. This is done through the integration of two modes of reasoning, namely CBR and CSP. CBR manages cases that store information about updating models, as well as cases related to interoperability problems where diagnosis fails to generate a useful explanation. For the latter, CBR recalls previous similar useful explanations.
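    As a small illustration of modeling an interoperability test case as a CSP (not the dissertation's own modeling language), the sketch below uses the python-constraint package; the protocol parameters and compatibility rules are invented.

        from constraint import Problem

        problem = Problem()

        # Each variable is a negotiable parameter of one implementation.
        problem.addVariable("initiator_version", [1, 2, 3])
        problem.addVariable("responder_version", [1, 2])
        problem.addVariable("cipher", ["aes128", "aes256", "null"])

        # Interoperability rules: versions must be within one release of
        # each other, and aes256 requires responder version 2 or later.
        problem.addConstraint(lambda a, b: abs(a - b) <= 1,
                              ("initiator_version", "responder_version"))
        problem.addConstraint(lambda v, c: not (v < 2 and c == "aes256"),
                              ("responder_version", "cipher"))

        solutions = problem.getSolutions()
        if solutions:
            print(f"{len(solutions)} interoperable configurations, e.g. {solutions[0]}")
        else:
            print("no consistent assignment: diagnose which constraint failed")

    An empty solution set corresponds to an interoperability failure, and the constraints involved become the raw material for the explanation and diagnosis machinery the dissertation builds.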

    Towards Exascale Scientific Metadata Management

    Advances in technology and computing hardware are enabling scientists from all areas of science to produce massive amounts of data using large-scale simulations or observational facilities. In this era of data deluge, effective coordination between the data production and analysis phases hinges on the availability of metadata that describe the scientific datasets. Existing workflow engines capture a limited form of metadata to provide provenance information about the identity and lineage of the data. However, much of the data produced by simulations, experiments, and analyses still needs to be annotated manually, in an ad hoc manner, by domain scientists. Systematic and transparent acquisition of rich metadata becomes a crucial prerequisite to sustain and accelerate the pace of scientific innovation. Yet a ubiquitous and domain-agnostic metadata management infrastructure that can meet the demands of extreme-scale science is notably absent. To address this gap in scientific data management research and practice, we present our vision for an integrated approach that (1) automatically captures and manipulates information-rich metadata while the data is being produced or analyzed and (2) stores metadata within each dataset so that it permeates metadata-oblivious processes and can be queried through established, standardized data access interfaces. We motivate the need for the proposed integrated approach using applications from plasma physics, climate modeling, and neuroscience, and then discuss research challenges and possible solutions.
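    One concrete way to realize the 'metadata stored within each dataset' idea, sketched here as an assumption rather than the paper's implementation, is to attach provenance attributes to an HDF5 dataset with h5py, so the metadata travels with the data and is queryable through a standard access interface. The file name, field names, and values are illustrative.

        import h5py
        import numpy as np

        # Capture provenance-style metadata at write time, not after the fact.
        with h5py.File("simulation_output.h5", "w") as f:
            dset = f.create_dataset("electron_density", data=np.random.rand(64, 64))
            dset.attrs["producer"] = "plasma_sim v4.2"   # hypothetical code name
            dset.attrs["timestep"] = 1200
            dset.attrs["units"] = "m^-3"

        # Any standard HDF5 reader can later query the annotations alongside
        # the data, with no separate metadata catalog involved.
        with h5py.File("simulation_output.h5", "r") as f:
            d = f["electron_density"]
            print({key: d.attrs[key] for key in d.attrs})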