
    Policy Enforcement with Proactive Libraries

    Software libraries implement APIs that deliver reusable functionalities. To use these functionalities correctly, software applications must satisfy certain correctness policies, for instance policies about the order in which API methods can be invoked and about the values that can be used for their parameters. If these policies are violated, applications may produce misbehaviors and failures at runtime. Although this problem is general, applications that incorrectly use API methods are more frequent in certain contexts. For instance, Android provides a rich and rapidly evolving set of APIs that might be used incorrectly by app developers, who often implement and publish faulty apps in the marketplaces. To mitigate this problem, we introduce the novel notion of a proactive library, which augments a classic library with the capability of proactively detecting and healing misuses at runtime. Proactive libraries blend libraries with multiple proactive modules that collect data, check the correctness policies of the libraries, and heal executions as soon as the violation of a correctness policy is detected. The proactive modules can be activated or deactivated at runtime by the users and can be implemented without requiring any change to the original library or any knowledge about the applications that may use the library. We evaluated proactive libraries in the context of the Android ecosystem. Results show that proactive libraries can automatically overcome several problems related to bad resource usage at the cost of a small overhead.
    Comment: O. Riganelli, D. Micucci and L. Mariani, "Policy Enforcement with Proactive Libraries," 2017 IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), Buenos Aires, Argentina, 2017, pp. 182-19
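The paper's approach can be illustrated with a small sketch. This is not the authors' implementation (which targets Android libraries without modifying them); it is a hypothetical Python proxy enforcing an ordering policy (no double `open` without `release`) on a made-up camera-like resource, healing the violation by forcing a release before reopening.

```python
# Illustrative sketch only: a proactive module as a wrapper that checks an
# ordering policy on a hypothetical resource API and heals violations.

class Camera:
    """Hypothetical library class with an open/release protocol."""
    def __init__(self):
        self.opened = False
    def open(self):
        self.opened = True
    def release(self):
        self.opened = False

class ProactiveProxy:
    """Wraps the library object; checks the policy 'release before
    reopening' and heals violations by forcing a release."""
    def __init__(self, camera):
        self._camera = camera
        self._healed = 0  # count of healing actions performed
    def open(self):
        if self._camera.opened:      # policy violation: double open
            self._camera.release()   # heal: release, then reopen
            self._healed += 1
        self._camera.open()
    def release(self):
        if self._camera.opened:
            self._camera.release()

proxy = ProactiveProxy(Camera())
proxy.open()
proxy.open()          # misuse: second open without an intervening release
print(proxy._healed)  # → 1
```

The key property the paper exploits is that the wrapper needs no change to the wrapped library and no knowledge of the application issuing the calls.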

    Advanced Security Analysis for Emergent Software Platforms

    Emergent software ecosystems, spurred by the advent of smartphones and Internet of Things (IoT) platforms, are increasingly sophisticated, deployed into highly dynamic environments, and facilitate interactions across heterogeneous domains. Accordingly, assessing their security is a pressing need, yet requires high levels of scalability and reliability to handle the dynamism involved in such volatile ecosystems. This dissertation seeks to enhance conventional security detection methods to cope with the emergent features of contemporary software ecosystems. In particular, it analyzes the security of the Android and IoT ecosystems by developing rigorous vulnerability detection methods. A critical aspect of this work is the focus on detecting vulnerable and unsafe interactions between applications that share common components and devices. Contributions of this work include novel insights and methods for: (1) detecting vulnerable interactions between Android applications that leverage dynamic loading features to conceal those interactions; (2) identifying unsafe interactions between smart home applications by considering physical and cyber channels; (3) detecting malicious IoT applications that are developed to target numerous IoT devices; (4) detecting insecure patterns of emergent security APIs that are reused from open-source software. In all four research thrusts, we present thorough security analysis and extensive evaluations based on real-world applications. Our results demonstrate that the proposed detection mechanisms can efficiently and effectively detect vulnerabilities in contemporary software platforms. Advisers: Hamid Bagheri and Qiben Ya

    Standard interface definition for avionics data bus systems

    Data bus for avionics system of space shuttle, noting functions of interface unit, error detection and recovery, redundancy, and bus control philosophy

    A Framework for Detecting and Diagnosing Configuration Faults in Web Applications

    Software portability is a key concern when target operational environments are highly configurable; variations in configuration settings can significantly impact software correctness. While portability is key for a wide range of software types, it is a significant challenge in web application development. The client configuration used to navigate and interact with web content is known to be an important factor in the subsequent quality of deployed web applications. With the widespread use of diverse, heterogeneous web client configurations, the results of web application deployment can vary unpredictably among users. Given existing approaches and limited development resources, attempting to develop web applications that are viewable, functional, and portable for the vast web configuration space is a significant undertaking. As a result, faults that only surface in precise configurations, termed configuration faults, have the potential to escape detection until web applications are fielded. This dissertation presents an automated, model-based framework that uses static analysis to detect and diagnose web configuration faults. This approach overcomes the limitations of current techniques by featuring an extensible model of the configuration space that enables efficient portability analysis across the vast array of client environments. The basic idea behind this approach is that source code fragments (i.e., HTML tags and CSS rules) embedded in web application source code adversely impact portability of web applications when they are unsupported in target client configurations; without proper support, the source code is either processed incorrectly or ignored, resulting in configuration faults. Using static analysis, configuration fault detection is performed by applying a model of the web application source against knowledge of support criteria; any unsupported source code detected is considered an index to potential configuration faults. 
In the effort to fully exploit this approach, improve practicality, and maximize fault detection efficiency, manual and automated approaches to knowledge acquisition have been implemented, variations of web application and client support knowledge models have been investigated, and visualization of configuration fault detection results has been explored. To optimize the automated acquisition of support knowledge, alternate learning strategies have been empirically investigated and provisions for capturing tag interaction have been integrated into the process
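The core detection idea described above can be sketched in a few lines. This is an illustrative simplification, not the dissertation's framework: the support knowledge here is a hypothetical hand-written table, whereas the actual approach uses an extensible, learned model of the client configuration space.

```python
# Minimal sketch: flag HTML source fragments that a target client
# configuration does not support; each hit is an index to a potential
# configuration fault. The SUPPORT table is hypothetical example data.

from html.parser import HTMLParser

# Hypothetical support knowledge: tags each client configuration renders.
SUPPORT = {
    "legacy-browser": {"html", "body", "p", "a", "table"},
    "modern-browser": {"html", "body", "p", "a", "table", "video", "canvas"},
}

class TagCollector(HTMLParser):
    """Statically collect the set of tags used in the page source."""
    def __init__(self):
        super().__init__()
        self.tags = set()
    def handle_starttag(self, tag, attrs):
        self.tags.add(tag)

def detect_config_faults(source, client):
    """Return tags present in the source but unsupported by the client."""
    parser = TagCollector()
    parser.feed(source)
    return sorted(parser.tags - SUPPORT[client])

page = "<html><body><p>Hi</p><video src='clip.mp4'></video></body></html>"
print(detect_config_faults(page, "legacy-browser"))  # → ['video']
print(detect_config_faults(page, "modern-browser"))  # → []
```

Because the analysis is static, it scales across many client configurations without actually rendering the page in each one, which is what makes the vast configuration space tractable.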

    Testing Feedforward Neural Networks Training Programs

    Nowadays, we are witnessing an increasing effort to improve the performance and trustworthiness of Deep Neural Networks (DNNs), with the aim of enabling their adoption in safety-critical systems such as self-driving cars. Multiple testing techniques have been proposed to generate test cases that can expose inconsistencies in the behavior of DNN models. These techniques implicitly assume that the training program is bug-free and appropriately configured. However, satisfying this assumption for a novel problem requires significant engineering work to prepare the data, design the DNN, implement the training program, and tune the hyperparameters in order to produce the model in which current automated test data generators search for corner-case behaviors. All of these model training steps can be error-prone. Therefore, it is crucial to detect and correct errors throughout all the engineering steps of DNN-based software systems, not only in the resulting DNN model. In this paper, we gather a catalog of training issues and, based on their symptoms and their effects on the behavior of the training program, we propose practical verification routines to detect these issues automatically, by continuously validating that important properties of the learning dynamics hold during training. We then design TheDeepChecker, an end-to-end property-based debugging approach for DNN training programs. We assess the effectiveness of TheDeepChecker on synthetic and real-world buggy DL programs and compare it with Amazon SageMaker Debugger (SMD). Results show that TheDeepChecker's on-execution validation of DNN-based programs' properties succeeds in revealing several coding bugs and system misconfigurations early on and at low cost. Moreover, TheDeepChecker outperforms SMD's offline rule verification on training logs in terms of detection accuracy and DL bug coverage.
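The flavor of such property-based checks on learning dynamics can be sketched as follows. These two properties and their thresholds are hypothetical examples, not TheDeepChecker's actual rules: a declining-loss check and a weights-keep-updating check, both evaluated continuously during training.

```python
# Illustrative property checks on training dynamics (hypothetical rules,
# not TheDeepChecker's actual catalog or thresholds).

def check_loss_decreasing(losses, window=3):
    """Property: mean loss over the last `window` steps is below the mean
    over the first `window` steps. A flat loss curve (e.g. zero learning
    rate, broken data pipeline) violates this."""
    if len(losses) < 2 * window:
        return True  # not enough history to judge yet
    return sum(losses[-window:]) / window < sum(losses[:window]) / window

def check_weights_updating(prev_weights, weights, eps=1e-12):
    """Property: at least one parameter changed since the previous step.
    An accidentally frozen layer violates this."""
    return any(abs(a - b) > eps for a, b in zip(prev_weights, weights))

# Simulated healthy run vs. a run whose loss never moves.
healthy = [2.3, 2.1, 1.8, 1.2, 0.9, 0.7]
stalled = [2.3, 2.3, 2.3, 2.3, 2.3, 2.3]
print(check_loss_decreasing(healthy))                     # → True
print(check_loss_decreasing(stalled))                     # → False
print(check_weights_updating([0.5, -0.1], [0.5, -0.1]))   # → False (bug signal)
```

Running such checks during training, rather than over logs after the fact, is what lets violations surface early, which is the contrast the paper draws with offline rule verification.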

    Technological Tethereds: Potential Impact of Untrustworthy Artificial Intelligence in Criminal Justice Risk Assessment Instruments

    Issues of racial inequality and violence are front and center today, as are issues surrounding artificial intelligence (“AI”). This Article, written by a law professor who is also a computer scientist, takes a deep dive into understanding how and why hacked and rogue AI creates unlawful and unfair outcomes, particularly for persons of color. Black Americans are disproportionately represented in the criminal justice system, and their stories are obfuscated. The seemingly endless back-to-back murders of George Floyd, Breonna Taylor, Ahmaud Arbery, and heartbreakingly countless others have finally shaken the United States from its slumbering journey towards intentional criminal justice reform. Myths about Black crime and criminals are embedded in the data collected by AI and do not tell the truth about race and crime. However, the number of Black people harmed by hacked and rogue AI will dwarf all historical records, and the gravity of harm is incomprehensible. The lack of technical transparency and legal accountability leaves wrongfully convicted defendants without legal remedies if they are unlawfully detained based on a cyberattack, faulty or hacked data, or rogue AI. Scholars and engineers acknowledge that the artificial intelligence that is giving recommendations to law enforcement, prosecutors, judges, and parole boards lacks the common sense of an eighteen-month-old child. This Article reviews the ways AI is used in the legal system and the courts’ response to this use. It outlines the design schemes of proprietary risk assessment instruments used in the criminal justice system, outlines potential legal theories for victims, and provides recommendations for legal and technical remedies to victims of hacked data in criminal justice risk assessment instruments.
It concludes that, with proper oversight, AI can increase fairness in the criminal justice system, but without this oversight, AI-based products will further exacerbate the extinguishment of liberty interests enshrined in the Constitution. According to anti-lynching advocate Ida B. Wells-Barnett, “The way to right wrongs is to turn the light of truth upon them.” Thus, transparency is vital to safeguarding equity through AI design and must be the first step. The Article seeks ways to provide that transparency, for the benefit of all of America, but particularly persons of color, who are far more likely to be impacted by AI deficiencies. It also suggests legal reforms that will help plaintiffs recover when AI goes rogue.

    Can Algorithms Promote Fair Use?

    In the past few years, advances in big data, machine learning, and artificial intelligence have generated many questions in the intellectual property field. One question that has attracted growing attention concerns whether algorithms can be better deployed to promote fair use in copyright law. The debate on the feasibility of developing automated fair use systems is not new; it can be traced back more than a decade. Nevertheless, recent technological advances have invited policymakers and commentators to revisit this earlier debate. As part of the Symposium on Intelligent Entertainment: Algorithmic Generation and Regulation of Creative Works, this Article examines whether algorithms can be better deployed to promote fair use in copyright law. It begins by explaining why policymakers and commentators have remained skeptical about such deployment. This Article then builds the case for greater algorithmic deployment to promote fair use. It concludes by identifying areas to which policymakers and commentators should pay greater attention if automated fair use systems are to be developed.