Improvements of and Extensions to FSMWeb: Testing Mobile Apps
A mobile application is a software program that runs on a mobile device. In 2017, 178.1 billion mobile apps were downloaded, and the number is expected to grow to 258.2 billion downloads in 2022 [19]. This volume of downloads challenges mobile application testers to find the right approach to testing apps. This dissertation extends the FSMWeb approach for testing web applications [50] to test mobile applications (FSMApp). While analyzing how FSMWeb could be extended to test mobile apps, we detected a number of shortcomings, which we improved upon; we discuss these first. We present an approach to generate black-box tests for the fail-safe behavior of web applications and apply it to a large commercial web application. The approach uses a functional (behavioral) model to generate tests. It then determines at which states in the execution of a behavioral test failures can occur and which mitigation requirements need to be tested. Mitigation requirements are used to build mitigation models for each failure type, and failure mitigation tests are generated from those mitigation models. Next, this dissertation provides an approach for selective black-box model-based fail-safe regression testing of web applications. It classifies existing tests and test requirements as reusable, retestable, or obsolete. Removing reusable test requirements reduces test requirements by 49% to 65% in the case study. The approach also uses partial regeneration for new tests wherever possible. Third, we present the new FSMApp approach for testing mobile applications and compare it with several other approaches [88, 37]. A number of case studies explore the applicability, scalability, effectiveness, and efficiency of FSMApp relative to other approaches. Future work suggests how to improve test generation and execution efficiency with FSMApp.
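The abstract does not give implementation details, but the flavor of FSM-based test generation behind FSMWeb/FSMApp can be sketched as follows. This is a minimal illustration, not the FSMApp algorithm: the screen names, actions, and the simple transition-coverage criterion are assumptions made for the example.

# Illustrative sketch of FSM-based test generation: model an app's screens as
# states and user actions as transitions, then derive action sequences that
# together cover every transition at least once. Not the FSMApp algorithm.
from collections import deque

# Hypothetical mobile-app model: screen -> list of (action, next screen)
FSM = {
    "Login":    [("submit_valid", "Home"), ("submit_invalid", "Login")],
    "Home":     [("open_settings", "Settings"), ("logout", "Login")],
    "Settings": [("back", "Home")],
}

def transition_cover(fsm, start):
    """Greedy sketch: repeatedly search from the start state to some
    still-uncovered transition and record the action sequence used."""
    uncovered = {(s, a, t) for s, edges in fsm.items() for a, t in edges}
    tests = []
    while uncovered:
        queue = deque([(start, [], [])])   # (state, actions so far, edges taken)
        seen = {start}
        found = None
        while queue and found is None:
            state, actions, edges = queue.popleft()
            for action, nxt in fsm.get(state, []):
                edge = (state, action, nxt)
                if edge in uncovered:
                    found = (actions + [action], edges + [edge])
                    break
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [action], edges + [edge]))
        if found is None:        # remaining transitions unreachable from start
            break
        path_actions, path_edges = found
        uncovered.difference_update(path_edges)   # everything on the path is covered
        tests.append(path_actions)
    return tests

if __name__ == "__main__":
    for i, seq in enumerate(transition_cover(FSM, "Login"), 1):
        print(f"test {i}: {' -> '.join(seq)}")

In FSMWeb-style approaches the model is typically hierarchical and transitions carry input constraints; the sketch above keeps only the path-generation step.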
Full-scale testing, production and cost analysis data for the advanced composite stabilizer for Boeing 737 aircraft, volume 2
The development, testing, and production activities, and the associated costs, required to produce five-and-one-half advanced-composite stabilizer shipsets for Boeing 737 aircraft are defined and discussed.
Arguing Machines: Human Supervision of Black Box AI Systems That Make Life-Critical Decisions
We consider the paradigm of a black box AI system that makes life-critical decisions. We propose an "arguing machines" framework that pairs the primary AI system with a secondary one that is independently trained to perform the same task. We show that disagreement between the two systems, without any knowledge of underlying system design or operation, is sufficient to arbitrarily improve the accuracy of the overall decision pipeline given human supervision over disagreements. We demonstrate this system in two applications: (1) an illustrative example of image classification and (2) large-scale real-world semi-autonomous driving data. For the first application, we apply this framework to image classification, achieving a reduction from 8.0% to 2.8% top-5 error on ImageNet. For the second application, we apply this framework to Tesla Autopilot and demonstrate the ability to predict 90.4% of system disengagements that were labeled by human annotators as challenging and needing human supervision.
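A minimal sketch of the disagreement-based arbitration idea follows; the toy models, the equality test for agreement, and the human stand-in are assumptions made for illustration, not the paper's implementation.

# Illustrative sketch of the "arguing machines" idea: run two independently
# trained models on the same input and escalate to a human whenever they
# disagree; agreement is accepted automatically. Placeholders, not the paper's code.
from typing import Any, Callable

def arbitrate(primary: Callable[[Any], Any],
              secondary: Callable[[Any], Any],
              ask_human: Callable[[Any], Any],
              x: Any) -> Any:
    """Return a decision for input x, deferring to a human on disagreement."""
    p, s = primary(x), secondary(x)
    if p == s:                 # the two systems agree: accept automatically
        return p
    return ask_human(x)        # disagreement on a life-critical decision: a human decides

# Toy usage: two "classifiers" that disagree on one input.
if __name__ == "__main__":
    primary = lambda x: "steer_left" if x < 0.5 else "steer_right"
    secondary = lambda x: "steer_left" if x < 0.6 else "steer_right"
    human = lambda x: "steer_left"   # stand-in for a human supervisor
    for x in (0.2, 0.55, 0.9):
        print(x, arbitrate(primary, secondary, human, x))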
Design, ancillary testing, analysis and fabrication data for the advanced composite stabilizer for Boeing 737 aircraft, volume 2
Results of tests conducted to demonstrate that composite structures save weight, possess long-term durability, and can be fabricated at costs competitive with conventional metal structures are presented, with a focus on the use of graphite-epoxy in the design of a stabilizer for the Boeing 737 aircraft. Component definition, materials evaluation, material design properties, and structural element tests are discussed. Fabrication development, as well as structural repair and inspection, are also examined.
Assessing the reliability of diverse fault-tolerant software-based systems
We discuss a problem in the safety assessment of automatic control and protection systems. There is an increasing dependence on software for performing safety-critical functions, such as the safety shut-down of dangerous plants. Software brings an increased risk of design defects and thus systematic failures; redundancy with diversity between redundant channels is a possible defence. While diversity techniques can improve the dependability of software-based systems, they do not alleviate the difficulties of assessing whether such a system is safe enough for operation. We study this problem for a simple safety protection system consisting of two diverse channels performing the same function. The problem is evaluating its probability of failure on demand. Assuming independence between dangerous failures of the channels is unrealistic. One can instead use evidence from observation of the whole system's behaviour under realistic test conditions. Standard inference procedures can then estimate system reliability, but they take no advantage of a system's fault-tolerant structure. We show how to extend these techniques to take account of fault tolerance by a conceptually straightforward application of Bayesian inference. Unfortunately, the method is computationally complex and requires the conceptually difficult step of specifying 'prior' distributions for the parameters of interest. This paper presents the correct inference procedure, exemplifies possible pitfalls in its application, and clarifies some non-intuitive issues about reliability assessment for fault-tolerant software.
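For readers unfamiliar with the baseline the paper builds on, the "standard inference procedure" for whole-system reliability can be sketched as a conjugate Beta-Binomial update of the probability of failure on demand (pfd). The prior parameters and demand counts below are illustrative assumptions, and this baseline deliberately ignores the two-channel structure that the paper's extended method takes into account.

# Baseline sketch of standard Bayesian inference for a protection system's
# pfd, treating the system as a black box: a conjugate Beta prior updated
# with the outcomes of statistically representative demands. Illustrative only.

def beta_posterior(prior_a: float, prior_b: float, demands: int, failures: int):
    """Return the Beta posterior parameters and posterior mean for the pfd."""
    a = prior_a + failures
    b = prior_b + (demands - failures)
    return a, b, a / (a + b)

if __name__ == "__main__":
    # Assumed uniform Beta(1, 1) prior and 4600 failure-free demands at the
    # whole-system level (numbers chosen only for illustration).
    a, b, mean = beta_posterior(1.0, 1.0, demands=4600, failures=0)
    print(f"posterior Beta({a:.0f}, {b:.0f}), mean pfd ~ {mean:.2e}")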
An Evasion Attack against ML-based Phishing URL Detectors
Background: Over the years, Machine Learning Phishing URL classification (MLPU) systems have gained tremendous popularity for detecting phishing URLs proactively. Despite this vogue, the security vulnerabilities of MLPUs remain mostly unknown. Aim: To address this concern, we conduct a study to understand the test-time security vulnerabilities of state-of-the-art MLPU systems, aiming to provide guidelines for the future development of these systems. Method: In this paper, we propose an evasion attack framework against MLPU systems. To achieve this, we first develop an algorithm to generate adversarial phishing URLs. We then reproduce 41 MLPU systems and record their baseline performance. Finally, we simulate an evasion attack to evaluate these MLPU systems against our generated adversarial URLs. Results: In comparison to previous work, our attack is: (i) effective, as it evades all the models with an average success rate of 66% and 85% for famous phishing targets (such as Netflix, Google) and less popular targets (e.g., Wish, JBHIFI, Officeworks), respectively; (ii) realistic, as it requires only 23 ms to produce a new adversarial URL variant that is available for registration, with a median cost of only $11.99/year. We also found that popular online services such as Google SafeBrowsing and VirusTotal are unable to detect these URLs. (iii) We find that adversarial training (a defence against evasion attacks) does not significantly improve the robustness of these systems, as it decreases the success rate of our attack by only 6% on average across all the models. (iv) Further, we identify the security vulnerabilities of the considered MLPU systems. Our findings lead to promising directions for future research. Conclusion: Our study not only illustrates vulnerabilities in MLPU systems but also highlights implications for future work on assessing and improving these systems.
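The paper's URL-generation algorithm is not reproduced here; the sketch below only illustrates the general kind of character-level perturbation (homoglyphs, doubled letters, inserted hyphens) an adversarial phishing URL generator might apply to a target domain. The substitution table and variant limit are assumptions, not the authors' method.

# Illustrative sketch of adversarial phishing-URL generation via simple
# character-level perturbations. NOT the paper's algorithm.
import itertools

HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def perturb(domain: str, max_variants: int = 10):
    """Return up to max_variants look-alike variants of a target domain."""
    name, _, tld = domain.partition(".")
    variants = set()
    # Single homoglyph substitutions.
    for i, ch in enumerate(name):
        if ch in HOMOGLYPHS:
            variants.add(name[:i] + HOMOGLYPHS[ch] + name[i + 1:] + "." + tld)
    # Doubled characters and inserted hyphens.
    for i in range(1, len(name)):
        variants.add(name[:i] + name[i - 1] + name[i:] + "." + tld)
        variants.add(name[:i] + "-" + name[i:] + "." + tld)
    return list(itertools.islice(sorted(variants), max_variants))

if __name__ == "__main__":
    for v in perturb("netflix.com"):
        print(v)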
Exploiting Input Sanitization for Regex Denial of Service
Web services use server-side input sanitization to guard against harmful input. Some web services publish their sanitization logic to make their client interface more usable, e.g., allowing clients to debug invalid requests locally. However, this usability practice poses a security risk. Specifically, services may share the regexes they use to sanitize input strings — and regex-based denial of service (ReDoS) is an emerging threat. Although prominent service outages caused by ReDoS have spurred interest in this topic, we know little about the degree to which live web services are vulnerable to ReDoS.
In this paper, we conduct the first black-box study measuring the extent of ReDoS vulnerabilities in live web services. We apply the Consistent Sanitization Assumption: that client-side sanitization logic, including regexes, is consistent with the sanitization logic on the server side. We identify a service's regex-based input sanitization in its HTML forms or its API, find vulnerable regexes among these regexes, craft ReDoS probes, and pinpoint vulnerabilities. We analyzed the HTML forms of 1,000 services and the APIs of 475 services. Of these, 355 services publish regexes; 17 services publish unsafe regexes; and 6 services are vulnerable to ReDoS through their APIs (6 domains; 15 subdomains). Both Microsoft and Amazon Web Services patched their web services as a result of our disclosure. Since these vulnerabilities were found in API specifications rather than HTML forms, we proposed a ReDoS defense for a popular API validation library, and our patch has been merged. To summarize: in client-visible sanitization logic, some web services advertise ReDoS vulnerabilities in plain sight. Our results motivate short-term patches and long-term fundamental solutions.
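As background, the super-linear backtracking that makes a regex ReDoS-vulnerable can be reproduced with a textbook "evil" pattern; the regex and probe strings below are a standard illustration, not one of the unsafe regexes found in the study.

# Classic catastrophic-backtracking demonstration. The nested quantifier in
# (a+)+ forces a backtracking regex engine to try exponentially many ways of
# splitting the run of 'a's once the trailing '!' makes a match impossible.
import re
import time

EVIL = re.compile(r"^(a+)+$")

for n in (18, 20, 22, 24):
    probe = "a" * n + "!"          # ReDoS probe: almost matches, then fails
    start = time.perf_counter()
    EVIL.match(probe)
    print(f"n={n}: {time.perf_counter() - start:.2f}s")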