Testing Mobile Web Applications for W3C Best Practice Compliance
Adherence to best practices and standards when developing mobile web applications is important to achieving a quality outcome. As smartphones and tablet PCs continue to proliferate in the consumer electronics market, businesses and individuals are increasingly turning from the native application paradigm to HTML5-based web applications as a means of software development and distribution. With an ever-increasing reliance by users on the correct functioning of such applications, the requirement for stringent and comprehensive quality assurance measures is also brought sharply into focus. This research investigates the increasing trend towards mobile web application development in the mobile software domain, and assesses the requirement for an automated approach to best practice validation testing for mobile web applications. Contemporary approaches to automated web application testing are examined, with particular emphasis on issues relating to mobile web application tests. The individual guidelines proposed by the W3C Mobile Web Application Best Practices are analysed and, where applicable, automated conformance tests are implemented in a customised testing tool. A range of mobile web applications are tested using this tool in order to examine the extent to which implementation of the tested-for guidelines is detected. Automated tests were successfully implemented in respect of nearly 60% of the best practices.
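A minimal sketch of what one such automated conformance check might look like. The two checks shown (presence of a viewport meta tag and a page-weight limit) are illustrative assumptions in the spirit of the W3C best practices, not the paper's actual test set; class and function names are hypothetical.

```python
# Hypothetical automated best-practice checker (illustrative only).
from html.parser import HTMLParser

class ViewportChecker(HTMLParser):
    """Detects a <meta name="viewport"> tag while parsing the page."""
    def __init__(self):
        super().__init__()
        self.has_viewport = False

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and dict(attrs).get("name") == "viewport":
            self.has_viewport = True

def check_page(html: str, max_bytes: int = 20_000) -> dict:
    """Run two example conformance checks and report pass/fail per check."""
    parser = ViewportChecker()
    parser.feed(html)
    return {
        "viewport_meta": parser.has_viewport,            # mobile layout hint present?
        "page_weight_ok": len(html.encode()) <= max_bytes,  # page size within budget?
    }

page = '<html><head><meta name="viewport" content="width=device-width"></head></html>'
print(check_page(page))  # {'viewport_meta': True, 'page_weight_ok': True}
```

A real tool would fetch live pages, follow linked resources, and map each check back to a named guideline; the value of automation is that such checks run identically across a whole corpus of applications.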
Enabling Social Applications via Decentralized Social Data Management
An unprecedented wealth of information produced by online social networks, further augmented by location/collocation data, is currently fragmented across different proprietary services. Combined, it can accurately represent the social world and enable novel socially-aware applications. We present Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources into a multigraph managed in a decentralized fashion on user-contributed nodes, and exposes it through an interface implementing non-trivial social inferences while complying with user-defined access policies. Simulations and experiments on PlanetLab with emulated application workloads show that the system exhibits good end-to-end response time, low communication overhead, and resilience to malicious attacks.
Comment: 27 pages, single ACM column, 9 figures; accepted in the Special Issue on Foundations of Social Computing, ACM Transactions on Internet Technology.
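A small sketch of the idea of a social multigraph queried through policy-gated inferences, loosely modeled on the interface described above. The edge labels, weight threshold, and policy form are assumptions for illustration, not Prometheus's actual API.

```python
# Illustrative social multigraph with user-defined access policies.
from collections import defaultdict

class SocialMultigraph:
    def __init__(self):
        # edges[src][label] -> list of (dst, weight) pairs
        self.edges = defaultdict(lambda: defaultdict(list))
        self.policies = {}  # owner -> set of users allowed to query them

    def add_edge(self, src, dst, label, weight):
        self.edges[src][label].append((dst, weight))

    def allow(self, owner, requester):
        self.policies.setdefault(owner, set()).add(requester)

    def neighbors(self, requester, user, label, threshold=0.0):
        """Social inference: neighbors of `user` under `label` with edge
        weight above `threshold`, gated by the owner's access policy."""
        if requester != user and requester not in self.policies.get(user, set()):
            raise PermissionError("access policy denies this query")
        return [d for d, w in self.edges[user][label] if w > threshold]

g = SocialMultigraph()
g.add_edge("alice", "bob", "co-located", 0.8)
g.add_edge("alice", "carol", "co-located", 0.2)
g.allow("alice", "dave")
print(g.neighbors("dave", "alice", "co-located", threshold=0.5))  # ['bob']
```

In the decentralized setting, the graph partitions and the policy checks live on user-contributed nodes rather than in one process, but the inference-behind-policy pattern is the same.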
Principles of Security and Trust: 7th International Conference, POST 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14-20, 2018, Proceedings
authentication; computer science; computer software selection and evaluation; cryptography; data privacy; formal logic; formal methods; formal specification; internet; privacy; program compilers; programming languages; security analysis; security systems; semantics; separation logic; software engineering; specifications; verification; world wide web
Towards an aspect weaving BPEL engine
This position paper proposes the use of dynamic aspects and the visitor design pattern to obtain a highly configurable and extensible BPEL engine. Using these two techniques, the core of this infrastructural software can be customised to meet new requirements and to add features such as debugging, execution monitoring, or switching to another Web Service selection policy. Additionally, it can easily be extended to cope with customer-specific BPEL extensions. We propose the use of dynamic aspects not only on the engine itself but also on the workflow, in order to tackle the problems of Web Service hot deployment and hot fixes to long-running processes. In this way, composing a Web Service "on-the-fly" means weaving its choreography interface into the workflow.
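A toy sketch of the combination described above: an engine walks a workflow tree with a visitor, and advice registered at runtime is woven in at each activity (here, a monitoring aspect). Class and method names are hypothetical, chosen only to illustrate the pattern.

```python
# Illustrative visitor + dynamic-aspect engine core (not from the paper).
class Activity:
    """A node in a BPEL-like workflow tree."""
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

class Engine:
    def __init__(self):
        self.before_advice = []  # aspects woven in dynamically, at runtime

    def weave(self, advice):
        """Register advice without touching or recompiling the engine core."""
        self.before_advice.append(advice)

    def visit(self, activity, log):
        for advice in self.before_advice:   # aspect join point: before execution
            advice(activity, log)
        for child in activity.children:     # visitor recursion over the tree
            self.visit(child, log)

workflow = Activity("process", [Activity("invoke"), Activity("reply")])
engine = Engine()
engine.weave(lambda a, log: log.append(f"enter {a.name}"))  # monitoring aspect
trace = []
engine.visit(workflow, trace)
print(trace)  # ['enter process', 'enter invoke', 'enter reply']
```

The point of the design is that debugging, monitoring, or an alternative selection policy each become one more piece of advice, rather than a change to the engine core.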
Modular and Safe Event-Driven Programming
Asynchronous event-driven systems are ubiquitous across domains such as device drivers, distributed systems, and robotics. These systems are notoriously hard to get right, as the programmer needs to reason about numerous control paths resulting from the complex interleaving of events (or messages) and failures. Unsurprisingly, it is easy to introduce subtle errors while attempting to fill in gaps between high-level system specifications and their concrete implementations. This dissertation proposes new methods for programming safe event-driven asynchronous systems.
In the first part of the thesis, we present ModP, a modular programming framework for compositional programming and testing of event-driven asynchronous systems. The ModP module system supports a novel theory of compositional refinement for assume-guarantee reasoning of dynamic event-driven asynchronous systems. We build a complex distributed-systems software stack using ModP. Our results demonstrate that compositional reasoning can help scale model checking (both explicit and symbolic) to large distributed systems. ModP is transforming the way asynchronous software is built at Microsoft and Amazon Web Services (AWS). Microsoft uses ModP for implementing safe device drivers and other software in the Windows kernel; AWS uses ModP for compositional model checking of complex distributed systems. While ModP simplifies the analysis of such systems, the state space of industrial-scale systems remains extremely large.
In the second part of this thesis, we present scalable verification and systematic testing approaches to further mitigate this state-space explosion problem. First, we introduce the concept of a delaying explorer to perform prioritized exploration of the behaviors of an asynchronous reactive program. A delaying explorer stratifies the search space using a custom strategy (tailored towards finding bugs faster) and a delay operation that allows deviation from that strategy.
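The delaying-explorer idea just described can be sketched in miniature: a scheduler follows a fixed priority order over pending tasks, and each "delay" operation defers the highest-priority task once, deviating from the strategy. Bounding the number of delays stratifies the schedules explored. This is a deliberately simplified illustration; the explorers in the thesis are richer.

```python
# Illustrative delaying explorer over a fixed set of tasks.
def schedules(tasks, max_delays):
    """Enumerate execution orders reachable with at most max_delays delays."""
    results = set()

    def explore(pending, trace, delays_left):
        if not pending:
            results.add(tuple(trace))
            return
        # Strategy: run the highest-priority (first) pending task.
        explore(pending[1:], trace + [pending[0]], delays_left)
        # Delay operation: defer the first task to the back, if budget remains.
        if delays_left > 0 and len(pending) > 1:
            explore(pending[1:] + [pending[0]], trace, delays_left - 1)

    explore(list(tasks), [], max_delays)
    return results

print(len(schedules("ABC", 0)))  # 1: only the priority order A, B, C
print(len(schedules("ABC", 2)))  # 5: more interleavings become reachable
```

With zero delays the search collapses to a single schedule; each extra unit of delay budget opens up more interleavings, so bugs reachable with few deviations from the strategy are found in the cheap strata first.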
We show that prioritized search with a delaying explorer performs significantly better than existing approaches for finding bugs in asynchronous programs. Next, we consider the challenge of verifying time-synchronized systems; these are almost-synchronous systems, as they are neither completely asynchronous nor synchronous. We introduce approximate synchrony, a sound and tunable abstraction for verification of almost-synchronous systems. We show how approximate synchrony can be used for verification of both time-synchronization protocols and applications running on top of them. Moreover, we show how approximate synchrony also provides a useful strategy to guide state-space exploration during model checking. Using approximate synchrony and implementing it as a delaying explorer, we were able to verify the correctness of the IEEE 1588 distributed time-synchronization protocol and, in the process, uncovered a bug in the protocol that was well appreciated by the standards committee.
In the final part of this thesis, we consider the challenge of programming a special class of event-driven asynchronous systems: safe autonomous robotics systems. Our approach towards achieving assured autonomy for robotics systems consists of two parts: (1) a high-level programming language for implementing and validating the reactive robotics software stack; and (2) an integrated runtime assurance system to ensure that the assumptions used during design-time validation of the high-level software hold at runtime. Combining a high-level programming language and model checking with runtime assurance helps us bridge the gap between design-time software validation, which makes assumptions about the untrusted components (e.g., low-level controllers) and the physical world, and the actual execution of the software on a real robotic platform in the physical world.
We implemented our approach as DRONA, a programming framework for building safe robotics systems. We used DRONA to build a distributed mobile robotics system and deployed it on real drone platforms. Our results demonstrate that DRONA (with its runtime-assurance capabilities) enables programmers to build an autonomous robotics software stack with formal safety guarantees. To summarize, this thesis contributes new theory and tools to the areas of programming languages, verification, systematic testing, and runtime assurance for programming safe asynchronous event-driven systems, across the domains of fault-tolerant distributed systems and safe autonomous robotics.
CONTEXT MANAGEMENT: TOWARD ASSESSING QUALITY OF CONTEXT PARAMETERS IN A UBIQUITOUS AMBIENT ASSISTED LIVING ENVIRONMENT
This paper provides an approach to assessing Quality of Context (QoC) parameters in a ubiquitous Ambient Assisted Living (AAL) environment. Initially, the study presents a literature review on QoC, from which a taxonomy is generated. It then introduces the context management architecture used. The proposal is verified with the Siafu simulator in an AAL scenario where the user's health is monitored through information about blood pressure, heart rate, and body temperature. Based on a set of parameters, the proposed QoC assessment verifies the extent to which context information is up-to-date, valid, accurate, complete, and significant. Implementing this proposal could have a significant social impact as a technological innovation applied to AAL, supporting a large number of individuals, such as elderly or sick people, with a more precise technology.
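As a concrete sketch of one QoC parameter from the list above, up-to-dateness of a context reading can be scored from the age of its timestamp against a validity window. The linear decay and the 60-second window are assumptions for illustration, not the paper's formulas.

```python
# Illustrative up-to-dateness score for a context reading.
def up_to_dateness(measured_at: float, now: float, lifetime: float = 60.0) -> float:
    """Return a score in [0, 1]: 1.0 = just measured, 0.0 = past its lifetime."""
    age = max(0.0, now - measured_at)
    return max(0.0, 1.0 - age / lifetime)

# A heart-rate reading taken 15 s ago, with a 60 s validity window:
print(up_to_dateness(measured_at=100.0, now=115.0))  # 0.75
```

The other parameters (validity, accuracy, completeness, significance) would be scored analogously and combined, letting the AAL system discount stale or unreliable readings when deciding whether to raise an alert.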
Cyber-security for embedded systems: methodologies, techniques and tools
The abstract is provided in the attachment.
Parallel Simulation of Very Large-Scale General Cache Networks
In this paper we propose a methodology for the study of general cache networks that is intrinsically scalable and amenable to parallel execution. We contrast two techniques: one that slices the network, and another that slices the content catalog. In the former, each core simulates requests for the whole catalog on a subgraph of the original topology, whereas in the latter each core simulates requests for a portion of the original catalog on a replica of the whole network. Interestingly, we find that as the number of cores increases (and with it the split ratio of the network topology), the overhead of the message passing required to keep nodes consistent actually offsets any benefit from the parallelization: this is due strictly to the correlation among neighboring caches, meaning that requests arriving at a cache allocated on one core may depend on the status of one or more caches allocated on different cores. Even more interestingly, we find that the newly proposed catalog slicing, on the contrary, achieves an ideal speedup in the number of cores. Overall, our system, which we make available as open-source software, enables performance assessment of large-scale general cache networks, i.e., comprising hundreds of nodes, trillions of content items, and complex routing and caching algorithms, in minutes of CPU time and with exiguous amounts of memory.
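The partitioning step behind the two strategies can be sketched as follows. With catalog slicing, each worker receives a disjoint share of the content catalog and a full replica of the network, so per-content cache state never crosses workers; with network slicing, neighboring caches can land on different workers, which is what forces synchronization. Function names and the round-robin/contiguous splits are illustrative assumptions.

```python
# Illustrative partitioning for the two parallelization strategies.
def slice_catalog(catalog, n_cores):
    """Round-robin the catalog across cores: disjoint content shares,
    each core keeps a replica of the whole network topology."""
    return [catalog[i::n_cores] for i in range(n_cores)]

def slice_network(nodes, n_cores):
    """Contiguous split of the topology: neighboring caches may end up
    on different cores, so their state must be kept consistent."""
    size = -(-len(nodes) // n_cores)  # ceiling division
    return [nodes[i:i + size] for i in range(0, len(nodes), size)]

catalog = list(range(10))              # content ids
nodes = ["n%d" % i for i in range(6)]  # cache nodes
print(slice_catalog(catalog, 3))  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
print(slice_network(nodes, 3))    # [['n0', 'n1'], ['n2', 'n3'], ['n4', 'n5']]
```

The asymmetry in the paper's findings follows from these shapes: catalog shares are independent by construction, while topology shares cut edges between correlated caches.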
CacheCash: A Cryptocurrency-based Decentralized Content Delivery Network
Online content delivery has witnessed dramatic growth recently, with traffic consuming over half of today's Internet bandwidth. This escalating demand has motivated content publishers to move outside the traditional solutions of infrastructure-based content delivery networks (CDNs). Instead, many are employing peer-to-peer data transfers to reduce the service cost and avoid bandwidth over-provisioning to handle peak demands. Unfortunately, the open-access model of this paradigm, which allows anyone to join, introduces several design challenges related to security, efficiency, and peer availability.
In this dissertation, we introduce CacheCash, a cryptocurrency-based decentralized content distribution network designed to address these challenges. CacheCash bypasses the centralized approach of CDN companies for one in which end users organically set up new caches in exchange for cryptocurrency tokens. Thus, it enables publishers to hire caches on an as-needed basis, without constraining these parties with long-term business commitments.
To address the challenges encountered as the system evolved, we propose a number of protocols and techniques that represent basic building blocks of CacheCash's design. First, motivated by the observation that conventional security assessment tools do not suit cryptocurrency-based systems, we propose ABC, a threat modeling framework capable of identifying attacker collusion and the new threat vectors that cryptocurrencies introduce. Second, we propose CAPnet, a defense mechanism against cache accounting attacks (i.e., a client pretending to have been served, allowing a colluding cache to collect rewards without doing any work). CAPnet features a bandwidth expenditure puzzle that clients must solve over the content before caches are given credit, which bounds the effectiveness of this collusion. Third, to make it feasible to reward caches per data chunk served, we introduce MicroCash, a decentralized probabilistic micropayment scheme that reduces the overhead of processing these small payments. MicroCash implements several novel ideas that make micropayments more suitable for delay-sensitive applications, such as online content delivery.
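A minimal sketch of the probabilistic-micropayment idea behind schemes like MicroCash: instead of paying a tiny amount per chunk, the client issues lottery tickets that win a larger amount with small probability, so the expected payment per chunk is unchanged but only winning tickets need to be settled on the ledger. The lottery construction, parameters, and function names here are assumptions for illustration, not MicroCash's actual protocol.

```python
# Illustrative probabilistic micropayment lottery.
import hashlib

def ticket_wins(ticket: bytes, p_num: int, p_den: int) -> bool:
    """Deterministic lottery: hash the ticket and compare against p_num/p_den."""
    h = int.from_bytes(hashlib.sha256(ticket).digest(), "big")
    return h % p_den < p_num

def settle(tickets, face_value, p_num, p_den):
    """Only winning tickets become on-ledger payments of face_value."""
    return sum(face_value for t in tickets if ticket_wins(t, p_num, p_den))

# 10,000 chunks served; each ticket pays 1 token with probability 1/1000,
# so the cache expects ~10 tokens while the ledger sees only ~10 payments
# instead of 10,000 sub-token transactions.
tickets = [f"chunk-{i}".encode() for i in range(10_000)]
total = settle(tickets, face_value=1, p_num=1, p_den=1000)
print(total)
```

In a real scheme the ticket would be jointly derived by payer and payee (so neither can bias the lottery) and bound to an escrowed deposit; the sketch only shows why per-chunk rewards become affordable.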
CacheCash combines the previous techniques to produce a novel service-payment exchange protocol that secures the content distribution process. This protocol utilizes gradual content disclosure and partial payment collection to encourage honest collaboration between participants. We present a detailed game-theoretic analysis showing how to exploit rational financial incentives to address several security threats, in addition to various performance-optimization mechanisms that promote system efficiency and scalability. Lastly, we evaluate system performance and show that modest machines can serve and retrieve content at a high bitrate with minimal overhead.