
    The Transitivity of Trust Problem in the Interaction of Android Applications

    Mobile phones have developed into complex platforms with large numbers of installed applications and a wide range of sensitive data. Application security policies limit the permissions of each installed application. As applications may interact, restricting single applications may create a false sense of security for end users while data may still leave the mobile phone through other applications. Instead, the information flow needs to be policed for the composite system of applications in a transparent and usable manner. In this paper, we propose to employ static analysis based on the software architecture and focused data flow analysis to scalably detect information flows between components. Specifically, we aim to reveal transitivity of trust problems in multi-component mobile platforms. We demonstrate the feasibility of our approach with Android applications; generalizing the analysis to similar composition-based architectures, such as Service-oriented Architecture, remains to be explored in future work.
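    As a rough illustration of the transitive-flow idea in this abstract, the sketch below treats inter-application communication as a directed graph and checks whether data from a sensitive source can reach a sink through a chain of applications. The component names, edges and labels are hypothetical stand-ins for what a real static data-flow analysis would extract.

    # Minimal sketch: transitive information-flow check across app components.
    # The component graph is hypothetical; a real analysis would extract it
    # via static data-flow analysis of the installed applications.
    from collections import defaultdict, deque

    # Directed edges: data may flow from one component to another (e.g. via intents).
    flows = defaultdict(set)
    flows["ContactsApp.Provider"].add("NotesApp.Sync")    # reads contacts
    flows["NotesApp.Sync"].add("BackupApp.Uploader")      # forwards data
    flows["BackupApp.Uploader"].add("INTERNET")           # leaves the device

    sensitive_sources = {"ContactsApp.Provider"}
    sinks = {"INTERNET"}

    def reachable(start, graph):
        """All nodes transitively reachable from `start`."""
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Flag any sensitive source whose data can transitively reach a sink,
    # even though no single application holds both permissions on its own.
    for src in sensitive_sources:
        leaked_to = reachable(src, flows) & sinks
        if leaked_to:
            print(f"Transitive flow: {src} -> {sorted(leaked_to)}")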

    The use of data-mining for the automatic formation of tactics

    This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques.
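    The pattern-mining half of the project can be illustrated with a toy n-gram count over proof-step sequences; the proofs and the frequency threshold below are made up, and the genetic-programming stage that evolves patterns into tactics is not shown.

    # Minimal sketch of the pattern-mining step: count frequently recurring
    # subsequences (n-grams) of proof steps across a small, made-up corpus.
    from collections import Counter

    proof_corpus = [
        ["intro", "rewrite", "simp", "apply_lemma", "qed"],
        ["intro", "rewrite", "simp", "case_split", "simp", "qed"],
        ["induction", "rewrite", "simp", "apply_lemma", "qed"],
    ]

    def ngram_counts(proofs, n):
        counts = Counter()
        for steps in proofs:
            for i in range(len(steps) - n + 1):
                counts[tuple(steps[i:i + n])] += 1
        return counts

    # Patterns occurring in at least two proofs become candidate building
    # blocks for tactics (an illustrative threshold, not the paper's).
    candidates = {p: c for p, c in ngram_counts(proof_corpus, 3).items() if c >= 2}
    print(candidates)   # e.g. {('rewrite', 'simp', 'apply_lemma'): 2, ...}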

    S-FaaS: Trustworthy and Accountable Function-as-a-Service using Intel SGX

    Function-as-a-Service (FaaS) is a recent and already very popular paradigm in cloud computing. The function provider need only specify the function to be run, usually in a high-level language like JavaScript, and the service provider orchestrates all the necessary infrastructure and software stacks. The function provider is only billed for the actual computational resources used by the function invocation. Compared to previous cloud paradigms, FaaS requires significantly more fine-grained resource measurement mechanisms, e.g. to measure compute time and memory usage of a single function invocation with sub-second accuracy. Thanks to the short duration and stateless nature of functions, and the availability of multiple open-source frameworks, FaaS enables non-traditional service providers, e.g. individuals or data centers with spare capacity. However, this exacerbates the challenge of ensuring that resource consumption is measured accurately and reported reliably. It also raises the issues of ensuring computation is done correctly and minimizing the amount of information leaked to service providers. To address these challenges, we introduce S-FaaS, the first architecture and implementation of FaaS to provide strong security and accountability guarantees backed by Intel SGX. To match the dynamic event-driven nature of FaaS, our design introduces a new key distribution enclave and a novel transitive attestation protocol. A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time inside an enclave, and actual memory allocations. We have integrated S-FaaS into the popular OpenWhisk FaaS framework. We evaluate the security of our architecture, the accuracy of our resource measurement mechanisms, and the performance of our implementation, showing that our resource measurement mechanisms add less than 6.3% latency on standardized benchmarks.
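    The kind of fine-grained, per-invocation measurement the abstract calls for can be sketched outside any enclave with standard Python timing and memory tracing. This is only an illustration of the metering idea, not the SGX-protected mechanism S-FaaS implements; the handler and the receipt fields are hypothetical.

    # Illustrative sketch of per-invocation resource metering (compute time and
    # peak memory) for a FaaS-style handler. Runs outside any enclave and is
    # NOT the SGX-protected mechanism described in the paper.
    import time
    import tracemalloc

    def metered_invocation(handler, event):
        tracemalloc.start()
        start = time.perf_counter()
        try:
            result = handler(event)
        finally:
            elapsed = time.perf_counter() - start
            _, peak_bytes = tracemalloc.get_traced_memory()
            tracemalloc.stop()
        receipt = {                      # would be attested/signed in an enclave
            "compute_seconds": round(elapsed, 6),
            "peak_memory_bytes": peak_bytes,
        }
        return result, receipt

    # Hypothetical function-provider handler.
    def handler(event):
        return sum(x * x for x in range(event["n"]))

    result, receipt = metered_invocation(handler, {"n": 100_000})
    print(result, receipt)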

    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have identified a set of situations encountered in real-world projects in which we believe that the use of ontologies will improve several aspects of the design of data warehouses. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefit of using ontologies to overcome them. This work is a starting point for discussing the convenience of using ontologies in data warehouse design.
    Comment: 15 pages, 2 figures
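    One way ontologies can inform multidimensional design, along the lines this abstract hints at, is to derive candidate facts, measures and dimensions from a domain ontology. The toy ontology fragment and mapping rule below are hypothetical illustrations, not the authors' method.

    # Toy sketch: numeric datatype properties of a concept become candidate
    # measures of a fact; object properties point to candidate dimensions.
    ontology = {
        "Sale": {
            "datatype_properties": {"amount": "decimal", "discount": "decimal"},
            "object_properties": {"soldAt": "Store", "soldOn": "Date",
                                  "ofProduct": "Product"},
        },
    }

    def suggest_star_schema(concept, onto):
        node = onto[concept]
        measures = [p for p, t in node["datatype_properties"].items()
                    if t in ("decimal", "integer", "float")]
        dimensions = sorted(set(node["object_properties"].values()))
        return {"fact": concept, "measures": measures, "dimensions": dimensions}

    print(suggest_star_schema("Sale", ontology))
    # {'fact': 'Sale', 'measures': ['amount', 'discount'],
    #  'dimensions': ['Date', 'Product', 'Store']}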

    Cloud Computing and Cloud Automata as A New Paradigm for Computation

    Cloud computing addresses how to make the right resources available to the right computation to improve the scaling, resiliency and efficiency of the computation. We argue that cloud computing is indeed a new paradigm for computation with a higher order of artificial intelligence (AI), and put forward cloud automata as a new model for computation. A high-level AI requires infusing features that mimic human functioning into AI systems. One of the central features is that humans learn all the time and the learning is incremental. Consequently, for AI, we need to use computational models which reflect incremental learning without stopping (sentience). These features are inherent in reflexive, inductive and limit Turing machines. To construct cloud automata, we use the mathematical theory of Oracles, which includes Oracles of Turing machines as a special case. We develop a hierarchical approach based on Oracles with different ranks that includes Oracle AI as a special case. Discussing a named-set approach, we describe an implementation of a high-performance edge cloud using hierarchical name-oriented networking and Oracle AI-based orchestration. We demonstrate how cloud automata with a control overlay allow microservice network provisioning, monitoring and reconfiguration to address non-deterministic fluctuations affecting their behavior without interrupting the overall evolution of the computation.
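    The control-overlay behaviour described at the end of the abstract can be sketched as a loop that monitors per-service telemetry and reprovisions only the deviating instance while the rest keep running. The service names, metric and latency budget below are hypothetical illustrations.

    # Minimal sketch of a control overlay: reconfigure only the microservice
    # whose observed latency exceeds a budget, without touching the others.
    import random

    class Microservice:
        def __init__(self, name):
            self.name = name
            self.generation = 0

        def observed_latency_ms(self):
            # Stand-in for telemetry gathered by the overlay.
            return random.uniform(5, 60)

        def reprovision(self):
            # Stand-in for re-deploying / re-routing this instance.
            self.generation += 1

    class ControlOverlay:
        def __init__(self, services, latency_budget_ms=50):
            self.services = services
            self.latency_budget_ms = latency_budget_ms

        def control_step(self):
            for svc in self.services:
                if svc.observed_latency_ms() > self.latency_budget_ms:
                    svc.reprovision()   # only the deviating service is touched
                    print(f"reconfigured {svc.name} (gen {svc.generation})")

    overlay = ControlOverlay([Microservice("ingest"), Microservice("transform")])
    for _ in range(3):
        overlay.control_step()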

    Detecting and Refactoring Operational Smells within the Domain Name System

    The Domain Name System (DNS) is one of the most important components of the Internet infrastructure. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive structures of the interdependencies that exist between name servers associated with each zone are called dependency graphs. System administrators' operational decisions have far-reaching effects on the DNS's qualities. They need to be made soundly to strike a balance between the availability, security and resilience of the system. We utilize dependency graphs to identify, detect and catalogue operational bad smells. Our method deals with smells at a high level of abstraction using a consistent taxonomy and reusable vocabulary, defined by a DNS Operational Model. The method will be used to build a diagnostic advisory tool that detects configuration changes that might decrease the robustness or security posture of domain names before they go into production.
    Comment: In Proceedings GaM 2015, arXiv:1504.0244
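    A dependency graph of the kind described here can be represented as a mapping from zones to the zones their name servers depend on, and one illustrative smell, a cyclic dependency, can then be found by graph search. The zone names and the smell are hypothetical examples, not entries from the paper's catalogue.

    # Sketch: detect a cyclic dependency between zones in a hand-written graph.
    # zone -> zones whose name servers must be resolved to resolve this zone
    depends_on = {
        "shop.example": {"dns.example-hosting.net"},
        "dns.example-hosting.net": {"example-hosting.net"},
        "example-hosting.net": {"shop.example"},   # closes a cycle
        "blog.example": {"example-hosting.net"},
    }

    def find_cycle(graph):
        """Return one dependency cycle as a list of zones, or None."""
        def visit(node, stack, on_stack, visited):
            visited.add(node)
            on_stack.add(node)
            stack.append(node)
            for nxt in graph.get(node, ()):
                if nxt not in visited:
                    found = visit(nxt, stack, on_stack, visited)
                    if found:
                        return found
                elif nxt in on_stack:
                    return stack[stack.index(nxt):] + [nxt]
            on_stack.discard(node)
            stack.pop()
            return None

        visited = set()
        for zone in graph:
            if zone not in visited:
                cycle = visit(zone, [], set(), visited)
                if cycle:
                    return cycle
        return None

    print(find_cycle(depends_on))
    # ['shop.example', 'dns.example-hosting.net', 'example-hosting.net', 'shop.example']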

    Analysis of Security Service Oriented Architecture (SOA) with Access Control Models Dynamic Level

    We are moving towards the "Internet of Things" (IoT), in which millions of devices will be interconnected, providing and consuming information within a network in which they can work together. Because computing and information processing are at the core of the IoT, this paper adopts "Service-Oriented Architecture" (SOA) as one of the models that can be used, where each device can offer its functionality as a standard service [4]. With SOA, resources in the IoT can be made available to one another. However, a major challenge in such service-oriented environments is the design of effective access control schemes: services are invoked in large numbers, and authentication and authorization frequently have to cross several security domains. In this paper, we present an analysis of the data safety of a Workflow-Based Access Control Model (WABAC) for addressing problems that occur within system integration. The analysis showed that the functional score of the model-based integration system is lower than that of the legacy SOA-based system model, with several services designed using a WOA approach. In addition, we observed that the integrated model can guarantee the main quality-of-service, security and reliability requirements by applying the SOA approach where needed. Finally, experimental results showed that the services can run side by side seamlessly without performance degradation or additional complexity.
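    Since WABAC is only summarized at a high level in this abstract, the sketch below shows a generic workflow-based access control check instead: a call is authorized only if the caller's role may execute the current workflow task and the call stays within that task's permitted security domains. The tasks, roles and domains are hypothetical.

    # Generic workflow-based access control check (illustrative, not WABAC).
    WORKFLOW_POLICY = {
        # task -> (roles allowed to execute it, security domains it may touch)
        "submit_order": ({"customer"},        {"shop"}),
        "charge_card":  ({"payment_service"}, {"shop", "bank"}),
        "ship_order":   ({"warehouse_clerk"}, {"shop", "logistics"}),
    }

    def authorize(task, caller_role, target_domain):
        allowed_roles, allowed_domains = WORKFLOW_POLICY[task]
        return caller_role in allowed_roles and target_domain in allowed_domains

    print(authorize("charge_card", "payment_service", "bank"))   # True
    print(authorize("charge_card", "customer", "bank"))          # False
    print(authorize("ship_order", "warehouse_clerk", "bank"))    # False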

    Definition of a Method for the Formulation of Problems to be Solved with High Performance Computing

    Computational power made available by current technology has been continuously increasing; however, today's problems are larger and more complex and demand even more computational power. Interest in computational problems has also been increasing, and they form an important research area in computer science. These complex problems are addressed with computational models that build on an underlying mathematical model and are solved using computer resources and simulation, run with High Performance Computing. For such computations, parallel computing has been employed to achieve high performance. This thesis identifies families of problems that can best be solved using modelling and implementation techniques of parallel computing such as message passing and shared memory. A few case studies are considered to show when the shared memory model is suitable and when the message passing model would be suitable. The models of parallel computing are implemented and evaluated using several algorithms and simulations. This thesis mainly focuses on showing the more suitable model of computing for the various scenarios in attaining High Performance Computing.
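    The contrast between the two models the thesis compares can be sketched on a toy reduction with Python's multiprocessing: a shared-memory variant where workers accumulate into a common value under a lock, and a message-passing variant where workers send partial sums over a queue. The workload and process count are illustrative only.

    # Toy contrast of shared-memory vs. message-passing parallelism on one sum.
    from multiprocessing import Process, Queue, Value

    def shared_memory_worker(total, lo, hi):
        partial = sum(range(lo, hi))
        with total.get_lock():          # synchronize access to shared state
            total.value += partial

    def message_passing_worker(queue, lo, hi):
        queue.put(sum(range(lo, hi)))   # no shared state; communicate the result

    if __name__ == "__main__":
        n, workers = 1_000_000, 4
        chunks = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]

        total = Value("q", 0)           # 64-bit shared counter
        procs = [Process(target=shared_memory_worker, args=(total, lo, hi))
                 for lo, hi in chunks]
        for p in procs: p.start()
        for p in procs: p.join()
        print("shared memory:", total.value)

        queue = Queue()
        procs = [Process(target=message_passing_worker, args=(queue, lo, hi))
                 for lo, hi in chunks]
        for p in procs: p.start()
        total_mp = sum(queue.get() for _ in range(workers))
        for p in procs: p.join()
        print("message passing:", total_mp)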