12 research outputs found

    A Survey From Distributed Machine Learning to Distributed Deep Learning

    Artificial intelligence has achieved significant success in handling complex tasks in recent years. This success is due to advances in machine learning algorithms and hardware acceleration. To obtain more accurate results and solve more complex problems, algorithms must be trained with more data. Processing this huge amount of data is time-consuming and requires a great deal of computation. One solution is to distribute the data and the algorithm across several machines, an approach known as distributed machine learning. Considerable effort has been put into distributed machine learning, and many different methods have been proposed. In this article, we present a comprehensive summary of the current state of the art in the field through a review of these algorithms. We divide the algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups. Distributed deep learning has gained the most attention in recent years, and most studies focus on such algorithms; as a result, most of the articles we discuss belong to this category. Based on our investigation of the algorithms, we highlight limitations that should be addressed in future research.
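
    To make the core idea concrete, here is a minimal, self-contained sketch (not from the survey) of synchronous data-parallel training: each simulated worker computes a gradient on its shard of the data and a coordinator averages them. All names and numbers are illustrative.

        # Minimal sketch of data-parallel distributed training (synchronous
        # gradient averaging). Workers are simulated in-process; a real
        # deployment would replace the averaging step with an all-reduce.
        import numpy as np

        def local_gradient(w, X, y):
            # Least-squares gradient on one worker's shard: d/dw (1/n)||Xw - y||^2
            n = len(y)
            return 2.0 / n * X.T @ (X @ w - y)

        def distributed_sgd(X, y, n_workers=4, lr=0.1, steps=100):
            # Partition the data across workers (data parallelism).
            shards = list(zip(np.array_split(X, n_workers),
                              np.array_split(y, n_workers)))
            w = np.zeros(X.shape[1])
            for _ in range(steps):
                # Each worker computes a local gradient; the coordinator
                # averages them and applies one update to the shared model.
                grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
                w -= lr * np.mean(grads, axis=0)
            return w

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        w_true = np.array([1.5, -2.0, 0.5])
        y = X @ w_true + 0.01 * rng.normal(size=200)
        print(distributed_sgd(X, y))  # converges toward w_true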

    Evaluating the benefits of combined and continuous Fog-to-Cloud architectures

    The need to extend the features of cloud computing to the edge of the network has fueled the development of new computing architectures, such as fog computing. The combined and continuous use of fog and cloud computing lays the foundation for a new and highly heterogeneous computing ecosystem that makes the most of both cloud and fog. Incipient research efforts are devoted to proposing management architectures for such a combination of resources, such as the reference architecture proposed by the OpenFog Consortium or the recent Fog-to-Cloud (F2C) architecture. In this paper, we examine this combined ecosystem and evaluate the potential benefits of F2C in dynamic scenarios, considering computing resource mobility and different traffic patterns. By means of extensive simulations, we study service response time, network bandwidth occupancy, power consumption, and service disruption probability. The results indicate that a combined fog-to-cloud architecture brings significant performance benefits compared with the traditional standalone cloud, e.g., an over 50% reduction in power consumption.
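
    As a toy illustration of the effect being measured (and in no way the paper's simulator), the sketch below models requests served at a low-latency fog node when capacity allows, falling back to the cloud otherwise; all latencies, capacities, and probabilities are made-up numbers.

        # Toy model of why serving requests at the edge first can cut mean
        # response time versus a cloud-only deployment. Purely illustrative.
        import random

        FOG_RTT, CLOUD_RTT = 5.0, 60.0   # ms, assumed round-trip times
        FOG_CAPACITY = 3                  # concurrent services one fog node hosts

        def mean_response_time(n_requests, fog_enabled, seed=0):
            random.seed(seed)
            in_fog, times = 0, []
            for _ in range(n_requests):
                if fog_enabled and in_fog < FOG_CAPACITY and random.random() < 0.7:
                    in_fog += 1           # service placed at the edge
                    times.append(FOG_RTT)
                else:
                    times.append(CLOUD_RTT)
                if in_fog and random.random() < 0.5:
                    in_fog -= 1           # a fog-hosted service completes
            return sum(times) / len(times)

        print("cloud only:  ", mean_response_time(1000, fog_enabled=False))
        print("fog-to-cloud:", mean_response_time(1000, fog_enabled=True))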

    Urban Design Evolved: The Impact of Computational Tools and Data-Driven Approaches on Urban Design Practices and Civic Participation

    In recent years, the changing pattern of human activities, the increasing amount of data about the spatial environment, and the ability to collect and process this data have allowed us to reconsider how we approach urban design, with a focus on a digital-oriented and data-driven perspective. In this study, we examine the evolution of urban design by analyzing the roles of designers and citizen empowerment. The literature is reviewed to investigate previous discussions and findings on the topic, and semi-structured interviews were carried out with seven computational design experts, selected according to two criteria: (1) their experience with computational urban design in practice and (2) their academic research background. This study concludes that technology-driven urban design solutions change designers' relationship with data, opening new avenues for objective, data-driven and data-informed decision-making. There are few differences between traditional and computational design practices regarding user empowerment and participatory design. Moreover, technology-driven urban design tools and methods are still in their early stages and are rarely used in actual projects.

    The Role of the Adversary Model in Applied Security Research

    Adversary models have been integral to the design of provably secure cryptographic schemes and protocols. However, their use in other computer science research disciplines is relatively limited, particularly in applied security research (e.g., mobile app and vulnerability studies). In this study, we survey prominent adversary models used in the seminal field of cryptography and in more recent mobile and Internet of Things (IoT) research. Motivated by the findings of the cryptography survey, we propose a classification scheme for common app-based adversaries used in mobile security research, and we classify key papers using the proposed scheme. Finally, we discuss recent work involving adversary models in the contemporary research field of IoT. Based on our findings from the mobile and cryptography literature, we contribute recommendations to aid researchers working in applied (IoT) security. The key recommendation is for authors to clearly define adversary goals, assumptions, and capabilities.
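
    As one concrete reading of that recommendation, the following hypothetical sketch (not from the paper) records an adversary model as structured data with explicit goals, assumptions, and capabilities; the example adversary is invented.

        # Recording an adversary model explicitly, per the paper's key
        # recommendation. All field values below are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class AdversaryModel:
            name: str
            goals: list[str]            # what the adversary tries to achieve
            assumptions: list[str]      # what the analysis takes for granted
            capabilities: list[str]     # what the adversary can actually do

            def summary(self) -> str:
                return (f"{self.name}: goals={self.goals}, "
                        f"assumptions={self.assumptions}, "
                        f"capabilities={self.capabilities}")

        # Hypothetical adversary for a mobile-app study.
        eavesdropper = AdversaryModel(
            name="Network eavesdropper",
            goals=["read session tokens"],
            assumptions=["TLS misconfigured on the app side"],
            capabilities=["passively observe Wi-Fi traffic"],
        )
        print(eavesdropper.summary())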

    UML consistency rules: a systematic mapping study

    Context: The Unified Modeling Language (UML), with its 14 different diagram types, is the de-facto standard tool for object-oriented modeling and documentation. Since the various UML diagrams describe different aspects of one, and only one, software under development, they are not independent but strongly depend on each other in many ways. In other words, the UML diagrams describing a software system must be consistent. Inconsistencies between these diagrams may be a source of a considerable increase in faults in software systems. It is therefore paramount that these inconsistencies be detected, analyzed…
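
    To illustrate the kind of rule such a mapping study catalogues, here is a hedged sketch (not from the paper) of one simple cross-diagram consistency check: every lifeline in a sequence diagram should name a class declared in the class diagram. The diagram encodings are simplified stand-ins, not any tool's actual model format.

        # One example consistency rule between two UML diagram types:
        # sequence-diagram lifelines must refer to declared classes.
        def undefined_lifelines(class_diagram, sequence_diagram):
            classes = {c["name"] for c in class_diagram["classes"]}
            return [l for l in sequence_diagram["lifelines"] if l not in classes]

        class_diagram = {"classes": [{"name": "Order"}, {"name": "Customer"}]}
        sequence_diagram = {"lifelines": ["Order", "Invoice"]}
        print(undefined_lifelines(class_diagram, sequence_diagram))  # ['Invoice']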

    Computer Science 2019 APR Self-Study & Documents

    UNM Computer Science APR self-study report and review team report for Spring 2019, fulfilling requirements of the Higher Learning Commission.

    Scalable Logic Defined Static Analysis

    Logic languages such as Datalog have been proposed as a method for specifying flexible and customisable static analysers. Using Datalog, various classes of static analyses can be expressed precisely and succinctly, requiring fewer lines of code than hand-crafted analysers. In this paradigm, a static analysis specification is encoded by a set of declarative logic rules, and an off-the-shelf solver is used to compute the result of the static analysis. Unfortunately, when large-scale analyses are employed, Datalog-based tools currently fail to scale in comparison to hand-crafted static analysers. As a result, Datalog-based analysers have largely remained an academic curiosity rather than industrially respected tools. This thesis outlines our efforts in understanding the sources of performance limitations in Datalog-based tools. We propose a novel evaluation technique predicated on the fact that, in the case of static analysis, the logical specification is a design-time artefact and hence does not change during evaluation. Thus, instead of directly evaluating Datalog rules, our approach leverages partial evaluation to synthesise a specialised static analyser from these rules. This approach enables a novel indexing optimisation that automatically selects an optimal set of indexes to speed up the Datalog computation and minimise its memory usage. Lastly, we explore the case of more expressive logics, namely constrained Horn clauses, and their use in proving the correctness of programs. We identify a bottleneck in various symbolic evaluation algorithms that centre around Craig interpolation, and we improve these algorithms by proposing a method of guiding theorem provers to discover relevant interpolants with respect to the input logic specification. The culmination of our work is implemented in a general-purpose, high-performance tool called Soufflé. We describe Soufflé and evaluate its performance experimentally, showing significant improvement over alternative techniques and scalability in real-world industrial use cases.
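
    To illustrate the paradigm (this is not Soufflé's implementation, which compiles rules to specialised C++), the sketch below evaluates a reachability-style Datalog program by semi-naive fixpoint iteration, with a hand-built index standing in for the automatic index selection described above.

        # Datalog-style static analysis as rules:
        #   reach(x, y) :- edge(x, y).
        #   reach(x, z) :- reach(x, y), edge(y, z).
        # evaluated by semi-naive fixpoint iteration.
        def reachability(edges):
            reach = set(edges)           # facts from the base rule
            delta = set(edges)           # newly derived facts this round
            index = {}                   # edges indexed by source node
            for x, y in edges:
                index.setdefault(x, set()).add(y)
            while delta:
                # Join only the *new* facts against the indexed relation.
                new = {(x, z)
                       for (x, y) in delta
                       for z in index.get(y, ())} - reach
                reach |= new
                delta = new
            return reach

        print(sorted(reachability({("a", "b"), ("b", "c"), ("c", "d")})))
        # [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]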

    Multi-agent based simulation of self-governing knowledge commons

    The potential of user-generated sensor data for participatory sensing has motivated the formation of organisations focused on the exploitation of collected information and associated knowledge. Given the power and value of both the raw data and the derived knowledge, we advocate an open approach to data and intellectual-property rights. By treating user-generated content, as well as derived information and knowledge, as a common-pool resource, we hypothesise that all participants can be compensated fairly for their input. To test this hypothesis, we undertake an extensive review of experimental, commercial and social participatory-sensing applications, from which we identify that a decentralised, community-oriented governance model is required to support this open approach. We show that the Institutional Analysis and Design framework introduced by Elinor Ostrom, in conjunction with a framework for self-organising electronic institutions, can provide both an architectural and an algorithmic base for the necessary governance model, in terms of operational and collective-choice rules specified in computational logic. As a basis for understanding the effect of governance on these applications, we develop a testbed that joins our logical formulation of the knowledge commons with a generic model of the participatory-sensing problem. This requires a multi-agent platform for the simulation of autonomous and dynamic agents, and a method of executing the logical calculus in which our electronic institution is specified. To this end, firstly, we develop a general-purpose, high-performance platform for multi-agent based simulation, Presage2. Secondly, we propose a method for translating event-calculus axioms into rules compatible with business rule engines, and we provide an implementation for JBoss Drools along with a suite of modules for electronic institutions. Through our simulations we show that, when building electronic institutions for managing participatory sensing as a knowledge commons, proper enfranchisement of agents (as outlined in Ostrom's work) is key to striking a balance between endurance, fairness and the reduction of greedy behaviour. We conclude with a set of guidelines for engineering knowledge commons for the next generation of participatory-sensing applications.
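
    To make the event-calculus translation concrete, here is a minimal sketch (not the thesis's Drools implementation) of the semantics being translated: a fluent holds at time t if an earlier event initiated it and no intervening event terminated it. The fluent and event names are illustrative.

        # Minimal event-calculus evaluator: holdsAt via initiates/terminates.
        def holds_at(fluent, t, narrative, initiates, terminates):
            """narrative: list of (time, event); initiates/terminates map
            event name -> set of fluents it initiates/terminates."""
            state = False
            for time, event in sorted(narrative):
                if time >= t:
                    break                 # only events strictly before t matter
                if fluent in initiates.get(event, set()):
                    state = True          # fluent initiated
                if fluent in terminates.get(event, set()):
                    state = False         # fluent clipped
            return state

        # Hypothetical institutional narrative: access granted, then revoked.
        narrative = [(1, "allocate"), (5, "revoke")]
        initiates = {"allocate": {"has_access"}}
        terminates = {"revoke": {"has_access"}}
        print(holds_at("has_access", 3, narrative, initiates, terminates))  # True
        print(holds_at("has_access", 6, narrative, initiates, terminates))  # False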