
    Evaluation Methodologies in Software Protection Research

    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and no universally accepted evaluation methodology exists. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and the analyses performed on them. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.
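    As a toy illustration of the class of protections this survey covers (not a technique from the paper itself), consider XOR-masking string literals so they do not appear verbatim in a binary; even for a protection this simple, quantifying "strength" against a MATE attacker is non-obvious, which is the survey's point:

```python
# Hypothetical sketch of one simple obfuscation: XOR-masking a string
# literal so it is not visible in the shipped artifact, and recovering
# it at runtime. Key and names are illustrative, not from the paper.

KEY = 0x5A

def obfuscate(s: str) -> bytes:
    """Mask each byte of the string with a fixed XOR key."""
    return bytes(b ^ KEY for b in s.encode("utf-8"))

def deobfuscate(blob: bytes) -> str:
    """Recover the original string at runtime."""
    return bytes(b ^ KEY for b in blob).decode("utf-8")

secret = obfuscate("license-check")
assert "license" not in str(secret)        # literal no longer visible
assert deobfuscate(secret) == "license-check"
```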

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and classifies and presents the technical details of the state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
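    A minimal sketch of one classic software approximation technique in this family, loop perforation: skip a fraction of loop iterations and accept a bounded error in exchange for proportionally less work. The example below is illustrative, not taken from the survey:

```python
# Loop perforation sketch: compute a mean over only every `stride`-th
# element, doing roughly 1/stride of the work of the exact version.

def exact_mean(xs):
    return sum(xs) / len(xs)

def perforated_mean(xs, stride=4):
    """Sample every `stride`-th element instead of iterating over all."""
    sample = xs[::stride]
    return sum(sample) / len(sample)

data = [float(i % 100) for i in range(10_000)]
approx = perforated_mean(data)
exact = exact_mean(data)
# For smooth data the error is small, at a ~4x reduction in work.
assert abs(approx - exact) < 2.0
```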

    Industry 4.0 project prioritization by using q-spherical fuzzy rough analytic hierarchy process

    The Fourth Industrial Revolution, also known as Industry 4.0, is attracting significant attention because it has the potential to revolutionize a variety of industries by developing a production system that is fully automated and digitally integrated. This transformation, however, calls for a significant investment of resources and may present difficulties in adapting existing technology to new endeavors. To prioritize Industry 4.0 projects, researchers have proposed integrating the Analytic Hierarchy Process (AHP) with extensions of fuzzy rough sets, such as the three-dimensional q-spherical fuzzy rough set (q-SFRS), which is effective in handling uncertainty and quantifying expert judgments, allowing the projects to be ranked in order of importance. In this article, a novel framework is presented that combines AHP with q-SFRS. To calculate aggregated values, the framework uses a new formula called the q-spherical fuzzy rough arithmetic mean. When applied to a project-selection problem with five evaluation criteria and four alternatives, the suggested framework produces results that are robust and competitive with those produced by other multi-criteria decision-making approaches.
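    For intuition, here is a sketch of the standard q-spherical fuzzy weighted arithmetic mean over triples of membership, neutrality, and non-membership degrees. This is an assumption about the underlying operator family: the paper's q-spherical fuzzy rough arithmetic mean additionally aggregates rough lower/upper approximations, which is omitted here:

```python
import math

# Hedged sketch: weighted arithmetic aggregation of q-spherical fuzzy
# numbers (mu, eta, nu), each satisfying mu**q + eta**q + nu**q <= 1.
# The paper's q-SFRS variant extends this to rough approximation pairs.

def q_sf_weighted_mean(triples, weights, q=3):
    mu = (1 - math.prod((1 - m**q) ** w
                        for (m, _, _), w in zip(triples, weights))) ** (1 / q)
    eta = math.prod(e**w for (_, e, _), w in zip(triples, weights))
    nu = math.prod(n**w for (_, _, n), w in zip(triples, weights))
    return mu, eta, nu

# Two equally weighted expert judgments aggregated into one.
agg = q_sf_weighted_mean([(0.8, 0.3, 0.2), (0.6, 0.4, 0.3)], [0.5, 0.5])
assert all(0.0 <= v <= 1.0 for v in agg)
assert sum(v**3 for v in agg) <= 1.0   # stays a valid q-SF number (q=3)
```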

    Knowledge Distillation and Continual Learning for Optimized Deep Neural Networks

    Over the past few years, deep learning (DL) has been achieving state-of-the-art performance on various human tasks such as speech generation, language translation, image segmentation, and object detection. While traditional machine learning models require hand-crafted features, deep learning algorithms can automatically extract discriminative features and learn complex knowledge from large datasets. This powerful learning ability makes deep learning models attractive to both academia and big corporations. Despite their popularity, deep learning methods still have two main limitations: large memory consumption and catastrophic knowledge forgetting. First, DL algorithms use very deep neural networks (DNNs) with many billions of parameters, which have a big model size and a slow inference speed. This restricts the application of DNNs in resource-constrained devices such as mobile phones and autonomous vehicles. Second, DNNs are known to suffer from catastrophic forgetting: when incrementally learning new tasks, the model's performance on old tasks drops significantly. The ability to accommodate new knowledge while retaining previously learned knowledge is called continual learning. Since the real-world environments in which a model operates are always evolving, a robust neural network needs this continual learning ability to adapt to new changes.
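    The memory limitation above is what knowledge distillation addresses: a small student network is trained to match a large teacher's temperature-softened output distribution. Below is a sketch of the standard distillation loss (in the style of Hinton et al.); the thesis's exact formulation may differ:

```python
import math

# Sketch of the classic knowledge-distillation objective: KL divergence
# between teacher and student softmax outputs at temperature T, scaled
# by T^2 so gradients keep a comparable magnitude across temperatures.

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * T * T

teacher = [2.0, 0.5, -1.0]
# A student that reproduces the teacher's logits incurs (near-)zero loss.
assert distillation_loss(teacher, teacher) < 1e-9
assert distillation_loss([0.0, 0.0, 0.0], teacher) > 0.0
```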

    DAG-based Task Orchestration for Edge Computing

    As we increase the number of personal computing devices that we carry (mobile devices, tablets, e-readers, and laptops), and these come equipped with increasing resources, there is vast potential computation power that can be utilized from those devices. Edge computing promises to exploit these underlying computation resources closer to users to help run latency-sensitive applications such as augmented reality and video analytics. However, one key missing piece has been how to incorporate personally owned, unmanaged devices into a usable edge computing system. The primary challenges arise due to the heterogeneity, lack of interference management, and unpredictable availability of such devices. In this paper we propose IBDASH, an orchestration framework that schedules application tasks on an edge system comprising a mix of commercial and personal edge devices. IBDASH targets reducing both the end-to-end execution latency and the probability of failure for applications that have dependencies among tasks, captured by directed acyclic graphs (DAGs). IBDASH takes the memory constraints of each edge device and the network bandwidth into consideration. To assess the effectiveness of IBDASH, we run real application tasks on real edge devices with widely varying capabilities. We feed these measurements into a simulator that runs IBDASH at scale. Compared to three state-of-the-art edge orchestration schemes, LAVEA, Petrel, and LaTS, and two intuitive baselines, IBDASH reduces the end-to-end latency and the probability of failure by 14% and 41% on average, respectively. The main takeaway from our work is that it is feasible to combine personal and commercial devices into a usable edge computing platform, one that delivers low latency and predictable, high availability.
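    A much-simplified sketch of DAG-aware placement in the spirit described above (the actual IBDASH algorithm, which also models failure probability and memory limits, is in the paper): walk the task DAG in topological order and greedily place each task on the device that minimizes its finish time. All task and device names are illustrative:

```python
from graphlib import TopologicalSorter

# deps: task -> set of predecessor tasks (a DAG).
# cost[task][device]: execution time of that task on that device.
def place_tasks(deps, cost):
    finish = {}                                  # task -> (device, finish time)
    busy = {d: 0.0 for d in next(iter(cost.values()))}
    for task in TopologicalSorter(deps).static_order():
        # Earliest start: all DAG predecessors must have finished.
        ready = max((finish[p][1] for p in deps.get(task, ())), default=0.0)
        best = min(busy, key=lambda d: max(ready, busy[d]) + cost[task][d])
        t_end = max(ready, busy[best]) + cost[task][best]
        busy[best] = t_end
        finish[task] = (best, t_end)
    return finish

deps = {"detect": set(), "track": {"detect"}, "render": {"detect", "track"}}
cost = {t: {"phone": 3.0, "edge": 1.0} for t in deps}
plan = place_tasks(deps, cost)
assert plan["render"][1] >= plan["track"][1] >= plan["detect"][1]
```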

    Reinforcement Learning Empowered Unmanned Aerial Vehicle Assisted Internet of Things Networks

    This thesis aims at performance enhancement for unmanned aerial vehicle (UAV) assisted internet of things (IoT) networks. In this realm, novel reinforcement learning (RL) frameworks have been proposed for solving intricate joint optimisation scenarios, including uplink, downlink, and combined transmissions. The multi-access technique utilised is non-orthogonal multiple access (NOMA), a key enabler in this regime. The outcomes of this research entail enhancement in key performance metrics, such as sum-rate and energy efficiency, and a consequent reduction in outage. For scenarios involving uplink transmissions by IoT devices, adaptive and tandem reinforcement learning frameworks have been developed, with the aim of maximising capacity over a fixed UAV trajectory. The adaptive framework is utilised in a scenario wherein channel suitability is ascertained for uplink transmissions under a fixed NOMA clustering regime. The tandem framework is utilised in a scenario wherein multi-channel resource suitability is ascertained along with power allocation, dynamic clustering, and IoT node associations to NOMA clusters and channels. In scenarios involving downlink transmission to IoT devices, an ensemble RL (ERL) framework is proposed for sum-rate enhancement over a fixed UAV trajectory. For a dynamic UAV trajectory, a hybrid decision framework (HDF) is proposed for energy-efficiency optimisation. Downlink transmission power and bandwidth are managed for NOMA transmissions over fixed and dynamic UAV trajectories, facilitating IoT networks. In a UAV-enabled relaying scenario, for control system plants and their respective remotely deployed sensors, a head-start reinforcement learning framework based on deep learning is developed and implemented. NOMA is invoked in both uplink and downlink transmissions for the IoT network. Dynamic NOMA clustering, power management, and node association, along with UAV height control, are jointly managed.
    The primary aim is the enhancement of net sum-rate and its subsequent manifestation in facilitating the IoT-assisted use case. The simulation results for the aforesaid scenarios indicate enhanced sum-rate, enhanced energy efficiency, and reduced outage for UAV-assisted IoT networks. The proposed RL frameworks surpass existing benchmark frameworks in performance for the same scenarios. The simulation platforms utilised are MATLAB and Python, for network modelling, RL framework design, and validation.
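    A heavily simplified, illustrative stand-in for the RL machinery above: tabular epsilon-greedy learning of which uplink channel yields the best reward. The thesis's frameworks jointly optimise clustering, power, and association; this sketch reduces the problem to a single stateless (bandit-style) channel choice with an assumed reward model:

```python
import random

# Toy channel-selection sketch: learn value estimates q[ch] for each
# channel from noisy rewards, using an epsilon-greedy policy.
random.seed(0)
N_CHANNELS = 4
GOOD = 2                      # hypothetical best channel (higher mean reward)
q = [0.0] * N_CHANNELS
alpha, eps = 0.1, 0.2

def reward(ch):
    """Stylised sum-rate reward: the good channel pays more on average."""
    return 1.0 + random.random() if ch == GOOD else random.random()

for _ in range(2000):
    if random.random() < eps:                       # explore
        ch = random.randrange(N_CHANNELS)
    else:                                           # exploit
        ch = max(range(N_CHANNELS), key=q.__getitem__)
    q[ch] += alpha * (reward(ch) - q[ch])           # incremental value update

assert max(range(N_CHANNELS), key=q.__getitem__) == GOOD
```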

    A field-based computing approach to sensing-driven clustering in robot swarms

    Swarm intelligence leverages collective behaviours, emerging from the interaction and activity of many “simple” agents, to solve problems in various environments. One problem of interest in large swarms featuring a variety of sub-goals is swarm clustering, where the individuals of a swarm are assigned to, or choose to belong to, zero or more groups, also called clusters. In this work, we address the sensing-based swarm clustering problem, where clusters are defined based on both the values sensed from the environment and the spatial distribution of those values and the agents. Moreover, we address it in a setting characterised by decentralisation of computation and interaction, dynamicity of values, and mobility of agents. As a solution, we propose to use the field-based computing paradigm, where computation and interaction are expressed in terms of functional manipulation of fields: distributed, evolving data structures that map each individual of the system to values over time. We devise a solution to sensing-based swarm clustering leveraging multiple concurrent field computations with limited domain, and evaluate the approach experimentally by means of simulations, showing that the programmed swarms form clusters that closely reflect the dynamics of the underlying environmental phenomena.
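    A minimal sketch of a field computation in this style: each agent repeatedly recomputes, from its neighbours only, a "gradient" field of hop distance to the nearest agent sensing a phenomenon. Such gradients are a typical building block for sensing-driven clustering; the topology and names below are illustrative, not from the paper:

```python
INF = float("inf")

def gradient_field(neighbors, is_source, rounds=10):
    """Synchronous fixed-point iteration over purely local exchanges.
    neighbors: agent -> list of neighbouring agents."""
    field = {a: 0.0 if is_source(a) else INF for a in neighbors}
    for _ in range(rounds):
        field = {a: 0.0 if is_source(a)
                 else min((field[n] + 1 for n in neighbors[a]), default=INF)
                 for a in neighbors}
    return field

# A line of five agents; agent 0 senses the phenomenon.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
f = gradient_field(nbrs, lambda a: a == 0)
assert [f[a] for a in range(5)] == [0.0, 1.0, 2.0, 3.0, 4.0]
```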

    Dialogue without barriers. A comprehensive approach to dealing with stuttering


    Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing

    With the explosion of the number of compute nodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing this bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for on-board networking and wireless chip-to-chip links at a 200-GHz carrier frequency connecting neighboring boards in a rack. The control of novel rate-adaptive optical and mm-wave transceivers needs tight interlinking with the system software for runtime resource management.

    Convex Optimization for Machine Learning

    This book covers an introduction to convex optimization, one of the powerful and tractable classes of optimization problems that can be efficiently solved on a computer. The goal of the book is to help develop a sense of what convex optimization is, and how it can be used in a widening array of practical contexts, with a particular emphasis on machine learning. The first part of the book covers core concepts of convex sets, convex functions, and related basic definitions that underpin convex optimization and its corresponding models. The second part deals with one very useful theory, called duality, which enables us to: (1) gain algorithmic insights; and (2) obtain approximate solutions to non-convex optimization problems, which are often difficult to solve. The last part focuses on modern applications in machine learning and deep learning. A defining feature of this book is that it succinctly relates the “story” of how convex optimization plays a role, via historical examples and trending machine learning applications. Another key feature is that it includes programming implementations of a variety of machine learning algorithms inspired by optimization fundamentals, together with a brief tutorial of the programming tools used. The implementation is based on Python, CVXPY, and TensorFlow. This book does not follow a traditional textbook-style organization, but is streamlined via a series of lecture notes that are intimately related, centered around coherent themes and concepts. It serves as a textbook mainly for a senior-level undergraduate course, yet is also suitable for a first-year graduate course. Readers benefit from having a good background in linear algebra, some exposure to probability, and basic familiarity with Python.
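    In the spirit of the book's Python-based implementations, here is the smallest possible worked example of why convexity is "tractable": gradient descent on the convex quadratic f(x) = (x - 3)^2 + 1 is guaranteed to reach the unique global minimum x* = 3. The function and step size are illustrative choices, not taken from the book:

```python
# Gradient descent on a 1-D convex function: no local minima exist,
# so a fixed small step size converges to the global optimum.

def grad_descent(grad, x0=0.0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

f = lambda x: (x - 3) ** 2 + 1
grad = lambda x: 2 * (x - 3)          # derivative of f

x_star = grad_descent(grad)
assert abs(x_star - 3.0) < 1e-6       # converged to x* = 3
assert abs(f(x_star) - 1.0) < 1e-6    # optimal value f(x*) = 1
```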