    Mediating Human-Robot Collaboration through Mixed Reality Cues

    This work presents a communication paradigm, using a context-aware mixed reality approach, for instructing human workers when collaborating with robots. The main objective of this approach is to utilize the physical work environment as a canvas for communicating task-related instructions and robot intentions in the form of visual cues. A vision-based object tracking algorithm is used to precisely determine the pose and state of physical objects in and around the workspace. A projection mapping technique is used to overlay visual cues on tracked objects and the workspace. Simultaneous tracking and projection onto objects enables the system to provide just-in-time instructions for carrying out a procedural task. The system can also inform and warn humans about the intentions of the robot and the safety of the workspace. It was hypothesized that using this system for executing a human-robot collaborative task would improve the overall performance of the team and provide a positive experience to the human partner. To test this hypothesis, an experiment involving human subjects was conducted and the performance (both objective and subjective) of the presented system was compared with a conventional method based on printed instructions. It was found that projecting visual cues enabled human subjects to collaborate more effectively with the robot and resulted in higher efficiency in completing the task. Dissertation/Thesis. Masters Thesis, Electrical Engineering, 201
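
    The projection-mapping step described above can be illustrated with a minimal Python/OpenCV sketch: a tracked object's camera-frame position is mapped into projector coordinates through a precomputed camera-to-projector homography and a cue is drawn at that location. The homography values, the project_cue helper and the cue text are illustrative assumptions, not the thesis's actual implementation.

        import numpy as np
        import cv2

        # Hypothetical camera-to-projector homography from a one-off calibration
        # step (values are illustrative only).
        H_cam_to_proj = np.array([[1.05, 0.02, 40.0],
                                  [0.01, 1.03, 25.0],
                                  [0.0,  0.0,  1.0]], dtype=np.float32)

        def project_cue(proj_frame, obj_xy_cam, label):
            """Map a tracked object's image position into projector space and
            draw a just-in-time instruction next to it."""
            pt = np.array([[obj_xy_cam]], dtype=np.float32)        # shape (1, 1, 2)
            x, y = cv2.perspectiveTransform(pt, H_cam_to_proj)[0, 0]
            cv2.circle(proj_frame, (int(x), int(y)), 30, (0, 255, 0), 3)
            cv2.putText(proj_frame, label, (int(x) + 40, int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
            return proj_frame

        # Blank projector canvas; a real system would send this frame to the projector.
        frame = np.zeros((768, 1024, 3), dtype=np.uint8)
        frame = project_cue(frame, (310.0, 220.0), "Pick this part next")
        cv2.imwrite("projected_cues.png", frame)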

    Trust-Based Rating Prediction and Malicious Profile Detection in Online Social Recommender Systems

    Online social networks and recommender systems have become an effective channel for influencing millions of users by facilitating the exchange and spread of information. This dissertation addresses multiple challenges faced by online social recommender systems: i) finding the extent of information spread; ii) predicting the rating of a product; and iii) detecting malicious profiles. Most research in this area does not capture social interactions and relies on empirical or statistical approaches without considering temporal aspects. We capture the temporal spread of information using a probabilistic model and use non-linear differential equations to model the diffusion process. To predict the rating of a product, we propose a social trust model and use the matrix factorization method to estimate a user's taste by incorporating the user-item rating matrix. The effect of the tastes of a user's friends is captured using a trust model based on similarities between users and their centralities. Similarity is modeled using Vector Space Similarity and Pearson Correlation Coefficient algorithms, whereas degree, eigenvector, Katz, and PageRank are used to model centrality. As the rating of a product has tremendous influence on its saleability, social recommender systems are vulnerable to profile injection attacks that sway users' opinions towards favorable or unfavorable recommendations for a product. We propose a classification approach for detecting attackers based on attributes that indicate the likelihood that a user profile belongs to an attacker. To evaluate the performance, we inject push and nuke attacks, and use precision and recall to identify the attackers. All proposed models have been validated using datasets from Facebook, Epinions, and Digg. Results show that the proposed models are able to better predict the information spread and the rating of a product, and to identify malicious user profiles with high accuracy and low false positives.
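
    A minimal sketch of the trust-weighted prediction idea: Pearson correlation over co-rated items is combined with a simple degree-centrality weight to form a trust score, and neighbours' ratings are blended accordingly. The toy matrices, the predict helper and the use of degree centrality alone are illustrative assumptions; the dissertation's actual model also relies on matrix factorization and further centrality measures.

        import numpy as np

        # Toy user-item rating matrix (0 = unrated); rows are users, columns are items.
        R = np.array([[5, 3, 0, 1],
                      [4, 0, 0, 1],
                      [1, 1, 0, 5],
                      [1, 0, 4, 4]], dtype=float)

        # Toy social graph as an adjacency matrix (who trusts/follows whom).
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

        def pearson(u, v):
            """Pearson correlation over items rated by both users."""
            mask = (u > 0) & (v > 0)
            if mask.sum() < 2:
                return 0.0
            a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            return float(a @ b / denom) if denom > 0 else 0.0

        def predict(R, A, user, item):
            """Trust-weighted rating: similarity times normalized degree centrality."""
            centrality = A.sum(axis=1) / A.sum()
            num = den = 0.0
            for v in range(R.shape[0]):
                if v == user or R[v, item] == 0:
                    continue
                trust = max(pearson(R[user], R[v]), 0.0) * centrality[v]
                num += trust * R[v, item]
                den += trust
            return num / den if den > 0 else R[R[:, item] > 0, item].mean()

        print(predict(R, A, user=1, item=1))  # predicted rating of item 1 for user 1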

    One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

    The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all the diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats... Comment: Published in the Künstliche Intelligenz journal, special issue on Challenges in Interactive Machine Learning
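
    A minimal sketch of the follow-up "What if?" interaction on a toy classifier: the user picks a single feature and the explainer searches for the smallest change in that feature that flips the prediction. The what_if helper, the logistic-regression surrogate and the toy loan data are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Toy loan-approval data: features are [income, debt] (illustrative only).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))
        y = (X[:, 0] - X[:, 1] > 0).astype(int)   # approved if income outweighs debt
        clf = LogisticRegression().fit(X, y)

        def what_if(x, feature, clf, step=0.05, max_steps=200):
            """Answer a follow-up "What if?" question: how far must the chosen
            feature move before the prediction flips?"""
            original = clf.predict([x])[0]
            for direction in (+1, -1):            # try increasing, then decreasing
                z = x.copy()
                for _ in range(max_steps):
                    z[feature] += direction * step
                    if clf.predict([z])[0] != original:
                        return z, z[feature] - x[feature]
            return None, None                     # no flip found in the explored range

        x = np.array([-0.2, 0.3])                   # a rejected applicant
        cf, delta = what_if(x, feature=0, clf=clf)  # user asks about income only
        print(f"Prediction flips if feature 0 changes by {delta:+.2f}")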

    Building bridges for better machines: from machine ethics to machine explainability and back

    Be it nursing robots in Japan, self-driving buses in Germany or automated hiring systems in the USA, complex artificial computing systems have become an indispensable part of our everyday lives. Two major challenges arise from this development: machine ethics and machine explainability. Machine ethics deals with behavioral constraints on systems to ensure restricted, morally acceptable behavior; machine explainability affords the means to satisfactorily explain the actions and decisions of systems so that human users can understand these systems and, thus, be assured of their socially beneficial effects. Machine ethics and machine explainability prove particularly effective only in symbiosis. In this context, this thesis will demonstrate how machine ethics requires machine explainability and how machine explainability includes machine ethics. We develop these two facets using examples from the scenarios above. Based on these examples, we argue for a specific view of machine ethics and suggest how it can be formalized in a theoretical framework. In terms of machine explainability, we will outline how our proposed framework, by using an argumentation-based approach for decision making, can provide a foundation for machine explanations. Beyond the framework, we will also clarify the notion of machine explainability as a research area, charting its diverse and often confusing literature. To this end, we will outline what, exactly, machine explainability research aims to accomplish. Finally, we will use all these considerations as a starting point for developing evaluation criteria for good explanations, such as comprehensibility, assessability, and fidelity. Evaluating our framework using these criteria shows that it is a promising approach and promises to outperform many other explainability approaches that have been developed so far. DFG: CRC 248: Center for Perspicuous Computing; VolkswagenStiftung: Explainable Intelligent Systems
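
    The argumentation-based decision making the thesis builds on can be illustrated with Dung-style abstract argumentation: the sketch below computes the grounded extension of a small attack graph, the kind of structure from which an explanation ("the plan is rejected because the insurance argument is itself defeated") could be read off. The argument names and the attack relation are illustrative assumptions, not the thesis's actual framework.

        # Arguments and an attack relation (attacker, target); content is illustrative only.
        attacks = {("cost_overrun", "approve_plan"),
                   ("insurance_covers_cost", "cost_overrun"),
                   ("policy_expired", "insurance_covers_cost")}
        arguments = {a for pair in attacks for a in pair}

        def grounded_extension(arguments, attacks):
            """Least fixed point of the characteristic function of a Dung-style
            abstract argumentation framework."""
            attackers = {a: {s for s, t in attacks if t == a} for a in arguments}
            extension = set()
            while True:
                # An argument is acceptable if every one of its attackers is
                # itself attacked by something already in the extension.
                acceptable = {a for a in arguments
                              if all(any((d, b) in attacks for d in extension)
                                     for b in attackers[a])}
                if acceptable == extension:
                    return extension
                extension = acceptable

        print(grounded_extension(arguments, attacks))
        # -> {'policy_expired', 'cost_overrun'}: approve_plan is not defended.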

    Transparency: from tractability to model explanations

    As artificial intelligence (AI) and machine learning (ML) models get increasingly incorporated into critical applications, ranging from medical diagnosis to loan approval, they show a tremendous potential to impact society in a beneficial way; however, this is predicated on establishing a transparent relationship between humans and automation. In particular, transparency requirements span multiple dimensions, incorporating both technical and societal aspects, in order to promote the responsible use of AI/ML. In this thesis we present contributions along both of these axes, starting with the technical side and model transparency, where we study ways to enhance tractable probabilistic models (TPMs) with properties that enable acquiring an in-depth understanding of their decision-making process. Following this, we expand the scope of our work, studying how providing explanations about a model's predictions influences the extent to which humans understand and collaborate with it, and finally we design an introductory course on the emerging field of explanations in AI to foster the competent use of the developed tools and methodologies. In more detail, the complex design of TPMs makes it very challenging to extract information that conveys meaningful insights, despite the fact that they are closely related to Bayesian networks (BNs), which readily provide such information. This has led to TPMs being viewed as black-boxes, in the sense that their internal representations are elusive, in contrast to BNs. The first part of this thesis challenges this view, focusing on the question of whether it is feasible to extend certain transparent features of BNs to TPMs. We start by considering the problem of transforming TPMs into alternative graphical models in a way that makes their internal representations easy to inspect. Furthermore, we study the utility of existing algorithms in causal applications, where we identify some significant limitations. To remedy this situation, we propose a set of algorithms that result in transformations that accurately uncover the internal representations of TPMs. Following this result, we look into the problem of incorporating probabilistic constraints into TPMs. Although it is well known that BNs satisfy this property, the complex structure of TPMs impedes applying the same arguments, so advances on this problem have been very limited. Nevertheless, in this thesis we provide formal proofs that TPMs can be made to satisfy both probabilistic and causal constraints through parameter manipulation, showing that incorporating a constraint corresponds to solving a system of multilinear equations. We conclude the technical contributions by studying the problem of generating counterfactual instances for classifiers based on TPMs, motivated by the fact that BNs are the building blocks of most standard approaches to perform this task. In this thesis we propose a novel algorithm that we prove is guaranteed to generate valid counterfactuals. The resulting algorithm takes advantage of the multilinear structure of TPMs, generalizing existing approaches, while also allowing a priori constraints that should be respected by the final counterfactuals to be incorporated. In the second part of this thesis we go beyond model transparency, looking into the role of explanations in achieving an effective collaboration between human users and AI.
To study this, we design a behavioural experiment where we show that explanations provide unique insights which cannot be obtained by looking at more traditional uncertainty measures. The findings of this experiment provide evidence supporting the view that explanations and uncertainty estimates have complementary functions, advocating in favour of incorporating elements of both in order to promote a synergistic relationship between humans and AI. Finally, building on our findings, we design a course on explanations in AI, covering both the technical details of state-of-the-art algorithms and the overarching goals, limitations, and methodological approaches in the field. This contribution aims at ensuring that users can make competent use of explanations, a need that has also been highlighted by recent large-scale social initiatives. The resulting course was offered by the University of Edinburgh at MSc level, where student evaluations, as well as their performance, showcased the course's effectiveness in achieving its primary goals.
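
    The abstract above rests on the fact that evaluating a TPM, such as a sum-product network, is a single bottom-up pass whose value is a multilinear polynomial in the parameters, which is why incorporating a probabilistic constraint reduces to solving multilinear equations. Below is a minimal sketch with an illustrative two-variable network; the structure, leaf probabilities and mixture weights are assumptions for illustration, not taken from the thesis.

        # A tiny sum-product network (TPM) over two binary variables A and B.
        # Leaves are Bernoulli distributions; parameters are illustrative only.
        import itertools

        leaves = {                     # leaf name -> (variable, P(variable = 1))
            "A1": ("A", 0.9), "B1": ("B", 0.2),
            "A2": ("A", 0.1), "B2": ("B", 0.7),
        }
        weights = (0.6, 0.4)           # mixture weights of the root sum node

        def leaf_value(name, evidence):
            var, p = leaves[name]
            if var not in evidence:    # marginalised variable: indicator set to 1
                return 1.0
            return p if evidence[var] == 1 else 1.0 - p

        def spn(evidence):
            """Root = 0.6 * (A1 * B1) + 0.4 * (A2 * B2); one bottom-up pass."""
            prod1 = leaf_value("A1", evidence) * leaf_value("B1", evidence)
            prod2 = leaf_value("A2", evidence) * leaf_value("B2", evidence)
            return weights[0] * prod1 + weights[1] * prod2

        # Any marginal or conditional query costs one or two evaluations:
        p_a1 = spn({"A": 1})                          # P(A = 1)
        p_b1_given_a1 = spn({"A": 1, "B": 1}) / p_a1  # P(B = 1 | A = 1)
        print(p_a1, p_b1_given_a1)

        # Sanity check: the joint sums to 1 over all complete assignments.
        assert abs(sum(spn({"A": a, "B": b})
                       for a, b in itertools.product([0, 1], repeat=2)) - 1) < 1e-9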

    Investigation into Detection of Hardware Trojans on Printed Circuit Boards

    The modern semiconductor device manufacturing flow is becoming increasingly vulnerable to malicious implants called Hardware Trojans (HTs). With HTs becoming stealthier, a need for more accurate and efficient detection methods is becoming increasingly crucial at both the Integrated Circuit (IC) and Printed Circuit Board (PCB) levels. While HT detection at the IC level has been widely studied, there is still very limited research on detecting and preventing HTs implanted on PCBs. In recent years the rise of outsourcing the design and fabrication of electronics, including PCBs, to third parties has dramatically increased the possibility of malicious alteration and consequently the security risk for systems incorporating PCBs. Providing mechanical support for the electrical interconnections between different components, PCBs are an important part of electronic systems. Modern, complex and highly integrated designs may contain up to thirty layers, with concealed micro-vias and embedded passive components. An adversary can aim to modify the PCB design by tampering with the copper interconnections or inserting extra components in an internal layer of a multi-layer board. Similar to its IC counterpart, a PCB HT can, among other things, cause system failure or leakage of private information. The disruptive actions of a carefully designed HT attack can have catastrophic implications and should therefore be taken seriously by industry, academia and government. This thesis gives an account of work carried out in three projects concerned with HT detection on a PCB. In the first contribution a power analysis method is proposed for detecting HT components, implanted on the surface or otherwise, that consume power from the power distribution network. The assumption is that any HT device actively tampering with or eavesdropping on the signals in the PCB circuit will consume electrical power. Harvesting this side-channel effect and observing the fluctuations of power consumption on the PCB power distribution network enables the HT to be revealed. Using a purpose-built PCB prototype, an experimental setup is developed for verification of the methodology. The results confirm the ability to detect alien components on a PCB without interfering with its main functionality. In the second contribution the monitoring methodology is further developed by applying machine learning (ML) techniques to detect stealthier HTs that consume power from the I/O ports of legitimate ICs on the PCB. Two algorithms, One-Class Support Vector Machine (SVM) and Local Outlier Factor (LOF), are trained on the legitimate power consumption data harvested experimentally from the PCB prototype. Simulation results are validated through real-life measurements, and experiments are carried out on the prototype PCB. For validation of the ML classification models, one hundred categories of HTs are modelled and inserted into the datasets. Simulation results show that using the proposed methodology an HT can be detected with high prediction accuracy (F1-score of 99% for a 15 mW HT). Further, the developed ML model is uploaded to the prototype PCB for experimental validation. The results show consistency between simulations and experiments, with an average discrepancy of ±5.9% observed between One-Class SVM simulations and real-life experiments. The machine learning models developed for HT detection are low-cost in terms of memory (around 27 KB). In the third contribution an automated visual inspection methodology is proposed for detecting HTs on the surface of a PCB. It is based on a combination of conventional computer vision techniques and a dual-tower Siamese Neural Network (SNN), modelled as a three-stage pipeline. In the interest of making the proposed methodology broadly applicable, particular emphasis is placed on the imaging modality of choice: a regular digital optical camera. The dataset of PCB images is developed in the controlled environment of a photographic tent. The novelty in this work is that, instead of generic production fault detection, the algorithm is optimised and trained specifically for detecting implanted HT components on a PCB, be they active or passive. The proposed HT detection methodology is trained and tested with three groups of HTs, categorised based on their surface area, ranging from 4 mm² to 280 mm² and above. The results show that it is possible to reach an effective detection accuracy of 95.1% for HTs as small as 4 mm². For HTs with a surface area larger than 280 mm² the detection accuracy is around 96.1%, while the average performance across all HT groups is 95.6%.
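
    A minimal sketch of the kind of one-class novelty detection described in the second contribution, using scikit-learn's One-Class SVM and Local Outlier Factor trained on legitimate power-consumption features only. The synthetic per-window features, the parasitic-load offsets and all hyperparameters are illustrative assumptions, not the thesis's measured data or settings.

        import numpy as np
        from sklearn.svm import OneClassSVM
        from sklearn.neighbors import LocalOutlierFactor

        rng = np.random.default_rng(42)

        # Synthetic training data: per-window power features measured on a known-good
        # board, e.g. [mean current (mA), variance] per time window.
        golden = np.column_stack([rng.normal(120.0, 2.0, 500),
                                  rng.normal(1.0, 0.1, 500)])

        # Test data: the same board plus windows where a parasitic load draws extra power.
        clean = np.column_stack([rng.normal(120.0, 2.0, 50), rng.normal(1.0, 0.1, 50)])
        trojan = np.column_stack([rng.normal(124.0, 2.0, 50), rng.normal(1.3, 0.1, 50)])
        test = np.vstack([clean, trojan])

        # Both models are fitted on legitimate consumption only (novelty detection).
        ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(golden)
        lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(golden)

        for name, model in [("One-Class SVM", ocsvm), ("LOF", lof)]:
            pred = model.predict(test)          # +1 = legitimate, -1 = anomalous
            flagged = (pred == -1)
            print(f"{name}: flagged {flagged[50:].mean():.0%} of trojan windows, "
                  f"{flagged[:50].mean():.0%} false alarms")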

    The Proceedings of 14th Australian Information Security Management Conference, 5-6 December 2016, Edith Cowan University, Perth, Australia

    The annual Security Congress, run by the Security Research Institute at Edith Cowan University, includes the Australian Information Security and Management Conference. Now in its fourteenth year, the conference remains popular for its diverse content and mixture of technical research and discussion papers. The area of information security and management continues to be varied, as is reflected by the wide variety of subject matter covered by the papers this year. The conference has drawn interest and papers from within Australia and internationally. All submitted papers were subject to a double blind peer review process. Fifteen papers were submitted from Australia and overseas, of which ten were accepted for final presentation and publication. We wish to thank the reviewers for kindly volunteering their time and expertise in support of this event. We would also like to thank the conference committee who have organised yet another successful congress. Events such as this are impossible without the tireless efforts of such people in reviewing and editing the conference papers, and assisting with the planning, organisation and execution of the conferences. We also extend a vote of thanks to our sponsors for both the financial and moral support provided to the conference. Finally, thank you to the administrative and technical staff, and students of the ECU Security Research Institute for their contributions to the running of the conference.

    Proceedings of the 2020 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    In 2020, the annual joint workshop of Fraunhofer IOSB and the Lehrstuhl für Interaktive Echtzeitsysteme took place. From 27 to 31 July, the doctoral students of both institutes presented the current state of their research on topics such as AI, machine learning, computer vision, usage control, and metrology. The results of these presentations are collected in this volume as technical reports.

    Time and Reconciliation. Dealing with Festering Wounds
