
    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretic foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical-layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical-layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicle networks; and security and privacy design for underwater communication networks.

    NASA/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program 1992

    Since 1964, the National Aeronautics and Space Administration (NASA) has supported a program of summer faculty fellowships for engineering and science educators. In a series of collaborations between NASA research and development centers and nearby universities, engineering faculty members spend 10 weeks working with professional peers on research. The Summer Faculty Program Committee of the American Society for Engineering Education supervises the programs. The objectives of the program are (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of participants' institutions; and (4) to contribute to the research objectives of the NASA center.

    NASA Lewis Research Center Futuring Workshop

    On October 21 and 22, 1986, the Futures Group ran a two-day Futuring Workshop on the premises of NASA Lewis Research Center. The workshop had four main goals: to acquaint participants with the general history of technology forecasting; to familiarize them with the range of forecasting methodologies; to acquaint them with the range of applicability, strengths, and limitations of each method; and to offer some hands-on experience by working through both judgmental and quantitative case studies. Among the topics addressed during the workshop were: information sources; judgmental techniques; quantitative techniques; the merger of judgment with quantitative measurement; data collection methods; and dealing with uncertainty.

    Power and wind power: exploring experiences of renewable energy planning processes in Scotland.

    Energy use and production have become highly salient within both national and international policy. This reflects an international recognition of the need to cut emissions in order to mitigate the threats of climate change. Within the UK there is significant policy support for renewable energy development generally, and wind power in particular. Nevertheless, the UK is not expected to meet its targets for renewable energy production. This is often portrayed as being the result of localised public opposition to particular proposed developments. However, this thesis challenges the notion that local objectors are powerful actors within renewable energy deployment. A detailed, multi-method case study of one planning application for a wind power development was conducted in order to explore how the planning process is experienced and perceived by the various actors involved (i.e. representatives of the developers, local objectors, and local supporters). The findings refute the assertion that localised opposition presents significant obstacles to the development of renewable energy; they instead highlight the limited influence of objectors. In order to understand the many different forms of power which may be exercised, the research employs Lukes' three-dimensional view of power as a framework for how the concept is to be understood. Through this framework, the thesis considers not only the power of objectors, but also that of prospective developers, and the forms of power found within the structures of the planning system. Power is considered to be visible not only in the outcomes of decision-making processes but also in the processes themselves. It is shown that, whilst planning processes are presented as being public and democratic, considerable power is exercised in controlling the participation that is allowed and, ultimately, the range of outcomes which can be achieved.

    Digital Signal Processing (Second Edition)

    This book provides an account of the mathematical background, computational methods and software engineering associated with digital signal processing. The aim has been to provide the reader with the mathematical methods required for signal analysis, which are then used to develop models and algorithms for processing digital signals, and finally to encourage the reader to design software solutions for Digital Signal Processing (DSP). In this way, the reader is invited to develop a small DSP library that can then be expanded further with a focus on his/her research interests and applications. There are, of course, many excellent books and software systems available on this subject area. However, in many of these publications, the relationship between the mathematical methods associated with signal analysis and the software available for processing data is not always clear. Either the publications concentrate on mathematical aspects that are not focused on practical programming solutions, or they elaborate on the software development of solutions in terms of working ‘black-boxes’ without covering the mathematical background and analysis associated with the design of these software solutions. Thus, this book has been written with the aim of giving the reader a technical overview of the mathematics and software associated with the ‘art’ of developing numerical algorithms and designing software solutions for DSP, all of which is built on firm mathematical foundations. For this reason, the work is, by necessity, rather lengthy and covers a wide range of subjects organised into four principal parts. Part I provides the mathematical background for the analysis of signals; Part II considers the computational techniques (principally those associated with linear algebra and the linear eigenvalue problem) required for array processing and associated analysis (error analysis, for example).

Part III introduces the reader to the essential elements of software engineering using the C programming language, tailored to those features that are used for developing C functions or modules for building a DSP library. The material associated with Parts I, II and III is then used to build up a DSP system by defining a number of ‘problems’ and then addressing the solutions in terms of presenting an appropriate mathematical model, undertaking the necessary analysis, developing an appropriate algorithm and then coding the solution in C. This material forms the basis for Part IV of this work. In most chapters, a series of tutorial problems is given for the reader to attempt, with answers provided in Appendix A. These problems include theoretical, computational and programming exercises. Part II of this work is relatively long and arguably contains too much material on the computational methods for linear algebra. However, this material, together with the complementary material on vector and matrix norms, forms the computational basis for many methods of digital signal processing. Moreover, this important and widely researched subject area forms the foundations not only of digital signal processing and control engineering, for example, but also of numerical analysis in general. The material presented in this book is based on the lecture notes and supplementary material developed by the author for an advanced Masters course, ‘Digital Signal Processing’, which was first established at Cranfield University, Bedford, in 1990 and modified when the author moved to De Montfort University, Leicester, in 1994. The programmes are still operating at these universities, and the material has been used by over 700 graduates since its establishment and development in the early 1990s.

The material was enhanced and developed further when the author moved to the Department of Electronic and Electrical Engineering at Loughborough University in 2003 and now forms part of the Department's post-graduate programmes in Communication Systems Engineering. The original Masters programme included a taught component covering a period of six months based on two semesters, each semester being composed of four modules. The material in this work covers the first semester, and its four parts reflect the four modules delivered. The material delivered in the second semester is published as a companion volume to this work, entitled Digital Image Processing (Horwood Publishing, 2005), which covers the mathematical modelling of imaging systems and the techniques that have been developed to process and analyse the data such systems provide. Since the publication of the first edition of this work in 2003, a number of minor changes and some additions have been made. The material on programming and software engineering in Chapters 11 and 12 has been extended. This includes some additions and further solved and supplementary questions, which are included throughout the text. Nevertheless, it is worth pointing out that, while every effort has been made by the author and publisher to provide a work that is error free, it is inevitable that typing errors and various ‘bugs’ will occur. If so, and in particular if the reader starts to suffer from a lack of comprehension over certain aspects of the material (due to errors or otherwise), then he/she should not assume that there is something wrong with themselves, but with the author.

    Learning from expert advice framework: Algorithms and applications

    Online recommendation systems are widely used by retailers, digital marketers, and especially e-commerce applications. Popular sites such as Netflix and Amazon suggest movies or general merchandise to their clients based on recommendations from peers. At the core of a recommendation system resides a prediction algorithm which, based on recommendations received from a set of experts (users), recommends objects to other users. After a user "consumes" an object, the feedback provided to the system is used to assess the performance of the experts at that round and to adjust the predictions of the recommendation system for future rounds. This so-called "learning from expert advice" framework has been extensively studied in the literature. In this dissertation, we investigate various settings and applications ranging from partial information and adversarial scenarios to limited resources. We propose provable algorithms for such systems, along with theoretical and experimental results. In the first part of the thesis, we focus our attention on a generalized model of learning from expert advice in which experts may abstain from participating at some rounds. Our proposed online algorithm falls into the class of weighted average predictors and uses a time-varying multiplicative weight update rule. This update rule changes the weight of an expert based on his relative performance compared to the average performance of the experts available at the current round. We prove the convergence of our algorithm to the best expert, defined in terms of both availability and accuracy, in the stochastic setting. Next, we study the optimal adversarial strategies against the weighted average prediction algorithm. All but one expert are honest, and the malicious expert's goal is to sabotage the performance of the algorithm by strategically providing dishonest recommendations. We formulate the problem as a Markov decision process (MDP) and apply policy iteration to solve it.

For the logarithmic loss, we prove that the optimal strategy for the adversary is the greedy policy, whereas for the absolute loss, in the two-experts, discounted-cost setting, we prove that the optimal strategy is a threshold policy. We extend the results to the infinite-horizon problem and find the exact thresholds for the stationary optimal policy. To investigate the extended problem, we use a mean-field approach in the N-experts setting to find the optimal strategy when the predictions of the honest experts are i.i.d. In addition to designing an effective weight update rule and investigating optimal strategies of malicious experts, we also consider active learning applications for the learning with expert advice framework. Here the target is to reduce the number of label requests while keeping the regret bound as small as possible. We propose two algorithms, EPSL and EPAL, which efficiently decide whether to request a label for each object. In essence, the idea of both algorithms is to examine the ranges of the experts' opinions and to decide whether to acquire a label based on the maximum difference of those opinions, using a randomized policy. Both algorithms obtain a nearly optimal regret bound up to a constant depending on the characteristics of the experts' predictions. Last but not least, we turn our attention to the generalized "best arm identification" problem in which, at each time, there is a subset of products whose rewards or profits are unknown (but follow some fixed distributions), and the goal is to select the best product to recommend to users after a number of sampling rounds. We propose UCB-based (Upper Confidence Bound) algorithms that provide flexible parameter tuning based on the availability of each arm in the collection. We also propose a simple, yet efficient, uniform sampling algorithm for this problem. We prove that, for these algorithms, the probability of selecting an incorrect arm decays exponentially over time.

    Integration system: A problem-solving framework for seeking stability in complex conflictual situations

    The thesis examines some of the methodologies used for conflict study and analysis; it reviews Operational Research based approaches and methodologies from other areas of study that have been, and are still being, used for the study and analysis of conflict situations in complex systems. The thesis argues against the prevalent use of single methodologies for such systems, and calls for the adoption of approaches that allow the use of multiple methodologies, which would place the emphasis on the "problem" rather than on any particular approach or methodology. The nature, causes and effects, and ecology of conflict, and the concept of issue relevance and irrelevance are examined, as well as the role of perceptions. The factors determining the development, level and scope of conflicts are reviewed with the aim of ascertaining their importance to conflict outcomes and when meaningful intervention could be made during conflict situations. Various outcomes of conflict, primarily management, dissolution, and resolution, are discussed, along with their relative strengths and weaknesses as strategies for handling conflicts. Case studies are used to examine and support arguments about how different conflict outcomes arise, and some proposals are made for the study of alternative futures. It is argued that undesired conflicts could be reduced or prevented in complex interaction systems through the deliberate design and incorporation, into such systems, of structures and mechanisms that will serve as integration systems. These integration systems involve all the parties in an interaction system and are intended to reconcile views, clarify positions, inform the parties about each other, and assist in the formulation of joint responses to negative internal and external stimuli. An outline structure of an integration system is given, together with how it could be developed within a system.

Many methodologies and approaches are based on the premise of a "prima facie" existence of a conflict; a tool is suggested in the thesis that will assist analysts, observers, or any interested party to monitor the relationship in an interaction system. This tool concerns what I have called the Y-points and Y-diagrams. The Y-concepts are based on the notion that there are periods in an interaction when a decision can be consciously taken to escalate or de-escalate a situation. The approach advocated in the thesis is based on two assumptions: the first is that the parties prefer a "normal" relationship to a conflictual one; the second is that the parties in a conflict would prefer the resolution of the conflict and its attendant stability to an unending management of the situation. Consequently, the main thrust of the arguments in the thesis is on conflict resolution and the design of stability into complex interaction systems.

    If interpretability is the answer, what is the question?

    Due to the ability to model even complex dependencies, machine learning (ML) can be used to tackle a broad range of (high-stakes) prediction problems. The complexity of the resulting models comes at the cost of transparency, meaning that it is difficult to understand the model by inspecting its parameters. This opacity is considered problematic since it hampers the transfer of knowledge from the model, undermines the agency of individuals affected by algorithmic decisions, and makes it more challenging to expose non-robust or unethical behaviour. To tackle the opacity of ML models, the field of interpretable machine learning (IML) has emerged. The field is motivated by the idea that if we could understand the model's behaviour -- either by making the model itself interpretable or by inspecting post-hoc explanations -- we could also expose unethical and non-robust behaviour, learn about the data generating process, and restore the agency of affected individuals. IML is not only a highly active area of research, but the developed techniques are also widely applied in both industry and the sciences. Despite the popularity of IML, the field faces fundamental criticism, questioning whether IML actually helps in tackling the aforementioned problems of ML and even whether it should be a field of research in the first place: First and foremost, IML is criticised for lacking a clear goal and, thus, a clear definition of what it means for a model to be interpretable. On a similar note, the meaning of existing methods is often unclear, and thus they may be misunderstood or even misused to hide unethical behaviour. Moreover, estimating conditional-sampling-based techniques poses a significant computational challenge. With the contributions included in this thesis, we tackle these three challenges for IML. We join a range of work by arguing that the field struggles to define and evaluate "interpretability" because incoherent interpretation goals are conflated. 
However, the different goals can be disentangled such that coherent requirements can inform the derivation of the respective target estimands. We demonstrate this with the examples of two interpretation contexts: recourse and scientific inference. To tackle the misinterpretation of IML methods, we suggest deriving formal interpretation rules that link explanations to aspects of the model and data. In our work, we specifically focus on interpreting feature importance. Furthermore, we collect interpretation pitfalls and communicate them to a broader audience. To efficiently estimate conditional-sampling-based interpretation techniques, we propose two methods that leverage the dependence structure in the data to simplify the estimation problems for Conditional Feature Importance (CFI) and SAGE. A causal perspective proved to be vital in tackling the challenges: first, since IML problems such as algorithmic recourse are inherently causal; second, since causality helps to disentangle the different aspects of model and data and, therefore, to distinguish the insights that different methods provide; and third, since algorithms developed for causal structure learning can be leveraged for the efficient estimation of conditional-sampling-based IML methods.