
    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable; moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions and latent, unobserved or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address the unpredictable security challenges borne out of UUURs. While survivability is a well-studied phenomenon in non-extinct prey animals, applying prey survivability directly to cloud computing is challenging due to contradicting end goals. Managing evolving survivability goals and requirements under contradicting environmental conditions adds to the challenge. To address these challenges, this thesis proposes a holistic taxonomy that integrates multiple, disparate perspectives on cloud security challenges. In addition, it adopts TRIZ (Teoriya Resheniya Izobretatelskikh Zadach, the theory of inventive problem solving) to derive prey-inspired solutions by resolving contradictions. First, it develops a three-step process to facilitate the interdomain transfer of concepts from nature to the cloud. Moreover, TRIZ's generic approach suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework's run-time is pushed into user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improves the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improves their overall survivability. Hypothesis testing supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, which in turn enables Pi-CCSF's decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.
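    To make the escalation idea concrete, a minimal sketch follows of how escalating survivability actions for a VM might be driven by a vitality metric. The action names, threshold, and recovery increment are illustrative assumptions, not the thesis's actual Pi-CCSF design.

```python
# Illustrative sketch of an escalating survivability loop for cloud VMs.
# All names, thresholds, and actions are assumptions for illustration;
# they do not reproduce the thesis's actual Pi-CCSF implementation.
from dataclasses import dataclass

ACTIONS = ["monitor", "isolate", "migrate", "regenerate"]  # escalation order

@dataclass
class VM:
    name: str
    vitality: float  # hypothetical metric: 0.0 (dead) .. 1.0 (healthy)
    level: int = 0   # current escalation level

def escalate(vm: VM, threshold: float = 0.6) -> str:
    """Escalate to the next survivability action while vitality stays low."""
    if vm.vitality < threshold and vm.level < len(ACTIONS) - 1:
        vm.level += 1
    return ACTIONS[vm.level]

vm = VM("vm-1", vitality=0.35)
for _ in range(3):
    action = escalate(vm)
    print(f"{vm.name}: vitality={vm.vitality:.2f} -> action={action}")
    vm.vitality += 0.05  # assume each action restores some vitality
```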

    A Corpus Driven Computational Intelligence Framework for Deception Detection in Financial Text

    Financial fraud rampages onwards, seemingly uncontained. The cost of fraud in the UK is estimated to be as high as £193bn a year [1]. From a data science perspective, and hitherto less explored, this thesis demonstrates how the use of linguistic features to drive data mining algorithms can aid in unravelling fraud. To this end, the spotlight is turned on Financial Statement Fraud (FSF), known to be the costliest type of fraud [2]. A new corpus of 6.3 million words is composed of 102 annual reports/10-Ks (narrative sections) from firms formally indicted for FSF, juxtaposed with 306 non-fraud firms of similar size and industrial grouping. Unlike other similar studies, this thesis takes a wide-angled view and extracts a range of features of different categories from the corpus. These linguistic correlates of deception are uncovered using a variety of techniques and tools. Corpus linguistics methodology is applied to extract keywords and to examine linguistic structure. N-grams are extracted to draw out collocations. Readability measurement in financial text is advanced through the extraction of new indices that probe the text at a deeper level. Cognitive and perceptual processes are also picked out. Tone, intention and liquidity are gauged using customised word lists. Linguistic ratios are derived from grammatical constructs and word categories. An attempt is also made to determine ‘what’ was said as opposed to ‘how’. Further, a new module is developed to condense synonyms into concepts. Lastly, frequency counts of keywords unearthed in a previous content analysis study of financial narrative are also used. These features are then used to drive machine learning based classification and clustering algorithms to determine whether they aid in discriminating a fraud firm from a non-fraud firm. The results derived from the battery of models built typically exceed a classification accuracy of 70%. The above process is amalgamated into a framework. The process outlined, driven by empirical data, demonstrates in a practical way how linguistic analysis can aid fraud detection, and constitutes a unique contribution to deception detection studies.
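    As an illustration of how linguistic features can drive a classifier of this kind, the sketch below pairs TF-IDF n-gram features with logistic regression on toy documents. The feature set, model, and data are generic stand-ins assumed for illustration, not the thesis's corpus or feature categories.

```python
# Illustrative sketch of a linguistic-feature pipeline for fraud classification.
# TF-IDF word n-grams loosely approximate the keyword/collocation features;
# the documents and labels are toy examples, not the thesis's 10-K corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "revenues grew strongly and management remains confident",   # toy non-fraud
    "certain adjustments were recorded pending further review",  # toy fraud
]
labels = [0, 1]  # 0 = non-fraud firm, 1 = fraud firm (toy labels)

# Unigrams and bigrams stand in for the keyword and collocation features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(docs, labels)
print(model.predict(["management recorded further adjustments"]))
```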

    Minds Online: The Interface between Web Science, Cognitive Science, and the Philosophy of Mind

    Alongside existing research into the social, political and economic impacts of the Web, there is a need to study the Web from a cognitive and epistemic perspective. This is particularly so as new and emerging technologies alter the nature of our interactive engagements with the Web, transforming the extent to which our thoughts and actions are shaped by the online environment. Situated and ecological approaches to cognition are relevant to understanding the cognitive significance of the Web because of the emphasis they place on forces and factors that reside at the level of agent–world interactions. In particular, by adopting a situated or ecological approach to cognition, we are able to assess the significance of the Web from the perspective of research into embodied, extended, embedded, social and collective cognition. The results of this analysis help to reshape the interdisciplinary configuration of Web Science, expanding its theoretical and empirical remit to include the disciplines of both cognitive science and the philosophy of mind.

    If I Can't Predict My Future, Why Can AI? Exploring Human Interaction with Predictive Analytics

    This research study seeks to understand how AI-based chatbots can potentially be leveraged as tools in a PSYOP. The study is methodologically driven: it employs validated scales concerning suggestibility and human-computer interaction to assess how participants interact with a specific AI chatbot, Replika. Recent studies demonstrate the capability of GPT-based analytics to influence users' moral judgements, and this paper is interested in exploring why. The results will help draw conclusions regarding human interaction with predictive analytics (in this case Replika, a free GPT-based chatbot) to understand whether suggestibility (how easily influenced someone generally is) impacts the overall usability of AI chatbots. This project will help assess how much of a concern predictive AI chatbots should be as virtual AI influencers and other bot-based propaganda modalities emerge in the contemporary media environment. The study uses the CASA paradigm, medium theory, and Boyd's theory of conflict to explore how factors that often drive human-computer interaction, such as anthropomorphic autonomy and suspension of disbelief, potentially relate to suggestibility or chatbot usability. Overall, this study specifically explores whether suggestion can predict usability in AI chatbots.

    Designing Persuasively using Playful Elements

    Alongside productivity and communication, computers are a valuable tool for diversion and amusement. Game designers leverage the multifaceted world of computing to create applications that can be developed persuasively; designs can be formulated to compel users towards actions and behaviours ranging from engaging with the game's mechanics or micro-transactions to more complex manifestations, such as encouraging reflection via evaluation of the moral argument presented in the gameplay narrative. In my dissertation, I explore how to create compelling experiences during playful interactions. In particular, I explore how design decisions affect users' behaviours and evaluations of the gaming experience, to learn more about crafting persuasive mechanics in games. First, I present research on calibrating aspects of difficulty and character behaviour in the design of simple games to create more immersive experiences. My work on the calibration of game difficulty and enemy behaviour contributes insight into the potential of games to create engaging activities that inspire prolonged play sessions. Further work in my dissertation explores how players interact with in-game entities they perceive as human, and probes the boundaries of acceptable player interaction in co-located gaming situations. My early work gives rise to deeper questions about perspectives on co-players during gaming experiences. Specifically, I probe the question of how players perceive human versus computer-controlled teammates during a shared gaming experience. Additionally, I explore how game design factors in the context of a tightly coupled, shared, multi-touch large-display gaming experience can influence the way people interact and, in turn, their perspectives on one another, asking: ‘how can games be used persuasively to inspire positive behaviours and social interaction?’. Issues of perspective are a theme I carry forward by exploring how game dynamics – in particular the use of territoriality – can be used to foster collaborative behaviours. Further, I discuss how my work contributes to the study of persuasive game design and games with a purpose, and ground my findings in the game studies and computer science literature. Last, I outline future work, including my ambitions for using persuasive design for social good via Games4Change.

    Audiovisual representations of Artificial Intelligence in Dystopian Tech Societies: Scaremongering or Reality? The Cases of Black Mirror (Charlie Brooker, 2011), Ex Machina (Alex Garland, 2014) and Her (Spike Jonze, 2014)

    Artificial Intelligence is a concept that has fascinated humankind for millennia. Since antiquity, humans have been obsessed with the idea of creating a perfect artificial human for different aims, such as companionship or domestic help, and ancient cultures have devoted foundational texts to the artificial human. This literary occupation gradually evolved into proto-fantasy or proto-science-fiction literature in the early Middle Ages. However, it was not until the 19th century that Mary Shelley's influential work Frankenstein (1818) brought together different aspects of creating an artificial human, discussed within a broader social and psychological understanding. With the advent of audiovisual media in the 20th century, such representations of artificially created humanoids, or of other creations with some degree of consciousness, have populated both the silver screen and television. This thesis examines the societal connections of such representations of Artificial Intelligence, focusing on the TV show Black Mirror (Charlie Brooker, 2011) as well as the films Ex Machina (Alex Garland, 2014) and Her (Spike Jonze, 2014), and analyses the relationships between Artificial Intelligence and humans from a variety of perspectives and paradigms. The audiovisual analyses of the selected works are followed by an examination of how such recent technological developments are taking place in our current society. The works under examination exhort us to beware the potential dangers of AI technology and call for the implementation of strict regulations around Artificial Intelligence in order to alleviate human anxieties about technology. Keywords: Artificial Intelligence, technology, technology and society, Science Fiction, dystopia, film studies, society.

    Artificial neural networks for problems in computational cognition

    Computationally modelling human-level cognitive abilities is one of the principal goals of artificial intelligence research, one that draws together work from the human neurosciences, psychology, cognitive science, computer science, and mathematics. In the past 30 years, work towards this goal has been substantially accelerated by the development of neural network approaches, at least in part due to advances in algorithms that can train these networks efficiently [Rumelhart et al., 1986b] and computer hardware that is optimised for matrix computations [Krizhevsky et al., 2012]. Parallel to this body of work, research in social robotics has developed to the extent that embodied and socially intelligent artificial agents are becoming parts of our everyday lives. Where robots were traditionally placed as tools to improve the efficiency of industrial tasks, they are now increasingly expected to emulate humans in complex, dynamic, and unpredictable social environments. In such cases, endowing these robotic platforms with (approaching) human-like cognitive capabilities will significantly improve the efficacy of these systems, and will likely quicken their uptake as they come to be seen as safe, effective, and flexible partners in socially oriented situations such as physical healthcare, education, mental well-being, and commerce. Taken together, it would seem that neural network approaches are well placed to allow us to bestow these agents with the kinds of cognitive abilities they require to meet this goal. However, the nascent nature of the interaction between these two fields, and the risk that comes with integrating social robots too quickly into high-risk social areas, mean that there is significant work still to be done before we can convince ourselves that neural networks are the right approach to this problem. In this thesis I contribute theoretical and empirical work that lends weight to the argument that neural network approaches are well suited to modelling human cognition for use in social robots. In Chapter 1 I provide a general introduction to human cognition and neural networks and motivate the use of these approaches for problems in social robotics and human-robot interaction. This chapter is written in such a way that readers with no technical background can get a good understanding of the concepts at the centre of the thesis's aims. In Chapter 2, I provide a more in-depth and technical overview of the mathematical concepts at the heart of modern neural networks, specifically detailing the logic behind the deep learning approaches used in the empirical chapters of the thesis. While a full understanding of this chapter requires a stronger mathematical background than the previous one, the concepts are explained in such a way that a non-technical reader should come away with a solid high-level understanding of the ideas. Chapters 3 through 5 contain the empirical work carried out to address these questions. Specifically, Chapter 3 explores the viability of using deep learning to model human social-cognitive abilities by looking at the problems of subjective psychological stress and self-disclosure. I test a number of “off-the-shelf” deep learning architectures on a novel dataset and find that in all cases these models score significantly above average on the task of classifying audio segments according to how much the person performing the contained utterance believed themselves to be stressed and to be performing an act of self-disclosure. In Chapter 4, I develop the work on subjective self-disclosure modelling in human-robot social interaction by collecting a much larger multimodal dataset containing video-recorded interactions between participants and a Pepper robot. I provide a novel multimodal deep learning attention architecture and a custom loss function, and compare the performance of our model to a number of non-neural-network baselines. I find that all versions of our model significantly outperform the baseline approaches, and that our novel loss improves performance compared to other standard loss functions for regression and classification in subjective self-disclosure modelling. In Chapter 5, I move away from deep learning and consider how neural network models based more concretely on contemporary computational neuroscience might be used to bestow artificial agents with human-like cognitive abilities. Here, I detail a novel biological neural network algorithm that is able to solve cognitive planning problems by producing short-path solutions on graphs. I show how a number of such planning problems can be framed as graph traversal problems, and how our algorithm is able to form solutions to these problems in a number of experimental settings. Finally, in Chapter 6 I provide a final overview of this empirical work, explain its impact both within and beyond academia, outline a number of limitations of the approaches used, and discuss some potentially fruitful avenues for future research in these areas.
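    The graph framing in Chapter 5 can be illustrated with a minimal sketch in which states are nodes, actions are edges, and a plan is a shortest path. Plain breadth-first search stands in here for the thesis's biological network algorithm, which the abstract does not specify; the state graph is a toy assumption.

```python
# Minimal illustration of framing a planning problem as graph traversal:
# states are nodes, actions are edges, and a plan is a shortest path.
from collections import deque

def shortest_path(graph: dict, start: str, goal: str) -> list:
    """Return a shortest node sequence from start to goal, or [] if none."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return []

# Toy state graph: keys are states, values are reachable next states.
rooms = {"hall": ["kitchen", "study"], "kitchen": ["garden"], "study": ["garden"]}
print(shortest_path(rooms, "hall", "garden"))  # ['hall', 'kitchen', 'garden']
```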

    Cyber defensive capacity and capability: A perspective from the financial sector of a small state

    This thesis explores ways in which the financial sectors of small states can defend themselves against ever-growing cyber threats, and ways these states can improve their cyber defense capability in order to withstand current and future attacks. To date, the context of small states in general is understudied. This study presents the challenges financial sectors in small states face in withstanding cyberattacks. It applies a mixed-methods approach, using surveys, brainstorming sessions with financial-sector focus groups, interviews with critical infrastructure stakeholders, a literature review, a comparative analysis of secondary data, and a theoretical narrative review. The findings suggest that, for the Aruban financial sector, compliance is important: even with minimal drivers, precautionary behavior is significant. Countermeasures in the form of formal, informal, and technical controls need to be in place. This study takes the view that defending a small state such as Aruba is challenging, yet economic indicators suggest it is not outside the realm of possibility. On a theoretical level, the thesis proposes a conceptual “whole-of-cyber” model inspired by military science and the Viable System Model (VSM). The fighting-power components and the governance (S4) function form, respectively, cyber defensive capacity's shield and its capability. The “whole-of-cyber” approach may be a good way to compensate for small states' lack of resources, and collaboration may be the only way forward, as the fastest-growing need will be for advanced IT skillsets.