RADIC Voice Authentication: Replay Attack Detection using Image Classification for Voice Authentication Systems
Systems like Google Home, Alexa, and Siri that use voice-based authentication to verify their users' identities are vulnerable to voice replay attacks. These attacks gain unauthorized access to voice-controlled devices or systems by replaying recordings of passphrases and voice commands. This shows the need for more resilient voice-based authentication systems that can detect voice replay attacks.
This thesis implements a system that detects voice-based replay attacks by using deep learning and image classification of voice spectrograms to differentiate between live and recorded speech. Tests of this system indicate that the approach represents a promising direction for detecting voice-based replay attacks.
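The thesis's own implementation is not reproduced here. As a rough illustration of the first stage of the pipeline it describes, the sketch below converts an audio signal into a log-magnitude spectrogram that can be treated as an image and passed to a classifier; the frame length and hop size are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Convert a 1-D audio signal into a 2-D log-magnitude spectrogram.

    The resulting array can be treated as a grayscale image and fed to an
    image classifier that separates live from replayed speech.
    """
    # Slice the signal into overlapping, Hann-windowed frames.
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # Magnitude of the real FFT of each frame, log-compressed so that
    # quieter spectral detail remains visible to the classifier.
    spectrum = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(spectrum)

# One second of a 440 Hz tone at 16 kHz as a stand-in for recorded speech.
sr = 16000
t = np.arange(sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t))
```

The 2-D output (time frames by frequency bins) is what an image classifier would consume; replay artifacts such as loudspeaker frequency response tend to show up as differences in this representation.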
Analytical validation of innovative magneto-inertial outcomes: a controlled environment study.
Peer reviewed
Under construction: infrastructure and modern fiction
In this dissertation, I argue that infrastructural development, with its technological promises but widening geographic disparities and social and environmental consequences, informs both the narrative content and aesthetic forms of modernist and contemporary Anglophone fiction. Despite its prevalent material forms—roads, rails, pipes, and wires—infrastructure poses particular formal and narrative problems, often receding into the background as mere setting. Addressing how literary fiction theorizes the experience of infrastructure requires reading "infrastructurally": that is, paying attention to the seemingly mundane interactions between characters and their built environments. The writers central to this project—James Joyce, William Faulkner, Karen Tei Yamashita, and Mohsin Hamid—take up the representational challenges posed by infrastructure by bringing transit networks, sanitation systems, and electrical grids, and the histories of their development and use, into the foreground. These writers call attention to the political dimensions of built environments, revealing the ways infrastructures produce, reinforce, and perpetuate racial and socioeconomic fault lines. They also attempt to formalize the material relations of power inscribed by and within infrastructure; the novel itself becomes an imaginary counterpart to the technologies of infrastructure, a form that shapes and constrains what types of social action and affiliation are possible.
Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study
This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies that alters our perception of the machine from an instrument of human engineering into a thinking peer, elevating cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine's ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and influenced by public policy theories, philosophical constructs, and societal initiatives.
Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives
The data economy relies on data-driven systems, and complex machine learning applications are fueled by them. Unfortunately, however, machine learning models are exposed to fraudulent activities and adversarial attacks, which threaten their security and trustworthiness. In the last decade or so, research interest in adversarial machine learning has grown significantly, revealing how learning applications can be severely impacted by effective attacks. Although early results of adversarial machine learning indicate the huge potential of the approach in specific domains such as image processing, there is still a gap in both the research literature and practice regarding how to generalize adversarial techniques to other domains and applications. Fraud detection is a critical defense mechanism for the data economy, as it is for other applications as well, and it poses several challenges for machine learning. In this work, we describe how attacks against fraud detection systems differ from other applications of adversarial machine learning and propose a number of interesting directions to bridge this gap.
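The paper itself proposes no specific algorithm. As a generic illustration of the kind of evasion attack it discusses, the sketch below perturbs a transaction's features against a simple logistic fraud scorer; the model, weights, and step size are all hypothetical, and real fraud features are far more constrained, which is part of the gap the paper highlights.

```python
import numpy as np

def fraud_score(x, w, b):
    """Probability that transaction features x are fraudulent (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def evade(x, w, eps=0.1):
    """FGSM-style evasion: nudge each feature against the score gradient.

    For a logistic model, the gradient of the score w.r.t. x is proportional
    to w, so stepping along -sign(w) lowers the fraud score.
    """
    return x - eps * np.sign(w)

# Hypothetical fraud model and a transaction it flags as fraudulent.
w = np.array([2.0, -1.0, 3.0])
b = -0.5
x = np.array([1.0, 0.0, 1.0])
x_adv = evade(x, w, eps=0.1)
```

Unlike images, fraud features (amounts, counts, merchant categories) are often discrete or bounded by business rules, so a naive continuous perturbation like this may not be realizable; generalizing such attacks to those constraints is one of the open directions the paper identifies.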
Designs of Blackness
Across more than two centuries, Afro-America has created a huge and dazzling variety of literary self-expression. Designs of Blackness provides less a narrative literary history than, precisely, a series of mappings—each literary-critical and comparative while at the same time offering cultural and historical context. This carefully re-edited version of the 1998 publication opens with an estimation of the earliest African American voice in the names of Phillis Wheatley and her contemporaries. It then takes up the huge span of autobiography from Frederick Douglass through to Maya Angelou. "Harlem on My Mind," which follows, sets out the literary contours of America's premier black city. Womanism, Alice Walker's presiding term, is given full due in an analysis of fiction from Harriet E. Wilson to Toni Morrison. Richard Wright is approached not as some regulation "realist" but as a more inward, at times near-surreal, author. Decadology has its risks, but the 1940s have rarely been approached as a unique era of war and peace, especially in African American texts. Beat Generation studies usually adhere to Ginsberg and Kerouac, but black Beat writing invites its own chapter in the names of Amiri Baraka, Ted Joans, and Bob Kaufman. The 1960s has long become a mythic change-decade, in few greater respects than as a black theatre both of the stage and of politics. In Leon Forrest, African America had a figure of the postmodern turn: his work is explored in its own right and for how it takes its place in the context of other reflexive black fiction. "African American Fictions of Passing" unpacks the whole deceptive trope of "race" in writing from William Wells Brown through to Charles Johnson. The two newly added chapters pursue African American literary achievement into the Obama-Trump century, fiction from Octavia Butler to Darryl Pinkney, poetry from Rita Dove to Kevin Young.
Methods of Automated Program Testing Using Deep Neural Networks
Relevance of the topic. Machine Learning (ML) models play an important role in various applications. In particular, in recent years, Deep Neural Networks (DNNs) have been used across many areas of science and engineering. Given this growth in use, possible errors in DNN models can raise serious concerns about their reliability and cause significant losses. Detecting erroneous behavior in any machine learning system, and especially in DNNs, is therefore critical. Software testing is a widely used mechanism for detecting errors. However, since the exact output of most DNN models is not known for a given input, traditional software testing techniques cannot be applied directly. In recent years, several testing methods and adequacy criteria have been proposed for testing DNNs. This thesis investigates three types of DNN testing methods using text and image inputs.
The object of research is the automation of program testing using deep neural networks.
The subject of research is methods of automated testing of programs using deep neural networks.
Purpose: the development of effective methods for testing programs using deep neural networks.
Scientific novelty: Testing methods and criteria for evaluating their effectiveness are proposed for programs that use deep neural networks; by partially combining different methodologies into a single proposed one, they improve testing accuracy and optimize the testing process itself.
Practical value: Automated testing of programs using deep neural networks greatly facilitates the testing of such complex programs and significantly improves testing quality, since this approach eliminates the human factor and the possibility of any system vulnerabilities reaching the end user.
Approbation of the work. The main provisions and results of the work were presented and discussed at the XV Scientific Conference of Master's and Postgraduate Students "Applied Mathematics and Computing" (PMK-2022) and at the international scientific and practical conference "Science, Education, Technology and Society in the XXI Century: Scientific Ideas and Mechanisms of Implementation".
Structure and scope of the work. The master's thesis consists of an introduction, four chapters, and conclusions.
The introduction gives a general description of the work, assesses the current state of the problem, substantiates the relevance of the research area, formulates the purpose and objectives of the research, shows the scientific novelty of the results and the practical value of the work, and provides information on the results and their implementation.
The first chapter presents the general theory of deep neural networks. The architecture of neural networks and their key components are discussed in detail, along with ways of testing such networks.
The second chapter reviews several methods of testing deep neural networks and identifies their advantages and disadvantages.
The third chapter describes three experiments with different methods of testing deep neural networks.
The fourth chapter presents the newly proposed methodology, which combines the best qualities of the previously reviewed existing methodologies.
The conclusions present the results of the work.
The work is presented on 90 pages and includes references to the list of literature sources used.
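The thesis's methods are not reproduced in this abstract. As an illustration of one common flavor of DNN testing it surveys, the sketch below checks a model's prediction consistency under small, label-preserving perturbations, which sidesteps the problem that the exact correct output of a DNN is unknown; the toy linear model, noise scale, and threshold are illustrative assumptions.

```python
import numpy as np

def consistency_rate(model, inputs, perturb):
    """Fraction of inputs whose predicted class survives a perturbation.

    This metamorphic-style check needs no ground-truth labels: it only
    requires that small, label-preserving changes to the input leave the
    predicted class unchanged.
    """
    kept = [np.argmax(model(x)) == np.argmax(model(perturb(x))) for x in inputs]
    return float(np.mean(kept))

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: 10 features -> 3 class logits.
W = rng.normal(size=(3, 10))
model = lambda x: W @ x

inputs = rng.normal(size=(20, 10))
tiny_noise = lambda x: x + 1e-3 * rng.normal(size=x.shape)

rate = consistency_rate(model, inputs, tiny_noise)
```

Inputs whose prediction flips under a tiny perturbation are exactly the kind of suspicious test cases that DNN testing methods flag for inspection or retraining.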
Learning About Simulated Adversaries from Human Defenders using Interactive Cyber-Defense Games
Given the increase in cybercrime, cybersecurity analysts (i.e., defenders) are in high demand. Defenders must monitor an organization's network to evaluate threats and potential breaches into the network. Adversary simulation is commonly used to test defenders' performance against known threats to organizations. However, it is unclear how effective this training process is in preparing defenders for this highly demanding job. In this paper, we demonstrate how to use adversarial algorithms to investigate defenders' learning of defense strategies, using interactive cyber defense games. Our Interactive Defense Game (IDG) represents a cyber defense scenario that requires constant monitoring of incoming network alerts and allows a defender to analyze, remove, and restore services based on the events observed in a network. The participants in our study faced one of two types of simulated adversaries: a Beeline adversary, a fast, targeted, and informed attacker; and a Meander adversary, a slow attacker that wanders the network until it finds the right target to exploit. Our results suggest that although human defenders initially had more difficulty stopping the Beeline adversary, they were able to learn to stop it by taking advantage of its attack strategy. Participants who played against the Beeline adversary learned to anticipate the adversary and take more proactive actions while decreasing their reactive actions. These findings have implications for understanding how to help cybersecurity analysts speed up their training.
Comment: Submitted to the Journal of Cybersecurity.
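The paper's actual game environment is not shown here. As a toy rendering of the two adversary types it describes, the sketch below contrasts a Beeline attacker that walks a known path straight to its target with a Meander attacker that wanders at random; the network layout, host names, and step logic are invented purely for illustration.

```python
import random

class BeelineAdversary:
    """Fast, informed attacker: follows a known path straight to the target."""
    def __init__(self, path):
        self.path = list(path)  # precomputed route ending at the target
        self.step_count = 0

    def step(self):
        host = self.path[min(self.step_count, len(self.path) - 1)]
        self.step_count += 1
        return host

class MeanderAdversary:
    """Slow attacker: random-walks the network until it hits a target."""
    def __init__(self, hosts, seed=0):
        self.hosts = list(hosts)
        self.rng = random.Random(seed)

    def step(self):
        return self.rng.choice(self.hosts)

def steps_to_target(adversary, target, max_steps=1000):
    """Number of steps the adversary needs to first reach the target."""
    for i in range(1, max_steps + 1):
        if adversary.step() == target:
            return i
    return max_steps

hosts = ["web", "mail", "db", "files", "domain-controller"]
beeline = BeelineAdversary(path=["web", "files", "domain-controller"])
meander = MeanderAdversary(hosts, seed=1)
```

The contrast mirrors the paper's finding: the Beeline attacker's fixed, fast route is initially harder to stop but, being predictable, can be anticipated by a defender, whereas the Meander attacker's slow random walk gives the defender more time but less structure to exploit.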
Kairos: Practical Intrusion Detection and Investigation using Whole-system Provenance
Provenance graphs are structured audit logs that describe the history of a system's execution. Recent studies have explored a variety of techniques to analyze provenance graphs for automated host intrusion detection, focusing particularly on advanced persistent threats. Sifting through their design documents, we identify four common dimensions that drive the development of provenance-based intrusion detection systems (PIDSes): scope (can PIDSes detect modern attacks that infiltrate across application boundaries?), attack agnosticity (can PIDSes detect novel attacks without a priori knowledge of attack characteristics?), timeliness (can PIDSes efficiently monitor host systems as they run?), and attack reconstruction (can PIDSes distill attack activity from large provenance graphs so that sysadmins can easily understand and quickly respond to system intrusion?). We present KAIROS, the first PIDS that simultaneously satisfies the desiderata in all four dimensions, whereas existing approaches sacrifice at least one and struggle to achieve comparable detection performance.
Kairos leverages a novel graph neural network-based encoder-decoder architecture that learns the temporal evolution of a provenance graph's structural changes to quantify the degree of anomalousness for each system event. Then, based on this fine-grained information, Kairos reconstructs attack footprints, generating compact summary graphs that accurately describe malicious activity over a stream of system audit logs. Using state-of-the-art benchmark datasets, we demonstrate that Kairos outperforms previous approaches.
Comment: 23 pages, 16 figures, to appear in the 45th IEEE Symposium on Security and Privacy (S&P'24).
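Kairos's graph neural network encoder-decoder is far more involved than can be shown here. Purely to illustrate the reconstruction-error idea behind scoring each event's anomalousness, the sketch below substitutes a linear (PCA-style) encoder-decoder over hypothetical event feature vectors; the feature dimensions and data are invented for illustration.

```python
import numpy as np

def anomaly_scores(events, k=2):
    """Score each event by how poorly a rank-k linear encoder-decoder
    reconstructs its feature vector; higher scores mean more anomalous.

    Kairos learns its encoder-decoder with a graph neural network over the
    provenance graph's temporal structure; a PCA-style linear model merely
    stands in for it here.
    """
    X = events - events.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:k].T      # encode into k latent dimensions
    X_hat = Z @ Vt[:k]    # decode back to feature space
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)

# 100 benign events living in a 2-D subspace of a 5-D feature space,
# plus one malicious event that sticks out along an otherwise unused axis.
events = np.zeros((101, 5))
events[:100, :2] = rng.normal(size=(100, 2))
events[100] = [0.5, -0.3, 0.0, 0.0, 5.0]

scores = anomaly_scores(events, k=2)
```

Events whose behavior the model has learned to predict reconstruct well and score low, while events that break the learned pattern score high; it is these fine-grained per-event scores that Kairos then aggregates into compact attack summary graphs.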