Configuration Management of Distributed Systems over Unreliable and Hostile Networks
Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
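The idea of using an imperative general-purpose language for idempotent configuration can be illustrated with a minimal sketch (Python here; the function names are hypothetical, not the prototype's API): each base resource checks the current state and acts only when it diverges from the target, so repeated runs converge to the same end state.

```python
import os

def ensure_file(path, content):
    """Idempotent base resource: converge a file to the desired content.

    Running this any number of times yields the same end state; work is
    only performed when the current state differs from the target.
    """
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # already converged, nothing to do
    with open(path, "w") as f:
        f.write(content)
    return True  # a change was applied

# Standard language features (functions, loops) abstract base resources:
def ensure_motd(hosts_dir, message):
    """Apply the same file resource to several hosts; report what changed."""
    changed = []
    for name in ("web1", "web2"):
        path = os.path.join(hosts_dir, name + ".motd")
        if ensure_file(path, message):
            changed.append(name)
    return changed
```

A second run of `ensure_motd` with the same message reports no changes, which is the revalidation property the abstract describes.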
Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without a stable topology, due to the asynchronous nature of the Hidden Master architecture.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of an organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, for defining new configurations in unexpected situations using the base resources, and for abstracting those using standard language features; and that such a system seems easy to learn.
Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations to encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
A Trust Management Framework for Vehicular Ad Hoc Networks
The inception of Vehicular Ad Hoc Networks (VANETs) provides an opportunity for road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, such systems can be vulnerable to malicious external entities and legitimate users. Trust management is used to address attacks from legitimate users in accordance with a user’s trust score. Trust models evaluate messages to assign rewards or punishments. This can be used to influence a driver’s future behaviour or, in extremis, to block the driver. With receiver-side schemes, various methods are used to evaluate trust, including reputation computation, neighbour recommendations, and storing historical information. However, they incur overhead and add a delay when deciding whether to accept or reject messages. In this thesis, we propose a novel Tamper-Proof Device (TPD) based trust framework for managing the trust of multiple drivers at the sender-side vehicle, which updates, stores, and protects trust information from malicious tampering. The TPD also regulates, rewards, and punishes each specific driver, as required. Furthermore, the trust score determines the classes of message that a driver can access. Dissemination of feedback is only required when there is an attack (conflicting information). A Road-Side Unit (RSU) rules on a dispute, using either the sum of products of trust and feedback or official vehicle data if available. These “untrue attacks” are resolved by an RSU using collaboration, which then provides a fixed amount of reward and punishment, as appropriate. Repeated attacks are addressed by incremental punishments and, potentially, driver access-blocking when conditions are met. The lack of sophistication in this fixed RSU assessment scheme is then addressed by a novel fuzzy logic-based RSU approach, which determines a fairer level of reward and punishment based on the severity of the incident, the driver's past behaviour, and RSU confidence.
The fuzzy RSU controller assesses judgements in such a way as to encourage drivers to improve their behaviour. Although any driver can lie in any situation, we believe that trustworthy drivers are more likely to remain so, and vice versa. We capture this behaviour in a Markov chain model of sender and reporter driver behaviour, where a driver’s truthfulness is influenced by their trust score and trust state. For each trust state, the driver’s likelihood of lying or honesty is set by a probability distribution that differs between states. This framework is analysed in Veins using various classes of vehicles under different traffic conditions. Results confirm that the framework operates effectively in the presence of untrue and inconsistent attacks. Correct functioning is confirmed by the system appropriately classifying incidents when clarifier vehicles send truthful feedback. The framework is also evaluated against a centralized reputation scheme, and the results demonstrate that it outperforms the reputation approach in terms of reduced communication overhead and shorter response time. Next, we perform a set of experiments to evaluate the performance of the fuzzy assessment in Veins. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme produces better overall driver behaviour. The Markov chain driver behaviour model is also examined when changing the initial trust score of all drivers.
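The Markov chain behaviour model described above can be sketched as follows; the trust states, honesty probabilities, and reward/punishment transition rule are illustrative assumptions for the sketch, not the thesis's parameters.

```python
import random

# Illustrative trust states and per-state honesty probabilities (assumed
# values): drivers in a higher trust state are more likely to be honest.
STATES = ["low", "medium", "high"]
HONESTY_BY_STATE = {"low": 0.4, "medium": 0.7, "high": 0.9}

def step(state, rng):
    """One transition: an honest report is rewarded (trust state moves up),
    a lie is punished (trust state moves down)."""
    honest = rng.random() < HONESTY_BY_STATE[state]
    i = STATES.index(state)
    i = min(i + 1, len(STATES) - 1) if honest else max(i - 1, 0)
    return STATES[i], honest

def simulate(initial_state, steps, seed=0):
    """Run the chain from an initial trust state and count honest reports."""
    rng = random.Random(seed)
    state, honest_count = initial_state, 0
    for _ in range(steps):
        state, honest = step(state, rng)
        honest_count += honest
    return state, honest_count
```

Varying `initial_state` mirrors the abstract's final experiment of changing the drivers' initial trust score and observing long-run behaviour.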
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Mapping the Focal Points of WordPress: A Software and Critical Code Analysis
Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds the potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code.
Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques for applying critical code methods.
Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study
This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and maintaining ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies that alters our perception of the machine from an instrument of human engineering into a thinking peer to elevate cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine’s ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and influenced by public policy theories, philosophical constructs, and societal initiatives.
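The correlational design described above reduces, at its core, to estimating the association between two archival variables. A minimal sketch of Pearson's r (with made-up data in the test, not the study's records) might look like:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples.

    Returns 0.0 when either sample has no variance (r is then undefined).
    """
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0
```

For RQ1 this would pair, per observation period, the diversity of intrusion techniques (x) with the frequency of malicious software attacks (y).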
A Multi-level Analysis on Implementation of Low-Cost IVF in Sub-Saharan Africa: A Case Study of Uganda.
Introduction: Globally, infertility is a major reproductive disease that affects an estimated 186 million people worldwide. In Sub-Saharan Africa, the burden of infertility is considerably high, affecting one in every four couples of reproductive age. Furthermore, infertility in this context has severe psychosocial, emotional, economic and health consequences. Absence of affordable fertility services in Sub-Saharan Africa has been justified by overpopulation and limited resources, resulting in inequitable access to infertility treatment compared to developed countries. Therefore, low-cost IVF (LCIVF) initiatives have been developed to simplify IVF-related treatment, reduce costs, and improve access to treatment for individuals in low-resource contexts. However, there is a gap between the development of LCIVF initiatives and their implementation in Sub-Saharan Africa. Uganda is the first country in East and Central Africa to undergo implementation of LCIVF initiatives within its public health system at Mulago Women’s Hospital.
Methods: This was an exploratory, qualitative, single case study conducted at Mulago Women’s Hospital in Kampala, Uganda. The objective of this study was to explore how LCIVF initiatives have been implemented within the public health system of Uganda at the macro-, meso- and micro-level. Primary qualitative data was collected using semi-structured interviews, hospital observations, informal conversations, and document review. Using purposive and snowball sampling, a total of twenty-three key informants were interviewed, including government officials, clinicians (doctors, nurses, technicians), hospital management, implementers, patient advocacy representatives, private sector practitioners, international organizational representatives, educational institutions, and professional medical associations. Sources of secondary data included government and non-government reports, hospital records, organizational briefs, and press outputs. Using a multi-level data analysis approach, this study undertook a hybrid inductive/deductive thematic analysis, with the deductive analysis guided by the Consolidated Framework for Implementation Research (CFIR).
Findings: Factors facilitating implementation included international recognition of infertility as a reproductive disease, strong political advocacy and oversight, patient needs & advocacy, government funding, inter-organizational collaboration, tension to change, competition in the private sector, intervention adaptability & trialability, relative priority, motivation & advocacy of fertility providers, and specialist training. Barriers included scarcity of embryologists, intervention complexity, insufficient knowledge, evidence strength & quality of the intervention, inadequate leadership engagement & hospital autonomy, poor public knowledge, limited engagement with traditional, cultural, and religious leaders, lack of salary incentives, and concerns of revenue loss associated with low-cost options.
Research contributions: This study contributes to knowledge of factors salient to implementation of LCIVF initiatives in a Sub-Saharan context. Effective implementation of these initiatives requires (1) sustained political support and favourable policy & legislation, (2) public sensitization and engagement of traditional, cultural, and religious leaders, (3) strengthening local innovation and capacity building of fertility health workers, in particular embryologists, (4) sustained implementor leadership engagement and inter-organizational collaboration, and (5) proven clinical evidence and utilization of LCIVF initiatives in innovator countries. It also adds to the literature on the applicability of the CFIR framework in explaining factors that influence successful implementation in developing countries and offers opportunities for comparisons across studies.
Modern meat: the next generation of meat from cells
Modern Meat is the first textbook on cultivated meat, with contributions from over 100 experts within the cultivated meat community.
The Sections of Modern Meat comprise 5 broad categories of cultivated meat: Context, Impact, Science, Society, and World.
The 19 chapters of Modern Meat, spread across these 5 sections, provide detailed entries on cultivated meat. They extensively tour a range of topics including the impact of cultivated meat on humans and animals, the bioprocess of cultivated meat production, how cultivated meat may become a food option in Space and on Mars, and how cultivated meat may impact the economy, culture, and tradition of Asia.
A modern approach for Threat Modelling in agile environments: redesigning the process in a SaaS company
Dealing with security aspects has become one of the priorities for companies operating in every sector. In the software industry, building security requires being proactive and preventive by incorporating requirements right from the ideation and design of the product. Threat modelling has consistently proven to be one of the most effective and rewarding security activities in this regard, being able to uncover threats and vulnerabilities before they are even introduced into the codebase. Numerous approaches to conducting such an exercise have been proposed over time; however, most of them cannot be adopted in intricate corporate environments with multiple development teams.
This is clear from the case of Company Z, which introduced a well-documented process in 2019, but scalability, governance and knowledge issues blocked widespread adoption. The main goal of the Thesis was to overcome these problems by designing a novel threat modelling approach, able to fit the company’s Agile environment and capable of closing the current gaps.
As a result, a complete description of the redefined workflow and a structured set of suggestions was proposed. The solution is flexible enough to be adopted in multiple different contexts while meeting the requirements of Company Z. Achieving this
result was possible only by analysing the industry’s best practices and solutions, understanding the current process, identifying the pain points, and gathering feedback from stakeholders. The solution proposed includes, alongside the new threat modelling process, a comprehensive method for evaluating and verifying its effectiveness.
Ransomware Simulator for In-Depth Analysis and Detection: Leveraging Centralized Logging and Sysmon for Improved Cybersecurity
Ransomware attacks have become increasingly prevalent and sophisticated, posing significant threats to organizations and individuals worldwide. To effectively combat these threats, security professionals must continuously develop and adapt their detection and mitigation strategies. This master's thesis presents the design and implementation of a ransomware simulator to facilitate an in-depth analysis of ransomware Tactics, Techniques, and Procedures (TTPs) and to evaluate the effectiveness of centralized logging and Sysmon, including the latest event types, in detecting and responding to such attacks.
The study explores the advanced capabilities of Sysmon as a logging tool and data source, focusing on its ability to capture multiple event types, such as file creation, process execution, and network traffic, as well as the newly added event types. The aim is to demonstrate the effectiveness of Sysmon in detecting and analyzing malicious activities, with an emphasis on the latest features. By focusing on the comprehensive aspects of a cyber-attack, the study showcases the versatility and utility of Sysmon in detecting and addressing various attack vectors.
The ransomware simulator is developed using a PowerShell script that emulates various ransomware TTPs and attack scenarios, providing a comprehensive and realistic simulation of a ransomware attack. Sysmon, a powerful system monitoring tool, is utilized to monitor and log the activities associated with the simulated attack, including the events generated by the new Sysmon features. Centralized logging is achieved through the integration of Splunk Enterprise, a widely used platform for log analysis and management. The collected logs are then analyzed to identify patterns, indicators of compromise (IoCs), and potential detection and mitigation strategies.
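As a rough illustration of the kind of log analysis described, the following sketch flags a mass-file-creation burst, a common ransomware indicator of compromise, in pre-parsed Sysmon records. Sysmon's Event ID 11 is FileCreate, but the flattened record shape used here is an assumption for the sketch, not the thesis's or Splunk's export format.

```python
from collections import Counter

def flag_mass_file_creation(events, threshold=100):
    """Return process images that created more than `threshold` files.

    `events` is assumed to be a list of dicts with at least "EventID"
    (Sysmon event type) and "Image" (the originating process) keys.
    """
    counts = Counter(
        e["Image"] for e in events
        if e.get("EventID") == 11  # Sysmon Event ID 11: FileCreate
    )
    return {image for image, n in counts.items() if n > threshold}
```

In practice such a rule would run as a scheduled query over the centralized log store, with the threshold tuned per environment to separate encryption bursts from legitimate bulk file writers.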
Through the development of the ransomware simulator and the subsequent analysis of Sysmon logs, this research contributes to strengthening the security posture of organizations and improving cybersecurity measures against ransomware threats, with a focus on the latest Sysmon capabilities. The results demonstrate the importance of monitoring and analyzing system events to effectively detect and respond to ransomware attacks. This research can serve as a basis for further exploration of ransomware detection and response strategies, contributing to the advancement of cybersecurity practices and the development of more robust security measures against ransomware threats.
Security and Privacy of Resource Constrained Devices
The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy & data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and architecture taxonomy of ‘Internet of Things’ and ‘resource-constrained devices’, coupled with a threat landscape where each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety and privacy justified by the ‘IoT revolution’ through the so-called infraethical perspective. Chapter 3 investigates whether and to what extent the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on ‘privacy’, understood by proxy to include EU data protection. In particular, Chapter 4 addresses three legal challenges brought about by ubiquitous IoT data and metadata processing to the EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 sheds light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the CNIL (French data protection authority) model.