
    Attack Taxonomy Methodology Applied to Web Services

    With the rapid evolution of attack techniques and attacker targets, companies and researchers question the applicability and effectiveness of security taxonomies. Although attack taxonomies allow us to propose a classification scheme, they are easily rendered useless by the generation of new attacks. Due to their distributed and open nature, web services give rise to new security challenges. The purpose of this study is to apply a methodology for categorizing and updating attacks in the face of the continuous creation and evolution of new attack schemes against web services. In this research, we also collected thirty-three (33) types of attacks classified into five (5) categories, namely brute force, spoofing, flooding, denial-of-service, and injection attacks, in order to establish the state of the art of vulnerabilities against web services. Finally, the attack taxonomy is applied to a web service, modeled through attack trees. The use of this methodology allows us to prevent future attacks against many technologies, not only web services.
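    The attack-tree modeling step can be pictured with a small sketch. The following Python example illustrates the general attack-tree technique, not the authors' implementation; all node labels and feasibility judgements are hypothetical. Leaf attack steps roll up through AND/OR gates to a root goal for a web service.

    ```python
    # Minimal attack-tree sketch (illustrative; node labels are hypothetical).
    # Leaves are atomic attack steps; inner nodes combine children with AND/OR gates.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AttackNode:
        label: str                           # e.g. "SQL injection on /login"
        gate: str = "OR"                     # "AND" or "OR" refinement of the children
        children: List["AttackNode"] = field(default_factory=list)
        feasible: bool = False               # leaf-level judgement: is this step achievable?

        def achievable(self) -> bool:
            """Propagate feasibility bottom-up: OR needs any child, AND needs all."""
            if not self.children:
                return self.feasible
            results = [child.achievable() for child in self.children]
            return all(results) if self.gate == "AND" else any(results)

    # Hypothetical root goal, using two of the five categories from the taxonomy.
    root = AttackNode("Compromise web service", "OR", [
        AttackNode("Injection", "OR", [
            AttackNode("SQL injection", feasible=True),
            AttackNode("XPath injection"),
        ]),
        AttackNode("Brute force", "AND", [
            AttackNode("Enumerate valid usernames"),
            AttackNode("Guess passwords without lockout", feasible=True),
        ]),
    ])

    print(root.achievable())  # True: the injection branch alone satisfies the OR root
    ```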

    The Fallacy of Systemic Racism in the American Criminal Justice System

    Critics of the criminal justice system have repeatedly charged it with systemic racism. It is a tenet of the “war” on the “War on Drugs,” it is a justification used by the so-called “progressive prosecutors” to reject the “Broken Windows” theory of law enforcement, and it is an article of faith of the “Defund the Police!” movement. Even President Joe Biden and his chief lieutenants leveled the same allegation early in this administration. Although the President has eschewed the belief that Americans are a racist people, others have not, proclaiming that virtually anyone who is white is a racist. Yet, few people have defined what they mean by that term. This Article examines what it could mean and tests the truth of the systemic racism claim under each possible definition. None stands up to scrutiny. One argument is that the American citizens who run our many institutions are motivated by racial animus. But the evidence is that racial animus is no longer tolerated in society, and what is more, the criminal justice system strives to identify it when it does occur and to remedy it. Another argument says that the overtly racist beliefs and practices of the past have created lingering racist effects, but this argument cherry-picks historical facts (when it does not ignore them altogether) and fails to grapple with the country’s historic and ongoing efforts to eliminate racial discrimination. It also assumes a causal relationship between past discrimination and present disparities that is unsupported and often contradicted by the evidence. Yet another argument relies on psychological research to claim that white Americans are animated by a subconscious racial animus. That research, however, has been debunked. Still another argument says that the criminal justice system is systemically racist because it has disparate effects across racial groups, but this argument looks only at the offenders’ side of the criminal justice system and fails to consider the effect of the criminal justice system on victims. Proponents of the systemic racism theory often proffer “solutions” to it. This Article examines those too and finds that many would, in fact, harm the very people they aim to help. In the context of the “War on Drugs,” where so much of the rhetoric is focused, the authors examine these arguments and solutions. The bottom line is this: the claim of systemic racism in the criminal justice system is unjustified.

    The Great Request Robbery: An Empirical Study of Client-side Request Hijacking Vulnerabilities on the Web

    Request forgery attacks are among the oldest threats to Web applications, traditionally caused by server-side confused deputy vulnerabilities. However, recent advancements in client-side technologies have introduced more subtle variants of request forgery, where attackers exploit input validation flaws in client-side programs to hijack outgoing requests. We have little-to-no information about these client-side variants, their prevalence, impact, and countermeasures, and in this paper we undertake one of the first evaluations of the state of client-side request hijacking on the Web platform. Starting with a comprehensive review of browser API capabilities and Web specifications, we systematize request hijacking vulnerabilities and the resulting attacks, identifying 10 distinct vulnerability variants, including seven new ones. Then, we use our systematization to design and implement Sheriff, a static-dynamic tool that detects vulnerable data flows from attacker-controllable inputs to request-sending instructions. We instantiate Sheriff on the Tranco top 10K sites, performing, to our knowledge, the first investigation into the prevalence of request hijacking flaws in the wild. Our study uncovers that request hijacking vulnerabilities are ubiquitous, affecting 9.6% of the top 10K sites. We demonstrate the impact of these vulnerabilities by constructing 67 proof-of-concept exploits across 49 sites, enabling arbitrary code execution, information leakage, open redirection, and CSRF attacks against popular websites like Microsoft Azure, Starz, Reddit, and Indeed. Finally, we review and evaluate the adoption and efficacy of existing countermeasures against client-side request hijacking attacks, including browser-based solutions like CSP, COOP, and COEP, as well as input validation.
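    The core pattern Sheriff looks for, an attacker-controllable input reaching a request-sending instruction, can be sketched with a deliberately simplified source-to-sink check. The Python below is only an illustration of that idea: the source and sink pattern lists, the line-level heuristic, and the vulnerable snippet are invented here, and Sheriff itself performs far more precise static-dynamic analysis.

    ```python
    # Toy source-to-sink check for client-side request hijacking (illustrative only).
    import re

    # Attacker-controllable inputs (taint sources) and request-sending sinks,
    # a small, hypothetical subset of what a real analysis would model.
    SOURCES = [r"location\.hash", r"location\.search", r"document\.referrer"]
    SINKS = [r"fetch\(", r"XMLHttpRequest", r"navigator\.sendBeacon\("]

    def flag_suspicious_lines(js_code: str):
        """Flag lines where a tainted source appears in the same statement as a sink.

        A line-level co-occurrence heuristic has false positives and negatives; it
        only illustrates the source-to-sink idea behind request hijacking detection.
        """
        findings = []
        for lineno, line in enumerate(js_code.splitlines(), start=1):
            has_source = any(re.search(s, line) for s in SOURCES)
            has_sink = any(re.search(k, line) for k in SINKS)
            if has_source and has_sink:
                findings.append((lineno, line.strip()))
        return findings

    # Hypothetical vulnerable JavaScript: the URL fragment chooses the request target.
    snippet = 'fetch(location.hash.slice(1) + "/profile");  // fragment picks the request URL'
    print(flag_suspicious_lines(snippet))
    ```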

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds the potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows this. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses and observations are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have concerning WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code. Ultimately, this project helps further enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques to apply critical code methods.

    Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study

    This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies that alters our perception of the machine from an instrument of human engineering into a thinking peer, elevating cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques was the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine’s ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor to, and entity influenced by public policy theories, philosophical constructs, and societal initiatives.
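    As a minimal sketch of the RQ1-style correlational design, the Python below computes a Pearson correlation between the diversity of intrusion techniques (independent variable) and the frequency of malicious software attacks (dependent variable). The numbers are hypothetical placeholders, not the study's archival data.

    ```python
    # Illustrative Pearson correlation for an RQ1-style design (hypothetical data).
    import statistics

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length samples."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical monthly observations:
    diversity_of_techniques = [3, 5, 4, 8, 10, 7]        # IV: distinct intrusion techniques seen
    malware_attack_frequency = [12, 20, 17, 35, 41, 30]  # DV: malicious software attacks
    print(round(pearson_r(diversity_of_techniques, malware_attack_frequency), 3))
    ```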

    A data taxonomy for adaptive multifactor authentication in the internet of health care things

    The health care industry has faced various challenges over the past decade as we move toward a digital future where services and data are available on demand. The systems of interconnected devices, users, data, and working environments are referred to as the Internet of Health Care Things (IoHT). IoHT devices have emerged in the past decade as cost-effective solutions with large scalability capabilities to address the constraints of limited resources. These devices cater to the need for remote health care services outside of physical interactions. However, IoHT security is often overlooked because the devices are quickly deployed and configured as solutions to meet the demands of a heavily saturated industry. During the COVID-19 pandemic, studies have shown that cybercriminals are exploiting the health care industry and that data breaches are targeting user credentials through authentication vulnerabilities. According to IBM reports, poor password use and management and the lack of a multifactor authentication security posture within IoHT cause losses in the millions. Therefore, it is important that health care authentication security move toward adaptive multifactor authentication (AMFA) to replace traditional approaches to authentication. We identified a lack of a taxonomy for data models that particularly focuses on the IoHT data architecture to improve the feasibility of AMFA. This viewpoint focuses on identifying key cybersecurity challenges in a theoretical framework for a data model that summarizes the main components of IoHT data. The data are to be used in modalities that are suited to health care users in modern IoHT environments and in response to the COVID-19 pandemic. To establish the data taxonomy, a review of recent IoHT papers was conducted to discuss related work in IoHT data management and its use in next-generation authentication systems. Reports, journal articles, conference papers, and white papers were reviewed for IoHT authentication data technologies in relation to the problem statement of remote authentication and user management systems. Only publications written in English from the last decade (2012-2022) were included, to identify key issues within current health care practices and their management of IoHT devices. We discuss the components of the IoHT architecture from the perspective of data management and sensitivity to ensure privacy for all users. The data model addresses the security requirements of IoHT users, environments, and devices toward the automation of AMFA in health care. We found that in health care authentication, the most significant threats were data breaches owing to weak security options and poor user configuration of IoHT devices. We discuss the security requirements of the IoHT data architecture and identify impactful cybersecurity methods for health care devices and data, along with their respective attacks. The data taxonomy provides a better understanding of, and improved solutions for, the security features of user authentication in remote working environments.
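    One way to picture how IoHT context data could drive adaptive multifactor authentication is a simple risk-scored factor-selection policy. The Python sketch below is purely illustrative: the context fields, weights, and thresholds are assumptions made here and are not part of the paper's data taxonomy.

    ```python
    # Hypothetical adaptive MFA policy sketch for IoHT (fields, weights, and
    # thresholds are invented for illustration; the paper defines a data taxonomy,
    # not this scoring rule).
    from dataclasses import dataclass

    @dataclass
    class AuthContext:
        device_registered: bool   # is the IoHT device enrolled/known?
        network_trusted: bool     # hospital network vs. unknown remote network
        data_sensitivity: int     # 1 (low) .. 3 (patient records)
        failed_attempts: int      # recent failed logins for this account

    def required_factors(ctx: AuthContext) -> list[str]:
        """Pick authentication factors from a simple additive risk score."""
        risk = 0
        risk += 0 if ctx.device_registered else 2
        risk += 0 if ctx.network_trusted else 1
        risk += ctx.data_sensitivity
        risk += min(ctx.failed_attempts, 3)

        if risk <= 2:
            return ["password"]
        if risk <= 5:
            return ["password", "totp"]
        return ["password", "totp", "biometric"]

    # Remote clinician on an unknown network requesting patient records:
    print(required_factors(AuthContext(False, False, 3, 1)))  # ['password', 'totp', 'biometric']
    ```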

    20th SC@RUG 2023 proceedings 2022-2023

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and internet broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today’s home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services, and hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges underscore the importance of home-based Unified Threat Management (UTM) systems. There is a need for a unified threat management framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS provides 99.99% accuracy with a 56.83% improvement in memory usage. IPS stands out as the most resource-intensive UTM service; SUTMS reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified by SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS). SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
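    The flow-analysis idea, scoring network flows by how abnormal their features look, can be illustrated with a toy example. The Python below is a sketch only; the features, thresholds, and weights are invented for illustration and do not describe the actual SUTMS anomaly-detection engine.

    ```python
    # Toy flow-based anomaly score for a home UTM (illustrative only; feature set
    # and thresholds are invented, not taken from SUTMS).
    from dataclasses import dataclass

    @dataclass
    class Flow:
        dst_port: int
        bytes_out: int        # bytes sent from the home network
        bytes_in: int         # bytes received
        duration_s: float
        distinct_dsts: int    # destinations contacted by the same host in the window

    def anomaly_score(flow: Flow) -> float:
        """Crude additive score; higher means more suspicious."""
        score = 0.0
        if flow.distinct_dsts > 50:                       # possible scanning / bot beaconing
            score += 2.0
        if flow.bytes_out > 10 * max(flow.bytes_in, 1):   # heavy upload, possible exfiltration
            score += 1.5
        if flow.dst_port not in (80, 443, 53):            # unusual service for a home device
            score += 0.5
        if flow.duration_s > 3600:                        # very long-lived connection
            score += 0.5
        return score

    flows = [
        Flow(443, 20_000, 400_000, 35.0, 3),        # ordinary browsing
        Flow(8081, 900_000, 4_000, 7200.0, 120),    # upload-heavy, long-lived, many peers
    ]
    for f in flows:
        print(f.dst_port, anomaly_score(f))
    ```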

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was thought to be inherently secure and trusted due to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware is surrounded by threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. These threats, if realized, are expected to push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. It is a fact, however, that our knowledge of the potential consequences of real-life threats to hardware trust is lacking, given the limited number of real-life reports and the plethora of ways in which hardware trust could be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence. This last line of defence allows us to face the issue of live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three different frameworks towards trustworthy computing on untrustworthy hardware. The presented frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. The framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks. The framework is implemented on a DDR3 DRAM after showing its vulnerability to obscured latency-extension attacks. The third framework studies the possibility of the deployment of untrustworthy hardware elements in the analog front end, and the consequent integrity issues that might arise at the analog-digital boundary of systems on chip. The framework uses machine learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
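    The second framework's latency characterization can be pictured with a much-simplified stand-in: learn the normal latency distribution of a memory element under trusted conditions, then flag run-time samples that sit far outside it as possible latency-extension or denial-of-service behaviour. In the Python sketch below, a z-score threshold replaces the thesis' machine-learning model, and all numbers are synthetic.

    ```python
    # Simplified run-time latency monitor (synthetic numbers; a z-score threshold
    # stands in here for the thesis' machine-learning characterization of DRAM latency).
    import random
    import statistics

    random.seed(0)

    # "Training": latencies (ns) observed under trusted conditions.
    baseline_ns = [random.gauss(mu=50.0, sigma=4.0) for _ in range(5000)]
    mean = statistics.fmean(baseline_ns)
    std = statistics.pstdev(baseline_ns)

    def is_suspicious(latency_ns: float, z_threshold: float = 4.0) -> bool:
        """Flag latencies far above the characterized baseline as possible
        latency-extension / denial-of-service behaviour."""
        return (latency_ns - mean) / std > z_threshold

    # Run-time samples: normal accesses plus an obscured latency-extension burst.
    runtime_ns = [51.2, 47.8, 49.9, 95.0, 102.3]
    for sample in runtime_ns:
        print(sample, "suspicious" if is_suspicious(sample) else "ok")
    ```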