
    A Hierarchical Security Event Correlation Model for Real-Time Threat Detection and Response

    An intrusion detection system (IDS) performs post-compromise detection of security breaches whenever preventive measures such as firewalls do not avert an attack. However, these systems generate a vast number of alerts that must be analyzed and triaged by security analysts, a process that is largely manual, tedious, and time-consuming. Alert correlation is a technique that reduces the number of intrusion alerts by aggregating alerts that are similar in some way. However, the correlation is performed outside the IDS through third-party systems and tools, after the IDS has already generated a high volume of alerts; these third-party systems add to the complexity of security operations. In this paper, we build on the highly researched area of alert and event correlation by developing a novel hierarchical event correlation model that promises to reduce the number of alerts issued by an intrusion detection system. This is achieved by correlating the events before the IDS classifies them. The proposed model takes the best features from similarity- and graph-based correlation techniques to deliver an ensemble capability not possible with either approach separately. Further, we propose a correlation process for events rather than alerts, as is the case in the current state of the art. We also develop our own correlation and clustering algorithm, tailor-made for network event data. The model is implemented as a proof of concept, with experiments run on standard intrusion detection datasets. The correlation achieves an 87% data reduction through aggregation, producing nearly 21,000 clusters in about 30 s.
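    As a simple illustration of similarity-based aggregation, the sketch below greedily groups network events whose attributes overlap above a threshold. It is a generic illustrative scheme with made-up field names and events, not the correlation and clustering algorithm developed in the paper.

        # Generic greedy similarity-based event aggregation (illustrative only; not
        # the paper's hierarchical correlation model). Events that share enough
        # attributes with a cluster's first event join that cluster.

        def similarity(a: dict, b: dict, keys=("src_ip", "dst_ip", "dst_port", "protocol")) -> float:
            return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

        def cluster_events(events: list[dict], threshold: float = 0.75) -> list[list[dict]]:
            clusters: list[list[dict]] = []
            for event in events:
                for cluster in clusters:
                    if similarity(event, cluster[0]) >= threshold:   # compare to cluster representative
                        cluster.append(event)
                        break
                else:
                    clusters.append([event])                         # no similar cluster: start a new one
            return clusters

        events = [
            {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "dst_port": 22, "protocol": "tcp"},
            {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "dst_port": 22, "protocol": "tcp"},
            {"src_ip": "10.0.0.7", "dst_ip": "10.0.0.9", "dst_port": 80, "protocol": "tcp"},
        ]
        print(len(events), "events ->", len(cluster_events(events)), "clusters")   # 3 events -> 2 clusters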

    Examining systemic and dispositional factors impacting historically disenfranchised schools across North Carolina

    This mixed-methods sequential explanatory study analyzed North Carolina (NC) school leaders’ dispositions toward eliminating opportunity gaps, as outlined in NC’s strategic plan. The study’s quantitative phase used descriptive and correlation analysis of eight Likert subscales built around four tenets of transformative leadership (Shields, 2011) and aspects of critical race theory (Bell, 1992; Ladson-Billings, 1998; Ladson-Billings & Tate, 2006) to understand systemic inequities and leadership attitudes. The qualitative phase comprised three analyses of education leadership dispositions and systemic factors in NC schools. The first analysis, of State Board of Education meeting minutes from 2018–2023, quantified and analyzed utterances of racism and critical race, outlined the sociopolitical context of such utterances, and identified systemic patterns and state leader dispositions. The second analysis, of five interviews of K–12 graduates, identified persistent and systemic factors influencing NC education three decades after Brown v. Board of Education (1954) and within the context of Leandro v. State of NC (1997), where the NC Supreme Court recognized the state constitutional right for every student to access a “sound basic education.” The final qualitative analysis consisted of five interviews of current NC public school system leaders, gathering personal narratives of the state of NC schools to compare with patterns from the lived experiences of NC K–12 graduates. The study’s findings suggested NC school and state education leaders experience a racialized dichotomy between willingness for change (equity intentions) and execution of transformative action (practice). Although leaders at the board and school levels recognize the need for inclusivity and equity, a struggle to transcend systemic challenges, especially those rooted in racial biases and power dynamics, is evident. This study may identify leadership qualities needed for change in NC to address systemic inequities, improve educational access, and inform policy to uphold all students’ constitutional right to a sound basic education.

    Effects of municipal smoke-free ordinances on secondhand smoke exposure in the Republic of Korea

    Objective: To reduce premature deaths due to secondhand smoke (SHS) exposure among non-smokers, the Republic of Korea (ROK) adopted changes to the National Health Promotion Act, which allowed local governments to enact municipal ordinances to strengthen their authority to designate smoke-free areas and levy penalty fines. In this study, we examined national trends in SHS exposure after the introduction of these municipal ordinances at the city level in 2010. Methods: We used interrupted time series analysis to assess whether the trends of SHS exposure in the workplace and at home, and the primary cigarette smoking rate, changed following the policy adjustment in the national legislation in ROK. Population-standardized data for selected variables were retrieved from a nationally representative survey dataset and used to study the policy action’s effectiveness. Results: Following the change in the legislation, SHS exposure in the workplace reversed course from an increasing (18% per year) trend prior to the introduction of these smoke-free ordinances to a decreasing (−10% per year) trend after adoption and enforcement of these laws (β2 = 0.18, p-value = 0.07; β3 = −0.10, p-value = 0.02). SHS exposure at home (β2 = 0.10, p-value = 0.09; β3 = −0.03, p-value = 0.14) and the primary cigarette smoking rate (β2 = 0.03, p-value = 0.10; β3 = 0.008, p-value = 0.15) showed no significant changes in the sampled period. Although analyses stratified by sex showed that the allowance of municipal ordinances resulted in reduced SHS exposure in the workplace for both males and females, they did not affect the primary cigarette smoking rate as much, especially among females. Conclusion: Strengthening the role of local governments by giving them the authority to enact and enforce penalties on SHS exposure violations helped ROK to reduce SHS exposure in the workplace. However, smoking behaviors and related activities seemed to shift to less restrictive areas such as on the streets and in apartment hallways, negating some of the effects of these ordinances. Future studies should investigate how smoke-free policies beyond public places can further reduce SHS exposure in ROK.
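    As an illustration of the method, the sketch below fits a segmented regression of the kind typically used in interrupted time series analysis. It uses one common parameterization (pre-intervention trend, level change, and trend change), which may differ from the authors' exact model, and the data are fabricated placeholders rather than study values.

        # Minimal interrupted-time-series sketch with a common segmented-regression
        # parameterization; variable coding and data are illustrative only.
        import numpy as np
        import statsmodels.api as sm

        years = np.arange(2005, 2016)                        # hypothetical survey years
        exposure = np.array([30., 33., 36., 39., 41., 38., 35., 32., 30., 27., 25.])  # % exposed (made up)

        policy_year = 2010                                   # city-level ordinances introduced
        time = years - years[0]                              # years since start of series
        post = (years >= policy_year).astype(float)          # 1 after the policy change
        time_after = np.where(post == 1, years - policy_year, 0)  # years since the change

        X = sm.add_constant(np.column_stack([time, post, time_after]))
        fit = sm.OLS(exposure, X).fit()
        print(fit.params)    # intercept, pre-trend, level change, post-policy trend change
        print(fit.pvalues)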

    Writing Facts: Interdisciplinary Discussions of a Key Concept in Modernity

    "Fact" is one of the most crucial inventions of modern times. Susanne Knaller discusses the functions of this powerful notion in the arts and the sciences, its impact on aesthetic models and systems of knowledge. The practice of writing provides an effective procedure to realize and to understand facts. This concerns preparatory procedures, formal choices, models of argumentation, and narrative patterns. By considering "writing facts" and "writing facts", the volume shows why and how "facts" are a result of knowledge, rules, and norms as well as of description, argumentation, and narration. This approach allows new perspectives on »fact« and its impact on modernity

    Grounds for a Third Place: The Starbucks Experience, Sirens, and Space

    My goal in this dissertation is to help demystify or “filter” the “Starbucks Experience” for a post-pandemic world, taking stock of how a multinational company has long outgrown its humble beginnings as a wholesale coffee bean supplier to become a digitally integrated and hypermodern café. I look at the role Starbucks plays within the larger cultural history of the coffee house and also consider how Starbucks has been idyllically described in corporate discourse as a comfortable and discursive “third place” for informal gathering, a term that also prescribes its own radical ethos as a globally recognized customer service platform. Attempting to square Starbucks’ iconography and rhetoric with a new critical methodology, I examine, in a series of interdisciplinary case studies, the role Starbucks’ “third place” philosophy plays within larger conversations about urban space and commodity culture, analyze Starbucks advertising, architecture, and art, and trace the mythical rise of the Starbucks Siren (and its reiterations and re-imaginings in art and media). While in corporate rhetoric Starbucks’ “third place” is depicted as an enthralling adventure, full of play, discovery, authenticity, or “romance,” I draw on critical theory to discuss how it operates today as a space of distraction, isolation, and loss.

    Towards a Digital Capability Maturity Framework for Tertiary Institutions

    Background: The Digital Capability (DC) of an institution is the extent to which the institution's culture, policies, and infrastructure enable and support digital practices (Killen et al., 2017), and maturity is the continuous improvement of those capabilities. As technology continues to evolve, it is likely to give rise to constant changes in teaching and learning, potentially disrupting Tertiary Education Institutions (TEIs) and making existing organisational models less effective. An institution’s ability to adapt to continuously changing technology depends on the change in culture and leadership decisions within the individual institutions. Change without structure leads to inefficiencies, evident across the Nigerian TEI landscape. These inefficiencies can be attributed mainly to a lack of clarity and agreement on a development structure. Objectives: This research aims to design a structure with a pathway to maturity, to support the continuous improvement of DC in TEIs in Nigeria and consequently improve the success of digital education programmes. Methods: I started by conducting a Systematic Literature Review (SLR) investigating the body of knowledge on DC, its composition, the relationship between its elements, and their respective impact on the maturity of TEIs. Findings from the review led me to investigate further the key roles instrumental in developing Digital Capability Maturity in Tertiary Institutions (DCMiTI). The results of these investigations formed the initial ideas and constructs upon which the proposed structure was built. I then explored a combination of quantitative and qualitative methods to substantiate the initial constructs and gain a deeper understanding of the relationships between elements/sub-elements. Next, I used triangulation as a vehicle to expand the validity of the findings by replicating the methods in a case study of TEIs in Nigeria. Finally, after using the validated constructs and knowledge base to propose a structure based on CMMI concepts, I conducted an expert panel workshop to test the model’s validity. Results: I consolidated the body of knowledge from the SLR into a universal classification of 10 elements, each comprising sub-elements. I also went on to propose a classification for DCMiTI. The elements/sub-elements in the classification indicate the success factors for digital maturity, which were also found to positively impact the ability to design, deploy, and sustain digital education. These findings were confirmed in a UK university and triangulated in a case study of Northwest Nigeria. The case study confirmed the literature findings on the status of DCMiTI in Nigeria and provided sufficient evidence to suggest that a maturity structure would be a well-suited solution to supporting DCM in the region. I thus scoped, designed, and populated a domain-specific framework for DCMiTI, configured to support the educational landscape in Northwest Nigeria. Conclusion: The proposed DCMiTI framework enables TEIs to assess their maturity level across the various capability elements and reports on DCM as a whole. It provides guidance on the criteria that must be satisfied to achieve higher levels of digital maturity. The framework received expert validation, as domain experts agreed that the proposed framework is applicable to developing DCMiTI and would be a valuable tool to support TEIs in delivering successful digital education. Recommendations were made to engage in further iterations of testing by deploying the proposed framework in TEIs to confirm the extent of its generalisability and acceptability.
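    To make the idea of a maturity roll-up concrete, the short sketch below shows one way a CMMI-style assessment could combine per-element levels into an overall maturity rating. The element names, level labels, and the "weakest element caps the overall level" rule are illustrative assumptions, not the framework defined in the thesis.

        # Hypothetical CMMI-style maturity roll-up: the overall level is capped by
        # the weakest capability element. Names and labels are illustrative only.
        LEVELS = {1: "Initial", 2: "Managed", 3: "Defined", 4: "Measured", 5: "Optimising"}

        def overall_maturity(element_levels: dict[str, int]) -> str:
            level = min(element_levels.values())          # weakest element caps overall maturity
            return f"Level {level} ({LEVELS[level]})"

        assessment = {                                    # assessed level per capability element
            "ICT infrastructure": 3,
            "Digital teaching and learning": 2,
            "Leadership and governance": 4,
            "Staff digital skills": 2,
        }
        print(overall_maturity(assessment))               # -> Level 2 (Managed)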

    Resilient and Scalable Forwarding for Software-Defined Networks with P4-Programmable Switches

    Traditional networking devices support only fixed features and limited configurability. Network softwarization leverages programmable software and hardware platforms to remove those limitations. In this context, the concept of programmable data planes allows the packet processing pipeline of networking devices to be programmed directly and custom control plane algorithms to be created. This flexibility enables the design of novel networking mechanisms where the status quo struggles to meet the high demands of next-generation networks like 5G, the Internet of Things, cloud computing, and Industry 4.0. P4 is the most popular technology for implementing programmable data planes. However, programmable data planes, and in particular the P4 technology, emerged only recently. Thus, P4 support for some well-established networking concepts is still lacking, and several issues remain unsolved due to the different characteristics of programmable data planes in comparison to traditional networking. The research of this thesis focuses on two open issues of programmable data planes. First, it develops resilient and efficient forwarding mechanisms for the P4 data plane, as there are no satisfactory state-of-the-art best practices yet. Second, it enables BIER in high-performance P4 data planes. BIER is a novel, scalable, and efficient transport mechanism for IP multicast traffic which so far has only very limited support on high-performance forwarding platforms. The main results of this thesis are published as eight peer-reviewed publications and one post-publication peer-reviewed publication. The results cover the development of suitable resilience mechanisms for P4 data planes, the development and implementation of resilient BIER forwarding in P4, and extensive evaluations of all developed and implemented mechanisms. Furthermore, the results contain a comprehensive P4 literature study. Two more peer-reviewed papers contain additional content that is not directly related to the main results. They implement congestion avoidance mechanisms in P4 and develop a scheduling concept to find cost-optimized load schedules based on day-ahead forecasts.
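    For readers unfamiliar with BIER, the sketch below illustrates the bitstring-based replication logic described in RFC 8279 in plain Python. It is a conceptual model only, not the thesis's P4 implementation, and the bit index forwarding table (bift) and topology shown are hypothetical.

        # Conceptual BIER forwarding (after RFC 8279), not the thesis's P4 code.
        # Each egress router owns one bit position; a packet carries a bitstring of
        # all intended egresses. Per next hop, a copy is sent with the bitstring
        # masked by that neighbor's Forwarding Bit Mask (F-BM).

        def bier_forward(bitstring: int, bift: dict[int, tuple[int, str]]) -> list[tuple[str, int]]:
            """bift maps a bit position to (F-BM, next-hop neighbor)."""
            copies = []
            remaining = bitstring
            while remaining:
                pos = (remaining & -remaining).bit_length() - 1   # lowest set bit
                fbm, neighbor = bift[pos]
                copies.append((neighbor, remaining & fbm))        # packet copy toward neighbor
                remaining &= ~fbm                                 # clear bits covered by that copy
            return copies

        # Egress bits 0 and 2 are reached via neighbor A, bits 1 and 3 via neighbor B.
        bift = {0: (0b0101, "A"), 2: (0b0101, "A"), 1: (0b1010, "B"), 3: (0b1010, "B")}
        print(bier_forward(0b1011, bift))   # -> [('A', 1), ('B', 10)]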

    Power Play: The President's Role in Shaping Renewable Energy Regulation and Policy

    With the impacts of climate change becoming more apparent every day, finding means of effective action to mitigate its effects becomes increasingly critical. While localized work can play an important role, federal action is necessary to have the most widespread and effective impact, especially on interconnected issues such as clean energy. Congressional action is the avenue of change at this level; however, in an increasingly partisan and divided environment, progress on this front falls far short of what is needed. Looking to the president is logical here, both as a single actor more insulated from partisan fights and as head of the branch charged with implementing the nation’s laws. This paper explores what means of influence the president has over the actions taken by federal agencies and how such methods can be made more effective. Through a principal-agent framework, the role of regulatory and appointment powers is examined with a variety of historical and contemporary case studies. While only a subset of the powers afforded to a president, the areas explored offer wide latitude for action in domains that are particularly important for energy development. The paper concludes with some reflections for the future, suggesting how these considerations can be practically applied.

    Doing Things with Words: The New Consequences of Writing in the Age of AI

    Exploring the entanglement between artificial intelligence (AI) and writing, this thesis asks, what does writing with AI do? And, how can this doing be made visible, since the consequences of information and communication technologies (ICTs) are so often opaque? To propose one set of answers to the questions above, I begin by working with Google Smart Compose, the word-prediction AI Google launched to more than a billion global users in 2018, by way of a novel method I call AI interaction experiments. In these experiments, I transcribe texts into Gmail and Google Docs, carefully documenting Smart Compose’s interventions and output. Wedding these experiments to existing scholarship, I argue that writing with AI does three things: it engages writers in asymmetrical economic relations with Big Tech; it entangles unwitting writers in the climate crisis by virtue of the vast resources required, as Bender et al. (2021), Crawford (2021), and Strubell et al. (2019) have pointed out, to train and sustain AI models; and it perpetuates linguistic racism, further embedding harmful politics of race and representation in everyday life. In making these arguments, my purpose is to intervene in normative discourses surrounding technology, exposing hard-to-see consequences so that we—people in the academy, critical media scholars, educators, and especially those of us in dominant groups—may envision better futures. Toward both exposure and reimagining, my dissertation’s primary contributions are research-creational work. Research-creational interventions accompany each of the three major chapters of this work, drawing attention to the economic, climate, and race relations that word-prediction AI conceals and to the otherwise opaque premises on which it rests. The broader wager of my dissertation is that what technologies do and what they are is inseparable: the relations a technology enacts must be exposed, and they must necessarily figure into how we understand the technology itself. Because writing with AI enacts particular economic, climate, and race relations, these relations must figure into our understanding of what it means to write with AI and, because of AI’s increasing entanglement with acts of writing, into our very understanding of what it means to write.

    Distributed Implementation of eXtended Reality Technologies over 5G Networks

    International Mention in the doctoral degree. The revolution of Extended Reality (XR) has already started and is rapidly expanding as technology advances. Announcements such as Meta’s Metaverse have boosted the general interest in XR technologies, producing novel use cases. With the advent of the fifth generation of cellular networks (5G), XR technologies are expected to improve significantly by offloading heavy computational processes from the XR Head Mounted Display (HMD) to an edge server. XR offloading can rapidly boost XR technologies by considerably reducing the burden on the XR hardware, while improving the overall user experience by enabling smoother graphics and more realistic interactions. Overall, the combination of XR and 5G has the potential to revolutionize the way we interact with technology and experience the world around us. However, XR offloading is a complex task that requires state-of-the-art tools and solutions, as well as an advanced wireless network that can meet the demanding throughput, latency, and reliability requirements of XR. The definition of these requirements strongly depends on the use case and the particular XR offloading implementation. Therefore, it is crucial to perform a thorough Key Performance Indicator (KPI) analysis to ensure a successful design of any XR offloading solution. Additionally, distributed XR implementations can be intricate systems with multiple processes running on different devices or virtual instances. All these agents must be well handled and synchronized to achieve XR real-time requirements and ensure the expected user experience while guaranteeing a low processing overhead. XR offloading requires a carefully designed architecture which complies with the required KPIs while efficiently synchronizing and handling multiple heterogeneous devices. Offloading XR has become an essential use case for 5G and beyond-5G technologies. However, testing distributed XR implementations requires access to advanced 5G deployments that are often unavailable to most XR application developers. Conversely, the development of 5G technologies requires constant feedback from potential applications and use cases. Unfortunately, most 5G providers, engineers, or researchers lack access to cutting-edge XR hardware or applications, which can hinder the fast implementation and improvement of 5G’s most advanced features. Both technology fields require ongoing input and continuous development from each other to fully realize their potential. As a result, XR and 5G researchers and developers must have access to the necessary tools and knowledge to ensure the rapid and satisfactory development of both technology fields. In this thesis, we focus on these challenges, providing knowledge, tools, and solutions towards the implementation of advanced offloading technologies, opening the door to more immersive, comfortable, and accessible XR technologies. Our contributions to the field of XR offloading include a detailed study and description of the necessary network throughput and latency KPIs for XR offloading, an architecture for low-latency XR offloading, and our full end-to-end XR offloading implementation ready for a commercial XR HMD. Besides, we also present a set of tools which can facilitate the joint development of 5G networks and XR offloading technologies: our 5G RAN real-time emulator and a multi-scenario XR IP traffic dataset.
    Firstly, in this thesis, we thoroughly examine and explain the KPIs that are required to achieve the expected Quality of Experience (QoE) and enhanced immersiveness in XR offloading solutions. Our analysis focuses on individual XR algorithms, rather than potential use cases. Additionally, we provide an initial description of feasible 5G deployments that could fulfill some of the proposed KPIs for different offloading scenarios. We also present our low-latency multi-modal XR offloading architecture, which has already been tested on a commercial XR device and advanced 5G deployments, such as millimeter-wave (mmW) technologies. Besides, we describe our full end-to-end complex XR offloading system, which relies on our offloading architecture to provide low-latency communication between a commercial XR device and a server running a Machine Learning (ML) algorithm. To the best of our knowledge, this is one of the first successful XR offloading implementations for complex ML algorithms on a commercial device. With the goal of providing XR developers and researchers access to complex 5G deployments and accelerating the development of future XR technologies, we present FikoRE, our 5G RAN real-time emulator. FikoRE has been specifically designed not only to model the network with sufficient accuracy but also to support the emulation of a massive number of users and actual IP throughput. As FikoRE can handle actual IP traffic above 1 Gbps, it can directly be used to test distributed XR solutions. As we describe in the thesis, its emulation capabilities make FikoRE a potential candidate to become a reference testbed for distributed XR developers and researchers. Finally, we used our XR offloading tools to generate an XR IP traffic dataset which can accelerate the development of 5G technologies by providing a straightforward manner of testing novel 5G solutions using realistic XR data. This dataset is generated for two relevant XR offloading scenarios: split rendering, in which the rendering step is moved to an edge server, and heavy ML algorithm offloading. Besides, we derive the corresponding IP traffic models from the captured data, which can be used to generate realistic XR IP traffic. We also present the validation experiments performed on the derived models and their results. This work has received funding from the European Union (EU) Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie ETN TeamUp5G, grant agreement No. 813391. Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. President: Narciso García Santos. Secretary: Fernando Díaz de María. Committee member: Aryan Kaushi
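    As a toy illustration of the kind of KPI budgeting described above, the sketch below checks whether an offloading pipeline fits within an assumed end-to-end latency target. All stage durations and the 50 ms budget are made-up placeholders, not the KPI values derived in the thesis.

        # Illustrative latency-budget check for an XR offloading loop; every number
        # here is an assumption for demonstration, not a measured or required KPI.
        ASSUMED_E2E_BUDGET_MS = 50.0

        pipeline_ms = {
            "capture_and_encode": 8.0,    # on-device capture + encoding (assumed)
            "uplink":             6.0,    # 5G uplink transfer (assumed)
            "server_processing": 20.0,    # ML algorithm on the edge server (assumed)
            "downlink":           4.0,    # 5G downlink transfer (assumed)
            "decode_and_render":  7.0,    # on-device decoding and rendering (assumed)
        }

        total_ms = sum(pipeline_ms.values())
        slack_ms = ASSUMED_E2E_BUDGET_MS - total_ms
        print(f"end-to-end latency: {total_ms:.1f} ms, slack: {slack_ms:.1f} ms")
        assert slack_ms >= 0, "pipeline exceeds the assumed offloading latency budget"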