
    The infrastructural conditions of (de-)growth: The case of the internet

    Infrastructure studies remain largely uncharted territory among degrowth scholars. This is paradoxical, considering that infrastructures are a fundamental prerequisite for the equitable distribution of many aspects of human well-being that degrowth proponents emphasize. Nonetheless, the substantial resource and energy consumption associated with infrastructures cannot be overlooked. The internet offers an instructive case study in this sense: at its best, it forges human connections and produces considerable societal value. Yet the resource implications of its often-overlooked physical layer of data centres and submarine cables need to be acknowledged. Furthermore, the ways in which assumptions of perpetual growth are built into this global infrastructure via the logic layer of internet protocols and other governing mechanisms such as finance and network design need to be examined if we are to determine the extent to which such infrastructures are inherently growth dependent. In making these two arguments, we draw upon work in both Science and Technology Studies (STS) and Large Technological Systems (LTS) research on the inherent problems of large infrastructures, which has thus far seen little engagement with questions of degrowth. We review the case of the internet and suggest a number of scenarios that illustrate potential roles for such infrastructures in any planned reduction of economic activity.

    Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence

    Recent years have seen tremendous growth in Artificial Intelligence (AI)-based methodological development across a broad range of domains. In this rapidly evolving field, a large number of methods are being reported using machine learning (ML) and Deep Learning (DL) models. The majority of these models are inherently complex and lack explanations of their decision-making process, causing them to be termed 'black-box'. One of the major bottlenecks to adopting such models in mission-critical application domains, such as banking, e-commerce, healthcare, and public services and safety, is the difficulty of interpreting them. Due to the rapid proliferation of these AI models, explaining their learning and decision-making processes is becoming harder, which demands transparency and easy predictability. Moreover, finding flaws in these black-box models, in order to reduce their false negative and false positive outcomes, remains difficult and inefficient. Aiming to collate the current state of the art in interpreting black-box models, this study provides a comprehensive analysis of explainable AI (XAI) models. The development of XAI is reviewed meticulously through careful selection and analysis of the current state of the art of XAI research. The paper also provides a comprehensive and in-depth evaluation of XAI frameworks and their efficacy, to serve as a starting point for applied and theoretical XAI researchers. Towards the end, it highlights emerging and critical issues in XAI research to showcase major, model-specific trends for better explanation, enhanced transparency, and improved prediction accuracy.

    The politics of internet privacy regulation in a globalised world: an examination of regulatory agencies' autonomy, politicisation, and lobbying strategies

    The rapid proliferation of new information technologies has not only made internet privacy one of the most pressing issues of the contemporary era, it has also triggered new regulatory challenges because of their cross-border character. This PhD thesis examines the politics of internet privacy regulation at the global level. Existing research has largely investigated the extent to which there is no international privacy regime, when and why data protection regulations in the European Union affect member state laws and trade relations, and how interest groups shape data protection regulations in the EU. Little scholarly attention, however, has been accorded to the decision-making processes and policies produced beyond the legislative arena. Non-legislative and technical modes of policy-making are nevertheless becoming more prominent in global politics. This research focuses on global data protection and internet privacy rules determined by leading, but little-known, internet regulatory agencies, in particular: the Internet Corporation for Assigned Names and Numbers, World Wide Web Consortium, Internet Engineering Task Force, and Institute of Electrical and Electronics Engineers. It investigates three distinct but interconnected questions regarding regulatory agencies' autonomy, politicisation, and interest groups' lobbying strategies. Each of the three questions corresponds to one substantive chapter and makes distinct contributions, using separate theoretical frameworks, methods, and analyses. Taken together, the chapters provide important theoretical arguments and empirical evidence on the making of internet privacy regulation, with a special emphasis on the role of corporate interests.

    Toward Automated Threat Modeling by Adversary Network Infrastructure Discovery

    Threat modeling can help defenders ascertain potential attacker capabilities and resources, allowing better protection of critical networks and systems from sophisticated cyber-attacks. One aspect of the adversary profile that is of interest to defenders is the means to conduct a cyber-attack, including malware capabilities and network infrastructure. Even though most defenders collect data on cyber incidents, extracting knowledge about adversaries to build and improve the threat model can be time-consuming. This thesis applies machine learning methods to historical cyber incident data to enable automated threat modeling of adversary network infrastructure. Using network data of attacker command and control servers based on real-world cyber incidents, specific adversary datasets can be created and enriched using the capabilities of internet-scanning search engines. Mixing these datasets with data from benign or non-associated hosts with similar port-service mappings allows for building an interpretable machine learning model of attackers. Additionally, creating internet-scanning search engine queries based on machine learning model predictions allows for automating threat modeling of adversary infrastructure. Automated threat modeling of adversary network infrastructure allows searching for unknown or emerging threat actor network infrastructure on the Internet.
    Major, Ukrainian Ground Forces
    Approved for public release. Distribution is unlimited.
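    The thesis relies on interpretable models over internet-scanning data; as a rough illustration of the underlying idea, the sketch below ranks port observations that are disproportionately common on attacker command-and-control hosts versus benign hosts, and turns the top-ranked ones into a Shodan-style search query. All data, thresholds, and query syntax here are hypothetical, not the thesis's actual features or tooling.

```python
from collections import Counter

def discriminative_ports(attacker_hosts, benign_hosts, min_gap=0.4):
    """Rank ports by how much more often they appear on attacker
    C2 hosts than on benign hosts (hypothetical frequency gap)."""
    def freq(hosts):
        counts = Counter(p for host in hosts for p in host)
        return {p: counts[p] / len(hosts) for p in counts}

    atk, ben = freq(attacker_hosts), freq(benign_hosts)
    return sorted(
        ((p, atk[p] - ben.get(p, 0.0)) for p in atk
         if atk[p] - ben.get(p, 0.0) >= min_gap),
        key=lambda t: -t[1],
    )

def to_query(ranked):
    """Turn top-ranked ports into a search-engine query
    (illustrative Shodan-like syntax only)."""
    return " ".join(f"port:{p}" for p, _ in ranked)

# Hypothetical open-port observations per host.
attackers = [{443, 8080, 50050}, {443, 50050}, {22, 8080, 50050}]
benign = [{443, 80}, {22, 443}, {80}]

ranked = discriminative_ports(attackers, benign)
print(to_query(ranked))  # → port:50050 port:8080
```

    The resulting query could then be fed back to an internet-scanning search engine to surface previously unknown hosts that match the adversary profile, which is the automation loop the abstract describes.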

    2023-2024 Catalog

    The 2023-2024 Governors State University Undergraduate and Graduate Catalog is a comprehensive listing of current information regarding: Degree Requirements, Course Offerings, and Undergraduate and Graduate Rules and Regulations.

    Exploring the inner mechanisms of 5G networks for orchestrating container-based applications in edge data centers

    One of the novel features of mobile 5G networks is what is commonly known as "Ultra-Reliable Low-Latency" communication. To achieve the "low latency" part, it is necessary to introduce processing and storage capabilities closer to the radio access network, thus introducing edge data centers. An edge data center will be capable of hosting third-party applications, and a user of these applications can access them using the cellular mobile network. This makes the network path between the user equipment (UE) and the application short in terms of physical distance and network hops, thus reducing the latency dramatically. This thesis looks into these new features of the 5th-generation mobile networks to establish if, and how, they can be used to orchestrate container-based applications deployed at edge data centers. The orchestration mechanism suggested is described in more detail in the thesis body, but as an overview, it uses the users' positions, knowledge about which applications the users are accessing, and information about where these applications reside to move applications between edge data centers. One of the 5G exploration findings was that the location of users in a 5G network can be determined using the Network Exposure Function (NEF) API. The NEF is one of the new 5G network functions and enables trusted third-party actors to interact with the 5G core through a publisher-subscriber-oriented API. The proposed orchestration strategy involves calculating the "weighted average location" of the 5G users who have accessed a specific application residing in the edge within a specified time frame. A live 5G network with a stand-alone (SA) core was not available at the time of writing, and part of the thesis work has therefore been to identify whether there exist network emulators with the functionality needed to reach the goal of this thesis, i.e. to design and implement the orchestrator based on interaction with the network.
    More specifically: can we find a NEF emulator that can be configured to give us network data related to user equipment location? Unfortunately, the three alternatives considered, Open5Gs, NEF_emulator, and Nokia's Open5Glab, do not fully meet our requirements for generating user events. Open5Gs, an open-source 5G network implementation, lacks the whole NEF north-bound implementation; NEF_emulator has a limited implementation and integration complexities; and the simulated users in Nokia's Open5Glab are inactive and thus do not generate sufficient data. Given the absence of suitable emulators to generate the needed data, the thesis pivoted to also include the design and implementation of a mobile network emulator with the following key components: a mobile network abstraction that encompasses crucial elements from 5G, such as users and radio access nodes, allowing users to connect to the mobile network; a network abstraction that hosts emulated edge data centers and the corresponding applications accessible to connected users; and a mobile network exposure component that exposes mobile network core events through a simplified NEF north-bound API implementation. Finally, the thesis concludes by implementing the proposed orchestration strategy using the mobile network emulator, demonstrating that orchestration can effectively reduce the end-to-end latency from users to applications, as evidenced by the obtained results.
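    The "weighted average location" strategy can be illustrated with a small sketch: weight each user's position by their recent accesses to an application, compute the centroid, and pick the closest edge data center as the placement target. The coordinates, weights, and data-center names below are invented for illustration; the thesis obtains positions through the (emulated) NEF API rather than from static data.

```python
import math

def weighted_average_location(accesses):
    """accesses: list of (x, y, weight) tuples, where weight could be
    the number of recent requests a user made to the application."""
    total = sum(w for _, _, w in accesses)
    x = sum(px * w for px, _, w in accesses) / total
    y = sum(py * w for _, py, w in accesses) / total
    return (x, y)

def nearest_edge_dc(location, edge_dcs):
    """Choose the edge data center closest to the weighted centroid."""
    return min(edge_dcs, key=lambda dc: math.dist(location, edge_dcs[dc]))

# Hypothetical scenario: two heavy users near "east", one light user near "west".
accesses = [(10.0, 0.0, 5), (9.0, 1.0, 4), (0.0, 0.0, 1)]
edge_dcs = {"west": (0.0, 0.0), "east": (9.0, 0.0)}

centroid = weighted_average_location(accesses)
print(nearest_edge_dc(centroid, edge_dcs))  # → east
```

    An orchestrator running this periodically per application would migrate the container whenever the chosen data center differs from where the application currently runs.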

    Implementation of ISO Frameworks to Risk Management in IPv6 Security

    The Internet of Things is a technology wave sweeping across various industries and sectors. It promises to improve productivity and efficiency by providing new services and data to users. However, the full potential of this technology is still not realized due to the transition to IPv6 as a backbone. Despite the security assurances that IPv6 provides, privacy concerns about the Internet of Things remain. This is why it is important that organizations thoroughly understand the protocol and its migration, to ensure that they are equipped to take advantage of its many benefits. Due to the lack of available IPv4 addresses, organizations are in an uncertain situation when it comes to implementing IoT technologies. This thesis seeks to establish and implement the use of ISO frameworks to manage risks, which will also help align security efforts with organizational goals. A further aim is to fill the gaps left by the ISO frameworks in identifying and classifying risks that are not yet apparent. The proposed solution is evaluated through a survey designed to gather feedback from security and risk management professionals at various levels, and the suggested modifications are also included in the study. The survey on the implementation of ISO frameworks for risk management in IPv6 used a random sampling technique: a total of 75 questionnaires were shared online, and 50 respondents returned responses through email and social media platforms. The analysis shows that system administrators formed the largest group, at 26% of all participants, followed by network administrators with 20% and cybersecurity specialists with 16%. 14% of the respondents were network architects, while senior management and risk management professionals accounted for 4% and 2% respectively. The majority of the respondents agreed that risk treatment enhances the risk management performance of the IPv6 network, resulting from the proper selection and implementation of correct risk prevention strategies.

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and internet broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today’s home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services; hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges depict the importance of home-based Unified Threat Management (UTM) systems. There is a need for a unified threat management framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS is able to provide 99.99% accuracy with a 56.83% memory improvement. Since the IPS stands out as the most resource-intensive UTM service, SUTMS reduces the performance overhead of the IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified by SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS).
    SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
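    The smart-logging extension mentions Apriori; as a minimal, self-contained illustration of frequent-itemset mining over log events (not SUTMS's actual implementation), the sketch below finds sets of log attributes that co-occur in at least min_support sessions. The event names are invented for the example.

```python
def apriori(transactions, min_support):
    """Minimal Apriori: return all itemsets that appear in at least
    min_support transactions (each transaction is a set of log events)."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items
                if support(frozenset([i])) >= min_support}
    result = set(frequent)
    k = 2
    while frequent:
        # Candidate generation: join frequent (k-1)-itemsets into k-itemsets.
        candidates = {a | b for a in frequent for b in frequent
                      if len(a | b) == k}
        frequent = {c for c in candidates if support(c) >= min_support}
        result |= frequent
        k += 1
    return result

# Hypothetical firewall-log events, one set per session.
logs = [
    {"dns_query", "tcp_443", "blocked"},
    {"dns_query", "tcp_443"},
    {"dns_query", "blocked"},
    {"tcp_443"},
]
freq = apriori(logs, min_support=2)
print(frozenset({"dns_query", "tcp_443"}) in freq)  # → True
```

    In a smart-logging role, such frequent event combinations could be summarized as single log entries, reducing log volume while preserving the recurring patterns an analyst cares about.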

    I Refuse if You Let Me: Studying User Behavior with Privacy Banners at Scale

    Privacy Banners are a common experience while surfing the Web. Mandated by privacy regulations, they are the way for users to express their consent to the usage of cookies and data collection. They take various forms, carry different wordings and offer different interaction mechanisms. While several works have qualitatively evaluated the effectiveness of privacy banners, it is still unclear how users take advantage of the options offered, and if and how the design of the banner influences their choice. This work presents a large-scale analysis of how the Privacy Banner options impact users’ interaction with it. We use data from a global Consent Management Platform serving more than 400 websites with visitors from all countries. With this, we observe more than 4 M interactions collected over three months. We find that only 1-4% of visitors opt out of cookies when more than one click is required. Conversely, when offered a Reject All button to deny consent with a single click, the percentage of users who deny consent increases to about 21%. We further investigate other properties, such as the visitor’s country, device type, banner position, etc. While the results confirm some common beliefs, to the best of our knowledge, this is the first work to accurately quantify how people interact with Privacy Banners and observe the effect of offering a single-click refusal option. We believe our work improves the understanding of user behaviour and perception of privacy, as well as the implications and effectiveness of privacy regulations.

    Improving efficiency and security of IIoT communications using in-network validation of server certificate

    The use of advanced communications and smart mechanisms in industry is growing rapidly, making cybersecurity a critical aspect. Currently, most industrial communication protocols rely on the Transport Layer Security (TLS) protocol to build their secure version, providing confidentiality, integrity and authentication. In the case of UDP-based communications, frequently used in Industrial Internet of Things (IIoT) scenarios, the counterpart of TLS is Datagram Transport Layer Security (DTLS), which includes some mechanisms to deal with the high unreliability of the transport layer. However, the (D)TLS handshake is a heavy process, especially for resource-deprived IIoT devices, and frequently security is sacrificed in favour of performance. More specifically, the validation of digital certificates is an expensive process from the time and resource consumption point of view. For this reason, digital certificates are not always properly validated by IIoT devices, including the verification of their revocation status; and when it is done, it introduces an important delay in the communications. In this context, this paper presents the design and implementation of an in-network server certificate validation system that offloads this task from the constrained IIoT devices to a resource-richer network element, leveraging data plane programming (DPP). This approach enhances security, as it guarantees that a comprehensive server certificate verification is always performed. Additionally, it increases performance, as resource-expensive tasks are moved from IIoT devices to a resource-richer network element. Results show that the proposed solution reduces DTLS handshake times by 50–60 %. Furthermore, CPU use in IIoT devices is also reduced, resulting in an energy saving of about 40 % in such devices.
    This work was financially supported by the Spanish Ministry of Science and Innovation through the TRUE-5G project PID2019-108713RB-C54/AEI/10.13039/501100011033. It was also partially supported by the Ayudas Cervera para Centros Tecnológicos grant of the Spanish Centre for the Development of Industrial Technology (CDTI) under the project EGIDA (CER-20191012), and by the Basque Country Government under the ELKARTEK Program, project REMEDY - Real tiME control and embeddeD securitY (KK-2021/00091).