
    Going local without localization: Power and humanitarian response in the Syrian war

    International aid organizations and donors have committed to localizing aid by empowering local actors to deliver and lead humanitarian response. While international actors often rely on local actors for aid delivery, their progress on shifting authority falls short. Scholars suggest that while localizing aid may be desirable, the organizational imperatives of international actors and aid's colonial past and present make it difficult at best. Can localization efforts produce locally led humanitarian response? Adopting a power framework, we argue that localization reinforces and reproduces international power: through institutional processes, localization efforts by international actors allocate capacity to, and constitute local actors as, humanitarians who are more or less capable, funded, and involved in responding to crises in their own countries. This article interprets aid efforts during the Syrian war, a crucial case in which we might expect localization to be "easy" given international actors' dependence on local actors amid security concerns and constraints on international access. We draw on fine-grained qualitative data collected through immersive observation and 250 interviews with Syrian and international aid workers in Jordan, Lebanon, and Turkey, as well as descriptive analysis of quantitative data. We reveal the ways Syrians were constituted as frontline responders, recipients of funds or trainings, risk-takers, gateways to access, and tokenistic representatives of the crisis. Our research shows that while the response seemed to "go local" by relying on the labor and risk-taking of Syrians to implement relief, it did not transfer authority to Syrian actors. These findings contribute to current debates in global development and humanitarian scholarship about who holds power within the global aid architecture.

    SDR-LoRa, an open-source, full-fledged implementation of LoRa on Software-Defined-Radios: Design and potential exploitation

    In this paper, we present SDR-LoRa, an open-source, full-fledged Software Defined Radio (SDR) implementation of a LoRa transceiver. First, we conduct a thorough analysis of the LoRa physical layer (PHY) functionalities, encompassing packet modulation, demodulation, and preamble detection. We then leverage this analysis to create a pioneering SDR-based LoRa PHY implementation and describe all of its implementation details. Moreover, we illustrate how SDR-LoRa can boost research on the LoRa protocol by presenting three exemplary applications built on top of our implementation: fine-grained localization, interference cancellation, and enhanced link reliability. To validate SDR-LoRa and its applications, we test it on two different platforms: (i) a physical setup involving USRP radios and off-the-shelf commercial devices, and (ii) the Colosseum wireless channel emulator. Our experimental findings reveal that (i) SDR-LoRa performs comparably to conventional commercial LoRa systems, and (ii) all the aforementioned applications can be successfully implemented on top of SDR-LoRa with remarkable results. The complete SDR-LoRa implementation code has been publicly shared online, together with a plug-and-play Colosseum container.
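
    The PHY functions listed above follow from LoRa's chirp spread spectrum (CSS) design: each symbol is an upchirp whose starting frequency encodes the symbol value, and the receiver multiplies by a conjugate base chirp ("dechirping") so that an FFT peak reveals the symbol. The NumPy sketch below illustrates only this general principle, not the SDR-LoRa code itself; the spreading factor, the one-sample-per-chip rate, and the noise level are arbitrary assumptions.

```python
import numpy as np

SF = 7                    # spreading factor: 2**SF chips (and samples) per symbol
N = 2 ** SF               # one sample per chip, i.e. sample rate == bandwidth

def chirp(symbol: int) -> np.ndarray:
    """Baseband LoRa upchirp encoding one symbol value in [0, N)."""
    k = np.arange(N)
    # Instantaneous frequency starts at symbol/N cycles/sample, increases
    # by 1/N per sample, and wraps at the Nyquist boundary.
    inst_freq = ((symbol + k) % N) / N
    return np.exp(2j * np.pi * np.cumsum(inst_freq))

def demodulate(rx_symbol: np.ndarray) -> int:
    """Dechirp with the conjugate base upchirp; the FFT peak is the symbol."""
    dechirped = rx_symbol * np.conj(chirp(0))
    return int(np.argmax(np.abs(np.fft.fft(dechirped))))

# Quick self-test over an AWGN channel.
rng = np.random.default_rng(0)
symbols = rng.integers(0, N, size=20)
tx = np.concatenate([chirp(s) for s in symbols])
noise = (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size)) / np.sqrt(2)
rx = tx + 0.5 * noise     # roughly 6 dB SNR before despreading

decoded = [demodulate(rx[i * N:(i + 1) * N]) for i in range(len(symbols))]
print("symbol errors:", sum(int(d != s) for d, s in zip(decoded, symbols)))
```

    The dechirp-and-FFT receiver enjoys a processing gain of 2**SF, which is why the sketch decodes cleanly at this noise level and why LoRa can operate at very low SNR in practice; a preamble of repeated base upchirps is detected the same way, as sustained energy in FFT bin 0.
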

    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are distinct causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. In our time, combined natural and man-made hazards have also been induced. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Proper urban planning is therefore crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the plannability of production capacities. Unplanned failures during production time are particularly costly, causing unplanned downtime and possibly additional collateral damage. Predictive Maintenance addresses this problem: it tries to predict a possible failure and its cause early enough that prevention can be prepared and carried out in time. To predict malfunctions and failures, the industrial plant, with its characteristics as well as its wear and ageing processes, must be modelled. Such modelling can be done by replicating the plant's physical properties, but this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models from data and offer an alternative, especially when the behaviour is highly complex and non-linear. For these models to make predictions, as much data as possible is needed about the condition of the plant, its environment, and production planning.

    In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing: intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, and the steady stream in which it arrives, place high demands on data processing systems. A system that performs live analyses on incoming data streams must process the data at least as fast as the continuous stream delivers it; otherwise, it falls further and further behind in its processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially those using complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. If they are not, however, or if processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used become an important criterion.

    This thesis addresses Predictive Maintenance systems in IIoT environments that use neural networks and deep learning, where runtime behaviour and resource requirements are relevant. The question is whether a new type of neural network can achieve better runtimes with similar result quality. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses emerged and are presented in this thesis: (a) distributing complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and (b) giving a neural cell a deeper internal structure leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on these findings, a novel model layout named the Sliced Long Short-Term Memory Neural Network (SlicedLSTM) was developed; the SlicedLSTM implements the assumptions of both hypotheses in its internal model architecture.
    Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in laboratory tests. The study uses synthetically generated data from a NASA project for predicting failures of aircraft gas-turbine modules; the dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. For this specific application and dataset, the SlicedLSTM was shown to deliver faster processing times with similar result accuracy, clearly outperforming the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were thereby confirmed.
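
    The abstract does not spell out the SlicedLSTM's internals, so no attempt is made to reproduce them here. For orientation, the PyTorch sketch below shows only the general shape of the reference-style baseline such a study compares against: a plain stacked LSTM mapping a window of multivariate sensor readings to a remaining-useful-life (RUL) estimate. The layer sizes, sensor count, and training data are illustrative assumptions, not values from the thesis.

```python
import torch
import torch.nn as nn

class ReferenceLSTM(nn.Module):
    """Plain stacked-LSTM baseline: sensor window -> remaining-useful-life."""
    def __init__(self, n_sensors: int = 24, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # regression head for the RUL estimate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, sensors); summarise the window with the final
        # hidden state of the top LSTM layer.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

# One toy training step on random stand-in data; real inputs would be
# sliding windows over the turbofan sensor series the abstract mentions.
model = ReferenceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 50, 24)     # 32 windows, 50 time steps, 24 sensor channels
y = torch.rand(32) * 125.0      # stand-in RUL targets, in engine cycles

opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"one training step done, MSE = {loss.item():.2f}")
```

    In a runtime comparison of the kind the study describes, the baseline and the candidate model would receive identical input windows, and wall-clock inference time would be measured alongside prediction error.
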

    Mapping the Focal Points of WordPress: A Software and Critical Code Analysis

    Programming languages or code can be examined through numerous analytical lenses. This project is a critical analysis of WordPress, a prevalent web content management system, applying four modes of inquiry. The project draws on theoretical perspectives and areas of study in media, software, platforms, code, language, and power structures. The applied research is based on Critical Code Studies, an interdisciplinary field of study that holds potential as a theoretical lens and methodological toolkit to understand computational code beyond its function. The project begins with a critical code analysis of WordPress, examining its origins and source code and mapping selected vulnerabilities. An examination of the influence of digital and computational thinking follows. The work also explores the intersection of code patching and vulnerability management and how code shapes our sense of control, trust, and empathy, ultimately arguing that a rhetorical-cultural lens can be used to better understand code's controlling influence. Recurring themes throughout these analyses are the connections to power and vulnerability in WordPress' code and how cultural, processual, rhetorical, and ethical implications can be expressed through its code, creating a particular worldview. Code's emergent properties help illustrate how human values and practices (e.g., empathy, aesthetics, language, and trust) become encoded in software design and how people perceive the software through its worldview. These connected analyses reveal cultural, processual, and vulnerability focal points and the influence these entanglements have on WordPress as code, software, and platform. WordPress is a complex sociotechnical platform worthy of further study, as is the interdisciplinary merging of theoretical perspectives and disciplines to critically examine code. Ultimately, this project helps enrich the field by introducing focal points in code, examining sociocultural phenomena within the code, and offering techniques for applying critical code methods.

    An End, Once and for All: Mass Effect 3, Video Game Controversies, and the Fight for Player Agency

    Since the success of the player-led Mass Effect 3 ending controversy, player-led video game controversies have become mainstream sites of industrial and ideological contention between developers, players, and the culture itself. This dissertation focuses on the history of the Mass Effect 3 ending controversy, the game's specific textual qualities that encouraged player protest, and the negotiations between players and developers in online spaces that persuaded the developers to alter the game's ending based on player demands. Using Mass Effect 3 as its primary object, this dissertation argues that this controversy, like subsequent player-led video game controversies, was not simply the result of dissatisfaction with a single plot point or representation in the text or video game community, but the complex negotiation of creative differences between players and developers over the production and control of video game texts and culture. Video games and their controversies are rooted in the medium's intrinsic qualities of interactivity, choice, and labor, and in the need for shared production between developers and players to progress and produce a video game text, which encourages both groups to develop a sense of agency and ownership over the text. This dissertation argues that video games are not just texts that developers create and players play, but texts produced through the co-creative production practice that Axel Bruns has defined as "produsage": texts where producers act in dual roles as users while users also act as producers. This practice allows players a creative stake in the outcome of a video game text, encourages a sense of agency and ownership, and collapses traditional boundaries between developers and players. Video game controversies naturally arise when players perceive a loss of agency and control over the video game text and attempt to reclaim ownership of the text through controversy.

    The Development of Microdosimetric Instrumentation for Quality Assurance in Heavy Ion Therapy, Boron Neutron Capture Therapy and Fast Neutron Therapy

    This thesis presents research on the development of new microdosimetric instrumentation for use with solid-state microdosimeters, in order to improve their portability for radioprotection purposes and for quality assurance (QA) in various hadron therapy modalities. Monte Carlo simulation applications are developed and benchmarked in the context of the relevant therapies. The simulation and experimental findings yield optimisation recommendations for microdosimeter performance and identify possible radioprotection risks posed by activated materials. The first part of this thesis continues research into the development of novel Silicon-on-Insulator (SOI) microdosimeters for hadron therapy QA. This relates specifically to the optimisation of current microdosimeters, the development of Monte Carlo applications for experimental validation, the assessment of radioprotection risks during experiments, and advanced Monte Carlo modelling of various accelerator beamlines. The Geant4 and MCNP6 Monte Carlo codes are used extensively in this thesis, with rigorous benchmarking against experimental results and an evaluation of the similarities and differences between the two codes when simulating relevant hadron therapy facilities. The second part of this thesis focuses on the development of a novel wireless microdosimetry system, the Radiodosimeter, to improve operational efficiency and minimise radioprotection risks. The successful implementation of the wireless Radiodosimeter is an important milestone towards a microdosimetry system that can be operated by an end-user with no prior expertise.
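
    As background on what such instrumentation reports: a microdosimeter records the energy imparted by single events in a small sensitive volume, and the standard derived quantity is the lineal energy y = ε / l̄, where l̄ is the mean chord length of the site (2d/3 for a sphere of diameter d, by Cauchy's formula l̄ = 4V/S). The sketch below shows this routine post-processing on stand-in data; the site size and the event-energy spectrum are assumptions for illustration, not results from the thesis.

```python
import numpy as np

# From single-event energy depositions (as a solid-state microdosimeter
# or a Geant4/MCNP6 simulation would provide) to lineal-energy statistics.
# The gamma-distributed event energies below are stand-in data.

d_um = 1.0                           # assumed spherical site diameter, micrometres
mean_chord_um = 2.0 * d_um / 3.0     # Cauchy: l = 4V/S = 2d/3 for a sphere

rng = np.random.default_rng(1)
eps_keV = rng.gamma(shape=2.0, scale=0.5, size=10_000)   # energy per event, keV

y = eps_keV / mean_chord_um          # lineal energy per event, keV/um

# Standard microdosimetric means: frequency-mean and dose-mean lineal energy.
y_F = y.mean()
y_D = (y ** 2).mean() / y.mean()

print(f"y_F = {y_F:.2f} keV/um   y_D = {y_D:.2f} keV/um")
```

    The dose-mean lineal energy y_D weights large events more heavily than the frequency mean y_F, which is why it is the quantity of interest for relative biological effectiveness in the therapy modalities named in the title.
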