2,085 research outputs found

    Toward Autonomous Power Control in Semi-Grant-Free NOMA Systems: A Power Pool-Based Approach

    In this paper, we design a resource block (RB) oriented power pool (PP) for semi-grant-free non-orthogonal multiple access (SGF-NOMA) in the presence of residual errors resulting from imperfect successive interference cancellation (SIC). In the proposed method, the base station (BS) allocates one orthogonal RB to each grant-based (GB) user, determines the acceptable received power from grant-free (GF) users on that RB, and broadcasts the corresponding threshold. Each GF user, acting as an agent, tries to find the optimal transmit power and RB without affecting the quality-of-service (QoS) and ongoing transmission of the GB user. To this end, we formulate the transmit power and RB allocation problem as a stochastic Markov game to design the desired PPs and maximize the long-term system throughput. The problem is then solved using multi-agent (MA) deep reinforcement learning algorithms, namely double deep Q-networks (DDQN) and Dueling DDQN, chosen for their enhanced value estimation and policy learning, with the latter performing best in environments with large state and action spaces. The agents (GF users) take actions, specifically adjusting power levels and selecting RBs, to maximize the cumulative reward (throughput). Simulation results indicate that the proposed algorithm is computationally scalable, incurs minimal signaling overhead, and achieves notable gains in system throughput compared to existing SGF-NOMA systems. We examine the effect of SIC error levels on sum rate and user transmit power, revealing a decrease in sum rate and an increase in user transmit power as QoS requirements and error variance escalate. We demonstrate that PPs can benefit new (untrained) users joining the network and outperform conventional SGF-NOMA without PPs in spectral efficiency.
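
    As an illustration of the Dueling architecture named in the abstract, the following is a minimal sketch of a per-agent Q-network that scores every (RB, power-level) pair. The state size, the discretisation of the power pool, and all variable names are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a Dueling Q-network that a GF-user agent could use to pick a
# (resource block, transmit-power level) pair. Sizes below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_RBS = 4            # assumed number of resource blocks
NUM_POWER_LEVELS = 8   # assumed discretisation of the power pool
STATE_DIM = 16         # assumed local observation size (e.g. channel gain, QoS margin, feedback)

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)                 # state-value stream V(s)
        self.advantage = nn.Linear(64, num_actions)   # advantage stream A(s, a)

    def forward(self, state):
        h = self.feature(state)
        v = self.value(h)
        a = self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=-1, keepdim=True)

# Joint action space: every (RB, power level) combination.
net = DuelingQNet(STATE_DIM, NUM_RBS * NUM_POWER_LEVELS)
q = net(torch.randn(1, STATE_DIM))
action = int(q.argmax(dim=-1))            # greedy action for this observation
rb, power_level = divmod(action, NUM_POWER_LEVELS)
print(f"choose RB {rb} with power level {power_level}")
```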

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    A new global media order?: debates and policies on media and mass communication at UNESCO, 1960 to 1980

    Defence date: 24 June 2019. Examining Board: Professor Federico Romero, European University Institute (Supervisor); Professor Corinna Unger, European University Institute (Second Reader); Professor Iris Schröder, Universität Erfurt (External Advisor); Professor Sandrine Kott, Université de Genève. The 1970s, a UNESCO report claimed, would be the “communication decade”. UNESCO had started research on new means of mass communication for development purposes in the 1960s. In the 1970s, the issue evolved into a debate on the so-called “New World Information and Communication Order” (NWICO) and the democratisation of global media. It led UNESCO itself into a major crisis in the 1980s. My project traces a dual trajectory that shaped this global debate on transnational media. The first follows communications from being seen as a tool and goal of national development in the 1960s to communications seen as a catalyst for recalibrated international political, cultural and economic relations. The second relates to the recurrent attempts, and eventual failure, of various actors to engage UNESCO as a platform to promote a new global order. I take UNESCO as an observation post to study national ambitions intersecting with internationalist claims to universality, changing understandings of the role of media in development and international affairs, and competing visions of world order. Looking at the modes of this debate, the project also sheds light on the evolving practices of internationalism. Located in the field of a new international history, this study relates to the recent rediscovery of the “new order” discourses of the 1970s as well as to the increasingly diversified literature on internationalism. With its focus on international communications and attempts at regulating them, it also contributes to an international media history of the late twentieth century. The emphasis on the role of international organisations as well as on voices from the Global South makes contributions to our understanding of the historic macro-processes of decolonisation, globalisation and the Cold War.

    Robust and Listening-Efficient Contention Resolution

    This paper shows how to achieve contention resolution on a shared communication channel using only a small number of channel accesses -- both for listening and sending -- with an algorithm that is resistant to adversarial noise. The shared channel operates over a sequence of synchronized time slots, and in any slot agents may attempt to broadcast a packet. An agent's broadcast succeeds if no other agent broadcasts during that slot; if two or more agents broadcast in the same slot, the broadcasts collide and all of them fail. An agent listening on the channel during a slot receives ternary feedback, learning whether that slot had silence, a successful broadcast, or a collision. Agents are (adversarially) injected into the system over time. The goal is to coordinate the agents so that each is able to successfully broadcast its packet. A contention-resolution protocol is measured both in terms of its throughput and the number of slots during which an agent broadcasts or listens. Most prior work assumes that listening is free and only tries to minimize the number of broadcasts. This paper answers two foundational questions. First, is constant throughput achievable when using polylogarithmic channel accesses per agent, both for listening and broadcasting? Second, is constant throughput still achievable when an adversary jams some slots by broadcasting noise in them? Specifically, for N packets arriving over time and J jammed slots, we give an algorithm that, with high probability in N+J, guarantees Θ(1) throughput and achieves on average O(polylog(N+J)) channel accesses against an adaptive adversary. We also have per-agent high-probability guarantees on the number of channel accesses -- either O(polylog(N+J)) or O((J+1) polylog(N)), depending on how quickly the adversary can react to what is being broadcast.
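
    To make the channel model concrete, below is a small simulation sketch of the slotted shared channel with ternary feedback described in the abstract. The exponential-backoff rule used by the agents is only a standard baseline under assumed parameters; it is not the listening-efficient, jamming-resistant protocol contributed by the paper.

```python
# Illustrative simulation of the slotted-channel model: in each slot an agent either
# broadcasts or listens, and the channel feedback is SILENCE, SUCCESS or COLLISION.
import random

SILENCE, SUCCESS, COLLISION = "silence", "success", "collision"

class Agent:
    def __init__(self):
        self.window = 1          # current backoff window
        self.done = False        # set once the agent's packet got through

    def wants_to_send(self):
        return not self.done and random.random() < 1.0 / self.window

def run(num_agents=64, max_slots=10_000):
    agents = [Agent() for _ in range(num_agents)]
    for slot in range(max_slots):
        senders = [a for a in agents if a.wants_to_send()]
        if len(senders) == 1:
            feedback = SUCCESS
            senders[0].done = True
        elif senders:
            feedback = COLLISION
            for a in senders:
                a.window *= 2    # back off after a collision
        else:
            feedback = SILENCE
            for a in agents:
                if not a.done:
                    a.window = max(1, a.window // 2)  # send more aggressively after silence
        if all(a.done for a in agents):
            return slot + 1
    return max_slots

print("slots used:", run())
```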

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Plug-in healthcare: Development, ruination, and repair in health information exchange

    This dissertation explores the work done by people and things in emerging infrastructures for health information exchange. It shows how this work relates to processes of development, production, and growth, as well as to abandonment, ruination, and loss. It argues for a revaluation of repair work: a form of articulation work that attends to gaps and disruptions in the margins of technological development. Often ignored by engineers, policy makers, and researchers, repair sensitizes us to different ways of caring for people and things that do not fit, fall in between categories, and resist social norms and conventions. It reminds us that infrastructures emerge in messy and unevenly distributed sociotechnical configurations, and that technological solutions cannot be simply ‘plugged in’ at will, but require all kinds of work. With that, repair emphasizes the need for more democratic, critical, and reflexive engagements with (and interventions in) health information exchange. Empirically, this study aims to understand how ‘integration’ in health information exchange is done in practice, and to develop concepts and insights that may help us to rethink technological development accordingly. It starts from the premise that the introduction of IT in healthcare is all too often regarded as a neutral process, and as a rational implementation challenge. These widespread views among professionals, managers, and policy makers need to be addressed, as they have very real – and mostly undesirable – consequences. Spanning a period of more than ten years, this study traces the birth and demise of an online regional health portal in the Netherlands (2009-2019). Combining ethnographic research with an experimental form of archive work, it describes sociotechnical networks that expanded, collapsed, and reconfigured around a variety of problems – from access to information and data ownership to business cases, financial sustainability, and regional care. It puts a spotlight on the integration of standards, infrastructures, and users in the portal project, and on elements of collapsing networks that quietly resurfaced elsewhere. The reconstruction of these processes foregrounds different instances of repair work in the portal’s development and subsequent abandonment, repurposing, and erasure. Conceptually, this study contributes to academic debates in health information exchange, including the politics of technology, practices of participatory design, and the role of language in emerging information infrastructures. It latches on to ethnographic studies on information systems and infrastructural work, and brings together insights from actor-network theory, science and technology studies, and figurational sociology to rethink and extend current (reflexive and critical) understandings of technological development. It raises three questions: What work is done in the development and demise of an online health portal? How are relations between people and things shaped in that process? And how can insights from this study help us to understand changing sociotechnical figurations in health information exchange? The final analysis includes five key concepts: the act of building network extensions, the method of tracing phantom networks, the notion of sociotechnical figurations, the logic of plug-in healthcare, and repair as a heuristic device.

    Data Collection in Two-Tier IoT Networks with Radio Frequency (RF) Energy Harvesting Devices and Tags

    The Internet of Things (IoT) is expected to connect physical objects and end-users using technologies such as wireless sensor networks and radio frequency identification (RFID). In addition, it will employ a wireless multi-hop backhaul to transfer data collected by a myriad of devices to users or applications such as digital twins operating in a Metaverse. A critical issue is that the number of packets collected and transferred to the Internet is bounded by limited network resources such as bandwidth and energy. In this respect, IoT networks have adopted technologies such as time division multiple access (TDMA), successive interference cancellation (SIC) and multiple-input multiple-output (MIMO) in order to increase network capacity. Another fundamental issue is energy. To this end, researchers have exploited radio frequency (RF) energy-harvesting technologies to prolong the lifetime of energy-constrained sensors and smart devices. Specifically, devices with RF energy-harvesting capabilities can rely on ambient RF sources such as access points, television towers, and base stations. Further, an operator may deploy dedicated power beacons that serve as RF-energy sources. Apart from that, in order to reduce energy consumption, devices can adopt ambient backscattering communication technologies. Advantageously, backscattering allows devices to communicate using a negligible amount of energy by modulating ambient RF signals. To address the aforementioned issues, this thesis first considers data collection in a two-tier MIMO ambient RF energy-harvesting network. The first tier consists of routers with MIMO capability and a set of source-destination pairs/flows. The second tier consists of energy-harvesting devices that rely on RF transmissions from routers for energy supply. The problem is to determine a minimum-length TDMA link schedule that satisfies the traffic demand of source-destination pairs and the energy demand of energy-harvesting devices. The thesis formulates the problem as a linear program (LP) and outlines a heuristic to construct transmission sets that are then used by the LP. In addition, it outlines a new routing metric that considers the energy demand of energy-harvesting devices to meet the routing requirements of IoT networks. The simulation results show that the proposed algorithm on average achieves 31.25% shorter schedules compared to competing schemes. In addition, the proposed routing metric results in link schedules that are at most 24.75% longer than those computed by the LP.
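
    The following is a minimal sketch of the kind of LP described above: given a few pre-computed transmission sets, it chooses how long each set is active so that link traffic demands and the energy demands of RF energy-harvesting devices are met while the total schedule length is minimised. All rates, demands, and set definitions are invented for illustration and are not taken from the thesis.

```python
# Toy minimum-length TDMA scheduling LP over pre-computed transmission sets.
import numpy as np
from scipy.optimize import linprog

# Three candidate transmission sets (columns) and two links (rows):
# entry = bits delivered on that link per unit time while the set is active.
link_rate = np.array([
    [10.0,  0.0, 6.0],   # link A
    [ 0.0, 10.0, 6.0],   # link B
])
link_demand = np.array([50.0, 40.0])          # bits required per link

# Energy harvested by one EH device per unit time under each transmission set.
harvest_rate = np.array([[0.2, 0.1, 0.3]])
energy_demand = np.array([3.0])

c = np.ones(3)                                # minimise total active time (schedule length)
A_ub = -np.vstack([link_rate, harvest_rate])  # rewrite ">= demand" constraints in "<=" form
b_ub = -np.concatenate([link_demand, energy_demand])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("schedule length:", res.fun, "set durations:", res.x)
```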

    A novel routing optimization strategy based on reinforcement learning in perception layer networks

    Wireless sensor networks have become extremely popular due to the rapid development of the Internet of Things (IoT). IoT routing is the basis for the efficient operation of the perception-layer network. As a popular branch of machine learning, reinforcement learning techniques have gained significant attention due to their successful application in the field of network communication. In the traditional Routing Protocol for Low-Power and Lossy Networks (RPL), a fair broadcast suppression mechanism, the Drizzle algorithm, is usually used to ensure fairness of control-message transmission between IoT terminals, but Drizzle cannot allocate priorities. Moreover, Drizzle keeps changing its redundancy constant k but never converges to the optimal value of k. To address this problem, this paper combines reinforcement learning (RL) with the Trickle timer and proposes an RL-based Intelligent Adaptive Trickle-Timer Algorithm (RLATT) for routing optimization in the IoT perception layer. RLATT applies a threefold optimization to the Trickle timer algorithm. To verify the algorithm's effectiveness, simulations are carried out on the Contiki operating system and compared with the standard Trickle timer and the Drizzle algorithm. Experiments show that the proposed algorithm performs better in terms of packet delivery ratio (PDR), power consumption, network convergence time, and total control-cost ratio.
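
    As a rough illustration of coupling a Trickle timer with reinforcement learning, the toy sketch below lets a tabular Q-learning agent pick the redundancy constant k that standard Trickle keeps fixed. The reward, the candidate k values, and the traffic model are assumptions for illustration only; this is not the RLATT algorithm itself.

```python
# Toy sketch: tabular Q-learning adapts the Trickle redundancy constant k.
import random

K_CHOICES = [1, 2, 3, 4, 5]               # candidate redundancy constants (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = {k: 0.0 for k in K_CHOICES}

def choose_k():
    if random.random() < EPSILON:         # explore
        return random.choice(K_CHOICES)
    return max(q_table, key=q_table.get)  # exploit

def trickle_interval(k, consistent_msgs, interval=1.0, i_max=16.0):
    """One Trickle interval: suppress transmission if >= k consistent messages were heard."""
    transmit = consistent_msgs < k
    next_interval = min(interval * 2, i_max)   # double the interval, capped at I_max
    return transmit, next_interval

for episode in range(200):
    k = choose_k()
    consistent_msgs = random.randint(0, 6)     # stand-in for observed neighbour traffic
    transmit, _ = trickle_interval(k, consistent_msgs)
    # Toy reward: favour transmitting when few neighbours covered the update and
    # suppressing when many did (a stand-in for the PDR / control-overhead trade-off).
    reward = 1.0 if (transmit and consistent_msgs < 2) or (not transmit and consistent_msgs >= 2) else -1.0
    best_next = max(q_table.values())
    q_table[k] += ALPHA * (reward + GAMMA * best_next - q_table[k])

print("learned preference over k:", q_table)
```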