    Match Attendance and “Sportainment”: The Case of Vålerenga Fotball Damer

    Mapping and analysis of the current self- and co- regulatory framework of commercial communication aimed at minors

    As the advertising sector has been very active in self-regulating commercial communication aimed at children, a patchwork of different rules and instruments exists, drafted by different self-regulatory organisations at the international, European and national levels. In order to determine the scope and contents of these rules, and hence the actual level of protection of children, a structured mapping of these rules is needed. As such, this report aims to provide an overview of the different categories of Alternative Regulatory Instruments (ARIs), such as self- and co-regulation, regarding (new) advertising formats aimed at children. This report complements the first legal AdLit research report, which provided an overview of the legislative provisions in this domain.

    Effective interprocess communication (IPC) in a real-time transputer network

    The thesis describes the design and implementation of an interprocess communication (IPC) mechanism within a real-time distributed operating system kernel (RT-DOS) designed for a transputer-based network. The requirements of real-time operating systems are examined and existing design and implementation strategies are described. Particular attention is paid to one of the object-oriented techniques, although it is concluded that these techniques are not feasible for the chosen implementation platform. Studies of a number of existing operating systems are reported. The choices for various aspects of operating system design, and their influence on the IPC mechanism to be used, are elucidated. The actual design choices are related to the real-time requirements, and the implementation that has been adopted is described.
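
    Transputer-style IPC is conventionally built on synchronous channels, where sender and receiver rendezvous before either proceeds. A minimal sketch in Python illustrates the idea; the `Channel` class and its method names are illustrative inventions, not RT-DOS's actual interface:

    ```python
    import queue

    class Channel:
        """Sketch of a synchronous (rendezvous) channel in the spirit of
        transputer-style IPC. Hypothetical API, for illustration only."""
        def __init__(self):
            self._slot = queue.Queue(maxsize=1)  # one-message transfer slot
            self._ack = queue.Queue(maxsize=1)   # receipt acknowledgement

        def send(self, msg):
            self._slot.put(msg)  # deposit the message in the transfer slot
            self._ack.get()      # rendezvous: block until the receiver has it

        def recv(self):
            msg = self._slot.get()  # block until a sender deposits a message
            self._ack.put(None)     # release the blocked sender
            return msg
    ```

    The acknowledgement queue is what makes the exchange synchronous: `send` does not return until `recv` has actually taken the message, mirroring the unbuffered channel semantics of the transputer's occam model.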

    Nonorthogonal Multiple Access and Subgrouping for Improved Resource Allocation in Multicast 5G NR

    The ever-increasing demand for applications with stringent constraints on device density, latency, user mobility, or peak data rate has led to the appearance of the latest generation of mobile networks (i.e., 5G). However, there is still room for improvement in network spectral efficiency, not only at the waveform level but also in Radio Resource Management (RRM). Up to now, solutions based on multicast transmissions have delivered considerable efficiency increments by successfully implementing subgrouping strategies. These techniques enable more efficient exploitation of channel time and frequency resources by splitting users into subgroups and applying independent and adaptive modulation and coding schemes. However, at the RRM level, traditional multiplexing techniques pose a hard limit on exploiting the available resources, especially when users' QoS requests are unbalanced. Under these circumstances, this paper proposes jointly applying subgrouping and Non-Orthogonal Multiple Access (NOMA) techniques in 5G to increase the network data rate. This study shows that NOMA is highly spectrum-efficient and can improve system throughput under certain conditions. In the first part of this paper, an in-depth analysis of the implications of introducing NOMA techniques into 5G subgrouping at the RRM level is carried out. Afterward, the validation is accomplished by applying the proposed approach to different 5G use cases based on vehicular communications. After a comprehensive analysis of the results, a theoretical approach combining NOMA and time division is presented, which considerably improves the data rate offered in each use case.
    This work was supported in part by the Italian Ministry of University and Research (MIUR), within the Smart Cities framework, Project Cagliari2020 ID: PON04a2_00381; in part by the Basque Government under Grant IT1234-19; and in part by the Spanish Government [Project PHANTOM under Grant RTI2018-099162-B-I00 (MCIU/AEI/FEDER, UE)].
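
    The gain that power-domain NOMA offers over orthogonal multiplexing can be illustrated with a small numeric sketch using the standard Shannon-rate formulation; the two-user setup, gains, and power split below are hypothetical values for illustration, not figures from the paper:

    ```python
    import math

    def noma_rates(g_near, g_far, p_near, p_far, noise=1.0):
        """Two-user power-domain NOMA on one resource block (illustrative).
        The far (weak-channel) user decodes its signal treating the near
        user's superposed signal as interference; the near user removes the
        far user's signal via successive interference cancellation (SIC)."""
        r_far = math.log2(1 + p_far * g_far / (p_near * g_far + noise))
        r_near = math.log2(1 + p_near * g_near / noise)
        return r_near, r_far

    def oma_rates(g_near, g_far, p_total, noise=1.0):
        """Orthogonal baseline: each user gets half the time at full power."""
        r_near = 0.5 * math.log2(1 + p_total * g_near / noise)
        r_far = 0.5 * math.log2(1 + p_total * g_far / noise)
        return r_near, r_far
    ```

    With a strong near-user channel (e.g., `g_near=10`, `g_far=1`) and most of the power allocated to the far user (`p_near=0.2`, `p_far=0.8`), the NOMA sum rate exceeds the time-division baseline, which is the spectral-efficiency argument the abstract makes.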

    Reducing Cache Contention On GPUs

    The usage of Graphics Processing Units (GPUs) as application accelerators has become increasingly popular because, compared to traditional CPUs, they are more cost-effective, their highly parallel nature complements a CPU, and they are more energy efficient. With the popularity of GPUs, many GPU-based compute-intensive applications (a.k.a. GPGPUs) show significant performance improvements over traditional CPU-based implementations. Caches, which significantly improve CPU performance, have been introduced to GPUs to further enhance application performance. However, in many cases the effect of caches on GPUs is insignificant, and in some cases it is even detrimental. The massive parallelism of the GPU execution model, and the memory accesses it generates, cause the GPU memory hierarchy to suffer from significant memory resource contention among threads. One source of cache contention is the column-strided memory access patterns that many data-intensive GPU applications generate. When such access patterns are mapped to hardware thread groups, they become memory-divergent instructions whose memory requests are not GPU hardware friendly, resulting in serialized accesses and performance degradation. Cache contention also arises from cache pollution caused by lines with low reuse: for a cache to be effective, a cached line must be reused before its eviction. Unfortunately, the streaming characteristics of GPGPU workloads and the massively parallel GPU execution model increase the reuse distance of data, or equivalently reduce its reuse frequency. In a GPU, the pollution caused by data with large reuse distances is significant. Memory request stalls are another contention factor: a stalled Load/Store (LDST) unit cannot execute memory requests from any ready warps in the issue stage, denying those warps potential cache hits.
This dissertation proposes three novel architectural modifications to reduce the contention: 1) contention-aware selective caching detects the memory-divergent instructions caused by column-strided access patterns, calculates the contending cache sets and locality information, and then caches selectively; 2) locality-aware selective caching dynamically calculates the reuse frequency with efficient hardware and caches lines based on that frequency; and 3) memory request scheduling queues memory requests from the warp issue stage, relieving the LDST unit stall, and schedules items from the queue to the LDST unit by probing the cache multiple times. Through systematic experiments and comprehensive comparisons with existing state-of-the-art techniques, this dissertation demonstrates the effectiveness of the aforementioned techniques and the viability of reducing cache contention through architectural support. Finally, this dissertation suggests other promising opportunities for future research on GPU architecture.
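
The locality-aware selective caching idea (technique 2) can be sketched as a toy software model: lines are only inserted into the cache once their observed access count suggests reuse, so first-touch streaming data bypasses the cache instead of polluting it. The class name, threshold parameter, and address-trace interface below are illustrative inventions, not the dissertation's hardware design:

```python
from collections import OrderedDict, Counter

class SelectiveLRU:
    """Toy model of reuse-based selective caching (illustrative only)."""
    def __init__(self, capacity, reuse_threshold=2):
        self.capacity = capacity
        self.threshold = reuse_threshold
        self.cache = OrderedDict()   # insertion order doubles as LRU order
        self.seen = Counter()        # per-line access counts (reuse estimate)
        self.hits = self.misses = 0

    def access(self, addr):
        self.seen[addr] += 1
        if addr in self.cache:
            self.cache.move_to_end(addr)  # refresh LRU position
            self.hits += 1
            return True
        self.misses += 1
        if self.seen[addr] >= self.threshold:  # only cache lines showing reuse
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least-recently used
            self.cache[addr] = True
        return False
```

A pure streaming trace (every address touched once) never fills this cache, while a trace with repeated addresses starts hitting after the reuse threshold is reached, which captures the pollution-avoidance intuition behind the proposed hardware.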

    A Day in the Life: The Federal Communications Commission
