
    Relaying Simultaneous Multicast Messages

    The problem of multicasting multiple messages with the help of a relay, which may also have an independent message of its own to multicast, is considered. As a first step to address this general model, referred to as the compound multiple access channel with a relay (cMACr), the capacity region of the multiple access channel with a "cognitive" relay is characterized, including the cases of partial and rate-limited cognition. Achievable rate regions for the cMACr model are then presented based on decode-and-forward (DF) and compress-and-forward (CF) relaying strategies. Moreover, an outer bound is derived for the special case in which each transmitter has a direct link to one of the receivers while the connection to the other receiver is enabled only through the relay terminal. Numerical results for the Gaussian channel are also provided. Comment: This paper was presented at the IEEE Information Theory Workshop, Volos, Greece, June 2009.
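
    For orientation only, the DF strategy mentioned above generalizes the classical decode-and-forward rate of the single-message relay channel (Cover and El Gamal); its familiar max-min form is sketched below. The cMACr regions additionally involve the two multicast messages and the relay's own message and are not reproduced here.

```latex
% Classical single-message decode-and-forward relay rate, shown only to illustrate
% the max-min structure that the cMACr DF region generalizes (not the paper's result):
R_{\mathrm{DF}} \;=\; \max_{p(x, x_r)} \min\bigl\{\, I(X; Y_r \mid X_r),\; I(X, X_r; Y) \,\bigr\}
```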

    Practical Implementation of Contract Administration Performance Model in Qatar Construction Projects

    Globally, the performance of Construction Contract Administration (CCA) is becoming of significant interest as the industry suffers from notable delays, cost overruns, and disputes as a consequence of poor contract administration practices. In Qatar, the pace of projects will continue after hosting the FIFA 2022 World Cup to achieve the 2030 Qatar National Vision and beyond; therefore, monitoring proper CCA implementation and performance is necessary. Due to the wide scope and complicated nature of CCA, there is as yet no consensus on how to assess its performance. This study briefly presents a systematic, operational, and multi-dimensional construction Contract Administration Performance Framework (CAPF) consisting of 93 CCA key measures/tasks categorized into 11 CCA dimensions/process groups. The proposed framework is validated by structural equation modeling, and subsequently the Contract Administration Performance Model (CAPM) was established. Through this study, the CAPM and its components are briefly explained, then implemented on a real-world sample of 13 small, medium, and major construction projects in Qatar covering both the public and private sectors, and the performance is benchmarked. It is found that the model provides an operational basis for measuring the CCA Group Performance Indices (GPI) and the overall Construction Contract Administration Performance Index (CCAPI), and supports the identification of underperforming groups. The benchmarking value for the CCAPI (77.5%) demonstrates that the level of CCA performance is good. The benchmarking values of the GPIs (ranging from 74.3% to 87.8%) are also good, except for risk management (GPI = 50.5%), which needs an improvement program.
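
    A minimal sketch of how group and overall indices of this kind can be aggregated from scored measures is shown below. The group names, measure scores, and equal weights are illustrative assumptions; they are not the SEM-derived weights or the 93 measures used by the actual CAPM.

```python
# Hypothetical sketch: aggregating scored CCA measures into group and overall indices.
# Group names, scores, and equal weighting are assumptions made for illustration.

def group_performance_index(measure_scores, weights=None):
    """Weighted average of the 0-100 scores of the measures in one CCA group (a GPI)."""
    if weights is None:
        weights = [1.0] * len(measure_scores)
    return sum(w * s for w, s in zip(weights, measure_scores)) / sum(weights)

def overall_ccapi(group_indices, group_weights=None):
    """Weighted average of the group performance indices (the overall CCAPI)."""
    if group_weights is None:
        group_weights = [1.0] * len(group_indices)
    return sum(w * g for w, g in zip(group_weights, group_indices)) / sum(group_weights)

# Toy example for one project (scores are made up):
groups = {
    "communication": [80, 75, 90],
    "risk management": [55, 45, 52],
}
gpis = {name: group_performance_index(scores) for name, scores in groups.items()}
print(gpis)                                  # per-group indices
print(overall_ccapi(list(gpis.values())))    # overall CCAPI for the project
```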

    Graph run-length matrices for histopathological image segmentation

    The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on the visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, while also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
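
    The run-length idea carried over from pixels to components is sketched below on a label sequence. The label set, the example sequence, and the assumption that such sequences are extracted along paths of the tissue graph are illustrative; the paper's exact graph construction is not reproduced.

```python
# Illustrative sketch: a run-length matrix over sequences of node labels, analogous
# to a gray-level run-length matrix but counting runs of cytological component types
# rather than runs of pixel intensities. The traversal that produces the sequence is
# assumed to come from the tissue graph (hypothetical here).
from collections import defaultdict

def run_length_matrix(label_sequence, max_run=10):
    """Count (label, run_length) pairs along a sequence of component labels."""
    counts = defaultdict(int)
    i = 0
    while i < len(label_sequence):
        j = i
        while j < len(label_sequence) and label_sequence[j] == label_sequence[i]:
            j += 1
        run = min(j - i, max_run)          # cap long runs, as run-length matrices often do
        counts[(label_sequence[i], run)] += 1
        i = j
    return dict(counts)

# Example: component labels nucleus='N', stroma='S', lumen='L'
print(run_length_matrix(list("NNNSSLLLLN")))
# {('N', 3): 1, ('S', 2): 1, ('L', 4): 1, ('N', 1): 1}
```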

    On lossy transmission of correlated sources over a multiple access channel

    We study lossy communication of correlated sources over a multiple access channel. In particular, we provide a joint source-channel coding scheme for transmitting correlated sources with decoder side information, and study the conditions under which separate source and channel coding is optimal. For the latter, the encoders and/or the decoder have access to a common observation conditioned on which the two sources are independent. By establishing necessary and sufficient conditions, we show the optimality of separation when the encoders and the decoder both have access to the common observation. We also demonstrate that separation is optimal when only the encoders have access to the common observation, whose lossless recovery is required at the decoder. As a special case, we study separation for sources with a common part. Our results indicate that side information can have a significant impact on the optimality of source-channel separation in lossy transmission.
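
    A schematic reading of "separation is optimal" in this setting is given below; it is a hedged restatement of the intuition, not the paper's exact theorem, and the notation is introduced here for illustration only.

```latex
% Schematic form of the separation condition (not the paper's exact statement):
% with the common observation W available to both encoders and the decoder, and
% S_1, S_2 conditionally independent given W, lossy transmission at distortions
% (D_1, D_2) is achievable by separate source and channel coding roughly when
\bigl( R_{S_1 \mid W}(D_1),\; R_{S_2 \mid W}(D_2) \bigr) \in \mathcal{C}_{\mathrm{MAC}},
% where R_{S_i \mid W}(D_i) is the conditional rate-distortion function of source S_i
% and \mathcal{C}_{\mathrm{MAC}} is the capacity region of the multiple access channel.
```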

    Average age of information with hybrid ARQ under a resource constraint

    Scheduling the transmission of status updates over an error-prone communication channel is studied in order to minimize the long-term average age of information (AoI) at the destination under a constraint on the average number of transmissions at the source node. After each transmission, the source receives instantaneous ACK/NACK feedback, and decides on the next update without prior knowledge of the success of future transmissions. First, the optimal scheduling policy is studied under different feedback mechanisms when the channel statistics are known; in particular, the standard automatic repeat request (ARQ) and hybrid ARQ (HARQ) protocols are considered. Then, for an unknown environment, an average-cost reinforcement learning (RL) algorithm is proposed that learns the system parameters and the transmission policy in real time. The effectiveness of the proposed methods is verified through numerical simulations.
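
    A minimal simulation of the age process under HARQ is sketched below. The geometric decay of the error probability with the retransmission count and the simple randomized transmission policy are assumptions made for illustration; they are not the paper's channel model or its optimal policy.

```python
# Minimal simulation sketch of average AoI under HARQ with a transmission constraint.
# Error model and randomized policy are illustrative assumptions.
import random

def simulate_aoi(horizon=100_000, p0=0.5, lam=0.5, tx_prob=0.6, seed=0):
    random.seed(seed)
    age, pkt_age, retx = 1, 0, 0
    total_age, total_tx = 0.0, 0
    for _ in range(horizon):
        transmitted = random.random() < tx_prob      # randomized policy (assumption)
        success = False
        if transmitted:
            total_tx += 1
            if retx == 0:
                pkt_age = 0                          # fresh update generated at transmission
            fail_prob = p0 * (lam ** retx)           # HARQ combining lowers the error prob.
            success = random.random() > fail_prob
        age += 1                                     # one slot elapses
        pkt_age += 1
        if success:
            age = pkt_age                            # AoI drops to the delivered update's age
            retx = 0
        elif transmitted:
            retx += 1                                # NACK: keep retransmitting the same update
        total_age += age
    return total_age / horizon, total_tx / horizon

avg_aoi, tx_rate = simulate_aoi()
print(f"average AoI ~ {avg_aoi:.2f}, transmission rate ~ {tx_rate:.2f}")
```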

    Improved policy representation and policy search for proactive content caching in wireless networks

    We study the problem of proactively pushing contents into a finite-capacity cache memory of a user equipment in order to reduce the long-term average energy consumption in a wireless network. We consider an online social network (OSN) framework, in which new contents are generated over time and each content remains relevant to the user for a random time period, called the lifetime of the content. The user accesses the OSN through a wireless network at random time instants to download and consume all the relevant contents. Downloading contents has an energy cost that depends on the channel state and the number of downloaded contents. Our aim is to reduce the long-term average energy consumption by proactively caching contents at favorable channel conditions. In previous work, it was shown that the optimal caching policy is infeasible to compute (even with complete knowledge of a stochastic model describing the system), and a simple family of threshold policies was introduced and optimised using the finite difference method. In this paper, we improve upon both components of this approach: we use linear function approximation (LFA) to better approximate the considered family of caching policies, and apply the REINFORCE algorithm to optimise its parameters. Numerical simulations show that the new approach provides a reduction in both the average energy cost and the running time for policy optimisation.
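
    A crude sketch of a REINFORCE update for a linearly parameterized caching policy is given below. The feature choice, the sigmoid "soft threshold" parameterization, the toy cost model, and the per-step (rather than full-return) gradient weighting are illustrative assumptions, not the paper's exact policy class or environment.

```python
# Minimal REINFORCE sketch for a caching policy with linear function approximation.
# Features, cost model, and the per-step cost weighting are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def features(channel_gain, n_relevant):
    """Linear function approximation: a small feature vector of the current state."""
    return np.array([1.0, channel_gain, float(n_relevant)])

def push_prob(theta, phi):
    """Probability of proactively downloading now (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-theta @ phi))

def reinforce_episode(theta, horizon=50, lr=0.01):
    grads, costs = [], []
    for _ in range(horizon):
        # Toy state: random channel gain and number of relevant contents (assumption).
        phi = features(rng.exponential(1.0), rng.integers(0, 10))
        p = push_prob(theta, phi)
        push = rng.random() < p
        # Toy energy cost: pushing is cheap on good channels, deferring risks a
        # more expensive on-demand download later (assumption).
        cost = phi[2] / (phi[1] + 0.1) if push else 0.3 * phi[2]
        # Gradient of log-probability for a Bernoulli policy with sigmoid parameterization.
        grads.append((1 - p) * phi if push else -p * phi)
        costs.append(cost)
    baseline = np.mean(costs)
    for g, c in zip(grads, costs):
        theta -= lr * (c - baseline) * g      # descend: lower cost is better
    return theta, float(np.mean(costs))

theta = np.zeros(3)
for _ in range(200):
    theta, avg_cost = reinforce_episode(theta)
print("learned parameters:", theta, "average cost:", round(avg_cost, 3))
```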

    Cache-aided combination networks with interference

    Centralized coded caching and delivery is studied for a radio access combination network (RACN), whereby a set of H edge nodes (ENs), connected to a cloud server via orthogonal fronthaul links with limited capacity, serve a total of K user equipments (UEs) over wireless links. The cloud server is assumed to hold a library of N files, each of size F bits; and each user, equipped with a cache of size μ_R N F bits, is connected to a distinct set of r ENs, each of which is equipped with a cache of size μ_T N F bits, where μ_T, μ_R ∈ [0, 1] are the fractional cache capacities of the ENs and the UEs, respectively. The objective is to minimize the normalized delivery time (NDT), which refers to the worst-case delivery latency when each user requests a single distinct file from the library. Three coded caching and transmission schemes are considered, namely the MDS-IA, soft-transfer and zero-forcing (ZF) schemes. MDS-IA utilizes maximum distance separable (MDS) codes in the placement phase and real interference alignment (IA) in the delivery phase. The achievable NDT for this scheme is presented for r = 2 and arbitrary fractional cache sizes μ_T and μ_R, and also for an arbitrary value of r and fractional cache size μ_T when the cache capacity of the UE is above a certain threshold. The soft-transfer scheme utilizes soft transfer of coded symbols to ENs that implement ZF over the edge links. The achievable NDT for this scheme is presented for arbitrary r and arbitrary fractional cache sizes μ_T and μ_R. The last scheme utilizes ZF between the ENs and the UEs without the participation of the cloud server in the delivery phase. The achievable NDT for this scheme is presented for an arbitrary value of r when the total cache size at a pair of UE and EN is sufficient to store the whole library, i.e., μ_T + μ_R ≥ 1. The results indicate that the fronthaul capacity determines which scheme achieves a better performance in terms of the NDT, and the soft-transfer scheme becomes favorable as the fronthaul capacity increases.
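
    As background, the NDT metric referenced above is typically defined by the high-SNR normalization below (stated here for orientation; the paper's achievable NDT expressions are not reproduced).

```latex
% Standard high-SNR normalization behind the NDT metric (background, not the paper's result):
\delta(\mu_R, \mu_T) \;=\; \lim_{P \to \infty} \, \lim_{F \to \infty} \, \sup \; \frac{T}{\,F / \log P\,}
% T is the worst-case delivery time, and F / \log P approximates the time needed to deliver
% a single F-bit file over an interference-free point-to-point link at SNR P.
```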

    Neural Distributed Image Compression Using Common Information

    We present a novel deep neural network (DNN) architecture for compressing an image when a correlated image is available as side information only at the decoder, a special case of the well-known distributed source coding (DSC) problem in information theory. In particular, we consider a pair of stereo images, which generally have high correlation with each other due to overlapping fields of view, and assume that one image of the pair is to be compressed and transmitted, while the other image is available only at the decoder. In the proposed architecture, the encoder maps the input image to a latent space, quantizes the latent representation, and compresses it using entropy coding. The decoder is trained to extract the common information between the input image and the correlated image, using only the latter. The received latent representation and the locally generated common information are passed through a decoder network to obtain an enhanced reconstruction of the input image. The common information provides a succinct representation of the relevant information at the receiver. We train and demonstrate the effectiveness of the proposed approach on the KITTI and Cityscape datasets of stereo image pairs. Our results show that the proposed architecture is capable of exploiting the decoder-only side information, and outperforms previous work on stereo image compression with decoder side information.
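
    The decoder-side fusion step is sketched below in PyTorch style: a small network extracts common-information features from the side-information image and concatenates them with the received latent before reconstruction. Module names, layer sizes, and the simple concatenation-based fusion are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of decoder-side fusion of a received latent with common
# information extracted from the side-information image (available only at the decoder).
import torch
import torch.nn as nn

class CommonInfoDecoder(nn.Module):
    def __init__(self, latent_ch=64, common_ch=64):
        super().__init__()
        # Extracts a "common information" representation from the side-information image.
        self.common_extractor = nn.Sequential(
            nn.Conv2d(3, common_ch, 5, stride=4, padding=2), nn.ReLU(),
            nn.Conv2d(common_ch, common_ch, 5, stride=4, padding=2),
        )
        # Reconstructs the input image from the received latent plus the common features.
        self.reconstruct = nn.Sequential(
            nn.ConvTranspose2d(latent_ch + common_ch, 64, 5, stride=4, padding=2, output_padding=3),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=4, padding=2, output_padding=3),
        )

    def forward(self, received_latent, side_info_image):
        common = self.common_extractor(side_info_image)        # decoder-only side info
        fused = torch.cat([received_latent, common], dim=1)    # fuse along channels
        return self.reconstruct(fused)

# Shape check with dummy tensors (16x downsampled latent of a 128x128 image):
decoder = CommonInfoDecoder()
latent = torch.randn(1, 64, 8, 8)
side = torch.randn(1, 3, 128, 128)
print(decoder(latent, side).shape)   # expected: torch.Size([1, 3, 128, 128])
```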

    Neural distributed image compression with cross-attention feature alignment

    We consider the problem of compressing an information source when a correlated one is available as side information only at the decoder side, which is a special case of the distributed source coding problem in information theory. In particular, we consider a pair of stereo images, which have overlapping fields of view, and are captured by a synchronized and calibrated pair of cameras as correlated image sources. In previously proposed methods, the encoder transforms the input image to a latent representation using a deep neural network, and compresses the quantized latent representation losslessly using entropy coding. The decoder decodes the entropy-coded quantized latent representation, and reconstructs the input image using this representation and the available side information. In the proposed method, the decoder employs a cross-attention module to align the feature maps obtained from the received latent representation of the input image and a latent representation of the side information. We argue that aligning the correlated patches in the feature maps allows better utilization of the side information. We empirically demonstrate the competitiveness of the proposed algorithm on the KITTI and Cityscape datasets of stereo image pairs. Our experimental results show that the proposed architecture is able to exploit the decoder-only side information in a more efficient manner compared to previous works.
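
    The cross-attention alignment step is sketched below: queries come from the feature map of the received latent, keys and values come from the side-information feature map. Channel size, head count, the residual combination, and the use of a single attention layer are illustrative assumptions, not the paper's full module.

```python
# Hypothetical sketch of cross-attention alignment between two feature maps.
import torch
import torch.nn as nn

class CrossAttentionAlign(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, input_feat, side_feat):
        # Flatten (B, C, H, W) feature maps into token sequences of length H*W.
        b, c, h, w = input_feat.shape
        q = input_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from the input latent
        kv = side_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) keys/values from side info
        aligned, _ = self.attn(q, kv, kv)           # side features re-aligned to the input
        aligned = aligned.transpose(1, 2).reshape(b, c, h, w)
        # Residual combination keeps the original features and adds the aligned ones.
        return input_feat + aligned

# Shape check with dummy feature maps:
align = CrossAttentionAlign()
x = torch.randn(1, 64, 8, 8)
y = torch.randn(1, 64, 8, 8)
print(align(x, y).shape)   # expected: torch.Size([1, 64, 8, 8])
```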