128 research outputs found

    QoE Modelling, Measurement and Prediction: A Review

    In mobile computing systems, users can access network services anywhere and anytime using mobile devices such as tablets and smartphones. These devices connect to the Internet via network or telecommunications operators. Users usually have some expectations about the services provided to them by different operators. Users' expectations, along with additional factors such as cognitive and behavioural states, cost, and network quality of service (QoS), may determine their quality of experience (QoE). If users are not satisfied with their QoE, they may switch to different providers or stop using a particular application or service. Thus, QoE measurement and prediction techniques may help users obtain personalized services from service providers, and may help service providers reduce user-to-operator switchover. This paper presents a review of the state-of-the-art research in the area of QoE modelling, measurement, and prediction. In particular, we investigate and discuss the strengths and shortcomings of existing techniques. Finally, we present future research directions for developing novel QoE measurement and prediction techniques.

    1/L^2 corrected soft photon theorem from a CFT_3 Ward identity

    Classical soft theorems applied to probe scattering processes on AdS_4 spacetimes predict the existence of 1/L^2 corrections to the soft photon and soft graviton factors of asymptotically flat spacetimes. In this paper, we establish that the 1/L^2 corrected soft photon theorem can be derived from a large-N CFT_3 Ward identity. We derive a perturbed soft photon mode operator on a flat spacetime patch in global AdS_4 in terms of an integrated expression of the boundary CFT current. Using this in the CFT_3 Ward identity, we recover the 1/L^2 corrected soft photon theorem derived from classical soft theorems.
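
    For orientation, the flat-spacetime leading soft photon factor that the 1/L^2 correction modifies can be written schematically as below; this is the standard Weinberg soft factor, and the explicit form of the 1/L^2 correction derived in the paper is not reproduced here.

```latex
% Leading (Weinberg) soft photon theorem in flat spacetime, shown schematically.
% The paper derives an O(1/L^2) correction to this factor (L = AdS_4 radius)
% from a large-N CFT_3 Ward identity; that correction is not reproduced here.
\[
  \lim_{\omega \to 0}\,
  \mathcal{A}_{n+1}(p_1,\dots,p_n;\, q,\epsilon)
  \;=\;
  \Bigg( \sum_{k=1}^{n} \eta_k\, e_k\,
         \frac{p_k \cdot \epsilon}{p_k \cdot q} \Bigg)\,
  \mathcal{A}_n(p_1,\dots,p_n) \;+\; \mathcal{O}(\omega^0),
  \qquad
  \eta_k = +1 \ (\text{outgoing}), \ -1 \ (\text{incoming}).
\]
```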

    Diffusion Handles: Enabling 3D Edits for Diffusion Models by Lifting Activations to 3D

    Diffusion Handles is a novel approach to enabling 3D object edits on diffusion-generated images. We accomplish these edits using existing pre-trained diffusion models and 2D image depth estimation, without any fine-tuning or 3D object retrieval. The edited results remain plausible and photorealistic, and preserve object identity. Diffusion Handles addresses a critically missing facet of generative image-based creative design and significantly advances the state of the art in generative image editing. Our key insight is to lift diffusion activations for an object to 3D using a proxy depth, 3D-transform the depth and associated activations, and project them back to image space. The diffusion process applied to the manipulated activations, with identity control, produces plausible edited images showing complex 3D occlusion and lighting effects. We evaluate Diffusion Handles quantitatively on a large synthetic benchmark and qualitatively through a user study, showing our output to be more plausible and better than prior art at both 3D editing and identity control. Project webpage: https://diffusionhandles.github.io/
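
    A minimal, hypothetical sketch of the lift/transform/project step summarized above, assuming a pinhole camera with intrinsics K and a user-supplied rigid edit (R, t); all function names and array shapes are illustrative placeholders, not the authors' actual implementation or API.

```python
# Hypothetical sketch of the idea summarized in the abstract above:
# lift per-object diffusion activations to 3D with a proxy depth map,
# apply a rigid 3D transform, and project back to image space before
# continuing the diffusion process. Names and shapes are placeholders.
import numpy as np

def lift_to_3d(activations: np.ndarray, depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project each pixel (u, v) with proxy depth d into camera space.

    activations: (H, W, C) feature map from the diffusion model
    depth:       (H, W) proxy depth for the selected object
    K:           (3, 3) camera intrinsics (assumed / user-supplied)
    Returns an (H*W, 3 + C) array of 3D points with attached features.
    """
    H, W, C = activations.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)   # (H*W, 3)
    return np.concatenate([pts, activations.reshape(-1, C)], axis=1)

def transform_and_project(points: np.ndarray, R: np.ndarray, t: np.ndarray,
                          K: np.ndarray, shape: tuple) -> np.ndarray:
    """Apply a user-specified rigid edit (R, t) and splat features back to 2D."""
    H, W, C = shape
    xyz, feats = points[:, :3], points[:, 3:]
    xyz = xyz @ R.T + t                                         # 3D edit
    proj = (K @ xyz.T).T
    uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
    out = np.zeros((H, W, C))
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    out[uv[keep, 1], uv[keep, 0]] = feats[keep]                 # nearest-pixel splat
    return out  # manipulated activations fed back into the denoising loop
```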

    Context-Aware QoE Modelling, Measurement, and Prediction in Mobile Computing Systems

    Toward distributed, global, deep learning using IoT devices

    Deep learning (DL) using large-scale, high-quality IoT datasets can be computationally expensive. Utilizing such datasets to produce a problem-solving model within a reasonable time frame requires a scalable distributed training platform/system. We present a novel approach in which a single DL model is trained on the hardware of thousands of mid-sized IoT devices across the world, rather than on a GPU cluster within a data center. We analyze the scalability and convergence of the resulting model and identify three bottlenecks: high computational cost, time-consuming dataset-loading I/O, and slow exchange of model gradients. To highlight research challenges for globally distributed DL training and classification, we consider a case study from the video data processing domain. We also outline the need for a two-step deep compression method that increases the speed and scalability of the DL training process. Our initial experimental validation shows that the proposed method improves the tolerance of the distributed training process to varying Internet bandwidth, latency, and quality-of-service metrics.
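
    A minimal sketch of what a two-step gradient compression scheme of the kind motivated above might look like (top-k sparsification followed by 8-bit quantization of the surviving values); this illustrates the general technique under assumed parameters, not the authors' exact method.

```python
# Hypothetical two-step gradient compression for bandwidth-limited training:
# step 1: keep only the largest-magnitude k% of gradient entries,
# step 2: quantize the kept values to int8 with a shared linear scale.
# Illustrative only; not the method evaluated in the paper.
import numpy as np

def compress(grad: np.ndarray, k_ratio: float = 0.01):
    """Return sparse indices, int8 values, scale, and the original shape."""
    flat = grad.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # step 1: top-k indices
    vals = flat[idx]
    scale = float(np.abs(vals).max()) or 1.0            # step 2: linear quantization
    q = np.round((vals / scale) * 127).astype(np.int8)
    return idx.astype(np.uint32), q, np.float32(scale), grad.shape

def decompress(idx, q, scale, shape):
    """Reconstruct a dense (approximate) gradient on the receiving worker."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = (q.astype(np.float32) / 127.0) * scale
    return flat.reshape(shape)
```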

    Response to correspondence on Reproducibility of CRISPR-Cas9 Methods for Generation of Conditional Mouse Alleles: A Multi-Center Evaluation

    Search for gravitational-lensing signatures in the full third observing run of the LIGO-Virgo network

    Gravitational lensing by massive objects along the line of sight to the source causes distortions of gravitational-wave signals; such distortions may reveal information about fundamental physics, cosmology, and astrophysics. In this work, we have extended the search for lensing signatures to all binary black hole events from the third observing run of the LIGO-Virgo network. We search for repeated signals from strong lensing by 1) performing targeted searches for subthreshold signals, 2) calculating the degree of overlap amongst the intrinsic parameters and sky location of pairs of signals, 3) comparing the similarities of the spectrograms amongst pairs of signals, and 4) performing dual-signal Bayesian analysis that takes into account selection effects and astrophysical knowledge. We also search for distortions to the gravitational waveform caused by 1) frequency-independent phase shifts in strongly lensed images, and 2) frequency-dependent modulation of the amplitude and phase due to point masses. None of these searches yields significant evidence for lensing. Finally, we use the non-detection of gravitational-wave lensing to constrain the lensing rate based on the latest merger-rate estimates and the fraction of dark matter composed of compact objects.
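
    As an illustration of the parameter-overlap idea in item 2) above, a minimal sketch of a Bhattacharyya-style overlap between posterior samples of two candidate lensed images; the choice of parameters, kernel density estimator, and statistic are assumptions made here for illustration and do not reproduce the pipeline's actual implementation.

```python
# Hypothetical overlap statistic between two events' posteriors over shared
# (lensing-invariant) parameters, e.g. chirp mass, mass ratio, sky location.
# High overlap is consistent with the two signals being images of one source.
import numpy as np
from scipy.stats import gaussian_kde

def posterior_overlap(samples_a: np.ndarray, samples_b: np.ndarray) -> float:
    """Monte Carlo estimate of the Bhattacharyya coefficient between posteriors.

    samples_a, samples_b: (N, D) arrays of posterior samples over the same
    D parameters for events A and B.
    """
    kde_a = gaussian_kde(samples_a.T)
    kde_b = gaussian_kde(samples_b.T)
    # \int sqrt(p_A p_B) dtheta = E_{theta ~ p_A}[ sqrt(p_B(theta) / p_A(theta)) ]
    ratio = np.sqrt(kde_b(samples_a.T) / kde_a(samples_a.T))
    return float(np.mean(ratio))
```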