422 research outputs found

    Optimal Fairness Scheduling for Coded Caching in Multi-AP Wireless Local Area Networks

    Coded caching schemes exploit the cumulative cache memory of the users by using simple linear encoders, outperforming uncoded schemes where cache contents are only used locally. Considering multi-AP WLANs and video-on-demand (VoD) applications where users stream videos by sequentially requesting video "chunks", we apply existing coded caching techniques with reduced subpacketization order and obtain a computational method to determine the theoretical throughput region of the users' content delivery rates, calculated as the number of chunks delivered per unit of time per user. We then solve the fairness scheduling problem by maximizing the desired fairness metric over the throughput region. We also provide two heuristic methods with reduced complexity: one maximizes the desired fairness metric over a smaller region than the throughput region, and the other uses a greedy algorithmic approach to associate users with APs in a fair way.
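    As an illustration of the kind of greedy, fairness-oriented user-to-AP association mentioned in the abstract (a minimal sketch only, not the paper's algorithm), the Python fragment below assigns users to APs worst-first and scores each candidate AP by a load-adjusted rate; the rate table, capacity limits and function name are hypothetical assumptions.

        # Hypothetical sketch of a greedy, fairness-oriented user-to-AP association.
        # rates[u][a] is an assumed achievable chunk-delivery rate for user u at AP a;
        # capacity[a] caps how many users AP a may serve. Not the paper's algorithm.
        def greedy_fair_association(rates, capacity):
            n_users, n_aps = len(rates), len(rates[0])
            load = [0] * n_aps
            assignment = [None] * n_users
            # Serve users in order of their best achievable rate (worst-placed first),
            # so poorly-placed users get first pick of the remaining AP capacity.
            for u in sorted(range(n_users), key=lambda v: max(rates[v])):
                candidates = [a for a in range(n_aps) if load[a] < capacity[a]]
                if not candidates:
                    break  # no remaining capacity at any AP
                # Prefer the AP whose load-adjusted rate is best for this user.
                best = max(candidates, key=lambda a: rates[u][a] / (load[a] + 1))
                assignment[u] = best
                load[best] += 1
            return assignment

        # Toy example: 3 users, 2 APs, rates in chunks per unit time.
        rates = [[2.0, 1.0], [1.5, 1.8], [0.5, 0.9]]
        print(greedy_fair_association(rates, capacity=[2, 2]))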

    Information Search on the Web: Understanding the Impact of Response Time Delays with Information Foraging Theory

    Web delays are a persistent and highly publicized problem. Long delays have been shown to reduce information search, but less is known about the impact of more modest “acceptable” delays, i.e. delays that do not substantially reduce user satisfaction. Prior research suggests that as the time and effort required to complete a task increase, decision-makers tend to reduce information search at the expense of decision quality. In this study, the effects of an acceptable time delay (seven seconds) on information search behavior were examined. Results showed that the increased time and effort caused by acceptable delays provoked increased information search.

    Hardware acceleration of photon mapping

    PhD thesis. The quest for realism in computer-generated graphics has yielded a range of algorithmic techniques, the most advanced of which are capable of rendering images at close to photorealistic quality. Due to the realism available, it is now commonplace that computer graphics are used in the creation of movie sequences, architectural renderings, medical imagery and product visualisations. This work concentrates on the photon mapping algorithm [1, 2], a physically based global illumination rendering algorithm. Photon mapping excels in producing highly realistic, physically accurate images. A drawback to photon mapping, however, is its rendering times, which can be significantly longer than those of other, albeit less realistic, algorithms. Not surprisingly, this increase in execution time is associated with a high computational cost. This computation is usually performed using the general-purpose central processing unit (CPU) of a personal computer (PC), with the algorithm implemented as a software routine.

    Other options available for processing these algorithms include desktop PC graphics processing units (GPUs) and custom-designed acceleration hardware devices. GPUs tend to be efficient when dealing with less realistic rendering solutions such as rasterisation; however, with their recent drive towards increased programmability they can also be used to process more realistic algorithms. A drawback to the use of GPUs is that these algorithms often have to be reworked to make optimal use of the limited resources available. There are very few custom hardware devices available for acceleration of the photon mapping algorithm. Ray-tracing is the predecessor to photon mapping, and although not capable of producing the same physical accuracy and therefore realism, there are similarities between the algorithms. There have been several hardware prototypes, and at least one commercial offering, created with the goal of accelerating ray-trace rendering [3]. However, properties making many of these proposals suitable for the acceleration of ray-tracing are not shared by photon mapping. There are even fewer proposals for acceleration of the additional functions found only in photon mapping. All of these approaches to algorithm acceleration offer limited scalability: GPUs are inherently difficult to scale, while many of the custom hardware devices available thus far make use of large processing elements and complex acceleration data structures.

    In this work we make use of three novel approaches in the design of highly scalable specialised hardware structures for the acceleration of the photon mapping algorithm. Increased scalability is gained through:
    • The use of a brute-force approach in place of the commonly used smart approach, thus eliminating much of the data pre-processing, the complex data structures and the large processing units often required.
    • The use of Logarithmic Number System (LNS) arithmetic computation, which facilitates a reduction in processing area requirement.
    • A novel redesign of the photon inclusion test, used within the photon search method of the photon mapping algorithm. This allows an intelligent memory structure to be used for the search.
    The design uses two hardware structures, both of which accelerate one core rendering function. Renderings produced using field programmable gate array (FPGA) based prototypes are presented, along with details of 90 nm synthesised versions of the designs, which show that close to an order-of-magnitude speedup over a software implementation is possible. Due to the scalable nature of the design, it is likely that any advantage can be maintained in the face of improving processor speeds. Significantly, due to the brute-force approach adopted, it is possible to eliminate an often-used software acceleration method. This means that the device can interface almost directly to a frontend modelling package, minimising much of the pre-processing required by most other proposals.
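    As a software analogue of the brute-force photon search favoured here over the usual kd-tree ("smart") traversal, the short Python sketch below performs an exhaustive k-nearest-photon scan with a radius-based inclusion test; the photon layout and names are assumptions for illustration, not taken from the thesis.

        import heapq

        # Brute-force k-nearest-photon search around a query point.
        # Photons are assumed to be (x, y, z, power) tuples; the hardware design
        # replaces the usual kd-tree traversal with this kind of exhaustive scan.
        def nearest_photons(photons, query, k, max_radius):
            heap = []            # max-heap via negated squared distances, capped at k
            r2 = max_radius * max_radius
            qx, qy, qz = query
            for p in photons:
                dx, dy, dz = p[0] - qx, p[1] - qy, p[2] - qz
                d2 = dx * dx + dy * dy + dz * dz
                if d2 > r2:
                    continue     # photon inclusion test: outside the search sphere
                if len(heap) < k:
                    heapq.heappush(heap, (-d2, p))
                elif -heap[0][0] > d2:
                    heapq.heapreplace(heap, (-d2, p))  # evict the current farthest
            return [p for _, p in heap]

    Because every photon is visited unconditionally, the scan needs no pre-built acceleration structure, which mirrors the elimination of pre-processing described in the abstract.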