    Advanced Radio Resource Management for Multi Antenna Packet Radio Systems

    In this paper, we propose fairness-oriented packet scheduling (PS) schemes with a power-efficient control mechanism for future packet radio systems. In general, radio resource management functionality plays an important role in new OFDMA-based networks. The division of network resources among users is controlled by the packet scheduling functionality, which aims to maximize cell coverage and capacity while satisfying certain quality-of-service requirements. Moreover, multi-antenna transmit-receive schemes provide additional flexibility to the packet scheduler. To mitigate inter-cell and co-channel interference in OFDMA cellular networks, soft frequency reuse with different power mask patterns is used. Stemming from earlier enhanced proportional fair scheduler studies for single-input multiple-output (SIMO) and multiple-input multiple-output (MIMO) systems, we extend the development of efficient packet scheduling algorithms by adding transmit power considerations to the overall priority metric calculations and scheduling decisions. Furthermore, we evaluate the proposed scheduling schemes by simulating a practical orthogonal frequency division multiple access (OFDMA) based packet radio system in terms of throughput, coverage and fairness distribution among users. As a concrete example, under a reduced overall transmit power constraint and an unequal power distribution across sub-bands, we demonstrate that the proposed power-aware multi-user scheduling schemes yield significant coverage and fairness improvements on the order of 70% and 20%, respectively, at the expense of an average throughput loss of only 15%.
    Comment: 14 Pages, IJWM
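
    The core of the proposed schemes is a proportional fair (PF) priority metric extended with a transmit-power term. The abstract does not reproduce the exact metric, so the following Python sketch is only illustrative: it shows one plausible way to fold a sub-band power term into the classic PF ratio. The names User, power_weight and alpha are assumptions for this example, not the paper's notation.

    # Hedged sketch: a power-aware proportional fair (PF) scheduler for one
    # OFDMA sub-band. The power-weighting form is illustrative, not the
    # paper's exact priority metric.
    from dataclasses import dataclass

    @dataclass
    class User:
        inst_rate: float   # achievable rate on this sub-band [bit/s]
        avg_rate: float    # exponentially averaged past throughput [bit/s]

    def pf_priority(u: User, subband_power: float, power_weight: float = 1.0) -> float:
        # Classic PF ratio (instantaneous over average rate), discounted by
        # the sub-band transmit power so cheaper allocations rank higher.
        pf = u.inst_rate / max(u.avg_rate, 1e-9)
        return pf / (subband_power ** power_weight)

    def schedule(users: list[User], subband_power: float) -> int:
        # Serve the user with the highest power-aware PF metric.
        return max(range(len(users)), key=lambda i: pf_priority(users[i], subband_power))

    def update_average(u: User, served: bool, alpha: float = 0.01) -> None:
        # Standard exponential moving-average throughput update.
        served_rate = u.inst_rate if served else 0.0
        u.avg_rate = (1 - alpha) * u.avg_rate + alpha * served_rate

    Setting power_weight to 0 recovers plain proportional fair scheduling, which makes the power-awareness easy to toggle in a simulation.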

    Advanced solutions for quality-oriented multimedia broadcasting

    Multimedia content is increasingly being delivered via different types of networks to viewers in a variety of locations and contexts using a variety of devices. The ubiquitous nature of multimedia services comes at a cost, however. The successful delivery of multimedia services requires overcoming numerous technological challenges, many of which have a direct effect on the quality of the multimedia experience. For example, due to dynamically changing requirements and networking conditions, the delivery of multimedia content has traditionally adopted a best-effort approach. However, this approach has often negatively affected the end-user perceived quality of multimedia-based services. Yet the quality of multimedia content is a vital issue for the continued acceptance and proliferation of these services. Indeed, end-users are becoming increasingly quality-aware in their expectations of the multimedia experience and demand an ever-widening spectrum of rich multimedia-based services. As a consequence, there is a continuous and extensive research effort, by both industry and academia, to find solutions for improving the quality of multimedia content delivered to users; in addition, international standards bodies, such as the International Telecommunication Union (ITU), are renewing their efforts on the standardization of multimedia technologies. Research has attempted to improve the quality of rich media content delivered over various network types in very different directions. It is in this context that this special issue on broadcast multimedia quality of the IEEE Transactions on Broadcasting illustrates some of these avenues and presents some of the most significant research results obtained by various teams of researchers from many countries. This special issue provides an example, albeit inevitably limited, of the richness and breadth of current research on multimedia broadcasting services. The research issues addressed in this special issue include, among others, factors that influence user-perceived quality, encoding-related quality assessment and control, transmission- and coverage-based solutions, and objective quality measurements.

    Harvey: A Greybox Fuzzer for Smart Contracts

    We present Harvey, an industrial greybox fuzzer for smart contracts, which are programs managing accounts on a blockchain. Greybox fuzzing is a lightweight test-generation approach that effectively detects bugs and security vulnerabilities. However, greybox fuzzers randomly mutate program inputs to exercise new paths; this makes it challenging to cover code that is guarded by narrow checks, which are satisfied by no more than a few input values. Moreover, most real-world smart contracts transition through many different states during their lifetime, e.g., for every bid in an auction. To explore these states and thereby detect deep vulnerabilities, a greybox fuzzer would need to generate sequences of contract transactions, e.g., by creating bids from multiple users, while at the same time keeping the search space and test suite tractable. In this experience paper, we explain how Harvey alleviates both challenges with two key fuzzing techniques and distill the main lessons learned. First, Harvey extends standard greybox fuzzing with a method for predicting new inputs that are more likely to cover new paths or reveal vulnerabilities in smart contracts. Second, it fuzzes transaction sequences in a targeted and demand-driven way. We have evaluated our approach on 27 real-world contracts. Our experiments show that the underlying techniques significantly increase Harvey's effectiveness in achieving high coverage and detecting vulnerabilities, in most cases orders-of-magnitude faster; they also reveal new insights about contract code.
    Comment: arXiv admin note: substantial text overlap with arXiv:1807.0787
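
    The abstract describes coverage-guided (greybox) fuzzing plus a prediction step for hard-to-hit inputs. The Python sketch below shows the generic greybox loop with a hook where such a predictor could plug in; Harvey's actual prediction method and its transaction-sequence fuzzing are not reproduced here, and predict_input is a hypothetical stand-in.

    # Hedged sketch of a coverage-guided (greybox) fuzzing loop.
    # execute(data) is assumed to run the target and return the set of
    # covered branch ids; predict_input is a hypothetical heuristic that
    # proposes inputs likely to satisfy a narrow check.
    import random

    def mutate(data: bytes) -> bytes:
        # Minimal mutation operator: flip one random bit of one byte.
        if not data:
            return bytes([random.randrange(256)])
        i = random.randrange(len(data))
        return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]

    def fuzz(execute, seeds, predict_input=None, iterations=10_000):
        corpus = list(seeds)
        global_coverage = set()
        for _ in range(iterations):
            parent = random.choice(corpus)
            if predict_input is not None and random.random() < 0.5:
                child = predict_input(parent, global_coverage)
            else:
                child = mutate(parent)
            covered = execute(child)
            if covered - global_coverage:      # input reached a new path
                global_coverage |= covered
                corpus.append(child)           # keep coverage-increasing inputs
        return corpus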

    Searching for test data with feature diversity

    There is an implicit assumption in software testing that more diverse and varied test data is needed for effective testing and to achieve different types and levels of coverage. Generic approaches based on information theory to measure, and thus implicitly to create, diverse data have also been proposed. However, if the tester is able to identify features of the test data that are important for the particular domain or context in which the testing is being performed, generic diversity measures such as these may be neither sufficient nor efficient for creating test inputs that show diversity in terms of those features. Here we investigate different approaches to find data that are diverse according to a specific set of features, such as length, depth of recursion, etc. Even though these features are less general than measures based on information theory, their use may give a tester more direct control over the type of diversity present in the test data. Our experiments are carried out in the context of a general test data generation framework that can generate both numerical and highly structured data. We compare random sampling for feature diversity to different search-based approaches and find hill climbing search to be efficient. The experiments highlight many trade-offs that need to be taken into account when searching for diversity. We argue that recurrent test data generation motivates building statistical models that can then help to achieve feature diversity more quickly.
    Comment: This version was submitted on April 14th 201
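
    As a concrete illustration of the search-based approach, the hedged Python sketch below hill-climbs toward a set of string inputs that is diverse in a two-dimensional feature space (length and bracket-nesting depth). The feature choice and the objective, maximizing the minimum pairwise distance between feature vectors, are assumptions for this example, not the paper's exact setup.

    # Hedged sketch: hill climbing for feature-diverse test data.
    # Assumes a population of at least two string inputs.
    import math, random, string

    def features(s: str) -> tuple[float, float]:
        # Feature vector: (length, maximum bracket-nesting depth).
        depth = cur = 0
        for ch in s:
            cur += (ch == "(") - (ch == ")")
            depth = max(depth, cur)
        return (float(len(s)), float(depth))

    def diversity(inputs: list[str]) -> float:
        # Minimum pairwise distance in feature space; larger is more diverse.
        return min(math.dist(features(a), features(b))
                   for i, a in enumerate(inputs) for b in inputs[i + 1:])

    def mutate(s: str) -> str:
        op = random.choice(["add", "del", "wrap"])
        if op == "add" or not s:
            i = random.randrange(len(s) + 1)
            return s[:i] + random.choice(string.ascii_lowercase + "()") + s[i:]
        if op == "del":
            i = random.randrange(len(s))
            return s[:i] + s[i + 1:]
        return "(" + s + ")"   # wrap: increases nesting depth by one

    def hill_climb(pop: list[str], steps: int = 5_000) -> list[str]:
        best = diversity(pop)
        for _ in range(steps):
            i = random.randrange(len(pop))
            cand = pop.copy()
            cand[i] = mutate(pop[i])
            score = diversity(cand)
            if score > best:               # accept only strict improvements
                pop, best = cand, score
        return pop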

    Test Set Diameter: Quantifying the Diversity of Sets of Test Cases

    A common and natural intuition among software testers is that test cases need to differ if a software system is to be tested properly and its quality ensured. Consequently, much research has gone into formulating distance measures for how test cases, their inputs and/or their outputs differ. However, common to these proposals is that they are data-type specific and/or calculate the diversity only between pairs of test inputs, traces or outputs. We propose a new metric to measure the diversity of sets of tests: the test set diameter (TSDm). It extends our earlier, pairwise test diversity metrics based on recent advances in information theory regarding the calculation of the normalized compression distance (NCD) for multisets. An advantage is that TSDm can be applied regardless of data type and on any test-related information, not only the test inputs. A downside is the increased computational time compared to competing approaches. Our experiments on four different systems show that the test set diameter can help select test sets with higher structural and fault coverage than random selection, even when applied only to test inputs. This can enable early test design and selection, prior to even having a software system to test, and complement other types of test automation and analysis. We argue that this quantification of test set diversity creates a number of opportunities to better understand software quality and provides practical ways to increase it.
    Comment: In submission
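
    For readers unfamiliar with the multiset NCD, the hedged Python sketch below shows one way to approximate it with an off-the-shelf compressor, using the multiset form NCD(X) = (C(X) - min_x C(x)) / max_x C(X \ {x}) as recalled here. zlib and plain byte concatenation are practical approximations only; a real implementation must also consider concatenation order and compressor block limits, and this is not the paper's reference implementation.

    # Hedged sketch: approximating a multiset NCD with zlib as the compressor.
    import zlib

    def C(data: bytes) -> int:
        # Compressed length as a computable stand-in for Kolmogorov complexity.
        return len(zlib.compress(data, 9))

    def ncd_multiset(tests: list[bytes]) -> float:
        # NCD(X) = (C(X) - min_x C(x)) / max_x C(X \ {x})
        whole = C(b"".join(tests))
        min_single = min(C(t) for t in tests)
        max_leave_one_out = max(C(b"".join(tests[:i] + tests[i + 1:]))
                                for i in range(len(tests)))
        return (whole - min_single) / max_leave_one_out

    # A diverse set should score higher than a set of near-duplicates.
    print(ncd_multiset([b"abcdefgh", b"12345678", b"!@#$%^&*"]))
    print(ncd_multiset([b"aaaaaaaa", b"aaaaaaab", b"aaaaaaaa"]))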