283,293 research outputs found

    Integration of EFQM excellence model and information systems criterion

    Higher Education Institutions (HEIs) have become key institutions in the knowledge-based economy. Over the past decade, the Malaysian government has placed greater emphasis on improved efficiency and productivity in HEIs as an engine for promoting quality human capital for a knowledge-based economy. Importantly, the government raised the share of research and development in GDP from 1.5% in the Eighth Malaysia Plan (2000-2005) to 4.9% in the Ninth Malaysia Plan (2006-2010) for HEIs. As a result, there is a need to monitor the quality performance of HEIs to see whether the government's objectives are being met. The European Foundation for Quality Management (EFQM) excellence model was introduced at the beginning of 1992 as the framework for assessing organizations for the European Quality Award, and it has been claimed to be the most widely used of the national excellence award models in European countries. However, it does not include Information Systems (IS) as a distinct criterion. The purpose of this paper is to evaluate the interrelationships between the EFQM excellence model and the information systems criterion of the Malcolm Baldrige National Quality Award (MBNQA) model in Malaysian HEIs. The paper identifies ten criteria in the research model: leadership; policy and strategy; people; partnership and resources; information systems; processes; people results; student results; society results; and key performance results. We obtained 118 valid responses from the persons in charge of quality management in Malaysian HEIs. Structural equation modelling (SEM) was used to analyse the data, and the results indicate that the relationships in the research model follow Information Systems-Quality Management theory and TQM theory.
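
    As a rough sketch of the analysis step described above, such a structural model could be specified and fitted in Python with the semopy package; the package choice, the column names, and the particular paths below are illustrative assumptions, not the authors' actual model specification.

```python
# Minimal SEM sketch (illustrative): a few hypothetical paths among the ten criteria.
# Assumes a pandas DataFrame `survey_df` with one aggregated score column per criterion;
# the column names and paths are made up for this example.
import pandas as pd
from semopy import Model

MODEL_DESC = """
policy_and_strategy ~ leadership
information_systems ~ leadership
processes ~ policy_and_strategy + information_systems + people + partnership_and_resources
key_performance_results ~ processes
"""

def fit_sem(survey_df: pd.DataFrame):
    model = Model(MODEL_DESC)
    model.fit(survey_df)      # estimate path coefficients from the survey responses
    return model.inspect()    # table of estimates, standard errors and p-values
```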

    Popular and/or Prestigious? Measures of Scholarly Esteem

    Citation analysis does not generally take the quality of citations into account: all citations are weighted equally irrespective of source. However, a scholar may be highly cited but not highly regarded: popularity and prestige are not identical measures of esteem. In this study we define popularity as the number of times an author is cited and prestige as the number of times an author is cited by highly cited papers. Information Retrieval (IR) is the test field. We compare the 40 leading researchers in terms of their popularity and prestige over time. Some authors are ranked high on prestige but not on popularity, while others are ranked high on popularity but not on prestige. We also relate measures of popularity and prestige to date of Ph.D. award, number of key publications, organizational affiliation, receipt of prizes/honors, and gender. Comment: 26 pages, 5 figures.
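
    A minimal sketch of the two measures as defined above, computed from a paper-level citation graph; the data structures, the 90th-percentile threshold for "highly cited", and the single-author simplification are assumptions made only for illustration.

```python
# Sketch: popularity vs. prestige from a paper-level citation graph.
# `citations` maps each paper id to the set of papers it cites;
# `author_of` maps each paper id to its (single, for simplicity) author.
from collections import defaultdict

def esteem_measures(citations, author_of, highly_cited_quantile=0.9):
    # Invert the citation graph: who cites each paper?
    cited_by = defaultdict(set)
    for src, targets in citations.items():
        for tgt in targets:
            cited_by[tgt].add(src)

    # Papers above the chosen citation-count quantile are treated as "highly cited".
    counts = sorted(len(srcs) for srcs in cited_by.values()) or [0]
    threshold = counts[int(highly_cited_quantile * (len(counts) - 1))]
    highly_cited = {p for p, srcs in cited_by.items() if len(srcs) >= threshold}

    popularity = defaultdict(int)   # all citations to an author's papers
    prestige = defaultdict(int)     # only citations coming from highly cited papers
    for paper, srcs in cited_by.items():
        author = author_of.get(paper)
        if author is None:
            continue
        popularity[author] += len(srcs)
        prestige[author] += sum(1 for s in srcs if s in highly_cited)
    return popularity, prestige
```

    Ranking authors separately by the two dictionaries and comparing the orderings reproduces the kind of divergence between popularity and prestige that the study examines.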

    Guest Editorial: Nonlinear Optimization of Communication Systems

    Linear programming and other classical optimization techniques have found important applications in communication systems for many decades. Recently, there has been a surge in research activities that utilize the latest developments in nonlinear optimization to tackle a much wider scope of work in the analysis and design of communication systems. These activities involve every “layer” of the protocol stack and the principles of layered network architecture itself, and have made intellectual and practical impacts significantly beyond the established frameworks of optimization of communication systems in the early 1990s. These recent results are driven by new demands in the areas of communications and networking, as well as new tools emerging from optimization theory. Such tools include the powerful theories and highly efficient computational algorithms for nonlinear convex optimization, together with global solution methods and relaxation techniques for nonconvex optimization.
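
    As a concrete instance of the kind of nonlinear convex optimization the editorial surveys (not an example taken from it), consider power allocation across parallel Gaussian channels, i.e. water-filling, written as a convex program; the channel gains and power budget below are made up.

```python
# Water-filling power allocation posed as a convex program (illustrative only).
import cvxpy as cp
import numpy as np

gains = np.array([0.9, 2.1, 0.4, 1.3])   # hypothetical channel gain-to-noise ratios
P_total = 2.0                            # total transmit power budget

p = cp.Variable(len(gains), nonneg=True)
rate = cp.sum(cp.log(1 + cp.multiply(gains, p)))   # concave sum rate (in nats)
problem = cp.Problem(cp.Maximize(rate), [cp.sum(p) <= P_total])
problem.solve()

print("power allocation:", p.value)
print("achievable sum rate:", rate.value)
```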

    Uncovering Randomness and Success in Society

    An understanding of how individuals shape and impact the evolution of society is vastly limited by the unavailability of large-scale, reliable datasets that simultaneously capture information on individual movements and social interactions. We believe that the popular Indian film industry, 'Bollywood', can provide a social network apt for such a study. Bollywood provides massive amounts of real, unbiased data spanning more than 100 years, and hence this network has been used as a model for the present paper. Nodes that maintain a moderate degree or cooperate widely with other nodes of the network tend to be more fit (measured as the success of the node in the industry) than other nodes. The analysis carried out in the current work, using a conjoined framework of complex network theory and random matrix theory, aims to quantify the elements that determine the fitness of an individual node and the factors that contribute to the robustness of the network. The authors believe that the method of study used here can be extended to various other industries and organizations. Comment: 39 pages, 12 figures, 14 tables.
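
    A small illustrative sketch of the two ingredients named above, assuming a co-appearance edge list for the actors; the use of degree as a proxy for cooperation breadth and the crude unfolding of the spectrum are simplifications for illustration, not the authors' exact procedure.

```python
# Illustrative degree and spectral-statistics computation for a co-appearance network.
import networkx as nx
import numpy as np

def network_summary(edges):
    # edges = [(actor_a, actor_b), ...], one edge per pair that co-appeared in a film
    G = nx.Graph()
    G.add_edges_from(edges)
    degrees = dict(G.degree())           # cooperation breadth per node

    # Random-matrix-style diagnostic: nearest-neighbour spacings of the adjacency
    # spectrum (compared against Wigner-type statistics in this line of work).
    A = nx.to_numpy_array(G)
    eigvals = np.sort(np.linalg.eigvalsh(A))
    spacings = np.diff(eigvals)
    spacings = spacings / spacings.mean()   # crudely rescaled to unit mean spacing
    return degrees, spacings
```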

    Development of RMJ: A mirror of the development of the profession and discipline of records management

    The purpose of this paper is to examine critically the history of Records Management Journal on its 20th anniversary; it aims to review and analyse its evolution and its contribution in the context of the development of the profession and the discipline of records management. The paper seeks to provide the context and justification for the selection of eight articles previously published in the journal to be reprinted in this issue.

    Channel Capacity under Sub-Nyquist Nonuniform Sampling

    This paper investigates the effect of sub-Nyquist sampling on the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods, which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with uniform sampling at each branch, possibly at different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that, for a large class of channels, employing irregular nonuniform sampling sets, while typically complicated to realize, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain, in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes. Comment: accepted to IEEE Transactions on Information Theory.
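
    The capacity characterization above can be illustrated numerically: keep the highest-SNR frequency bins whose total measure equals the sampling rate and integrate log2(1 + SNR) over that set. The frequency grid and SNR profile below are made up for illustration and are not taken from the paper.

```python
# Sketch of the characterization: select the best spectral set of measure f_s,
# then integrate log2(1 + SNR) over it.
import numpy as np

def sampled_capacity(freqs, snr, f_s):
    """freqs: uniform frequency grid (Hz); snr: SNR(f) on that grid; f_s: sampling rate (Hz)."""
    df = freqs[1] - freqs[0]
    order = np.argsort(snr)[::-1]                 # highest-SNR bins first
    n_bins = int(round(f_s / df))                 # spectral measure allowed by f_s
    selected = order[:n_bins]
    return df * np.sum(np.log2(1.0 + snr[selected]))   # bits per second

freqs = np.linspace(0, 1e6, 1000, endpoint=False)
snr = 10 * np.exp(-freqs / 3e5)                   # hypothetical decaying SNR profile
print(sampled_capacity(freqs, snr, f_s=2e5))
```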

    Multilevel Coded Modulation for Unequal Error Protection and Multistage Decoding—Part I: Symmetric Constellations

    In this paper, theoretical upper bounds and computer simulation results on the error performance of multilevel block coded modulations for unequal error protection (UEP) and multistage decoding are presented. It is shown that nonstandard signal set partitionings and multistage decoding provide excellent UEP capabilities beyond those achievable with conventional coded modulation. The coding scheme is designed in such a way that the most important information bits have a lower error rate than the other information bits. The large effective error coefficients normally associated with standard mapping by set partitioning are reduced by considering nonstandard partitionings of the underlying signal set. The bits-to-signal mappings induced by these partitionings allow the use of soft-decision decoding of binary block codes. Moreover, parallel operation of some of the stage decoders is possible, to achieve high-data-rate transmission, so that there is no error propagation between these decoders. Hybrid partitionings are also considered that trade off increased intraset distances in the last partition levels against larger effective error coefficients in the middle partition levels. The error performance of specific examples of multilevel codes over 8-PSK and 64-QAM signal sets is simulated and compared with theoretical upper bounds on the error performance.
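
    For context on the set-partitioning idea, the short sketch below computes the minimum intra-subset Euclidean distances of 8-PSK under the standard (Ungerboeck-style) partitioning that the paper's nonstandard partitionings depart from; it illustrates the baseline, not the authors' scheme.

```python
# Minimum intra-subset Euclidean distances of 8-PSK under standard set partitioning.
# Level 0: full constellation; level 1: two QPSK subsets; level 2: four antipodal pairs.
import numpy as np
from itertools import combinations

points = np.exp(2j * np.pi * np.arange(8) / 8)    # unit-energy 8-PSK points

def min_intra_distance(level):
    best = np.inf
    for residue in range(2 ** level):
        subset = [points[k] for k in range(8) if k % (2 ** level) == residue]
        if len(subset) < 2:
            continue
        best = min(best, min(abs(a - b) for a, b in combinations(subset, 2)))
    return best

for level in range(3):
    print(level, round(min_intra_distance(level), 3))   # approx. 0.765, 1.414, 2.0
```

    The growth of these distances across levels is what the per-level component codes are matched to; nonstandard partitionings instead reshape the trade-off to protect the most important bits more strongly.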

    On Known-Plaintext Attacks to a Compressed Sensing-based Encryption: A Quantitative Analysis

    Despite the linearity of its encoding, compressed sensing may be used to provide a limited form of data protection when random encoding matrices are used to produce sets of low-dimensional measurements (ciphertexts). In this paper we quantify by theoretical means the resistance of the least complex form of this kind of encoding against known-plaintext attacks. For both standard compressed sensing with antipodal random matrices and recent multiclass encryption schemes based on it, we show that the number of candidate encoding matrices that match a typical plaintext-ciphertext pair is so large that the search for the true encoding matrix is inconclusive. These results on the practical ineffectiveness of known-plaintext attacks underline the fact that even the closely related problem of signal recovery under encoding-matrix uncertainty is doomed to fail. Practical attacks are then exemplified by applying compressed sensing with antipodal random matrices as a multiclass encryption scheme to signals such as images and electrocardiographic tracks, showing that the information on the true encoding matrix extracted from a plaintext-ciphertext pair leads to no significant increase in signal recovery quality. This theoretical and empirical evidence clarifies that, although not perfectly secure, both standard compressed sensing and the multiclass encryption schemes based on it offer a noteworthy level of security against known-plaintext attacks, increasing their appeal as negligible-cost encryption methods for resource-limited sensing applications. Comment: accepted for publication in IEEE Transactions on Information Forensics and Security; article in press.
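
    A toy numerical illustration of the counting argument behind this resistance: for a known plaintext, many antipodal rows produce essentially the same measurement as the true row, so a row-by-row search is inconclusive. The dimensions and tolerance below are deliberately small and hypothetical so that brute force stays feasible; they are not the regimes analysed in the paper.

```python
# Toy count of antipodal (+/-1) rows consistent with one plaintext/ciphertext entry.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 16                                    # toy signal length
x = rng.standard_normal(n)                # known plaintext
a_true = rng.choice([-1.0, 1.0], size=n)  # one row of the secret encoding matrix
y = a_true @ x                            # corresponding ciphertext entry
tol = 0.05 * np.linalg.norm(x)            # stands in for the quantizer resolution

candidates = sum(
    1 for signs in product([-1.0, 1.0], repeat=n)
    if abs(np.dot(signs, x) - y) <= tol
)
print(candidates, "of", 2 ** n, "rows are consistent with this plaintext/ciphertext pair")
```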