38 research outputs found

    The Effects of the Quantification of Faculty Productivity: Perspectives from the Design Science Research Community

    In recent years, efforts to assess faculty research productivity have focused more on the measurable quantification of academic outcomes. For benchmarking academic performance, researchers have developed various ranking and rating lists that define so-called high-quality research. While many scholars in information systems (IS) consider lists such as the Senior Scholars' Basket (SSB) to provide good guidance, others who belong to less-mainstream groups in the IS discipline may perceive these lists as constraining. We therefore analyzed the perceived impact of the SSB on IS academics working in design science research (DSR) and, in particular, how it has affected their research behavior. We found that the DSR community felt a strong normative influence from the SSB. We conducted a content analysis of the SSB and found evidence that some of its journals have come to accept DSR more readily. We note the emergence of papers in the SSB that outline the role of theory in DSR and describe DSR methodologies, which indicates that the DSR community has rallied both to explain to the broader IS community what to expect from a DSR manuscript and to guide the DSR community on how to organize papers for publication in the SSB.

    Exploring the workload balance effects of including continuity-based factors in nurse-patient assignments

    Workload balance in nurse-patient assignments is important for ensuring quality in patient care. Unbalanced workloads can lead to high levels of nursing stress, medical errors, lower-quality outcomes, and higher costs. Prior studies have proposed assignment strategies based on patient acuity, location, and the characteristics of specialized units. These methods do not address the part of the workload associated with continuity in care coordination, or the potential benefits of continuity-based assignments. We present the results of a pilot simulation study comparing an acuity-oriented method to a continuity-based approach, using acuity as the measure of workload. Our results suggest that a purely continuity-based approach can result in skewed workloads when measured by patient acuity. In future work, we plan to consider hybrid methods, which may be able to provide the benefits of both continuity- and acuity-based methods.
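
    A minimal sketch (hypothetical data and simplified rules, not the study's actual simulation) of the two strategies being compared: a continuity-based rule that keeps each patient with their previous nurse, and an acuity-oriented rule that balances total acuity, with imbalance measured as the spread of per-nurse acuity totals.

```python
# Hypothetical illustration of the two assignment strategies compared in the
# pilot study; patient acuities and previous-nurse links are made-up data.

def continuity_assign(patients, nurses):
    """Keep each patient with their previous nurse whenever possible."""
    load = {n: [] for n in nurses}
    for p in patients:
        nurse = p["prev_nurse"] if p["prev_nurse"] in load else nurses[0]
        load[nurse].append(p)
    return load

def acuity_assign(patients, nurses):
    """Greedily give each patient to the nurse with the lowest total acuity."""
    load = {n: [] for n in nurses}
    for p in sorted(patients, key=lambda q: -q["acuity"]):
        nurse = min(load, key=lambda n: sum(q["acuity"] for q in load[n]))
        load[nurse].append(p)
    return load

def acuity_imbalance(load):
    """Workload imbalance: max minus min total acuity across nurses."""
    totals = [sum(p["acuity"] for p in ps) for ps in load.values()]
    return max(totals) - min(totals)

patients = [{"id": i, "acuity": a, "prev_nurse": n}
            for i, (a, n) in enumerate([(3, "A"), (4, "A"), (2, "B"), (5, "A"), (1, "C")])]
nurses = ["A", "B", "C"]
print("continuity imbalance:", acuity_imbalance(continuity_assign(patients, nurses)))  # 11
print("acuity-based imbalance:", acuity_imbalance(acuity_assign(patients, nurses)))    # 0
```

    On this toy example the continuity rule concentrates acuity on one nurse, illustrating the kind of skew the pilot study reports.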

    Data Collection Interfaces in Online Communities: The Impact of Data Structuredness and Nature of Shared Content on Perceived Information Quality

    The growth of online communities has resulted in an increased availability of user-generated content (UGC). Given the varied sources of UGC, the quality of the information it provides is a growing challenge. While many aspects of UGC have been studied, the role of the data structures used to gather UGC, and of the nature of the content to be shared, has yet to receive attention. UGC is created on online platforms with varying degrees of data structure, ranging from unstructured to highly structured formats. These platforms are often designed without regard to how the structure of the input format affects the quality of the outcome. In this study, we investigate the impact of the degree of data structure on the perceived quality of information from the novel perspective of data creators. We also propose and evaluate a novel moderating effect due to the nature of the content online users wish to share. The preliminary findings support our claims about the importance of these factors for information quality. We conclude the paper with directions for future research and expected contributions to theory and practice.
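
    As an illustration of what "degrees of data structure" can mean for a collection interface, the hypothetical sketch below captures the same contribution as free text, as loose key-value tags, and as a fixed, validated schema; the domain, field names, and types are invented for the example.

```python
# Hypothetical examples of the same user contribution captured at three degrees
# of data structure; the domain, field names, and types are invented.

unstructured = "Great trail, about 5 miles, muddy after rain, dogs allowed."

semi_structured = {              # loose tags chosen freely by the contributor
    "summary": "Great trail",
    "length": "about 5 miles",
    "notes": "muddy after rain, dogs allowed",
}

structured = {                   # fixed schema enforced by the input form
    "rating": 5,                 # integer, 1-5
    "length_miles": 5.0,         # numeric, miles
    "surface_condition": "muddy",
    "dogs_allowed": True,
}

REQUIRED_FIELDS = {"rating": int, "length_miles": float,
                   "surface_condition": str, "dogs_allowed": bool}

def is_complete(record):
    """A highly structured interface can check completeness and types up front;
    free-text input defers that burden to whoever later consumes the content."""
    return all(isinstance(record.get(k), t) for k, t in REQUIRED_FIELDS.items())

print(is_complete(structured))       # True
print(is_complete(semi_structured))  # False
```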

    Data access and interaction management in mobile and distributed environments

    Ph.D. Shamkant B. Navathe

    Cost-based decision-making in middleware virtualization environments

    Middleware virtualization refers to the process of running applications on a set of resources (e.g., databases, application servers, other transactional service resources) such that the resource-to-application binding can be changed dynamically on the basis of applications' resource requirements. Although virtualization is a rapidly growing area, little formal academic or industrial research provides guidelines for cost-optimal allocation strategies. In this work, we study this problem formally. We identify the problem and describe why existing schemes cannot be applied directly. We then formulate a mathematical model describing the business costs of virtualization. We develop runtime models of virtualization decision-making paradigms. We describe the cost implications of various runtime models and consider the cost effects of different managerial decisions and business factors, such as budget changes and changes in demand. Our results yield useful insights for managers in making virtualization decisions.
    Keywords: computing science, virtualization, resource assignment, system design
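
    A minimal sketch of the kind of business-cost trade-off described above (the constants and the cost formula are assumptions, not the paper's model): an allocation is charged for the resources it keeps running plus a switching cost each time an application's resource binding changes.

```python
# A minimal, hypothetical cost model in the spirit described above: the business
# cost of a period is the cost of the resources kept running plus a switching
# cost for every application whose resource binding changed.  Constants assumed.

RESOURCE_COST = 10.0   # cost per active resource per period
SWITCH_COST = 2.0      # cost of rebinding one application to a new resource

def period_cost(prev_binding, binding):
    """Cost of one period given the previous and current app -> resource maps."""
    resources_used = len(set(binding.values()))
    switches = sum(1 for app in binding if prev_binding.get(app) != binding[app])
    return resources_used * RESOURCE_COST + switches * SWITCH_COST

# Static binding: each application keeps a dedicated resource.
static = {"app1": "r1", "app2": "r2", "app3": "r3"}
# Dynamic (virtualized) binding: low-demand applications are consolidated.
dynamic = {"app1": "r1", "app2": "r1", "app3": "r2"}

print("keep static:", period_cost(static, static))    # 30.0 (3 resources, no switches)
print("consolidate:", period_cost(static, dynamic))   # 24.0 (2 resources, 2 switches)
```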

    Balancing Workload and Care Communication Costs in Nurse Patient Assignment

    Nurse-patient assignment is a complex task. Many proposed methods attempt to balance workload across nurses using work-descriptive factors, such as patient acuity or patient location/distance. However, such methods ignore other factors, such as the cost of care communication. In this initial work, we propose a prototype hybrid method that attempts to blend both types of factors in making assignments. We evaluated this hybrid method along with three control methods in a simulated inpatient-unit environment. The results showed that our hybrid method reduces communication cost at the price of some additional acuity imbalance. Future work will focus on refining the method to reduce or avoid this penalty.
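
    A hypothetical sketch of how such a hybrid method might score a candidate assignment, blending acuity imbalance with a care-communication (handoff) cost; the weights and the handoff-cost definition are assumptions for illustration.

```python
# Hypothetical hybrid scoring of a candidate nurse-patient assignment: the score
# blends acuity imbalance across nurses with a care-communication (handoff) cost
# charged whenever a patient moves to a nurse who did not care for them before.

ALPHA = 1.0         # weight on acuity imbalance (assumed)
BETA = 0.5          # weight on communication cost (assumed)
HANDOFF_COST = 3.0  # cost of briefing a new nurse on one patient (assumed)

def hybrid_score(assignment, patients, nurses):
    """Lower is better; assignment maps patient id -> nurse."""
    totals = {n: 0.0 for n in nurses}
    comm_cost = 0.0
    for p in patients:
        nurse = assignment[p["id"]]
        totals[nurse] += p["acuity"]
        if p["prev_nurse"] is not None and p["prev_nurse"] != nurse:
            comm_cost += HANDOFF_COST
    imbalance = max(totals.values()) - min(totals.values())
    return ALPHA * imbalance + BETA * comm_cost

patients = [{"id": 0, "acuity": 3, "prev_nurse": "A"},
            {"id": 1, "acuity": 4, "prev_nurse": "A"},
            {"id": 2, "acuity": 2, "prev_nurse": "B"}]
print(hybrid_score({0: "A", 1: "A", 2: "B"}, patients, ["A", "B"]))  # 5.0: no handoffs, high imbalance
print(hybrid_score({0: "A", 1: "B", 2: "B"}, patients, ["A", "B"]))  # 4.5: one handoff, better balance
```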

    Managing RFID events in large-scale distributed RFID infrastructures

    As RFID installations become larger and more geographically distributed, their scalability becomes a concern. Currently, most RFID processing occurs in a central location, gathering tag scans and matching them to event-condition-action (ECA) rules. However, as the number of scans and ECA rules grows, the workload quickly outpaces the capacity of a centralized processing server. In this paper, we consider the problem of distributing the RFID processing workload across multiple nodes in the system. We describe the problem and present an overview of our approach. We then formulate two decision models for distributing the processing across the system. The first generates an optimal allocation based on global awareness of the state of the system; this problem is NP-hard and assumes that bandwidth and processing resource availability are known in a central location, which is unrealistic in real scenarios, so we use this model as a theoretical optimum for comparison purposes. The second model generates a set of local decisions based on locally available processing and bandwidth information, which takes much less information into account than the global model but still produces useful results. We describe our system architecture and present a set of experimental results that demonstrate that (a) the global model, while providing an optimal allocation of processing responsibilities, does not scale well, requiring hours to solve problems that the localized model can solve in a few tens of seconds; (b) the localized model generates usable solutions, differing from the optimal solution by 2.1% on average for smaller problem sizes and by at most 5.8% for the largest problem size compared; and (c) the localized approach can provide runtime performance near that of the global model, within 3-5%, and up to a 55% improvement in runtime performance over a (uniform) random allocation.
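
    A minimal sketch of the flavor of the localized decision model (an assumed construction, not the paper's formulation): a node chooses where to run an ECA rule's matching work using only its own spare capacity and the bandwidth and capacity it knows for its immediate neighbors.

```python
# Hypothetical localized decision: using only information available at one node
# (its own spare capacity plus the bandwidth and capacity it knows for its
# neighbors), pick where the matching work for an ECA rule should run.

def choose_processing_node(local_node, neighbors, rule_cost, scan_size):
    """Return (node_name, estimated_cost) minimizing cost with local knowledge.

    local_node: {'name', 'cpu_free'}                  cpu_free in rule-ops/second
    neighbors:  [{'name', 'cpu_free', 'bandwidth'}]   bandwidth in bytes/second
    """
    # Running locally costs only processing time.
    best_name, best_cost = local_node["name"], rule_cost / local_node["cpu_free"]
    # Offloading costs the transfer of the scan plus remote processing time.
    for n in neighbors:
        cost = scan_size / n["bandwidth"] + rule_cost / n["cpu_free"]
        if cost < best_cost:
            best_name, best_cost = n["name"], cost
    return best_name, best_cost

local = {"name": "reader-1", "cpu_free": 50.0}
neighbors = [{"name": "edge-A", "cpu_free": 400.0, "bandwidth": 1e6},
             {"name": "edge-B", "cpu_free": 150.0, "bandwidth": 5e5}]
print(choose_processing_node(local, neighbors, rule_cost=1000.0, scan_size=2000.0))
# ('edge-A', 2.502): offloading beats the overloaded local reader
```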


    Web sites allow the collection of vast amounts of navigational data: clickstreams of user traversals through the site. These massive data stores offer the tantalizing possibility of uncovering interesting patterns within the dataset. For e-businesses, always looking for an edge in the hyper-competitive online marketplace, this possibility is of particular interest. Especially valuable to e-businesses is the discovery of Critical Edge Sequences (CESs), which denote frequently traversed subpaths in the catalog. CESs can be used to improve site performance and site management, increase the effectiveness of advertising on the site, and provide additional knowledge of customer interest patterns on the site. Using traditional graph-based and web-mining strategies to find CESs can be expensive in both space and time. In this paper, we propose a number of approximate algorithms to compute the most popular paths between node pairs in a catalog, which are then used to discover CESs. Our methods are both space-efficient and accurate, providing a vast reduction in storage requirements with minimal impact on accuracy. These algorithms, which can be executed off-line in batch mode, are also practical with respect to running time: as variants of single-source shortest-path, they run in log-linear time.
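
    One common way to make most-popular-path queries tractable, sketched below under assumptions of our own (this is not necessarily the authors' algorithm): weight each catalog edge by the negative log of its observed traversal probability, so a standard Dijkstra single-source shortest-path run, which takes log-linear time, returns the path that maximizes the product of traversal probabilities.

```python
import heapq
import math

# Hypothetical construction: if each catalog edge (u, v) carries an observed
# traversal probability p, the most popular path between two pages maximizes the
# product of the p's -- equivalently, it minimizes the sum of -log p, which a
# standard Dijkstra single-source shortest-path run finds in O(E log V) time.

def most_popular_path(graph, source, target):
    """graph: {u: {v: traversal_probability}}; returns (path_probability, path)."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry
        for v, p in graph.get(u, {}).items():
            nd = d - math.log(p)          # -log turns products into sums
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return 0.0, []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return math.exp(-dist[target]), path[::-1]

clicks = {"home": {"laptops": 0.6, "phones": 0.4},
          "laptops": {"gaming": 0.7, "ultrabooks": 0.3},
          "phones": {"gaming": 0.1}}
print(most_popular_path(clicks, "home", "gaming"))
# (~0.42, ['home', 'laptops', 'gaming'])
```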
