
    A Methodology for Profiling Literature using Co-citation Analysis

    The contribution of this paper is a methodology for profiling literature in Information Systems (IS) using a powerful tool for co-citation analysis, Citespace. Co-citation analysis provides important insights into knowledge domains by identifying frequently co-cited papers, authors and journals. The methodology is applied to a dataset comprising citation data pertaining to a leading European journal, the European Journal of Information Systems (EJIS). In this paper we outline the steps involved in using Citespace to profile IS literature, using the EJIS dataset as an example. We hope that readers will employ and/or extend the given methodology to conduct similar bibliometric studies in IS and other research areas.
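    The counting step underlying co-citation analysis can be sketched in a few lines: two papers are co-cited whenever they appear together in the same reference list. The snippet below is an illustrative toy with made-up paper labels, not the Citespace tool or the EJIS dataset used in the paper.

```python
from itertools import combinations
from collections import Counter

def co_citation_counts(reference_lists):
    """Count how often each pair of papers is cited together.

    reference_lists: iterable of reference lists, one per citing paper.
    Returns a Counter mapping an ordered pair (paper_a, paper_b) to its
    co-citation count.
    """
    counts = Counter()
    for refs in reference_lists:
        # Every unordered pair within one reference list is one co-citation.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

# Toy example: three citing papers and their (hypothetical) reference lists.
papers = [
    ["Davis1989", "DeLone1992", "Venkatesh2003"],
    ["Davis1989", "DeLone1992"],
    ["DeLone1992", "Venkatesh2003"],
]
print(co_citation_counts(papers)[("Davis1989", "DeLone1992")])  # → 2
```

    Tools such as Citespace build on counts like these to construct and visualise the co-citation network.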

    Profiling research published in the journal of enterprise information management (JEIM)

    Purpose – The purpose of this paper is to analyse research published in the Journal of Enterprise Information Management (JEIM) over the last ten years (1999 to 2008). Design/methodology/approach – Employing a profiling approach, the analysis of the 381 JEIM publications examines variables such as the most active authors, geographic diversity, authors' backgrounds, co-author analysis, research methods and keyword analysis. Findings – All the findings relate to the period of analysis (1999 to 2008). (a) Research categorised under descriptive, theoretical and conceptual methods is the most dominant research approach followed by JEIM authors; case study research comes second. (b) The largest proportion of contributions came from researchers and practitioners with an information systems background, followed by those with a background in business and in computer science and IT. (c) The keyword analysis suggests that ‘information systems’, ‘electronic commerce’, ‘internet’, ‘logistics’, ‘supply chain management’, ‘decision making’, ‘small to medium-sized enterprises’, ‘information management’, ‘outsourcing’, and ‘modelling’ were the most frequently investigated keywords. (d) The paper presents and discusses the findings obtained from the citation analysis that determines the impact of the research published in the JEIM. Originality/value – The primary value of this paper lies in extending the understanding of the evolution and patterns of IS research. This has been achieved by analysing and synthesising existing JEIM publications.
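    The keyword analysis described in such profiling studies amounts to a frequency count over per-paper keyword lists. A minimal sketch, with invented records rather than the actual JEIM data:

```python
from collections import Counter

def keyword_frequencies(articles):
    """Tally keyword usage across a set of publication records.

    articles: iterable of keyword lists, one list per paper. Keywords are
    normalised to lower case so 'Internet' and 'internet' are merged.
    """
    counts = Counter()
    for keywords in articles:
        counts.update(k.strip().lower() for k in keywords)
    return counts

# Hypothetical records, loosely echoing the keywords reported above.
records = [
    ["Information Systems", "Electronic Commerce"],
    ["information systems", "Outsourcing"],
    ["Supply Chain Management", "information systems"],
]
freq = keyword_frequencies(records)
print(freq.most_common(1))  # → [('information systems', 3)]
```

    Normalising case before counting matters in practice, since author-supplied keywords are rarely consistent across papers.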

    International comparative performance of the UK research base: 2011


    Profiling a decade of information systems frontiers’ research

    This article analyses the first ten years of research published in Information Systems Frontiers (ISF), from 1999 to 2008. The analysis of the published material includes examining variables such as the most productive authors, citation analysis, the universities associated with the most publications, geographic diversity, authors’ backgrounds and research methods. The keyword analysis suggests that ISF research has evolved from establishing the concepts and domain of information systems (IS), technology and management to contemporary issues such as outsourcing, web services and security. The analysis presented in this paper has identified intellectually significant studies that have contributed to the development and accumulation of the intellectual wealth of ISF. The analysis has also identified authors published in other journals whose work has largely shaped and guided the research published in ISF. This research has implications for researchers, journal editors, and research institutions.

    Exploiting the full power of temporal gene expression profiling through a new statistical test: Application to the analysis of muscular dystrophy data

    Background: The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results: We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan- and gamma-sarcoglycan-deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles of selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion: The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest.
The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it allows fast interpretation of the temporal behaviour of genes. The R code is available from V.V. The microarray data have been submitted to GEO under series GSE1574 and GSE3523.
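    The core idea of the test can be sketched as follows: fit a polynomial to each replicate's temporal profile and compare the fitted coefficient vectors between conditions with a two-sample Hotelling T2 statistic. This is a simplified illustration on simulated data, not the authors' R implementation; the sample sizes, polynomial degree and noise levels are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.linspace(0.0, 1.0, 6)  # six hypothetical sampling ages

def fit_coeffs(profiles, degree=2):
    """Fit a polynomial of the given degree to each replicate's temporal
    profile. profiles: (n_replicates, n_timepoints) array.
    Returns an (n_replicates, degree + 1) array of coefficient vectors."""
    return np.array([np.polyfit(times, y, degree) for y in profiles])

def hotelling_t2(a, b):
    """Two-sample Hotelling T^2 between two sets of coefficient vectors
    (rows of a and b are per-replicate polynomial coefficients)."""
    n1, n2 = len(a), len(b)
    diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(a, rowvar=False)
              + (n2 - 1) * np.cov(b, rowvar=False)) / (n1 + n2 - 2)
    # pinv guards against a near-singular pooled covariance matrix.
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.pinv(pooled) @ diff

# Simulated gene: a rising temporal profile in the "dystrophic" condition
# only, versus a flat profile in wild type (4 replicates per condition).
wild = rng.normal(0.0, 0.1, size=(4, 6))
dmd = 3 * times ** 2 + rng.normal(0.0, 0.1, size=(4, 6))
flat = rng.normal(0.0, 0.1, size=(4, 6))  # a second, unchanged gene

t2_signal = hotelling_t2(fit_coeffs(dmd), fit_coeffs(wild))
t2_null = hotelling_t2(fit_coeffs(flat), fit_coeffs(wild))
print(t2_signal > t2_null)
```

    Genes whose T2 statistic is large relative to the null distribution are the candidates with genuinely different temporal behaviour; the full method extends this comparison to more than two conditions.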

    Classification of information systems research revisited: A keyword analysis approach

    A number of studies have previously been conducted on keyword analysis in order to provide a comprehensive scheme to classify information systems (IS) research. However, these studies appeared prior to 1994, and IS research has clearly developed substantially since then with the emergence of areas such as electronic commerce, electronic government, electronic health and numerous others. Furthermore, the majority of European IS outlets - such as the European Journal of Information Systems and Information Systems Journal - were founded in the early 1990s, and keywords from these journals were not included in any previous work. Given that a number of studies have raised the issue of differences in European and North American IS research topics and approaches, it is arguable that any such analysis must consider sources from both locations to provide a representative and balanced view of IS classification. Moreover, it has also been argued that there is a need for further work in order to create a comprehensive keyword classification scheme reflecting the current state of the art. Consequently, the aim of this paper is to present the results of a keyword analysis utilizing keywords appearing in major peer-reviewed IS publications after 1990 and through 2007. This aim is realized by means of the two following objectives: (1) collect all keywords appearing in 24 peer-reviewed IS journals after 1990; and (2) identify keywords not included in the previous IS keyword classification scheme. This paper also describes further research required in order to place new keywords in appropriate IS research categories. The paper makes an incremental contribution toward a contemporary means of classifying IS research. This work is important and useful for researchers in understanding the area and evolution of the IS field and also has implications for improving information search and retrieval activities.
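    Objective (2), identifying keywords absent from the earlier classification scheme, is essentially a case-insensitive set difference. The keyword lists below are illustrative placeholders, not the paper's actual scheme or corpus:

```python
def new_keywords(collected, previous_scheme):
    """Return keywords seen in the surveyed journals but absent from the
    earlier classification scheme, compared case-insensitively."""
    known = {k.lower() for k in previous_scheme}
    return sorted({k.lower() for k in collected} - known)

# Hypothetical pre-1994 scheme versus keywords collected from 1991-2007.
scheme_pre_1994 = ["decision support systems", "expert systems", "MIS"]
collected_1991_2007 = ["Electronic Commerce", "MIS", "Electronic Government",
                       "Expert Systems", "Electronic Health"]
print(new_keywords(collected_1991_2007, scheme_pre_1994))
# → ['electronic commerce', 'electronic government', 'electronic health']
```

    The remaining (harder) step the paper leaves for further research is assigning each surviving keyword to an appropriate IS research category.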

    Electronic fraud detection in the U.S. Medicaid Healthcare Program: lessons learned from other industries

    It is estimated that between $600 and $850 billion annually is lost to fraud, waste, and abuse in the US healthcare system, with $125 to $175 billion of this due to fraudulent activity (Kelley 2009). Medicaid, a state-run, federally-matched government program which accounts for roughly one-quarter of all healthcare expenses in the US, has been a particularly susceptible target for fraud in recent years. With escalating overall healthcare costs, payers, especially government-run programs, must seek savings throughout the system to maintain reasonable quality-of-care standards. As such, the need for effective fraud detection and prevention is critical. Electronic fraud detection systems are widely used in the insurance, telecommunications, and financial sectors. What lessons can be learned from these efforts and applied to improve fraud detection in the Medicaid health care program? In this paper, we conduct a systematic literature study to analyze the applicability of existing electronic fraud detection techniques from similar industries to the US Medicaid program.