235 research outputs found

    Using the Cohort Model in Development of Web Sites and Web Policies

    New York's Center for Technology in Government organized a cohort of state and local government teams, all of which wished to offer services using the World Wide Web as a delivery platform. For the most part, the cohort model was beneficial, providing opportunities for additional learning, for information sharing, and for building on the experience of others.

    Controlling for Lexical Closeness in Survey Research: A Demonstration on the Technology Acceptance Model

    Word co-occurrences in text carry lexical information that can be harvested by data-mining tools such as latent semantic analysis (LSA). In this research perspective paper, we demonstrate the potency of such embedded information by showing that the technology acceptance model (TAM) can be reconstructed significantly by analyzing unrelated newspaper articles. We suggest that part of the reason for the phenomenal statistical validity of TAM across contexts may be related to the lexical closeness among the keywords in its measurement items. We do so not to critique TAM but to praise the quality of its methodology. Next, putting that LSA reconstruction of TAM into perspective, we show that empirical data can provide a significantly better fitting model than LSA data can. Combined, the results raise the possibility that a significant portion of variance in survey-based research results from word co-occurrences in the language itself, regardless of the theory or context of the study. Addressing this possibility, we suggest a method to statistically control for lexical closeness.
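The lexical-closeness idea above can be illustrated with a minimal LSA sketch: build a term-document matrix, take a truncated SVD, and compare word vectors by cosine similarity. The toy corpus and word choices below are hypothetical stand-ins; the paper's analysis uses a large collection of newspaper articles.

```python
# Minimal LSA sketch: words used in similar contexts get closer
# semantic vectors, which is the "lexical closeness" being measured.
import numpy as np

docs = [
    "the system is useful and easy to use",
    "an easy interface makes the tool useful",
    "markets opened higher on strong earnings",
    "earnings reports moved the markets today",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Term-document count matrix.
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[idx[w], j] += 1

# Truncated SVD gives each word a low-dimensional semantic vector.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]

def closeness(w1, w2):
    v1, v2 = word_vecs[idx[w1]], word_vecs[idx[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Words from the same discourse score closer than unrelated ones.
print(closeness("useful", "easy") > closeness("useful", "markets"))
```

In survey terms, if two measurement items score high on such a closeness measure, part of their observed correlation may come from language itself rather than from the construct being measured.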

    Component-based process modelling in health care

    Structural changes and increasing market dynamics in the health care sector intensify hospitals' need for cost savings and process optimization. A first step is documenting processes in order to clarify actual needs. Because health care processes are rather complex and often involve different players with divergent demands, a disciplined approach to modelling processes effectively and efficiently is required. For this purpose, this contribution presents and applies a component-based modelling approach.

    Visualizing the core-periphery distinction in theory domains

    As specific parts of a theory are refined over time, the aggregated set of variables and associations of multiple theory instances provides the identity of a theory domain. This research applies a meta-theoretical analysis to the problem of theory identity and the core-periphery distinction. The theoretico-empirical network for quantitative publications over a 20-year span of two top Information Systems journals is analysed and visualized to illustrate these aspects of theory. The analysis provides insight into the density of research in specific theory domains and the verisimilitude and explanatory ubiquity of core versus peripheral postulates, and it suggests opportunities for increasing explanatory depth and integration in select theory domains.
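One standard way to operationalize a core-periphery distinction on a network like the one described above is k-core decomposition: repeatedly peel off nodes with fewer than k connections, and what survives is a densely interlinked core. The construct names and edges below are hypothetical, a sketch rather than the paper's actual analysis.

```python
# k-core peeling on a toy construct-association network.
from collections import defaultdict

edges = [
    ("usefulness", "intention"), ("ease_of_use", "intention"),
    ("usefulness", "ease_of_use"), ("intention", "usage"),
    ("usefulness", "usage"), ("trust", "intention"),
    ("habit", "usage"),
]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def k_core(adj, k):
    """Iteratively remove nodes of degree < k; the remainder is the k-core."""
    nodes = {n: set(nbrs) for n, nbrs in adj.items()}
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            if n in nodes and len(nodes[n]) < k:
                for m in nodes[n]:
                    nodes[m].discard(n)
                del nodes[n]
                changed = True
    return set(nodes)

# Sparsely connected constructs (here "trust", "habit") fall away,
# leaving the densely interlinked core.
print(sorted(k_core(adj, 2)))
```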

    Understanding the Elephant: The Discourse Approach to Boundary Identification and Corpus Construction for Theory Review Articles

    The goal of a review article is to present the current state of knowledge in a research area. Two important initial steps in writing a review article are boundary identification (identifying a body of potentially relevant past research) and corpus construction (selecting research manuscripts to include in the review). We present a theory-as-discourse approach, which (1) creates a theory ecosystem of potentially relevant prior research using a citation-network approach to boundary identification; and (2) identifies manuscripts for consideration using machine learning or random selection. We demonstrate an instantiation of the theory-as-discourse approach through a proof-of-concept, which we call the automated detection of implicit theory (ADIT) technique. ADIT improves performance over the conventional approach as practiced in past technology acceptance model reviews (i.e., keyword search, sometimes manual citation chaining); it identifies a set of research manuscripts that is more comprehensive and at least as precise. Our analysis shows that the conventional approach failed to identify a majority of past research. Like the blind men examining the elephant, the conventional approach distorts the totality of the phenomenon. ADIT also enables researchers to statistically estimate the number of relevant manuscripts that were excluded from the resulting review article, thus enabling an assessment of the review article's representativeness.
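The citation-network step of boundary identification can be sketched as a breadth-first expansion from seed papers, following both outgoing references and incoming citations. The citation graph and paper IDs below are hypothetical; ADIT additionally screens the expanded set with machine learning, which this sketch omits.

```python
# Breadth-first expansion over a toy citation graph, in both
# directions (references and citations), up to a fixed hop count.
from collections import deque

# paper -> papers it cites (hypothetical IDs)
cites = {
    "seed1": ["p1", "p2"],
    "seed2": ["p2", "p3"],
    "p1": ["p4"],
    "p2": [],
    "p3": ["p1"],
    "p4": [],
}
# reverse edges: papers citing a given paper
cited_by = {p: [] for p in cites}
for src, targets in cites.items():
    for t in targets:
        cited_by[t].append(src)

def theory_ecosystem(seeds, hops=2):
    """Collect every paper reachable within `hops` citation steps."""
    seen, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        paper, d = frontier.popleft()
        if d == hops:
            continue
        for nxt in cites.get(paper, []) + cited_by.get(paper, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

print(sorted(theory_ecosystem(["seed1"])))
```

The resulting "ecosystem" deliberately over-collects; the second step (machine learning or random selection) then narrows it to the review corpus.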

    A Transdisciplinary Approach to Construct Search and Integration

    Human behaviors play a leading role in many critical areas, including the adoption of information systems, prevention of many diseases, and educational achievement. There has been explosive growth of research in the behavioral sciences during the past decade. Behavioral science researchers are now recognizing that, due to this ever-expanding volume of research, it is impossible to find and incorporate all appropriate inter-related disciplinary knowledge. Unfortunately, due to inconsistent language and construct proliferation across disciplines, this excellent but disconnected research has not been utilized fully or effectively to address problems of human health or other areas. This paper introduces a newly developed, cutting-edge technology, the Inter-Nomological Network (INN), which for the first time provides behavioral scientists with an integrating tool so they may effectively build upon prior research. We expect INN to provide the first step in moving the behavioral sciences into an era of integrated science. INN is based on latent semantic analysis (LSA), a theory of language use with associated automatic computerized text analysis capabilities.

    The 4th Industrial Revolution Powered by the Integration of AI, Blockchain, and 5G

    The 21st century has introduced the 4th Industrial Revolution, which describes an industrial paradigm shift that alters social, economic, and political environments simultaneously. Innovative technologies such as blockchain, artificial intelligence, and advanced mobile networks power this digital revolution. Each of these technologies provides a unique component; when integrated, they will establish a foundation to drive future innovation. In this paper, we summarize a 2019 Association for Information Systems Americas Conference on Information Systems (AMCIS) panel session in which researchers who specialize in these technologies discussed new innovations and their integration. This topic has significant implications for both business and academia, as these technologies will disrupt the social, economic, and political landscapes.

    Improving Usability of Social and Behavioral Sciences’ Evidence: A Call to Action for a National Infrastructure Project for Mining Our Knowledge

    Over the last century, the social and behavioral sciences have accumulated a vast storehouse of knowledge with the potential to transform society and all its constituents. Unfortunately, this knowledge has accumulated in a form (e.g., journal papers) and at a scale that make it extremely difficult to search, categorize, analyze, and integrate across studies. In this commentary, based on a National Science Foundation-funded workshop, we describe the social and behavioral sciences' knowledge-management problem. We discuss the knowledge-scale problem and how we lack a common language, a common format to represent knowledge, a means to analyze and summarize in an automated way, and approaches to visualize knowledge at a large scale. We then argue for a collaborative research program between information systems, information science, and computer science (IICS) researchers and social and behavioral science (SBS) researchers to develop information system artifacts that address a problem many scientific disciplines share but that the social and behavioral sciences have uniquely not addressed.

    How Best to Hunt a Mammoth - Toward Automated Knowledge Extraction From Graphical Research Models

    In the Information Systems (IS) discipline, central contributions of research projects are often represented in graphical research models, clearly illustrating constructs and their relationships. Although thousands of such representations exist, methods for extracting this source of knowledge are still in an early stage. We present a method that (1) extracts graphical research models from articles, (2) generates synthetic training data, (3) performs object detection with a neural network, (4) reconstructs the underlying graph, and (5) stores results in a designated research model format. We trained YOLOv7 on 20,000 generated diagrams and evaluated its performance on 100 manually reconstructed diagrams from the Senior Scholars' Basket. The results for extracting graphical research models show an F1-score of 0.82 for nodes, 0.72 for links, and an accuracy of 0.72 for labels, indicating the method's applicability for supporting the population of knowledge repositories and contributing to knowledge synthesis.
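The reported scores combine precision (how many detected elements are correct) and recall (how many ground-truth elements are found) into the F1 measure. A minimal sketch of that arithmetic, with hypothetical counts of detected versus ground-truth diagram elements:

```python
# F1 = harmonic mean of precision and recall, from true positives (tp),
# false positives (fp), and false negatives (fn).
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., 82 correctly detected nodes, 18 spurious detections, 18 missed
print(round(f1_score(82, 18, 18), 2))  # → 0.82
```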

    A Guide to Text Analysis with Latent Semantic Analysis in R with Annotated Code: Studying Online Reviews and the Stack Exchange Community

    In this guide, we introduce researchers in the behavioral sciences in general and MIS in particular to text analysis as done with latent semantic analysis (LSA). The guide contains hands-on annotated code samples in R that walk the reader through a typical process of acquiring relevant texts, creating a semantic space out of them, and then projecting words, phrases, or documents onto that semantic space to calculate their lexical similarities. R is an open-source, popular programming language with extensive statistical libraries. We introduce LSA as a concept, discuss the process of preparing the data, and note its potential and limitations. We demonstrate this process through a sequence of annotated code examples: we start with a study of online reviews that extracts lexical insight about trust. That R code applies singular value decomposition (SVD). The guide next demonstrates a realistically large data analysis of Stack Exchange, a popular Q&A site for programmers. That R code applies an alternative sparse SVD method. All the code and data are available on github.com.
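The guide's code is in R; as a rough Python parallel of the same pipeline (term-document matrix, truncated sparse SVD, document projection), consider the sketch below. The corpus is hypothetical; realistically large analyses like the Stack Exchange study need a sparse representation because the term-document matrix is mostly zeros.

```python
# LSA pipeline sketch: sparse term-document matrix -> truncated SVD
# -> compare documents in the reduced semantic space.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import svds

docs = [
    "great product fast shipping",
    "fast shipping great seller",
    "how to sort a list in python",
    "python list sort question",
]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

rows, cols, vals = [], [], []
for j, d in enumerate(docs):
    for w in d.split():
        rows.append(idx[w]); cols.append(j); vals.append(1.0)
A = csc_matrix((vals, (rows, cols)), shape=(len(vocab), len(docs)))

# Truncated sparse SVD: A ~ U S Vt; rows of Vt.T are document coordinates.
U, s, Vt = svds(A, k=2)
doc_vecs = Vt.T

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two review documents land closer together than a review and a
# programming question.
print(cosine(doc_vecs[0], doc_vecs[1]) > cosine(doc_vecs[0], doc_vecs[2]))
```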