
    Recollections on Mentoring

    In this article about mentoring I simply try to tell a story about what transpired involving me and our doctoral students at the University of Minnesota from the early 1970s until the early 1990s. I try to offer a few guiding principles on mentoring doctoral students, but I must set forth a disclaimer that my observations are based on a situation that was very environmentally specific, and thus any generalizations should be taken in the context of the time and place.

    ASSESSMENT OF THE INFORMATION SYSTEMS ORGANIZATION: AN EMPIRICAL INVESTIGATION OF ASSESSOR PERSPECTIVES

    Assessing the information systems (IS) function within organizations has been identified as a critical issue for both executive and IS management. Unfortunately, despite the importance of the issue, little progress has been made in agreeing on appropriate measures of the contribution of the IS function to an enterprise. There is evidence that this lack of progress in defining measures of IS performance may stem from the differing perspectives of the people performing the assessment. The research described in this paper investigates the relationship between the organizational function of an individual performing an assessment of the IS organization and (1) the types of information selected for the assessment and (2) the assessment outcome. The research results indicate that there are differences in information preferences by executive managers, IS managers, and internal auditors for performing an IS organization assessment and that the people in these organizational roles identify different types of strengths and weaknesses when assessing an IS organization.

    Knowledge Sharing in High Technology Companies

    This paper reports on the results of studying the applicability of a model for Shared Knowledge Creation (SKC). The model is based upon a previous pilot study and was tested in a New Product Development (NPD) setting involving four projects in two high technology companies located in the Nordic countries. In particular, four factors presented in the model were explored: (1) the SKC process; (2) IT infrastructure in support of the process; (3) catalysts; and (4) organizational issues and SKC. Implications are drawn for practice and for future research.

    Understanding the Effectiveness of Computer Graphics for Decision Support: A Cumulative Experimental Approach

    A total of 840 junior- and senior-level undergraduate business students participated in three experiments that compared computer-generated graphical forms of data presentation with traditional tabular reports. The first experiment compared tables and bar charts for their effects on readability, interpretation accuracy, and decision making. No differences in interpretation accuracy or decision quality were observed for the two groups, although tabular reports were rated as easier to read and understand than graphical reports. The second experiment compared line plots with tables for their effects on interpretation accuracy and decision quality. Subjects with graphical reports outperformed those with tables. There were no meaningful differences in interpretation accuracy across treatment groups. The third experiment compared graphical and tabular reports for their ability to convey a message to the reader. Only in situations in which a vast amount of information was presented and relatively simple impressions were to be made did subjects given graphs outperform those using tables. This program of cumulative experiments indicates that generalized claims of superiority of graphic presentation are unsupported, at least for decision-related activities. In fact, the experiments suggest that the effectiveness of data display format is largely a function of the characteristics of the task at hand, and that impressions gleaned from one-shot studies of the effectiveness of the use of graphs may be nothing more than situationally dependent artifacts.

    Methodological Issues in Experimental IS Research: Experiences and Recommendations

    Within the last ten years, the use of experimental methodology in information systems research has substantially increased. However, despite the popularity of experimentation, studies suffer from major methodological problems: (1) lack of underlying theory, (2) proliferation of measuring instruments, (3) inappropriate research designs, (4) diversity of experimental tasks, and (5) lack of internal validity. These problems have led to an accumulation of conflicting results in several areas of IS research, in particular, research in the area of graphics and information presentation. This paper uses the area of information presentation format to explore the nature of the methodological problems mentioned above and to suggest potential remedies:
    1. Due to the lack of a theoretical basis, information presentation researchers do not have any common ground for conducting studies and interpreting their results. This has resulted in one-shot, ad hoc studies that do not build on the work of others. No state of relatedness among studies has emerged. Only through programs of research can we hope for an underlying theory to emerge.
    2. The proliferation of measuring instruments, many of which may have problems with reliability and validity, has plagued IS research. Again, only through a program of research can we hope to construct a set of measuring instruments applicable and easily adaptable to a large number of studies (see the reliability sketch following this abstract).
    3. With regard to research design, simplistic and nonpragmatic studies as well as poorly controlled experiments have impeded the progress of IS research. Suggested remedies include the adoption of multivariate designs, use of decision maker productivity as a dependent variable, and more effective experimental control through measurement of factors that are known from previous research to influence decision performance.
    4. The presence of a multitude of task environments has also posed problems. The employment of diverse tasks makes comparisons of results across studies inappropriate. A taxonomy of tasks must be developed before we can meaningfully integrate research findings.
    5. Many studies have suffered from internal validity problems. A remedy for this requires more effective precautions to ensure that the findings of a study are due to the factors researched rather than to accidents.
    To illustrate this last problem of internal validity and the steps needed to address it, a series of experimental studies involving managerial graphics is described. The research study conducted at the University of Minnesota was initially set up to investigate the relationship between graphical decision aids, task complexity, and decision maker performance. First, a task, and a case that was to provide a task setting, were developed. Also, questionnaires and tests were constructed to gather information on (1) the background of subjects, (2) the motivation of subjects, (3) the subjects' satisfaction with the graphs, (4) the perceived complexity and difficulty of the problem-solving task, and (5) the subjects' interpretation accuracy in reading graphs. After the development of the task and other experimental material, the experiment was pretested. The results from the pilot study gave the authors every reason to believe that the task did not have any major validity problems. However, when the experiment was actually given to 63 graduate students, the data did not reveal any consistent patterns due to graphical and task treatments.
    This, of course, concerned the authors, and, as a result, attention was directed toward improving the experimental task, research design, and measurement. A second experiment was conducted to test whether the insignificant results in the first experiment were caused by the graphs or by misleading or confusing information in the task. The data from the second experiment, collected on 20 experimental subjects, convinced the authors that the main problem causing the insignificant results had not been the poor quality of the graphs, but the fact that, in general, subjects were just not able to perform the task. However, the authors did not know whether this poor performance was due to an overly difficult task or to misleading or confusing information within the task. Therefore, a third experiment was conducted to resolve this question. The third study used 17 managers as experimental subjects. It was assumed that if the managers could satisfactorily complete the task, the authors could conclude that the task was valid, but too difficult for graduate students. The analysis of the data collected from the third experiment confirmed, however, that serious problems existed with the task itself. Debriefings of the managers indicated that the case description, in combination with the presented data on marketing variables, included confusing and misleading data. Obviously, the task was not providing the basis for answering the research question on the relationship of task, presentation format, and decision performance. Thus, a major revision of the task was undertaken. The revised material is currently undergoing pretesting. In summary, the authors have gone through several experiments in searching and testing for valid measurements. During this process we have learned an invaluable lesson that we hope will be useful to others in their research endeavors. We discovered that the process of coming up with an effective task and variable measurement is lengthy, costly, and may have uncertain outcomes even if considerable precautions are taken. For experimental IS researchers, particularly those performing studies on the use of managerial graphics, cautions and guidelines are provided to help them address more effectively the common methodological problems.
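    The instrument-reliability concern in remedy 2 is commonly checked with an internal-consistency statistic such as Cronbach's alpha. The sketch below is a minimal illustration, not taken from the paper, and the item scores are invented; it simply shows how such a reliability estimate could be computed for a multi-item questionnaire scale.

```python
# Minimal sketch: internal-consistency reliability (Cronbach's alpha) for a
# questionnaire-style measuring instrument. The responses below are invented
# purely for illustration; they do not come from the experiments described above.

def cronbach_alpha(scores):
    """scores: one row per subject, one column per item of the instrument."""
    n_items = len(scores[0])

    def sample_variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    # Alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    item_variances = [sample_variance([row[i] for row in scores]) for i in range(n_items)]
    total_variance = sample_variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_variances) / total_variance)

# Hypothetical responses of five subjects to a four-item, 7-point satisfaction scale.
responses = [
    [5, 6, 5, 6],
    [3, 4, 3, 3],
    [6, 6, 7, 6],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
]
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```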

    THE IMPACT OF COMPUTER-BASED SUPPORT ON THE PROCESS AND OUTCOMES OF GROUP DECISION MAKING

    Interactive computer-based systems to support group decision making (group decision support systems or GDSS) have received increased attention from researchers and practitioners in recent years. Huber (1984) argues that as organizational environments become more turbulent and complex, decisions will be required to be made in less time and with greater information exchange within decision making groups. Thus, it is imperative that studies be undertaken to determine the types and characteristics of group decision tasks most appropriate for support by a GDSS and to determine the features of a GDSS that will support those tasks. A number of prominent researchers in the field of group decision making (Shaw, 1973, 1981; Hackman and Morris, 1975; Fisher, 1974) agree that the decision task itself is probably the most important factor in determining group decision making effectiveness. The characteristics of group decision tasks are many and varied, but according to Shaw (1973) the level of difficulty/complexity of the decision is a fundamental factor in influencing the performance of the group. Some decisions are characterized by information that is clear, concise, and easily communicable, and by relationships between important factors in the decision that are easily understood. In short, these decisions require relatively little effort to make and are therefore called easy decisions. Decision tasks where the information to be considered in making the decision is incomplete and difficult to understand, and where complex relationships exist within the information available, are called complex or difficult decisions. The role of decision task difficulty in the effective use of GDSS is considered in this study. This research is an initial experimental study, exploratory in nature, that aims to get a first-level understanding of the impact of a computer-based DSS on group decision making. The group decision support system that is used in this study has only those features that specifically support group decision making (alternatives generation and communication, preference ranking and voting support). The reason for this approach is to start a program of research with a simple system in order to determine the particular impact of these features not only on the outcomes of group decision making (such as decision quality), but on the process of group decision making as well. A controlled 2 x 2 factorial experiment was used to compare the decisions made by groups that had GDSS support with groups that had no GDSS support, and groups given a high-difficulty task with groups given a low-difficulty task. Figure 1 shows the relationship among the main variables in the study. The experimental task was a marketing business case in which the company was experiencing declining profits. Each group was asked to find the problem that was causing the declining profits. Difficulty was manipulated by modifying the data in the case. The setting for this experiment was a decision room designed and set up to accommodate face-to-face group interaction. The GDSS treatment entailed the use of one computer terminal per group member so that the GDSS could be used to support group decision making. Each group member in the GDSS treatment also had the use of a pencil, paper, a hand calculator, and a blackboard. For the non-GDSS treatment, the terminals were removed and the group used just pencils, paper, hand calculators, and a blackboard to assist in making the decision.
    The computer hardware consisted of a DEC VAX 11/780 timesharing system running the VMS operating system and DEC VT-102 terminals. The terminals were connected to the VAX 11/780 using 2400 baud direct lines. The GDSS, called Decision Aid for Groups (DECAID), was designed, coded, and tested to make sure that it worked in the experimental setting. The approach to design was to implement the features and then to refine the system through testing to make those features work as efficiently as possible. The GDSS software performed the basic functions of recording, storing, and displaying alternatives that were entered by group members, aggregating and displaying preference rankings that had been entered for those alternatives, and recording votes (either publicly or anonymously) for the various alternatives generated. The system was easy to use and menu driven (an illustrative sketch of these functions follows this abstract). Eighty-four senior undergraduate business administration students participated in the study. These subjects had each taken at least one course in management science/decision analysis techniques, marketing, and management theory/organizational behavior, and all had exposure to case analysis techniques. All subjects had been given training in the use of the GDSS. Measures were taken of decision outcomes (decision quality, decision time, decision confidence, satisfaction with the group process, and amount of GDSS usage) and decision process variables (number of issues considered, number of alternatives generated, and participation in the decision making). Decision quality was measured along two dimensions: (1) decision content, or how close the group's decision came to that made by a panel of experts; and (2) decision reasoning, or how similar the group's reasoning in arriving at its decision was to the reasoning of the experts. Decision time was defined as the length of time it took the group to reach a consensus decision. Decision confidence and satisfaction with the group process were measured by individual responses to a post-test questionnaire. The individual responses were then aggregated to give a group value. The amount of GDSS usage was measured by examining the computer logs that were kept during the GDSS sessions. Decision issues were defined as factors that were important in the analysis of the case. Decision alternatives were defined as those issues in the case that the group analyzed as being the possible major problems in the case and, hence, possible solutions to the decision task. Participation was measured by counting the number of task-related comments made by each individual group member. Issues, alternatives, and participation were determined by analysis of the video and audio tapes that were made of the experimental sessions. The major findings of the study are:
    1. Decision quality is enhanced when decision making is supported by a GDSS, particularly for high-difficulty tasks.
    2. Decision time is not affected by use of a GDSS.
    3. Confidence in the group decision and satisfaction with the decision making process are reduced when a GDSS is used, irrespective of task difficulty.
    4. The number of alternatives considered is increased when a GDSS is used to support group decision making.
    5. Participation in the group decision making process is unaffected by GDSS support or by decision task difficulty.
    The paper concludes by suggesting directions for future research into GDSS. Work is needed to determine the effectiveness of additional features of a GDSS (such as other communication features, modeling features, etc.), to understand the impact of GDSS on the different phases of decision making, and to examine the effect of repeated use of a GDSS on the quality of group decision making.
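    The abstract describes DECAID only at the feature level; the original VAX/VMS implementation is not reproduced here. The Python sketch below is a hypothetical illustration of that feature set (recording alternatives, aggregating preference rankings, and public or anonymous voting). All class and method names are invented for the example, and the Borda-style aggregation rule is an assumption, since the abstract does not say how rankings were combined.

```python
# Hypothetical sketch of the group-support features attributed to DECAID:
# storing alternatives entered by members, aggregating preference rankings,
# and recording public or anonymous votes. Not the original implementation.

from collections import defaultdict

class GroupDecisionSession:
    def __init__(self):
        self.alternatives = []   # alternatives entered by any group member
        self.rankings = {}       # member -> list of alternative indices, best first
        self.votes = []          # (member or None for anonymous, alternative index)

    def add_alternative(self, text):
        self.alternatives.append(text)
        return len(self.alternatives) - 1   # index used for ranking and voting

    def submit_ranking(self, member, ordered_indices):
        self.rankings[member] = list(ordered_indices)

    def aggregated_ranking(self):
        """Borda-style aggregation (assumed): higher score = more preferred overall."""
        scores = defaultdict(int)
        n = len(self.alternatives)
        for ordered in self.rankings.values():
            for position, idx in enumerate(ordered):
                scores[idx] += n - position
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    def cast_vote(self, alternative_index, member=None):
        """member=None records the vote anonymously."""
        self.votes.append((member, alternative_index))

    def tally(self):
        counts = defaultdict(int)
        for _, idx in self.votes:
            counts[idx] += 1
        return dict(counts)

# Example usage with made-up case alternatives.
session = GroupDecisionSession()
a = session.add_alternative("Reprice the declining product line")
b = session.add_alternative("Increase advertising spend")
session.submit_ranking("member1", [a, b])
session.submit_ranking("member2", [b, a])
session.cast_vote(a, member="member1")   # public vote
session.cast_vote(a)                     # anonymous vote
print(session.aggregated_ranking(), session.tally())
```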

    Developing an Instrument for Knowledge Management Project Evaluation

    Many knowledge management (KM) projects have been initiated, some of which have been successes but many of which have been failures. Measuring the success or failure of KM initiatives is not easy, and in order to do so some kind of measurement process has to be available. There are three points at which evaluation of KM projects can, and should, be done: (1) when deciding whether to start and where to focus, (2) once under way, following up on a project and making adjustments if needed, and (3) when completed, to evaluate the project outcomes. This paper concentrates on the first two areas by developing a general instrument for the evaluation of KM projects.

    Reflections on Designing Field Research for Emerging IS Topics: The Case of Knowledge Management

    To understand how to improve the research process for future projects, it is useful to take a retrospective view of a research project. This is especially true for emerging topics in IS, where many opportunities are available to shape directions and priorities. This article presents a reflective analysis of a field research project in the area of knowledge management. The article examines the process history and assesses the decisions taken and activities carried out in the early formative stages of a field research project. With a detailed anthropological flavor, the paper describes the ins and outs of the various phases of the research process in a narrative, experiential way and analyzes what was learned. The results should be useful for future researchers. The major lessons learned were: 1. Retrospectively examining the research of others can be useful in learning how to improve one's ability to do research in a particular area, such as field research in information systems. 2. Researchers wishing to develop a long-term relationship with a host organization may have to be flexible in their research approaches and methods, even to the extent of sacrificing rigor for providing outcomes of use to the host organization. 3. Pilot studies should be carefully designed and executed to maximize learning for later, more extensive studies.

    Attributing scientific and technical progress: the case of holography

    Holography, the three-dimensional imaging technology, was portrayed widely as a paradigm of progress during its decade of explosive expansion, 1964–73, and during its subsequent consolidation for commercial and artistic uses up to the mid-1980s. An unusually seductive and prolific subject, holography successively spawned scientific insights, putative applications, and new constituencies of practitioners and consumers. Waves of forecasts, associated with different sponsors and user communities, cast holography as a field on the verge of success, but with the dimensions of success repeatedly refashioned. This retargeting of the subject represented a degree of cynical marketeering, but was underpinned by implicit confidence in philosophical positivism and faith in technological progressivism. Each of its communities defined success in terms of expansion, and anticipated continual progressive increase. This paper discusses the contrasting definitions of progress in holography, and how they were fashioned in changing contexts. Focusing equally on reputed ‘failures’ of some aspects of the subject, it explores the varied attributes by which success and failure were linked with progress by different technical communities. This important case illuminates the peculiar post-World War II environment that melded the military, commercial, and popular engagement with scientific and technological subjects, and the competing criteria by which they assessed the products of science.