
    Cooperative Processing: An Agenda for Research

    The purpose of this paper is to explore a potential research agenda for Cooperative Processing (COP). COP is a method of processing in which communications is an integral part of executing an application. Numerous innovative products that support COP are starting to appear on the market. However, as is usually the case with any new technology, many organizations have not yet implemented COP; they have adopted a wait-and-see attitude. Such a stance can be attributed to the newness of applications operating in COP mode and to the lack of data demonstrating COP's uses and benefits. For example, no data at present demonstrates basic COP efficacy. Among many possible research areas, this paper suggests topics such as potential COP users, types of applications benefiting from the COP processing mode, and organizational and technological factors involved in COP implementation.
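    As a hedged illustration of the COP idea only (no product from the paper is referenced; the names, the JSON protocol, and the in-memory data store are invented for this sketch), one application can be split between a "workstation" side that handles presentation and a "host" side that owns the data, with a message exchange integral to executing every request:

```python
import json
import socket
import threading

# Hypothetical sketch of a cooperatively processed application: the host
# owns the data, the workstation owns presentation, and communication is
# an integral part of executing each request.

def host_side(sock):
    # Host portion: answers data queries from an invented in-memory store.
    inventory = {"widgets": 42, "gears": 7}
    request = json.loads(sock.recv(1024).decode())
    qty = inventory.get(request["item"], 0)
    sock.sendall(json.dumps({"item": request["item"], "qty": qty}).encode())
    sock.close()

def workstation_side(sock, item):
    # Workstation portion: builds the query and formats the reply for display.
    sock.sendall(json.dumps({"item": item}).encode())
    reply = json.loads(sock.recv(1024).decode())
    sock.close()
    return f"{reply['item']}: {reply['qty']} in stock"

host_end, ws_end = socket.socketpair()  # stands in for a real network link
worker = threading.Thread(target=host_side, args=(host_end,))
worker.start()
display = workstation_side(ws_end, "widgets")
worker.join()
print(display)
```

    Neither half is a complete application on its own, which is the distinguishing property of the COP mode the paper describes.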

    A strategic plan for MIS at James River Corporation of Virginia

    The objectives of this paper are (1) to present a methodology for developing an M.I.S. strategic plan, and (2) to recommend an M.I.S. organizational structure for James River Corporation of Virginia. The paper begins by describing the five businesses of James River and then describes a process for developing a strategic plan for an M.I.S. function. The next two sections define a desired future state for M.I.S. in James River and present guidelines for implementing this future state. The fifth section defines a method of implementing the strategy across the five diverse businesses of the corporation.

    Enterprise storage report for the 1990's

    Data processing has become an increasingly vital function, if not the most vital function, in most businesses today. No longer only a mainframe domain, the data processing enterprise also includes the midrange and workstation platforms, either local or remote. This expanded view of the enterprise has encouraged more and more businesses to take a strategic, long-range view of information management rather than the short-term tactical approaches of the past. Some of the significant aspects of data storage in the enterprise for the 1990's are highlighted.

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991, at the NASA/Goddard Space Flight Center, are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990's.

    To Host a Legacy System to the Web

    The dramatic improvements in global interconnectivity due to intranets, extranets, and the Internet have led many enterprises to consider migrating legacy systems to web-based systems. While data remapping is relatively straightforward in most cases, greater challenges lie in adapting legacy application software. This research effort describes an experiment in which a legacy system is migrated to a web client/server environment. First, this thesis reports on the difficulties and issues that arise when porting a legacy system, International Invoice (IIMM), to a web client/server environment. Next, this research analyzes the underlying issues and offers cautionary guidance to future migrators. Finally, this research effort builds a prototype of the legacy system in a web client/server environment that demonstrates effective strategies for dealing with these issues.
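    One common shape for this kind of port is to leave the legacy routine untouched and place a thin web gateway in front of it. The Python sketch below is purely illustrative of that wrapper approach: `compute_invoice_total` merely stands in for an IIMM routine, and the JSON request format is invented here, not taken from the thesis.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def compute_invoice_total(lines):
    # Unmodified "legacy" business logic (a hypothetical stand-in).
    return sum(qty * price for qty, price in lines)

class Gateway(BaseHTTPRequestHandler):
    # Thin web front end: remaps an HTTP/JSON request onto the legacy call.
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        total = compute_invoice_total([tuple(l) for l in body["lines"]])
        out = json.dumps({"total": total}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), Gateway)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A web client posts invoice lines and receives the computed total.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"lines": [[2, 10.0], [1, 5.5]]}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
server.shutdown()
print(resp["total"])
```

    The point of the pattern is that data remapping (the JSON layer) stays separate from the untouched application logic, which matches the thesis's observation that adapting the application software, not the data, is the hard part.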

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world. Even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result it is they that feel these pressures the most. In recent years there have been various solutions for enabling a re-engineering of these legacy systems. It quickly became clear that completely rewriting them was not possible, so various integration strategies emerged. Of these, the CORBA standard by the Object Management Group emerged as the strongest, providing a standards-based solution that enables a mainframe application to become a peer in a distributed computing environment. However, the requirements did not stop there. The mainframe systems were reliable, secure, scalable, and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions to the problem of meeting these requirements, have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications inter-operate with newer object-oriented technologies.
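    The core wrapper idea can be shown without a real ORB. The plain-Python sketch below (hypothetical throughout: `legacy_transaction`, the transaction code, and the account data are invented, and the facade only imitates what a CORBA servant generated from IDL would do) shows an object-oriented facade forwarding method calls to a transaction-style mainframe entry point, so the legacy application presents an object interface to distributed clients:

```python
def legacy_transaction(txn_code, payload):
    # Stand-in for a record-oriented mainframe entry point that accepts a
    # fixed-format field and returns one (invented for this sketch).
    if txn_code == "BAL1":
        balances = {"ACC001": "1500.00", "ACC002": "75.25"}
        return balances.get(payload.strip(), "0.00")
    raise ValueError(f"unknown transaction {txn_code}")

class AccountFacade:
    """Object interface of the kind a CORBA IDL might describe, e.g.
    interface Account { string balance(in string acct); }."""

    def balance(self, acct: str) -> str:
        # Marshal the object-oriented call into the legacy record format.
        return legacy_transaction("BAL1", acct.ljust(8))

print(AccountFacade().balance("ACC001"))
```

    In a real deployment the facade would be a CORBA servant and the forwarding step an invocation of the mainframe transaction monitor; the sketch only shows the shape of the mapping.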

    Large-Scale Client/Server Migration Methodology

    The purpose of this dissertation is to explain how to migrate a medium-sized or large company to client/server computing. It draws heavily on the recent IBM Boca Raton migration experience. The client/server computing model is introduced and related, by a Business Reengineering Model, to the major trends that are affecting most businesses today, including business process reengineering, empowered teams, and quality management. A recommended information technology strategy is presented. A business case development approach, necessary to justify the large expenditures required for a client/server migration, is discussed. A five-phase migration management methodology is presented to explain how a business can be transformed from mid-range or mainframe-centric computing to client/server computing. Requirements definition, selection methodology, and development alternatives for client/server applications are presented. Applications are broadly categorized for use by individuals (personal applications) or teams. Client systems, server systems, and network infrastructures are described along with discussions of requirements definition, selection, installation, and support. The issues of user communication, education, and support with respect to a large client/server infrastructure are explored. Measurements for evaluation of a client/server computing environment are discussed with actual results achieved at the IBM Boca Raton site during the 1994 migration. The dissertation concludes with critical success factors for client/server computing investments and perspectives regarding future technology in each major area.

    ITACS Annual Accountability Report: FY2004 Accomplishments and Challenges


    Adaptive management of emerging battlefield network

    The management of a battlefield network takes place in a Network Operations Center (NOC). Depending on the importance of the managed network, the manager is sometimes required to be present at all times within the physical installations of the NOC. The decisions span a wide spectrum of network configuration, fault detection and repair, and network performance improvement. In battlefield network operations especially, these decisions are sometimes so important that they can be characterized as critical to the success of the whole military operation. Often the available response time is so short that it exceeds normal human response limits. An automated response that also carries the characteristics of human intelligence is needed to overcome the restrictions that the human nature of an administrator imposes. The research will establish a proper computer network management architecture for an adaptive network. This architecture will enhance the capabilities of network management in terms of both cost and efficiency.
    http://archive.org/details/adaptivemanageme109451678
    Lieutenant Commander, Hellenic Navy
    Approved for public release; distribution is unlimited
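    The automated-response idea in the abstract amounts to a monitor-analyze-act loop that reacts faster than a human NOC operator could. The sketch below is only a schematic reading of that idea; the thresholds, metric names, and actions are invented here, not taken from the thesis.

```python
# Hypothetical monitor-analyze-act loop for adaptive network management.

def collect_metrics(link):
    # In practice: SNMP polls, flow records, alarms; here the link state
    # itself plays the role of the measurement.
    return link

def diagnose(metrics):
    # Map observed conditions to a corrective action (illustrative rules).
    if metrics["loss_pct"] > 20:
        return "reroute"      # severe loss: fail over to a backup path
    if metrics["utilization"] > 0.9:
        return "throttle"     # congestion: shape low-priority traffic
    return "none"

def respond(action, link):
    # Apply the chosen action without waiting on a human operator.
    link = dict(link)
    if action == "reroute":
        link["active_path"] = "backup"
    elif action == "throttle":
        link["utilization"] = 0.7
    return link

link = {"loss_pct": 35, "utilization": 0.5, "active_path": "primary"}
link = respond(diagnose(collect_metrics(link)), link)
print(link["active_path"])
```

    The loop would normally run continuously; the policy in `diagnose` is where the "characteristics of human intelligence" the thesis calls for would be encoded.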