    WHAT IS INFORMATION SUCH THAT THERE CAN BE INFORMATION SYSTEMS?

    Information systems, as a discipline, is concerned with the generation, storage, and transmission of information, generally by technological means. It would therefore seem fundamental that it have a clear and agreed conceptualization of its core subject matter – namely, “information”. Yet, we would claim, this is clearly not the case. As McKinney and Yoos point out in a recent survey of the term information within information systems: “This is the IS predicament – using information as a ubiquitous label whose meaning is almost never specified. Virtually all the extant IS literature fails to explicitly specify meaning for the very label that identifies it.” We live in an information age, and the vast majority of information (whatever it may be) is made available through a wide range of computer systems. One would therefore expect information systems to be one of the leading disciplines of the times, rather than one that appears to hide itself in the shadows. Governments nowadays routinely call on academic experts to advise them in a whole range of areas, but how many IS professors ever get asked? The primary purpose of this paper is thus to stimulate a debate within IS to discuss, and try to establish, a secure foundation for the discipline in terms of its fundamental concept – information. We first review the theories of information used (generally implicitly) within IS. We then widen the picture to consider the range of theories available within other disciplines, suggest a particular approach that we consider most fruitful, and discuss some of the major contentious issues. Throughout, we illustrate the theories with examples from IS.

    Fall Prediction and Prevention Systems: Recent Trends, Challenges, and Future Research Directions.

    Fall prediction is a multifaceted problem that involves complex interactions between physiological, behavioral, and environmental factors. Existing fall detection and prediction systems mainly focus on physiological factors such as gait, vision, and cognition, and do not address the multifactorial nature of falls. In addition, these systems lack efficient user interfaces and feedback for preventing future falls. Recent advances in Internet of Things (IoT) and mobile technologies offer ample opportunities for integrating contextual information about patient behavior and environment along with physiological health data for predicting falls. This article reviews the state of the art in fall detection and prediction systems and describes the challenges, limitations, and future directions in the design and implementation of effective fall prediction and prevention systems.
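    As a loose illustration of the kind of sensing such systems build on (our own sketch, not taken from the article), many wearable fall detectors use a simple baseline heuristic: flag a candidate fall when the triaxial acceleration magnitude spikes past a threshold. The `detect_fall` function and the `threshold_g` value here are hypothetical:

    ```python
    import math

    def detect_fall(samples, threshold_g=2.5):
        """Flag a candidate fall when the acceleration magnitude
        (in g) exceeds a threshold -- a common baseline heuristic,
        not the article's method."""
        for i, (ax, ay, az) in enumerate(samples):
            magnitude = math.sqrt(ax * ax + ay * ay + az * az)
            if magnitude > threshold_g:
                return i  # index of the first suspicious sample
        return None

    # Simulated readings: resting (~1 g) followed by an impact spike.
    readings = [(0.0, 0.0, 1.0), (0.1, 0.2, 1.1), (1.8, 2.0, 2.4)]
    print(detect_fall(readings))  # → 2
    ```

    Real systems layer posture analysis, behavioral context, and learned models on top of such thresholds precisely because a magnitude spike alone cannot separate falls from, say, sitting down hard.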

    A Study of Junior High Students' Perceptions of the Water Cycle

    Provides pedagogical insight concerning learners' pre-conceptions and misconceptions about the water cycle. The resource being annotated is: http://www.dlese.org/dds/catalog_DLESE-000-000-003-365.htm

    Quality vs Quantity: Advantages and Disadvantages of Image-Based Modeling

    In the last few years, survey has changed radically thanks to progress in the field of 3D massive data acquisition methods. The scientific debate focuses on control over data quality, comparing Structure from Motion acquisition methods with consolidated methods. Collecting and interpreting a large amount of information helps us deeply understand our cultural heritage. The system of knowledge that we create has to achieve a dual objective: to document heterogeneous data with guaranteed repeatability, and to ensure data quality during data capture and model processing. This information includes cultural resource data: dimensions, information on construction, material characteristics, color, etc. The case study, the Abbey of Santa Maria della Matina, focuses on the shift from quantitative data, acquired in a semi-automatic manner, to qualitative data, controlled under uncertainty. In this framework, all branches of the “Science of Representation” ensure metric, spatial, and formal control of the built models.

    Engineering simulations for cancer systems biology

    Computer simulation can be used to inform in vivo and in vitro experimentation, enabling rapid, low-cost hypothesis generation and directing experimental design in order to test those hypotheses. In this way, in silico models become a scientific instrument for investigation, and so should be developed to high standards, be carefully calibrated, and have their findings presented in such a way that they may be reproduced. Here, we outline a framework that supports developing simulations as scientific instruments, and we select cancer systems biology as an exemplar domain, with a particular focus on cellular signalling models. We consider the challenges of lack of data, incomplete knowledge and modelling in the context of a rapidly changing knowledge base. Our framework comprises a process to clearly separate scientific and engineering concerns in model and simulation development, and an argumentation approach to documenting models as a rigorous way of recording assumptions and knowledge gaps. We propose interactive, dynamic visualisation tools to enable the biological community to interact with cellular signalling models directly for experimental design. There is a mismatch in scale between these cellular models and the tissue structures that are affected by tumours, and bridging this gap requires substantial computational resources. We present concurrent programming as a technology to link scales without losing important details through model simplification. We discuss the value of combining this technology, interactive visualisation, argumentation and model separation to support development of multi-scale models that represent biologically plausible cells arranged in biologically plausible structures that model cell behaviour, interactions and response to therapeutic interventions.
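    The abstract presents concurrent programming as a way to link cellular and tissue scales without simplifying away detail. As a loose sketch of the general idea (our own illustration, not the authors' framework), cells can be modelled as concurrent workers that exchange signals over message queues, with each worker's internal processing here reduced to a trivial doubling step:

    ```python
    import threading
    import queue

    def cell(inbox, outbox):
        """A single 'cell' running concurrently: it waits for an incoming
        signal, applies its internal processing (here, simply doubling),
        and forwards the result to the next cell in the chain."""
        signal = inbox.get()
        outbox.put(signal * 2)

    # Wire three cells in a chain: each cell's output queue feeds the next.
    channels = [queue.Queue() for _ in range(4)]
    cells = [threading.Thread(target=cell, args=(channels[i], channels[i + 1]))
             for i in range(3)]
    for t in cells:
        t.start()

    channels[0].put(1)          # inject an initial signal
    for t in cells:
        t.join()

    result = channels[3].get()  # 1 doubled by each of three cells
    print(result)               # → 8
    ```

    The design point is that each cell's behaviour is expressed independently, so detail is preserved per cell while the queue topology, not the cell code, encodes the tissue-scale structure.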

    Data and Predictive Analytics Use for Logistics and Supply Chain Management

    Purpose – The purpose of this paper is to explore the social process of Big Data and predictive analytics (BDPA) use for logistics and supply chain management (LSCM), focusing on interactions among technology, human behavior and organizational context that occur at the technology’s post-adoption phases in retail supply chain (RSC) organizations.

    Design/methodology/approach – The authors follow a grounded theory approach for theory building based on interviews with senior managers of 15 organizations positioned across multiple echelons in the RSC.

    Findings – Findings reveal how user involvement shapes BDPA to fit organizational structures and how changes made to the technology retroactively affect its design and institutional properties. Findings also reveal previously unreported aspects of BDPA use for LSCM, including the presence of temporal and spatial discontinuities in technology use across RSC organizations.

    Practical implications – This study unveils that it is impossible to design a BDPA technology ready for immediate use. The emergent process framework shows that institutional and social factors require BDPA use specific to the organization, as the technology comes to reflect the properties of the organization and the wider social environment for which its designers originally intended it. BDPA is, thus, not easily transferrable among collaborating RSC organizations and requires managerial attention to the institutional context within which its usage takes place.

    Originality/value – The literature describes why organizations will use BDPA but fails to provide adequate insight into how BDPA use occurs. The authors address the “how” and bring a social perspective into a technology-centric area.