
    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station's technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in computing power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). When the problem is framed as how information systems can perform such a conversion of raw data, the concept of a Knowledge Base Management System emerges. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and the organizational impacts of information-intensive environments.
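    As a purely illustrative, editor-added aside (not from the abstract above), the deductive-reasoning capability such a Knowledge Base Management System might offer can be sketched as a naive forward-chaining loop over stored facts. All facts, predicate names, and the single containment rule below are hypothetical examples.

```python
# Minimal sketch of deductive reasoning over a toy knowledge base.
# Facts are (subject, predicate, object) triples; the rule derives
# "installed in the whole" from "installed in a part of the whole".
facts = {
    ("module", "is_part_of", "space_station"),
    ("sensor_A", "is_installed_in", "module"),
}

rules = [
    # x is_installed_in y, y is_part_of z  =>  x is_installed_in z
    lambda f: [(a, "is_installed_in", c)
               for (a, p1, b) in f if p1 == "is_installed_in"
               for (b2, p2, c) in f if p2 == "is_part_of" and b2 == b],
]

def forward_chain(facts, rules):
    """Apply every rule until no new facts are derived (naive fixpoint)."""
    derived = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= set(rule(derived)) - derived
        if not new:
            return derived
        derived |= new

print(forward_chain(facts, rules))
```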

    Data trust framework using blockchain and smart contracts

    Lack of trust is the main barrier preventing more widespread data sharing; the absence of a transparent and reliable infrastructure keeps many data owners from sharing their data. Data trust is a paradigm that facilitates data sharing by requiring data controllers to be transparent about how data are shared and reused. Blockchain technology has the potential to provide the essential properties for a practical and secure data trust framework by transforming current auditing practices and automatically enforcing smart contract logic without relying on intermediaries to establish trust. Blockchain holds enormous potential to remove the barriers of traditional centralized applications and to offer distributed, transparent administration by having the involved parties maintain consensus on the ledger. Furthermore, smart contracts are programmable components that give blockchains more flexible and powerful capabilities. Recent advances in blockchain platforms supporting smart contract development have shown that blockchain-based applications can be implemented in various domains, such as healthcare, supply chains, and digital identity. This dissertation investigates blockchain's potential to provide a framework for data trust. It starts with a comprehensive study of smart contracts as the main blockchain component for developing decentralized data trust. Three interrelated decentralized applications that address data sharing and access control problems in different fields (healthcare data sharing, business processes, and physical access control) have then been developed and examined. In addition, a general-purpose application based on an attribute-based access control model is proposed that can provide the trusted auditability required for data sharing and access control systems and, ultimately, a data trust framework. Besides auditing, the system offers a level of transparency that both access requesters (data users) and resource owners (data controllers) can benefit from. The proposed solutions have been validated through a use case of independent digital libraries, together with a detailed performance analysis of the system implementation. Performance results are compared across different consensus mechanisms and databases, indicating the system's high throughput and low latency. Finally, this dissertation presents an end-to-end data trust framework based on blockchain technology. The proposed framework promotes data trustworthiness by assessing input datasets, effectively managing access control, and presenting data provenance and activity monitoring. A trust assessment model that examines the trustworthiness of input datasets and calculates a trust value is presented, and the number of transaction validators is adapted according to that trust value. This research provides solutions for both data owners and data users by ensuring the trustworthiness and quality of the data at its origin and the transparent and secure usage of the data thereafter. A comprehensive experimental study indicates that the presented system handles a large number of transactions with low latency.
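    As a rough, editor-added illustration of the attribute-based access control and trusted-auditability ideas described above (not the dissertation's actual smart contracts), the sketch below checks a request against a hypothetical attribute policy and appends a hash-chained audit record to a toy in-memory list standing in for the blockchain ledger. All policy names, attributes, and subjects are invented.

```python
import hashlib
import json
import time

# Hypothetical ABAC policy: required attribute values per action.
POLICY = {
    "read_dataset": {"role": {"researcher", "auditor"}, "consent": {"granted"}},
}

ledger = []  # stands in for the append-only on-chain audit trail


def append_entry(entry):
    """Hash-chain each audit record to its predecessor (toy ledger)."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps(entry, sort_keys=True) + prev
    ledger.append({**entry, "prev": prev,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})


def request_access(subject, action):
    """Grant access only if every required attribute matches the policy."""
    required = POLICY.get(action, {})
    granted = all(subject.get(attr) in allowed
                  for attr, allowed in required.items())
    append_entry({"time": time.time(), "subject": subject["id"],
                  "action": action, "granted": granted})
    return granted


print(request_access({"id": "alice", "role": "researcher",
                      "consent": "granted"}, "read_dataset"))  # True
print(request_access({"id": "bob", "role": "guest"}, "read_dataset"))  # False
```

    Both the grant and the denial are recorded, which is the transparency property the abstract attributes to data trust: requesters and controllers can audit every decision.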

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, although their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable means of promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.
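    To make the semantics-driven discovery idea concrete, here is a small hypothetical sketch added by the editor (not from the paper): a process fragment matches a stored pattern when each activity label generalizes, through an assumed domain ontology, to the corresponding pattern step.

```python
# Maps concrete activity labels to generalized concepts (assumed ontology).
ONTOLOGY = {
    "verify invoice": "check document",
    "approve claim": "authorize request",
    "authorise payment": "authorize request",
}

# Library of reusable business process patterns (ordered activity concepts).
PATTERN_LIBRARY = {
    "review-and-approve": ["check document", "authorize request"],
}


def matches(fragment, pattern):
    """True if every step generalizes to the pattern's step, in order."""
    if len(fragment) != len(pattern):
        return False
    return all(ONTOLOGY.get(step, step) == concept
               for step, concept in zip(fragment, pattern))


def discover(fragment):
    """Return the names of library patterns the fragment instantiates."""
    return [name for name, pattern in PATTERN_LIBRARY.items()
            if matches(fragment, pattern)]


print(discover(["verify invoice", "approve claim"]))  # ['review-and-approve']
```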

    Adaptive learning-based resource management strategy in fog-to-cloud

    Technology in the twenty-first century is developing rapidly, driving us toward a new smart computing world and giving rise to many new computing architectures. Fog-to-Cloud (F2C) is one of them: it emerges to bring greater computing capability close to the edge of the network and to help large-scale computing systems become more intelligent. Because F2C is still in its infancy, one of the biggest challenges for this computing paradigm is to manage computing resources efficiently. To address this challenge, this work focuses on designing an initial architectural framework for a proper, adaptive, and efficient resource management mechanism in F2C. F2C has been proposed as a combined, coordinated, and hierarchical computing platform in which a vast number of heterogeneous computing devices participate, and their diversity creates a massive challenge for handling them effectively. In any large-scale smart computing system, different kinds of services are offered for different purposes, and every service corresponds to various tasks with different resource requirements. Knowing the characteristics of the participating devices and of the services offered by the system is therefore an advantage when building an effective resource management mechanism in an F2C-enabled system. Considering these facts, we first focus on identifying and defining a taxonomic model for all participating devices and for the services and tasks involved in the system. Any F2C-enabled system consists of a large number of small Internet-of-Things (IoT) devices that generate a continuous and colossal amount of sensing data by capturing various environmental events. This sensing data is one of the key ingredients for the various smart services offered by the F2C-enabled system. Besides that, resource statistics also play a crucial role in efficiently providing services to the system's consumers. Continuous monitoring of the participating devices generates a massive amount of resource statistical information; having this information makes it much easier to know a device's availability and suitability for executing the tasks behind a given service. Therefore, to ensure better service for latency-sensitive applications, it is essential to distribute the sensing data and resource statistics securely over the network. Considering these matters, we also propose and design a secure and distributed database framework for distributing the data over the network effectively and securely. Building an advanced and smarter system requires an effective mechanism for utilizing system resources. Typically, the utilization and resource-handling process depends mainly on the resource selection and allocation mechanism, and predicting resource usage (e.g., RAM, CPU, disk) and performance (i.e., task execution time) helps the selection and allocation process. Thus, adopting machine learning (ML) techniques is very useful for designing an advanced and sophisticated resource allocation mechanism in the F2C-enabled system. Adopting and applying ML techniques in an F2C-enabled system is, however, a challenging task. In particular, the overall diversity of the system, along with many other issues, poses a massive challenge for applying ML techniques successfully in any F2C-enabled system. We therefore propose and design two different possible architectural schemas for applying ML techniques in the F2C-enabled system in order to achieve an adaptive, advanced, and sophisticated resource management mechanism. Our proposals are the initial steps toward designing an overall architectural framework for resource management in F2C-enabled systems.
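    As an editor-added illustration of the prediction-driven selection idea (not the thesis's actual mechanism), the sketch below fits a simple regression model to hypothetical device statistics and picks the device with the lowest predicted execution time. All feature names, training values, and device identifiers are invented, and a linear model merely stands in for whatever ML technique the F2C schema would use.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per observation: [free RAM (GB), free CPU (%), task size (MB)].
X_train = np.array([[4, 80, 10], [1, 20, 10], [8, 90, 50], [2, 40, 50]])
y_train = np.array([1.2, 6.5, 2.0, 5.1])  # observed execution times (s)

# Train a predictor of task execution time from resource statistics.
model = LinearRegression().fit(X_train, y_train)


def select_device(devices, task_size_mb):
    """Return the device whose predicted execution time is lowest."""
    feats = np.array([[d["ram"], d["cpu"], task_size_mb] for d in devices])
    preds = model.predict(feats)
    best = int(np.argmin(preds))
    return devices[best]["id"], float(preds[best])


devices = [{"id": "fog-node-1", "ram": 2, "cpu": 30},
           {"id": "cloud-vm-1", "ram": 16, "cpu": 70}]
print(select_device(devices, task_size_mb=25))
```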

    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. An ERP implementation project is often the single largest IT project an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, an ERP implementation supporting business processes across many different departments is not a generic, rigid, and uniform undertaking; it depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been a major concern in industry. ERP implementation therefore receives attention from both practitioners and scholars, and both the business and the academic literature are abundant, though not always conclusive or coherent. However, research on ERP systems has so far focused mainly on diffusion, use, and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems; even though they are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. An annotated brief literature review is conducted to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, working toward a better taxonomy of ERP implementation methodologies. This paper is useful to researchers interested in ERP implementation methodologies and frameworks, and the results will serve as input for a classification of the existing ERP implementation methodologies and frameworks. The paper also addresses the professional ERP community involved in the ERP implementation process by promoting a better understanding of ERP implementation methodologies and frameworks, their variety, and their history.