
    Supply optimization model in the hierarchical geographically distributed organization

    The strategic importance of the procurement function in the management of large organizations requires that logistics managers use effective tools to justify decisions in the supply process. The architecture of hierarchical geographically distributed organizations permits a hybrid supply scheme that rationally combines the advantages of centralized and decentralized purchasing and supply management (PSM). This article proposes a supply optimization model for the hierarchical geographically distributed organization (HGDO) that reflects the features of a complex, multifactorial, multi-stage procurement process. The model finds optimal options for purchasing and supplying products under the criterion of minimizing the total logistics costs of the process over the entire HGDO logistics-support planning horizon, taking into account the values of the various parameters of the participants and logistics functions of the procurement process in each time period. The model is an effective tool for supporting and coordinating the decisions of logistics managers at different levels of HGDO management, based on numerous options for purchasing and supplying products and their budgeting, under diverse and dynamic internal and external influences.
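    A minimal sketch of this kind of cost-minimizing supply planning, assuming a brute-force search over per-period supply options; the supplier names, cost figures, and cost components are illustrative assumptions, not the paper's actual model:

        import itertools

        # Hypothetical supply options per planning period:
        # (supplier, purchase_cost, transport_cost, holding_cost).
        OPTIONS = {
            1: [("central", 100, 20, 5), ("regional", 110, 8, 5)],
            2: [("central", 100, 20, 7), ("regional", 115, 8, 7)],
            3: [("central", 105, 20, 4), ("regional", 112, 8, 4)],
        }

        def total_cost(plan):
            """Sum purchase, transport, and holding costs over all periods."""
            return sum(p + t + h for (_, p, t, h) in plan)

        # Enumerate one option per period and keep the cheapest plan; a real
        # model would use mathematical programming rather than brute force.
        best = min(itertools.product(*OPTIONS.values()), key=total_cost)
        print([supplier for (supplier, *_) in best], total_cost(best))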

    Improving lifecycle query in integrated toolchains using linked data and MQTT-based data warehousing

    The development of increasingly complex IoT systems requires large engineering environments. These environments generally consist of tools from different vendors and are not necessarily well integrated with each other. In order to automate various analyses, queries across resources from multiple tools have to be executed in parallel with the engineering activities. In this paper, we identify the necessary requirements on such a query capability and evaluate different architectures against these requirements. We propose an improved lifecycle query architecture, which builds upon the existing Tracked Resource Set (TRS) protocol and complements it with the MQTT messaging protocol so that the data in the warehouse can be kept updated in real time. As part of a case study focusing on the development of an IoT automated warehouse, this architecture was implemented for a toolchain integrated using RESTful microservices and linked data.
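    A minimal sketch of the MQTT-fed warehouse update path using the paho-mqtt client; the broker address, topic name, and JSON payload layout are illustrative assumptions, not the paper's actual interface:

        import json
        import paho.mqtt.client as mqtt

        warehouse = {}  # in-memory stand-in for the lifecycle-query warehouse

        def on_message(client, userdata, msg):
            # Each TRS-style change event names a resource and its new state.
            event = json.loads(msg.payload)
            warehouse[event["resource_uri"]] = event["state"]
            print("updated", event["resource_uri"])

        client = mqtt.Client()
        client.on_message = on_message
        client.connect("broker.example.org", 1883)      # hypothetical broker
        client.subscribe("toolchain/trs/change-events")  # hypothetical topic
        client.loop_forever()  # apply change events as they arrive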

    PhyNetLab: An IoT-Based Warehouse Testbed

    Future warehouses will be built from modular embedded entities with communication ability and energy-aware operation, attached to traditional materials-handling and warehousing objects. This advance mainly serves the flexibility and scalability needs of emerging warehouses. However, it adds a new layer of complexity to the development and evaluation of such systems, owing to the multidisciplinarity of logistics, embedded systems, and wireless communications. Although each discipline provides theoretical approaches and simulations for these tasks, many issues surface only in a real deployment of the full system. In this paper we introduce PhyNetLab, a real-scale warehouse testbed made of cyber-physical objects (PhyNodes) developed for this type of application. The platform makes it possible to check the industrial requirements of an IoT-based warehouse in addition to running typical wireless sensor network tests. We describe the hardware and software components of the nodes as well as the overall structure of the testbed. Finally, we demonstrate the advantages of the testbed by evaluating the performance of the ETSI-compliant radio channel access procedure for an IoT warehouse.
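    A toy illustration of the kind of listen-before-talk channel access that ETSI regulations mandate in shared radio bands; the timing constants, busy-probability stand-in, and backoff policy are illustrative assumptions, not the PhyNode firmware:

        import random

        SLOT_MS = 5            # assumed clear-channel assessment duration
        MAX_BACKOFF_SLOTS = 8

        def channel_busy():
            # Stand-in for a real RSSI-based clear-channel assessment.
            return random.random() < 0.3

        def send_with_lbt(frame, max_tries=10):
            """Listen before talk: transmit only after sensing an idle channel."""
            for _ in range(max_tries):
                if not channel_busy():
                    print("transmit", frame)
                    return True
                # Channel occupied: wait a random number of backoff slots.
                backoff = random.randint(1, MAX_BACKOFF_SLOTS) * SLOT_MS
                print(f"busy, backing off {backoff} ms")
            return False

        send_with_lbt("sensor-frame-01")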

    A unified view of data-intensive flows in business intelligence systems : a survey

    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that remain to be addressed and showing how current solutions can be applied to address them.
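    A minimal sketch of the combination the survey describes: one shared transformation serving both a batched ETL load and an operational, per-event flow into the same target; the record layout and cleaning logic are illustrative assumptions:

        def transform(record):
            # Shared conforming logic used by both flows.
            return {"customer": record["customer"].strip().lower(),
                    "amount_eur": round(record["amount"], 2)}

        def batch_etl(source_rows, warehouse):
            # Traditional nightly load: extract all rows, transform, bulk-load.
            warehouse.extend(transform(r) for r in source_rows)

        def streaming_flow(event, warehouse):
            # Operational flow: the same transform applied to each event.
            warehouse.append(transform(event))

        warehouse = []
        batch_etl([{"customer": " ACME ", "amount": 10.506}], warehouse)
        streaming_flow({"customer": "Globex", "amount": 7.0}, warehouse)
        print(warehouse)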

    A Framework for Developing Real-Time OLAP algorithm using Multi-core processing and GPU: Heterogeneous Computing

    The overwhelming growth of stored data has spurred researchers to seek methods that exploit it optimally, and most of these methods face a response-time problem caused by the sheer size of the data. Most existing approaches suggest materialization as the favoured solution; however, materialization alone cannot deliver real-time answers. In this paper we propose a framework that identifies the barriers to, and suggests solutions toward, achieving the real-time OLAP answers widely used in decision support systems and data warehouses.
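    A minimal sketch of the multi-core side of such a framework: an on-the-fly (non-materialized) roll-up computed by fanning partitions out across CPU cores and merging partial aggregates; the data and grouping key are illustrative assumptions:

        from collections import Counter
        from multiprocessing import Pool

        def partial_rollup(rows):
            # Aggregate one partition: sum of sales per region.
            acc = Counter()
            for region, sales in rows:
                acc[region] += sales
            return acc

        if __name__ == "__main__":
            rows = [("eu", 10), ("us", 5), ("eu", 7), ("asia", 3)] * 1000
            parts = [rows[i::4] for i in range(4)]   # 4 partitions
            with Pool(4) as pool:
                partials = pool.map(partial_rollup, parts)
            total = sum(partials, Counter())          # merge partial results
            print(total)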

    Heterogeneous biomedical database integration using a hybrid strategy: a p53 cancer research database.

    Complex problems in life science research give rise to multidisciplinary collaboration, and hence to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.)
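    A minimal sketch of the hybrid integration idea: a query answered partly from a locally warehoused copy and partly by mediation to a live source; the schema, mutant names, and live-source stub are illustrative assumptions, not the CRDB design:

        import sqlite3

        # Warehoused part: assay results materialized locally (toy schema).
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE mutants (name TEXT, assay_activity REAL)")
        db.execute("INSERT INTO mutants VALUES ('R175H', 0.12), ('Y220C', 0.35)")

        def live_docking_source(mutant):
            # Mediated part: stand-in for a query forwarded to a remote tool.
            return {"R175H": -7.4, "Y220C": -6.9}.get(mutant)

        def hybrid_query(mutant):
            row = db.execute("SELECT assay_activity FROM mutants WHERE name = ?",
                             (mutant,)).fetchone()
            return {"mutant": mutant,
                    "assay_activity": row[0] if row else None,  # warehouse
                    "docking_score": live_docking_source(mutant)}  # mediated

        print(hybrid_query("Y220C"))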