    Establishing banking data mart for evaluation of credit exposure

    The main objective of this thesis is to present the development of a data mart for credit exposure within a banking data warehouse. In the first part we focus on the basics of data warehousing, describe different approaches to building a data warehouse, and outline the differences between a data warehouse and an operational database. We define the architecture and highlight some trends in data warehousing. In the second part we focus on the specifics of a banking data warehouse and point out some disadvantages in its construction. We then describe and illustrate a case of implementing credit exposure in the data warehouse: a dimensional data model, the ETL processes that load the data, and the reports produced to support decision-making. We present IBM DataStage as the tool for implementing the ETL processes and SAP BusinessObjects BI as the reporting tool. We conclude with the effects and results of constructing the credit-exposure data mart in a banking data warehouse.
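
    To make the dimensional model and ETL flow concrete, a minimal sketch of a credit-exposure star schema follows, with a toy load and report query. All table and column names here are illustrative assumptions; the thesis builds the real pipeline with IBM DataStage and reports with SAP BusinessObjects BI.

        # Illustrative sketch, not the thesis's actual model: a star schema
        # with one fact table and two dimensions, loaded from a mocked source.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        cur = conn.cursor()

        # Dimension tables surround a single fact table in a star schema.
        cur.executescript("""
        CREATE TABLE dim_customer (
            customer_key INTEGER PRIMARY KEY,
            customer_id  TEXT,
            segment      TEXT
        );
        CREATE TABLE dim_date (
            date_key INTEGER PRIMARY KEY,   -- e.g. 20240131
            year     INTEGER,
            month    INTEGER
        );
        CREATE TABLE fact_credit_exposure (
            customer_key     INTEGER REFERENCES dim_customer(customer_key),
            date_key         INTEGER REFERENCES dim_date(date_key),
            exposure_amount  REAL,
            collateral_value REAL
        );
        """)

        # ETL: extract from a (mocked) operational source, transform, load.
        source_rows = [("C-001", "retail", 20240131, 125000.0, 90000.0)]
        for cust_id, segment, date_key, exposure, collateral in source_rows:
            cur.execute("INSERT INTO dim_customer (customer_id, segment) VALUES (?, ?)",
                        (cust_id, segment))
            customer_key = cur.lastrowid
            cur.execute("INSERT OR IGNORE INTO dim_date VALUES (?, ?, ?)",
                        (date_key, date_key // 10000, date_key // 100 % 100))
            cur.execute("INSERT INTO fact_credit_exposure VALUES (?, ?, ?, ?)",
                        (customer_key, date_key, exposure, collateral))

        # A typical decision-support query: total exposure per customer segment.
        for row in cur.execute("""
            SELECT c.segment, SUM(f.exposure_amount)
            FROM fact_credit_exposure f JOIN dim_customer c USING (customer_key)
            GROUP BY c.segment"""):
            print(row)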

    Development of Warehouse Management System to Manage Warehouse Operations

    The Warehouse Management System (WMS) is software designed to assist in managing and monitoring warehouse processes. The purpose of this research is to improve warehouse operational management by developing a WMS with the Extreme Programming method and implementing a monitoring system that displays real-time temperature and humidity measurements. The research addresses issues observed in the warehouse operations of PT. Shippindo Teknologi Logistik (Shipper), a warehouse rental services company. The problems identified in Shipper's warehouse operations largely stem from human error, so this research aims to reduce such errors by implementing various management processes, including inbound and outbound management and tracking, as well as visualizing product placement within the racks using rack maps. Additionally, an integrated temperature and humidity monitoring system helps track the warehouse's condition in real time. Black-box testing of the WMS was successful, demonstrating that the system can execute all functions and display temperature and humidity data according to the designed specifications (inbound, mapping, storage, temperature and humidity monitoring, outbound). The average measurement error is relatively low: 0.9% for temperature and 1.3% for humidity. However, further development is still required for better performance, including a more robust model for product-detection labeling on the storage page to improve label accuracy, and control systems for advanced temperature and humidity monitoring.
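
    A minimal sketch of the rack-map idea and the average-error calculation follows; the class and function names are hypothetical stand-ins, not the WMS's actual code.

        # Illustrative sketch only: a rack map with inbound/outbound tracking,
        # plus the mean relative error used to assess the sensor readings.
        from dataclasses import dataclass, field

        @dataclass
        class Rack:
            """Rack map: each (row, col) slot holds a product code or is free."""
            rows: int
            cols: int
            slots: dict = field(default_factory=dict)

            def inbound(self, product: str) -> tuple:
                # Assign the first free slot, so placement is recorded by the
                # system rather than remembered by staff (a human-error source).
                for r in range(self.rows):
                    for c in range(self.cols):
                        if (r, c) not in self.slots:
                            self.slots[(r, c)] = product
                            return (r, c)
                raise RuntimeError("rack is full")

            def outbound(self, product: str) -> tuple:
                # Look up and release the slot that holds the product.
                for pos, code in list(self.slots.items()):
                    if code == product:
                        del self.slots[pos]
                        return pos
                raise KeyError(f"{product} not stored in this rack")

        def avg_error_pct(measured, reference):
            """Mean absolute relative error, in percent, of sensor readings."""
            return 100 * sum(abs(m - r) / r for m, r in zip(measured, reference)) / len(measured)

        rack = Rack(rows=2, cols=3)
        print(rack.inbound("SKU-001"))                        # (0, 0)
        print(rack.outbound("SKU-001"))                       # (0, 0)
        print(round(avg_error_pct([25.2, 24.8], [25.0, 25.0]), 2))  # 0.8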

    Improving warehouse labour efficiency by intentional forecast bias

    Purpose – This paper shows that intentional demand forecast bias can improve warehouse capacity planning and labour efficiency. It presents an empirical methodology to detect and implement forecast bias. Design/methodology/approach – A forecast model integrates historical demand information and expert forecasts to support active bias management. A non-linear relationship between labour productivity and forecast bias is employed to optimise efficiency. The business analytic methods are illustrated by a case study in a consumer electronics warehouse, supplemented by a survey among thirty warehouses. Findings – Results indicate that warehouse management systematically over-forecasts order sizes. The case study shows that the optimal bias for picking and loading is 30-70 percent, with efficiency gains of 5-10 percent, whereas the labour-intensive packing stage does not benefit from bias. The survey results confirm the productivity effects of forecast bias. Research implications – Warehouse managers can apply the methodology in their own situation if they systematically register demand forecasts, actual order sizes and labour productivity per warehouse stage. Application is illustrated for a single warehouse; studies of alternative product categories and labour processes are of interest. Practical implications – Intentional forecast bias can lead to smoother workflows in warehouses and thus result in higher labour efficiency. Required data include historical demand forecasts, order sizes and labour productivity. Implementation depends on labour hiring strategies and cost structures. Originality/value – Operational data support evidence-based warehouse labour management. The case study validates earlier conceptual studies based on artificial data.
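
    As a toy illustration of the optimisation idea, assume a made-up bell-shaped productivity response to bias; the paper estimates the real non-linear relationship from warehouse data, so every curve and constant below is an assumption. The sketch searches for the bias that maximises net efficiency.

        # Toy illustration only: hypothetical productivity curve, standing in
        # for the empirically estimated relationship in the paper.
        import numpy as np

        def productivity(bias):
            # Assumed response: moderate over-forecasting smooths the workflow
            # (peak near +40% bias); extreme bias adds nothing further.
            return 1.0 + 0.10 * np.exp(-((bias - 0.40) ** 2) / (2 * 0.20 ** 2))

        def net_efficiency(bias):
            # Productivity benefit minus a linear cost for planning extra hours.
            return productivity(bias) - 0.05 * bias

        biases = np.linspace(0.0, 1.0, 101)
        best = biases[np.argmax(net_efficiency(biases))]
        print(f"optimal intentional bias ≈ {best:.0%}")  # lands in the 30-70% range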

    Designing the venue logistics management operations for a World Exposition

    World Expositions, due to their size and peculiar features, pose a number of logistics challenges. This paper aims to develop a design framework for the venue logistics management (VLM) operations that replenish food products to the event site, through a combination of qualitative and quantitative research approaches. First, an in-depth interview methodology, combined with the outcomes of a literature review, is adopted to define the key variables for the tactical and operational set-up of the VLM system. Second, a quantitative approach is developed to define the necessary logistics resources. The framework is then applied to the case of the Milan 2015 World Exposition. It is the first time that such a design framework for a World Exposition has been presented: the originality of this research lies in proposing a systematic approach that adds to the experiential practices constituting the current body of knowledge on event logistics.
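
    To give a feel for the quantitative side, here is a back-of-the-envelope sketch of the kind of resource sizing a VLM framework performs; every figure and formula below is an illustrative assumption, not the paper's actual model.

        # Illustrative sketch only: sizing trucks and dock doors for nightly
        # food replenishment of an event site. All numbers are assumptions.
        import math

        daily_demand_kg   = 180_000   # food delivered to the site per day
        truck_capacity_kg = 6_000
        trips_per_truck   = 3         # round trips a truck completes per day
        dock_minutes      = 30        # unloading time per delivery
        dock_window_min   = 6 * 60    # night delivery window

        deliveries = math.ceil(daily_demand_kg / truck_capacity_kg)
        trucks     = math.ceil(deliveries / trips_per_truck)
        docks      = math.ceil(deliveries * dock_minutes / dock_window_min)

        print(f"{deliveries} deliveries/day -> {trucks} trucks, {docks} dock doors")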

    Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

    A data warehouse is a single data store in which data from multiple sources is integrated for online analytical processing (OLAP) across an entire organisation. The rationale for a single, integrated store is to ensure a consistent view of organisational business performance, independent of the angle of the business perspective. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, business analytical tasks change over time, which changes the requirements on the OLAP systems. Data warehouse and OLAP systems are therefore dynamic, and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored, to overcome the complexity problem, deal with system dynamics, and improve both the quality of the system and the chance of success. Our method comprises three parts: the general ASM method with types, an application-tailored design framework for data warehouses and OLAP, and a schema integration method with a set of provably correct refinement rules. Using the ASM method, we model both data and operations in a uniform conceptual framework, which enables an integrated approach to data warehouse and OLAP design. The freedom the ASM method affords allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language lets us use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, are close enough to programming languages for designers to understand; moreover, these rules are rooted in mathematics, which helps improve the quality of the system design. By extending the ASMs with types, the modelling language is tailored to data warehousing, with terms well developed for data-intensive applications, making it easy to model schema evolution as refinements in dynamic data warehouse design. The application-tailored design framework breaks the design complexity down by business processes (also called subjects in data warehousing) and by design concerns. In designing the data warehouse by subjects, our method resembles Kimball's "bottom-up" approach; with the schema integration method, however, it resolves that approach's stovepipe issue. By building up a data warehouse iteratively in an integrated framework, our method not only yields an integrated data warehouse but also resolves the complexity and delayed ROI (return on investment) issues of Inmon's "top-down" approach. By handling user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture (the data sources, the data warehouse and the OLAP systems), our method supports dynamic design while preserving system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, which captures the notion of schema dominance in schema integration, we are able to build a set of refinement rules proven correct. This set of rules simplifies the designers' work in verifying the correctness of a design. Nevertheless, we do not aim for a complete set, because there are many different ways to integrate schemata, nor do we prescribe a single way of integration, so designers keep the freedom to favour their own designs. Furthermore, given its flexibility, the method can easily be extended to new design issues as they emerge.
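
    As a rough flavour of how ASM-style transition rules can express schema evolution, here is a toy sketch; the state encoding and the single rule are assumptions, far simpler than the thesis's typed ASM language.

        # Toy sketch: a guarded update in the spirit of an ASM transition rule,
        # applied to a pending schema-change request. Not the thesis's notation.
        state = {
            "schema": {"fact_sales": ["date_key", "store_key", "amount"]},
            "pending": ("add_attribute", "fact_sales", "discount"),
        }

        def add_attribute_rule(state):
            """Fire only when enabled: a pending add_attribute request names an
            attribute that the target table does not yet have."""
            if state["pending"] is None:
                return
            op, table, attr = state["pending"]
            if op == "add_attribute" and attr not in state["schema"][table]:
                state["schema"][table].append(attr)  # the update set
                state["pending"] = None              # request consumed

        add_attribute_rule(state)
        print(state["schema"])  # fact_sales now includes 'discount'

    Adding an attribute in this way leaves the old schema embeddable in the new one, which is the intuition behind using schema dominance as the correctness criterion for schema refinement.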