
    A case study on model driven data integration for data centric software development.

    Model Driven Data Integration (MDDI) is a data integration approach that proactively incorporates and utilizes metadata across the data integration process. By decoupling data and metadata, MDDI drastically reduces the complexity of data integration, while also providing an integrated, standard development method associated with Model Driven Architecture (MDA). This paper introduces a case study that adopts MDA technology as an MDDI framework for data-centric software development, including data merging and data customization for data mining. A data merging model is also proposed to define relationships between different models at a conceptual level, which is then transformed into a physical model. In this case study we collect and integrate historical data from various universities into a data warehouse in order to develop student intervention services through data mining.
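    The abstract gives no implementation details, but the conceptual-to-physical transformation it describes can be sketched. The following minimal Python sketch is illustrative only: the metamodel classes, the type map, and the student entity are assumptions, not the paper's actual models.

        # Hypothetical sketch of a conceptual-to-physical model transformation in
        # the spirit of MDDI/MDA; class and attribute names are illustrative,
        # not the paper's actual metamodel.
        from dataclasses import dataclass, field

        @dataclass
        class ConceptualEntity:
            name: str          # e.g. "student"
            attributes: dict   # attribute name -> abstract type
            merged_from: list = field(default_factory=list)  # source systems

        TYPE_MAP = {"id": "INTEGER PRIMARY KEY", "text": "VARCHAR(255)", "number": "NUMERIC"}

        def to_physical_ddl(entity: ConceptualEntity) -> str:
            """Transform one conceptual entity into a physical CREATE TABLE statement."""
            cols = [f"{a} {TYPE_MAP[t]}" for a, t in entity.attributes.items()]
            # Record provenance of merged sources as a lineage column, so the
            # warehouse keeps track of which source system each row came from.
            if entity.merged_from:
                cols.append("source_system VARCHAR(64)")
            return f"CREATE TABLE {entity.name} (\n  " + ",\n  ".join(cols) + "\n);"

        student = ConceptualEntity(
            name="student",
            attributes={"student_id": "id", "name": "text", "gpa": "number"},
            merged_from=["univ_a_sis", "univ_b_sis"],
        )
        print(to_physical_ddl(student))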

    Initial Analysis of Data-Driven Haptic Search for the Smart Suction Cup

    Suction cups offer a useful gripping solution, particularly in industrial robotics and warehouse applications. Vision-based grasp algorithms, like Dex-Net, show promise but struggle to accurately perceive dark or reflective objects, sub-resolution features, and occlusions, resulting in suction cup grip failures. In our prior work, we designed the Smart Suction Cup, which estimates the flow state within the cup and provides a mechanically resilient end-effector that can inform arm feedback control through a sense of touch. We then demonstrated how this cup's signals enable haptically-driven search behaviors for better grasping points on adversarial objects. That prior work uses a model-based approach to predict the desired motion direction, which opens up the question: does a data-driven approach perform better? This technical report provides an initial analysis harnessing the previously collected data. Specifically, we compare the model-based method with a preliminary data-driven approach for accurately estimating the lateral pose adjustment direction for improved grasp success.
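    The report compares the two approaches on previously collected data. As a hedged illustration of what a simple data-driven baseline for estimating the lateral adjustment direction could look like (the four-channel flow layout, the synthetic data, and the direction labels are assumptions, not the report's actual setup):

        # Illustrative sketch only: a simple data-driven baseline mapping suction
        # cup flow signals to a lateral adjustment direction. The four-channel
        # layout and labels are assumptions, not the report's setup.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic stand-in for logged grasp attempts: 4 flow-state features
        # per sample (e.g. one per cup chamber) and a discrete direction label.
        X = rng.normal(size=(200, 4))
        y = np.argmax(X, axis=1)  # toy rule: move toward the highest-flow chamber

        clf = LogisticRegression(max_iter=1000).fit(X, y)

        # Predict the lateral adjustment direction (0..3) for a new flow reading.
        reading = rng.normal(size=(1, 4))
        print("suggested direction:", clf.predict(reading)[0])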

    Radio frequency identification and time-driven activity based costing: RFID-TDABC application in warehousing

    Purpose: This paper extends the use of Radio Frequency Identification (RFID) data to the accounting of warehouse costs and services. The Time Driven Activity Based Costing (TDABC) methodology is enhanced with RFID data collected in real time about the duration of warehouse activities, allowing warehouse managers to obtain accurate and instant cost calculations. RFID-enhanced TDABC (RFID-TDABC) is proposed as a novel application of RFID technology. Research Approach: RFID-TDABC is implemented on the warehouse processes of a case study company, covering receiving, put-away, order picking, and despatching. Findings and Originality: RFID technology is commonly used for the identification and tracking of items. The use of RFID-generated information with TDABC can be successfully extended to the area of costing. The RFID-TDABC costing model will benefit warehouse managers with accurate and instant cost calculations. Research Impact: There are still unexplored benefits to RFID technology in its applications in warehousing and the wider supply chain. A multi-disciplinary research approach led to combining RFID technology with the TDABC accounting method in order to propose RFID-TDABC. Combining methods and theories from different fields with RFID may lead researchers to develop new techniques such as the RFID-TDABC presented in this paper. Practical Impact: The RFID-TDABC concept will be of value to practitioners by showing how warehouse costs can be accurately measured using this approach. A better understanding of incurred costs may result in further optimisation of warehousing operations, lowering the costs of activities, and thus providing competitive pricing to customers. RFID-TDABC can also be applied in the wider supply chain.
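    The core TDABC calculation prices each activity as a capacity cost rate multiplied by the time consumed, with RFID read events supplying the durations. A minimal sketch, assuming a hypothetical cost rate and event schema (neither is taken from the paper):

        # Minimal TDABC calculation driven by RFID timestamps. The cost rate and
        # event schema are hypothetical; TDABC itself prices each activity as
        # (capacity cost rate) x (observed duration).
        from datetime import datetime

        CAPACITY_COST_RATE = 0.80  # assumed cost per minute of warehouse capacity

        # RFID read events: (activity, tag enters zone, tag leaves zone)
        events = [
            ("receiving",     "2024-05-01 08:00", "2024-05-01 08:12"),
            ("put-away",      "2024-05-01 08:15", "2024-05-01 08:24"),
            ("order picking", "2024-05-01 10:02", "2024-05-01 10:17"),
            ("despatching",   "2024-05-01 11:40", "2024-05-01 11:49"),
        ]

        fmt = "%Y-%m-%d %H:%M"
        for activity, start, end in events:
            minutes = (datetime.strptime(end, fmt)
                       - datetime.strptime(start, fmt)).total_seconds() / 60
            cost = CAPACITY_COST_RATE * minutes  # core TDABC equation
            print(f"{activity:14s} {minutes:5.1f} min -> {cost:5.2f} per unit handled")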

    Using Ontologies for the Design of Data Warehouses

    Obtaining an implementation of a data warehouse is a complex task that forces designers to acquire wide knowledge of the domain, thus requiring a high level of expertise and making it a failure-prone task. Based on our experience, we have detected a set of situations we have faced in real-world projects in which we believe that the use of ontologies would improve several aspects of the design of data warehouses. The aim of this article is to describe several shortcomings of current data warehouse design approaches and to discuss the benefits of using ontologies to overcome them. This work is a starting point for discussing the convenience of using ontologies in data warehouse design.
    Comment: 15 pages, 2 figures

    High-Level Object Oriented Genetic Programming in Logistic Warehouse Optimization

    This dissertation focuses on work-flow optimization in logistic warehouses and distribution centers. The main aim is to optimize process planning, scheduling, and dispatching. The problem belongs to the NP-hard complexity class, so finding an optimal solution is computationally very demanding. The main motivation for this work is to fill the gap between the optimization methods developed by researchers in the academic world and the methods used in the business world. The core of the optimization algorithm is built on genetic programming driven by a context-free grammar. The main contributions of the thesis are a) to propose a new optimization algorithm which respects the makespan, the resource utilization, and the congestion of warehouse aisles which may occur during task processing, b) to analyze historical operational data from a warehouse and to develop a set of benchmarks which can serve as reference baseline results for further research, and c) to try to outperform the baseline results set by a skilled and trained operations manager of one of the biggest warehouses in Central Europe.
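    As a rough illustration of grammar-driven genetic programming in this setting (the grammar, task attributes, and fitness below are toy stand-ins, not the thesis's actual benchmark), candidate dispatching rules can be derived from a context-free grammar and scored against a schedule objective:

        # Hedged sketch of grammar-guided genetic programming: a context-free
        # grammar constrains candidate dispatching rules, and evolution searches
        # over derivations. Grammar and fitness are toy stand-ins.
        import random

        GRAMMAR = {
            "<rule>": [["<score>"], ["(", "<score>", "+", "<score>", ")"]],
            "<score>": [["due_date"], ["distance_to_aisle"], ["processing_time"],
                        ["(", "<score>", "*", "<score>", ")"]],
        }

        def derive(symbol="<rule>", depth=0, max_depth=4):
            """Randomly expand a nonterminal into a dispatching-rule expression."""
            if symbol not in GRAMMAR:
                return symbol  # terminal
            options = GRAMMAR[symbol]
            if depth >= max_depth:  # force terminals once the depth limit is hit
                options = [o for o in options
                           if all(s not in GRAMMAR for s in o)] or options[:1]
            return "".join(derive(s, depth + 1, max_depth)
                           for s in random.choice(options))

        def fitness(rule_expr, tasks):
            """Toy objective: weighted completion time when tasks follow the rule."""
            order = sorted(tasks, key=lambda t: eval(rule_expr, {}, t))
            return sum((i + 1) * t["processing_time"] for i, t in enumerate(order))

        tasks = [{"due_date": d, "distance_to_aisle": a, "processing_time": p}
                 for d, a, p in [(3, 5, 2), (1, 2, 4), (2, 8, 1)]]

        population = [derive() for _ in range(20)]
        best = min(population, key=lambda r: fitness(r, tasks))
        print("best dispatching rule:", best)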

    Value-driven Security Agreements in Extended Enterprises

    Today organizations are highly interconnected in business networks called extended enterprises. This is mostly facilitated by outsourcing and by new economic models based on pay-as-you-go billing, all supported by IT-as-a-service. Although outsourcing has been around for some time, what is new is that organizations are increasingly outsourcing critical business processes, engaging in complex service bundles, and moving infrastructure and its management into the custody of third parties. Although this gives a competitive advantage by reducing cost and increasing flexibility, it increases security risks by eroding the security perimeters that used to separate insiders with security privileges from outsiders without them. The classical security distinction between insiders and outsiders is supplemented with a third category of threat agents, namely external insiders, who are not subject to the internal control of an organization but nevertheless have some access privileges to its resources that normal outsiders do not have. Protection against external insiders requires security agreements between the organizations in an extended enterprise. Currently, there is no practical method that allows security officers to specify such requirements. In this paper we provide a method for modeling an extended enterprise architecture, identifying external insider roles, and specifying security requirements that mitigate the security threats posed by these roles. We illustrate our method with a realistic example.
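    The paper's modeling method is not reproduced here, but the underlying notion can be illustrated with a minimal sketch: an external insider is an actor holding access privileges to an organization's resources while not being subject to that organization's internal control. All names below are hypothetical.

        # Illustrative data model only (not the paper's method): flag actors who
        # hold access to an organization's resources but fall outside its
        # internal control, i.e. external insiders.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Actor:
            name: str
            employer: str  # organization whose internal control the actor is under

        access = {  # resource owner -> actors granted access (hypothetical)
            "AcmeRetail": {Actor("ops_admin", "AcmeRetail"),
                           Actor("dc_manager", "LogisticsCo"),   # outsourced warehouse
                           Actor("billing_bot", "CloudBillingInc")},
        }

        for org, actors in access.items():
            external_insiders = {a for a in actors if a.employer != org}
            # These roles call for security agreements between the organizations,
            # since they fall outside the resource owner's internal control.
            print(org, "external insiders:", sorted(a.name for a in external_insiders))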

    An automated ETL for online datasets

    While using online datasets for machine learning is commonplace today, the quality of these datasets impacts the performance of prediction algorithms. One method for improving the semantics of new data sources is to map these sources to a common data model or ontology. While semantic and structural heterogeneities must still be resolved, this provides a well-established approach to producing clean datasets suitable for machine learning and analysis. However, when online data must be used in close to real time, a method for the dynamic Extract-Transform-Load (ETL) of new source data must be developed. In this work, we present a framework for integrating online and enterprise data sources, in close to real time, to provide datasets for machine learning and predictive algorithms. An exhaustive evaluation compares a human-built data transformation process with our system's machine-generated ETL process, with very favourable results, illustrating the value and impact of an automated approach.
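    The authors' system is not described at code level in the abstract. As a hedged sketch of the general idea, with a hypothetical source, field names, and transform functions: a declarative mapping onto a common data model lets the transform step be generated per source rather than hand-coded.

        # Hedged sketch (not the authors' system): new online sources are mapped
        # onto a common data model via declarative field mappings, so the
        # Extract-Transform-Load step can be generated per source.
        COMMON_MODEL = ["timestamp", "station_id", "temperature_c"]

        # Per-source mapping: common field -> (source field, transform). These
        # mappings are what a human or an automated matcher supplies per source.
        MAPPINGS = {
            "weather_api_v1": {
                "timestamp": ("obs_time", str),
                "station_id": ("site", str),
                "temperature_c": ("temp_f", lambda f: round((float(f) - 32) * 5 / 9, 2)),
            },
        }

        def transform(source_name, record):
            """Generated transform step: project a source record onto the common model."""
            mapping = MAPPINGS[source_name]
            return {field: fn(record[src]) for field, (src, fn) in mapping.items()}

        raw = {"obs_time": "2024-05-01T08:00Z", "site": "ST-17", "temp_f": "68"}
        print(transform("weather_api_v1", raw))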