18 research outputs found

    A STRUCTURAL AND SEMANTIC APPROACH TO SIMILARITY MEASUREMENT OF LOGISTICS PROCESSES

    No full text
    The increased individuation and variety of logistics processes have spurred a strong demand for a new process customization strategy. To satisfy the increasingly specific requirements and demands of customers, organizations have been developing more competitive and flexible logistics processes. This trend has not only greatly increased the number of logistics processes in process repositories but has also resulted in a significant increase in the number of redundant processes registered in those repositories, making it hard to find adequate processes for business decision making. Organizations, therefore, have turned to process reusability as a solution. One such strategy employs similarity measurement as a precautionary measure limiting the occurrence of redundant processes. This paper proposes a structure- and semantics-based approach to similarity measurement of logistics processes. Semantic information and semantic similarity of logistics processes are defined based on a logistics ontology available in the supply chain operations reference (SCOR) model. By combining similarity measurements based on both the structural and the semantic information of logistics processes, we show that our approach improves on previous approaches in terms of accuracy and quality.
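The abstract does not reproduce the paper's similarity formulas, so the following is only an illustrative sketch: a Wu-Palmer-style semantic similarity computed over a toy SCOR-like activity hierarchy (the `PARENT` table and activity names are invented for the example), linearly combined with a structural similarity score supplied by the caller. The linear weighting is an assumption, not the paper's actual formula.

```python
# Toy stand-in for a SCOR-like activity ontology: child -> parent.
# All names below are hypothetical examples, not the paper's data.
PARENT = {
    "Deliver Stocked Product": "Deliver",
    "Deliver Make-to-Order Product": "Deliver",
    "Source Stocked Product": "Source",
    "Deliver": "SCOR",
    "Source": "SCOR",
}

def ancestors(node: str) -> list:
    """Chain from a node up to the ontology root, inclusive."""
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def depth(node: str) -> int:
    """Number of edges from the node to the root."""
    return len(ancestors(node)) - 1

def semantic_similarity(a: str, b: str) -> float:
    """Wu-Palmer-style score: 2*depth(LCA) / (depth(a) + depth(b))."""
    anc_a = set(ancestors(a))
    lca = next(n for n in ancestors(b) if n in anc_a)
    total = depth(a) + depth(b)
    return 2.0 * depth(lca) / total if total else 1.0

def combined_similarity(structural: float, a: str, b: str,
                        weight: float = 0.5) -> float:
    """Assumed form: weighted blend of structural and semantic similarity."""
    return weight * structural + (1.0 - weight) * semantic_similarity(a, b)
```

Under this toy hierarchy, two "Deliver" sub-activities share the parent "Deliver" and score 0.5 semantically, while activities from different SCOR branches score 0.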

    Automatic Control of Workflow Processes using ECA Rules

    No full text
    Changes in recent business environments have created the need for more efficient and effective business process management. A workflow management system is software that assists in defining business processes as well as automatically controlling their execution. This paper proposes a new approach to the automatic execution of business processes using Event-Condition-Action (ECA) rules that can be automatically triggered by an active database. First, we propose the concept of blocks, which classify process flows into several patterns. A block is a minimal unit that can specify the behaviors represented in a process model. An algorithm is developed to detect blocks in a process definition network and transform it into a hierarchical tree model. The behaviors of each block type are modeled using the ACTA formalism, which provides a theoretical basis from which the ECA rules are identified. The proposed ECA rule-based approach shows that it is possible to execute a workflow using the active capability of the database without user intervention. The operation of the proposed methods is illustrated through an example process. This research was partly supported by the National Research Laboratory program of the Korea Institute of Science and Technology Evaluation and Planning, and by grant no. R01-2002-000-00155-0 from the Basic Research Program of the Korea Science and Engineering Foundation.
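The paper delegates rule triggering to an active database; as a minimal sketch of the event-condition-action cycle itself, the Python below registers a rule for a sequence block and fires it on a task-completion event. All names (`WorkflowEngine`, the event strings, the "approve"/"ship" tasks) are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ECARule:
    """On `event`: if `condition(ctx)` holds, run `action(ctx)`."""
    event: str
    condition: Callable[[Dict], bool]
    action: Callable[[Dict], None]

@dataclass
class WorkflowEngine:
    rules: List[ECARule] = field(default_factory=list)
    started: List[str] = field(default_factory=list)  # tasks launched so far

    def register(self, rule: ECARule) -> None:
        self.rules.append(rule)

    def raise_event(self, event: str, ctx: Dict) -> None:
        # A plain loop stands in here for the active database's trigger
        # mechanism described in the paper.
        for rule in self.rules:
            if rule.event == event and rule.condition(ctx):
                rule.action(ctx)

engine = WorkflowEngine()
# Sequence block: when "approve" completes successfully, start "ship".
engine.register(ECARule(
    event="task_completed:approve",
    condition=lambda ctx: ctx.get("status") == "ok",
    action=lambda ctx: engine.started.append("ship"),
))
engine.raise_event("task_completed:approve", {"status": "ok"})
```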

    Optimization of Wavy-Channel Micromixer Geometry Using Taguchi Method

    No full text
    Micromixers are widely used in mixing processes in the chemical and pharmaceutical industries. We introduce an improved, easy-to-manufacture micromixer design utilizing a wavy-structure micro-channel T-junction that can be produced with a simple stamping method. Here, we aim to optimize the geometrical parameters, i.e., wavy frequency, wavy amplitude, and the width and height of the micro-channel, by applying the robust Taguchi statistical method with regard to mixing performance (mixing index), pumping power, and figure of merit (FoM). The interaction of each design parameter is evaluated. The results indicate that high mixing performance is not always associated with a high FoM, owing to the higher pumping power required. Higher wavy frequency and amplitude are required for good mixing performance; however, this is not the case for pumping power, due to an increase in Darcy friction loss. Finally, the advantages and limitations of the designs and objective functions are discussed in light of the present numerical results.
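In a Taguchi study, parameter levels are ranked by their signal-to-noise (S/N) ratio. The abstract does not give formulas, but the standard larger-the-better form (appropriate for the mixing index) and smaller-the-better form (appropriate for pumping power) can be sketched as:

```python
import math

def sn_larger_the_better(ys):
    """S/N = -10*log10(mean(1/y^2)); for responses to maximize,
    e.g. a mixing index."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """S/N = -10*log10(mean(y^2)); for responses to minimize,
    e.g. pumping power."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))
```

For each factor (wavy frequency, amplitude, channel width and height), the level with the highest mean S/N across the orthogonal-array runs is taken as optimal.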

    Combining Contrastive Learning with Auto-Encoder for Out-of-Distribution Detection

    No full text
    Reliability and robustness are fundamental requisites for the successful integration of deep-learning models into real-world applications. Deployed models must exhibit an awareness of their limitations, necessitating the critical ability to discern out-of-distribution (OOD) data and prompt human intervention. While several frameworks for OOD detection have been introduced and have achieved remarkable results, most state-of-the-art (SOTA) models rely on supervised learning with annotated data for their training. However, acquiring labeled data can be a demanding, time-consuming, or in some cases infeasible task. Consequently, unsupervised learning has gained substantial traction and made noteworthy advances, empowering models to train solely on unlabeled data while still achieving performance comparable or even superior to supervised alternatives. Among unsupervised methods, contrastive learning has proven effective for feature extraction across a variety of downstream tasks, while auto-encoders are extensively employed to learn representations that faithfully reconstruct the input data. In this study, we introduce a novel approach that combines contrastive learning with auto-encoders for OOD detection using unlabeled data. Contrastive learning tightens the grouping of in-distribution data while segregating OOD data, and the auto-encoder further refines the feature space. Within this framework, data are implicitly classified into in-distribution and OOD categories with a notable degree of precision. Our experimental findings show that this method surpasses most existing detectors that rely on unlabeled, or even labeled, data. By incorporating an auto-encoder into an unsupervised learning framework and training it on the CIFAR-100 dataset, our model improves the detection rate of unsupervised learning methods by an average of 5.8%. Moreover, it outperforms the supervised OOD detector by an average margin of 11%.
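The paper's exact objective is not given in the abstract. A common way to combine the two components is a weighted sum of an NT-Xent contrastive term and a reconstruction MSE term; the NumPy sketch below assumes that form (the weighting `lam` and pairing convention are illustrative assumptions, not the authors' loss).

```python
import numpy as np

def nt_xent(z: np.ndarray, temperature: float = 0.5) -> float:
    """NT-Xent contrastive loss. `z` has shape (2N, d); rows 2k and 2k+1
    are assumed to be the two augmented views of sample k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)        # never contrast a row with itself
    partner = np.arange(z.shape[0]) ^ 1   # index of each row's positive pair
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(z.shape[0]), partner].mean())

def combined_loss(z: np.ndarray, x: np.ndarray, x_hat: np.ndarray,
                  lam: float = 1.0) -> float:
    """Assumed objective: contrastive term + lam * reconstruction MSE,
    where x_hat is the auto-encoder's reconstruction of x."""
    return nt_xent(z) + lam * float(np.mean((x - x_hat) ** 2))
```

Well-aligned pairs (each view close to its partner and far from other samples) yield a lower contrastive term than mismatched pairs, which is the clustering pressure the abstract describes.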

    Workload Balancing on Agents for Business Process Efficiency Based on Stochastic Model

    No full text
    A BPMS (Business Process Management System) is an information system that supports systematically designing, administering, and improving business processes. A BPMS executes business processes by assigning tasks to human or computer agents according to predefined process definitions. In this paper, we model business processes and agents as a queueing network and propose a task assignment algorithm that maximizes overall process efficiency under the limits of agent capacity. We first transform the business processes into queueing network models in which the agents are treated as servers. The workloads of the agents are then calculated as server utilization, and the task assignment policy is determined by balancing these workloads. This minimizes the workloads of all agents and thus improves overall process efficiency. The results can also be applied to capacity planning of agents in advance and to business process optimization in a reengineering context. Simulation results and comparisons with other well-known dispatching policies show the effectiveness of our algorithm.
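The core idea, treating agents as servers with utilization rho = lambda/mu and balancing that utilization, can be sketched as a greedy least-utilization dispatcher. This is a deliberate simplification of the paper's queueing-network model; the `Agent` fields and the greedy rule are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    name: str
    service_rate: float        # mu: tasks the agent completes per hour
    arrival_rate: float = 0.0  # lambda: task stream currently routed to it

    @property
    def utilization(self) -> float:
        """Server utilization rho = lambda / mu; must stay below 1
        for the agent's queue to remain stable."""
        return self.arrival_rate / self.service_rate

def assign(agents: List[Agent], task_rate: float) -> Agent:
    """Greedy balancing: route the new task stream to the agent whose
    utilization would remain lowest after accepting it."""
    best = min(agents,
               key=lambda a: (a.arrival_rate + task_rate) / a.service_rate)
    best.arrival_rate += task_rate
    return best
```

For example, with agents of service rates 10 and 5 tasks/hour, a 2 tasks/hour stream goes to the faster agent (rho 0.2 versus 0.4); once the faster agent is loaded, subsequent streams spill over to the slower one.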