
    Fuzzy EOQ Model with Trapezoidal and Triangular Functions Using Partial Backorder

    The fuzzy EOQ model is an EOQ model that estimates costs from the available information. Trapezoidal fuzzy functions can be used to estimate the existing costs, and a trapezoidal membership function has several points with full membership value. The total cost obtained from the trapezoidal fuzzy model (the fuzzy TRC) will be higher than the usual TRC obtained from the crisp EOQ model. This paper aims to determine the optimal amount of inventory in the company, namely the optimal Q and the optimal V; using the partial backorder model, the optimal Q and V, i.e. the optimal number of units each time an order is placed, are determined. Inventory is modeled with fuzzy EOQ models using triangular and trapezoidal membership functions under partial backordering. The optimal Q and V values of the fuzzy models increase because the trapezoidal and triangular membership functions take different values depending on the requirements of each membership function. Therefore, a fuzzy model can solve the company's problem of estimating costs for the next period.
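
    As a concrete illustration of the kind of computation involved, the following is a minimal sketch, assuming the textbook EOQ formula Q* = sqrt(2DK/h), triangular fuzzy inputs, and centroid defuzzification; the numbers are hypothetical and the paper's partial-backorder extension is omitted for brevity.

```python
import math

def defuzzify_triangular(a, b, c):
    """Centroid of a triangular fuzzy number with support [a, c] and mode b."""
    return (a + b + c) / 3.0

def eoq(demand, order_cost, holding_cost):
    """Classical economic order quantity Q* = sqrt(2*D*K/h)."""
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

# Hypothetical fuzzy annual demand, ordering cost and holding cost,
# each given as (low, mode, high).
D = defuzzify_triangular(900, 1000, 1200)
K = defuzzify_triangular(45, 50, 60)
h = defuzzify_triangular(4, 5, 7)

print(f"defuzzified optimal order quantity Q* = {eoq(D, K, h):.1f} units")
```

    Because the centroid (a + b + c) / 3 of a right-skewed triangular number exceeds its mode, the defuzzified inputs, and hence the resulting costs, come out slightly higher than their crisp counterparts, mirroring the observation that the fuzzy TRC exceeds the crisp TRC.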

    Temporal Segmentation of Human Actions in Videos

    Understanding human actions in videos is of great interest in various scenarios, ranging from surveillance over quality control in production processes to content-based video search. Algorithms for automatic temporal action segmentation need to overcome severe difficulties in order to be reliable and provide sufficiently good quality. Not only can human actions occur in different scenes and surroundings, the definition of an action itself is also inherently fuzzy, leading to a significant amount of inter-class variation. Moreover, besides finding the correct action label for a pre-defined temporal segment in a video, localizing an action in the first place is anything but trivial. Different actions not only vary in their appearance and duration but can also have long-range temporal dependencies that span the complete video. Further, obtaining reliable annotations of large amounts of video data is time consuming and expensive.
    The goal of this thesis is to advance current approaches to temporal action segmentation. We therefore propose a generic framework that models the three components of the task explicitly: long-range temporal dependencies are handled by a context model, variations in segment durations are represented by a length model, and short-term appearance and motion of actions are addressed with a visual model. While the inspiration for the context model mainly comes from word sequence models in natural language processing, the visual model builds upon recent advances in the classification of pre-segmented action clips. Considering that long-range temporal context is crucial, we avoid local segmentation decisions and find the globally optimal temporal segmentation of a video under the explicit models.
    Throughout the thesis, we provide explicit formulations and training strategies for the proposed generic action segmentation framework under different supervision conditions. First, we address the task of fully supervised temporal action segmentation, where frame-level annotations are available during training. We show that our approach can outperform early sliding-window baselines and recent deep architectures, and that explicit length and context modeling leads to substantial improvements. Considering that full frame-level annotation is expensive to obtain, we then formulate a weakly supervised training algorithm that uses ordered sequences of the actions occurring in the video as the only supervision. While a first approach reduces the weakly supervised setup to a fully supervised setup by generating a pseudo ground-truth during training, we propose a second approach that avoids this intermediate step and allows us to directly optimize a loss based on the weak supervision. Closing the gap between the fully and the weakly supervised setup, we moreover evaluate semi-supervised learning, where video frames are sparsely annotated. Motivated by the observation that the vast amount of video data on the Internet only comes with meta-tags or content keywords that do not provide any temporal ordering information, we finally propose a method for action segmentation that learns from unordered sets of actions only.
    All approaches are evaluated on several commonly used benchmark datasets. With the proposed methods, we reach state-of-the-art performance for both fully and weakly supervised action segmentation.
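
    To make the decoding step concrete, here is a minimal sketch of segment-level Viterbi decoding that combines a visual model (frame-wise log-scores), a length model, and a bigram context model to find the jointly optimal segmentation; the function names, interfaces and the deliberately brute-force O(T^2 C^2) formulation are illustrative assumptions, not the thesis implementation.

```python
import math

def decode(frame_scores, length_logprob, context_logprob, classes, max_len):
    """frame_scores[t][c]: visual log-score for class c at frame t.
    Returns segments as (start, end, class) that jointly maximize the
    visual + length + context scores (brute force, for clarity)."""
    T = len(frame_scores)
    best, back = {}, {}
    for t in range(T):
        for c in classes:
            for length in range(1, min(max_len, t + 1) + 1):
                s = t - length + 1                 # segment covers frames s..t
                visual = sum(frame_scores[i][c] for i in range(s, t + 1))
                seg = visual + length_logprob(c, length)
                if s == 0:                         # first segment of the video
                    cands = [(seg + context_logprob(None, c), (s, None))]
                else:                              # transition from a previous class
                    cands = [(best[(s - 1, p)] + seg + context_logprob(p, c), (s, p))
                             for p in classes if p != c and (s - 1, p) in best]
                for score, pointer in cands:
                    if score > best.get((t, c), -math.inf):
                        best[(t, c)], back[(t, c)] = score, pointer
    # Trace back from the best final class to recover the segmentation.
    c = max(classes, key=lambda k: best.get((T - 1, k), -math.inf))
    t, segments = T - 1, []
    while True:
        s, p = back[(t, c)]
        segments.append((s, t, c))
        if p is None:
            return segments[::-1]
        t, c = s - 1, p
```

    In practice such global decoding is made tractable by pruning hypotheses and bounding segment lengths, which is what the max_len parameter hints at.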

    Machine Learning for Instance Segmentation

    Volumetric electron microscopy images can be used for connectomics, the study of brain connectivity at the cellular level. A prerequisite for this inquiry is the automatic identification of neural cells, which requires machine learning algorithms and in particular efficient image segmentation algorithms. In this thesis, we develop new algorithms for this task. In the first part we provide, for the first time in this field, a method for training a neural network to predict optimal input data for a watershed algorithm. We demonstrate its superior performance compared to other segmentation methods of its category. In the second part, we develop an efficient watershed-based algorithm for weighted graph partitioning, the Mutex Watershed, which uses negative edge weights for the first time. We show that it is intimately related to the multicut and achieves cutting-edge performance on a connectomics challenge. Our algorithm is currently used by the leaders of two connectomics challenges. Finally, motivated by inpainting neural networks, we create a method to learn the graph weights without any supervision.
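
    The core of the Mutex Watershed is a Kruskal-style pass over all edges, strongest first, in which repulsive edges install merge-forbidding constraints instead of being cut greedily. Below is a minimal sketch of that idea, assuming the graph arrives as (u, v, weight, is_repulsive) tuples with larger magnitudes meaning stronger evidence; it is an illustrative reimplementation, not the authors' code.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def mutex_watershed(n_nodes, edges):
    """edges: list of (u, v, weight, is_repulsive); higher |weight| wins."""
    uf = UnionFind(n_nodes)
    mutexes = {i: set() for i in range(n_nodes)}   # forbidden merges per root
    # Process attractive and repulsive edges jointly, strongest first.
    for u, v, w, repulsive in sorted(edges, key=lambda e: -abs(e[2])):
        ru, rv = uf.find(u), uf.find(v)
        if ru == rv:
            continue
        if repulsive:
            # Install a mutex constraint between the two clusters.
            mutexes[ru].add(rv)
            mutexes[rv].add(ru)
        elif rv not in mutexes[ru]:
            # Merge the clusters and let the new root inherit all constraints.
            merged = mutexes[ru] | mutexes[rv]
            uf.union(ru, rv)
            root = uf.find(ru)
            mutexes[root] = merged
            for m in merged:                        # repoint partner sets
                mutexes[m].discard(ru)
                mutexes[m].discard(rv)
                mutexes[m].add(root)
    return [uf.find(i) for i in range(n_nodes)]

# Two attractive edges would chain nodes 0-1-2, but a stronger repulsive
# edge 0-2 forbids the second merge; prints [1, 1, 2].
print(mutex_watershed(3, [(0, 1, 0.9, False), (1, 2, 0.5, False),
                          (0, 2, 0.7, True)]))
```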

    Learning-based Segmentation for Connectomics

    Recent advances in electron microscopy techniques make it possible to acquire high-resolution, isotropic volume images of neural circuitry. In connectomics, neuroscientists seek to obtain the circuit diagram involving all neurons and synapses in such a volume image. Mapping neuron connectivity requires tracing each and every neural process through terabytes of image data. Due to the size and complexity of these volume images, fully automated analysis methods are desperately needed. In this thesis, I consider automated, machine learning-based neurite segmentation approaches based on simultaneous merge decisions over adjacent supervoxels.
    - Given a learned likelihood of merging adjacent supervoxels, Chapter 4 adapts a probabilistic graphical model which ensures that merge decisions are consistent and the surfaces of final segments are closed. This model can be posed as a multicut optimization problem and is solved with the cutting-plane method. In order to scale to large datasets, a fast search for (and good choice of) violated cycle constraints is crucial (a sketch of such a search follows this abstract). Quantitative experiments show that the proposed closed-surface regularization significantly improves segmentation performance.
    - In Chapter 5, I investigate whether the edge weights of the previous model can be chosen to minimize the loss with respect to non-local segmentation quality measures (e.g. the Rand Index). Suitable weights w are obtained from a structured learning approach. In the Structured Support Vector Machine formulation, a novel fast enumeration scheme is used to find the most violated constraint. Quantitative experiments show that structured learning can improve upon unstructured methods. Furthermore, I introduce a new approximate, hierarchical and blockwise optimization approach for large-scale multicut segmentation. Using this method, high-quality approximate solutions for large problem instances are found quickly.
    - Chapter 6 introduces another novel approximate scheme for multicut segmentation, Cut, Glue & Cut, which is based on the move-making paradigm. First, the graph is recursively partitioned into small regions (cut phase). Then, for any two adjacent regions, alternative cuts of these two regions define possible moves (glue & cut phase). The proposed algorithm finds segmentations that are, as measured by a loss function, as close to the ground truth as the global optimum found by exact solvers, while being significantly faster than existing methods.
    - In order to jointly label the resulting segments as well as the boundaries between segments, Chapter 7 proposes the Asymmetric Multi-way Cut model, a variant of Multi-way Cut. In this new model, within-class cuts are allowed for some labels while being forbidden for other labels. Qualitative experiments show when such a formulation can be beneficial. In particular, an application to joint neurite and cell organelle labeling in EM volume images is discussed.
    - Custom software tools that can cope with the large data volumes common in the field of connectomics are a prerequisite for the implementation and evaluation of novel segmentation techniques. Chapter 3 presents version 1.0 of ilastik, a joint effort of multiple researchers. I have co-written its volume viewing component, volumina. ilastik provides an interactive pixel classification workflow on larger-than-RAM datasets as well as a semi-automated segmentation module useful for acquiring gold standard segmentations. Furthermore, I describe new software for dealing with hierarchies of cell complexes as well as for blockwise image processing operations on large datasets.
    The different segmentation methods presented in this thesis provide a promising direction towards reaching both the reliability and the data throughput necessary for connectomics applications.
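
    As a concrete illustration of the violated cycle constraints referenced in Chapter 4, the sketch below checks a tentative multicut edge labeling for consistency: a cut edge whose endpoints remain connected through merge edges, together with that connecting path, forms a violated cycle. The graph encoding and function interface are assumptions for illustration, not the thesis code.

```python
from collections import deque

def find_violated_cycle(n_nodes, edges, labels):
    """edges: list of (u, v); labels[i] = 1 if edge i is cut, else 0 (merge).
    Returns (cut_edge_index, merge_path) for one violated cycle, or None."""
    adj = {i: [] for i in range(n_nodes)}
    for i, (u, v) in enumerate(edges):
        if labels[i] == 0:                 # merge edges define connectivity
            adj[u].append(v)
            adj[v].append(u)
    for i, (u, v) in enumerate(edges):
        if labels[i] == 1:                 # check each cut edge
            # BFS through merge edges from u; a path to v closes a cycle
            # containing exactly one cut edge, violating consistency.
            parent = {u: None}
            queue = deque([u])
            while queue:
                x = queue.popleft()
                if x == v:
                    path = []
                    while x is not None:
                        path.append(x)
                        x = parent[x]
                    return i, path[::-1]   # cut edge + merge path = cycle
                for y in adj[x]:
                    if y not in parent:
                        parent[y] = x
                        queue.append(y)
    return None                            # labeling is consistent
```

    In a cutting-plane loop, constraints found this way are added to the relaxation and the problem is re-solved until no violated cycle remains.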

    Proceedings of the Workshop on Knowledge Representation and Configuration, WRKP'96


    Formalising Human Mental Workload as a Defeasible Computational Concept

    Human mental workload has gained importance, in the last few decades, as a fundamental design concept in human-computer interaction. It can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. For people interacting with interfaces, computers and technological devices in general, the construct plays an important role. At low levels, while processing information, people often feel annoyed and frustrated; at high levels, mental workload is critical and dangerous, as it leads to confusion, decreases the performance of information processing and increases the chances of errors and mistakes. It is extensively documented that either mental overload or underload negatively affects performance. Hence, designers and practitioners who are ultimately interested in system or human performance need answers about operator workload at all stages of system design and operation. At an early system design phase, designers require some explicit model to predict the mental workload imposed by their technologies on end-users so that alternative system designs can be evaluated. However, human mental workload is a multifaceted and complex construct, mainly applied in the cognitive sciences. A plethora of ad hoc definitions can be found in the literature. Generally, it is not an elementary property; rather, it emerges from the interaction between the requirements of a task, the circumstances under which it is performed and the skills, behaviours and perceptions of the operator. Although measuring mental workload has advantages in interaction and interface design, its formalisation as an operational and computational construct has not been sufficiently addressed. Many researchers agree that too many ad hoc models are present in the literature and that they are applied subjectively by mental workload designers, thereby limiting their application in different contexts and making comparisons across different models difficult.
    This thesis introduces a novel computational framework for representing and assessing human mental workload based on defeasible reasoning. The starting point is the investigation of the nature of human mental workload, which appears to be a defeasible phenomenon. A defeasible concept is a concept built upon a set of arguments that can be defeated by adding additional arguments. The word ‘defeasible’ is inherited from defeasible reasoning, a form of reasoning built upon reasons that can be defeated. It is also known as non-monotonic reasoning because of the technical property (non-monotonicity) of the logical formalisms that are aimed at modelling defeasible reasoning activity. Here, a conclusion or claim, derived from the application of previous knowledge, can be retracted in the light of new evidence. Formally, state-of-the-art defeasible reasoning models are implemented employing argumentation theory, a multi-disciplinary paradigm that incorporates elements of philosophy, psychology and sociology. It systematically studies how arguments can be built, sustained or discarded in a reasoning process, and it investigates the validity of their conclusions. Since mental workload can be seen as a defeasible phenomenon, formal defeasible argumentation theory may have a positive impact on its representation and assessment. Mental workload can then be captured, analysed and measured in ways that increase its understanding and allow its use in practical activities.
    The research question investigated here is whether defeasible argumentation theory can enhance the representation of the construct of mental workload and improve the quality of its assessment in the field of human-computer interaction. In order to answer this question, the recurrent knowledge and evidence employed in state-of-the-art mental workload measurement techniques were reviewed first, along with their defeasible and non-monotonic properties. Secondly, an investigation of the state-of-the-art computational techniques for implementing defeasible reasoning was carried out. This allowed the design of a modular framework for mental workload representation and assessment. The proposed solution has been evaluated by comparing the properties of sensitivity, diagnosticity and validity of the assessments produced by two instances of the framework against the ones produced by two well-known subjective mental workload assessment techniques (the NASA Task Load Index and the Workload Profile) in the context of human-web interaction. In detail, through an empirical user study, it was first demonstrated how these two state-of-the-art techniques can be translated into two particular instances of the framework while still maintaining the same validity. In other words, the indexes of mental workload inferred by the two original instruments and the ones generated by their corresponding translations (instances of the framework) showed a positive and nearly perfect statistical correlation. Additionally, a new defeasible instance built with the framework showed better sensitivity and a higher diagnosticity capacity than the two selected state-of-the-art techniques. The former showed a higher convergent validity with the latter techniques, but a better concurrent validity with performance measures: the new defeasible instance generated indexes of mental workload that correlated better with the objective time for task completion than those of the two selected instruments. These findings support the research question, thereby demonstrating how defeasible argumentation theory can be successfully adopted to support the representation of mental workload and to enhance the quality of its assessments.
    The main contribution of this thesis is the presentation of a methodology, developed as a formal modular framework, to represent mental workload as a defeasible computational concept and to assess it as a numerical, usable index. This research contributes to the body of knowledge by providing a modular framework, built upon defeasible reasoning and formalised through argumentation theory, in which workload can be optimally measured, analysed, explained and applied in different contexts.
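
    To give a flavour of the underlying machinery, below is a minimal sketch of defeasible acceptance in a Dung-style abstract argumentation framework, computing the grounded extension by repeatedly accepting arguments whose attackers are all defeated; the workload-themed argument names are hypothetical, and the framework developed in the thesis is considerably richer.

```python
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs between arguments."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:
                # Every attacker is defeated, so the argument stands.
                accepted.add(a)
                changed = True
            elif attackers & accepted:
                # An accepted argument attacks it, so it is retracted.
                defeated.add(a)
                changed = True
    return accepted

# Example: an initial high-workload claim is retracted when new,
# unchallenged evidence defeats it (hypothetical arguments).
args = {"high_workload", "task_was_trivial"}
atts = {("task_was_trivial", "high_workload")}
print(grounded_extension(args, atts))  # {'task_was_trivial'}
```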

    Data-driven shape analysis and processing

    Data-driven methods serve an increasingly important role in discovering geometric, structural, and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation from each other, data-driven methods aggregate information from 3D model collections to improve the analysis, modeling and editing of shapes. Through a review of the literature, we provide an overview of the main concepts and components of these methods and discuss their application to classification, segmentation, matching, reconstruction, modeling and exploration, as well as to scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

    Spatial Fuzzy clustering with simultaneous estimation of Markov random field parameters and class

    Final degree project carried out in collaboration with the Medical Imaging Research Center, Illinois Institute of Technology.