
    Writing Reusable Digital Geometry Algorithms in a Generic Image Processing Framework

    Digital geometry software should reflect the generality of the underlying mathematics: mapping the latter to the former requires genericity. By designing generic solutions, one can effectively reuse digital geometry data structures and algorithms. We propose an image processing framework focused on the Generic Programming paradigm, in which an algorithm on paper can be turned into a single piece of code, written once and usable with various input types. This approach enables users to design and implement new methods at a lower cost, to try cross-domain experiments, and to help generalize results.
    Comment: Workshop on Applications of Discrete Geometry and Mathematical Morphology, Istanbul (2010).
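    A minimal sketch of the generic-programming idea the abstract describes: the algorithm below is written once against a small image interface (all names here are hypothetical; the framework's actual API will differ) and runs unchanged on any type providing that interface.

```python
# Hypothetical sketch: one algorithm, many image types.
# Any "image" exposing domain() and __getitem__/__setitem__ works.

def threshold(image, make_output, level):
    """Binarize any image type that exposes a point domain and indexing."""
    out = make_output(image)
    for p in image.domain():
        out[p] = 1 if image[p] >= level else 0
    return out

class Image2D:
    """Dense 2-D image backed by a dict (stand-in for a real container)."""
    def __init__(self, w, h, data=None):
        self.w, self.h = w, h
        self.data = dict(data or {})
    def domain(self):
        return ((x, y) for y in range(self.h) for x in range(self.w))
    def __getitem__(self, p):
        return self.data.get(p, 0)
    def __setitem__(self, p, v):
        self.data[p] = v

img = Image2D(2, 2, {(0, 0): 5, (1, 1): 9})
binary = threshold(img, lambda im: Image2D(im.w, im.h), level=4)
```

    The same `threshold` function would also accept, say, a 3-D image or a graph-based image type, as long as it exposes the same interface — which is the "written once, usable with various input types" claim.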

    Is intra-household power more balanced in poor households? A parametric alternative

    This note complements the empirical part of Couprie, Peluso and Trannoy (2009). It provides a parametric estimate of the intra-household sharing rule using French clothes-consumption data.
    Keywords: clothes consumption, intra-household inequality

    Time allocation within the family: welfare implications of life in a couple

    This paper analyzes the household decision-making process leading to the allocation of time and consumption in the family. We estimate, on the British Household Panel Survey, a collective model of demand for leisure, generalized to the production of a household public good. For the first time in such a framework, the sharing rule conditional on public expenditures is identified through a woman's change of family status: from single to living in a couple, or from a couple back to single. Welfare implications are then derived. The woman's share of the household's private expenditures appears to be 45% on average.
    Keywords: collective model, public good, domestic production, sharing rule identification
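    A toy numeric illustration of a conditional sharing rule (this is not the paper's estimator; the functional form and numbers below are assumptions, with only the 45% average share taken from the abstract):

```python
# Toy illustration of a conditional sharing rule: the woman's share
# phi applies to private expenditure, conditional on public spending.

def private_expenditure(income, x_pub):
    # Whatever is not spent on the public good is private expenditure.
    return income - x_pub

def womans_share(x_priv, phi=0.45):
    # The abstract reports an average share of roughly 45%.
    return phi * x_priv

x_priv = private_expenditure(income=1000.0, x_pub=300.0)
share = womans_share(x_priv)  # 0.45 * 700.0 = 315.0
```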

    Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers

    Scene parsing, or semantic segmentation, consists in labeling each pixel of an image with the category of the object it belongs to. It is a challenging task that involves the simultaneous detection, segmentation and recognition of all the objects in the image. The scene parsing method proposed here starts by computing a tree of segments from a graph of pixel dissimilarities. Simultaneously, a set of dense feature vectors is computed, encoding regions of multiple sizes centered on each pixel. The feature extractor is a multiscale convolutional network trained from raw pixels. The feature vectors associated with the segments covered by each node in the tree are aggregated and fed to a classifier, which produces an estimate of the distribution of object categories contained in the segment. A subset of tree nodes that covers the image is then selected so as to maximize the average "purity" of the class distributions, hence maximizing the overall likelihood that each segment contains a single object. The convolutional feature extractor is trained end-to-end from raw pixels, alleviating the need for engineered features. After training, the system is parameter-free. The system yields record accuracies on the Stanford Background Dataset (8 classes), the SIFT Flow Dataset (33 classes) and the Barcelona Dataset (170 classes), while being an order of magnitude faster than competing approaches, producing a 320 × 240 image labeling in less than 1 second.
    Comment: 9 pages, 4 figures. Published in the 29th International Conference on Machine Learning (ICML 2012), June 2012, Edinburgh, United Kingdom.
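    The "optimal cover" step can be sketched as a recursion over the segment tree: at each node, keep either the node itself or the best covers of its children, whichever yields the higher size-weighted purity. This is a simplified stand-in; in the paper the class distributions come from the convolutional network, not hand-built counts.

```python
# Sketch of the optimal-cover selection over a segment tree.
# A node is (class_distribution, children); purity is the fraction
# of the segment's pixels belonging to its majority class.

def purity(dist):
    total = sum(dist.values())
    return max(dist.values()) / total if total else 0.0

def best_cover(node):
    """Return (size-weighted purity score, list of cover nodes)."""
    dist, children = node
    size = sum(dist.values())
    own = (purity(dist) * size, [node])
    if not children:
        return own
    child_score, child_cover = 0.0, []
    for c in children:
        s, cv = best_cover(c)
        child_score += s
        child_cover += cv
    return max(own, (child_score, child_cover), key=lambda t: t[0])

# A mixed root over two pure leaves: the pure leaves win the cover.
leaf_a = ({'A': 2}, [])
leaf_b = ({'B': 2}, [])
root = ({'A': 2, 'B': 2}, [leaf_a, leaf_b])
score, cover = best_cover(root)
```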

    Combinatorial Continuous Maximal Flows

    Maximum flow (and minimum cut) algorithms have had a strong impact on computer vision. In particular, graph-cuts algorithms provide a mechanism for the discrete optimization of an energy functional, which has been used in a variety of applications such as image segmentation, stereo, image stitching and texture synthesis. Algorithms based on the classical formulation of max-flow, defined on a graph, are known to exhibit metrication artefacts in the solution. Therefore, a recent trend has been to instead employ a spatially continuous maximum flow (or its dual, the min-cut problem) in these same applications to produce solutions with no metrication errors. However, known fast continuous max-flow algorithms have no stopping criteria or have not been proved to converge. In this work, we revisit the continuous max-flow problem and show that the analogous discrete formulation is different from the classical max-flow problem. We then apply an appropriate combinatorial optimization technique to this combinatorial continuous max-flow (CCMF) problem to find a null-divergence solution that exhibits no metrication artefacts and may be solved exactly by a fast, efficient algorithm with provable convergence. Finally, by exhibiting the dual of our CCMF formulation, we clarify the fact, already proved by Nozawa in the continuous setting, that the max-flow and total variation problems are not always equivalent.
    Comment: 26 pages.
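    For contrast with CCMF, here is the classical graph max-flow formulation the paper starts from, as a short Edmonds-Karp sketch (a standard textbook algorithm, not the paper's method):

```python
from collections import deque

# Classical discrete max-flow (Edmonds-Karp): repeatedly augment along
# shortest residual paths found by BFS until no path remains.

def max_flow(capacity, source, sink):
    """capacity: dict-of-dicts {u: {v: cap}}. Returns the max flow value."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse edges
    flow = 0
    while True:
        # BFS for a shortest augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Recover the path, find its bottleneck, and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

demo = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}
value = max_flow(demo, 's', 't')
```

    The paper's point is that discretizing the *continuous* max-flow problem does not reduce to this classical graph problem, which is why a different combinatorial technique is needed.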

    Do Spouses Cooperate? And If Not: Why?

    Models of household economics require an understanding of economic interactions in families. Social ties, repetition and reduced strategic uncertainty make social dilemmas in couples a very special case that needs to be studied empirically. In this paper we present results from a large economic experiment with 100 maritally living couples. Participants made decisions in a social dilemma with their partner and with a stranger. We predict behavior in this task using individual and couples' socio-demographic variables, efficiency preferences and couples' marital satisfaction. As opposed to models explaining behavior among strangers, the regressions on couples' decisions highlight clear patterns of cooperation behavior that could inspire future household decision-making models.
    Keywords: noncooperative games; laboratory; individual behavior; household production and intra-household allocation
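    The kind of social dilemma at stake can be illustrated with a textbook public-goods payoff (an assumption for illustration only; the experiment's actual game and parameters are not specified in the abstract):

```python
# Toy public-goods payoff: each player keeps what they do not
# contribute; the joint pot is multiplied and split equally.
# Because multiplier / 2 < 1, free riding is individually tempting
# even though mutual cooperation leaves both better off.

def payoffs(c1, c2, endowment=10.0, multiplier=1.5):
    pot = multiplier * (c1 + c2)
    return endowment - c1 + pot / 2, endowment - c2 + pot / 2

both_cooperate = payoffs(10, 10)  # better for both than (0, 0)
free_ride = payoffs(0, 10)        # the free rider gains even more
```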

    From household to individual’s welfare: does the Lorenz criteria still hold? Theory and Evidence from French Data

    Consider an income distribution among households of the same size, in which individuals, equally needy from the point of view of an ethical observer, are treated unfairly within the household. In the first part of the paper, we look for necessary and sufficient conditions under which the Generalized Lorenz test is preserved from the household to the individual level. We find that concavity of the expenditure devoted to public goods relative to household income is a necessary condition. This condition also becomes sufficient when combined with concavity of the expenditure devoted to the private goods of the dominated individual. The results are extended to heterogeneous populations, where more complex Lorenz comparisons are involved. In the second part of the paper, we propose a new method to identify the intra-family sharing rule. The double concavity condition is then tested non-parametrically on French households.
    Keywords: Lorenz comparisons, intra-household inequality, concavity
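    The Generalized Lorenz test discussed here has a direct computational form: distribution y dominates x when every ordinate of y's generalized Lorenz curve (the cumulative means of sorted incomes) is at least the corresponding ordinate of x. A minimal sketch for equal-sized populations:

```python
# Generalized Lorenz (GL) dominance check for two income vectors
# of the same length.

def generalized_lorenz(incomes):
    """GL ordinates: cumulative means of the sorted distribution."""
    xs = sorted(incomes)
    n = len(xs)
    cum, out = 0.0, []
    for x in xs:
        cum += x
        out.append(cum / n)
    return out

def gl_dominates(y, x):
    """True if y Generalized-Lorenz dominates x."""
    return all(a >= b for a, b in zip(generalized_lorenz(y),
                                      generalized_lorenz(x)))

# The equal split of a given total dominates any unequal split of it.
result = gl_dominates([2, 2, 2], [1, 2, 3])
```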

    Indoor Semantic Segmentation using depth information

    This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
    Comment: 8 pages, 3 figures.

    Predicting Deeper into the Future of Semantic Segmentation

    The ability to predict, and therefore to anticipate, the future is an important attribute of intelligence. It is also of utmost importance in real-time systems, e.g. in robotics or autonomous driving, which depend on visual scene understanding for decision making. While prediction of the raw RGB pixel values in future video frames has been studied in previous work, here we introduce the novel task of predicting semantic segmentations of future frames. Given a sequence of video frames, our goal is to predict segmentation maps of not-yet-observed video frames that lie up to a second or more in the future. We develop an autoregressive convolutional neural network that learns to iteratively generate multiple frames. Our results on the Cityscapes dataset show that directly predicting future segmentations is substantially better than predicting and then segmenting future RGB frames. Prediction results up to half a second into the future are visually convincing and are much more accurate than those of a baseline based on warping semantic segmentations using optical flow.
    Comment: Accepted to ICCV 2017. Supplementary material available on the authors' webpage.
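    The autoregressive idea can be sketched as a loop that feeds each predicted segmentation back as input for the next step. The model below is a toy stand-in (per-pixel linear extrapolation), not the paper's convolutional network:

```python
# Autoregressive rollout: the model sees the last k segmentation maps
# and its own outputs are appended to the history to predict further
# into the future.

def predict_future(seg_history, model, n_future, k=2):
    """Iteratively generate n_future segmentation maps."""
    history = list(seg_history)
    preds = []
    for _ in range(n_future):
        nxt = model(history[-k:])
        preds.append(nxt)
        history.append(nxt)  # feed the prediction back (autoregression)
    return preds

def toy_model(last_two):
    # Stand-in model: extrapolate each pixel value linearly.
    a, b = last_two
    return [2 * y - x for x, y in zip(a, b)]

preds = predict_future([[0, 0], [1, 1]], toy_model, n_future=3)
```

    With a learned model in place of `toy_model`, errors compound over the rollout, which is why the paper evaluates how far (up to about half a second) the predictions stay accurate.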