
    Efficient dense blur map estimation for automatic 2D-to-3D conversion

    Focus is an important depth cue for 2D-to-3D conversion of low depth-of-field images and video. However, focus can only be reliably estimated at edges. Therefore, Bae et al. [1] first proposed an optimization-based approach that propagates focus to non-edge image portions, for single-image focus editing. While their approach produces accurate dense blur maps, the computational complexity and memory requirements of solving the resulting sparse linear system with standard multigrid or (multilevel) preconditioning techniques are infeasible within the stringent requirements of the consumer electronics and broadcast industry. In this paper we propose a fast, efficient, low-latency, line-scanning-based focus propagation, which avoids the need for complex multigrid or (multilevel) preconditioning techniques. In addition, we propose facial blur compensation to correct for false shading edges that cause incorrect blur estimates in people's faces. In general, shading leads to incorrect focus estimates, which may produce unnatural 3D and visual discomfort. Since visual attention mostly tends toward faces, our solution removes the most distracting errors. A subjective assessment by paired comparison on a set of challenging low depth-of-field images shows that the proposed approach achieves 3D image quality equal to that of optimization-based approaches, and that facial blur compensation yields a significant improvement.
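The line-scanning idea can be sketched in one dimension: a causal and an anti-causal recursive pass carry sparse edge blur estimates into non-edge pixels, so no global sparse system needs to be solved. This is a minimal illustrative sketch, not the paper's exact filter; the function name, the exponential weighting, and the parameter `alpha` are assumptions.

```python
import numpy as np

def propagate_blur_scanline(blur, edge_mask, alpha=0.8):
    """Propagate sparse edge blur estimates along one scanline.

    blur      -- per-pixel blur estimates (only valid where edge_mask is True)
    edge_mask -- True at pixels with a reliable edge-based estimate
    alpha     -- decay per pixel; closer to 1 propagates farther
    """
    n = len(blur)
    out = np.zeros(n)
    weight = np.zeros(n)
    # forward (causal) pass: decay the accumulator, then inject edge samples
    acc_v, acc_w = 0.0, 0.0
    for i in range(n):
        acc_v *= alpha
        acc_w *= alpha
        if edge_mask[i]:
            acc_v += blur[i]
            acc_w += 1.0
        out[i] += acc_v
        weight[i] += acc_w
    # backward (anti-causal) pass: inject, record, then decay
    acc_v, acc_w = 0.0, 0.0
    for i in range(n - 1, -1, -1):
        if edge_mask[i]:
            acc_v += blur[i]
            acc_w += 1.0
        out[i] += acc_v
        weight[i] += acc_w
        acc_v *= alpha
        acc_w *= alpha
    # normalized blend of both passes; edge pixels keep their own value
    return out / np.maximum(weight, 1e-12)
```

Non-edge pixels receive a distance-weighted blend of the nearest reliable estimates on either side, which is the low-latency substitute for solving the global propagation system.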

    Fusion of partial orderings for decision problems in Quality Management

    Purpose – In a rather common problem in the Quality Management field, (i) a set of judges express their individual (subjective) judgments about a specific attribute of some objects of interest, and (ii) these judgments have to be fused into a collective one. This paper develops a new technique in which individual judgments – expressed in the form of partial preference orderings that include only the more/less preferred objects – are fused into a collective judgment, expressed in the form of a ratio scaling of the objects of interest. An application example concerning the design of a civilian aircraft seat is presented. Design/methodology/approach – The proposed technique borrows its architecture and underlying postulates from Thurstone’s Law of Comparative Judgment (LCJ), adopting a more user-friendly response mode based on (partial) preference orderings instead of paired-comparison relationships. By aggregating and processing these orderings, an overdefined system of equations can be constructed and solved through the Generalized Least Squares method. Apart from a ratio scaling of the objects of interest, this approach makes it possible to estimate the relevant uncertainty by propagating the uncertainty of the input data. Findings – Preliminary results show the effectiveness of the proposed technique even when preference orderings are rather “incomplete”, i.e., they include a relatively limited number of objects relative to those available. Research limitations/implications – Thanks to its relatively simple and practical response mode, the proposed technique is applicable to a variety of practical contexts, such as telephone and street interviews. Although preliminary results are promising, the technique will be tested more systematically, considering several factors (e.g., number of judges, number of objects, degree of completeness of preference orderings, degree of agreement among judges, etc.).
Originality/value – Although the scientific literature includes many techniques inspired by the LCJ, the proposed one is characterized by two important novelties: (i) it is based on a more user-friendly response mode and (ii) it yields a ratio scaling of the objects together with an estimation of the relevant uncertainty.
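The Thurstonian core of such a technique can be sketched as follows. This is a generic least-squares scaling from a paired-comparison proportion matrix, using ordinary rather than Generalized Least Squares, so it omits the paper's uncertainty propagation; the function name and the sum-to-zero anchor are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

def thurstone_scale(P):
    """Least-squares Thurstone-style scaling.

    P[i, j] is the observed proportion of judges preferring object i
    over object j. Each off-diagonal entry yields one equation
        s_i - s_j = Phi^{-1}(P[i, j]),
    and the overdetermined system is solved by least squares with a
    sum-to-zero row for identifiability.
    """
    n = P.shape[0]
    inv = NormalDist().inv_cdf
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.zeros(n)
                r[i], r[j] = 1.0, -1.0
                rows.append(r)
                # clip proportions away from 0/1 so the inverse CDF is finite
                rhs.append(inv(min(max(P[i, j], 1e-3), 1 - 1e-3)))
    rows.append(np.ones(n))  # anchor: scale values sum to zero
    rhs.append(0.0)
    s, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return s
```

Because the difference equations are invariant to a constant shift, the single sum-to-zero row pins the scale's origin without distorting the fitted differences.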

    The psychophysics of comic: effects of incongruity in causality and animacy

    According to several theories of humour (see Berger, 2012; Martin, 2007), incongruity - i.e., the presence of two incompatible meanings in the same situation - is a crucial condition for an event to be evaluated as comical. The aim of this research was to test, with psychophysical methods, the role of incongruity in visual perception by manipulating the causal paradigm (Michotte, 1946/1963) to obtain a comic effect. We ran three experiments. In Experiment 1, we tested the role of the speed ratio between the first and the second movement, and the effect of animacy cues (i.e., frog-like and jumping-like trajectories) in the second movement; in Experiment 2, we manipulated the temporal delay between the movements to explore the relationship between perceptual causal contingencies and comic impressions; in Experiment 3, we compared the strength of the comic impressions arising from incongruent trajectories based on animacy cues with those arising from incongruent trajectories not based on animacy cues (bouncing and rotating) in the second part of the causal event. The general findings showed that the paradoxical juxtaposition of living behaviour within the perceptual causal paradigm is a powerful factor in eliciting comic appreciation, consistent with the Bergsonian perspective in particular (Bergson, 2003) and with incongruity theories in general.

    Fusing incomplete preference rankings in design for manufacturing applications through the ZM II-technique

    The authors recently presented a technique (denominated “ZM”) to fuse multiple (subjective) preference rankings of some objects of interest - in manufacturing applications - into a common unidimensional ratio scale (Franceschini, Maisano 2019). Although this technique can be applied to a variety of decision-making problems in the manufacturing field, it is limited by a response mode requiring the formulation of complete preference rankings, i.e. rankings that include all objects. Unfortunately, this response mode is unsuitable for some practical contexts – such as decision-making problems characterized by a relatively large number of objects, field surveys, etc. – where respondents can barely identify the more/less preferred objects, without realistically being able to construct complete preference rankings. The purpose of this paper is to develop a new technique (denominated “ZMII”) which also “tolerates” incomplete preference rankings, e.g. rankings with the more/less preferred objects only. This technique borrows its underlying postulates from Thurstone’s Law of Comparative Judgment and uses the Generalized Least Squares method to obtain a ratio scaling of the objects of interest, together with an estimation of the relevant uncertainty. Preliminary results show the effectiveness of the new technique even for relatively incomplete preference rankings. The description is supported by an application example concerning the design of a coach-bus seat.
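How an incomplete ranking can still feed a Thurstonian model is easy to illustrate: every pair of objects a judge did rank implies one paired comparison, while unranked objects simply contribute nothing. The function below is an illustrative sketch of that extraction step only, with assumed names; it is not the ZMII technique itself.

```python
def pairwise_from_partial(rankings, objects):
    """Turn incomplete preference rankings into paired-comparison counts.

    rankings -- one list per judge, best object first, possibly listing
                only a few of the objects (an 'incomplete' ranking)
    objects  -- the full set of objects of interest
    Returns wins[(a, b)] = number of judges who ranked a above b.
    """
    wins = {(a, b): 0 for a in objects for b in objects if a != b}
    for ranking in rankings:
        for idx, a in enumerate(ranking):
            for b in ranking[idx + 1:]:
                wins[(a, b)] += 1  # a appears before b, so a is preferred
    return wins
```

The resulting counts can then be converted to proportions and scaled; pairs never co-ranked by any judge stay at zero counts, which is exactly the sparsity an incomplete-ranking method must tolerate.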

    Quantitative analysis of infrared contrast enhancement algorithms

    This thesis presents a quantitative analysis of infrared contrast enhancement algorithms. Four algorithms were studied, three drawn from the literature and one developed by the author: tail-less plateau equalization (TPE), adaptive plateau equalization (APE), the method according to Aare Mallo (MEAM), and infrared multi-scale retinex (IMSR). Engineering code was developed for each algorithm. From this code, a rate-of-growth analysis was conducted to determine each algorithm’s computational load; it found that all algorithms, with the exception of IMSR, have a desirable linear nature. Once the rate-of-growth analysis was complete, sample infrared imagery was collected. Three scenes were captured for experimentation: a low-to-high thermal variation scene, a low-to-mid thermal variation scene, and a natural scene. After collecting the sample imagery and processing it with the engineering code, a paired-comparison psychophysical trial was conducted with local firefighters, who are common users of the infrared imaging system. From this trial, two metrics were formed: an average rank and an interval scale. Based on both metrics, together with the rate-of-growth analysis, MEAM was judged the best algorithm overall.
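The plateau-equalization family named above shares one core step, which can be sketched generically: the histogram of a high-dynamic-range infrared frame is clipped at a plateau value before equalization, so large uniform backgrounds cannot consume the whole output range. This is a generic sketch of the family, not the thesis's exact TPE or APE variants, which differ in how the plateau is chosen; the function name is an assumption.

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Plateau histogram equalization for integer-valued IR imagery.

    img     -- 2D array of non-negative integer intensities
    plateau -- cap on any single histogram bin's count
    Returns an 8-bit contrast-enhanced image.
    """
    hist = np.bincount(img.ravel(), minlength=img.max() + 1)
    hist = np.minimum(hist, plateau)          # clip dominant (background) bins
    cdf = np.cumsum(hist).astype(np.float64)
    # normalize the clipped CDF to [0, 1], guarding against a flat CDF
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)
    return (cdf[img] * 255).astype(np.uint8)  # remap intensities to 8 bits
```

With `plateau` set very high this reduces to ordinary histogram equalization; lowering it trades global contrast for protection of small, hot targets.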

    Brilliance, contrast, colorfulness, and the perceived volume of device color gamut

    With the advent of digital video and cinema media technologies, much more is possible in achieving brighter and more vibrant colors, colors that transcend our experience. The challenge is in realizing these possibilities in an industry rooted in 1950s technology, where color gamut is represented with little or no insight into the way an observer perceives color as a complex mixture of the observer’s intentions, desires, and interests. By today’s standards, five perceptual attributes – brightness, lightness, colorfulness, chroma, and hue – are believed to be required for a complete specification. As a compelling case for such a representation, a display system is demonstrated that is capable of displaying color beyond the realm of object color, perceptually even beyond the spectrum locus of pure color. All this raises the question: just what is meant by perceptual gamut? To this end, the attributes of perceptual gamut are identified through psychometric testing and the color appearance models CIELAB and CIECAM02. Then, by way of demonstration, these attributes were manipulated to test their application in wide-gamut displays. Drawing on these perceptual attributes and their manipulation, on Ralph M. Evans’ concept of brilliance as an attribute of perception that extends beyond the realm of everyday experience, and on the theoretical studies of brilliance by Y. Nayatani, a method was developed for producing brighter, more colorful colors and deeper, darker colors while aiming to preserve object color perception – flesh tones in particular. The method was successfully demonstrated and tested on real images using psychophysical methods in the very real, practical application of expanding the gamut of sRGB into an emulation of the wide-gamut xvYCC encoding.
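Three of the five attributes named above fall directly out of CIELAB: lightness is L*, chroma is sqrt(a*² + b*²), and hue is atan2(b*, a*). The conversion below is a sketch using the standard sRGB and CIELAB formulae (D65 white point); it is included only to make the attributes concrete, not as the thesis's gamut-expansion method.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert one sRGB triple in [0, 1] to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # inverse sRGB companding (gamma removal)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    # linear sRGB -> CIE XYZ (D65)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin
    # normalize by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # CIELAB nonlinearity with the linear segment near black
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

From these, chroma is `np.hypot(a, b)` and hue angle is `np.degrees(np.arctan2(b, a))`; boosting chroma at constant L* and hue is the simplest CIELAB-space analogue of the "more colorful, same object color" manipulation discussed in the abstract.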

    CCAD: A Basic Sample Database for Modeling Common Color Appearance

    Assessing the consistency of a set of cross-media color reproductions is a pressing research topic in the color application field. The problem is aggravated by devices with widely differing gamuts, which make exact matches impossible with conventional colorimetric metrics; existing consistency metrics were developed for small gamut differences. When a set of color reproductions, produced on media whose gamuts overlap only weakly with that of the reference, is judged to show a high degree of similarity, the reproductions are usually regarded as sharing a ‘Common Color Appearance’. At present, this degree of similarity can be scaled efficiently only by subjective assessment. Metrics proposed to date for achieving and measuring common color appearance have been based on small, private, special-purpose color sample sets, which has restricted their applicability and evaluation. Building on an adjustment-and-feedback framework, the proposed Common Color Appearance Database (CCAD), comprising a single-patch mode and an image-patch mode, provides a new solution to this issue. In this project, CRPC data from the ISO/PAS 15339 standard were selected as the standard data source. First, ten specific color centers were selected from CRPC4 as primary references, and the corresponding color centers with the same CMYK values in the remaining CRPC sets (1, 2, 3, 5, 6, and 7) were selected as secondary references. Second, within each CRPC gamut, twenty samples were generated by small adjustments of different combinations of lightness, colorfulness, and hue angle. Third, similarity scaling values were obtained with the category judgment method under the standard viewing condition, and the color patch samples with the highest degree of similarity were identified by combining Mean Opinion Scores and z-scores. According to the evaluation results, similarity degrees for the color patch sets and the common color appearance set were both obtained with 95% confidence intervals. Finally, using a closeness trend line method, the adaptability and scalability of the proposed CCAD were verified, providing a basic data reference for common color appearance metrics.
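The two summary statistics named for the category-judgment data can be sketched generically: the Mean Opinion Score is the per-sample mean of raw category ratings, while z-scores standardize each observer's ratings first, removing individual differences in how observers use the category scale. This is an illustrative sketch with assumed names, not the project's exact analysis.

```python
import numpy as np

def mos_and_zscores(ratings):
    """Summarize category-judgment ratings per sample.

    ratings -- array of shape (observers, samples) of category ratings
    Returns (mos, mean_z): per-sample Mean Opinion Scores and the
    per-sample mean of each observer's standardized ratings.
    """
    ratings = np.asarray(ratings, dtype=np.float64)
    mos = ratings.mean(axis=0)
    # standardize each observer's row to zero mean, unit spread
    mu = ratings.mean(axis=1, keepdims=True)
    sd = ratings.std(axis=1, keepdims=True)
    z = (ratings - mu) / np.where(sd == 0, 1, sd)  # guard constant raters
    return mos, z.mean(axis=0)
```

Reporting both, as the abstract describes, guards against observers who agree on the ordering of samples but use the rating categories very differently.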

    SIMMEC 2016

    Responsibility note: the authors are solely responsible for the material reproduced in this article. The Symposium on Computational Mechanics (SIMMEC) is a multidisciplinary event of national scope, held since 1991 as an event of the Brazilian Association of Computational Methods in Engineering (ABMEC). Its aim is the dissemination of technical and scientific output in the area of computational methods applied to the various fields of engineering, fostering the generation of knowledge, partnerships, and products. The XII SIMMEC was held from 23 to 25 May 2016 in Diamantina, Minas Gerais, a World Cultural Heritage city since 1999. This edition was organized by the Instituto de Ciência e Tecnologia of the Universidade Federal dos Vales do Jequitinhonha e Mucuri, and received contributions in the following thematic areas: biomechanics, scientific computing, dynamics and vibration, transport phenomena, solid mechanics, numerical methods, and optimization. Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES); Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG).