316,305 research outputs found

    A Search Based Face Annotation (SBFA) Algorithm for Annotating Frail Labeled Images

    Get PDF
    Data mining is the process of extracting valuable information from a large data source. The web, with its rich interfaces and the sheer volume of data it makes available, has attracted many users interested in extracting useful information, yet extraction remains limited for some resources, such as weakly labeled facial images. This paper investigates a novel framework for search-based face annotation that mines weakly labeled facial images freely available on the web. A major challenge is how to perform annotation effectively by exploiting the list of most similar facial images and their weak labels, which are often ambiguous and incomplete. To address this problem, we propose an unsupervised label refinement (ULR) approach for refining the labels of web facial images, and implement a clustering-based approximation algorithm that significantly improves scalability. We also implement a new image-based search: an image is taken as input instead of a text keyword, and the output is a sorted list of images whenever the input image matches images in the database. Images are additionally ranked according to user views.
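    The retrieval-and-vote idea at the core of search-based face annotation can be illustrated with a minimal sketch. It assumes precomputed face features and a weakly labeled database, and replaces the paper's ULR formulation with a simple similarity-weighted majority vote; the feature vectors, labels, and the annotate_face helper are placeholders, not the authors' implementation.

```python
from collections import Counter

import numpy as np


def annotate_face(query_feat, db_feats, db_labels, k=10):
    """Sketch of search-based face annotation.

    query_feat : 1-D feature vector for the query face (placeholder features).
    db_feats   : (N, D) array of features for weakly labeled web faces.
    db_labels  : list of N name strings scraped with the images (may be noisy).
    Returns the label with the highest similarity-weighted vote among the
    top-k neighbours -- a stand-in for the paper's label refinement step.
    """
    # Cosine similarity between the query and every database face.
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q

    # Let the k most similar faces vote with their weak labels,
    # weighting each vote by its similarity to the query.
    top = np.argsort(sims)[::-1][:k]
    votes = Counter()
    for i in top:
        votes[db_labels[i]] += sims[i]
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 64))
    labels = [f"person_{i % 5}" for i in range(100)]
    print(annotate_face(feats[3] + 0.01 * rng.normal(size=64), feats, labels))
```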

    Do you see what I mean?

    Get PDF
    Visualizers, like logicians, have long been concerned with meaning. Generalizing from MacEachren's overview of cartography, visualizers have to think about how people extract meaning from pictures (psychophysics), what people understand from a picture (cognition), how pictures are imbued with meaning (semiotics), and how in some cases that meaning arises within a social and/or cultural context. If we think of the communication acts carried out in the visualization process, further levels of meaning are suggested. Visualization begins when someone has data that they wish to explore and interpret; the data are encoded as input to a visualization system, which may in its turn interact with other systems to produce a representation. This is communicated back to the user(s), who have to assess this against their goals and knowledge, possibly leading to further cycles of activity. Each phase of this process involves communication between two parties. For this to succeed, those parties must share a common language with an agreed meaning. We offer the following three steps, in increasing order of formality: terminology (jargon), taxonomy (vocabulary), and ontology. Our argument in this article is that it's time to begin synthesizing the fragments and views into a level 3 model, an ontology of visualization. We also address why this should happen, what is already in place, how such an ontology might be constructed, and why now.
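    The difference between the three levels of formality can be made concrete with a toy example: a taxonomy is essentially a controlled vocabulary with is-a links, while an ontology additionally fixes typed relations between concepts that can then be queried. All concept and relation names below are invented for illustration and are not drawn from the article.

```python
# A taxonomy: a controlled vocabulary with is-a links (child -> parent).
taxonomy = {
    "ScatterPlot": "Visualization",
    "Treemap": "Visualization",
    "Position": "VisualChannel",
    "Area": "VisualChannel",
}

# An ontology adds typed relations between the concepts as
# (subject, relation, object) triples; names are illustrative only.
ontology = [
    ("ScatterPlot", "encodes", "QuantitativeData"),
    ("ScatterPlot", "uses_channel", "Position"),
    ("Treemap", "encodes", "HierarchicalData"),
    ("Treemap", "uses_channel", "Area"),
]


def techniques_for(data_type):
    """Query the toy ontology: which techniques can encode this data type?"""
    return [s for s, rel, o in ontology if rel == "encodes" and o == data_type]


print(techniques_for("HierarchicalData"))  # ['Treemap']
```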

    OmniVista:an application for isovist field and path analysis

    Get PDF
    This paper briefly describes the software application OmniVista, written for the Apple Macintosh platform. OmniVista is essentially an isovist-generating application, which uses the 2D plan of a building or urban environment as input data, and can then be used in one of three modal ways. Firstly, point isovists can be generated by "clicking" onto any location in the environment. Secondly, all navigable space can be flood-filled with points, which may then be used to generate a field of isovists. Finally, a path of points can be used to examine how isovist properties vary along the path; the results of this can either be output as numerical data, or exported as a series of pictures, which may be combined to form an animation of the varying isovists along the route. This paper will examine all three modes of use in turn, starting from the simplest (the point) to the more complex (the path). A description and equation for all isovist measures used in the application will also be given as an appendix to the paper.
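    OmniVista itself is a closed application, but the point-isovist mode it describes can be approximated by casting rays from an observation point against the wall segments of the 2D plan and keeping the nearest hit in each direction. The sketch below follows that idea; the function names and the square-room example are ours, not the application's.

```python
import math


def ray_segment_hit(ox, oy, dx, dy, x1, y1, x2, y2):
    """Distance along the ray (ox,oy) + t*(dx,dy) to a wall segment, or None."""
    rx, ry = x2 - x1, y2 - y1
    denom = dx * ry - dy * rx
    if abs(denom) < 1e-12:                            # ray parallel to the wall
        return None
    t = ((x1 - ox) * ry - (y1 - oy) * rx) / denom     # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom     # position along the wall
    if t > 0 and 0.0 <= u <= 1.0:
        return t
    return None


def point_isovist(origin, walls, n_rays=360):
    """Approximate the isovist polygon around `origin` by casting rays
    against every wall segment and keeping the nearest hit per direction."""
    ox, oy = origin
    polygon = []
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        dx, dy = math.cos(a), math.sin(a)
        hits = [ray_segment_hit(ox, oy, dx, dy, *w) for w in walls]
        d = min((h for h in hits if h is not None), default=None)
        if d is not None:
            polygon.append((ox + d * dx, oy + d * dy))
    return polygon  # isovist measures (area, perimeter, ...) can be derived from these vertices


# A 10x10 square room, observer at its centre.
room = [(0, 0, 10, 0), (10, 0, 10, 10), (10, 10, 0, 10), (0, 10, 0, 0)]
print(len(point_isovist((5, 5), room)))
```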

    Enhancing Perceptual Attributes with Bayesian Style Generation

    Full text link
    Deep learning has brought unprecedented progress in computer vision, and significant advances have been made in predicting subjective properties inherent to visual data (e.g., memorability, aesthetic quality, evoked emotions, etc.). Recently, some research works have even proposed deep learning approaches to modify images so as to appropriately alter these properties. Following this research line, this paper introduces a novel deep learning framework for synthesizing images in order to enhance a predefined perceptual attribute. Our approach takes as input a natural image and exploits recent models for deep style transfer and generative adversarial networks to change its style in order to modify a specific high-level attribute. Unlike previous works that focus on enhancing a specific property of visual content, we propose a general framework and demonstrate its effectiveness in two use cases, i.e. increasing image memorability and generating scary pictures. We evaluate the proposed approach on publicly available benchmarks, demonstrating its advantages over state-of-the-art methods.
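    Stripped of the deep networks, the abstract describes searching for a style that raises a predicted perceptual attribute. The sketch below shows only that outer selection logic: apply_style and attribute_score are stand-in stubs for the paper's style-transfer and attribute-prediction models, and the candidate-sampling strategy is an assumption rather than the authors' Bayesian formulation.

```python
import random


# Both functions below are placeholders: the paper uses a deep style-transfer
# network and a learned attribute predictor (e.g., memorability) instead.
def apply_style(image, style):
    """Placeholder: pretend 'transferring' a style just tags the image."""
    return {"pixels": image, "style": style}


def attribute_score(image):
    """Placeholder scorer for the target perceptual attribute."""
    return random.random()


def enhance_attribute(image, candidate_styles, n_samples=8):
    """Sample candidate styles, score each stylized result with the
    attribute predictor, and keep the style that raises the score most."""
    best_img, best_score = None, float("-inf")
    for style in random.sample(candidate_styles, min(n_samples, len(candidate_styles))):
        stylized = apply_style(image, style)
        score = attribute_score(stylized)
        if score > best_score:
            best_img, best_score = stylized, score
    return best_img, best_score


result, score = enhance_attribute("input.jpg", ["van_gogh", "scream", "mosaic", "noir"])
print(result["style"], round(score, 3))
```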

    Flexible SVBRDF Capture with a Multi-Image Deep Network

    Get PDF
    Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single-image and complex multi-image approaches.
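    The architectural idea the abstract highlights, an order-independent fusing layer over per-image features, can be sketched in a few lines. The layer sizes, the max-pooling choice, and the output head below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn


class OrderIndependentFusion(nn.Module):
    """Toy multi-image encoder: each photo is encoded independently, then
    features are max-pooled across the image axis, so the result does not
    depend on how many photos were taken or in which order."""

    def __init__(self, feat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(            # per-image encoder (illustrative sizes)
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(feat_dim, 4)       # e.g. codes for diffuse/specular/roughness/normal

    def forward(self, images):                   # images: (N, 3, H, W), N photos of one material
        feats = self.encoder(images).flatten(1)  # (N, feat_dim) per-image features
        fused, _ = feats.max(dim=0)              # order-independent fusion over the N photos
        return self.head(fused)


model = OrderIndependentFusion()
print(model(torch.rand(5, 3, 64, 64)).shape)     # works for any number of input photos
```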

    Microstructure Study on Barnett Shale

    Get PDF
    This thesis discusses the microstructure of the Barnett Shale as studied using the combined technology of the Focused Ion Beam (FIB) and Scanning Electron Microscope (SEM). The study focuses mainly on 12 core samples from the Barnett Shale reservoir. Theoretical models, which could be used to calculate the effective stiffness tensor of gas shale, require different types of input data. I used the FIB-SEM to find support for input parameters required by theoretical models, such as crack connectivity, aspect ratio, mineral alignment, and porosity, since the pictures taken with the FIB-SEM offer a way to analyze what is going on at the nanoscale. This thesis also discusses obtaining the other input data using various methods. X-ray Diffraction (XRD) was used to obtain the mineral compositions; the XRD results indicate that the core samples are composed mainly of quartz and clay minerals. Total Organic Carbon (TOC) contents of the 12 samples were measured, with an average of around 4.5%.
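    Of the input parameters listed, porosity is the simplest to derive from a segmented FIB-SEM stack: it is the fraction of voxels labelled as pore space. The sketch below shows that computation on a synthetic volume; the labelling convention is an assumption, not taken from the thesis.

```python
import numpy as np


def porosity_from_segmentation(volume, pore_value=0):
    """Fraction of voxels labelled as pore in a segmented FIB-SEM stack.
    `volume` is a 3-D integer array in which `pore_value` marks pore space;
    the labelling scheme is an assumption, not taken from the thesis."""
    return float(np.count_nonzero(volume == pore_value)) / volume.size


# Toy 3-D stack: roughly 5% of voxels randomly marked as pore space.
rng = np.random.default_rng(1)
stack = (rng.random((50, 50, 50)) > 0.05).astype(np.uint8)   # 0 = pore, 1 = matrix
print(f"porosity = {porosity_from_segmentation(stack):.3f}")
```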

    Shoulder-Surfing Resistant Authentication for Augmented Reality

    Get PDF
    Augmented Reality (AR) Head-Mounted Displays (HMDs) are increasingly used in industry to digitize processes and enhance user experience by enabling real-time interaction with both physical and virtual objects. In this context, HMDs provide access to sensitive data and applications, which demands authenticating users before granting access. Furthermore, these devices are often used in shared spaces, so shoulder-surfing attacks need to be addressed. As users can remember pictures more easily than text, we applied the recognition-based graphical password scheme “Things” from previous work on an AR HMD, placing the pictures for each authentication attempt in a random order. We implemented this scheme for the HMD Microsoft HoloLens and conducted a user study evaluating Things's usability. All participants could be successfully authenticated, and the System Usability Scale (SUS) score of 74 is categorized as above average. As future work, we discuss how to improve the SUS score, e.g., by using different grid designs and input methods.
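    The essence of the scheme as described, a picture grid redrawn in a fresh random order for every attempt, with authentication succeeding only if the user picks exactly their enrolled pass-pictures, can be sketched as follows. The grid size, picture identifiers, and selection check are illustrative and are not the HoloLens implementation.

```python
import random


def build_challenge(all_pictures, grid_size=9, pass_pictures=None, rng=random):
    """Return one authentication round: `grid_size` pictures in random order,
    always containing the user's pass-pictures (so login stays possible)."""
    pass_pictures = set(pass_pictures or [])
    decoys = [p for p in all_pictures if p not in pass_pictures]
    grid = list(pass_pictures) + rng.sample(decoys, grid_size - len(pass_pictures))
    rng.shuffle(grid)                              # new picture ordering every attempt
    return grid


def authenticate(selected, pass_pictures):
    """Accept only if the user selected exactly their enrolled pass-pictures."""
    return set(selected) == set(pass_pictures)


pictures = [f"img_{i:02d}" for i in range(30)]
secret = ["img_03", "img_11", "img_27"]            # pass-pictures chosen at enrolment
grid = build_challenge(pictures, grid_size=9, pass_pictures=secret)
print(grid)
print(authenticate(secret, secret))                # True
```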