120,595 research outputs found

    A Statistical Framework for Image Category Search from a Mental Picture

    Keywords: Image Retrieval; Relevance Feedback; Page Zero Problem; Mental Matching; Bayesian System; Statistical Learning

    Starting from a member of an image database designated the “query image,” traditional image retrieval techniques, for example search by visual similarity, allow one to locate additional instances of a target category residing in the database. However, in many cases, the query image or, more generally, the target category, resides only in the mind of the user as a set of subjective visual patterns, psychological impressions or “mental pictures.” Consequently, since image databases available today are often unstructured and lack reliable semantic annotations, it is often not obvious how to initiate a search session; this is the “page zero problem.” We propose a new statistical framework based on relevance feedback to locate an instance of a semantic category in an unstructured image database with no semantic annotations. A search session is initiated from a random sample of images. At each retrieval round the user is asked to select one image from among a set of displayed images – the one that is closest in his opinion to the target class. The matching is then “mental.” Performance is measured by the number of iterations necessary to display an image which satisfies the user, at which point standard techniques can be employed to display other instances. Our core contribution is a Bayesian formulation which scales to large databases. The two key components are a response model which accounts for the user's subjective perception of similarity and a display algorithm which seeks to maximize the flow of information. Experiments with real users and two databases of 20,000 and 60,000 images demonstrate the efficiency of the search process.
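
    The loop the abstract describes (display a set of images, record the user's pick, update a belief over the target, choose the next display) can be sketched in a few lines. The sketch below is illustrative only: it assumes a simple softmax-style response model over Euclidean distances in some feature space, and a sampling-based display rule standing in for the paper's information-maximizing one. All function names and parameter values are hypothetical.

```python
import numpy as np

def posterior_update(posterior, features, displayed, picked, beta=5.0):
    """One relevance-feedback round: Bayes update of the belief over which
    database image is the user's mental target.

    Assumed response model: the user picks displayed image d with probability
    proportional to exp(-beta * ||f_d - f_target||), normalized over the display set.
    """
    dists = np.linalg.norm(
        features[:, None, :] - features[displayed][None, :, :], axis=2
    )                                             # shape (n_images, n_displayed)
    lik = np.exp(-beta * dists)
    lik /= lik.sum(axis=1, keepdims=True)
    new_post = posterior * lik[:, displayed.index(picked)]
    return new_post / new_post.sum()

def choose_display(posterior, k=8, rng=None):
    """Pick the next k images to show; sampling by posterior mass is a crude
    stand-in for a display rule that maximizes expected information gain."""
    rng = rng or np.random.default_rng()
    return list(rng.choice(len(posterior), size=k, replace=False, p=posterior))

# toy session: 500 images with 16-D features, uniform prior over targets
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))
posterior = np.full(500, 1.0 / 500)
shown = choose_display(posterior, rng=rng)
posterior = posterior_update(posterior, features, shown, picked=shown[0])
```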

    A comprehensive study of the usability of multiple graphical passwords

    Recognition-based graphical authentication systems (RBGSs) using images as passwords have been proposed as one potential solution to the need for more usable authentication. The rapid increase in technologies requiring user authentication has increased the number of passwords that users have to remember. But nearly all prior work with RBGSs has studied the usability of a single password. In this paper, we present the first published comparison of the usability of multiple graphical passwords with four different image types: Mikon, doodle, art and everyday objects (food, buildings, sports etc.). A longitudinal experiment was performed with 100 participants over a period of 8 weeks to examine the usability performance of each of the image types. The results of the study demonstrate that object images are the most usable in the sense of being more memorable and less time-consuming to employ; Mikon images are close behind, but doodle and art images are significantly inferior. The results of our study complement the cognitive literature on the picture superiority effect, visual search process and nameability of visually complex images.

    Characterizing neuromorphologic alterations with additive shape functionals

    The complexity of a neuronal cell shape is known to be related to its function. Specifically, among other indicators, a decreased complexity in the dendritic trees of cortical pyramidal neurons has been associated with mental retardation. In this paper we develop a procedure to characterize the morphological changes induced in cultured neurons by over-expressing a gene involved in mental retardation. Measures associated with the multiscale connectivity, an additive image functional, are found to give a reasonable separation criterion between two categories of cells. One category consists of a control group and two transfected groups of neurons, and the other of a class of cat ganglion cells. The reported framework also identified a trend towards lower complexity in one of the transfected groups. Such results establish the suggested measures as effective descriptors of cell shape.
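
    As a concrete illustration of an additive, multiscale shape descriptor in the spirit of the one used above, the sketch below counts the connected components of a binary neuron mask after dilation with disks of increasing radius. It is only a simplified stand-in for the paper's multiscale connectivity measure, and the radii are arbitrary.

```python
import numpy as np
from scipy import ndimage

def connectivity_curve(mask, radii=(1, 2, 4, 8, 16)):
    """Number of connected components of a binary shape after dilating it
    with disk-shaped structuring elements of increasing radius.

    A curve that collapses to 1 only at large radii suggests widely spread,
    weakly connected branches, i.e. a more complex arbor.
    """
    curve = []
    for r in radii:
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        disk = x * x + y * y <= r * r                # disk structuring element
        dilated = ndimage.binary_dilation(mask, structure=disk)
        _, n_components = ndimage.label(dilated)
        curve.append(n_components)
    return np.array(curve)

# toy usage: two isolated points 15 pixels apart merge only once the radius
# reaches 8, so the curve drops from 2 components to 1 at that scale
mask = np.zeros((64, 64), dtype=bool)
mask[10, 10] = mask[10, 25] = True
print(connectivity_curve(mask))                      # -> [2 2 2 1 1]
```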

    The Impact of Concept Representation in Interactive Concept Validation (ICV)

    Large-scale ideation has developed as a promising new way of obtaining large numbers of highly diverse ideas for a given challenge. However, due to the scale of these challenges, algorithmic support based on a computational understanding of the ideas is a crucial component in these systems. One promising solution is the use of knowledge graphs to provide meaning. A significant obstacle lies in word-sense disambiguation, which cannot be solved by automatic approaches. In previous work, we introduced Interactive Concept Validation (ICV) as an approach that enables ideators to disambiguate the terms used in their ideas. To test the impact of different ways of representing concepts (should we show images of concepts, only explanatory texts, or both), we conducted experiments comparing three representations. The results show that while the impact on ideation metrics was marginal, time/click effort was lowest in the images-only condition, while data quality was highest in the condition showing both images and texts.
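
    A toy sketch of the interactive disambiguation step is given below. The candidate senses are hard-coded in place of a real knowledge-graph lookup, and the names are made up for illustration; whether each candidate is shown as an image, a text, or both is precisely the representation factor the experiments vary.

```python
# Candidate senses per ambiguous term; a real system would query a knowledge
# graph (e.g. DBpedia or ConceptNet) instead of this hard-coded table.
CANDIDATE_SENSES = {
    "bank": ["bank (financial institution)", "bank (river side)"],
    "cell": ["cell (biology)", "cell (prison room)", "cell (battery)"],
}

def validate_concepts(idea_terms, ask):
    """Resolve each ambiguous term by asking the ideator to pick a sense.

    `ask(term, options)` returns the index of the sense the ideator chose,
    e.g. via a UI showing images and/or explanatory texts for each option.
    """
    resolved = {}
    for term in idea_terms:
        options = CANDIDATE_SENSES.get(term)
        if not options:
            continue                     # unknown or unambiguous term: leave as-is
        resolved[term] = options[ask(term, options)]
    return resolved

# scripted "ideator" who always picks the first candidate sense
print(validate_concepts(["bank", "cell"], ask=lambda term, opts: 0))
```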

    Learning midlevel image features for natural scene and texture classification

    This paper deals with coding of natural scenes in order to extract semantic information. We present a new scheme to project natural scenes onto a basis in which each dimension encodes statistically independent information. Basis extraction is performed by independent component analysis (ICA) applied to image patches culled from natural scenes. The study of the resulting coding units (coding filters) extracted from well-chosen categories of images shows that they adapt and respond selectively to discriminant features in natural scenes. Given this basis, we define global and local image signatures relying on the maximal activity of filters on the input image. Locally, the construction of the signature takes into account the spatial distribution of the maximal responses within the image. We propose a criterion to reduce the size of the space of representation for faster computation. The proposed approach is tested in the context of texture classification (111 classes), as well as natural scene classification (11 categories, 2037 images). Using a common protocol, the other commonly used descriptors achieve at most 47.7% accuracy on average, while our method obtains performances of up to 63.8%. We show that this advantage does not depend on the size of the signature and demonstrate the efficiency of the proposed criterion to select ICA filters and reduce the dimensionality of the representation.
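
    A condensed sketch of such a pipeline is shown below, assuming grayscale patches flattened into vectors and using scikit-learn's FastICA; the maximal absolute filter response over an image's patches serves as a simple global signature. Patch size, filter count and the random data are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_ica_filters(patches, n_filters=64, seed=0):
    """Learn ICA basis filters from flattened image patches (n_patches, patch_dim)."""
    ica = FastICA(n_components=n_filters, random_state=seed, max_iter=1000)
    ica.fit(patches)
    return ica.components_                       # (n_filters, patch_dim)

def global_signature(image_patches, filters):
    """Global image signature: maximal absolute response of each filter
    over all patches extracted from the image."""
    responses = image_patches @ filters.T        # (n_patches, n_filters)
    return np.abs(responses).max(axis=0)

# toy usage with random 12x12 patches standing in for patches culled from scenes
rng = np.random.default_rng(0)
filters = learn_ica_filters(rng.normal(size=(5000, 144)))
signature = global_signature(rng.normal(size=(200, 144)), filters)
```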

    Why People Search for Images using Web Search Engines

    What are the intents or goals behind human interactions with image search engines? Knowing why people search for images is of major concern to Web image search engines because user satisfaction may vary as intent varies. Previous analyses of image search behavior have mostly been query-based, focusing on what images people search for, rather than intent-based, that is, why people search for images. To date, there is no thorough investigation of how different image search intents affect users' search behavior. In this paper, we address the following questions: (1) Why do people search for images in text-based Web image search systems? (2) How does image search behavior change with user intent? (3) Can we predict user intent effectively from interactions during the early stages of a search session? To this end, we conduct both a lab-based user study and a commercial search log analysis. We show that user intents in image search can be grouped into three classes: Explore/Learn, Entertain, and Locate/Acquire. Our lab-based user study reveals different user behavior patterns under these three intents, such as first click time, query reformulation, dwell time and mouse movement on the result page. Based on user interaction features during the early stages of an image search session, that is, before the first mouse scroll, we develop an intent classifier that is able to achieve promising results for classifying intents into our three intent classes. Given that all features can be obtained online and unobtrusively, the predicted intents can provide guidance for choosing ranking methods immediately after scrolling.
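
    The last step lends itself to a compact sketch: given a table of early-session interaction features (for example first click time, number of query reformulations, dwell time and mouse-movement distance) labelled with the three intent classes, any standard classifier can be trained. The random forest and the synthetic data below are stand-ins, not the authors' model or log data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

INTENTS = ["Explore/Learn", "Entertain", "Locate/Acquire"]

# One row per session; columns are illustrative early-stage features:
# [first_click_time_s, n_query_reformulations, dwell_time_s, mouse_distance_px]
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                    # placeholder for logged sessions
y = rng.integers(0, len(INTENTS), size=300)      # placeholder intent labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # near chance on random data
```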