
    Measuring and Predicting Importance of Objects in Our Visual World

    Associating keywords with images automatically is an approachable and useful goal for visual recognition researchers. The most useful keywords name distinctive, informative objects. We argue that keywords need to be sorted by 'importance', which we define as the probability of being mentioned first by an observer. We propose a method for measuring the 'importance' of words using the object labels that multiple human observers give an everyday scene photograph. We model object naming as drawing balls from an urn and fit this model to estimate 'importance'; because the model combines naming order and frequency, it yields precise estimates from limited human labeling. We explore the relationship between the importance of an object in a particular image and the area, centrality, and saliency of the corresponding image patches. Furthermore, our data show that many words are associated with even simple environments, and that few frequently appearing objects are shared across environments.
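The urn model can be sketched concretely. The following is our illustrative reconstruction, not the authors' code: treat each object's weight as the number of balls of its color, so that a naming sequence is sampling without replacement with probability proportional to weight (a Plackett-Luce model), and 'importance' is the probability of being drawn first. A minorization-maximization fit then recovers the weights from observed naming orders:

```python
import numpy as np

# Illustrative sketch of the urn model (our reconstruction, not the authors'
# code). Each object i has a weight w_i; an observer names objects by drawing
# without replacement with probability proportional to w (a Plackett-Luce
# model). The "importance" of i is then P(i is named first) = w_i.

def simulate_naming(weights, rng):
    """Draw one observer's full naming order from the urn model."""
    remaining = list(range(len(weights)))
    order = []
    while remaining:
        w = np.asarray([weights[j] for j in remaining])
        pick = rng.choice(len(remaining), p=w / w.sum())
        order.append(remaining.pop(pick))
    return order

def fit_weights(orders, n_objects, iters=100):
    """MM-style maximum-likelihood fit of the weights from naming orders."""
    w = np.ones(n_objects)
    for _ in range(iters):
        wins = np.zeros(n_objects)      # times each object was drawn
        exposure = np.zeros(n_objects)  # expected chances to be drawn
        for order in orders:
            rem = list(order)
            for picked in order:
                total = w[rem].sum()
                for j in rem:
                    exposure[j] += 1.0 / total
                wins[picked] += 1.0
                rem.remove(picked)
        w = wins / exposure
        w /= w.sum()                    # normalize so w_i = P(named first)
    return w

rng = np.random.default_rng(0)
true_w = np.array([0.5, 0.3, 0.15, 0.05])  # ground-truth importances
orders = [simulate_naming(true_w, rng) for _ in range(300)]
est = fit_weights(orders, 4)               # est approximately recovers true_w
```

Because the fit uses full naming orders rather than only first mentions, it extracts more information per observer, which is how combining order and frequency tightens estimates under limited labeling.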

    Objects predict fixations better than early saliency

    Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as “saliency maps,” are often built on the assumption that “early” features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: observers attend to “interesting” objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name the objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.
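The comparison can be illustrated with a small sketch (our hypothetical names and code, not the study's): build a prediction map by weighting each object's binary mask by its recall frequency, then score any map by how well its values separate fixated pixels from random control pixels (an ROC-style AUC):

```python
import numpy as np

# Illustrative sketch (our hypothetical names, not the study's code): an
# object-based fixation predictor is a map built from object masks weighted
# by recall frequency; maps are scored by how well their values separate
# fixated pixels from randomly sampled control pixels (an ROC-style AUC).

def object_map(masks, recall_freq, shape):
    """Sum of binary object masks, each weighted by naming frequency."""
    m = np.zeros(shape)
    for mask, freq in zip(masks, recall_freq):
        m += freq * mask
    return m

def auc_score(pred_map, fixations, rng, n_controls=1000):
    """Fraction of (fixated, control) pixel pairs ranked correctly."""
    pos = np.array([pred_map[y, x] for y, x in fixations])
    h, w = pred_map.shape
    neg = pred_map[rng.integers(0, h, n_controls),
                   rng.integers(0, w, n_controls)]
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(1)
mask = np.zeros((64, 64))
mask[10:30, 10:30] = 1.0                    # one frequently named object
omap = object_map([mask], [0.8], mask.shape)
fixations = [(15, 15), (20, 25), (12, 18)]  # all land on the object
score = auc_score(omap, fixations, rng)     # well above chance (0.5)
```

The study's claim is that, scored this way on real scenes, recall-weighted object maps beat early-saliency maps regardless of task.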

    Evolving S-Boxes with Reduced Differential Power Analysis Susceptibility

    Differential power analysis targets S-boxes to break ciphers that resist conventional cryptanalysis. We relax cryptanalytic constraints to lower S-box leakage, as quantified by the transparency order. We apply genetic algorithms to generate 8-bit S-boxes, optimizing transparency order and nonlinearity as in existing work (Picek et al. 2015). We then apply multiobjective evolutionary algorithms to generate a Pareto front. We find a tight trade-off in which nonlinearity drops substantially before transparency order does, suggesting the difficulty of finding S-boxes with both high nonlinearity and low transparency order, if they exist. Additionally, we show that cycle crossover yields more efficient single-objective genetic algorithms for generating S-boxes than those in the existing literature. We demonstrate this in the first side-by-side comparison of the genetic algorithms of Millan et al. 1999, Wang et al. 2012, and Picek et al. 2015. Finally, we propose and compare several methods for avoiding fixed points in S-boxes; repairing a fixed point after evolution in a way that preserves fitness proves superior to including a fixed-point penalty in the objective function or randomly repairing fixed points during or after evolution.
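Two of the ingredients above are easy to sketch (our illustrative code, not the paper's; the fitness functions, nonlinearity and transparency order, are omitted). A bijective 8-bit S-box is a permutation of {0, ..., 255}, so crossover must preserve permutation structure; cycle crossover does this by copying whole cycles alternately from each parent, and the favoured repair strategy swaps fixed points away after evolution:

```python
# Illustrative sketch (ours, not the paper's code). A bijective S-box is a
# permutation, so crossover must output a permutation. Fitness evaluation
# (nonlinearity, transparency order) is omitted for brevity.

def cycle_crossover(p1, p2):
    """Copy whole cycles alternately from each parent. Always yields a
    valid permutation, with every gene taken from some parent's same
    position."""
    n = len(p1)
    child = [None] * n
    pos = {v: i for i, v in enumerate(p1)}   # value -> index in p1
    from_p1 = True
    for start in range(n):
        if child[start] is not None:
            continue
        i = start
        while child[i] is None:              # trace one cycle
            child[i] = p1[i] if from_p1 else p2[i]
            i = pos[p2[i]]
        from_p1 = not from_p1                # alternate parents per cycle
    return child

def repair_fixed_points(sbox):
    """Swap each fixed point (sbox[x] == x) with a neighbour. For a
    permutation of length > 1 this never creates a new fixed point."""
    s = list(sbox)
    n = len(s)
    for x in range(n):
        if s[x] == x:
            y = (x + 1) % n
            s[x], s[y] = s[y], s[x]
    return s

child = cycle_crossover([1, 0, 3, 2], [0, 1, 2, 3])  # -> [1, 0, 2, 3]
```

The appeal of cycle crossover here is that every output position inherits its value from one of the parents at the same position, so structural properties tied to positions are mixed rather than destroyed.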

    Modeling and Predicting Object Attention in Natural Scenes

    Humans automatically attend to certain objects in a scene. Better understanding this process could improve a computer's ability to parse scene images and convey information about them to humans. This thesis is arranged in three parts. The first part explores how important a particular object is in a photograph of a complex scene. We propose a definition of importance and present two methods for measuring object importance from human observers. Using this ground truth, we fit a function for predicting the importance of each object directly from a segmented image; our function combines many object-related and image-related features. We validate our importance predictions on a large set of objects and find that the most important objects may be identified automatically. We find that object position and size are particularly informative, while a popular measure of saliency is not. The second part explores the relationship between object naming, eye movements, and saliency maps. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Saliency maps are often built on the assumption that "early" features (e.g., color, contrast, orientation, and motion), rather than objects themselves, drive attention. We measure the eye position of humans viewing scenes and then ask them to recall the objects that they saw in each scene. Weighted with recall frequency or maximum saliency, these objects predict fixations in individual images better than early saliency, suggesting that early saliency may have an indirect effect on attention, acting through detected objects. The third part explores the problem of locating objects in a scene irrespective of category. We introduce the first benchmark for category-independent object detection. It is composed of a large public dataset of annotated high-resolution scene images and suitable metrics for performance evaluation. We demonstrate our benchmark by comparing three methods for generalized object detection against a baseline and an upper bound.

    Measuring and Predicting Object Importance

    How important is a particular object in a photograph of a complex scene? We propose a definition of importance and present two methods for measuring object importance from human observers. Using this ground truth, we fit a function for predicting the importance of each object directly from a segmented image; our function combines a large number of object-related and image-related features. We validate our importance predictions on 2,841 objects and find that the most important objects may be identified automatically. We find that object position and size are particularly informative, while a popular measure of saliency is not.
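A minimal sketch of the prediction step (our hypothetical features and code, not the authors'; the actual function combines many more features): extract simple size and position features from each object's segmentation mask and fit a linear predictor of importance by least squares:

```python
import numpy as np

# Minimal sketch (our hypothetical features, not the authors' code): predict
# an object's importance from simple features of its segment. The actual
# predictor combines many more object- and image-related features.

def features(mask):
    """Size and position features for one binary object mask."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    area = mask.mean()                          # size, fraction of image
    cy, cx = ys.mean() / h, xs.mean() / w       # normalized centroid
    center_dist = np.hypot(cy - 0.5, cx - 0.5)  # distance from image center
    return np.array([area, center_dist, 1.0])   # trailing 1.0 = bias term

def fit_importance(masks, importances):
    """Least-squares fit of importance from per-object features."""
    X = np.stack([features(m) for m in masks])
    y = np.asarray(importances)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

The finding that position and size are particularly informative corresponds here to the centroid and area coefficients carrying most of the predictive weight.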