    Methods of using kinetic type to express emotions

    Typography is an essential tool for effective communication and evokes various visual emotions through its individual qualities. The function of typography has changed with the demands of human needs, technological progress, and the continual evolution of cultural societies. Human emotion expresses different levels of circumstance in our environment; emotion has its own voice and delivers the characteristics of an individual personality through visual existence. Motion is a fundamental aspect of nature that has existed since mankind evolved. The character of motion generates dynamic energy, and its elements of direction and velocity enhance creative vision. Motion stimulates our eyes so that our vision can perceive various positions, sizes, and shapes, and it allows for the visualization of depth and dimension; it is the reaction to the reality of our environment. Typography, emotion, and motion each use their own values to present their uniqueness, and the relationship among them is the key interest in developing kinetic typography with current technology. Technology has greatly improved our quality of life with new methods of cultural expression, so it is important to examine how the individual elements of typography, emotion, motion, and technology collaborate to express the dynamic visual aspects of kinetic typography.

    In literature review I, I investigate how our visual perception has developed, from the beginnings of Gestalt theory, which considers two-dimensional surfaces, to an environment of depth and motion in the visual world. In literature review II, the historic art movements of Cubism and Futurism are analyzed, focusing on how the natural phenomenon of motion was used to create new forms of visual art that produce the illusion of dynamic movement. In literature review III, I examine how our technological environment and cultural experiences have influenced the development of film title sequences; two distinctive and innovative graphic designers, Saul Bass and Kyle Cooper, are discussed. In chapter 3, I present the elements of visual representation used for creating kinetic typography, from a basic two-dimensional typographic structure to one that uses the elements of space and time to create a more emotional experience.

    Pictorial Appearances. A Phenomenological Inquiry

    This work thematizes the phenomenological thresholds that separate image and reality. The Husserlian theory of image consciousness is discussed, criticized in light of the contemporary debate on depiction, and then questioned against different types of pictorial spaces. It is argued that the major limitation of this theory is its focus on depictive images and the consequent flattening of the conditions that make possible the appearance of an image onto the conditions of its having a meaning. To overcome this problem, a genetic phenomenological approach to the study of the image is proposed that takes into account the phenomenology of passive syntheses and the analyses of the constitution of space—three-dimensional first, and then pictorial. This work presents the idea that pictorial appearances unfold in a specific way that contrasts with the phenomenal sequences of the ordinary objects that populate our environment. This contrast grounds the divide between image and reality.

    High-level perceptual contours from a variety of low-level physical features

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 87-90). By Brian M. Scassellati.

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the interface between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in suitable sensor equipment, novel filter designs, and viable information-processing architectures, while a growing understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology, and applications of pattern recognition.

    Are All Pixels Equally Important? Towards Multi-Level Salient Object Detection

    When we look at our environment, we primarily pay attention to visually distinctive objects. We refer to these objects as visually important, or salient. Our visual system dedicates most of its processing resources to analyzing these salient objects. An analogous resource allocation can be performed in computer vision, where a salient-object detector identifies objects of interest as a pre-processing step. In the literature, salient-object detection is treated as a foreground-background segmentation problem. This approach assumes that there is no variation in object importance: only the most salient object(s) are detected as foreground. In this thesis, we challenge this conventional methodology and introduce multi-level object saliency. In other words, not all pixels are equally important.

    The well-known salient-object ground-truth datasets contain images with single objects and thus are not suited to evaluating the varying importance of objects. In contrast, many natural images contain multiple objects. The saliency levels of these objects depend on two key factors. First, the duration of eye fixation is longer for visually and semantically informative image regions, so a difference in fixation duration should reflect a variation in object importance. Second, visual perception is subjective; hence the saliency of an object should be measured by averaging the perception of a group of people. In other words, objective saliency can be considered collective human attention. In order to better represent natural images and to measure the saliency levels of objects, we collect new images containing multiple objects and create a Comprehensive Object Saliency (COS) dataset, providing ground-truth multi-level salient-object maps via eye-tracking and crowd-sourcing experiments.

    We then propose three salient-object detectors. Our first technique is based on multi-scale linear filtering and can detect salient objects of various sizes. The second method uses a bilateral-filtering approach and is capable of producing uniform object-saliency values. Our third method employs image segmentation and machine learning and is robust against image noise and texture. This segmentation-based method performs best on the existing datasets compared to our other methods and the state-of-the-art methods.

    The state-of-the-art salient-object detectors are not designed to assess the relative importance of objects or to provide multi-level saliency values. We thus introduce an Object-Awareness Model (OAM) that estimates the saliency levels of objects from their position and size. We then modify and extend our segmentation-based salient-object detector with the OAM and propose a Comprehensive Salient Object Detection (CSD) method capable of multi-level salient-object detection. We show that the CSD method significantly outperforms the state-of-the-art methods on the COS dataset.

    We use our salient-object detectors as a pre-processing step in three applications. First, we show that multi-level salient-object detection provides more relevant semantic image tags than conventional salient-object detection. Second, we employ our salient-object detector to detect salient objects in videos in real time. Third, we use multi-level object-saliency values in context-aware image compression and obtain perceptually better compression than standard JPEG at the same file size.
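
    The abstract states only that the OAM scores objects from their position and size; it does not give the functional form. The Python sketch below is one minimal, hypothetical way such a score could be composed: a Gaussian centre bias (fixations cluster near the image centre) blended with a sublinear size term. It illustrates the idea rather than the thesis's actual model; the function name, parameters, and weights are all assumptions.

    ```python
    import numpy as np

    def oam_score(bbox, image_shape, sigma=0.25, size_weight=0.5):
        """Hypothetical OAM-style saliency level from position and size.

        bbox: (x0, y0, x1, y1) in pixels; image_shape: (height, width).
        Returns a score in (0, 1]; higher = more salient object.
        """
        h, w = image_shape
        cx = 0.5 * (bbox[0] + bbox[2]) / w          # normalised centre x
        cy = 0.5 * (bbox[1] + bbox[3]) / h          # normalised centre y
        # Centre bias: objects near the image centre draw more fixations.
        d2 = (cx - 0.5) ** 2 + (cy - 0.5) ** 2
        position_term = np.exp(-d2 / (2.0 * sigma ** 2))
        # Size term: larger objects attract more attention, sublinearly.
        area = (bbox[2] - bbox[0]) * (bbox[3] - bbox[1]) / float(h * w)
        size_term = np.sqrt(min(area, 1.0))
        return (1.0 - size_weight) * position_term + size_weight * size_term

    # A large, central object scores higher than a small corner object.
    print(oam_score((280, 180, 360, 300), (480, 640)))  # ~0.59
    print(oam_score((10, 10, 60, 50), (480, 640)))      # ~0.06
    ```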

    Spatial Displays and Spatial Instruments

    The topics of the conference proceedings are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information, and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on the discussion of phenomena and the determination of design principles.

    “Something that just hovers”: Charting Feldman’s Neither

    Senior Project submitted to The Division of Arts of Bard College

    Contextual modulation of visual variability: perceptual biases over time and across the visual field

    The visual system extracts statistical information about the environment to manage noise, ensure perceptual stability, and predict future events. These summary representations can inform sensory information received at subsequent times or in other regions of the visual field. This has been conceptualized in terms of Bayesian inference within the predictive coding framework. Nevertheless, contextual influence can also drive anti-Bayesian biases, as in sensory adaptation. Variance is a crucial statistical descriptor, yet it is relatively overlooked in ensemble vision research. We assessed the mechanisms whereby visual variability exerts, and is subject to, contextual modulation over time and across the visual field.

    Perceptual biases over time: serial dependence (SD). In a series of visual experiments, we examined SD in visual variance: the influence of the variance of previously presented ensembles on current variance judgments. We encountered two history-dependent biases: a positive bias exerted by recent presentations and a negative bias driven by less recent context. Contrary to claims that positive SD has a low-level sensory origin, our experiments demonstrated a decisional bias requiring perceptual awareness and subject to time and capacity limitations. The negative bias was likely of sensory origin (adaptation). A two-layer model combining population codes and Bayesian Kalman filters replicated the positive and negative effects on their approximate timescales.

    Perceptual biases across the visual field: the Uniformity Illusion (UI). In the UI, presentation of a pattern with uniform foveal components and more variable peripheral elements results in the latter taking on the appearance of the foveal input. We studied the mechanistic basis of the UI for orientation and determined that it arises without changes in sensory encoding in the primary visual cortex.

    Conclusions. We studied perceptual biases in visual variability across space and time and found a combination of negative sensory biases and positive decisional biases, likely serving the balance between change sensitivity and perceptual stability.
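
    The abstract names a two-layer model combining population codes with Bayesian Kalman filters but gives no equations. The Python sketch below shows only the Kalman-filter component, under illustrative assumptions (a one-dimensional drifting variance signal; the noise parameters q and r are invented), to make concrete how such a filter pulls the current estimate toward recent history and so produces an attractive, positive serial dependence.

    ```python
    import numpy as np

    def kalman_track(observations, q=0.05, r=0.5, x0=0.0, p0=1.0):
        """Track a slowly drifting stimulus statistic across trials.

        q: process noise (how fast the world is assumed to change);
        r: observation noise (how unreliable a single trial is).
        Returns the per-trial posterior estimates.
        """
        x, p = x0, p0
        estimates = []
        for z in observations:
            p = p + q                  # predict: uncertainty grows between trials
            k = p / (p + r)            # Kalman gain
            x = x + k * (z - x)        # update: blend prior estimate with new trial
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    # A high-variance trial after a run of low-variance trials is judged
    # lower than its true value: an attractive (positive) history bias.
    trials = np.array([1.0, 1.0, 1.1, 0.9, 2.0])
    print(kalman_track(trials))  # final estimate sits well below 2.0
    ```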

    Visual region understanding: unsupervised extraction and abstraction

    The ability to gain a conceptual understanding of the world in uncontrolled environments is the ultimate goal of vision-based computer systems. Technological societies today rely heavily on surveillance and security infrastructure, robotics, medical image analysis, visual data categorisation and search, and smart-device user interaction, to name a few. Of all the complex problems tackled by computer vision in the context of these technologies, the one closest to the original goals of the field is the subarea of unsupervised scene analysis, or scene modelling. However, its common use of low-level features does not provide a good balance between generality and discriminative ability, both a result and a symptom of the sensory and semantic gaps between low-level computer representations and high-level human descriptions.

    In this research we explore a general framework that addresses the fundamental problem of universal unsupervised extraction of semantically meaningful visual regions and their behaviours. For this purpose we address issues related to (i) spatial and spatiotemporal segmentation for region extraction, (ii) region shape modelling, and (iii) the online categorisation of visual object classes and the spatiotemporal analysis of their behaviours. Under this framework we propose (a) a unified region-merging method and spatiotemporal region reduction, (b) shape representation by the optimisation and novel simplification of contour-based growing neural gases, and (c) a foundation for the analysis of visual object motion properties using a shape- and appearance-based nearest-centroid classification algorithm and trajectory plots for the obtained region classes.

    Specifically, we formulate a region-merging spatial segmentation mechanism that combines and adapts features previously shown to be individually useful, namely parallel region growing, the best-merge criterion, a time-adaptive threshold, and region reduction techniques. For spatiotemporal region refinement we consider both scalar intensity differences and vector optical flow. To model the shapes of the visual regions thus obtained, we adapt the growing neural gas for rapid region contour representation and propose a contour simplification technique. A fast, unsupervised nearest-centroid online learning technique then groups observed region instances into classes, for which we can analyse spatial presence and spatiotemporal trajectories; the analysis results show semantic correlations with real-world object behaviour. Evaluation of all steps against standard metrics and datasets validates their performance.
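
    The abstract describes a fast, unsupervised nearest-centroid online learning step without giving its details. Below is a minimal Python sketch of how such online grouping is commonly done: an instance joins the nearest class centroid if it lies within a distance threshold, and otherwise founds a new class. The class name, the Euclidean distance, the fixed threshold, and the toy feature vectors (standing in for the shape and appearance features) are assumptions, not the thesis's implementation.

    ```python
    import numpy as np

    class OnlineNearestCentroid:
        """Unsupervised online grouping of instances by nearest centroid."""

        def __init__(self, new_class_threshold=1.0):
            self.threshold = new_class_threshold
            self.centroids = []   # one feature vector per discovered class
            self.counts = []      # number of instances seen per class

        def observe(self, feature):
            """Assign an instance to a class, creating a new class if needed."""
            feature = np.asarray(feature, dtype=float)
            if self.centroids:
                dists = [np.linalg.norm(feature - c) for c in self.centroids]
                j = int(np.argmin(dists))
                if dists[j] < self.threshold:
                    # Incremental mean update keeps the centroid exact.
                    self.counts[j] += 1
                    self.centroids[j] += (feature - self.centroids[j]) / self.counts[j]
                    return j
            self.centroids.append(feature)
            self.counts.append(1)
            return len(self.centroids) - 1

    # Example: three instances, the first two close together.
    clf = OnlineNearestCentroid(new_class_threshold=0.5)
    print(clf.observe([0.1, 0.2]), clf.observe([0.15, 0.22]), clf.observe([2.0, 2.0]))
    # -> 0 0 1 : the first two share a class, the third founds a new one
    ```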