16 research outputs found

    Visual Saliency in Video Compression and Transmission

    This dissertation explores the concept of visual saliency—a measure of propensity for drawing visual attention—and presents various novel methods for utilizing visual saliency in video compression and transmission. Specifically, a computationally efficient method for visual saliency estimation in digital images and videos is developed, which approximates one of the best-known visual saliency models. In the context of video compression, a saliency-aware video coding method is proposed within a region-of-interest (ROI) video coding paradigm. The proposed video coding method attempts to reduce attention-grabbing coding artifacts and keep viewers’ attention in areas where the quality is highest. The method allows visual saliency to increase in high-quality parts of the frame, and allows saliency to decrease in non-ROI parts. Using this approach, the proposed method is able to achieve the same subjective quality as competing state-of-the-art methods at a lower bit rate. In the context of video transmission, a novel saliency-cognizant error concealment method is presented for ROI-based video streaming, in which regions with higher visual saliency are protected more heavily than low-saliency regions. In the proposed error concealment method, a low-saliency prior is added to the error concealment process as a regularization term, which serves two purposes. First, it provides additional side information for the decoder to identify the correct replacement blocks for concealment. Second, in the event that a perfectly matched block cannot be unambiguously identified, the low-saliency prior reduces viewers’ visual attention on the loss-stricken regions, resulting in higher overall subjective quality. During the course of this research, an eye-tracking dataset for several standard video sequences was created and made publicly available. This dataset can be utilized to test saliency models for video and to evaluate various perceptually motivated algorithms for video processing and video quality assessment.
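    The low-saliency prior can be illustrated with a minimal sketch: among candidate replacement blocks, the decoder picks the one minimizing a boundary-matching cost plus a saliency penalty. The function name, the MSE matching cost, and the weight `lam` are illustrative assumptions, not the dissertation's actual formulation.

    ```python
    import numpy as np

    def conceal_block(lost_boundary, candidates, saliencies, lam=0.5):
        """Pick the replacement block whose boundary pixels best match the
        surviving neighborhood, regularized by a low-saliency prior."""
        best_idx, best_cost = None, float("inf")
        for i, (cand, sal) in enumerate(zip(candidates, saliencies)):
            match_cost = float(np.mean((cand - lost_boundary) ** 2))  # boundary MSE
            cost = match_cost + lam * sal  # penalize visually salient candidates
            if cost < best_cost:
                best_idx, best_cost = i, cost
        return best_idx
    ```

    With `lam = 0`, this degenerates to plain boundary matching; raising `lam` trades matching accuracy for lower-saliency (less attention-grabbing) concealment, which is the behavior the abstract describes.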

    Attentional mechanisms driven adaptive quantization and selective bit allocation scheme for H.264/AVC

    The rate control algorithm adopted in the H.264/AVC reference software shows several shortcomings that have been highlighted by different studies. For instance, in the baseline profile, the frame target bit-rate estimation assumes similar characteristics for all frames, and the quantization parameter determination uses the Mean Absolute Difference for complexity estimation. Consequently, inefficient bit allocation is performed, leading to significant quality variation in decoded sequences. A saliency-based rate control is proposed in this paper to achieve bit-rate savings and improve perceived quality. The saliency map of each frame, simulating human visual attention with a bottom-up approach, is used at the frame level to adjust the quantization parameter and at the macroblock level to guide the bit allocation process. Simulation results show that the proposed attentional model is well correlated with human behavior. When compared to the JM15.0 reference software, at the frame level, the saliency map exploitation achieves bit-rate savings of up to 26%. At the MB level and under the same quality constraint, bit-rate improvement is up to 42% and buffer level variation is reduced by up to 71%.
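    A hedged sketch of the macroblock-level idea: map each macroblock's saliency to a quantization parameter (QP) offset so salient regions are quantized more finely. The linear mapping and the `max_delta` range are illustrative assumptions; the paper's actual allocation rule may differ.

    ```python
    def saliency_qp(base_qp, saliency, max_delta=6, qp_min=0, qp_max=51):
        """Offset the H.264/AVC quantization parameter by a macroblock's
        saliency in [0, 1]: saliency 1.0 lowers QP by max_delta (finer
        quantization), saliency 0.0 raises it by max_delta (coarser)."""
        delta = round(2 * max_delta * (0.5 - saliency))
        return min(qp_max, max(qp_min, base_qp + delta))
    ```

    The clamp to [0, 51] reflects the legal QP range in H.264/AVC; a real rate controller would additionally rebalance the offsets so the frame still meets its target bit budget.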

    Attention-based machine perception for intelligent cyber-physical systems

    Cyber-physical systems (CPS) fundamentally change how information systems interact with the physical world. They integrate sensing, computing, and communication capabilities on heterogeneous platforms and infrastructures. Efficient and effective perception of the environment lays the foundation for proper operation of other CPS components (e.g., planning and control). Recent advances in artificial intelligence (AI) have unprecedentedly changed how cyber systems extract knowledge from collected sensing data and understand their physical surroundings. This novel data-to-knowledge transformation capability pushes a wide spectrum of recognition tasks (e.g., visual object detection, speech recognition, and sensor-based human activity recognition) to a higher level, and opens a new era of intelligent cyber-physical systems. However, state-of-the-art neural perception models are typically computation-intensive and sensitive to data noise, which induces significant challenges when they are deployed on resource-limited embedded platforms. This dissertation works on optimizing both the efficiency and the efficacy of deep-neural-network (DNN)-based machine perception in intelligent cyber-physical systems. We extensively exploit and apply the design philosophy of attention, which originated in the field of cognitive psychology, from multiple perspectives of machine perception. It generally means allocating different degrees of concentration to different perceived stimuli. Specifically, we address the following five research questions. First, can we run computation-intensive neural perception models in real time by only looking at (i.e., scheduling) the important parts of the perceived scenes, with cueing from an external sensor? Second, can we eliminate the dependency on external cueing and make the scheduling framework a self-cueing system? Third, how should workloads be distributed among cameras in a distributed (visual) perception system, where multiple cameras can observe the same parts of the environment? Fourth, how can the achieved perception quality be optimized when sensing data from heterogeneous locations and sensor types are collected and utilized? Fifth, how should sensor failures be handled in a distributed sensing system when the deployed neural perception models are sensitive to missing data? We formulate the above problems and introduce corresponding attention-based solutions for each, constructing the fundamental building blocks for an attention-based machine perception system in intelligent CPS with both efficiency and efficacy guarantees.
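    The scheduling idea behind the first research question can be sketched as greedy selection of the highest-attention regions within a per-frame compute budget. The region representation (dicts with a `score` and a `cost`) and the greedy policy are hypothetical illustrations, not the dissertation's actual scheduler.

    ```python
    def schedule_regions(regions, budget):
        """Greedily select regions in descending attention score until the
        per-frame compute budget is exhausted; unselected regions are skipped
        this frame, so the expensive perception model runs only where it matters."""
        selected, used = [], 0.0
        for region in sorted(regions, key=lambda r: r["score"], reverse=True):
            if used + region["cost"] <= budget:
                selected.append(region)
                used += region["cost"]
        return selected
    ```

    In a cued system the scores would come from an external sensor; in a self-cueing system they would come from cheap features of the frame itself.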

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian. They have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward both students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications identify common goals and address issues of general interest to the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, the availability of a reference book on mediasync becomes necessary. This book fills the gap in this context. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.

    Evolutionary and Cognitive Approaches to Voice Perception in Humans: Acoustic Properties, Personality and Aesthetics

    Voices are used as a vehicle for language, and variation in the acoustic properties of voices also contains information about the speaker. Listeners use measurable qualities, such as pitch and formant traits, as cues to a speaker’s physical stature and attractiveness. Emotional states and personality characteristics are also judged from vocal stimuli. The research contained in this thesis examines vocal masculinity, aesthetics and personality, with an emphasis on the perception of prosocial traits including trustworthiness and cooperativeness. I will also explore themes which are more cognitive in nature, testing aspects of vocal stimuli which may affect trait attribution, memory and the ascription of identity. Chapters 2 and 3 explore systematic differences across vocal utterances, both in types of utterance using different classes of stimuli and across the time course of perception of the auditory signal. These chapters examine variation in acoustic measurements in addition to variation in listener attributions of commonly judged speaker traits. The most important result from this work was that evaluations of attractiveness made using spontaneous speech correlated with those made using scripted speech recordings, but did not correlate with those made of the same persons using vowel stimuli. This calls into question the use of sustained vowel sounds for obtaining ratings of subjective characteristics. Vowel and single-word stimuli are also quite short; while I found that attributions of masculinity were reliable at very short exposure times, more subjective traits like attractiveness and trustworthiness require a longer exposure time to elicit reliable attributions. I conclude by recommending an exposure time of at least 5 seconds for such traits to be reliably assessed. Chapter 4 examines which vocal traits affect perceptions of prosocial qualities, using both natural and manipulated variation in voices. While feminine pitch traits (F0 and F0-SD) were linked to cooperativeness ratings, masculine formant traits (Df and Pf) were also associated with cooperativeness. The relative importance of these traits as social signals is discussed. Chapter 5 questions what makes a voice memorable, and helps to differentiate between memory for individual voice identities and memory for the content which was spoken, by administering recognition tests both within and across sensory modalities. While the data suggest that experimental manipulation of voice pitch did not influence memory for vocalised stimuli, attractive male voices were better remembered than unattractive voices, independent of pitch manipulation. Memory for cross-modal (textual) content was enhanced by raising the voice pitch of both male and female speakers. I link this pattern of results to the perceived dominance of voices which have been raised and lowered in pitch, and how this might impact how memories are formed and retained. Chapter 6 examines masculinity across visual and auditory sensory modalities using a cross-modal matching task. While participants were able to match voices to muted videos of both male and female speakers at rates above chance, and to static face images of men (but not women), differences in masculinity did not influence observers in their judgements, and voice and face masculinity were not correlated. These results are discussed in terms of the generally accepted theory that masculinity and femininity in faces and voices communicate the same underlying genetic quality. The biological mechanisms by which vocal and facial masculinity could develop independently are speculated upon.

    Uncertainty in Artificial Intelligence: Proceedings of the Thirty-Fourth Conference


    Full Proceedings, 2018

    Full conference proceedings for the 2018 International Building Physics Association Conference hosted at Syracuse University.

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows quantitative evaluation of the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal considered.