8 research outputs found

    Facial Micro- and Macro-Expression Spotting and Generation Methods

    Facial micro-expression (ME) recognition requires the interval of facial movement as input, but computational methods for spotting MEs still underperform. This is due to the lack of large-scale long-video datasets, and ME generation methods are still in their infancy. This thesis presents methods to address the data-deficiency issue and introduces a new method for spotting macro- and micro-expressions simultaneously. The thesis introduces SAMM Long Videos (SAMM-LV), which contains 147 annotated long videos, and develops a baseline method to facilitate the ME Grand Challenge 2020. Further, reference-guided style transfer with StarGANv2 is applied to SAMM-LV to generate a synthetic dataset, namely SAMM-SYNTH. The quality of SAMM-SYNTH is evaluated using facial action units detected by OpenFace; quantitative measurement shows high correlations on two action units (AU12 and AU6) between the original and synthetic data. For facial expression spotting, a two-stream 3D convolutional neural network with temporally oriented frame skips is proposed that can spot micro- and macro-expressions simultaneously. This method achieves state-of-the-art performance on SAMM-LV, is competitive on CAS(ME)2, and served as the baseline result for the ME Grand Challenge 2021. The F1-score improves to 0.1036 when the network is trained with composite data consisting of SAMM-LV and SAMM-SYNTH. On the unseen ME Grand Challenge 2022 evaluation dataset, it achieves an F1-score of 0.1531. Finally, a new sequence generation method is proposed to explore the capability of deep learning networks: it generates spontaneous facial expressions using only two input sequences and no labels. SSIM and NIQE were used for image-quality analysis, on which the generated data achieved 0.87 and 23.14, respectively. By visualising the movements using optical-flow values and absolute frame differences, the method demonstrates its potential for generating subtle MEs. For realism evaluation, the generated videos were rated by two facial expression recognition networks.
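The abstract reports SSIM as one of its image-quality scores. As an illustrative sketch only (not the thesis's evaluation code), a single-window SSIM between two grayscale frames can be computed from their means, variances, and covariance, with the standard constants C1 and C2 for a dynamic range of 255:

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Single-window SSIM over whole images. Illustrative: the standard
    metric averages SSIM over a sliding Gaussian window, which penalises
    local distortions more strongly than this global variant."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1 = (0.01 * L) ** 2
    C2 = (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

# Hypothetical frames, for illustration only:
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(frame + rng.normal(0, 10, size=frame.shape), 0, 255)
print(global_ssim(frame, frame))  # identical frames score ~1.0
print(global_ssim(frame, noisy))  # additive noise lowers the score
```

A score near 1 indicates the generated frame closely matches the reference in luminance, contrast, and structure, which is how a reported SSIM of 0.87 is read.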

    Images on the Move: Materiality - Networks - Formats

    In contemporary society, digital images have become increasingly mobile. They are networked, shared on social media, and circulated across small and portable screens. Accordingly, the discourses of spreadability and circulation have come to supersede the focus on production, indexicality, and manipulability, which had dominated early conceptions of digital photography and film. However, the mobility of images is neither technologically nor conceptually limited to the realm of the digital. This edited volume re-examines the historical, aesthetic, and theoretical relevance of image mobility. The contributors provide a materialist account of images on the move, ranging from wired photography to postcards to streaming media.

    Image and Video Forensics

    Nowadays, images and videos have become the main modalities of information exchanged in everyday life, and their pervasiveness has led the image forensics community to question their reliability, integrity, confidentiality, and security. Multimedia content is generated in many different ways through the use of consumer electronics and high-quality digital imaging devices, such as smartphones, digital cameras, tablets, and wearable and IoT devices. The ever-increasing convenience of image acquisition has facilitated instant distribution and sharing of digital images on social platforms, producing a great volume of exchanged data. Moreover, the pervasiveness of powerful image editing tools has allowed the manipulation of digital images for malicious or criminal ends, up to the creation of synthesized images and videos with deep learning techniques. In response to these threats, the multimedia forensics community has mounted major research efforts on source identification and manipulation detection. In all cases where images and videos serve as critical evidence (e.g., forensic investigations, fake news debunking, information warfare, and cyberattacks), forensic technologies that help determine the origin, authenticity, and integrity of multimedia content can become essential tools. This book collects a diverse and complementary set of articles that demonstrate new developments and applications in image and video forensics, tackling new and serious challenges to media authenticity.

    Digital Interaction and Machine Intelligence

    This book is open access, which means that you have free and unlimited access. It presents the proceedings of the 9th Machine Intelligence and Digital Interaction Conference. Significant progress in the development of artificial intelligence (AI) and its wider use in many interactive products are quickly transforming further areas of our life, resulting in the emergence of various new social phenomena. Many countries have been making efforts to understand these phenomena and to find answers on how to put the development of artificial intelligence on the right track to support the common good of people and societies. These attempts require interdisciplinary action, covering not only the scientific disciplines involved in the development of artificial intelligence and human-computer interaction but also close cooperation between researchers and practitioners. For this reason, the main goal of the MIDI conference, held on 9-10.12.2021 as a virtual event, was to integrate two until recently independent fields of research in computer science: broadly understood artificial intelligence and human-technology interaction.

    Application of an improved video-based depth inversion technique to a macrotidal sandy beach

    Storm conditions are considered the dominant erosional mechanism for the coastal zone. Morphological changes during storms are hard to measure due to energetic environmental conditions; surveys are therefore mostly executed right after a storm, on a local scale, over a single or a few storms [days to weeks]. The impact of a single storm may depend on the preceding sequence of storms. Here, a video camera system is deployed at the beach of Porthtowan in the South-West of England to observe and assess short-term storm impact and long-term recovery. The morphological change is observed with a state-of-the-art video-based depth estimation tool based on the linear dispersion relationship between depth and wave celerity (cBathy). This work is the first application of this depth estimation tool in a highly energetic macrotidal environment. Within this application, two sources of first-order inaccuracies are identified: 1) camera-related issues at the camera boundaries and 2) fixed pixel locations for all tidal elevations. These systematic inaccuracies are overcome by 1) an adaptive pixel collection scheme and camera boundary solution and 2) freely moving pixels. Together, these solutions yield a maximum RMS-error reduction of 60%. From October 2013 to February 2015, depths were estimated hourly during daylight. This period included the 2013-2014 winter season, the most energetic winter since wave records began. Inter-tidal beach surveys show 200 m3/m of erosion, while the sub-tidal video-derived bathymetries show a sediment loss of around 20 m3/m. At the same time, the sub-tidal (outer) bar changes from 3D to linear due to a significant increase in alongshore wave power during storm conditions.
Complex-EOF based storm-by-storm analysis reveals that the individual storm impact at Porthtowan can be described as a combined function of storm-integrated incident offshore wave power [P] and disequilibrium, and that the tidal range has limited effect on the storm impact. The inter- and sub-tidal domains together gain volume over the 2013-2014 winter, and the two domains show an inverse interactive behaviour, indicating sediment exchange during relatively calm summer conditions. The inter-tidal domain shows accelerated accretion during more energetic conditions in autumn 2014. The outer bar slowly migrated onshore until more energetic wave conditions activated the sub-tidal storm deposits and three-dimensionality was reintroduced. The inter-tidal beach shows full recovery in November 2014, 8 months after the stormy winter.
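The depth inversion described above rests on the linear dispersion relation for surface gravity waves, ω² = g·k·tanh(k·h), which links angular frequency ω, wavenumber k, and water depth h. A minimal sketch of the inversion step with hypothetical values (not the cBathy implementation, which fits many pixel time series simultaneously):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h)
    for depth h, given an observed angular frequency and wavenumber."""
    ratio = omega ** 2 / (G * k)  # equals tanh(k*h), so must lie in (0, 1)
    if not 0.0 < ratio < 1.0:
        raise ValueError("observed (omega, k) pair inconsistent with finite depth")
    return np.arctanh(ratio) / k

# Forward-check with a hypothetical swell wave in 5 m of water:
h_true = 5.0
k = 0.1                                   # wavenumber, rad/m
omega = np.sqrt(G * k * np.tanh(k * h_true))
print(depth_from_dispersion(omega, k))    # recovers ~5.0 m
```

In practice ω and k are estimated from pixel-intensity time series, so the inaccuracies discussed in the abstract (boundary pixels, tide-dependent pixel locations) enter through the observed (ω, k) pairs before this inversion is applied.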

    Bowdoin Orient v.124, no.1-23 (1993-1994)
