
    The eyes know it: FakeET -- An Eye-tracking Database to Understand Deepfake Perception

    We present FakeET, an eye-tracking database to understand human visual perception of deepfake videos. Given that the principal purpose of deepfakes is to deceive human observers, FakeET is designed to understand and evaluate the ease with which viewers can detect synthetic video artifacts. FakeET contains viewing patterns compiled from 40 users via the Tobii desktop eye-tracker for 811 videos from the Google Deepfake dataset, with a minimum of two viewings per video. Additionally, EEG responses acquired via the Emotiv sensor are also available. The compiled data confirms (a) distinct eye-movement characteristics for real vs. fake videos; (b) the utility of the eye-track saliency maps for spatial forgery localization and detection; and (c) Error Related Negativity (ERN) triggers in the EEG responses, and the ability of the raw EEG signal to distinguish between real and fake videos. (8 pages)
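
    The abstract does not give an algorithm here, but as a rough illustration of how an eye-track saliency map could act as a spatial prior for forgery localization, the sketch below accumulates Gaussian blobs at fixation points and uses the result to weight per-pixel artifact scores. The fixation format, frame size, Gaussian bandwidth and the random stand-in detector output are all assumptions for illustration only.

```python
import numpy as np

def gaze_saliency_map(fixations, frame_hw, sigma=30.0):
    """Accumulate a 2D Gaussian blob at each fixation point into a saliency map.

    fixations: iterable of (x, y) pixel coordinates (assumed format).
    frame_hw: (height, width) of the video frame.
    sigma: Gaussian bandwidth in pixels (assumed value, not from the paper).
    """
    h, w = frame_hw
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros((h, w), dtype=np.float64)
    for fx, fy in fixations:
        sal += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma ** 2))
    return sal / sal.max() if sal.max() > 0 else sal

# Hypothetical usage: weight per-pixel artifact scores by where viewers looked,
# so regions that attract gaze contribute more to the forgery decision.
fixations = [(120, 80), (130, 90), (300, 200)]      # toy fixation points
saliency = gaze_saliency_map(fixations, frame_hw=(360, 640))
artifact_scores = np.random.rand(360, 640)          # stand-in detector output
weighted = float((saliency * artifact_scores).sum() / saliency.sum())
print(f"saliency-weighted forgery score: {weighted:.3f}")
```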

    Cell sorting in a Petri dish controlled by computer vision.

    Fluorescence-activated cell sorting (FACS), which applies flow cytometry to separate cells on a molecular basis, is a widespread method. We demonstrate that both fluorescent and unlabeled live cells in a Petri dish observed under a microscope can be automatically recognized by computer vision and picked up by a computer-controlled micropipette. The method can be routinely applied, like a FACS, down to the single-cell level with very high selectivity. The sorting resolution, i.e., the minimum distance between two cells from which one could be selectively removed, was 50-70 micrometers. The survival rate with low numbers of 3T3 mouse fibroblasts and NE-4C neuroectodermal mouse stem cells was 66 +/- 12% and 88 +/- 16%, respectively. The purity of sorted cultures and the rate of survival using NE-4C/NE-GFP-4C co-cultures were 95 +/- 2% and 62 +/- 7%, respectively. Hydrodynamic simulations confirmed the experimental sorting efficiency and indicated a cell-damage risk similar to that of normal FACS.
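
    As a minimal sketch of the vision step such a system needs, the code below thresholds a fluorescence image, finds cell centroids, and keeps as pick-up targets only cells farther from their nearest neighbour than the reported 50-70 micrometer sorting resolution. The pixel scale, threshold and toy image are assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

UM_PER_PIXEL = 0.65          # assumed microscope calibration
MIN_SEPARATION_UM = 70.0     # conservative end of the reported 50-70 um resolution

def find_pickable_cells(image, threshold):
    """Return centroids (row, col in pixels) of bright blobs that are far enough
    from their nearest neighbour to be removed selectively by the micropipette."""
    labels, n = ndimage.label(image > threshold)
    if n == 0:
        return []
    centroids = np.array(ndimage.center_of_mass(image, labels, range(1, n + 1)))
    pickable = []
    for i, c in enumerate(centroids):
        others = np.delete(centroids, i, axis=0)
        if len(others) == 0:
            pickable.append(c)
            continue
        nearest_um = np.min(np.linalg.norm(others - c, axis=1)) * UM_PER_PIXEL
        if nearest_um >= MIN_SEPARATION_UM:
            pickable.append(c)
    return pickable

# Toy image: two well-separated bright cells on a dark background.
img = np.zeros((400, 400))
img[50, 50] = img[300, 300] = 10.0
img = ndimage.gaussian_filter(img, sigma=3)
print(find_pickable_cells(img, threshold=0.01))   # both cells qualify
```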

    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support manipulation of bones and their segments during the planning of a surgical operation in a virtual environment using a haptic interface. To perform an effective dental surgery, it is important to have all operation-related information about the patient available beforehand in order to plan the operation and avoid complications. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical and useful for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: the haptic tool determines and defines the bone-cutting plane, and this approach creates three new meshes from the original model. In this way the computational cost is kept low and real-time feedback can be achieved during all bone manipulations. During the cutting movement, a friction profile predefined in the haptic system simulates the force-feedback feel of the different densities within the bone.
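
    The abstract does not spell out the cutting algorithm, but a plane-based split of a triangle mesh already yields the three sub-meshes it mentions: triangles entirely on one side of the haptic-tool plane, triangles entirely on the other side, and triangles straddling the cut (which a full implementation would re-triangulate). The triangle-soup representation and the toy tetrahedron below are assumptions for illustration.

```python
import numpy as np

def split_mesh_by_plane(vertices, faces, plane_point, plane_normal):
    """Partition triangles into three sub-meshes relative to a cutting plane:
    above it, below it, and straddling it.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    signed = (vertices - np.asarray(plane_point, dtype=float)) @ n  # per-vertex distance
    above, below, straddle = [], [], []
    for tri in faces:
        s = signed[tri]
        if np.all(s >= 0):
            above.append(tri)
        elif np.all(s <= 0):
            below.append(tri)
        else:
            straddle.append(tri)
    return np.array(above), np.array(below), np.array(straddle)

# Toy tetrahedron cut by the plane z = 0.25: one face lies entirely below
# the plane, the other three straddle it.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
up, down, cut = split_mesh_by_plane(verts, faces, [0, 0, 0.25], [0, 0, 1])
print(len(up), len(down), len(cut))   # 0 1 3
```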

    Visualizing Object Oriented Software in Three Dimensions

    There is increasing evidence that it is possible to perceive and understand increasingly complex information systems if they are displayed as graphical objects in a three-dimensional space. Object-oriented software provides an interesting test case: there is a natural mapping from software objects to visual objects. In this paper we explore two areas. 1) Information perception: we are running controlled experiments to determine empirically if our initial premise is valid; how much more (or less) can be understood in 3D than in 2D? 2) Layout: our strategy is to combine partially automatic layout with manual layout. This paper presents a brief overview of the project, the software architecture and some preliminary empirical results.
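
    The paper's exact visual mapping is not reproduced here, but the sketch below shows the kind of software-object-to-visual-object mapping it alludes to: each class becomes a 3D box whose footprint and height are driven by simple metrics, with the row position left to manual layout and spacing computed automatically. The metric-to-geometry scheme and class names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ClassInfo:
    name: str
    num_methods: int
    num_attributes: int
    lines_of_code: int

def to_box(cls, x, z):
    """Map one class to a 3D box: footprint from method/attribute counts,
    height from code size (an illustrative mapping, not the paper's)."""
    return {
        "name": cls.name,
        "position": (x, 0.0, z),                 # placed on a ground plane
        "width": 1.0 + 0.2 * cls.num_methods,
        "depth": 1.0 + 0.2 * cls.num_attributes,
        "height": 0.5 + 0.01 * cls.lines_of_code,
    }

# Manual choice of row (z) combined with automatic spacing along x.
classes = [ClassInfo("Parser", 12, 4, 800), ClassInfo("Lexer", 8, 2, 350)]
scene = [to_box(c, x=i * 5.0, z=0.0) for i, c in enumerate(classes)]
for box in scene:
    print(box)
```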

    Self-Supervised Video Forensics by Audio-Visual Anomaly Detection

    Manipulated videos often contain subtle inconsistencies between their visual and audio signals. We propose a video forensics method, based on anomaly detection, that can identify these inconsistencies and that can be trained solely on real, unlabeled data. We train an autoregressive model to generate sequences of audio-visual features, using feature sets that capture the temporal synchronization between video frames and sound. At test time, we then flag videos to which the model assigns low probability. Despite being trained entirely on real videos, our model obtains strong performance on the task of detecting manipulated speech videos. Project site: https://cfeng16.github.io/audio-visual-forensics (CVPR 2023)
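
    A minimal sketch of the test-time idea, under several assumptions: a tiny GRU stands in for the paper's autoregressive sequence model, random tensors stand in for real audio-visual features, a squared next-step prediction error replaces the negative log-likelihood (equivalent up to constants for a unit-variance Gaussian), the training loop on real videos is omitted, and the flagging threshold is invented.

```python
import torch
import torch.nn as nn

class ARFeatureModel(nn.Module):
    """Tiny autoregressive model over audio-visual feature sequences:
    predicts the next feature vector from the past."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, feat_dim)

    def forward(self, x):              # x: (batch, time, feat_dim)
        h, _ = self.rnn(x[:, :-1])     # condition on all but the last step
        return self.head(h)            # predictions for steps 1..T-1

def anomaly_score(model, feats):
    """Mean next-step prediction error; higher means less likely under the
    model, so high-scoring videos are flagged as possible manipulations."""
    with torch.no_grad():
        pred = model(feats)
        return ((pred - feats[:, 1:]) ** 2).mean(dim=(1, 2))

# Toy usage with random features standing in for real audio-visual features.
model = ARFeatureModel()
videos = torch.randn(4, 30, 64)        # (videos, time steps, feature dim)
scores = anomaly_score(model, videos)
flagged = scores > scores.mean() + 2 * scores.std()   # assumed threshold rule
print(scores, flagged)
```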