uC: Ubiquitous Collaboration Platform for Multimodal Team Interaction Support
A human-centered computing platform that improves teamwork and transforms the "human-computer interaction experience" for distributed teams is presented. The objective of this Ubiquitous Collaboration, or uC ("you see"), platform is to transform distributed teamwork (i.e., work occurring when teams of workers and learners are geographically dispersed and often interacting at different times). It achieves this goal through a multimodal team interaction interface realized through a reconfigurable open architecture. The approach taken is to integrate: (1) an intuitive speech- and video-centric multimodal interface to augment more conventional methods (e.g., mouse, stylus and touch), (2) an open and reconfigurable architecture supporting information gathering, and (3) a machine-intelligent approach to the analysis and management of heterogeneous live and stored sensor data to support collaboration. The system will transform how teams of people interact with computers by drawing on both the virtual and physical environment.
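To make the reconfigurable, multimodal architecture described above a little more concrete, the following is a minimal illustrative sketch of a pluggable modality event bus in Python. Every class, modality name and payload here is a hypothetical stand-in for illustration; none of it is taken from the uC platform itself.

```python
# Minimal sketch of a reconfigurable multimodal event pipeline; all names
# are hypothetical and not drawn from the uC platform.
from dataclasses import dataclass, field
from typing import Callable, Dict, List
import time

@dataclass
class ModalityEvent:
    modality: str          # e.g. "speech", "video", "touch"
    payload: dict          # raw or pre-processed sensor data
    timestamp: float = field(default_factory=time.time)

class CollaborationBus:
    """Routes events from pluggable modality handlers to analysis modules."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[ModalityEvent], None]]] = {}

    def subscribe(self, modality: str, handler: Callable[[ModalityEvent], None]) -> None:
        # Registering handlers at runtime is what makes the bus reconfigurable.
        self._subscribers.setdefault(modality, []).append(handler)

    def publish(self, event: ModalityEvent) -> None:
        for handler in self._subscribers.get(event.modality, []):
            handler(event)

bus = CollaborationBus()
bus.subscribe("speech", lambda e: print("transcribe:", e.payload))
bus.publish(ModalityEvent("speech", {"audio_chunk": b"..."}))
```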
Going Deeper into Action Recognition: A Survey
Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation and semantic segmentation. Over the last decade, human action analysis evolved from earlier schemes that were often limited to controlled environments to today's advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications, from video surveillance to human-computer interaction, scientific milestones in action recognition are achieved more rapidly, quickly rendering once-adequate methods obsolete. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then navigate into the realm of deep learning based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable fallbacks, in the hope of raising fresh questions and motivating new research directions for the reader.
A Novel Inpainting Framework for Virtual View Synthesis
Multi-view imaging has stimulated significant research to enhance the user experience of free viewpoint video, allowing interactive navigation between views and the freedom to select a desired view to watch. This usually involves transmitting both textural and depth information captured from different viewpoints to the receiver, to enable the synthesis of an arbitrary view. In rendering these virtual views, perceptual holes can appear when regions hidden in the original view by a closer object become visible in the virtual view. To provide a high-quality experience, these holes must be filled in a visually plausible way, in a process known as inpainting. This is challenging because the missing information is generally unknown and the hole regions can be large. Recently, depth-based inpainting techniques have been proposed to address this challenge, and while these generally perform better than non-depth-assisted methods, they are not very robust and can produce perceptual artefacts.
This thesis presents a new inpainting framework that innovatively exploits depth and textural self-similarity characteristics to construct subjectively enhanced virtual viewpoints. The framework makes three significant contributions to the field: i) the exploitation of view information to jointly inpaint textural and depth hole regions; ii) the introduction of the novel concept of self-similarity characterisation which is combined with relevant depth information; and iii) an advanced self-similarity characterising scheme that automatically determines key spatial transform parameters for effective and flexible inpainting.
The presented inpainting framework has been critically analysed and shown to provide superior performance, both perceptually and numerically, compared to existing techniques, especially in terms of lower visual artefacts. It provides a flexible, robust framework for developing new inpainting strategies for the next generation of interactive multi-view technologies.
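For readers unfamiliar with depth-assisted inpainting, the sketch below illustrates one plausible ingredient of such methods: a depth-weighted filling priority that propagates background texture into disoccluded holes first, in the spirit of exemplar-based inpainting. It is a generic illustration under assumed inputs (a binary hole mask and a depth map), not the thesis's framework.

```python
# Hedged sketch of a depth-assisted hole-filling step: hole-boundary pixels
# are prioritised by depth so that background texture is propagated first.
# The weighting scheme is illustrative only.
import numpy as np

def fill_priorities(hole_mask: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Return a priority map over hole-boundary pixels (higher = fill first)."""
    # Boundary of the hole: hole pixels with at least one known neighbour.
    # (np.roll wraps at image borders; fine for this interior example.)
    known = ~hole_mask
    neighbour_known = np.zeros_like(hole_mask, dtype=float)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neighbour_known += np.roll(known, (dy, dx), axis=(0, 1))
    boundary = hole_mask & (neighbour_known > 0)

    # Favour larger depth values (farther surfaces), so disoccluded regions
    # are completed from background rather than the occluding foreground.
    return np.where(boundary, depth / (depth.max() + 1e-8), 0.0)

hole = np.zeros((8, 8), bool); hole[3:6, 3:6] = True
depth = np.tile(np.linspace(1.0, 5.0, 8), (8, 1))   # depth grows left to right
print(np.unravel_index(fill_priorities(hole, depth).argmax(), hole.shape))
```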
GSmoothFace: Generalized Smooth Talking Face Generation via Fine Grained 3D Face Guidance
Although existing speech-driven talking face generation methods achieve significant progress, they are far from real-world application due to their avatar-specific training demands and unstable lip movements. To address these issues, we propose GSmoothFace, a novel two-stage generalized talking face generation model guided by a fine-grained 3D face model, which can synthesize smooth lip dynamics while preserving the speaker's identity. Our proposed GSmoothFace model mainly consists of the Audio to Expression Prediction (A2EP) module and the Target Adaptive Face Translation (TAFT) module. Specifically, we first develop the A2EP module to predict expression parameters synchronized with the driving speech. It uses a transformer to capture the long-term audio context and learns the parameters from the fine-grained 3D facial vertices, resulting in accurate and smooth lip-synchronization performance. Afterward, the well-designed TAFT module, empowered by Morphology Augmented Face Blending (MAFB), takes the predicted expression parameters and the target video as inputs to modify the facial region of the target video without distorting the background content. TAFT effectively exploits the identity appearance and background context in the target video, which makes it possible to generalize to different speakers without retraining. Both quantitative and qualitative experiments confirm the superiority of our method in terms of realism, lip synchronization, and visual quality. Code and data are available on the project page, where pre-trained models can be requested: https://zhanghm1995.github.io/GSmoothFace
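For intuition, here is a minimal PyTorch sketch of an A2EP-style module as the abstract describes it: a transformer encoder maps an audio feature sequence to per-frame expression parameters. All dimensions, layer counts and names are assumptions for illustration, not the released GSmoothFace implementation (see the project page for that).

```python
# Illustrative sketch of an A2EP-style audio-to-expression predictor.
# Dimensions and layer counts are assumptions, not the GSmoothFace release.
import torch
import torch.nn as nn

class AudioToExpression(nn.Module):
    def __init__(self, audio_dim=80, d_model=256, n_expr=64, n_layers=4):
        super().__init__()
        self.proj = nn.Linear(audio_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_expr)   # 3DMM expression coefficients

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim), e.g. mel-spectrogram frames.
        h = self.encoder(self.proj(audio_feats))  # long-term audio context
        return self.head(h)                       # (batch, frames, n_expr)

model = AudioToExpression()
expr = model(torch.randn(2, 100, 80))  # two clips, 100 audio frames each
print(expr.shape)                      # torch.Size([2, 100, 64])
```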
Data comparison schemes for Pattern Recognition in Digital Images using Fractals
Pattern recognition in digital images is a common problem with applications in remote sensing, electron microscopy, medical imaging, seismic imaging and astrophysics, for example. Although this subject has been researched for over twenty years, there is still no general solution that can be compared with the human cognitive system, in which a pattern can be recognised subject to arbitrary orientation and scale.
The application of Artificial Neural Networks can in principle provide a very general solution, provided suitable training schemes are implemented. However, this approach raises some major issues in practice. First, the CPU time required to train an ANN for a grey-level or colour image can be very large, especially if the object has a complex structure with no clear geometrical features, such as those that arise in remote sensing applications. Secondly, the core and file-space memory required to represent large images and their associated data leads to a number of problems in which the use of virtual memory is paramount.
The primary goal of this research has been to assess methods of image data compression for pattern recognition using a range of different compression methods. In particular, this research has resulted in the design and implementation of a new algorithm for general pattern recognition based on the use of fractal image compression.
This approach has, for the first time, allowed the pattern recognition problem to be solved in a way that is invariant to rotation and scale. It allows both ANNs and correlation to be used, subject to appropriate pre- and post-processing techniques for digital image processing, an aspect for which a dedicated programmer's workbench has been developed using X-Designer.
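As background, the core step of fractal (PIFS) image compression that this work builds on can be sketched as follows: each small range block is matched against contracted domain blocks under an affine grey-level map s*D + o. The block sizes and exhaustive search below are illustrative simplifications, not the thesis's algorithm.

```python
# Minimal sketch of the PIFS encoding step behind fractal image compression.
import numpy as np

def encode_block(range_blk: np.ndarray, domains: list[np.ndarray]):
    """Find the domain block and (s, o) minimising ||s*D + o - R||^2."""
    best = None
    r = range_blk.ravel().astype(float)
    for idx, dom in enumerate(domains):
        d = dom.ravel().astype(float)
        var = d.var()
        # Least-squares contrast s and brightness o for this domain block.
        s = 0.0 if var == 0 else np.cov(d, r, bias=True)[0, 1] / var
        s = np.clip(s, -1.0, 1.0)            # keep the grey-level map contractive
        o = r.mean() - s * d.mean()
        err = np.mean((s * d + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, contrast s, brightness o)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
domains = [img[y:y+8, x:x+8][::2, ::2]       # contracted 8x8 -> 4x4 domains
           for y in range(0, 8, 4) for x in range(0, 8, 4)]
print(encode_block(img[:4, :4], domains))
```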
A Hybrid Similarity Measure Framework for Multimodal Medical Image Registration
Medical imaging is widely used today to facilitate both disease diagnosis and treatment planning practice, with a key prerequisite being the systematic process of medical image registration (MIR) to align either monomodal or multimodal images of different anatomical parts of the human body. MIR utilises a similarity measure (SM) to quantify the level of spatial alignment and is particularly demanding due to the presence of inherent modality characteristics such as intensity non-uniformities (INU) in magnetic resonance images and large homogeneous non-vascular regions in retinal images. While various intensity- and feature-based SMs exist for MIR, mutual information (MI) has become established because of its computational efficiency and ability to register multimodal images. It is, however, very sensitive to interpolation artefacts in the presence of INU with noise, and can be compromised when overlapping areas are small. Recently, MI-based hybrid variants which combine regional features with intensity have emerged, though these incur high dimensionality and large computational overheads.
To address these challenges and secure accurate, efficient and robust registration of images containing high INU, noise and large homogeneous regions, this thesis presents a new hybrid SM framework for 2D multimodal rigid MIR. The framework consistently provides superior quantitative and qualitative performance, while offering a uniquely flexible design trade-off between registration accuracy and computational time. It makes three significant technical contributions to the field: i) an expectation maximisation-based principal component analysis with mutual information (EMPCA-MI) framework incorporating neighbourhood feature information; ii) two innovative enhancements to reduce information redundancy and improve MI computational efficiency; and iii) an adaptive algorithm to select the most significant principal components for feature selection.
The thesis findings conclusively confirm that the hybrid SM framework offers an accurate and robust 2D registration solution for challenging multimodal medical imaging datasets, while its inherent flexibility means it can also be extended to the 3D registration domain.
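For reference, the classical MI similarity measure at the heart of such frameworks can be computed from a joint intensity histogram, as sketched below; the EMPCA neighbourhood-feature stage of the thesis's framework is deliberately not reproduced here, and the bin count and test images are arbitrary.

```python
# Brief sketch of the mutual-information similarity measure computed from a
# joint intensity histogram of two images.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """MI(A;B) = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal over image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal over image B
    nz = p_ab > 0                           # skip empty bins to avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
moving = np.clip(fixed + 0.05 * rng.standard_normal((64, 64)), 0, 1)
# In registration, this score is maximised over candidate spatial
# transformations of the moving image; higher MI indicates better alignment.
print(mutual_information(fixed, moving))
```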