179 research outputs found
Illumination Variation Correction Using Image Synthesis For Unsupervised Domain Adaptive Person Re-Identification
Unsupervised domain adaptive (UDA) person re-identification (re-ID) aims to
learn identity information from labeled images in source domains and apply it
to unlabeled images in a target domain. One major issue with many unsupervised
re-identification methods is that they do not perform well under large domain
variations such as illumination, viewpoint, and occlusion. In this
paper, we propose a Synthesis Model Bank (SMB) to deal with illumination
variation in unsupervised person re-ID. The proposed SMB consists of several
convolutional neural networks (CNNs) for feature extraction and Mahalanobis
matrices for distance metrics. They are trained using synthetic data with
different illumination conditions such that their synergistic effect makes the
SMB robust against illumination variation. To better quantify the illumination
intensity and improve the quality of synthetic images, we introduce a new 3D
virtual-human dataset for GAN-based image synthesis. From our experiments, the
proposed SMB outperforms other synthesis methods on several re-ID benchmarks.
Comment: 10 pages, 5 figures, 5 tables
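For intuition, here is a minimal Python sketch of the kind of matching the abstract describes: a bank of per-illumination feature extractors paired with Mahalanobis matrices, whose distances are fused at query time. All names (mahalanobis_sq, smb_distance, the fusion rule, and the toy extractors) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): matching with a bank of feature
# extractors and per-condition Mahalanobis metrics, as the SMB abstract
# describes. The extractors, metrics, and "min" fusion rule are assumptions.
import numpy as np

def mahalanobis_sq(u, v, M):
    """Squared Mahalanobis distance (u - v)^T M (u - v)."""
    d = u - v
    return float(d @ M @ d)

def smb_distance(img_query, img_gallery, extractors, metrics, fuse="min"):
    """Fuse distances from several (CNN, Mahalanobis matrix) pairs.

    extractors: callables mapping an image to a 1-D feature vector, each
                trained for a different synthetic illumination condition.
    metrics:    positive semi-definite matrices, one per extractor.
    """
    dists = [
        mahalanobis_sq(f(img_query), f(img_gallery), M)
        for f, M in zip(extractors, metrics)
    ]
    return min(dists) if fuse == "min" else float(np.mean(dists))

if __name__ == "__main__":
    # Toy usage: random linear projections stand in for trained CNNs.
    rng = np.random.default_rng(0)
    dim = 8
    projections = [rng.normal(size=(dim, 16)) for _ in range(3)]
    extractors = [lambda x, P=P: P @ x.ravel() for P in projections]
    metrics = [np.eye(dim) for _ in projections]  # identity = Euclidean
    q, g = rng.normal(size=16), rng.normal(size=16)
    print(smb_distance(q, g, extractors, metrics))
```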
A regression method for real-time video quality evaluation
No-Reference (NR) metrics provide a mechanism to assess video quality in ever-growing wireless networks. Their low computational complexity and functional characteristics make them the primary choice for real-time content management and mobile streaming control. Unfortunately, common NR metrics suffer from poor accuracy, particularly on network-impaired video streams. In this work, we introduce a regression-based video quality metric that is simple enough for real-time computation on thin clients, yet comparable in accuracy to state-of-the-art Full-Reference (FR) metrics, which are functionally and computationally infeasible for real-time streaming. We benchmark our metric against the FR metric VQM (Video Quality Metric), finding a very strong correlation.
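As a rough illustration of the regression approach (not the paper's actual feature set or model), the sketch below fits a least-squares mapping from a few hypothetical NR features to reference FR scores and reports the Pearson correlation typically used for such benchmarking.

```python
# Minimal sketch under stated assumptions: regress cheap NR features
# (the three toy features here are hypothetical) onto FR scores, then
# report the Pearson correlation between predictions and FR scores.
import numpy as np

def fit_nr_metric(features, fr_scores):
    """Least-squares linear regression: score ~ w . features + b."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    coef, *_ = np.linalg.lstsq(X, fr_scores, rcond=None)
    return coef

def predict(features, coef):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ coef

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = rng.uniform(size=(200, 3))         # 3 toy NR features per clip
    vqm = feats @ np.array([0.6, -0.3, 0.8]) + 0.1 * rng.normal(size=200)
    coef = fit_nr_metric(feats, vqm)
    print("Pearson r vs. FR scores:", pearson(predict(feats, coef), vqm))
```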
Cognitive impairment and World Trade Centre-related exposures
On 11 September 2001, the World Trade Center (WTC) in New York was attacked by terrorists, causing the collapse of multiple buildings including the iconic 110-story ‘Twin Towers’. Thousands of people died that day from the collapse of the buildings, fires, falling from the buildings, falling debris, or other related accidents. Survivors of the attacks, those who worked in search and rescue during and after the buildings collapsed, and those working in recovery and clean-up operations were exposed to severe psychological stressors. Concurrently, these ‘WTC-affected’ individuals breathed and ingested a mixture of organic and particulate neurotoxins and pro-inflammogens generated as a result of the attack and building collapse. Twenty years later, researchers have documented neurocognitive and motor dysfunctions that resemble the typical features of neurodegenerative disease in some WTC responders at midlife. Cortical atrophy, which usually manifests later in life, has also been observed in this population. Evidence indicates that neurocognitive symptoms and corresponding brain atrophy are associated with both physical exposures at the WTC and chronic post-traumatic stress disorder, including regularly re-experiencing traumatic memories of the events while awake or during sleep. Despite these findings, little is understood about the long-term effects of these physical and mental exposures on the brain health of WTC-affected individuals, and the potential for neurocognitive disorders. Here, we review the existing evidence concerning neurological outcomes in WTC-affected individuals, with the aim of contextualizing this research for policymakers, researchers and clinicians and educating WTC-affected individuals and their friends and families. We conclude by providing a rationale and recommendations for monitoring the neurological health of WTC-affected individuals.
Hybrid video quality prediction: reviewing video quality measurement for widening application scope
A tremendous number of objective video quality measurement algorithms have been developed during the last two decades. Most of them either measure a very limited aspect of perceived video quality or measure broad ranges of quality with limited prediction accuracy. This paper lists several perceptual artifacts that may be computationally measured by an isolated algorithm, along with some of the modeling approaches that have been proposed to predict the resulting quality from those algorithms. Such algorithms usually have a very limited application scope but have been verified carefully. The paper continues with a review of some standardized and well-known video quality measurement algorithms that are meant for a wide range of applications and thus have a larger scope. Their prediction accuracy for individual artifacts is usually lower, but some of them were validated to perform sufficiently well for standardization. Several difficulties and shortcomings in developing a general-purpose model with high prediction performance are identified, such as the lack of a common objective quality scale and the behavior of individual indicators when confronted with stimuli outside their prediction scope. The paper concludes with a systematic framework approach to tackling the development of a hybrid video quality measurement in a joint research collaboration.
Polish National Centre for Research and Development (NCRD) SP/I/1/77065/10; Swedish Governmental Agency for Innovation Systems (Vinnova)
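To make the hybrid idea concrete, here is a small, purely illustrative sketch of fusing isolated artifact indicators onto a common quality scale; the indicator names and weights are assumptions, not part of the reviewed framework.

```python
# Minimal sketch of the hybrid fusion idea: map several isolated artifact
# scores (already on a common 0-100 scale, an assumption here) to one
# quality score via weighted fusion. Indicator names and weights are toy.
from typing import Dict

def fuse_indicators(indicators: Dict[str, float],
                    weights: Dict[str, float]) -> float:
    """Weighted fusion of per-artifact scores on a shared 0-100 scale."""
    total_w = sum(weights.get(name, 0.0) for name in indicators)
    if total_w == 0:
        raise ValueError("no usable indicators")
    score = sum(weights.get(name, 0.0) * value
                for name, value in indicators.items()) / total_w
    return max(0.0, min(100.0, score))  # clamp to the common scale

# Example: three hypothetical artifact models feeding one hybrid score.
print(fuse_indicators(
    {"blockiness": 72.0, "blur": 65.0, "packet_loss_visibility": 40.0},
    {"blockiness": 0.3, "blur": 0.3, "packet_loss_visibility": 0.4},
))
```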