Framing the detection of elder financial abuse as bystander intervention: Decision cues, pathways to detection and barriers to action
Purpose – The purpose of this paper is to explore the detection and prevention of elder financial abuse through the lens of a “professional bystander intervention model”. The authors were interested in the decision cues that raise suspicions of financial abuse, how such abuse comes to the attention of professionals who do not have a statutory responsibility for safeguarding older adults, and the barriers to intervention.
Design/methodology/approach – In-depth interviews were conducted using the critical incident technique. Thematic analysis was carried out on transcribed interviews. In total, 20 banking and 20 health professionals were recruited. Participants were asked to discuss real cases which they had dealt with personally.
Findings – The cases described indicated that a variety of cues were used in coming to a decision that financial abuse was very likely taking place. Common to these cases was a discrepancy between what is normal and expected and what is abnormal or unexpected. There was a marked difference in the type of abuse noticed by banking and health professionals, drawing attention to the ways in which context influences the likelihood that financial abuse will be detected. The study revealed that even if professionals suspect abuse, there are barriers which prevent them acting.
Originality/value – The originality of this study lies in its use of the bystander intervention model to study the decision-making processes of professionals who are not explicitly charged with adult safeguarding. The study was also unique because real cases were under consideration. Hence, what the professionals actually do, rather than what they might do, was under investigation.
Funder: Economic and Social Research Council.
Solving Visual Madlibs with Multiple Cues
This paper focuses on answering fill-in-the-blank style multiple choice
questions from the Visual Madlibs dataset. Previous approaches to Visual
Question Answering (VQA) have mainly used generic image features from networks
trained on the ImageNet dataset, despite the wide scope of questions. In
contrast, our approach employs features derived from networks trained for
specialized tasks of scene classification, person activity prediction, and
person and object attribute prediction. We also present a method for selecting
sub-regions of an image that are relevant for evaluating the appropriateness of
a putative answer. Visual features are computed both from the whole image and
from local regions, while sentences are mapped to a common space using a simple
normalized canonical correlation analysis (CCA) model. Our results show a
significant improvement over the previous state of the art, and indicate that
answering different question types benefits from examining a variety of image
cues and carefully choosing informative image sub-regions.
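The answer-scoring step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the projection matrices W_img and W_txt stand in for ones fitted by a normalized CCA on (image feature, answer embedding) training pairs, and all names and dimensions here are assumptions for the sketch.

```python
import numpy as np

def cca_score(img_feat, ans_feats, W_img, W_txt):
    """Score candidate answers against an image in a shared space.

    W_img / W_txt are placeholders for projections learned via
    normalized CCA; scoring is cosine similarity in the common space.
    """
    u = W_img.T @ img_feat                          # project the image
    V = ans_feats @ W_txt                           # one row per candidate
    u = u / np.linalg.norm(u)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ u                                    # cosine similarity per candidate

rng = np.random.default_rng(0)
W_img = rng.standard_normal((512, 64))              # illustrative dimensions
W_txt = rng.standard_normal((300, 64))
img = rng.standard_normal(512)
answers = rng.standard_normal((4, 300))             # 4 multiple-choice candidates
scores = cca_score(img, answers, W_img, W_txt)
best = int(np.argmax(scores))                       # highest-scoring answer
```

In the paper's setting the image feature would come from a task-specific network (scene, activity, or attribute) and could be computed over the whole image or a selected sub-region, with the same scoring applied.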
Priming Neural Networks
Visual priming is known to affect the human visual system to allow detection
of scene elements, even those that may have been near unnoticeable before, such
as the presence of camouflaged animals. This process has been shown to be an
effect of top-down signaling in the visual system triggered by the said cue. In
this paper, we propose a mechanism to mimic the process of priming in the
context of object detection and segmentation. We view priming as having a
modulatory, cue dependent effect on layers of features within a network. Our
results show how such a process can be complementary to, and at times more
effective than simple post-processing applied to the output of the network,
notably so in cases where the object is hard to detect such as in severe noise.
Moreover, we find the effects of priming are sometimes stronger when early
visual layers are affected. Overall, our experiments confirm that top-down
signals can go a long way in improving object detection and segmentation.
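The modulatory, cue-dependent effect on a layer's features can be sketched as a per-channel gating of activations. This is a hedged illustration of the general idea, not the paper's exact mechanism: the projection W from cue space to channel gains is a hypothetical stand-in for a learned mapping.

```python
import numpy as np

def prime(features, cue_embedding, W):
    """Modulate a layer's feature maps with a cue-derived gate.

    features: (C, H, W) activations from an early conv layer.
    cue_embedding: vector encoding the priming cue (e.g. a class word).
    W: assumed learned projection from cue space to per-channel gains.
    """
    gains = 1.0 + np.tanh(W @ cue_embedding)    # per-channel multiplier in (0, 2)
    return features * gains[:, None, None]       # broadcast over spatial dims

rng = np.random.default_rng(1)
feats = rng.standard_normal((16, 8, 8))          # toy activations
cue = rng.standard_normal(32)                    # toy cue embedding
W = rng.standard_normal((16, 32)) * 0.1
primed = prime(feats, cue, W)
```

A gate of this form leaves channels near their original scale when the cue is uninformative (gain near 1) and amplifies or suppresses them as the cue dictates, which matches the abstract's observation that priming is most effective when it reaches early visual layers.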
Digging Deeper into Egocentric Gaze Prediction
This paper digs deeper into factors that influence egocentric gaze. Instead
of training deep models for this purpose in a blind manner, we propose to
inspect factors that contribute to gaze guidance during daily tasks. Bottom-up
saliency and optical flow are assessed versus strong spatial prior baselines.
Task-specific cues such as vanishing point, manipulation point, and hand
regions are analyzed as representatives of top-down information. We also look
into the contribution of these factors by investigating a simple recurrent
neural model for egocentric gaze prediction. First, deep features are
extracted for all input video frames. Then, a gated recurrent unit is employed
to integrate information over time and to predict the next fixation. We also
propose an integrated model that combines the recurrent model with several
top-down and bottom-up cues. Extensive experiments over multiple datasets
reveal that (1) spatial biases are strong in egocentric videos, (2) bottom-up
saliency models perform poorly in predicting gaze and underperform spatial
biases, (3) deep features perform better compared to traditional features, (4)
as opposed to hand regions, the manipulation point is a strong influential cue
for gaze prediction, (5) combining the proposed recurrent model with bottom-up
cues, vanishing points and, in particular, manipulation point results in the
best gaze prediction accuracy over egocentric videos, (6) the knowledge
transfer works best for cases where the tasks or sequences are similar, and (7)
task and activity recognition can benefit from gaze prediction. Our findings
suggest that (1) there should be more emphasis on hand-object interaction and
(2) the egocentric vision community should consider larger datasets including
diverse stimuli and more subjects.
Comment: presented at WACV 201
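The recurrent model described above (per-frame deep features integrated by a gated recurrent unit, with a readout predicting the next fixation) can be sketched as follows. This is a minimal numpy sketch under assumed shapes and randomly initialized parameters, not the authors' trained model.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, P):
    """One GRU step; P holds stand-in (randomly initialized) parameters."""
    z = sigmoid(P['Wz'] @ x + P['Uz'] @ h)            # update gate
    r = sigmoid(P['Wr'] @ x + P['Ur'] @ h)            # reset gate
    h_cand = np.tanh(P['Wh'] @ x + P['Uh'] @ (r * h)) # candidate state
    return (1 - z) * h + z * h_cand

def predict_fixations(frames, P, W_out):
    """Integrate per-frame features over time; read out a fixation
    (x, y) in [0, 1]^2 after each step."""
    h = np.zeros(P['Uz'].shape[0])
    out = []
    for x in frames:
        h = gru_step(x, h, P)
        out.append(sigmoid(W_out @ h))
    return np.array(out)

rng = np.random.default_rng(0)
D, H = 32, 16                       # feature / hidden sizes (illustrative)
P = {k: rng.standard_normal((H, D)) * 0.1 for k in ('Wz', 'Wr', 'Wh')}
P.update({k: rng.standard_normal((H, H)) * 0.1 for k in ('Uz', 'Ur', 'Uh')})
W_out = rng.standard_normal((2, H))
frames = rng.standard_normal((5, D))        # 5 frames of deep features
fix = predict_fixations(frames, P, W_out)   # one (x, y) per frame
```

The paper's integrated variant would additionally combine this recurrent prediction with top-down cues (vanishing point, manipulation point, hand regions) and bottom-up saliency before the final readout.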