Homo economicus in visual search
How do reward outcomes affect early visual performance? Previous studies found a suboptimal influence, but they ignored the non-linearity in how subjects perceived the reward outcomes. In contrast, we find that when the non-linearity is accounted for, humans behave optimally and maximize expected reward. Our subjects were asked to detect the presence of a familiar target object in a cluttered scene and were rewarded according to their performance. We systematically varied the target frequency and the reward/penalty policy for detecting/missing the targets. We find that 1) decreasing the target frequency decreases detection rates, in accordance with the literature; 2) contrary to previous studies, increasing the target detection rewards compensates for target rarity and restores detection performance; 3) a quantitative model based on reward maximization accurately predicts human detection behavior in all target frequency and reward conditions, so reward schemes can be designed to obtain desired detection rates for rare targets; and 4) subjects quickly learn the optimal decision strategy; we propose a neurally plausible model that exhibits the same properties. Potential applications include designing reward schemes to improve detection of life-critical, rare targets (e.g., cancers in medical images).
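The reward-maximizing strategy in a yes/no detection task of this kind is classically expressed as a threshold on the likelihood ratio that depends on the target prior and the payoff matrix. A minimal sketch of that standard signal-detection result (function and parameter names are ours, not the paper's):

```python
def optimal_llr_criterion(p_target, r_hit, c_miss, r_cr, c_fa):
    """Likelihood-ratio threshold that maximizes expected reward in a
    yes/no detection task (textbook signal-detection result).

    p_target: prior probability that the target is present
    r_hit, r_cr: rewards for hits and correct rejections
    c_miss, c_fa: costs (positive numbers) for misses and false alarms
    """
    prior_odds = (1.0 - p_target) / p_target
    payoff_ratio = (r_cr + c_fa) / (r_hit + c_miss)
    return prior_odds * payoff_ratio

# Rare targets (1%) with symmetric payoffs demand a very conservative
# criterion, which depresses detection rates...
beta_rare = optimal_llr_criterion(0.01, 1, 1, 1, 1)    # ≈ 99
# ...but boosting the hit reward and miss penalty lowers the threshold
# back toward 1, restoring detections -- consistent with finding 2).
beta_boost = optimal_llr_criterion(0.01, 99, 99, 1, 1)  # ≈ 1
```

The second call illustrates how a reward scheme can be chosen to cancel the effect of target rarity on the optimal criterion.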
MODELING AND TESTING ULTRA-LIGHTWEIGHT THERMOFORM-STIFFENED PANELS
Ultra-lightweight thermoformed stiffened structures are emerging as a viable option for spacecraft applications due to their advantage over inflatable structures. Although pressurization may be used for deployment, constant pressure is not required to maintain stiffness. However, thermoformed stiffening features are often locally nonlinear in their behavior under loading. This thesis has three aspects: 1) to understand the stiffness properties of a thermoformed stiffened ultra-lightweight panel, 2) to develop finite element models using a phased-verification approach, and 3) to verify panel response to dynamic loading. This thesis demonstrates that conventional static and dynamic testing principles can be applied to test ultra-lightweight thermoformed stiffened structures. Another contribution of this thesis is the evaluation of the stiffness properties of different stiffener configurations. Finally, the procedure used in this thesis could be adapted to the study of similar ultra-lightweight thermoformed stiffened spacecraft structures.
Predicting response time and error rates in visual search
A model of human visual search is proposed. It predicts both response time (RT) and error rates (ER) as a function of image parameters such as target contrast and clutter. The model is an ideal observer, in that it optimizes the Bayes ratio of target present vs. target absent. The ratio is computed on the firing pattern of V1/V2 neurons, modeled by Poisson distributions. The optimal mechanism for integrating information over time is shown to be a 'soft max' of diffusions, computed over the visual field by 'hypercolumns' of neurons that share the same receptive field and have different response properties to image features. An approximation of the optimal Bayesian observer, based on integrating local decisions rather than diffusions, is also derived; it is shown experimentally to produce predictions very similar to those of the optimal observer in common psychophysics conditions. A psychophysics experiment is proposed that may discriminate which mechanism is used in the human brain.
Turning off or dimming a device screen based on user attention
Device screens are often set to turn off and/or dim automatically if no user interaction is detected for a specified amount of time. Turning off or dimming the screen saves power and prolongs the amount of time the device can operate without needing to recharge the battery. However, such timeout-based actions can result in false positives or negatives. With user permission, this disclosure uses the contextual input of a user's gaze and attention to manage the automatic turn-off or dimming of the device screen. The techniques reduce false positives and negatives, ensuring that the screen stays on longer if the user is still engaged with the device and turns off or dims before the timeout if the user has stopped using the screen.
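The described behavior amounts to a small decision rule layered on top of the ordinary idle timeout. A hypothetical sketch of that logic (thresholds and names are illustrative, not from the disclosure):

```python
def screen_action(seconds_idle, gaze_on_screen, timeout=30.0, early_dim=5.0):
    """Gaze-aware variant of a fixed screen timeout.

    - user looking at the screen: keep it on regardless of idle time
      (avoids the false positive of dimming mid-read)
    - user looked away: dim after a short grace period, well before the
      ordinary timeout (avoids the false negative of a lit, unused screen)
    - otherwise: fall back to the ordinary timeout
    """
    if gaze_on_screen:
        return "on"
    if seconds_idle >= timeout:
        return "off"
    if seconds_idle >= early_dim:
        return "dim"
    return "on"

screen_action(60, True)    # "on": gaze overrides the idle timeout
screen_action(10, False)   # "dim": dims early once the user looks away
```

In practice the gaze signal would come from a front-facing camera pipeline and be gated on user permission, as the disclosure notes.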
Operator Drowsiness Test
This publication details a quantifiable and objective operator drowsiness test. The test takes between 30 seconds and two minutes to administer. Any smartphone with a front-facing camera and the supporting software can run the newly developed, self-administrable test. It leverages years of sleep-deprivation research that have found objective correlations between drowsiness (or alertness) and physical and behavioral parameters such as gaze, facial features, pupil size, blink rate, blink duration, breathing, pulse, head movements, facial skin tone, speech pattern, and vocal sound. In addition, the mass adoption of smartphones with rear- and front-facing cameras gives researchers the opportunity to deploy this new operator drowsiness test to a wide audience.
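Two of the listed parameters, blink rate and blink duration, can be derived directly from a per-frame eye-open/closed signal produced by the camera pipeline. One illustrative way to compute them (our sketch, not the product's code):

```python
def blink_stats(eye_open, fps):
    """Blink rate (blinks/minute) and mean blink duration (seconds) from
    a per-frame boolean eye-open sequence sampled at `fps` frames/s.

    A blink is a maximal run of closed-eye frames; its duration is the
    run length divided by the frame rate.
    """
    blinks, durations, run = 0, [], 0
    for is_open in eye_open:
        if not is_open:
            run += 1                      # inside a closed-eye run
        elif run:
            blinks += 1                   # run just ended: one blink
            durations.append(run / fps)
            run = 0
    if run:                               # blink still in progress at clip end
        blinks += 1
        durations.append(run / fps)
    minutes = len(eye_open) / fps / 60.0
    rate = blinks / minutes if minutes else 0.0
    mean_dur = sum(durations) / blinks if blinks else 0.0
    return rate, mean_dur
```

Elevated blink duration and reduced blink rate are among the correlates of drowsiness that such a test could score against population baselines.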
Automated model-based transmission line fault location method using reduced equivalent circuit
Transmission lines are a vital part of power systems and are prone to a variety of short-circuit faults. Transmission line faults must be accurately identified and cleared so as to restore the faulted line to normal operation in the shortest amount of time possible. Common fault-locating practices used by utilities involve multiple manual data analysis stages, which can cause time delays. This thesis presents an automated model-based transmission line fault location approach to improving the existing manual process. A fault location process comprises data preprocessing of the fault record and estimation of the fault location using impedance-based techniques. This thesis first elucidates the data preprocessing steps, then proposes and validates an algorithm for determining the fault current and voltage. It then proposes an automated fault location method that simulates relevant fault scenarios on a reduced equivalent circuit to determine the fault location. The proposed fault location method is implemented using MATLAB and OpenDSS. The need for, and process of, forming reduced equivalent circuits for automating the fault location process are presented. A test circuit was used to illustrate the method and to evaluate the technical capabilities of the algorithm. The technical performance of the proposed fault location method was analyzed on a variety of aspects such as line and generator outage, the presence of mutual coupling, the presence of fault resistance, and multiple combinations of the above. The algorithm is robust and capable of handling the above-mentioned issues, which affect the fault location estimate.
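The simplest member of the impedance-based family mentioned above is the single-ended reactance method, which estimates fault distance from the imaginary part of the impedance seen by the relay. A sketch of that textbook estimate (illustrative values; not the thesis's full algorithm, which also handles infeed, mutual coupling, and outages):

```python
def reactance_fault_distance(v_relay, i_relay, z_line_per_km):
    """Single-ended reactance method: estimated distance to fault in km.

    Divides the reactive part of the apparent impedance at the relay by
    the line reactance per km, assuming the fault resistance contributes
    no reactance and carries only the relay-side current.
    """
    z_apparent = v_relay / i_relay            # impedance seen by the relay
    return z_apparent.imag / z_line_per_km.imag

# Synthetic single-line example: fault 80 km down a line of
# 0.03 + 0.3j ohm/km, through a 10-ohm fault resistance.
z_km = 0.03 + 0.3j
i = 1000 + 0j                                 # relay current during the fault
v = i * (80 * z_km + 10)                      # relay voltage it implies
reactance_fault_distance(v, i, z_km)          # recovers ~80 km
```

Because the fault resistance in this idealized example is purely real, it drops out of the imaginary part and the distance is recovered exactly; remote infeed through the fault resistance is what breaks this assumption in practice and motivates the reduced-equivalent-circuit simulation approach.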
Differentially Private Heatmaps
We consider the task of producing heatmaps from users' aggregated data while protecting their privacy. We give a differentially private (DP) algorithm for this task and demonstrate its advantages over previous algorithms on real-world datasets.

Our core algorithmic primitive is a DP procedure that takes in a set of distributions and produces an output that is close in Earth Mover's Distance to the average of the inputs. We prove theoretical bounds on the error of our algorithm under a certain sparsity assumption and that these are near-optimal.

Comment: To appear in AAAI 202
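The natural non-EMD baseline for this primitive is to average the input distributions and privatize each cell with the Laplace mechanism. A sketch of that baseline (our illustration of the setup, not the paper's EMD-based algorithm):

```python
import math
import random

def dp_average_histograms(histograms, epsilon, seed=0):
    """Average normalized histograms and add per-cell Laplace noise.

    Replacing one user's histogram (each has L1 norm 1) changes the
    average by at most 2/n in L1, so Laplace noise of scale
    2/(n*epsilon) per cell yields epsilon-DP. Unlike an EMD-based
    procedure, this treats every cell independently, ignoring the
    spatial structure that makes heatmaps heatmaps.
    """
    rng = random.Random(seed)
    n, k = len(histograms), len(histograms[0])
    avg = [sum(h[j] for h in histograms) / n for j in range(k)]
    scale = 2.0 / (n * epsilon)

    def laplace():
        # Inverse-CDF sampling of a zero-mean Laplace variate.
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    return [a + laplace() for a in avg]
```

For sparse, spatially coherent inputs this per-cell noise is exactly what an Earth Mover's Distance guarantee improves upon, since EMD error penalizes mass moved far more than mass jittered locally.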
Dynamic, Task-Related and Demand-Driven Scene Representation
Humans selectively process and store details about their vicinity based on their knowledge about the scene, the world, and their current task. In doing so, only those pieces of information are extracted from the visual scene that are required for solving a given task. In this paper, we present a flexible system architecture along with a control mechanism that allows for a task-dependent representation of a visual scene. Contrary to existing approaches, our system is able to acquire information selectively according to the demands of the given task and based on the system's knowledge. The proposed control mechanism decides which properties need to be extracted and how the independent processing modules should be combined, based on the knowledge stored in the system's long-term memory. Additionally, it ensures that algorithmic dependencies between processing modules are resolved automatically, utilizing procedural knowledge that is also stored in the long-term memory. By evaluating a proof-of-concept implementation on a real-world table scene, we show that, while solving the given task, the amount of data processed and stored by the system is considerably lower compared to processing regimes used in state-of-the-art systems. Furthermore, our system only acquires and stores the minimal set of information that is relevant for solving the given task.
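The automatic resolution of dependencies between processing modules can be illustrated as a demand-driven traversal over "module provides property / module needs properties" knowledge: only modules on the path to the task's required properties are scheduled, prerequisites first. A sketch of that idea (module and property names are invented for illustration, not the authors' implementation):

```python
def plan_modules(required_props, provides, needs):
    """Select and order processing modules for a task.

    provides: module -> the property it computes
    needs:    module -> the properties it consumes
    Only modules contributing to `required_props` are scheduled, and each
    module appears after the modules that compute its inputs.
    """
    by_prop = {prop: mod for mod, prop in provides.items()}
    plan, seen = [], set()

    def resolve(prop):
        mod = by_prop[prop]
        if mod in seen:
            return
        seen.add(mod)
        for dep in needs.get(mod, []):   # schedule prerequisites first
            resolve(dep)
        plan.append(mod)

    for prop in required_props:
        resolve(prop)
    return plan

provides = {"segment": "regions", "classify": "labels", "locate3d": "positions"}
needs = {"classify": ["regions"], "locate3d": ["regions"]}
plan_modules(["labels"], provides, needs)   # ["segment", "classify"]
```

Note that a task demanding only "labels" never runs the 3-D localization module: this demand-driven pruning is what keeps the amount of processed and stored data minimal.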