Prevention of Electrostatic Charge Generation in Filtration of Low-Conductivity Oils by Surface Modification of Modern Filter Media
The electrostatic charging behavior of filter elements operating in various hydraulic and lubricating
fluids has been re-examined from the perspective of fundamental material properties of the two
materials participating in the event. In contrast to the previously proposed mechanisms that
focused predominantly on fluid and material conductivities, new evidence strongly suggests that
the relative placement of the substrates in the triboelectric series must be taken into account. The
positions occupied in the triboelectric series account for the donor/acceptor tendencies exhibited
by the materials when brought into close proximity (~10 nm). Nevertheless, this
behavior is only an outward manifestation of the deeper underlying characteristics that include
material surface energies and, looking even deeper, the associated electron work functions of the
interacting materials. Herein we provide several examples of the enhanced understanding of the
electrostatic charging/discharging (ESC/ESD) phenomena as they occur in the course of filtration
of hydraulic and lubricating fluids through modern filter elements constructed of synthetic glass
fiber and polymer materials.
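The donor/acceptor logic the abstract describes can be sketched as a polarity prediction from electron work functions: the material with the lower work function tends to donate electrons on contact and charge positive. A minimal sketch, with purely illustrative work-function values (not measured data for any real filter medium):

```python
# Sketch: predicting triboelectric charge polarity from electron work functions.
# The numeric values below are hypothetical placeholders for illustration only.

WORK_FUNCTION_EV = {
    "glass_fiber": 4.7,    # hypothetical value
    "polymer_media": 5.2,  # hypothetical value
}

def charge_polarity(material_a: str, material_b: str) -> dict:
    """The material with the lower work function donates electrons on close
    contact, acquiring a positive charge; the other charges negative."""
    wa, wb = WORK_FUNCTION_EV[material_a], WORK_FUNCTION_EV[material_b]
    if wa == wb:
        return {material_a: "neutral", material_b: "neutral"}
    donor, acceptor = (material_a, material_b) if wa < wb else (material_b, material_a)
    return {donor: "positive", acceptor: "negative"}
```

This mirrors the abstract's claim that triboelectric-series position is an outward manifestation of the underlying work functions: ordering materials by work function reproduces the donor/acceptor ordering of the series.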
A Gaze-enabled Graph Visualization to Improve Graph Reading Tasks
Performing typical network tasks such as node scanning and path tracing can be difficult in large and dense graphs. To alleviate this problem we use eye-tracking as an interactive input to detect tasks that users intend to perform and then produce unobtrusive visual changes that support these tasks. First, we introduce a novel fovea-based filtering that dims out edges with endpoints far removed from a user's view focus. Second, we highlight edges that are being traced at any given moment or have been the focus of recent attention. Third, we track recently viewed nodes and increase the saliency of their neighborhoods. All visual responses are unobtrusive and easily ignored to avoid unintentional distraction and to account for the imprecise and low-resolution nature of eye-tracking. We also introduce a novel gaze-correction approach that relies on knowledge about the network layout to reduce eye-tracking error. Finally, we present results from a controlled user study showing that our methods led to a statistically significant accuracy improvement in one of two network tasks and that our gaze-correction algorithm enables more accurate eye-tracking interaction.
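The fovea-based filtering described above can be sketched as an opacity function over edges: edges with at least one endpoint near the gaze point stay fully visible, while distant edges are dimmed but never hidden. The radius, falloff shape, and minimum opacity below are assumptions for illustration, not parameters from the paper:

```python
import math

def fovea_edge_opacity(edge, gaze, fovea_radius=150.0, min_opacity=0.15):
    """Dim an edge whose endpoints both lie far from the gaze point.
    `edge` is ((x1, y1), (x2, y2)) and `gaze` is (gx, gy), in screen pixels.
    Radius, linear falloff, and opacity floor are illustrative choices."""
    def dist(p):
        return math.hypot(p[0] - gaze[0], p[1] - gaze[1])

    nearest = min(dist(edge[0]), dist(edge[1]))
    if nearest <= fovea_radius:
        return 1.0  # at least one endpoint inside the foveal region
    # Linear falloff beyond the fovea, floored so edges stay faintly
    # visible -- dimming, not hiding, keeps the response easy to ignore.
    falloff = max(0.0, 1.0 - (nearest - fovea_radius) / fovea_radius)
    return max(min_opacity, falloff)
```

Keeping a nonzero opacity floor matches the stated design goal: the visual change must be unobtrusive and tolerant of imprecise gaze estimates.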
Pupillary and microsaccadic responses to cognitive effort and emotional arousal during complex decision making
A large body of literature documents the sensitivity of pupil response to cognitive load (e.g., Krejtz et al., 2018) and emotional arousal (Bradley et al., 2008). Recent empirical evidence also showed that microsaccade characteristics and dynamics can be modulated by mental fatigue and cognitive load (e.g., Dalmaso et al., 2017). Very little is known about the sensitivity of microsaccadic characteristics to emotional arousal. The present paper demonstrates, in a controlled experiment, pupillary and microsaccadic responses to information processing during multi-attribute decision making under affective priming. Twenty-one psychology students were randomly assigned to three affective priming conditions (neutral, aversive, and erotic). Participants were tasked with making several discriminative decisions based on acquired cues. In line with expectations, results showed microsaccadic rate inhibition and pupillary dilation depending on cognitive effort (number of acquired cues) prior to decision. These effects were moderated by affective priming. Aversive priming strengthened pupillary and microsaccadic responses to information processing effort. In general, results suggest that pupillary response is more biased by affective priming than microsaccadic rate. The results are discussed in the light of neuropsychological mechanisms of pupillary and microsaccadic behavior generation.
Towards predicting post-editing productivity
Machine translation (MT) quality is generally measured via automatic metrics, producing scores that have no meaning for translators who are required to post-edit MT output or for project managers who have to plan and budget for translation projects. This paper investigates correlations between two such automatic metrics (general text matcher, GTM, and translation edit rate, TER) and post-editing productivity. For the purposes of this paper, productivity is measured via processing speed and cognitive measures of effort using eye tracking as a tool. Processing speed, average fixation time and count are found to correlate well with the scores for groups of segments. Segments with high GTM and TER scores require substantially less time and cognitive effort than medium or low-scoring segments. Future research involving score thresholds and confidence estimation is suggested.
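For readers unfamiliar with the metrics, translation edit rate is essentially the number of word-level edits needed to turn the MT output into the reference, divided by the reference length. A simplified sketch (real TER also counts block shifts as single edits, which is omitted here):

```python
def simple_edit_rate(hypothesis: str, reference: str) -> float:
    """Simplified translation edit rate: word-level edit distance divided
    by reference word count. Omits TER's block-shift operation."""
    hyp, ref = hypothesis.split(), reference.split()
    m, n = len(hyp), len(ref)
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n] / max(n, 1)
```

Lower edit rates mean the MT output is closer to the reference, which is why segments scoring well on such metrics plausibly need less post-editing time and effort.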
The New CGEMS - Preparing the Computer Graphics Educational Materials Source to Meet the Needs of Educators
ACM SIGGRAPH and Eurographics are restarting CGEMS, the Computer Graphics Educational Materials Source, an on-line repository of curricular material for computer graphics education. In this context, the question that we ask ourselves is: "How can CGEMS best meet the needs of educators?" The aim of this forum is to provide the audience with an idea of the purpose of CGEMS - a source of educational materials for educators by educators - and to give them an opportunity to contribute their views and ideas towards shaping the new CGEMS. Towards this purpose, we have identified a number of issues to resolve, which the panel will put forward to the participants of the forum for discussion.
Eye tracking and visualization. Introduction to the Special Thematic Issue
There is a growing interest in eye tracking technologies applied to support traditional visualization techniques like diagrams, charts, maps, or plots, whether static, animated, or interactive. More complex data analyses are required to derive knowledge and meaning from the data. Eye tracking systems serve that purpose in combination with biological and computer vision, cognition, perception, visualization, human-computer interaction, as well as usability and user experience research. The 10 articles collected in this thematic special issue provide interesting examples of how sophisticated methods of data analysis and representation enable researchers to discover and describe fundamental spatio-temporal regularities in the data. The human visual system, supported by appropriate visualization tools, enables the human operator to solve complex tasks, like understanding and interpreting three-dimensional medical images, controlling air traffic by radar displays, supporting instrument flight tasks, or interacting with virtual realities. The development and application of new visualization techniques is of major importance for future technological progress.
Gaze transitions when learning with multimedia
Eye tracking methodology is used to examine the influence of interactive multimedia on the allocation of visual attention and its dynamics during learning. We hypothesized that an interactive simulation promotes more organized switching of attention between different elements of multimedia learning material, e.g., textual description and pictorial visualization. Participants studied a description of an algorithm accompanied either by an interactive simulation, self-paced animation, or static illustration. Using a novel framework for entropy-based comparison of gaze transition matrices, results showed that the interactive simulation elicited more careful visual investigation of the learning material as well as reading of the problem description through to its completion.
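The entropy-based comparison mentioned above rests on treating gaze switches between areas of interest (AOIs) as a transition matrix and summarizing its randomness with Shannon entropy. A minimal sketch from empirical fixation sequences, weighting each row by how often its source AOI occurs (a simplification of the stationary-distribution weighting used in the literature):

```python
import math
from collections import Counter

def transition_entropy(fixation_aois):
    """Shannon entropy (bits) of the empirical gaze transition matrix.
    `fixation_aois` is a list of AOI labels in fixation order.
    Rows are weighted by empirical source-AOI frequency, a simplification
    of the stationary-distribution weighting in the literature."""
    transitions = Counter(zip(fixation_aois, fixation_aois[1:]))
    total = sum(transitions.values())
    row_totals = Counter()
    for (src, _), count in transitions.items():
        row_totals[src] += count
    entropy = 0.0
    for (src, _), count in transitions.items():
        p_row = row_totals[src] / total    # weight of this source AOI
        p_trans = count / row_totals[src]  # conditional transition probability
        entropy -= p_row * p_trans * math.log2(p_trans)
    return entropy
```

Low entropy indicates organized, predictable switching between AOIs (e.g., systematic alternation between text and picture), while high entropy indicates scattered, unstructured transitions.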
Captions in 360 Video: Rapid Prototyping for User Testing
Extended reality is reinventing our approach to work, learning, culture, and social interaction. Nevertheless, the integration of accessible services within immersive environments is still in progress. This presentation will introduce new prototyping for immersive captioning and discuss how to achieve an optimal and fully inclusive viewing experience.