
    Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC

    In this paper, we first introduce Interactive Evolutionary Computation (IEC) and briefly describe how we have combined this artificial intelligence technique with an eye-tracker for visual optimization. Next, in order to correctly parameterize our application, we present results from applying data mining techniques to gaze information collected in experiments conducted with about 80 human participants.
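    The listing does not specify which data mining technique was applied to the gaze data; the following is a minimal sketch, assuming per-participant gaze summaries are grouped with k-means clustering. The feature names and the choice of k-means are illustrative, not taken from the paper.

```python
# Minimal sketch: clustering per-participant gaze summaries with k-means.
# Assumptions: feature names and the choice of k-means are illustrative;
# the paper does not state which data mining technique was used.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-participant gaze features:
# [mean fixation duration (ms), fixation count, mean saccade length (px)]
rng = np.random.default_rng(0)
gaze_features = rng.normal(loc=[250.0, 40.0, 120.0],
                           scale=[60.0, 10.0, 30.0],
                           size=(80, 3))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(gaze_features)
print("cluster sizes:", np.bincount(kmeans.labels_))
```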

    Running shoes design system with artificial bee colony method using gaze information

    To retrieve multimodal candidate solutions for real users, we investigated the effectiveness of an interactive evolutionary computation (IEC) method based on an artificial bee colony (ABC) algorithm. Using three types of bees (employed, onlooker, and scout bees), the ABC algorithm retrieves diverse candidate solutions. Our previous study demonstrated the effectiveness of the IEC with the ABC algorithm across various practical IEC parameter settings in a numerical simulation with a pseudo-user that imitates user preferences. The results showed that the IEC with the ABC algorithm could retrieve more multimodal candidates than the interactive genetic algorithm (IGA), previously the principal IEC method. However, we had not examined the effectiveness of the IEC with the ABC algorithm for real users. In this study, we performed experiments with real users, using running shoe designs as the evaluation object. The experiments compared the multimodal candidate solutions retrieved by both methods, with the IGA serving as the baseline. To evaluate candidates, we used the users' gaze information in order to reduce the evaluation load. The results showed that the time needed to evaluate candidates was shorter for the IEC with the ABC algorithm than for the IGA, and that the IEC with the ABC algorithm could retrieve more multimodal candidate solutions than the IGA.
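    The listing does not include the authors' implementation; the following is a minimal sketch of a standard artificial bee colony loop with employed, onlooker, and scout phases, using a toy preference function as a stand-in for gaze-based user evaluation. All parameter names and values are illustrative.

```python
# Minimal sketch of a standard artificial bee colony (ABC) loop.
# Assumptions: the preference function is a toy stand-in for gaze-based
# user evaluation; parameter names and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_SOURCES, LIMIT, ITERS = 5, 20, 10, 200

def preference(x):
    # Toy pseudo-user preference: higher is better near the origin.
    return -np.sum(x ** 2)

def neighbor(i, sources):
    # Perturb one dimension of source i relative to a random partner source.
    j = rng.choice([k for k in range(N_SOURCES) if k != i])
    d = rng.integers(DIM)
    cand = sources[i].copy()
    cand[d] += rng.uniform(-1, 1) * (sources[i][d] - sources[j][d])
    return cand

sources = rng.uniform(-5, 5, size=(N_SOURCES, DIM))
fitness = np.array([preference(s) for s in sources])
trials = np.zeros(N_SOURCES, dtype=int)

for _ in range(ITERS):
    # Employed bees: local search around each food source.
    for i in range(N_SOURCES):
        cand = neighbor(i, sources)
        f = preference(cand)
        if f > fitness[i]:
            sources[i], fitness[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    # Onlooker bees: search sources proportionally to relative fitness.
    probs = fitness - fitness.min() + 1e-9
    probs /= probs.sum()
    for i in rng.choice(N_SOURCES, size=N_SOURCES, p=probs):
        cand = neighbor(i, sources)
        f = preference(cand)
        if f > fitness[i]:
            sources[i], fitness[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
    # Scout bees: abandon exhausted sources and re-initialize them.
    for i in np.where(trials > LIMIT)[0]:
        sources[i] = rng.uniform(-5, 5, size=DIM)
        fitness[i] = preference(sources[i])
        trials[i] = 0

print("best preference found:", fitness.max())
```

    The scout phase is what keeps the population spread over several regions, which is why an ABC-style search can hold on to multiple distinct (multimodal) candidate designs rather than collapsing onto a single optimum.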

    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    Paraphrasing the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and by the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, it is essential to continuously maintain a high level of attention while having a deep understanding of the task being performed and of its context. According to the theory of embodied cognition proposed by Lakoff, using embodied interaction to interact with machines has the potential to promote thinking and learning. Additionally, a hybrid human-machine system using natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford cognitive benefits that outstrip those of more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the operator's level of attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks called Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships between the operator's attention, physical actions, and decision-making. The proposed framework also generates a representative BAN, the Consensus (Majority) Model (CMM), which consists of an iteratively derived graph agreed upon among candidate BANs obtained from experts and from an automatic learning process. Finally, the best combinations of interaction and feedback modalities were determined using particular utility functions. This methodology was applied to a spatial navigation scenario in which the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's level of attention. Users were instructed to complete a series of spatial navigation tasks using an assigned pairing of an interaction modality from five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality from two (visual or auditory). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigation problem. Moreover, the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied, multimodal interaction interface decreased the execution errors that occurred in the cyber-physical scenarios (p < .001). We therefore conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance when solving decision-making problems.
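    The abstract does not give the structure or parameters of a BAN; the following is a minimal sketch of the underlying idea only: a two-node Bayesian network that infers a hidden attention state from an observed physical action via Bayes' rule. The network structure and all probability values are made up for illustration and do not come from the paper.

```python
# Minimal sketch: Bayesian inference of attention from an observed action.
# Assumptions: the structure (attention -> action) and every probability
# value below are illustrative, not the paper's BAN parameters.
prior = {"high": 0.6, "low": 0.4}  # P(attention)

# P(action | attention): likelihood of observed physical expressions.
likelihood = {
    "high": {"precise_gesture": 0.7, "hesitant_gesture": 0.2, "no_action": 0.1},
    "low":  {"precise_gesture": 0.2, "hesitant_gesture": 0.4, "no_action": 0.4},
}

def infer_attention(observed_action):
    """Return P(attention | observed_action) via Bayes' rule."""
    unnorm = {a: prior[a] * likelihood[a][observed_action] for a in prior}
    z = sum(unnorm.values())
    return {a: p / z for a, p in unnorm.items()}

print(infer_attention("hesitant_gesture"))
# e.g. {'high': 0.43, 'low': 0.57} -> the observation shifts belief toward low attention
```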

    Cybersecurity in Educational Networks

    The paper discusses the possible impact of the digital space on humans, as well as human-related directions in the analysis of cyber-security in education: levels of cyber-security, the role of social engineering in the cyber-security of education, and “cognitive vaccination”. “A human” is considered in the general sense, mainly as a learner. The analysis is based on the experience of the hybrid war in Ukraine, which has demonstrated a shift in the target of military operations from military personnel and critical infrastructure to humans in general. Young people are the vulnerable group that can be the main target of cognitive operations in the long-term perspective, and they are the weakest link of the system.

    Using eye-tracking into decision makers evaluation in evolutionary interactive UA-FLP algorithms

    The unequal area facility layout problem (UA-FLP) is an important issue in the design of industrial plants, as well as in other settings such as hospitals or schools, among others. While participating in an interactive design process, the human user is required to evaluate a large number of proposed solutions, which produces both mental and physical fatigue. In this paper, the use of eye-tracking to estimate the user’s evaluations from gaze behavior is investigated. The results show that, after a process of training and data collection, it is possible to obtain a sufficiently good estimation of the user’s evaluations that is independent of both the problem and the user. These promising results support using eye-tracking as a substitute for the mouse during the users’ evaluations.
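    The listing does not describe the estimation model; the following is a minimal sketch, assuming per-layout gaze features (e.g., total fixation time and fixation count) are mapped to a previously given explicit rating with a simple linear regressor. The feature names and the choice of model are illustrative, not taken from the paper.

```python
# Minimal sketch: estimating a user's evaluation score from gaze features.
# Assumptions: feature names and the linear model are illustrative;
# the paper does not specify the estimator used.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical training data: one row per evaluated layout,
# columns = [total fixation time (s), fixation count, revisit count].
gaze = rng.uniform([1.0, 5.0, 0.0], [10.0, 60.0, 8.0], size=(200, 3))
# Hypothetical explicit ratings (1-9) previously given with the mouse.
ratings = np.clip(0.5 * gaze[:, 0] + 0.05 * gaze[:, 1]
                  + rng.normal(0, 0.5, 200), 1, 9)

model = LinearRegression().fit(gaze, ratings)

# Predict the rating for a new layout from gaze behaviour alone.
new_layout_gaze = np.array([[6.2, 34, 3]])
print("estimated evaluation:", model.predict(new_layout_gaze)[0])
```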