Systems engineering approaches to safety in transport systems
During driving, monitoring driver behavior may provide useful information for preventing road traffic accidents caused by driver distraction. It has been shown that 90% of road traffic accidents are due to human error, and in 75% of these cases human error is the only cause. Car manufacturers have been interested in driver monitoring research for several years, aiming to enhance general knowledge of driver behavior and to evaluate the driver's functional state, since it may drastically influence driving safety through distraction, fatigue, mental workload and attention. Fatigue and sleepiness at the wheel are well-known risk factors for traffic accidents.
The Human Factor (HF) plays a fundamental role in modern transport systems. Drivers and transport operators control a vehicle towards its destination according to their own senses, physical condition, experience and ability, and safety relies strongly on the HF, which has to make the right decisions. On the other hand, we are experiencing a gradual shift towards increasingly autonomous vehicles, where the HF still constitutes an important component but may in fact become the "weakest link of the chain", requiring strong and effective training feedback.
The studies that investigate the possibility of using biometric or biophysical signals as data sources to evaluate the interaction between human brain activity and an electronic machine belong to the Human Machine Interface (HMI) framework. The HMI can acquire human signals to analyse specific embedded structures and recognize the behavior of the subject during his/her interaction with the machine or with virtual interfaces such as PCs or other communication systems. Based on my previous experience related to planning and monitoring of hazardous material transport, this work aims to create control models focused on driver behavior and changes in his/her physiological parameters. Three case studies have been considered, using the interaction between an EEG system and an external device, such as a driving simulator or electronic components. One case study concerns the detection of the driver's behavior during a driving test. Another concerns the detection of the driver's arm movements from EEG data during a driving test. The third is the setting up of a Brain Computer Interface (BCI) model able to detect head movements in human participants from the EEG signal and to control an electronic component according to the electrical brain activity due to head-turning movements. Some videos showing the experimental results are available at https://www.youtube.com/channel/UCj55jjBwMTptBd2wcQMT2tg.
XXXIV Ciclo - Informatica e Ingegneria dei Sistemi / Computer Science and Systems Engineering - Ingegneria dei sistemi. Zero, Enric
IDL-XML based information sharing model for enterprise integration
CIM (Computer Integrated Manufacturing) is a mechanized approach to problem solving in an enterprise. Its basis is intercommunication between information systems, in order to provide a faster and more effective decision-making process. These results help minimize human error, improve overall productivity and guarantee customer satisfaction. Most enterprises or corporations started implementing integration by adopting automated solutions in a particular process, department, or area, in isolation from the rest of the physical or intelligent processes, resulting in the inability of systems and equipment to share information with each other and with other computer systems. The goal in a manufacturing environment is to have a set of systems that interact seamlessly with each other within a heterogeneous object framework, overcoming the many barriers (language, platforms, and even physical location) that prevent information sharing. This study identifies the data needs of several information systems of a corporation and proposes a conceptual model to improve the information sharing process and thus Computer Integrated Manufacturing. The architecture proposed in this work provides a methodology for data storage, data retrieval, and data processing in order to provide integration at the enterprise level. There are four layers of interaction in the proposed IXA architecture. The name IXA (IDL - XML Architecture for Enterprise Integration) is derived from the standards and technologies used to define the layers and corresponding functions of each layer. The first layer addresses the systems and applications responsible for data manipulation. The second layer provides the interface definitions to facilitate the interaction between the applications on the first layer. The third layer is where data is structured using XML for storage, and the fourth layer is a central repository and its database management system.
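The third layer described above structures application data as XML before it reaches the central repository. A minimal sketch of that step, assuming an illustrative flat record and invented field names (not taken from the proposed architecture):

```python
# Hedged sketch of the XML-structuring layer: wrap a flat application
# record in an XML envelope for storage in a central repository.
# The table name and fields are illustrative assumptions.
import xml.etree.ElementTree as ET

def record_to_xml(table: str, record: dict) -> str:
    """Serialize one record as an XML document string."""
    root = ET.Element("record", attrib={"table": table})
    for field, value in record.items():
        child = ET.SubElement(root, field)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = record_to_xml("work_order", {"id": 42, "status": "released"})
print(xml_doc)
```

Because every application emits the same envelope, the repository layer can parse records without knowing which system produced them, which is the point of a shared structuring layer.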
Affective computing in computer vision: a study on facial expression recognition
The use of artificial intelligence has become increasingly popular in recent years, allowing technology once thought of as futuristic to become possible and utilised at the consumer level. Many technological barriers to human-computer interaction have been overcome, and there is now a focus on the sociological acceptance of such technology. Inferring human emotional states is a time-consuming process and can be automated with computer vision. In this study, we explore how computer vision and face recognition systems can be leveraged to automatically infer human emotional states from the face. Rather than the classical single-emotion classification method, our aim is to explore whether it is possible to perform regression techniques to observe valence and arousal. Following the topology tuning of 33 different neural networks, the results show that valence and arousal can be predicted by a branched Convolutional Neural Network model with a mean squared error of 0.066 and 0.107, respectively. In addition, we discuss methods of improving the model, as well as uses of the technology, which include the autonomous monitoring of affect during situations of technological acceptance
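The branched architecture the abstract describes, shared features feeding separate valence and arousal regression outputs, can be sketched in miniature. This is a toy NumPy forward pass under assumed shapes; the study's 33 tuned topologies are not reproduced here:

```python
# Minimal sketch of a branched regression network: one shared
# convolutional feature extractor, two linear output branches
# (valence and arousal). All sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def forward(face, shared_kernel, w_val, w_aro):
    """Shared conv features feed two separate regression branches."""
    features = np.maximum(conv2d_valid(face, shared_kernel), 0)  # ReLU
    flat = features.ravel()
    valence = float(flat @ w_val)  # branch 1
    arousal = float(flat @ w_aro)  # branch 2
    return valence, arousal

face = rng.standard_normal((8, 8))  # stand-in for a cropped face image
kernel = rng.standard_normal((3, 3))
flat_dim = 6 * 6                    # (8-3+1) x (8-3+1) feature map
valence, arousal = forward(face, kernel,
                           rng.standard_normal(flat_dim),
                           rng.standard_normal(flat_dim))
print(valence, arousal)
```

Training both branches against a shared trunk lets the valence and arousal losses regularize each other, which is the usual motivation for branching rather than training two independent networks.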
User interfaces and discrete event simulation models.
A user interface is critical to the success of any computer-based system. Numerous studies have shown that interface design has a significant influence on factors such as learning time, performance speed, error rates, and user satisfaction. Computer-based simulation modelling is one of the domains that is particularly demanding in terms of user interfaces. It is also an area that often pioneers new technologies that are not necessarily previously researched in terms of human-computer interaction. The dissertation describes research into user interfaces for discrete event simulation. Issues that influence the 'usability' of such systems are examined. Several representative systems were investigated in order to generate some general assumptions with respect to those characteristics of user interfaces employed in simulation systems. A case study was carried out to gain practical experience and to identify possible problems that can be encountered in user interface development. There is a need for simulation systems that can support the developments of simulation models in many domains, which are not supported by contemporary simulation software. Many user interface deficiencies are discovered and reported. On the basis of findings in this research, proposals are made on how user interfaces for simulation systems can be enhanced to match better the needs specific to the domain of simulation modelling, and on how better to support users in simulation model developments. Such improvements in user interfaces that better support users in simulation model developments could achieve a reduction in the amount of time needed to learn simulation systems, support retention of learned concepts over time, reduce the number of errors during interaction, reduce the amount of time and effort needed for model development, and provide greater user satisfaction
Do (and say) as I say: Linguistic adaptation in human-computer dialogs
© Theodora Koulouri, Stanislao Lauria, and Robert D. Macredie. This article has been made available through the Brunel Open Access Publishing Fund. There is strong research evidence showing that people naturally align to each other's vocabulary, sentence structure, and acoustic features in dialog, yet little is known about how the alignment mechanism operates in the interaction between users and computer systems, let alone how it may be exploited to improve the efficiency of the interaction. This article provides an account of lexical alignment in human-computer dialogs, based on empirical data collected in a simulated human-computer interaction scenario. The results indicate that alignment is present, resulting in the gradual reduction and stabilization of the vocabulary-in-use, and that it is also reciprocal. Further, the results suggest that when system and user errors occur, the development of alignment is temporarily disrupted and users tend to introduce novel words to the dialog. The results also indicate that alignment in human-computer interaction may have a strong strategic component and is used as a resource to compensate for less optimal (visually impoverished) interaction conditions. Moreover, lower alignment is associated with less successful interaction, as measured by user perceptions. The article distills the results of the study into design recommendations for human-computer dialog systems and uses them to outline a model of dialog management that supports and exploits alignment through mechanisms for in-use adaptation of the system's grammar and lexicon.
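The in-use lexicon adaptation the abstract proposes can be illustrated with a toy mechanism: the system tracks which variant of a concept the user actually says and echoes that term back. The synonym table and class names are invented for this sketch, not taken from the article's model:

```python
# Toy sketch of lexical alignment in a dialog manager: prefer the
# user's observed term over the system's canonical one when generating.
# The synonym table is an invented example.
synonyms = {"photo": {"photo", "picture", "image"}}  # concept -> variants

class AligningLexicon:
    def __init__(self):
        self.preferred = {}  # concept -> term the user last used

    def observe_user(self, utterance):
        """Update preferences from the user's word choices."""
        for concept, variants in synonyms.items():
            for word in utterance.lower().split():
                if word in variants:
                    self.preferred[concept] = word  # align to the user

    def term_for(self, concept):
        """Generate with the aligned term, falling back to the canonical one."""
        return self.preferred.get(concept, concept)

lex = AligningLexicon()
lex.observe_user("send me that picture")
print(lex.term_for("photo"))  # prints "picture"
```

A fuller dialog manager would also let alignment decay or reset after errors, consistent with the finding that errors temporarily disrupt alignment.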
Nomadic input on mobile devices: the influence of touch input technique and walking speed on performance and offset modeling
In everyday life people use their mobile phones on-the-go at different walking speeds and with different touch input techniques. Unfortunately, much of the published research in mobile interaction does not quantify the influence of these variables. In this paper, we analyze the influence of walking speed, gait pattern and input technique on commonly used performance parameters like error rate, accuracy and tapping speed, and we compare the results to the static condition. We examine the influence of these factors on the machine-learned offset model used to correct user input and we make design recommendations. The results show that all performance parameters degraded when the subject started to move, for all input techniques. Index-finger pointing techniques demonstrated overall better performance than thumb-pointing techniques. The influence of gait phase on tap event likelihood and accuracy was demonstrated for all input techniques and all walking speeds. Finally, it was shown that the offset model built on static data did not perform as well as models inferred from dynamic data, which indicates the speed-specific nature of the models. Also, models identified using specific input techniques did not perform well when tested in other conditions, demonstrating the limited validity of offset models to a particular input technique. The model was therefore calibrated using data recorded with the appropriate input technique, at 75% of preferred walking speed, which is the speed to which users spontaneously slow down when they use a mobile device and which presents a tradeoff between accuracy and usability. This led to an increase in accuracy compared to models built on static data. The error rate was reduced by between 0.05% and 5.3% for landscape-based methods and between 5.3% and 11.9% for portrait-based methods.
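The correction idea behind an offset model, learning a map from where taps land to where users intended to hit, can be sketched with a simple affine model fitted by least squares. The synthetic calibration data and the specific model form are assumptions for illustration; the paper's actual model and its 75%-walking-speed calibration data are not reproduced:

```python
# Illustrative touch-offset model: an affine map from recorded tap
# position to intended target, fitted by least squares on synthetic
# calibration data (taps systematically land right and below targets).
import numpy as np

rng = np.random.default_rng(1)

targets = rng.uniform(0, 100, size=(200, 2))        # intended positions
true_offset = np.array([3.0, -5.0])                 # systematic tap bias
taps = targets + true_offset + rng.normal(0, 0.5, size=targets.shape)

# Fit intended position as an affine function of the tap: [x, y, 1] @ W.
X = np.hstack([taps, np.ones((len(taps), 1))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

def correct(tap):
    """Apply the learned offset model to a raw tap position."""
    return np.array([*tap, 1.0]) @ W

corrected = correct(np.array([53.0, 45.0]))
print(corrected)  # ≈ the intended target near (50, 50)
```

The paper's speed- and technique-specificity findings correspond, in this sketch, to the fitted `W` being valid only for the condition the calibration data came from.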
Towards error categorisation in BCI: single-trial EEG classification between different errors
Objective: Error-related potentials (ErrP) are generated in the brain when humans perceive errors. These ErrP signals can be used to classify actions as erroneous or non-erroneous, using single-trial electroencephalography (EEG). A small number of studies have demonstrated the feasibility of using ErrP detection as feedback for reinforcement-learning-based Brain-Computer Interfaces (BCI), confirming the possibility of developing more autonomous BCI. These systems could be made more efficient with specific information about the type of error that occurred. A few studies differentiated the ErrP of different errors from each other, based on direction or severity. However, errors cannot always be categorised in these ways. We aimed to investigate the feasibility of differentiating very similar error conditions from each other, in the absence of previously explored metrics.
Approach: In this study, we used two data sets with 25 and 14 participants to investigate the differences between errors. The two error conditions in each task were similar in terms of severity, direction and visual processing. The only notable differences between them were the varying cognitive processes involved in perceiving the errors, and differing contexts in which the errors occurred. We used a linear classifier with a small feature set to differentiate the errors on a single-trial basis.
Results: For both data sets, we observed neurophysiological distinctions between the ErrPs related to each error type. We found further distinctions between age groups. Furthermore, we achieved statistically significant single-trial classification rates for most participants included in the classification phase, with mean overall accuracies of 65.2% and 65.6% for the two tasks.
Significance: As a proof of concept, our results showed that it is feasible, using single-trial EEG, to classify these similar error types against each other. This study paves the way for more detailed and efficient learning in BCI, and thus for more autonomous human-machine interaction.
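The approach above, a linear classifier with a small feature set applied to single trials, can be sketched with Fisher's linear discriminant on synthetic stand-in features. The data here are invented (Gaussian features with a small class shift, mimicking two similar error types); the study's two EEG data sets are not reproduced:

```python
# Hedged sketch of single-trial classification between two similar
# error types, using a linear classifier (Fisher LDA) on a small
# feature set. Features are synthetic stand-ins for, e.g., mean
# amplitudes in post-error EEG time windows.
import numpy as np

rng = np.random.default_rng(2)

def lda_fit(X, y):
    """Fisher LDA: w = Sw^-1 (mu1 - mu0), bias midway between class means."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -0.5 * (mu0 + mu1) @ w
    return w, b

def lda_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Synthetic single-trial features: two error types whose class means
# differ only slightly, as with similar error conditions.
n, d = 300, 6
y = rng.integers(0, 2, size=n)
X = rng.normal(0, 1, size=(n, d)) + 0.5 * y[:, None]

w, b = lda_fit(X[:200], y[:200])                     # train on 200 trials
acc = (lda_predict(X[200:], w, b) == y[200:]).mean() # test on 100 trials
print(f"held-out accuracy: {acc:.2f}")
```

With weakly separated classes like these, held-out accuracy lands well above chance but far from ceiling, the same regime as the roughly 65% accuracies reported above.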