
    Scanpath assessment of visible and infrared side-by-side and fused video displays


    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.

    Cognitive Image Fusion and Assessment


    Ensuring the Take-Over Readiness of the Driver Based on the Gaze Behavior in Conditionally Automated Driving Scenarios

    Conditional automation is the next step towards the fully automated vehicle. Under prespecified conditions, an automated driving function can take over the driving task and the responsibility for the vehicle, enabling the driver to perform secondary tasks. However, performing secondary tasks, and the resulting reduced attention to the road, may make take-over situations critical: when the automated driving function reaches its limits, the driver is forced to take back responsibility for, and control of, the vehicle. The driver thus represents the fallback level for the conditionally automated system. The question then arises of how it can be ensured that the driver can take over adequately and in time, without restricting either the automated driving system or the driver's new freedom. To answer this question, this work proposes a novel prototype of an advanced driver assistance system that automatically classifies the driver's take-over readiness in order to keep the driver "in the loop". The results show the feasibility of such a classification of take-over readiness, even in the highly dynamic vehicle environment, using a machine learning approach. In a driving simulator study it was verified that far more than half of the drivers performing a low-quality take-over would have been warned shortly before the actual take-over, whereas nearly 90% of the drivers performing a high-quality take-over would not have been interrupted by the driver assistance system. The classification of the driver's take-over readiness is performed by means of machine learning algorithms. The underlying features are mainly based on the driver's head and eye movement behavior. It is shown how the secondary task currently being performed, as well as glances at the road, can be derived from these measured signals.
To this end, novel, online-capable approaches for driver-activity recognition and Eyes-on-Road detection are introduced, evaluated, and compared with each other based on data from both a simulator study and a real-driving study. These approaches address several challenges of current state-of-the-art methods: i) only a coarse separation of driver activities is possible, ii) costly and time-consuming calibrations are necessary, and iii) there is no adaptation to conditionally automated driving scenarios.

Conditionally automated driving is the next step in the evolution of driver assistance systems towards fully automated vehicles. Under defined conditions, the driver can hand over the driving task, including responsibility for the vehicle, to an automated driving function and is free to engage in other activities. To nevertheless ensure that the driver can regain control of the vehicle as quickly as possible when required, the question arises of how the missing attention to road traffic can be compensated without restricting the conditionally automated driving function or the driver's newly gained freedoms. To answer this question, this work presents a first prototypical driver assistance system that automatically classifies the driver's take-over readiness and, depending on the result, keeps the driver "in the loop". The results show that automated classification using machine learning achieves excellent recognition rates even in the highly dynamic vehicle environment. In one of the driving simulator studies conducted, it was shown that far more than half of the participants with a low take-over quality would have been warned shortly before the actual take-over situation, while nearly 90% of the participants with a high take-over quality would not have been disturbed in their secondary task. This automated classification is based on features obtained by observing the driver with an interior camera. Extracting these features requires methods for driver-activity recognition and for detecting glances at the road, which currently still suffer from certain weaknesses: i) only a coarse distinction between activities is possible, ii) costly and time-consuming calibration steps are necessary, and iii) there is no adaptation to conditionally automated driving scenarios. For these reasons, new methods for driver-activity recognition and for detecting glances at the road were developed, implemented, and evaluated in this work, with applicability under realistic in-vehicle conditions as a central aspect. To evaluate the individual subsystems and the overall driver assistance system, extensive experiments were conducted in a driving simulator as well as in real test vehicles, using both reference-grade and near-production measurement equipment.
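The kind of gaze-based readiness classification described above can be sketched, in spirit, as a simple model over features derived from head and eye movement. Everything below — the feature definitions, the synthetic data, and the tiny logistic model — is an illustrative assumption, not the dissertation's actual pipeline or feature set.

```python
import math
import random

def sigmoid(z):
    # Clamp to avoid exp() overflow as weights grow on separable data.
    z = max(-30.0, min(30.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def extract_features(gaze_samples):
    """gaze_samples: list of (eyes_on_road: bool, head_yaw: float) frames,
    with head yaw assumed pre-normalised to [0, 1]."""
    n = len(gaze_samples)
    on_road_ratio = sum(1 for on, _ in gaze_samples if on) / n
    mean_yaw = sum(yaw for _, yaw in gaze_samples) / n
    yaw_spread = sum(abs(yaw - mean_yaw) for _, yaw in gaze_samples) / n
    return [on_road_ratio, yaw_spread]

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of the loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def is_ready(w, b, features):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b) >= 0.5

# Synthetic, clearly separable training data: "ready" drivers look at the
# road most of the time and move their head little.
random.seed(0)
X, y = [], []
for _ in range(200):
    ready = random.random() < 0.5
    ratio = random.uniform(0.6, 1.0) if ready else random.uniform(0.0, 0.4)
    spread = random.uniform(0.0, 0.1) if ready else random.uniform(0.3, 0.8)
    X.append([ratio, spread])
    y.append(1 if ready else 0)

w, b = train_logistic(X, y)
```

A frame sequence with mostly on-road glances, e.g. `[(True, 0.02)] * 45 + [(False, 0.30)] * 5`, passed through `extract_features` and `is_ready`, would then be classified as take-over ready; a real system would of course have to deal with noisy head-pose estimation and far richer activity labels.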

    New Challenges in HCI: Ambient Intelligence for Human Performance Improvement

    Ambient Intelligence is a new multidisciplinary paradigm that is changing the relation between humans, technology, and the environment they live in. The paradigm has its roots in the ideas of ubiquitous and pervasive computing. In this vision, which is now close to reality, technology becomes pervasive in everyday life but, despite its increasing importance, (should) become "invisible": so deeply intertwined in our day-to-day activities as to disappear into the fabric of our lives. The environment should become "intelligent" and "smart", able to actively and adaptively react to the presence, actions, and needs of humans (not merely users, but complex human beings), in order to support daily activities and improve quality of life. Ambient Intelligence is a trend able to profoundly affect every aspect of our lives. It is not only a matter of technology but of a new way of being "human", of inhabiting our environment, and of dialoguing with technology. What makes an environment smart and intelligent is the way it understands and reacts to changing conditions: just as a well-designed tool can help us carry out our activities more quickly and easily, a poorly designed one can be an obstacle. The Ambient Intelligence paradigm tends to change some human activities by automating certain tasks. However, it is not always simple to decide what to automate, and when and how much control the user needs to retain. In this thesis we analyse the different levels composing the Ambient Intelligence paradigm, from its theoretical roots, through technology, to the issues related to human factors and human-computer interaction, to better understand how this paradigm can change the performance and behaviour of the user.
After this general analysis, we focus on the problem of smart surveillance, analysing how certain tasks can be automated through a context-capture system based on the fusion of different sources and inspired by the Ambient Intelligence paradigm. In particular, we investigate, from a human-factors point of view, how different levels of automation (LOAs) may change the user's behaviour and performance. This investigation also aimed to find criteria that can help in designing a smart surveillance system. After designing a general framework for fusing different sensors in a real-time locating system, a hybrid people-tracking system, based on the combined use of RFID-UWB and computer vision techniques, was developed and tested to explore the possibilities of a smart context-capture system. Taking this system as an example, we developed three simulators of a smart surveillance system implementing three different LOAs: manual, low system assistance, and high system assistance. We performed tests (using quali-quantitative measures) to observe changes in performance, situation awareness, and workload in relation to the different LOAs. Based on the results obtained, a new interaction paradigm for control rooms is proposed, grounded in the HCI concepts of the Ambient Intelligence paradigm and especially in the concept of the ambient display, highlighting its usability advantages in a control-room scenario. The assessments made through these tests showed that, while very high levels of automation are achievable from a technological perspective, from a human-factors point of view this does not necessarily translate into an improvement of human performance. The latter rather depends on a balance that is not fixed but changes according to the specific context. Thus, every Ambient Intelligence system should be designed from a human-centric perspective, considering that sometimes less can be more, and vice versa.
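The hybrid RFID-UWB plus vision tracker mentioned above can be illustrated, at its simplest, as the fusion of two noisy position fixes by inverse-variance weighting. The sensor variances and positions below are made-up values for illustration, not the thesis's actual fusion framework.

```python
# Illustrative sketch: fuse an RFID-UWB position fix with a camera-based
# one; the estimate with the lower variance gets the higher weight.

def fuse(uwb_pos, uwb_var, cam_pos, cam_var):
    """Combine two (x, y) estimates by inverse-variance weighting."""
    w_uwb = 1.0 / uwb_var
    w_cam = 1.0 / cam_var
    total = w_uwb + w_cam
    return tuple((w_uwb * u + w_cam * c) / total
                 for u, c in zip(uwb_pos, cam_pos))

# Assumed accuracies: UWB is coarse (0.30 m^2) but identity-aware; vision
# is precise (0.05 m^2) but can confuse targets. The fused fix therefore
# leans toward the camera estimate.
fused = fuse((2.0, 3.0), 0.30, (2.4, 3.2), 0.05)
```

In a real system each modality would also carry an identity hypothesis (the RFID tag resolves *who*, the camera resolves *where precisely*), which is exactly the complementarity that motivates the hybrid design.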