8,647 research outputs found

    The DRIVE-SAFE project: signal processing and advanced information technologies for improving driving prudence and accidents

    In this paper, we describe the DRIVE-SAFE project, whose aim is to create conditions for prudent driving on highways and roadways in order to reduce accidents caused by driver behavior. To achieve these primary goals, critical data are being collected from multimodal sensors (such as cameras, microphones, and other sensors) to build a unique databank on driver behavior. We are developing systems and technologies for analyzing the data and automatically detecting potentially dangerous situations, such as driver fatigue and distraction. Based on the findings from these studies, we will propose systems that warn the driver and take other precautionary measures to avoid accidents once a dangerous situation is detected. To address these issues, a national consortium has been formed, including the Automotive Research Center (OTAM), Koç University, Istanbul Technical University, Sabancı University, Ford A.Ş., Renault A.Ş., and Fiat A.Ş.

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected-car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
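The kind of human-generated observation described above can be sketched as a simple record. The field names below are illustrative, loosely modeled on the SOSA/SSN vocabulary; they are assumptions, not the paper's actual data model:

```python
from datetime import datetime, timezone

def wrap_human_observation(feature_of_interest, observed_property, value, lat, lon):
    """Wrap a human-generated observation (e.g. a driver's spoken report) in an
    O&M/SOSA-style record so it could be shared on the Semantic Sensor Web.
    All keys and URNs here are illustrative placeholders."""
    return {
        "type": "sosa:Observation",
        "sosa:hasFeatureOfInterest": feature_of_interest,   # e.g. a road-segment URI
        "sosa:observedProperty": observed_property,          # e.g. "traffic:congestionLevel"
        "sosa:hasSimpleResult": value,
        "sosa:resultTime": datetime.now(timezone.utc).isoformat(),
        "sosa:madeBySensor": "urn:example:hmi:driver-report",  # the human acts as "sensor"
        "geo:lat": lat,
        "geo:long": lon,
    }

# A driver reports heavy traffic on a (hypothetical) road segment:
obs = wrap_human_observation("urn:example:road:A6-km12",
                             "traffic:congestionLevel", "heavy", 40.45, -3.72)
```

The point of the wrapper is only to show the shape of the data: a human report becomes a timestamped, geolocated observation indistinguishable in structure from a machine-sensor reading.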

    Improving Speech Interaction in Vehicles Using Context-Aware Information through A SCXML Framework

    Speech technologies can provide important benefits for the development of more usable and safe in-vehicle human-machine interactive systems (HMIs). However, mainly due to robustness issues, the use of spoken interaction can entail important distractions for the driver. In this challenging scenario, while speech technologies are evolving, further research is necessary to explore how they can be complemented with both other modalities (multimodality) and information from the increasing number of available sensors (context-awareness). The perceived quality of speech technologies can be significantly increased by implementing such policies, which simply try to make the best use of all the available resources, and the in-vehicle scenario is an excellent test-bed for initiatives of this kind. In this contribution we propose an event-based HMI design framework which combines context modelling and multimodal interaction using a W3C XML language known as SCXML. SCXML provides a general process control mechanism that is being considered by the W3C to improve both voice interaction (VoiceXML) and multimodal interaction (MMI). In our approach we try to anticipate and extend these initiatives, presenting a flexible SCXML-based approach for the design of a wide range of multimodal, context-aware in-vehicle HMI interfaces. The proposed framework for HMI design and specification has been implemented on an automotive OSGi service platform, and it is being used and tested in the Spanish research project MARTA for the development of several in-vehicle interactive applications.
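The event-based, context-aware control that SCXML provides can be approximated in a few lines of plain code. The states, events, and workload guard below are invented for illustration and are not taken from the MARTA project:

```python
# Minimal event-driven state chart in the spirit of SCXML (illustrative only).
# Transitions map (current_state, event) -> next_state.
TRANSITIONS = {
    ("idle", "push_to_talk"): "listening",
    ("listening", "speech_recognized"): "confirming",
    ("listening", "timeout"): "idle",
    ("confirming", "user_confirms"): "executing",
    ("confirming", "user_rejects"): "idle",
}

def step(state, event, context):
    """Advance the HMI state chart. Context-aware guard: while driver
    workload is high, new interactions are deferred instead of started."""
    if state == "idle" and event == "push_to_talk" and context.get("workload") == "high":
        return "idle"  # guard condition blocks the transition
    return TRANSITIONS.get((state, event), state)  # unknown events are ignored

s = "idle"
s = step(s, "push_to_talk", {"workload": "low"})       # idle -> listening
s = step(s, "speech_recognized", {"workload": "low"})  # listening -> confirming
```

In real SCXML the same logic would be declared as `<state>`/`<transition>` elements with `cond` guards; the sketch only shows why an event-plus-guard model fits context-aware in-vehicle HMIs.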

    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. A proposed framework for achieving person recognition successfully combines different biometric modalities, borne out in two case studies.
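One common way to combine biometric modalities is score-level fusion. The sketch below is a generic weighted-sum illustration under assumed names and weights, not the framework proposed in the paper:

```python
def fuse_scores(modality_scores, weights):
    """Score-level fusion of biometric match scores (e.g. face + voice).
    A simple weighted average; in practice the weights would be tuned on
    validation data, often favoring the more reliable modality."""
    assert set(modality_scores) == set(weights)
    total_w = sum(weights.values())
    return sum(weights[m] * s for m, s in modality_scores.items()) / total_w

def identify(candidates, weights):
    """Pick the enrolled person with the highest fused score."""
    return max(candidates, key=lambda person: fuse_scores(candidates[person], weights))

# Hypothetical per-modality match scores for two enrolled drivers:
candidates = {
    "alice": {"face": 0.9, "voice": 0.6},
    "bob":   {"face": 0.7, "voice": 0.8},
}
weights = {"face": 0.6, "voice": 0.4}
best = identify(candidates, weights)  # alice fuses to 0.78, bob to 0.74
```

The appeal of fusion in a vehicle is exactly the "adverse conditions" problem: when cabin noise degrades the voice score, a well-weighted combination can still recognize the driver from the face score, and vice versa.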

    From Manual Driving to Automated Driving: A Review of 10 Years of AutoUI

    This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies of user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for research in this area.

    Towards Cognitive Dialog Systems


    Anthropocentric-based robotic and autonomous systems: assessment for new organisational options

    Text based on the paper presented at the conference "Autonomous systems: inter-relations of technical and societal issues", held at Monte de Caparica (Portugal), Universidade Nova de Lisboa, November 5th and 6th, 2009, and organized by IET-Research Centre on Enterprise and Work Innovation. Research activities at the European level on the concept of new working environments devote considerable attention to the challenges of the increased competencies of people working together with automated technologies. Since the 1980s, the development of approaches for the humanization of work organization, and for participative organizational options, has led to new proposals related to the development of complex and integrated automated systems. From this parallel conceptual development emerged the concept of "anthropocentric robotic systems", which quickly spread to other fields of automation as well. More recently, the debate has also covered issues related to the perceptions of people working with autonomous systems (e.g. autonomous robotics) in tasks related to production planning, programming, and process control. Indeed, one can today understand the wider use of the anthropocentrism concept in production architectures by recognizing the new quality of these systems. In this chapter the author analyses the evolution of these issues related to the governance of ICT applied to manufacturing and industrial services in research programmes, which strongly reinforce the 'classical' concept of anthropocentric-based systems. A new appreciation of intuitive capacities and human knowledge in the optimization and flexibilization of manufacturing processes is emerging. While this is a pre-condition for understanding human-robot communication needs, qualitative variables must also be taken into consideration in the definition and design of robotic systems, jobs, and production systems. Project CRUP/DAAD on "Technology Assessment of Autonomous Robotics" of FCT-UNL and ITAS-KI

    Dialogic activity : a study of learning dialogues and entanglements in a vocational tertiary setting : a thesis presented in partial fulfilment of the requirements for the degree of Doctorate in Education at Massey University, Albany, New Zealand

    New Zealand’s economic growth continues to place major pressure on the trades sector. To meet future demand for qualified builders, plumbers, electricians, and engineers, trades education has become available at no cost to students for two years. To attract student interest further, tertiary institutions now offer courses in a range of delivery options. Blended learning (BL) is one of these delivery modes and involves a combination of traditional face-to-face and digitally mediated approaches. This research explored students’ dialogic activity in a BL environment within a trades educational institution. The dialogues that emerged during trades training courses were examined in relation to a complex assemblage of elements, which included interactions between students and teachers, and the digital and material artefacts in the BL environments. The research used an interdisciplinary lens, employing theories of socio-materialism and dialogism, to unpack forms of dialogic activity that emerged within the BL environment. That same lens was used to reveal the part that material and digital artefacts played in the emergent dialogic activity. Conducted as a multiple case study, the research involved observations of instructors and student participants from three Level 3 pre-apprentice trade programmes, which provided a wide range of data over the course of one semester. Datasets from the three cases involved, Automotive Engineering, Electrical Engineering, and Mechanical Engineering, were analysed to explore the contextual meaning of the learning dialogues and activities in action. The findings revealed that learning dialogues occur in multiple contexts and environments. Artefacts and their properties, BL designs, open and flexible learning spaces, environmental conditions, health and safety considerations, embodiment, multiplicity, mediation, and class culture all have a significant influence on dialogic activity. The findings offer important insights about the link between course design and learning, and identify dialogic activity as an interdisciplinary phenomenon that warrants further investigation.

    A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars

    Autonomous cars are expected to improve road safety, traffic and mobility. It is projected that in the next 20-30 years fully autonomous vehicles will be on the market. The advancement of research and development in this technology will allow the disengagement of humans from the driving task, which will become the responsibility of the vehicle intelligence. In this scenario, new vehicle interior designs are proposed, enabling more flexible human-vehicle interactions inside them. In addition, as some important stakeholders propose, control elements such as the steering wheel and the accelerator and brake pedals may no longer be needed. However, this user control disengagement is one of the main issues affecting user acceptance of this technology. Users do not seem to be comfortable with the idea of giving all the decision power to the vehicle. In addition, there can be location-awareness situations where the user makes a spontaneous decision and requires some type of vehicle control, such as stopping at a particular point of interest or taking a detour from the pre-calculated autonomous route of the car. Vehicle manufacturers maintain the steering wheel as a control element, allowing the driver to take over the vehicle if needed or wanted. This constrains the previously mentioned human-vehicle interaction flexibility. Thus, there is an unsolved dilemma between providing users enough control over the autonomous vehicle and route so they can make spontaneous decisions, and interaction flexibility inside the car. This dissertation proposes the use of a voice and pointing gesture human-vehicle interaction system to solve this dilemma. Voice and pointing gestures have been identified as natural interaction techniques to guide and command mobile robots, potentially providing the needed user control over the car. On the other hand, they can be executed anywhere inside the vehicle, enabling interaction flexibility. The objective of this dissertation is to provide a strategy to support this system. For this, a method based on pointing-ray intersections is developed for the computation of the point of interest (POI) that the user is pointing to. Simulation results show that this POI computation method outperforms the traditional ray-casting-based approach by 76.5% in cluttered environments and 36.25% in combined cluttered and non-cluttered scenarios. The whole system is developed and demonstrated using a robotics simulator framework. The simulations show how voice and pointing commands performed by the user update the predefined autonomous path, based on the recognized command semantics. In addition, a dialog feedback strategy is proposed to solve conflicting situations such as ambiguity in the POI identification. This additional step is able to resolve the previously mentioned POI computation inaccuracies. It also allows the user to confirm, correct or reject the performed commands in case the system misunderstands them.
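The closest-approach computation behind a pointing-ray method can be sketched as follows. This is a generic two-ray midpoint calculation under invented geometry, not the dissertation's actual POI algorithm: two pointing rays (e.g. head-to-finger rays captured at different moments or from different cues) rarely intersect exactly in 3-D, so the POI is taken as the midpoint of their shortest connecting segment.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rays_midpoint(p1, d1, p2, d2):
    """Closest-approach midpoint of two 3-D pointing rays.
    Finds t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)| by solving the
    2x2 normal equations, then returns the midpoint of the two points."""
    r = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:         # near-parallel rays: no unique closest point
        return None
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = tuple(p + t1 * s for p, s in zip(p1, d1))
    q2 = tuple(p + t2 * s for p, s in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two rays that happen to intersect exactly at (1, 0, 0):
poi = rays_midpoint((0, 0, 0), (1, 0, 0), (1, -1, 0), (0, 1, 0))
```

When the rays are noisy and skew, the returned midpoint degrades gracefully instead of failing, which is presumably why ray-intersection style methods can beat single-ray casting in cluttered scenes.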

    Measuring the differences between human-human and human-machine dialogs

    In this paper, we assess the applicability of user simulation techniques to generate dialogs which are similar to real human-machine spoken interactions. To do so, we present the results of a comparison between three corpora acquired by means of different techniques. The first corpus was acquired with real users. A statistical user simulation technique was applied to the same task to acquire the second corpus. In this technique, the next user answer is selected by means of a classification process that takes into account the previous dialog history, the lexical information in the clause, and the subtask of the dialog to which it contributes. Finally, a dialog simulation technique was developed for the acquisition of the third corpus. This technique uses a random selection of the user and system turns, defining stop conditions for automatically deciding whether the simulated dialog is successful or not. We use several evaluation measures proposed in previous research to compare our three acquired corpora, and then discuss the similarities and differences with regard to these measures.
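The third-corpus technique, random turn selection with stop conditions, can be sketched roughly like this; all function names, turn sets, and parameters below are invented for illustration and are not from the paper:

```python
import random

def simulate_dialog(user_turns, system_turns, goal_turn, max_turns=20, seed=0):
    """Random-selection dialog simulation with stop conditions:
    system and user turns are sampled uniformly at random, and the dialog
    stops successfully when the user turn completes the task (goal_turn),
    or unsuccessfully when the turn limit is reached."""
    rng = random.Random(seed)
    dialog, success = [], False
    for _ in range(max_turns):
        dialog.append(("system", rng.choice(system_turns)))
        user = rng.choice(user_turns)
        dialog.append(("user", user))
        if user == goal_turn:          # stop condition: task completed
            success = True
            break
    return dialog, success

# Hypothetical turn inventory for a booking-style task:
dialog, ok = simulate_dialog(["provide_date", "confirm_booking"],
                             ["ask_date"], "confirm_booking")
```

A corpus is then the set of dialogs retained by such stop conditions; because turns are chosen blindly, its statistics (length, success rate, turn distribution) can be compared against the real-user and classifier-driven corpora.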