    A survey of technologies supporting design of a multimodal interactive robot for military communication

    Purpose – This paper presents a survey of research into interactive robotic systems, with the aim of identifying state-of-the-art capabilities as well as extant gaps in this emerging field. Communication is multimodal: multimodality combines several modes of expression, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to improve the speed and quality of military operational and tactical decision making.

    Design/methodology/approach – The review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological advances that allow robotic platforms to function autonomously. After surveying key aspects of Human-Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in these application areas that must be extended and integrated to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge and the conditions for future success.

    Findings – Drawing on a balanced cross-section of government, academic and commercial sources contributing to HRI, a multimodal IRS for military communication is introduced. A Multimodal IRS (MIRS) for military communication has yet to be deployed.

    Research limitations/implications – A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all of the expert knowledge and skills needed to design and develop such an interface. In this preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military-robot communication is the ultimate goal of this research.

    Practical implications – A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is well placed to exploit the opportunities for human-machine teaming (HMT). Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsal.

    Social implications – A multimodal communication system grounded in interaction, perception, cognition and visualization is still missing. The ability to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

    Originality/value – The objective is to develop a multimodal autonomous interactive robot for military communication. This survey reports the state of the art: what exists and what is missing, what can be done, and what extensions would help the military maintain effective communication across multiple modalities. Progress is under way in separate strands, such as machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments, but there is as yet no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report concludes by briefly introducing the research proposal for a multimodal interactive robot in military communication.

    A Hand Motion Based Tool For Conceptual Model Making In Architecture

    Thesis (M.Sc.) -- İstanbul Technical University, Institute of Science and Technology, 2011. Using sketch models in the early, conceptual phases of the architectural design process lets designers think about body-object relations and evaluate designs in three dimensions, enriching the design environment with a spatial perception that includes visual, haptic and kinesthetic interaction. As the digitization of the conceptual phases of architectural design spreads with the acceleration and unification of design and production processes, the question arises of how the benefits of working with the hands can be carried into the digital realm. Today's technology indicates that hand motions can be transferred into and processed on digital platforms, providing a basis for extending digital conceptual design processes in this direction. Studies showing that three-dimensional perception supports visual perception and strengthens creativity underpin our assumption that physical model making supports the design process. In this research, we aim to sustain that support across design environments by transferring the hand motions used in physical model making into digital design environments.
    In the first step of the research, we observe conceptual model making processes and analyze the principal actions they involve. Based on these observations, we classify the actions according to the main characteristics of the hand motions and propose a recognition schema that can be processed on a digital platform. After classifying the hand motions by basic, comparable properties, we propose how these properties can be translated into the digital medium. We discuss existing technologies and methods for hand-motion capture and recognition, and develop a design environment that meets the requirements for model making in a digital setting. In the final stage, we present a set of algorithms for deforming digital objects with hand motions, completing the framework of the design environment. The resulting design tool was tested through a series of experiments using a CAD application, and the experimental results are discussed.
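    As a rough illustration of the recognition step described above, the sketch below classifies a tracked hand path into a coarse action class from simple motion characteristics. It is a minimal sketch under assumed conventions: the feature set, thresholds and class names ("press", "stretch", "fold") are hypothetical stand-ins, not the thesis's actual schema.

```python
# Hypothetical hand-motion classifier: features and classes are illustrative,
# not the thesis's actual recognition schema.
import numpy as np

def motion_features(path: np.ndarray) -> dict:
    """Compute simple characteristics of a tracked hand path (N x 3 points)."""
    steps = np.diff(path, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    total = step_len.sum()
    chord = np.linalg.norm(path[-1] - path[0])
    return {
        "path_length": total,
        "straightness": chord / total if total > 0 else 1.0,  # 1.0 = straight line
        "mean_speed": step_len.mean(),
    }

def classify_motion(path: np.ndarray) -> str:
    """Map features to a coarse action class via hand-tuned thresholds."""
    f = motion_features(path)
    if f["path_length"] < 0.02:    # barely moving: a press/hold
        return "press"
    if f["straightness"] > 0.9:    # near-straight pull
        return "stretch"
    return "fold"                  # curved, longer trajectory

# Example: a synthetic, slightly curved trajectory
t = np.linspace(0, 1, 50)
path = np.stack([t, np.sin(3 * t) * 0.1, np.zeros_like(t)], axis=1)
print(classify_motion(path))
```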

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, on 28–31 May 2019 and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place on 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The conference topics of interest included a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    Touch Technology in Affective Human, Robot, Virtual-Human Interactions: A Survey

    Given the importance of affective touch in human interactions, technology designers are increasingly attempting to bring this modality to the core of interactive technology. Advances in haptics and touch-sensing technology have been critical to fostering interest in this area. In this survey, we review how affective touch is investigated to enhance and support the human experience with or through technology. We explore this question across three different research areas to highlight their epistemology, main findings, and the challenges that persist. First, we review affective touch technology through the human–computer interaction literature to understand how it has been applied to the mediation of human–human interaction and its roles in other human interactions, particularly with oneself, augmented objects/media, and affect-aware devices. We further highlight the datasets and methods that have been investigated for automatic detection and interpretation of affective touch in this area. In addition, we discuss the modalities of affective touch expressions in both humans and technology in these interactions. Second, we separately review how affective touch has been explored in human–robot and real-human–virtual-human interactions, where the technical challenges encountered and the types of experience aimed at are different. We conclude with a discussion of the gaps and challenges that emerge from the review to steer research in directions that are critical for advancing affective touch technology and recognition systems. In our discussion, we also raise ethical issues that should be considered for responsible innovation in this growing area.
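    The survey's mention of automatic detection suggests the flavor of pipeline involved. Below is a minimal sketch assuming pressure-grid input, with rule-based stand-ins for a learned classifier; the features, threshold values and gesture classes (stroke, pat, hit) are illustrative assumptions, not taken from any specific dataset in the survey.

```python
# Illustrative affective-touch gesture detection from a pressure-sensor grid.
# All features, thresholds and class names are hypothetical.
import numpy as np

def touch_features(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) pressure maps over T time steps."""
    per_frame = frames.reshape(len(frames), -1)
    mean_p = per_frame.mean()            # overall pressure intensity
    peak_p = per_frame.max()             # impact strength
    contact = (per_frame > 0.1).mean()   # fraction of readings above a contact threshold
    duration = len(frames)               # gesture length in frames
    return np.array([mean_p, peak_p, contact, duration], dtype=float)

def classify_touch(frames: np.ndarray) -> str:
    """Hand-tuned rules standing in for a learned classifier."""
    mean_p, peak_p, contact, duration = touch_features(frames)
    if peak_p > 0.8 and duration < 10:
        return "hit"      # short, intense contact
    if duration > 40 and contact > 0.05:
        return "stroke"   # sustained, moving contact
    return "pat"

# Example: a brief, light touch on an 8x8 grid
frames = np.random.rand(15, 8, 8) * 0.3
print(classify_touch(frames))
```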

    A task learning mechanism for the telerobots

    Telerobotic systems have attracted growing attention because of their superiority in dangerous or unknown interaction tasks, but exploiting such systems to perform complex tasks autonomously remains very challenging. In this paper, we propose a task learning framework to represent a manipulation skill demonstrated through a remotely controlled robot. A Gaussian mixture model is used to encode and parametrize the smooth task trajectory from observations of the demonstrations. After encoding the demonstrated trajectory, a new task trajectory is generated based on the variability information of the learned model. Experimental results demonstrate the feasibility of the proposed method.
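    As a sketch of the encode-then-reproduce idea described above: a Gaussian mixture model is fit over joint (time, position) samples from several demonstrations, and a smooth reference trajectory is then regressed from the learned components (Gaussian mixture regression). This is a generic GMM/GMR recipe under assumed 1-D synthetic data, not the paper's exact implementation.

```python
# Generic GMM/GMR trajectory learning sketch; demonstrations are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

# Demonstrations: noisy copies of a 1-D reaching motion
t = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * t) + 0.05 * np.random.randn(len(t)) for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])  # (500, 2)

# Encode the demonstrations as a mixture over (time, position)
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(data)

def gmr(t_query: np.ndarray) -> np.ndarray:
    """Regress position at each query time from the learned mixture."""
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # responsibility of each component for this time step
        h = np.array([
            w * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)
        ])
        h /= h.sum()
        # conditional mean of position given time, per component
        mu = [m[1] + c[1, 0] / c[0, 0] * (tq - m[0])
              for m, c in zip(gmm.means_, gmm.covariances_)]
        out[i] = np.dot(h, mu)
    return out

reproduced = gmr(t)  # smooth trajectory reproducing the demonstrations
```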