484 research outputs found

    16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)

    The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest covered a wide selection of areas related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, and more.

    Energy-based control approaches in human-robot collaborative disassembly


    Development of an Augmented Reality Interface for Intuitive Robot Programming

    As the demand for advanced robotic systems continues to grow, new technologies and techniques that improve the efficiency and effectiveness of robot programming become imperative. Robot programming relies heavily on the effective communication of tasks between the user and the robot. To address this issue, we developed an Augmented Reality (AR) interface that incorporates Head-Mounted Display (HMD) capabilities and integrated it with an active learning framework for intuitive robot programming. This integration enables the execution of conditional tasks, bridging the gap between user and robot knowledge. Guided by the user, the active learning model incrementally programs a complex task and, after encoding the skills, generates a high-level task graph. The holographic robot then visualises the individual skills of the task, enriched with sensory information retrieved from the physical robot in real time, to strengthen the user's intuition of the whole procedure. In this phase, the interactive aspect of the interface lets the user actively validate the learnt skills or change them, thereby generating a new skill sequence. The user can also teach the real robot through HMD-based teleoperation, which increases the directness and immersion of the teaching procedure while the physical robot is safely manipulated from a distance. The proposed framework is evaluated through a series of experiments employing the developed interface on the real system. These experiments assess how intuitive the interface features are for the user and how closely the virtual system's behavior during the robot programming procedure matches that of its physical counterpart.
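The skill-graph idea described above can be sketched in code. The following is a minimal illustration of how encoded skills might be arranged into a high-level task graph with conditional transitions; the `TaskGraph` API, the skill names, and the conditions are invented for illustration and are not the authors' implementation.

```python
# Hypothetical sketch: skills encoded from demonstrations arranged into
# a high-level task graph whose edges may carry a condition label.

class TaskGraph:
    """Directed graph of skills; edges optionally carry a condition."""

    def __init__(self):
        self.edges = {}  # skill -> list of (next_skill, condition)

    def add_transition(self, skill, next_skill, condition=None):
        self.edges.setdefault(skill, []).append((next_skill, condition))

    def successors(self, skill, observation):
        """Skills reachable from `skill` given the current observation."""
        return [nxt for nxt, cond in self.edges.get(skill, [])
                if cond is None or cond == observation]

# Incrementally programmed task: pick a part, inspect it, then either
# place it or discard it depending on the inspection outcome.
graph = TaskGraph()
graph.add_transition("pick", "inspect")
graph.add_transition("inspect", "place", condition="ok")
graph.add_transition("inspect", "discard", condition="defect")

print(graph.successors("inspect", "ok"))      # ['place']
print(graph.successors("inspect", "defect"))  # ['discard']
```

Conditional edges are what let the framework express "if the inspection fails, branch to a different skill", which is the conditional-task capability the abstract highlights.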

    A Comparison of Paper Sketch and Interactive Wireframe by Eye Movements Analysis, Survey, and Interview

    Eye movement-based analyses have been extensively performed on graphical user interface designs, mainly on high-fidelity prototypes such as coded prototypes. However, practitioners usually initiate the development life cycle with low-fidelity prototypes, such as mock-ups or sketches. Since little or no eye movement analysis has been performed on the latter, would eye tracking transpose its benefits from high- to low-fidelity prototypes and produce different results? To bridge this gap, we performed an eye movement-based analysis that compares gaze point indexes, gaze event types and durations, fixation indexes, and saccade indexes produced by N=8 participants between two treatments: a paper prototype vs. a wireframe. The paper also reports a qualitative analysis based on the answers provided by these participants in a semi-directed interview and on a 14-item perceived usability questionnaire. Due to its interactivity, the wireframe seems to foster a more exploratory approach to design (e.g., testing and navigating more extensively) than the paper prototype.
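For readers unfamiliar with the metrics compared here, the sketch below shows how fixation and saccade indexes could be aggregated per treatment from a labeled gaze-event stream. The event tuples and durations are made-up sample data, not the study's measurements.

```python
# Toy comparison of eye-movement metrics between two treatments.
# Each stream is a list of (event_type, duration_ms) tuples; the
# values here are invented for illustration.

from statistics import mean

paper = [("fixation", 240), ("saccade", 40), ("fixation", 310),
         ("saccade", 35), ("fixation", 180)]
wireframe = [("fixation", 150), ("saccade", 30), ("fixation", 120),
             ("saccade", 45), ("fixation", 200), ("saccade", 50),
             ("fixation", 90)]

def metrics(events):
    """Aggregate per-treatment fixation and saccade statistics."""
    fix = [d for t, d in events if t == "fixation"]
    sac = [d for t, d in events if t == "saccade"]
    return {"fixation_count": len(fix),
            "mean_fixation_ms": mean(fix),
            "saccade_count": len(sac)}

for name, stream in [("paper", paper), ("wireframe", wireframe)]:
    print(name, metrics(stream))
```

Shorter mean fixations with more saccades, as in the `wireframe` stream above, is the kind of pattern typically read as more exploratory scanning.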

    GoferBot: A Visual Guided Human-Robot Collaborative Assembly System

    The current transformation towards smart manufacturing has led to a growing demand for human-robot collaboration (HRC) in the manufacturing process. Perceiving and understanding the human co-worker's behaviour introduces challenges for collaborative robots to efficiently and effectively perform tasks in unstructured and dynamic environments. Integrating recent data-driven machine vision capabilities into HRC systems is a logical next step in addressing these challenges. However, in these cases, off-the-shelf components struggle due to generalisation limitations, and real-world evaluation is required to fully appreciate the maturity and robustness of these approaches. Furthermore, understanding the limitations of the pure-vision aspects is a crucial first step before combining multiple modalities. In this paper, we propose GoferBot, a novel vision-based semantic HRC system for a real-world assembly task. It is composed of a visual servoing module that reaches and grasps assembly parts in an unstructured, multi-instance, and dynamic environment; an action recognition module that performs human action prediction for implicit communication; and a visual handover module that uses the perceptual understanding of human behaviour to produce an intuitive and efficient collaborative assembly experience. GoferBot seamlessly integrates all sub-modules by utilising implicit semantic information purely from visual perception.
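A rough sketch of how the three sub-modules could be chained in a perception-driven loop follows. The function interfaces, the stubbed return values, and the action label `reach_for_part` are assumptions made for illustration; the paper does not publish this API.

```python
# Illustrative wiring of the three GoferBot-style sub-modules.
# All three functions are stubs standing in for learned vision models.

def visual_servoing(frame):
    """Locate and grasp the next assembly part (stubbed)."""
    return {"grasped": "part_A"}

def action_recognition(frame):
    """Predict the human co-worker's current action (stubbed)."""
    return "reach_for_part"

def handover(grasped, human_action):
    """Trigger a handover when the human reaches for a held part."""
    return human_action == "reach_for_part" and grasped is not None

frame = "camera_frame"  # placeholder for an image from the workcell camera
state = visual_servoing(frame)
action = action_recognition(frame)
print(handover(state["grasped"], action))  # True
```

The key design point the abstract makes is that the handover decision consumes only the vision modules' outputs, i.e., implicit semantic information rather than explicit commands from the human.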

    Image Retrieval within Augmented Reality

    The present work investigates the potential of augmented reality for improving the image retrieval process. Design and usability challenges were identified for both fields of research and used to formulate design goals for the development of concepts. A taxonomy for image retrieval within augmented reality was elaborated based on the research work and used to structure related work and basic ideas for interaction. Based on the taxonomy, application scenarios were formulated as further requirements for concepts. Using the basic interaction ideas and the requirements, two comprehensive concepts for image retrieval within augmented reality were elaborated. One of the concepts was implemented on a Microsoft HoloLens and evaluated in a user study. The study showed that the concept was received generally positively by the users and provided insight into differing spatial behavior and search strategies when practicing image retrieval in augmented reality.
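As background for the retrieval side of this thesis, the snippet below sketches the core image-retrieval step: ranking a collection against a query by feature similarity. The four-bin "histograms" are toy stand-ins for real image descriptors, and the cosine measure is one common choice, not necessarily the one used in the prototype.

```python
# Toy content-based image retrieval: rank images by cosine similarity
# of their feature vectors to a query vector. Feature values are invented.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(y * y for y in b)))
    return dot / norm

collection = {
    "sunset.jpg": [8, 5, 1, 0],
    "forest.jpg": [1, 2, 9, 3],
    "beach.jpg":  [7, 6, 2, 1],
}
query = [9, 4, 1, 0]

ranked = sorted(collection,
                key=lambda name: cosine(query, collection[name]),
                reverse=True)
print(ranked)  # ['sunset.jpg', 'beach.jpg', 'forest.jpg']
```

In the AR setting the thesis studies, the interesting part is not this ranking step itself but how the query is specified and how the ranked results are situated in the user's physical space.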

    Development of Visualization Tools for Dynamic Networks and Evaluation of Visual Stability Characteristics

    Dynamic graph drawings are the metaphor of choice for the analysis and visualization of dynamic networks. These drawings are often created by capturing a successive sequence of states, or "snapshots", of the network under study. For each snapshot, a graph drawing is independently computed with the layout algorithm of preference, and the resulting sequence is presented to the user in a predefined order. Despite the simplicity of the method, dynamic graph drawings created with this strategy possess some problems. Actors, relations, or patterns can change their position on the canvas as the dynamic network is explored. Furthermore, dynamic graph drawings tend to constantly add and remove elements without prior information. As a consequence, it is very difficult to observe how the members of the network evolve over time. The scientific community has developed a series of layout adjustment techniques that aim to minimize the changes in a dynamic graph drawing. Some of them suggest that the "shape" of the drawing must be maintained at all times; others, that every actor and relation must be assigned to a fixed position in Euclidean space. However, a recently developed technique proposes an alternative: multiple actors can occupy the same node position in Euclidean space as long as they do not appear at the same point in time, and likewise, multiple relations can occupy the same edge position under the aforementioned principle. As a result, a dynamic graph drawing minimizes its changes to a point where it can be perceived as visually stable. This thesis presents how the visual stability of a dynamic graph drawing affects the user experience and the efficiency of visual search when tracking actors or network attributes over time. For this purpose, a framework to support flexible visualization techniques was developed. It served as the platform to evaluate existing layout adjustment techniques. The evaluation combined questionnaires to gather information about the user experience, an eye-tracking device to record eye movements, and a new mathematical model to quantify the visual stability of dynamic graph drawings. The results obtained suggest that there is a trade-off between the user experience and the efficiency of visual search, which depends on the visual stability of a dynamic graph drawing. On the one hand, dynamic graph drawings with higher levels of visual stability provide a satisfying user experience in tracking tasks but are inefficient in terms of visual search. On the other hand, dynamic graph drawings with lower levels of visual stability do not provide a satisfying user experience in tracking tasks but considerably improve the efficiency of visual search. These findings were used to develop visually stable metaphors aimed at exploring network attributes over time. Such metaphors rely on features like scaling or highlighting to improve the efficiency of visual search.
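One simple way to quantify visual stability in the spirit described above is the mean Euclidean displacement of nodes between consecutive snapshots (lower means more stable). The thesis's actual mathematical model is not reproduced here; the following is a generic proxy with invented coordinates.

```python
# Generic visual-stability proxy: average movement of nodes shared
# between two consecutive snapshot layouts of a dynamic graph drawing.

import math

def mean_displacement(layout_a, layout_b):
    """Mean Euclidean distance moved by nodes present in both layouts."""
    shared = layout_a.keys() & layout_b.keys()
    if not shared:
        return 0.0
    return sum(math.dist(layout_a[n], layout_b[n])
               for n in shared) / len(shared)

# Two snapshots: node 'b' moves by 3 units, 'a' and 'c' stay fixed.
t0 = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (0.0, 1.0)}
t1 = {"a": (0.0, 0.0), "b": (1.0, 3.0), "c": (0.0, 1.0)}

print(mean_displacement(t0, t1))  # 1.0
```

A drawing that fixes every actor to one position drives this value to zero, which matches the thesis's finding that maximal stability helps tracking tasks while potentially hurting visual-search efficiency.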