
    Image-Enabled Discourse: Investigating the Creation of Visual Information as Communicative Practice

    Anyone who has clarified a thought or prompted a response during a conversation by drawing a picture has exploited the potential of image making as an interactive tool for conveying information. Images are increasingly ubiquitous in daily communication, in large part due to advances in visually enabled information and communication technologies (ICT), such as information visualization applications, image retrieval systems and visually enabled collaborative work tools. Human abilities to use images to communicate are, however, far more sophisticated and nuanced than these technologies currently support. In order to learn more about the practice of image making as a specialized form of information and communication behavior, this study examined face-to-face conversations involving the creation of ad hoc visualizations (i.e., napkin drawings). A model of image-enabled discourse is introduced, which positions image making as a specialized form of communicative practice. Multimodal analysis of video-recorded conversations focused on identifying image-enabled communicative activities in terms of the interactional sociolinguistic concepts of conversational involvement and coordination, specifically framing, footing and stance. The study shows that when drawing occurs in the context of an ongoing dialogue, the activity of visual representation performs key communicative tasks. Visualization is a form of social interaction that contributes to the maintenance of conversational involvement in ways that are not often evident in the image artifact. For example, drawing enables us to coordinate with each other, to introduce alternative perspectives into a conversation and even to temporarily suspend the primary thread of a discussion in order to explore a tangential thought. The study compares attributes of the image artifact with those of the activity of image making, described as a series of contrasting affordances. Visual information in complex systems is generally represented and managed based on the affordances of the artifact, neglecting to account for all that is communicated through the situated action of creating. These findings have heuristic and best-practice implications for a range of areas related to the design and evaluation of virtual collaboration environments, visual information extraction and retrieval systems, and data visualization tools.

    Perceptual Model-Driven Authoring of Plausible Vibrations from User Expectations for Virtual Environments

    One of the central goals of design is the creation of experiences that are rated favorably in the intended application context. User expectations play an integral role in tactile product quality and tactile plausibility judgments alike. In the vibrotactile authoring process for virtual environments, vibration is created to match the user's expectations of the presented situational context. Currently, inefficient trial-and-error approaches attempt to match expectations implicitly. A more efficient, model-driven procedure based explicitly on tactile user expectations would thus be beneficial for authoring vibrations. In everyday life, we are frequently exposed to various whole-body vibrations. Depending on their temporal and spectral properties, we intuitively associate specific perceptual properties such as "tingling". This suggests a systematic relationship between physical parameters and perceptual properties. To communicate with potential users about such elicited or expected tactile properties, a standardized design language is proposed. It contains a set of sensory tactile perceptual attributes which are sufficient to characterize the perceptual space of vibration encountered in everyday life. This design language enables the assessment of quantitative tactile perceptual specifications by laypersons, elicited in situational contexts such as auditory-visual-tactile vehicle scenes. However, such specifications can also be assessed by providing only verbal descriptions of the content of these scenes. Quasi-identical ratings observed for both presentation modes suggest that tactile user expectations can be quantified even before any vibration is presented. Such expected perceptual specifications are the prerequisite for a subsequent translation into physical vibration parameters. Plausibility can be understood as a similarity judgment between elicited features and expected features. Thus, plausible vibration can be synthesized by maximizing the similarity of the elicited perceptual properties to the expected perceptual properties. Based on the observed relationships between vibration parameters and sensory tactile perceptual attributes, a 1-nearest-neighbor model and a regression model were built. The plausibility of the vibrations synthesized by these models in the context of virtual auditory-visual-tactile vehicle scenes was validated in a perceptual study. The results demonstrated that the perceptual specifications obtained with the design language are sufficient to synthesize vibrations that are perceived as equally plausible as recorded vibrations in a given situational context. Overall, the demonstrated design method can be a new, more efficient tool for designers authoring vibrations for virtual environments or creating tactile feedback. The method enables further automation of the design process and thus potential time and cost reductions.
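    The 1-nearest-neighbor synthesis described above lends itself to a compact illustration. The sketch below assumes a small lookup table of vibrations, each annotated with ratings on a few sensory tactile attributes and with the physical parameters needed to reproduce it; the attribute names, parameter fields and numbers are hypothetical placeholders, not values from the thesis. Given an expected attribute profile elicited with the design language, the model returns the parameters of the stored vibration whose profile is closest in Euclidean distance. The regression variant mentioned in the abstract would instead map the expected profile onto parameter values continuously rather than selecting from a fixed set.

import numpy as np

# Assumed sensory tactile attributes (placeholders, not the thesis's attribute set).
ATTRIBUTES = ["tingling", "pulsating", "rough", "intense"]

# Each row: attribute ratings elicited for one synthesizable vibration.
profiles = np.array([
    [0.8, 0.2, 0.1, 0.4],   # e.g. a high-frequency sinusoid
    [0.1, 0.9, 0.3, 0.6],   # e.g. an amplitude-modulated sinusoid
    [0.2, 0.3, 0.8, 0.7],   # e.g. band-limited noise
])

# Physical parameters needed to reproduce each vibration above (hypothetical values).
parameters = [
    {"pattern": "sinusoid", "freq_hz": 120, "level_db": 95},
    {"pattern": "am_sinusoid", "carrier_hz": 60, "mod_hz": 4, "level_db": 100},
    {"pattern": "noise", "low_hz": 10, "high_hz": 80, "level_db": 105},
]

def synthesize_parameters(expected_profile):
    """Return the parameters of the vibration whose elicited attribute profile
    is most similar (smallest Euclidean distance) to the expected profile."""
    expected = np.asarray(expected_profile, dtype=float)
    distances = np.linalg.norm(profiles - expected, axis=1)
    return parameters[int(np.argmin(distances))]

# Expected profile quantified with the design language for some vehicle scene:
print(synthesize_parameters([0.15, 0.85, 0.35, 0.6]))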

    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what are considered in this work as hybrid interactions. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. Such design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces, in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). These approaches are instantiated in the design of several experience prototypes. These are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviors on first contact with the interface) and in users' subjective responses. The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones. The results indicate that, despite the fact that several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    Optimizing Common Spatial Pattern for a Motor Imagery-based BCI by Eigenvector Filteration

    One of the fundamental criteria for the successful application of a brain-computer interface (BCI) system is to extract significant features that confine invariant characteristics specific to each brain state. Distinct features play an important role in enabling a computer to associate different electroencephalogram (EEG) signals with different brain states. To ease the workload on the feature extractor and enhance separability between different brain states, the data is often transformed or filtered to maximize separability before feature extraction. The common spatial patterns (CSP) approach can achieve this by linearly projecting the multichannel EEG data into a surrogate data space through the weighted summation of the appropriate channels. However, choosing the optimal spatial filters is crucial to the projection of the data and has a direct impact on classification. This paper presents an optimized pattern selection method from the CSP filter for improved classification accuracy. Based on the hypothesis that values closer to zero in the CSP filter introduce noise rather than useful information, the CSP filter is modified by analyzing it and removing or filtering the degradative or insignificant values. This hypothesis is tested by comparing the BCI results of eight subjects using the conventional CSP filters and the optimized CSP filter. In the majority of cases, the latter produces better performance in terms of overall classification accuracy.
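    As a rough illustration of the approach, the sketch below computes CSP spatial filters for two classes of EEG trials and then zeroes out filter coefficients whose magnitude falls below a threshold, reflecting the paper's hypothesis that near-zero weights mainly contribute noise. The threshold, data shapes and number of retained filters are illustrative assumptions, not values taken from the paper.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns CSP spatial filters as the rows of W (n_channels x n_channels)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem Ca w = lambda (Ca + Cb) w; extreme
    # eigenvalues correspond to the most discriminative spatial filters.
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order].T

def filter_small_coefficients(W, threshold=0.1):
    """Zero out weights whose magnitude is below `threshold` times the largest
    weight of that filter -- the hypothesized noise-only coefficients."""
    W = W.copy()
    for i, w in enumerate(W):
        W[i, np.abs(w) < threshold * np.abs(w).max()] = 0.0
    return W

# Usage with random placeholder data: 2 classes, 20 trials, 8 channels, 256 samples.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 256))
b = rng.standard_normal((20, 8, 256))
W = filter_small_coefficients(csp_filters(a, b))
selected = np.vstack([W[:2], W[-2:]])              # filters from both ends, as in CSP
features = np.log(np.var(selected @ a[0], axis=1)) # log-variance features of one trial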

    Turn-Taking in Human Communicative Interaction

    The core use of language is in face-to-face conversation, which is characterized by rapid turn-taking. This turn-taking poses a number of central puzzles for the psychology of language. Consider, for example, that in large corpora the gap between turns is on the order of 100 to 300 ms, but the latencies involved in language production require minimally between 600 ms (for a single word) and 1500 ms (for a simple sentence). This implies that participants in conversation are predicting the end of the incoming turn and preparing their response in advance. But how is this done? What aspects of this prediction are done when? What happens when the prediction is wrong? What stops participants from coming in too early? If the system is running on prediction, why is there consistently a mode of 100 to 300 ms in response time? The timing puzzle raises further puzzles: it seems that comprehension must run in parallel with the preparation for production, yet it has been presumed that there are strict cognitive limitations on more than one central process running at a time. How is this bottleneck overcome? Far from being 'easy', as some psychologists have suggested, conversation may be one of the most demanding cognitive tasks in our everyday lives. Further questions naturally arise: how do children learn to master this demanding task, and what is the developmental trajectory in this domain? Research shows that aspects of turn-taking such as its timing are remarkably stable across languages and cultures, but the word order of languages varies enormously. How then does prediction of the incoming turn work when the verb (often the informational nugget in a clause) is at the end? Conversely, how can production work fast enough in languages that have the verb at the beginning, thereby requiring early planning of the whole clause? What happens when one changes modality, as in sign languages -- with the loss of channel constraints, is turn-taking much freer? And what about face-to-face communication amongst hearing individuals -- do gestures, gaze, and other body behaviors facilitate turn-taking? One can also ask the phylogenetic question: how did such a system evolve? There seem to be parallels (analogies) in duetting bird species and in a variety of monkey species, but there is little evidence of anything like this among the great apes. All this constitutes a neglected set of problems at the heart of the psychology of language and of the language sciences. This research topic welcomes contributions from right across the board, for example from psycholinguists, developmental psychologists, students of dialogue and conversation analysis, linguists interested in the use of language, phoneticians, corpus analysts, and comparative ethologists or psychologists. We welcome contributions of all sorts, for example original research papers, opinion pieces, and reviews of work in subfields that may not be fully understood in other subfields.
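    A back-of-the-envelope calculation makes the timing argument concrete: if responses typically begin about 200 ms after the incoming turn ends but take at least 600-1500 ms to prepare, preparation must start several hundred milliseconds before the turn is over. The snippet below simply restates the figures quoted above; the 200 ms gap is taken as a representative value within the reported 100-300 ms mode.

# Rough figures from the text above; the gap value is a representative choice.
typical_gap_ms = 200            # gaps cluster at roughly 100-300 ms
latency_word_ms = 600           # minimal production latency for a single word
latency_sentence_ms = 1500      # minimal production latency for a simple sentence

for label, latency in [("single word", latency_word_ms),
                       ("simple sentence", latency_sentence_ms)]:
    lead_ms = latency - typical_gap_ms
    print(f"{label}: planning must begin at least {lead_ms} ms before the turn ends")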

    Metaphors of Mobile Communications (Metafore mobilnih komunikacija / Метафоры мобильной связи)

    Mobile communications are a fast-developing field of information and communication technology whose exploration within the analytical framework of cognitive linguistics, based on a sample of 1005 entries, reveals the pervasive presence of metaphor, metonymy, analogy and conceptual integration. The analysis of the sample, consisting of words and phrases related to mobile media, mobile operating systems and interface design, the terminology of mobile networking, as well as the slang and textisms employed by mobile gadget users, shows that the above cognitive mechanisms play a key role in facilitating interaction between people and a wide range of mobile computing devices, from laptops and PDAs to mobile phones, tablets and wearables. They are the cornerstones of comprehension that underlie the principles of functioning of graphical user interfaces and direct manipulation in computing environments. A separate sample featuring a selection of 660 emoticons and emoji, which exhibit the potential for semantic expansion, was also analyzed, in view of the significance of pictograms for text-based communication in the form of text messages or exchanges on social media sites regularly accessed via mobile devices...

    Drawing out interaction: Lines around shared space.

    PhD thesis. Despite advances in image, video, and motion capture technologies, human interactions are frequently represented as line drawings. Intuitively, drawings provide a useful way of filtering complex, dynamic sequences to produce concise representations of interaction. They also make it possible to represent phenomena, such as topic spaces, that do not have a concrete physical manifestation. However, the processes involved in producing these drawings, the advantages and limitations of line drawings as representations, and the implications of drawing as an analytic method have not previously been investigated. This thesis explores the use of drawings to represent human interaction and is informed by the prior experience and abilities of the investigator as a practising visual artist. It begins by discussing the drawing process and how it has been used to capture human activities. Key drawing techniques are identified and tested against an excerpt from an interaction between architects. A series of new drawings are constructed to depict one scene from this interaction, highlighting the contrasts between the drawing techniques and their impact on the way shared spaces are represented. A second series of original drawings are produced exploring new ways of representing these spaces, leading to a proposal for a field-based approach that combines gesture paths, fields, and human figures to create a richer analytic representation. A protocol for using this approach to analyse video in practice is developed and evaluated through a sequence of three participatory workshops for researchers in human interaction. The results suggest that the field-based process of drawing facilitates the production of spatially enriched graphical representations of qualitative spaces. The thesis concludes that the use of drawing to explore non-metric approaches to shared interactional space has implications for research in human interaction, interaction design, clinical psychology, anthropology, and discourse analysis, and will find form in new approaches to contemporary artistic practice. Funded by the Engineering and Physical Sciences Research Council (EPSRC).

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010

    An integrative computational modelling of music structure apprehension
