
    Commentary: I'm only trying to help: A role for interventions in teaching listening

    In my work as an author and teacher trainer, I have the opportunity to travel around the world and talk to teachers in a variety of settings. Though I meet teachers with a range of backgrounds and a wide disparity of resources, I find that a few common themes come up whenever I talk with teachers about language teaching and technology. One of the familiar refrains is that most of us claim to lack the technological resources we feel we need to teach effectively. There’s always something new on the horizon that we feel we just have to have. Another recurring theme is the lament that most of our students just don’t seem to take advantage of the extra learning opportunities we present them anyway! Teachers want to help, but often feel underappreciated for their efforts. Personally, I have relished the ongoing advances in technology over the course of my teaching career. I started out as a secondary school teacher in Togo, West Africa, with chalk – sometimes yellow or pink! – and a blackboard as my only teaching technology. When teachers express a sense of being overwhelmed by new technology, I sometimes talk about my own beginnings and also remind them of a few of Donald Norman’s principles of human-centered design. According to Norman (2004), for any new technology to be effective, it must be intuitively helpful and elegantly efficient. In the case of language teaching, this means the technology must – immediately and transparently – help us teach better than we do already. If it doesn’t, we simply shouldn’t use it. In addition, Norman says, for any new technology to be widely adopted, it must appeal to the emotions as well as to reason. If people don’t enjoy using a particular technology, no matter how logically useful it may be, they will tend to shun it. Perhaps because as language teachers we tend to favor eclecticism, we will often throw any emerging technology into the mix as a "helpful resource." 
As Doughty and Long (2003) point out, teachers often do not distinguish between new technological tools that are innovative but not actually helpful and those that are innovative and genuinely helpful. In my own instructional design, I have identified three "intervention phases" in the listening process: decoding, comprehension, and interpretation. Before we assume any new technology or intervention is actually going to be supportive, I believe we need to understand the learners' goals during these listening processes. What actually motivates the learners towards achieving these goals is what will ultimately be useful.

    Leveraging Multi-Modal Sensing for Mobile Health: A Case Review in Chronic Pain

    Active and passive mobile sensing has garnered much attention in recent years. In this paper, we focus on chronic pain measurement and management as a case application to exemplify the state of the art. We present a consolidated discussion of how various sensing modalities can be leveraged, along with the modular server-side and on-device architectures required for this task. The modalities included are: activity monitoring from accelerometry and location sensing, audio analysis of speech, image processing for facial expressions, as well as modern methods for effective patient self-reporting. We review examples that deliver actionable information to clinicians and patients while addressing privacy, usability, and computational constraints. We also discuss open challenges in higher-level inferencing of patient state and effective feedback, with potential directions to address them. The methods and challenges presented here are generalizable and relevant to a broad range of other applications in mobile sensing.
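One of the modalities listed above, activity monitoring from accelerometry, can be illustrated with a minimal on-device feature-extraction sketch. The function name and the movement threshold are our own illustrative assumptions, not details from the systems the paper reviews:

```python
import math

def activity_counts(samples, threshold=1.2):
    """Count accelerometer samples whose vector magnitude exceeds a
    movement threshold (in g) -- a crude proxy for physical activity
    level within one sensing window."""
    count = 0
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold:
            count += 1
    return count

# Hypothetical 3-axis readings in g: mostly rest, two movement spikes
window = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.9, 0.8, 1.1),
          (0.0, 0.1, 1.0), (1.2, 0.5, 1.3)]
print(activity_counts(window))  # prints 2
```

In a real pipeline, such window-level features would be computed on-device and only the aggregates sent to the server side, which is one way to address the privacy and computational constraints the paper discusses.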

    Where Information Systems Research Meets Artificial Intelligence Practice: Towards the Development of an AI Capability Framework

    Information systems (IS) research has long been one of the leading applied research areas investigating technology-related phenomena. Meanwhile, over the past 10 years, artificial intelligence (AI) has transformed every aspect of society more than any other technological innovation. Thus, this is the right time for IS research to foster more high-quality, high-impact research on AI, starting by organizing the accumulated body of knowledge on AI in IS research. We propose an AI capability framework to provide pertinent and relevant guidance for conducting IS research on AI. Since AI is a fast-evolving phenomenon, this framework is founded on the main AI capabilities that shape today’s fast-moving AI ecosystem. Thus, it is crucial that such a framework engage both AI research and practice in a continuous and evolving dialogue.

    Brain anatomical correlates of perceptual phonological proficiency and language learning aptitude

    The present dissertation concerns how brain tissue properties reflect proficiency in two aspects of language use: the ability to use tonal cues on word stems to predict how words will end, and the aptitude for learning foreign languages. While it is known that people differ in their language abilities and that damage to brain tissue causes loss of cognitive functions, it is largely unknown whether differences in language proficiencies correlate with differences in brain structure. The first two studies examine correlations between cortical morphometry, i.e. the thickness and surface area of the cortex, and the degree of dependency on word accents for processing upcoming suffixes in Swedish native speakers. Word accents in Swedish facilitate speech processing by having predictive associations with specific suffixes (e.g. fläckaccent1+en ‘spot+singular’, fläckaccent2+ar ‘spot+plural’). This use of word accents, as phonological cues to inflectional suffixes, is nearly unique among the world’s languages. How much a speaker depends on word accents in speech processing can be measured as the difference in response time (RT) between valid and invalid word accent-suffix combinations when asked to identify the inflected form of a word. This can be thought of as a measure of perceptual phonological proficiency in native speakers. Perceptual phonological proficiency is otherwise very difficult to study, as most phonological contrasts are mandatory for properly interpreting the meaning of utterances. Study I compares the cortical morphometric correlates in the planum temporale and inferior frontal gyrus pars opercularis in relation to RT differences in tasks involving real words and pseudowords. We found that the thickness of the left planum temporale correlates with perceptual phonological proficiency for lexical words but not pseudowords. This could indicate that word accents are part of full-form representations of familiar words. 
Moreover, for pseudowords but not lexical words, the thickness of the inferior frontal gyrus pars opercularis correlates with perceptual phonological proficiency. This association could reflect a greater importance of decompositional analysis, in which word accents are part of a set of rules listeners need to rely on when processing novel words. In study II, the investigation of the association between perceptual phonological proficiency for real words and cortical morphometry is expanded to the entire brain. Results show that the cortical thickness and surface area of anterior temporal lobe areas, known constituents of a ventral sound-to-meaning language-processing stream, are associated with greater perceptual phonological proficiency. This is consistent with word accents aiding in assembling the meaning of an inflected word form or in accessing its whole-word representation. Studies III and IV investigate the cortical morphometric associations with language learning aptitude. Findings in study III suggest that aptitude for grammatical inferencing, i.e. the ability to analytically discern the rules of a language, is associated with cortical thickness in the left inferior frontal gyrus pars triangularis. Furthermore, pitch discrimination proficiency, a skill related to language learning ability, correlates negatively with cortical thickness in the right homologue area. Moreover, study IV, using improved imaging techniques, reports a correlation between vocabulary learning aptitude and cortical surface area in the left inferior precuneus, as well as a negative correlation between diffusional axial kurtosis and phonetic memory in the left arcuate fasciculus and subsegment III of the superior longitudinal fasciculus. 
However, the correlation between cortical thickness and grammatical inferencing skill found in study III was not replicated in study IV. Taken together, the present dissertation shows that differences in some language proficiencies are associated with regionally thicker or larger cortex and more coherent white matter tracts, the nature and spatial locus of which depend on the proficiency studied. The studies add to our understanding of how language proficiencies are represented in the brain’s anatomy.
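The RT-difference measure described in the abstract can be sketched as a simple computation. The function name and the data values below are hypothetical, chosen only to illustrate the measure:

```python
def rt_advantage(valid_rts, invalid_rts):
    """Proxy for perceptual phonological proficiency: mean response
    time for invalid word accent-suffix combinations minus mean
    response time for valid ones. A larger difference suggests a
    stronger reliance on word accents as predictive cues."""
    mean_valid = sum(valid_rts) / len(valid_rts)
    mean_invalid = sum(invalid_rts) / len(invalid_rts)
    return mean_invalid - mean_valid

# Hypothetical per-trial response times in milliseconds
valid = [612, 598, 634, 605]
invalid = [671, 655, 690, 662]
print(rt_advantage(valid, invalid))  # prints 57.25
```

In the studies summarized above, a per-participant value of this kind is what gets correlated with cortical thickness and surface area.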

    Directional adposition use in English, Swedish and Finnish

    Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellä (in front of) and jäljessä (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003). When using directional adpositions, a frame of reference must be assumed to interpret their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, it is possible for a Figure to be described as being behind or in front of the Ground, even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers use edellä (in front of) and jäljessä (behind) more frequently in situations where both the Figure and Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other could be expected. We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers. All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adposition use or reference frame use, including reference frame use based on motion. 
We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion. Levinson, S. C. (1996). Frames of reference and Molyneux’s question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and Space (pp. 109-170). Cambridge, MA: MIT Press. Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press. Walker, C. (in preparation). Motion encoding in language: the use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.

    The State of the Art in Cartograms

    Cartograms combine statistical and geographical information in thematic maps, where areas of geographical regions (e.g., countries, states) are scaled in proportion to some statistic (e.g., population, income). Cartograms make it possible to gain insight into patterns and trends in the world around us and have been very popular visualizations for geo-referenced data for over a century. This work surveys cartogram research in visualization, cartography and geometry, covering a broad spectrum of different cartogram types: from the traditional rectangular and table cartograms, to Dorling and diffusion cartograms. A particular focus is the study of the major cartogram dimensions: statistical accuracy, geographical accuracy, and topological accuracy. We review the history of cartograms, describe the algorithms for generating them, and consider task taxonomies. We also review quantitative and qualitative evaluations, and we use these to arrive at design guidelines and research challenges.
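The core geometric idea, scaling region area in proportion to a statistic, can be sketched for a Dorling-style circle cartogram. This is an illustrative reduction with made-up data; real Dorling algorithms additionally resolve circle overlaps while trying to preserve adjacency (topological accuracy):

```python
import math

def dorling_radii(values, total_area=10000.0):
    """Map each region's statistic to a circle radius so that circle
    area is proportional to the value and the circles' combined area
    equals total_area (the statistical-accuracy requirement)."""
    total = sum(values.values())
    return {
        region: math.sqrt((value / total) * total_area / math.pi)
        for region, value in values.items()
    }

# Hypothetical populations in millions
radii = dorling_radii({"A": 40.0, "B": 10.0})
# A has 4x B's population, so its circle has 4x the area,
# i.e. twice the radius:
print(round(radii["A"] / radii["B"], 2))  # prints 2.0
```

Diffusion and rectangular cartograms solve the same area-matching problem under different shape constraints, which is why the survey compares them along the same accuracy dimensions.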

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people experiencing transportation-limiting visual impairments worldwide. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided the identification of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). 
Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV BVI accessibility to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate a gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. 
For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them or, if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, it can analyse further to understand if any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
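The cue-combination step can be illustrated with a minimal naive-Bayes sketch: each detected visual cue (e.g. a frontal face, a raised hand) contributes a likelihood toward the hypothesis that a given person is addressing the robot. The cue set, the probabilities, and the function name here are illustrative assumptions, not the thesis's actual network:

```python
def commanding_posterior(prior, cue_likelihoods):
    """Fuse conditionally independent visual cues with Bayes' rule.
    cue_likelihoods: list of (p_cue_given_commanding,
    p_cue_given_not_commanding) pairs for the cues observed."""
    p_yes, p_no = prior, 1.0 - prior
    for p_given_yes, p_given_no in cue_likelihoods:
        p_yes *= p_given_yes
        p_no *= p_given_no
    return p_yes / (p_yes + p_no)

# Illustrative cues for one person in a crowded scene:
# a frontal face and a raised hand were both detected.
cues = [(0.9, 0.3), (0.8, 0.1)]
posterior = commanding_posterior(prior=0.2, cue_likelihoods=cues)
print(round(posterior, 3))  # prints 0.857
```

Computing this posterior for every person in the scene and picking the maximum is one simple way to localise the most probable commanding person; the contextual feedback described above would then adjust the priors and likelihoods over time.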