1,797 research outputs found

    Advances in human-computer interaction : graphics and animation components for interface design

    We present an analysis of a communicability methodology for graphics and animation components in interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bidirectional interrelation between ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.

    Understanding persuasive technologies to improve completion rates in MOOCs

    Advances in computing technologies are revolutionising education. Specifically, advances in Human-Computer Interaction impact the media and methods of delivery, facilitating a conceptual shift from traditional face-to-face instruction towards a paradigm in which delivery is increasingly tailored to student needs. Massive Open Online Course (MOOC) providers now have the possibility to both predict and facilitate student success by applying learning analytics techniques to the large amount of data they hold about their learners. More than ever before, key information about successful student behaviour and context can be discovered and used in digital interventions aimed, for example, at students at risk. This is a complex issue which is receiving increased attention in Higher Education and specifically amongst MOOC providers. This position paper discusses the relevant challenges in using learning analytics in MOOCs in conjunction with persuasive technologies in order to improve completion rates.
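    The learning-analytics step described above can be illustrated with a toy example. The sketch below is purely hypothetical (the feature names, thresholds and data are invented, not taken from the paper): it flags low-engagement learners who might then be targeted by a persuasive intervention.

```python
# Hypothetical sketch: flag learners at risk of non-completion from simple
# activity features, as a stand-in for a learning-analytics pipeline.
# Feature names and the threshold are illustrative assumptions only.

def risk_score(logins_per_week, videos_watched, quizzes_submitted):
    """Lower engagement -> higher risk score in [0, 1]."""
    # Normalise each feature against a nominal 'fully engaged' level.
    engagement = (
        min(logins_per_week / 5.0, 1.0)
        + min(videos_watched / 10.0, 1.0)
        + min(quizzes_submitted / 3.0, 1.0)
    ) / 3.0
    return 1.0 - engagement

def at_risk(learner, threshold=0.6):
    """Return True when the risk score crosses the intervention threshold."""
    return risk_score(**learner) >= threshold

learners = {
    "a": {"logins_per_week": 5, "videos_watched": 12, "quizzes_submitted": 3},
    "b": {"logins_per_week": 1, "videos_watched": 2, "quizzes_submitted": 0},
}
flagged = [name for name, feats in learners.items() if at_risk(feats)]
print(flagged)  # learner "b" would be flagged for an intervention
```

    In a real MOOC setting the hand-tuned score would of course be replaced by a model trained on historical completion data.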

    Nintendo Wii Remote Controller in Higher Education: Development and Evaluation of a Demonstrator Kit for e-Teaching

    The increasing availability of game-based technologies, together with advances in Human-Computer Interaction (HCI) and usability engineering, provides new challenges and opportunities for virtual environments in the context of e-Teaching. Consequently, an evident trend is to offer learners the equivalent of practical learning experiences, whilst supporting creativity for both teachers and learners. Current market surveys show, perhaps surprisingly, that the Wii remote controller (Wiimote) is more widespread than standard PCs and is the most used computer input device worldwide; its collection of sensors, accelerometers and Bluetooth technology makes it of great interest for HCI experiments in e-Learning/e-Teaching. In this paper we discuss the importance of gestures for teaching and describe the design and development of a low-cost, Wiimote-based demonstrator kit for enhancing the quality of lecturing with gestures.
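    As an illustration of why the Wiimote's accelerometers are attractive for gesture work, a minimal sketch of gesture detection from accelerometer samples might look as follows. The sample stream and threshold are invented for the example; a real kit would read Bluetooth sensor reports from the Wiimote.

```python
# Illustrative sketch only: spot a deliberate lecturing gesture in a stream of
# (x, y, z) accelerometer samples by looking for high motion energy.
import math

def magnitude(sample):
    """Euclidean norm of one (x, y, z) accelerometer sample, in g."""
    return math.sqrt(sum(v * v for v in sample))

def detect_gesture(samples, threshold=1.8):
    """Return indices where motion energy suggests a deliberate gesture."""
    return [i for i, s in enumerate(samples) if magnitude(s) > threshold]

stream = [(0.0, 0.0, 1.0),   # at rest: gravity only
          (0.4, 0.2, 1.1),   # slight movement
          (1.5, 1.2, 1.4),   # sharp sweep
          (0.1, 0.0, 1.0)]   # back to rest
print(detect_gesture(stream))  # only the sharp sweep exceeds the threshold
```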

    Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    In today's technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on users' cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users' perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users' self-reported levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.

    Testing Two Tools for Multimodal Navigation

    The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold a potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point-and-sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point-and-sweep gestures, non-speech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing them to interact directly with the surrounding environment.
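    The point-to-query idea can be sketched in a few lines: combine the GPS position with the compass heading and keep only points of interest that lie within a narrow angular window around the pointed direction. All names, coordinates and the window width below are illustrative assumptions, not taken from the tested applications.

```python
# Hypothetical sketch of a pointing-based query from GPS + compass data.
import math

def bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    dlon = math.radians(lon2 - lon1)
    lat1, lat2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def pointed_at(user, heading, pois, width=30.0):
    """POIs whose bearing lies within +/- width/2 degrees of the heading."""
    hits = []
    for name, (lat, lon) in pois.items():
        diff = abs((bearing(user[0], user[1], lat, lon) - heading + 180.0) % 360.0 - 180.0)
        if diff <= width / 2.0:
            hits.append(name)
    return hits

user = (59.3293, 18.0686)                 # example position (Stockholm)
pois = {"north_cafe": (59.34, 18.0686),   # due north of the user
        "east_museum": (59.3293, 18.09)}  # due east of the user
print(pointed_at(user, 0.0, pois))        # pointing north selects the cafe
```

    A sweep gesture could then be handled by widening the angular window while the user sweeps the device across a sector.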

    Digital Civic Sensemaking: Computer-Supported Participatory Sensemaking of Nuanced, Experience-Based Dialogue

    Throughout the 21st century, we have seen a steady decline in trust in democracy and a proliferation of exclusive, deconstructive methods of political participation such as adversarial town halls and polarizing social media discourse. However, methods of facilitated small-group dialogue and community organizing have fostered trust, understanding, and civic empowerment for generations. Further, with advances in human-computer interaction, machine learning, and computer-supported cooperation in civic technology, the intersection between dialogue, community organizing, and technology for positive and inclusive civic participation is ripe for exploration. We present Real Talk, a hybrid civic technology program in which we aim to design, develop, and implement scalable technological infrastructure and to equip communities, organizations, and networks with the processes and technology that allow them to connect, share experiences, collaborate, make meaning, address problems, and suggest and advocate decisions in a thriving ecosystem. In this paper, we discuss a foundational system within the program as a key contribution to system sciences: computer-supported participatory sensemaking of nuanced dialogue data. We outline our system and discuss findings, implications, and shortcomings.

    Reducing BCI calibration time with transfer learning: a shrinkage approach

    Introduction: A brain-computer interface (BCI) system allows subjects to use neural control signals to drive a computer application. A BCI is therefore generally equipped with a decoder to differentiate between types of responses recorded in the brain. For example, an application giving feedback to the user can benefit from recognizing the presence or absence of a so-called error potential (ErrP), elicited in the brain of the user when this feedback is perceived as being ‘wrong’, a mistake of the system. Due to the high inter- and intra-subject variability in these response signals, calibration data needs to be recorded to train the decoder. This calibration session is exhausting and demotivating for the subject. Transfer learning is a general name for techniques in which data from previous subjects is used as additional information to train a decoder for a new subject, thereby reducing the amount of subject-specific data that needs to be recorded during calibration. In this work we apply transfer learning to an ErrP detection task by applying single-target shrinkage to Linear Discriminant Analysis (LDA), a method originally proposed by Höhne et al. to improve accuracy by compensating for inter-stimulus differences in an ERP speller [1]. Material, Methods and Results: For our study we used the error potential dataset recorded by Perrin et al. in [2]. For each of 26 subjects, 340 ErrP/non-ErrP responses were recorded, with ErrP-to-non-ErrP ratios between 0.41 and 0.94. Of these, 272 responses were available for training the decoder and the remaining 68 were held out for testing. For every subject separately, we built three different decoders. First, a subject-specific LDA decoder was built solely from the subject's own training data. Second, we added the training data of the other 25 subjects to train a global LDA decoder, naively ignoring differences between subjects.
Finally, the single-target-shrinkage (STS) method [1] is used to regularize the parameters of the subject-specific decoder towards those of the global decoder. Using cross-validation, this method assigns optimal weights to the subject-specific data and to the data from previous subjects used for training. Figure 1 shows the performance of the three decoders on the test data, in terms of AUC, as a function of the amount of subject-specific calibration data used. Discussion: The subject-specific decoder in Figure 1 shows how sensitive decoding performance is to the amount of calibration data provided. By using data from previously recorded subjects, the amount of calibration data, and hence the calibration time, can be reduced, as shown by the global decoder; a certain amount of quality is, however, sacrificed. By making an optimal compromise between the subject-specific and global decoders, the single-target-shrinkage decoder allows the calibration time to be reduced by 20% without any change in decoder quality (confirmed by a paired-sample t-test giving p=0.72). Significance: This work serves as a first proof of concept for the use of shrinkage LDA as a transfer learning method. More specifically, the error potential decoder built with reduced calibration time boosts the opportunity for error-correcting methods in BCI.
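The parameter blending at the heart of the shrinkage idea can be sketched as follows. This is a schematic illustration on synthetic data with a fixed shrinkage weight, not the cross-validated STS procedure of Höhne et al. [1]: subject-specific LDA class means and covariance are pulled towards those estimated from the other subjects' data.

```python
# Schematic sketch of shrinking a subject-specific LDA decoder towards a
# global one. Data are synthetic and the shrinkage weight is fixed rather
# than chosen by cross-validation as in the STS method.
import numpy as np

rng = np.random.default_rng(0)

def lda_params(X, y):
    """Class means and pooled covariance for binary labels y in {0, 1}."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Xc = np.vstack([X[y == 0] - m0, X[y == 1] - m1])
    cov = Xc.T @ Xc / (len(X) - 2)
    return m0, m1, cov

def lda_weights(m0, m1, cov):
    """Fisher discriminant direction w = cov^-1 (m1 - m0), lightly regularized."""
    return np.linalg.solve(cov + 1e-6 * np.eye(cov.shape[0]), m1 - m0)

# Synthetic stand-ins: little subject-specific data, plenty of data from others.
X_subj = rng.normal(size=(20, 4)); y_subj = rng.integers(0, 2, 20)
X_glob = rng.normal(size=(500, 4)); y_glob = rng.integers(0, 2, 500)
X_subj[y_subj == 1] += 1.0  # class 1 is shifted, as an ErrP stand-in
X_glob[y_glob == 1] += 1.0

lam = 0.5  # shrinkage weight; STS would select this by cross-validation
ps, pg = lda_params(X_subj, y_subj), lda_params(X_glob, y_glob)
blended = [lam * g + (1 - lam) * s for g, s in zip(pg, ps)]
w = lda_weights(*blended)
print(w.shape)  # one discriminant weight per feature
```

Setting lam to 0 recovers the subject-specific decoder and lam to 1 the global one; the value in between is what trades calibration time against decoder quality.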