101 research outputs found

    Prosody and speech perception

    The major concern of this thesis is with models of speech perception. Following Gibson's (1966) work on visual perception, it seeks to establish whether there are sources of information in the speech signal which can be responded to directly and which specify the units of information of speech. The treatment of intonation follows that of Halliday (1967) and that of rhythm follows Abercrombie (1967). "Prosody" is taken to mean both the intonational and the rhythmic aspects of speech. Experiments one to four show the interdependence of prosody and grammar in the perception of speech, although they leave open the question of which sort of information is responded to first. Experiments five and six, employing a short-term memory paradigm and Morton's (1970) "suffix effect" explanation, demonstrate that prosody could well be responded to before grammar. Since the earlier experiments suggested a close connection between the two, these results suggest that information about grammatical structures may well be given directly by prosody. The final two experiments investigate how much of the prosodic information in fluent speech can be perceived independently of grammar and meaning. Although tone-group division seems to be given clearly enough by acoustic cues, there are problems of interpretation with the data on syllable stress assignments. In the concluding chapter, a three-stage model of speech perception is proposed, following Bever (1970), but incorporating prosodic analysis as an integral part of the processing. The obtained experimental results are integrated within this model.

    eStorys: A visual storyboard system supporting back-channel communication for emergencies

    This is the post-print version of the final paper published in the Journal of Visual Languages & Computing. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2010 Elsevier B.V.
    In this paper we present a new web mashup system that helps both the public and professionals retrieve information about emergencies and disasters. The use of the web during emergencies is now well established, as demonstrated by the employment of systems such as Flickr, Twitter and Facebook during Hurricane Katrina, the July 7, 2005 London bombings, and the April 16, 2007 shootings at Virginia Polytechnic University. Much information currently available on the web can be useful for emergency purposes, ranging from messages on forums and blogs to georeferenced photos. We present a system that, by mixing information available on the web, helps both the public and emergency professionals rapidly obtain data on emergency situations through multiple web channels. The system provides a combination of tools that have proven effective in such emergency situations: spatio-temporal search features, recommendation and filtering tools, and storyboards. We demonstrated the efficacy of our system by means of an analytic evaluation (comparing it with others available on the web), a usability evaluation by expert users (adequately trained students), and an experimental evaluation with 34 participants.
    Funding: Spanish Ministry of Science and Innovation, Universidad Carlos III de Madrid, and Banco Santander.

    Usability and Satisfaction of eRubric.

    This work evaluates the usability of online tools used in university education, focusing in particular on e-Rubrics used for self-assessment and peer assessment. The results show a high degree of usability of this tool and of satisfaction among the students who use it in their academic training. The aspects to improve in the evaluated e-Rubrics are identified from the perspective of design and accessibility, without addressing their content, the indicators used, or other features of the assessment instruments; the focus is strictly on the technical handling of the tool on the web. The results were obtained with an online questionnaire that has a Cronbach's alpha reliability coefficient of 0.889. The instrument, the third version of a questionnaire refined over three academic years, consists of a few descriptive questions about the respondent and the tool being evaluated, followed by 22 statements rated on a Likert scale from 1 to 5. The statements alternate between direct wording (1 = completely disagree, 5 = fully agree) and inverse wording (1 = fully agree, 5 = completely disagree) in order to prevent, or at least detect, quick or thoughtless responses. More than 50% of the respondents who completed the questionnaire rated the usability of the evaluated tools positively, especially the e-Rubric (answers of "agree" and "fully agree"), and less than 20% rated usability as low (answers of "disagree" and "completely disagree"). All the tools evaluated were federated tools. The questionnaire can be used to evaluate the usability of any tool on the Web.
    Serrano Angulo, J.; Cebrián Robles, D. (2014). Usabilidad y satisfacción de la e-Rúbrica. REDU. Revista de Docencia Universitaria, 12(1), 177-195. https://doi.org/10.4995/redu.2014.6426
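    The reliability coefficient reported above can be reproduced from raw questionnaire data. A minimal sketch of the Cronbach's alpha computation, using made-up Likert scores (four respondents, four items) rather than the study's actual data:

    ```python
    def variance(xs):
        """Sample variance (n - 1 denominator)."""
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def cronbach_alpha(responses):
        """responses: one list of item scores per respondent.

        alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
        """
        k = len(responses[0])                  # number of items
        items = list(zip(*responses))          # per-item score columns
        item_var = sum(variance(col) for col in items)
        total_var = variance([sum(r) for r in responses])
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Illustrative 1-5 Likert responses, not the study's data.
    scores = [
        [5, 4, 5, 4],
        [4, 4, 4, 3],
        [2, 3, 2, 2],
        [5, 5, 4, 5],
    ]
    print(round(cronbach_alpha(scores), 3))  # → 0.944
    ```

    A high alpha here simply reflects that the fabricated respondents answer consistently across items; the study's 22-item instrument reached 0.889 on real data.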

    Designing Chatbots for Crises: A Case Study Contrasting Potential and Reality

    No full text
    Chatbots are becoming ubiquitous technologies, and their popularity and adoption are spreading rapidly. The potential of chatbots for engaging people with digital services is widely recognised. However, the technology's reputation with regard to usefulness and real impact remains questionable, and studies that evaluate how people perceive and utilise chatbots are generally lacking. During the last Kenyan elections, we deployed a chatbot on Facebook Messenger to help people submit reports of violence and misconduct experienced at polling stations. Even though the chatbot was visited more than 3,000 times, there was a clear mismatch between the users' perception of the technology and its design. In this paper, we analyse the user interactions and content generated through this application and discuss the challenges of, and directions for, designing more effective chatbots.

    Usability of a barcode scanning system as a means of data entry on a PDA for self-report health outcome questionnaires: a pilot study in individuals over 60 years of age

    BACKGROUND: Throughout the medical and paramedical professions, self-report health status questionnaires are used to gather patient-reported outcome measures. The objective of this pilot study was to evaluate, in individuals over 60 years of age, the usability of a PDA-based barcode scanning system with a text-to-speech synthesizer for collecting data electronically from self-report health outcome questionnaires. METHODS: Usability of the system was tested on a sample of 24 community-living older adults (7 men, 17 women) ranging in age from 63 to 93 years. After receiving a brief demonstration of the barcode scanner, participants were randomly assigned to complete two sets of 16 questions, using the barcode wand scanner for one set and a pen for the other. Usability was assessed through directed interviews with a usability questionnaire and through performance-based metrics (task times, errors, sources of errors). RESULTS: Overall, participants found barcode scanning easy to learn, easy to use, and pleasant. Most participants (20/24) were marginally faster completing the 16 survey questions with pen entry: the mean response time with the barcode scanner was 31 seconds longer than with traditional pen entry for a subset of 16 questions (p = 0.001). The responsiveness of the scanning system, expressed as the first-scan success rate, was less than perfect: approximately one third of first scans required a rescan to successfully capture the data entry. This can be explained by a combination of factors, such as the location of the scanning errors, the type of barcode used as an answer field in the paper version, and the optical characteristics of the barcode scanner. CONCLUSION: The results of this study offer insights into the feasibility, usability and effectiveness of a barcode scanner as an electronic data entry method on a PDA for older adults. While participants found their experience with the barcode scanning system enjoyable and learned to become proficient in its use, the responsiveness of the system constitutes a barrier to wide-scale use of such a system. Optimizing the graphical presentation of the information on paper should significantly increase the system's responsiveness.
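    The two performance metrics reported in the RESULTS section are simple to compute from session logs. A minimal sketch using fabricated data (the attempt counts and timings below are invented, not the study's):

    ```python
    def first_scan_success_rate(scan_attempts):
        """scan_attempts: attempts needed per question (1 = first scan worked)."""
        return sum(1 for a in scan_attempts if a == 1) / len(scan_attempts)

    def mean_paired_difference(scanner_times, pen_times):
        """Mean per-participant difference (scanner minus pen), in seconds."""
        return sum(s - p for s, p in zip(scanner_times, pen_times)) / len(scanner_times)

    # Hypothetical logs: about one third of first scans fail, as in the study.
    attempts = [1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1]

    # Hypothetical seconds per 16-question set, scanner vs. pen, per participant.
    scanner = [190, 202, 180, 217]
    pen = [160, 170, 150, 185]

    print(first_scan_success_rate(attempts))        # 8 of 12 first scans succeed
    print(mean_paired_difference(scanner, pen))     # → 31.0
    ```

    The study additionally tested the paired time difference for significance (p = 0.001), which would require a paired t-test on the per-participant differences rather than just their mean.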

    Meta Modeling for Business Process Improvement

    Conducting business process improvement (BPI) initiatives is a high-priority topic for today's companies. However, performing BPI projects has become challenging due to rapidly changing customer requirements and an increase in inter-organizational business processes, which need to be considered from an end-to-end perspective. In addition, traditional BPI approaches are increasingly perceived as overly complex and too resource-consuming in practice. Against this background, the paper proposes a BPI roadmap, an approach for systematically performing BPI projects that serves practitioners' need for manageable BPI methods. Based on this BPI roadmap, a domain-specific conceptual modeling method (DSMM) has been developed. The DSMM supports the efficient documentation and communication of the results that emerge during the application of the roadmap; conceptual modeling thus acts as a means of purposefully codifying the outcomes of a BPI project. Furthermore, a corresponding software prototype has been implemented using a meta modeling platform to assess the technical feasibility of the approach. Finally, the usability of the prototype has been empirically evaluated.
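    To make the meta modeling idea concrete: a metamodel defines which element types and attributes a model may contain, and concrete models instantiate those types. A minimal sketch of this two-level pattern; the class and attribute names are illustrative and not taken from the paper's DSMM:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class MetaClass:
        """Metamodel level: an element type and its permitted attributes."""
        name: str
        attributes: list

    @dataclass
    class ModelElement:
        """Model level: an instance of a MetaClass with concrete values."""
        meta: MetaClass
        values: dict = field(default_factory=dict)

        def set(self, attr, value):
            # The metamodel constrains which attributes an element may carry.
            if attr not in self.meta.attributes:
                raise ValueError(f"{attr!r} not defined for {self.meta.name}")
            self.values[attr] = value

    # Hypothetical metamodel fragment: one step of a BPI roadmap diagram.
    bpi_step = MetaClass("BPIStep", ["title", "responsible", "status"])

    step = ModelElement(bpi_step)
    step.set("title", "Identify process weaknesses")
    step.set("status", "done")
    ```

    Meta modeling platforms follow this pattern at larger scale: the method engineer defines the metamodel once, and the platform then generates editors that only allow well-formed models.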

    ISO 25062 Usability Test Planning for a Large Enterprise Applications Suite

    No full text