6 research outputs found

    Scalable Annotation Mechanisms for Digital Content Aggregation and Context-Aware Authoring

    This paper discusses the role of context information in building the next generation of human-centered information systems, and classifies the various aspects of contextualization with a special emphasis on the production and consumption of digital content. The real-time annotation of resources is a crucial element when moving from content aggregators (which process third-party digital content) to context-aware visual authoring environments (which allow users to create and edit their own documents). We present a publicly available prototype of such an environment, which required a major redesign of an existing Web intelligence and media monitoring framework to provide real-time data services and synchronize the text editor with the frontend's visual components. The paper concludes with a summary of achieved results and an outlook on possible future research avenues, including multi-user support and the visualization of document evolution.
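The synchronization between a text editor and visual frontend components described above can be sketched as a simple publish/subscribe mechanism. This is an illustrative sketch only, not the paper's actual architecture; all class and function names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical sketch: annotations produced while the user edits are
# broadcast to every registered visual component, so the frontend stays
# in step with the editor in real time.

@dataclass
class Annotation:
    start: int   # character offset where the annotated span begins
    end: int     # character offset where it ends
    label: str   # e.g. an entity type assigned by the annotation backend

class AnnotationBus:
    """Minimal publish/subscribe hub linking the editor to visual components."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[Annotation], None]] = []

    def subscribe(self, handler: Callable[[Annotation], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, annotation: Annotation) -> None:
        # Notify every visual component of the new annotation.
        for handler in self._subscribers:
            handler(annotation)

# A toy "visual component" that just counts the labels it has seen.
label_counts: Dict[str, int] = {}

def tag_cloud_component(a: Annotation) -> None:
    label_counts[a.label] = label_counts.get(a.label, 0) + 1

bus = AnnotationBus()
bus.subscribe(tag_cloud_component)
bus.publish(Annotation(0, 6, "ORG"))
bus.publish(Annotation(10, 15, "ORG"))
print(label_counts)  # {'ORG': 2}
```

In a real system the bus would sit behind a WebSocket or similar channel so that backend annotation services can push updates to the browser; the in-process version above only shows the decoupling idea.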

    Método para la evaluación de usabilidad de sitios web transaccionales basado en el proceso de inspección heurística

    Usability is considered one of the most important factors in software product development. This quality attribute refers to the degree to which specific users of a given application can easily use the software to achieve their purpose. Given the importance of this aspect to the success of software applications, multiple evaluation methods have emerged as measurement instruments to determine whether a proposed software interface design is understandable, easy to use, attractive, and pleasant for the user. Heuristic evaluation is one of the most widely used methods in the field of Human-Computer Interaction (HCI) for this purpose, owing to its low execution cost compared with other existing techniques. However, despite its extensive use in recent years, there is no formal procedure for carrying out this evaluation process. Jakob Nielsen, the author of this inspection technique, offers only general guidelines which, according to the research conducted, tend to be interpreted in different ways by specialists. For this reason, this research project was developed with the aim of establishing a systematic, structured, organized, and formal process for conducting heuristic evaluations of software products. Based on an exhaustive analysis of the studies reported in the literature that use heuristic evaluation as part of the software development process, a new evaluation method has been formulated, based on five phases: (1) planning, (2) training, (3) evaluation, (4) discussion, and (5) reporting. Each of the proposed phases that make up the inspection protocol contains a set of well-defined activities to be performed by the evaluation team as part of the inspection process.
    Likewise, certain roles have been established for the members of the inspection team to ensure the quality of the results and the proper conduct of the heuristic evaluation. The new proposal has been validated in two distinct academic settings (in Colombia, at a public university, and in Peru, at two universities, one public and one private), demonstrating in all cases that it is possible to identify more highly severe and critical usability problems when a structured inspection process is adopted by the evaluators. Another favorable aspect shown by the results is that evaluators tend to make fewer association errors (between the heuristic that is violated and the usability problems identified), and that the proposal is perceived as easy to use and useful. The validation of this new proposal consolidates new knowledge that contributes to the body of scientific knowledge.
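The discussion and reporting phases of a heuristic evaluation typically aggregate each evaluator's severity ratings per problem. The sketch below is illustrative only (the problems, heuristics, and ratings are invented); it assumes Nielsen's standard severity scale, which runs from 0 (not a problem) to 4 (usability catastrophe).

```python
from statistics import mean

# Hypothetical data: problem id -> (heuristic violated, one severity
# rating per evaluator on Nielsen's 0-4 scale).
ratings = {
    "P1": ("H4: Consistency and standards", [3, 4, 3]),
    "P2": ("H1: Visibility of system status", [1, 2, 1]),
    "P3": ("H9: Help users recognize and recover from errors", [4, 4, 3]),
}

def summarize(ratings, critical_threshold=3.0):
    """Return (problem, heuristic, mean severity) rows sorted by severity,
    plus the subset at or above the critical threshold."""
    rows = [(pid, heur, mean(sev)) for pid, (heur, sev) in ratings.items()]
    rows.sort(key=lambda r: r[2], reverse=True)
    critical = [r for r in rows if r[2] >= critical_threshold]
    return rows, critical

rows, critical = summarize(ratings)
for pid, heur, sev in rows:
    print(f"{pid}  {sev:.2f}  {heur}")
print("critical problems:", [pid for pid, _, _ in critical])
```

Averaging across evaluators before ranking is what makes the discussion phase useful: individual ratings on the 0-4 scale vary, and the consolidated list is what feeds the final report.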

    User-created personas: a four-case multi-ethnic study of persona artefact co-design in pastoral and urban Namibia with ovaHerero, Ovambo, ovaHimba and San communities

    A persona is an artefact widely used in technology design to aid communication between designers, users, and other stakeholders involved in a project. Personas originated in the Global North as interpretative portrayals of a group of users with commonalities. Personas lack empirical research in the Global South, and the projects that do appear in the literature are often framed under the philosophy of User-Centred Design, which indicates they are anchored in Western epistemologies. This thesis postulates that persona depictions differ across locales, and that studying the differences and similarities in such representations is imperative to avoid misrepresentations that can in turn lead to designerly miscommunications and, ultimately, to unsuitable technology designs. The importance of this problem is demonstrated through four exploratory case studies on persona artefacts co-designed with communities from four Namibian ethnicities, namely ovaHerero, ovaHimba, Ovambo and San. Findings reveal diverse self-representations whereby the results for each ethnicity materialise in different ways, recounts, and storylines: romanticised persona archetypes versus reality with the ovaHerero; collective persona representations with the ovaHimba; individualised personas with the Ovambo, although embedded in narratives of collectivism and interrelatedness with other personas; and renderings of two contradictory personas of their selves with a group of San youth, according to whether they are on their own (i.e. inspiring and aspirational) or mixed with other ethnic groups (i.e. ostracised).
    This thesis advocates for User-Created Personas (UCP) as a potentially valid tactic and methodology for iteratively pursuing conceptualisations of persona artefacts capable of communicating the localised nuances critical to designing useful and adequate technologies across locales. The methodologies developed to enable laypeople to co-design persona self-representations, together with the results and appraisals provided, are this thesis's main contribution to knowledge.

    Um Modelo para a visualização de conhecimento baseado em imagens semânticas

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia e Gestão do Conhecimento.
    Advances in electronic document processing and management have generated an accumulation of knowledge that exceeds what ordinary users can absorb. A considerable amount of knowledge is made explicit in documents stored in digital repositories, yet in many cases the ability to efficiently access and reuse this knowledge is limited. As a result, most of it is neither sufficiently exploited nor shared, and is consequently forgotten within a relatively short time. Emerging visualization technologies and the human perceptual system can be exploited to improve access to large information spaces by facilitating pattern detection. Moreover, using visual elements that contain representations of the real world, known a priori by the target group and forming part of its world view, allows the knowledge presented through these representations to be easily related to individuals' prior knowledge, thereby facilitating learning. Although visual representations have been used to support knowledge dissemination, no models have been proposed that integrate knowledge engineering methods and techniques with the use of images as a medium for retrieving and visualizing knowledge. This work presents a model that aims to facilitate the visualization of knowledge stored in digital repositories using semantic images. Through semantic images, the user can retrieve and visualize the knowledge related to the entities represented in the image regions. Semantic images are visual representations of the real world which are known in advance by the target group and which provide mechanisms for identifying the domain concepts represented in each region. The proposed model builds on the framework for knowledge visualization proposed by Burkhard and describes the users' interactions with the images. A prototype was developed to demonstrate the feasibility of the model, using images in the anatomy domain, the Foundational Model of Anatomy and the Unified Medical Language System as domain knowledge, and the Scientific Electronic Library Online database as the document repository.
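The core retrieval idea, mapping an image region to a domain concept and from the concept to documents, can be sketched with two lookup tables. This is a hedged illustration, not the dissertation's model: the region names and document identifiers are invented, and only the FMA identifiers follow the real ontology's ID style.

```python
# Region of the semantic image -> domain concept (here, Foundational
# Model of Anatomy identifiers; region and document ids are invented).
region_to_concept = {
    "region-heart": "FMA:7088",   # Heart
    "region-liver": "FMA:7197",   # Liver
}

# Domain concept -> documents in the repository that mention it.
concept_to_documents = {
    "FMA:7088": ["doc-123", "doc-456"],
    "FMA:7197": ["doc-789"],
}

def documents_for_region(region_id: str) -> list:
    """Resolve a clicked image region to the documents about its concept."""
    concept = region_to_concept.get(region_id)
    return concept_to_documents.get(concept, [])

print(documents_for_region("region-heart"))  # ['doc-123', 'doc-456']
```

In the full model the second mapping would be computed by annotating the repository against the domain ontology rather than stored by hand, but the two-step resolution (region to concept, concept to documents) is the essential interaction.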

    Model konsep reka bentuk antaramuka koswer berbantu mahasiswa Tunakerna

    The era of global education needs educational media, especially media for people with disabilities, since effective media can improve the quality of education for this group. However, previous studies have revealed that learning media for this group are very limited and focused mainly on conventional learning, because the lack of learning-technology support has made the learning process difficult. The main purpose of this study is to propose a conceptual design model of an assistive courseware interface for hearing-impaired undergraduates. Students with a hearing problem are known as "tunakerna" according to the fourth edition of the Kamus Dewan Bahasa dan Pustaka (2008). The design science research theoretical framework was chosen as the methodology for this study. An expert validation approach was applied to the Conceptual Design Model of Assistive Courseware Interface for Hearing Impaired Undergraduates (KOSMAT), which is designed with seven components: a generic component structure, multimedia design elements, multiple intelligences (visual, interpersonal, and intrapersonal), an instructional design model, learning theory, object-oriented learning styles, and a development process. A prototype courseware was developed by applying the KOSMAT model in order to test its usability with target users among hearing-impaired undergraduates. The usability testing showed satisfactory performance for the three dimensions of ease of use, ease of learning, and content. The KOSMAT model was then improved based on the results of the expert validation. Hence, it can serve as a guideline or reference for developing learning courseware for hearing-impaired undergraduates, especially in special education.
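Usability tests like the one described usually tabulate questionnaire responses per dimension. The sketch below is purely illustrative: the 5-point Likert responses and the satisfaction threshold are invented, not the study's data.

```python
from statistics import mean

# Hypothetical 5-point Likert responses, grouped by the three
# dimensions mentioned in the study.
responses = {
    "ease of use":      [4, 5, 4, 4],
    "ease of learning": [5, 4, 4, 5],
    "content":          [4, 4, 3, 4],
}

threshold = 3.5  # assumed cut-off for calling a dimension "satisfactory"

for dimension, scores in responses.items():
    avg = mean(scores)
    verdict = "satisfactory" if avg >= threshold else "needs work"
    print(f"{dimension}: {avg:.2f} ({verdict})")
```

Reporting a mean per dimension, rather than one overall score, is what lets a study claim satisfactory performance "for the three dimensions" individually.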

    Localización e internacionalización de software: puntos de encuentro entre el localizador y el programador

    The localization process has always been regarded as a black box that functions on its own, with a team of project managers and translators who never get involved in software development methodologies or technologies. This is mainly due to the way in which the software localization industry has developed. In many cases, and for many of the main software publishers and platform developers, localization is a peripheral process that is only invoked when text strings need translation. Including localization late in the development process not only creates enormous problems when translating those strings, but on many occasions makes it impossible to launch the product in markets whose languages and cultures do not fit the features that were considered during development and programmed into the software.
    This dissertation begins with a historical account of the development of computers and describes how the three main programming platforms (mainframes, minicomputers, and personal computers) came into being. We shall see that, as hardware developed and new features were added, a variety of programming languages and strategies for developing software emerged. As the features, applications, and uses of hardware expanded further, the need to organize the strategies and processes involved in creating programs became the basis for formulating and establishing software development methodologies. Along with these developments, the introduction of personal computers promoted the creation of products that serve not only markets in the United States, but also other markets that communicate in different languages and have particular needs of their own. Thus the localization industry was born, and it is at this point that translation and computing began to come together.
    Until now, research on software localization has taken the programmer as its starting point: programmers have investigated what multilingual programs are and what needs to be done to create them. Our proposal comes from the other "side", that of the localizer who approaches programming as an expert in languages and intercultural mediation. This expert knows the problems faced inside localization's black box, but is also prepared to participate in the processes that take place before translation, those carried out to create new software. These "internationalizers", as we refer to them, have the knowledge and skills needed to join a software development team from the beginning all the way through to the final stages. Their presence helps integrate into the development process the requirements that allow for smooth localization when the decision is made to launch the application in other markets with diverse linguistic, legal, and cultural needs. In this century of communications, it is unthinkable to develop software that attends to the needs of only a single market. The internationalizer can help the software development team accomplish this.
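A concrete, standard internationalization practice of the kind argued for here is externalizing user-visible strings into per-locale message catalogs, so localizers can work without touching the source code. The sketch below is a minimal, hypothetical illustration (the catalog structure and function names are invented; production systems typically use gettext or ICU message catalogs instead).

```python
# Per-locale message catalogs: translators edit these, not the code.
CATALOGS = {
    "en": {"greeting": "Welcome, {name}!", "bye": "Goodbye."},
    "es": {"greeting": "¡Bienvenido, {name}!", "bye": "Adiós."},
}

def translate(locale: str, key: str, **params: str) -> str:
    """Look up a message key in the locale's catalog, falling back to
    English when the locale or the key is missing."""
    catalog = CATALOGS.get(locale, CATALOGS["en"])
    template = catalog.get(key, CATALOGS["en"][key])
    return template.format(**params)

print(translate("es", "greeting", name="Ana"))  # ¡Bienvenido, Ana!
print(translate("fr", "bye"))                   # falls back to English: Goodbye.
```

This is exactly the kind of requirement an internationalizer would push for early: if strings are hard-coded, the localization "black box" cannot do its work without invasive code changes late in the project.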