
    Computer assisted education: design, development and evaluation.

    Thesis (M.Sc.)-University of Natal, 2001.
    Educational institutions throughout the world are increasingly facing classes of educationally, culturally and linguistically diverse student groups. At the same time, economic constraints require these institutions to expand their student base, and they are therefore looking to distance education and continuing education modules to meet these challenges. Simultaneously, rapid advances in desktop computing capabilities and Internet-delivered information have revived interest in Computer Assisted Education (CAE). The University of Natal is no exception to these trends; schools, departments and individual members of staff are increasingly exploring the possibility of using the University's computer infrastructure to assist in delivering quality education, maintaining current standards, and addressing the multiple needs of the students. To investigate these issues, a CAE program was developed for use in the Nelson R. Mandela School of Medicine, to investigate how students would make use of the technology and to report on the development and evaluation processes of such a development. In doing so, various lessons could be learnt which could inform the further development of such software at the University. In order to support the development of the CAE program, an extensive literature survey into current educational theory was conducted. Its objectives were to explore and understand all the factors affecting the development and use of computer-based systems as an educational tool. Particular aspects considered were:
    • the debate between constructivist and instructivist theory and their applicability to both the medium and the subject material;
    • instructional styles, and with them the learning styles, that could be used to support the educational goals of the diverse student population;
    • instructional design methodologies that are currently used, as well as media production methodologies. The goal of this aspect of the research was both to inform the development of the case study and to gain a broader understanding of the methodology that could be used for other developments. Included in this phase of the research are methods and criteria for the selection of authoring systems, and interface design issues in a multi-cultural, multi-lingual environment;
    • the review of different evaluation strategies, in order to incorporate appropriate evaluation in the CAE case study;
    • the investigation of broader sociological and historical factors that may influence the way in which CAE can be used effectively in a South African context.
    The presumption was that students from historically disadvantaged backgrounds and those with English as a second language would be less willing to use technological interventions than those who were more likely to have had access to computers earlier in their education. The case study set out to investigate whether this presumption was valid, and if so, what elements of design and delivery could facilitate these students' usage of such systems. However, these presumptions were not validated by the case study, which showed the exact opposite of expectations, with more historically disadvantaged students showing a willingness to use the module.

    Auto-generation of rich Internet applications from visual mock-ups

    Capturing and communicating software requirements accurately and quickly is a challenging activity, requiring the expertise of people with unique skills. Traditionally this challenge has been compounded by assigning specialist roles for requirements gathering and analysis, design, and implementation. These multiple roles have resulted in information loss, mainly due to miscommunication between requirements specialists, designers and implementers. Large enterprises have managed the information loss by using document-centric approaches, leading to delays and cost escalations. But documentation-centric and multiple-role approaches are not suitable for Small to Medium Enterprises (SMEs), because they are vulnerable to market competition. Moreover, SMEs require effective online applications to provide their services. Hence the motivation for carrying out this research is to explore the possibilities of empowering requirements specialists such as Business Analysts (BAs) to take on the additional responsibilities of designers and implementers to generate web applications. In addition, SME owners and BAs can communicate better if they perceive the application requirements using a What You See Is What You Get (WYSIWYG) approach. Hence, this research explores the design and development of a mock-up-based auto-generating tool to develop SME applications. A tool that auto-generates an application from a mock-up should have the capacity to extract the essential implementation details from the mock-up. Hence a visual mock-up language was created by extending existing research on meta-models of UIs for a class of popular modern web-based business applications called Rich Internet Applications (RIAs). The popularity of RIAs is due to their distinctive client-side processing power, with desktop-application-like responsiveness and look and feel. The mock-ups drawn with the mock-up language should have a sufficient level of detail to auto-generate RIAs. To support this, the mock-up language includes constructs for specifying a RIA's mock-up in terms of layouts and the widgets within them. In addition, the language uses annotations on the mock-up to specify the behaviour of the system. In such an approach the only additional effort required of a Business Analyst is to specify the requirements in terms of a mock-up of the expected interfaces of the SME application. Apart from the mock-up language, a tool was designed and developed to auto-generate the desired application from the mock-up. The tool is powered by algorithms to derive the database structure and the client-side and server-side components required for the auto-generated application. The validation of the mock-up language and auto-generating tool was performed with BAs to demonstrate their usability. The measurement and evaluation results indicate that the mock-up language and the auto-generator can be used successfully to help BAs in the development of SME applications and thereby reduce delays, errors and cost overruns. The important contributions of this research are:
    (i) the design of a mock-up language that makes it easy to capture the structure and behaviour of SME web applications;
    (ii) algorithms for the automatic derivation of the expected database schema from a visual mock-up;
    (iii) algorithms for the automatic derivation of the client- and server-side application logic;
    (iv) the application of an existing measurement and evaluation process to the usability testing of the mock-up language and the auto-generated application.
    This research followed the Design Science Research (DSR) method for Information Systems to guide the design and to capture the knowledge created during the design process. DSR is a research method useful in solving wicked problems requiring innovative solutions for incomplete, contradictory or changing requirements that are often difficult to recognize. This research opens new ways of thinking about web application development for future research. Specifically, mock-ups with a few easy-to-understand annotations can be used as powerful active artifacts to capture the structure and behaviour of applications, not just of small but also of large enterprises. Auto-generating tools can then create fully functional and usable applications holistically from such mock-ups, thereby reducing delays and cost overruns during software engineering.
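    The derivation algorithms themselves are not given in this abstract, but a minimal sketch can convey the flavor of deriving a database schema from annotated mock-up widgets. Everything below (the widget structure, the annotations, and the type mapping) is invented for illustration and is not the thesis's actual meta-model:

```python
# Hypothetical sketch: deriving a relational schema from an annotated mock-up.
# The mock-up model below is invented for illustration; the thesis defines its
# own visual language and meta-model.
from dataclasses import dataclass

@dataclass
class Widget:
    kind: str      # e.g. "textbox", "datepicker", "checkbox"
    name: str      # label the Business Analyst gave the widget
    entity: str    # annotation: which business entity the widget belongs to

# Assumed mapping from widget kinds to SQL column types.
SQL_TYPES = {"textbox": "VARCHAR(255)", "datepicker": "DATE", "checkbox": "BOOLEAN"}

def derive_schema(widgets: list[Widget]) -> str:
    """Group widgets by their entity annotation and emit one table per entity."""
    tables: dict[str, list[str]] = {}
    for w in widgets:
        col = f"{w.name} {SQL_TYPES.get(w.kind, 'VARCHAR(255)')}"
        tables.setdefault(w.entity, []).append(col)
    ddl = []
    for entity, cols in tables.items():
        body = ",\n  ".join(["id INTEGER PRIMARY KEY"] + cols)
        ddl.append(f"CREATE TABLE {entity} (\n  {body}\n);")
    return "\n\n".join(ddl)

mockup = [
    Widget("textbox", "customer_name", "customer"),
    Widget("datepicker", "order_date", "purchase_order"),
    Widget("checkbox", "paid", "purchase_order"),
]
print(derive_schema(mockup))
```

    A real generator would also have to infer relationships between entities and emit the matching client- and server-side components; this sketch only shows the grouping step.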

    Prediction, detection, and correction of misunderstandings in interactive tasks

    Technology has allowed all kinds of devices and software to come into our lives. Advances in GPS, Virtual Reality, and wearable computers with increased computing power and Internet connectivity open the doors for interactive systems that were considered science fiction less than a decade ago, and that are capable of guiding us in a variety of environments. This increased accessibility comes with an increase in both the scale of problems that can be realistically tackled and the capabilities that we expect from such systems. Indoor navigation is an example of such a task: although guiding a car is a solved problem, guiding humans, for instance inside a museum, is much more challenging. Unlike cars, pedestrians use landmarks rather than absolute distances. They must discriminate among a larger number of distractors, and expect sentences of higher complexity than those appropriate for a car driver. A car driver prefers short, simple instructions that do not distract them from traffic. A tourist inside a museum, on the contrary, can afford the mental effort that a detailed grounding process requires. Both car and indoor navigation are specific examples of a wider family of collaborative tasks known as “Instruction Following”. In these tasks, agents with the two clearly defined roles of Instruction Giver and Instruction Follower must cooperate to achieve a joint objective. The former has access to all required information about the environment, including (but not limited to) a detailed map of the environment, a clear list of objectives, and a profound understanding of the effect that specific actions have on the environment. The latter is tasked with following the instructions, interacting with the environment, and moving the undertaking forward. It is then the Instruction Giver's responsibility to devise a detailed plan of action, segment it into smaller subgoals, and present instructions to the Instruction Follower in a language that is clear and understandable. No matter how carefully crafted the Instruction Giver's utterances are, it is expected that misunderstandings will take place. Although some of these misunderstandings are easy to detect and repair, others can be very difficult or even impossible to solve. It is therefore important for the Instruction Giver to generate instructions that are as clear as possible, to detect misunderstandings as early as possible, and to correct them in the most effective way. This thesis introduces several algorithms and strategies designed to tackle the aforementioned problems from end to end, presenting the individual aspects of a system that successfully predicts, detects, and corrects misunderstandings in interactive Instruction Following tasks. We focus on one particular type of instruction: those involving Referring Expressions. A Referring Expression identifies a single object out of many, such as “the red button” or “the tall plant”. Generating Referring Expressions is a key component of Instruction Following tasks, since any kind of object manipulation is likely to require a description of the object. Due to its importance and complexity, this is one of the most widely studied areas of Natural Language Generation. In this thesis we use Semantically Interpreted Grammars, an approach that integrates both Referring Expression Generation (identifying which properties are required for a unique description) and Surface Realization (combining those properties into a concrete Noun Phrase).
    The complexity of performing, recording, and analyzing Instruction Following tasks in the real world is one of the major challenges of Instruction Following research. In order to simplify both the development of new algorithms and the access to those results by the research community, our work is evaluated in what we call a Virtual Environment: an environment that mimics the main aspects of the real world and abstracts away distractions, while preserving enough characteristics of the real world to be useful for research. Selecting the appropriate virtual environment for a research task ensures that results will be applicable in the real world. We have selected the Virtual Environment of the GIVE Challenge, an environment designed for an Instruction Following task in which a human Instruction Follower is paired with an automated Instruction Giver in a maze-like 3D world. Completing the task requires navigating the space, avoiding alarms, interacting with objects, generating instructions in Natural Language, and preventing mistakes that can bring the task to a premature end. Even under these simplified conditions, the task presents several computational challenges: performing these tasks in real time requires fast algorithms, and ensuring the efficiency of our approaches remains a priority at every step. Our first experimental study identifies the most challenging type of mistakes that our system is expected to find. Creating an Instruction Following system that leverages previously recorded human data and follows instructions using a simple greedy algorithm, we clearly separate those situations for which no further study is warranted from those that are of interest for our research. We test our algorithm with similarity metrics of varying complexity, ranging from overlap measures such as Jaccard and edit distances (sketched after this paragraph) to advanced machine learning algorithms such as Support Vector Machines. The best-performing algorithms not only achieve good accuracy; we show that their mistakes are highly correlated with situations that are also challenging for human annotators. Going a step further, we also study the type of improvement that can be expected from our system if we give it the chance to retry after a mistake was made. This system has no prior beliefs on which actions are more likely to be selected next, and our results make a good case that this assumption is one of its weakest points. Moving away from a paradigm where all actions are considered equally likely, and towards a model in which the Instruction Follower's own actions are taken into account, our subsequent step is the development of a system that explicitly models the listener's understanding. Given an instruction containing a Referring Expression, we approach the Instruction Follower's understanding of it with a combination of two probabilistic models. The Semantic model uses features of the Referring Expression to identify which object is more likely to be selected: if the instruction mentions a red button, it is unlikely that the Instruction Follower will select a blue one. The Observational model, on the other hand, predicts which object will be selected by the Instruction Follower based on their behavior: if the user is walking straight towards a specific object, it is very likely that this object will be selected.
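    The simplest of the overlap measures named above are easy to make concrete. A minimal sketch follows; token-level comparison is an illustrative assumption, and the thesis's actual feature extraction is not reproduced here:

```python
# Illustrative sketch of two overlap measures named in the text.
# How recorded interactions are turned into comparable sequences is an
# assumption of this sketch; we simply compare token sequences.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def edit_distance(s: list, t: list) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, x in enumerate(s, 1):
        curr = [i]
        for j, y in enumerate(t, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

ref = "press the red button".split()
hyp = "press the blue button".split()
print(jaccard(set(ref), set(hyp)))   # 0.6
print(edit_distance(ref, hyp))       # 1
```

    And a hypothetical sketch of how the Semantic and Observational models just described might be combined. The feature scores, the softmax normalization, and the product-of-experts combination below are invented for illustration; they are not the thesis's actual models:

```python
# Hypothetical sketch of combining a Semantic and an Observational model.
# Both are treated as log-linear distributions over candidate objects;
# the features and scores below are invented for illustration.
import math

def log_linear(scores: dict[str, float]) -> dict[str, float]:
    """Normalize raw scores into a probability distribution (softmax)."""
    z = sum(math.exp(s) for s in scores.values())
    return {o: math.exp(s) / z for o, s in scores.items()}

def semantic_scores(expression: str, objects: dict[str, set]) -> dict[str, float]:
    """Score objects by how many words of the expression match their properties."""
    words = set(expression.split())
    return {name: float(len(words & props)) for name, props in objects.items()}

def observational_scores(distances: dict[str, float]) -> dict[str, float]:
    """Closer objects (e.g. ones the user walks towards) score higher."""
    return {name: -d for name, d in distances.items()}

def combined(expression, objects, distances):
    p_sem = log_linear(semantic_scores(expression, objects))
    p_obs = log_linear(observational_scores(distances))
    raw = {o: p_sem[o] * p_obs[o] for o in objects}   # product of experts
    z = sum(raw.values())
    return {o: v / z for o, v in raw.items()}

objects = {"b1": {"red", "button"}, "b2": {"blue", "button"}}
distances = {"b1": 5.0, "b2": 1.2}   # the user is heading towards b2
pred = combined("the red button", objects, distances)
# If the most likely object under `pred` differs from the intended referent,
# a misunderstanding is predicted before the click happens.
print(max(pred, key=pred.get))   # -> "b2": predicted misunderstanding
```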
    The Semantic and Observational models, both log-linear and probabilistic, were trained with recorded human data from the GIVE Challenge, resulting in a model that can effectively predict that a misunderstanding is about to take place several seconds before it actually happens. Using our Combined model, we can easily detect and predict misunderstandings: if the Instruction Giver tells the Instruction Follower to “click the red button”, and the Combined model detects that the Instruction Follower will select a blue one, we know that a misunderstanding took place, we know what the misunderstood object is, and we know both facts early enough to generate a correction that will stop the Instruction Follower from making the mistake in the first place. A follow-up study extends the Observational model, introducing features based on the gaze of the Instruction Follower. Gaze has been shown to correlate with human attention, and our study explores whether gaze-based features can improve the accuracy of the Observational model. Using previously collected data from the GIVE Environment in which gaze was recorded using eye-tracking equipment, the resulting Extended Observational model improves the accuracy of predictions in challenging scenes where the number of distractors is high. Having a reliable method for the detection of misunderstandings, we turn our attention towards corrections. A corrective Referring Expression is one designed not merely to identify a single object out of many, but to identify an object that was previously identified incorrectly. The simplest possible corrective Referring Expression is repetition: if the user misunderstood the expression “the red button” the first time, it is possible that they will understand it correctly the second time. A smarter approach, however, is to reformulate the Referring Expression in a way that makes it easier for the Instruction Follower to understand. We designed and evaluated two different strategies for the generation of corrective feedback. The first of these strategies exploits the pragmatic concept of a Context Set, according to which human attention can be segmented into objects that are being attended to (that is, those inside the Context Set) and those that are ignored. According to our theory, we could virtually ignore all objects outside the Context Set and generate Referring Expressions that would not be uniquely identifying with respect to the entire context, but would still be identifying enough for the Instruction Follower. As an example, if the user is undecided between a red button and a blue one, we could generate the Referring Expression “the red one” even if there are other red buttons in the scene that the user is not paying attention to. Using our probabilistic models as a measure of which elements to include in the Context Set, we modified our Referring Expression Generation algorithm to build sentences that explicitly account for this behavior. We performed experiments over the GIVE Challenge Virtual Environment, crowdsourcing the data collection process, with mixed results: even if our definition of a Context Set were correct (a point that our results can neither confirm nor deny), our strategy generates Referring Expressions that prevent some mistakes but are in general harder to understand than those of the baseline approach. The results are presented along with an extensive error analysis of the algorithm. They imply that corrections can cause the Instruction Follower to re-evaluate the entire situation in a new light, making our previous definition of the Context Set impractical.
    Our approach also fails at identifying previously grounded referents, compounding the number of pragmatic effects that conspire against it. The second strategy for corrective feedback consists of adding contrastive focus to a second, corrective Referring Expression. In a scenario in which the user receives the Referring Expression “the red button” and yet mistakenly selects a blue one, an approach with contrastive focus would generate “no, the RED button” as a correction. Such a Referring Expression makes it clear to the Instruction Follower that, on the one hand, their selection of an object of type “button” was correct, and that, on the other hand, it is the property “color” that needs re-evaluation. In our approach, we model a misunderstanding as a noisy channel corruption: the Instruction Giver generates a correct Referring Expression for a given object, but it is corrupted in transit and reaches the Instruction Follower in the form of an altered, incorrect Referring Expression. We correct this misconstrual by generating a new, corrective Referring Expression: starting from the original Referring Expression and the misunderstood object, we identify the constituents of the Referring Expression that were corrupted and place contrastive focus on them. Our hypothesis states that the minimum edit sequence between the original and the misunderstood Referring Expression correctly identifies the constituents requiring contrastive focus, a claim that we verify experimentally (a simplified sketch of this edit-based focus placement appears at the end of this abstract). We perform crowdsourced preference tests over several variations of this idea, evaluating Referring Expressions that either present contrast side by side (as in “no, not the BLUE button, the RED button”) or attempt to remove redundant information (as in “no, the RED one”). We evaluate our approaches using both simple scenes from the GIVE Challenge and more complicated ones showing pictures from the more challenging TUNA people corpus. Our results show that human users significantly prefer our most straightforward contrastive algorithm. In addition to detailing models and strategies for misunderstanding detection and correction, this thesis also includes practical considerations that must be taken into account when dealing with tasks similar to those discussed here. We pay special attention to Crowdsourcing, a practice in which data about tasks can be collected from participants all over the world at a lower cost than traditional alternatives. Researchers interested in using crowdsourced data must often deal both with unmotivated players and with players whose main motivation is to complete as many tasks as possible in the least amount of time. Designing a crowdsourced experiment requires a multifaceted approach: the task must be designed in such a way as to motivate honest players and discourage cheating, technical measures must be implemented to detect bad data, and undesired behavior must be prevented by examining the entire pipeline with a security mindset. We dedicate a chapter to this issue, presenting a full example that will undoubtedly be of help for future research. We also include sections dedicated to the theory behind our implementations. Background literature includes the pragmatics of dialogue, misunderstandings, and focus, the link between gaze and visual attention, the evolution of approaches towards Referring Expression Generation, and reports on the motivations of crowdsourced workers that borrow from fields such as psychology and economics.
    This background contextualizes our methods and results with respect to wider fields of study, enabling us to explain not only that our methods work but also why they work. We finish our work with a brief overview of future areas of study. Research on the prediction, detection, and correction of misunderstandings in a multitude of environments is already underway. With the introduction of more advanced virtual environments, modern spoken dialogue-based tools revolutionizing the market of home devices, and computing power and data readily available, we expect that the results presented here will prove useful for researchers in several areas of Natural Language Processing for many years to come.
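    As promised above, here is a simplified sketch of the edit-based contrastive-focus placement. Plain token alignment of equal-length phrases stands in for the thesis's constituent-level minimum edit sequence, so this is an illustrative special case, not the actual algorithm:

```python
# Hedged sketch: place contrastive focus on the constituents where the
# original and the misunderstood Referring Expression differ.
# Real constituents come from a grammar; here we align tokens of
# equal-length phrases (a special case of the minimum edit sequence).

def contrastive_correction(original: str, misunderstood: str) -> str:
    orig, mis = original.split(), misunderstood.split()
    assert len(orig) == len(mis), "sketch handles substitutions only"
    focused = [o.upper() if o != m else o for o, m in zip(orig, mis)]
    return "no, " + " ".join(focused)

# The user was told "the red button" but acted as if they heard "the blue button":
print(contrastive_correction("the red button", "the blue button"))
# -> "no, the RED button"
```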

    A Usability Inspection Method for Model-driven Web Development Processes

    Web applications are currently considered an essential and indispensable element of all business activity and information exchange, and a driver of social networks. Usability, in this type of application, is recognized as one of the most important key factors, since the ease or difficulty that users experience with these applications largely determines their success or failure. However, current Web usability evaluation proposals have several limitations: the concept of usability is only partially supported; usability evaluations are mainly performed once the Web application has been developed; there is a lack of guidance on how to properly integrate usability into Web development; and there is also a lack of empirically validated Web usability evaluation methods. Moreover, most Web development processes do not take advantage of the artifacts produced in the design phases. These intermediate software artifacts are mainly used to guide developers and to document the Web application, but not to perform usability evaluations. Since the traceability between these artifacts and the final Web application is not well defined, performing usability evaluations of these artifacts is difficult. This problem is mitigated in model-driven Web development (MDWD), where the intermediate artifacts (models) that represent different perspectives of a Web application are used at every stage of the development process, and the final source code is automatically generated from these models. By taking the traceability between these models into account, evaluating the models makes it possible to detect usability problems that the end users of the final Web application would experience, and to provide recommendations for correcting these problems during the early stages of the Web development process. Addressing the limitations identified above, this thesis aims to propose a usability inspection method that can be integrated into different model-driven Web development processes. The method is composed of a Web usability model that decomposes the concept of usability into sub-characteristics, attributes and generic metrics, and a Web Usability Evaluation Process (WUEP), which provides guidelines on how the usability model can be used to carry out specific evaluations. The generic metrics of the usability model must be operationalized in order to be applicable to the software artifacts of different Web development methods and at different levels of abstraction, which makes it possible to evaluate usability at several stages of the Web development process, especially the early ones. Both the usability model and the evaluation process are aligned with the latest ISO/IEC 25000 series of standards for software product quality evaluation (SQuaRE). The proposed usability inspection method (WUEP) has been instantiated in two different model-driven Web development processes (OO-H and WebML) in order to demonstrate the feasibility of our proposal. In addition, WUEP was empirically validated by conducting a family of experiments in OO-H and a controlled experiment in WebML.
    The objective of our empirical studies was to evaluate the effectiveness, efficiency, perceived ease of use and perceived satisfaction of participants when using WUEP, in comparison with a widely used industrial inspection method: Heuristic Evaluation (HE). The statistical analysis and meta-analysis of the data obtained separately from each experiment indicated that WUEP is more effective and more efficient than HE in detecting usability problems. The evaluators also perceived greater satisfaction when they applied WUEP, and…
    Fernández Martínez, A. (2012). A Usability Inspection Method for Model-driven Web Development Processes [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17845
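    The abstract describes generic metrics that must be operationalized for concrete artifacts at a given abstraction level. As a loose illustration only (the model structure, metric name, and threshold below are invented, not WUEP's actual definitions), an operationalized metric over an abstract navigational model might look like this:

```python
# Invented illustration of operationalizing a generic usability metric
# for an abstract navigational model; WUEP's real metrics differ.
from dataclasses import dataclass, field

@dataclass
class NavigationalNode:
    name: str
    links: list[str] = field(default_factory=list)

def breadth_of_navigation(nodes: list[NavigationalNode]) -> float:
    """Generic metric: average number of outgoing links per node.
    Too few links can hurt reachability; too many can overload users."""
    return sum(len(n.links) for n in nodes) / len(nodes)

model = [
    NavigationalNode("Home", ["Catalog", "Cart", "Help"]),
    NavigationalNode("Catalog", ["Product", "Cart"]),
    NavigationalNode("Product", ["Cart"]),
]
value = breadth_of_navigation(model)
# An evaluator would compare `value` against a threshold defined during
# operationalization and report a usability problem if it falls outside it.
print(f"breadth of navigation: {value:.2f}")
```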

    The consumer engagement-interactivity link : an e-retailing perspective

    An increasingly turbulent and unpredictable consumer landscape is posing unprecedented challenges for the modern marketer. Faced with a highly fragmented and cynical consumer base, aggressive competitive strategies, a constantly evolving digital and cyber world, and the economic volatility characterising the modern macro environment, marketers are under increasing pressure to align their strategic positioning with “consumer hearts and minds”. Compounding this rise in consumer complexity is the development and salience of dual and multiple consumer identities, largely as a result of the growth in online and social media communities. Against this backdrop the Marketing Science Institute (MSI), the global voice and agenda-setting body for marketing research priorities, has proposed placing consumer engagement (CE) at the forefront of marketing strategy, identifying the need to understand how to engage through innovation and design. Whilst academics and practitioners alike have acknowledged the importance of consumer engagement, describing it as the ‘holy grail’ for unlocking consumer behaviour, there is still a lack of consensus as to its conceptualisation and therefore its relationship with other marketing constructs. The salience of the online and digital consumer further compounds the difficulty of formulating a CE framework that is integrative and cross-contextual. For instance, the construct of interactivity has considerable overlap with CE when applied to the online and digital domain. This study therefore moves away from the predominantly adopted exploratory approach to CE investigation, providing empirical research into consumer engagement's conceptualisation online and clarifying the nature of the relationship between CE and interactivity. A post-positivist critical realist ontology was used to guide the research process, with the initial qualitative stage conducting twenty-eight semi-structured interviews: nine with consumers, eight with academics and eleven with marketing and communications practitioners possessing online and digital expertise. The subsequent main quantitative phase then surveyed 600 online UK consumers, yielding 496 usable responses. Interview data suggested the centrality of emotional, cognitive and behavioural dimensions in consumer engagement's structure; highlighted the antecedent nature of interactivity in developing CE online; and identified potential moderators of the CE-interactivity relationship. The framework developed for quantitative validation was therefore based on these initial findings. The survey data was subject to exploratory and confirmatory factor analysis, structural equation modelling, satisfaction of goodness-of-fit indices, reliability and validity testing, and rival model comparison. The most pertinent finding of this research is establishing the CE-interactivity link, with the interactivity constructs of customisation, communication, control and speed of response all found to be antecedents of CE, in order of influence. The findings also confirm consumer engagement's multi-dimensionality, highlighting the online CE facets to be emotional CE (emotion and experience) and cognitive & behavioural CE (learning & insight and co-creation). Gender, satisfaction & trust, and tolerance are also identified as moderating factors in the CE-interactivity relationship.
    Contributions are made through the investigation of consumer engagement in the e-retailing context; providing further insight into CE's relationship within a nomological network of already established relationship marketing constructs; large-scale quantitative validation of the proposed CE-interactivity framework; and a multi-stakeholder approach to data collection, helping to bridge the academic-practitioner divide (Gambetti et al., 2012). The investigation concludes with an in-depth discussion of the managerial implications, as well as an overview of the study's key limitations, contributions and recommendations for future research.
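    Among the analyses listed above is reliability testing. As a generic methodological illustration (the scale items and scores below are invented, and nothing here is specific to this study's instrument), the internal consistency of a survey scale is commonly checked with Cronbach's alpha:

```python
# Generic sketch: Cronbach's alpha for a survey scale (invented data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Five respondents answering a hypothetical three-item engagement scale.
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values above ~0.7 are typical targets
```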

    Diverse Contributions to Implicit Human-Computer Interaction

    When people interact with computers, a great deal of information is provided unintentionally. By studying these implicit interactions it is possible to understand which characteristics of the user interface are beneficial (or not), thereby deriving implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its usefulness. Moreover, such data removes the cost of having to interrupt the user so that they explicitly submit information about a topic that, in principle, need not be related to their intention in using the system. On the other hand, implicit interactions sometimes do not provide clear and concrete data, so special attention must be paid to how this source of information is managed. The purpose of this research is twofold: 1) to apply a new vision to both the design and the development of applications that can react accordingly to the user's implicit interactions, and 2) to provide a series of methodologies for the evaluation of such interactive systems. Five scenarios illustrate the feasibility and suitability of the thesis framework. Empirical results with real users show that leveraging implicit interaction is both a suitable and a convenient means of improving interactive systems in multiple ways.
    Leiva Torres, LA. (2012). Diverse Contributions to Implicit Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17803

    Hypermediating Journalistic Authority: the case of Ethiopian private media

    This study, situated at the nexus between journalism, technology and society, focuses on the mediation of authority in the hypertextual environment. It explores the technology-journalism relationship as deployed in the Ethiopian media context, where the private press is marginalized from access to official information. This particular context is important for examining the deployment of hyperlinks not only as a natural adoption of technology but also as a survival mechanism, and as a locus of meaning where authority is mediated through linking strategies. The study selected two private news sites in this media landscape, Addis Standard (AS) and Ethiopia Observer (EO), based on their frequent use of hyperlinks in their news reporting. With a critical approach drawing upon qualitative and quantitative social semiotics, the study conducted a multi-stage analysis following link trajectory, navigational pathways, link precision, link destination and authorship. Data were collected over two periods to examine the interplay of the socio-political context with hyperlinking strategies: three months of initial data and six months during the ongoing Tigray war. Though general linking patterns, such as the total number of links, showed large gaps between the two periods (a notable rise for Addis Standard and a dramatic decline for Ethiopia Observer), both outlets, in contrast to the global trend, prioritized external linking. As the study progressed, in addition to the differential utilization of hyperlinks between the two news sites as a reflection of their status in society, the study also revealed customization of use attuned to material realities. With a relatively strong presence in the Ethiopian media landscape and better access to sources and events, AS uses a notable proportion of its hyperlinking to call attention to its original reports, while EO, a diasporic outlet, frequently uses external linking to fulfill its information needs. Hyperlinking is also used by AS to show professional interventions in recycled stories for contextualization. Despite the methodological constraints imposed by various external factors, the study showed the potential of hyperlinking strategies to shape the content of news. This was demonstrated by how EO gravitated toward government outlets in the second dataset, establishing the government's “law enforcement operations” narrative over that of the atrocities against civilians reported by popular international outlets that covered the war extensively.
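    The linking patterns reported here rest on simple counts of internal versus external links. As a hedged sketch (the URLs are illustrative, and treating “internal” as same-domain is an assumption of this sketch, not the study's coding scheme), classifying a story's links might look like:

```python
# Illustrative sketch: classify hyperlinks in a news article as internal
# (same site, e.g. calling attention to original reports) or external.
from urllib.parse import urlparse

def classify_links(article_domain: str, links: list[str]) -> dict[str, int]:
    counts = {"internal": 0, "external": 0}
    for url in links:
        host = urlparse(url).netloc
        kind = "internal" if host.endswith(article_domain) else "external"
        counts[kind] += 1
    return counts

links = [
    "https://addisstandard.com/earlier-report",
    "https://www.reuters.com/world/africa/some-story",
]
print(classify_links("addisstandard.com", links))
# -> {'internal': 1, 'external': 1}
```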

    The design and evaluation of non-visual information systems for blind users

    This research was motivated by the sudden increase of hypermedia information (such as that found on CD-ROMs and on the World Wide Web), which was not initially accessible to blind people although it offered significant advantages over traditional braille and audiotape information. Existing non-visual information systems for blind people had very different designs and functionality, but none of them provided what was required according to user requirements studies: an easy-to-use non-visual interface to hypermedia material with a range of input devices for blind students. Furthermore, there was no single suitable design and evaluation methodology which could be used for the development of non-visual information systems. The aims of this research were therefore: (1) to develop a generic, iterative design and evaluation methodology consisting of a number of techniques suitable for formative evaluation of non-visual interfaces; (2) to explore non-visual interaction possibilities for a multimodal hypermedia browser for blind students based on user requirements; and (3) to apply the evaluation methodology to non-visual information systems at different stages of their development. The methodology developed and recommended consists of a range of complementary design and evaluation techniques, and successfully allowed the systematic development of prototype non-visual interfaces for blind users by identifying usability problems and developing solutions. Three prototype interfaces are described: the design and evaluation of two versions of a hypermedia browser, and an evaluation of a digital talking book. Recommendations made from the evaluations for an effective non-visual interface include the provision of a consistent multimodal interface, non-speech sounds for information and feedback, a range of simple and consistent commands for reading, navigation, orientation and output control, and support features. This research will inform developers of similar systems for blind users; in addition, the methodology and design ideas are considered sufficiently generic, but also sufficiently detailed, that the findings could be applied successfully to the development of non-visual interfaces of any type.

    The Challenges of Algorithmically Assigning Fact-checks: A Sociotechnical Examination of Google's Reviewed Claims

    In the era of misinformation and machine learning, the fact-checking community is eager to develop semi-automated fact-checking techniques that can detect misinformation and present fact-checks alongside problematic content. This thesis explores the technical elements and social context of one claim matching system, Google's Reviewed Claims. The Reviewed Claims feature was one of the few user-facing interfaces in the complex sociotechnical system between fact-checking organizations, news publishers, Google, and online information seekers. This thesis addresses the following research questions:
    RQ1: How accurate was Google's Reviewed Claims feature?
    RQ2: Is it possible to create a consensus definition for relevant fact-checks to enable the development of more successful automated fact-checking systems?
    RQ3: How do different actors in the fact-checking ecosystem define relevance?
    I pursue these research questions through a series of methods including qualitative coding, qualitative content analysis, quantitative data analysis, and user studies. To answer RQ1, I qualitatively label the relevance of 118 algorithmically assigned fact-checks and find that 21% of fact-checks are not relevant to their assigned article. To address RQ2, I find that three independent raters using a survey are only able to come to fair-to-moderate agreement about whether the algorithmically assigned fact-checks are relevant to the matched articles. A reconciliation process substantially raised their agreement, which indicates that further discussion may create a common understanding of relevance among information seekers. Using raters' open-ended justification responses, I generated six categories of justifications for their explanations. To further evaluate whether information seekers share a common definition of relevance, I asked Amazon Mechanical Turk workers to classify six different algorithmically assigned fact-checks, and found that crowd workers were more likely to find the matched content relevant and were unable to agree on the justifications. With regard to RQ3, a sociotechnical analysis finds that the fact-checking ecosystem is fraught with distrust and conflicting incentives between individual actors (news publishers distrust fact-checking organizations and platforms, fact-checking organizations distrust platforms, etc.). Because of this distrust, future systems need to be interpretable and transparent about relevance and about the ways in which claim matching is performed. Fact-checking depends on nuance and context, and AI is not yet technically sophisticated enough to account for these variables. As such, human-in-the-loop models seem essential to future semi-automated fact-checking approaches. However, my results indicate that untrained crowd workers may not be the ideal candidates for modeling complex values in sociotechnical systems.
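    Fair-to-moderate agreement among three raters is the kind of result typically quantified with an inter-rater statistic such as Fleiss' kappa. The abstract does not name the statistic used, so the sketch below, with invented ratings, is an assumption rather than a reproduction of the thesis's analysis:

```python
# Hedged sketch: Fleiss' kappa for three raters labeling fact-check relevance.
# The ratings below are invented; kappa itself is an assumption, as the
# abstract does not specify which agreement statistic was used.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts: items x categories matrix; each row sums to the number of raters."""
    n_raters = counts.sum(axis=1)[0]
    p_item = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_item.mean()                       # observed agreement
    p_cat = counts.sum(axis=0) / counts.sum()   # category proportions
    p_e = (p_cat ** 2).sum()                    # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Rows: assigned fact-checks; columns: votes for ["relevant", "not relevant"].
ratings = np.array([
    [3, 0],
    [2, 1],
    [1, 2],
    [3, 0],
    [0, 3],
    [2, 1],
])
print(f"kappa = {fleiss_kappa(ratings):.2f}")  # ~0.30: "fair" on common benchmarks
```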