Evaluating Software Engineers' Acceptance of a Technique and Tool for Web Usability Inspection
Abstract—Usability is related to software quality, improving a system's ability to be understood, operated, and found attractive by users. We proposed the Design Usability Evaluation (DUE) technologies to support the early identification of usability problems in the development of Web applications, through the inspection of mockups. While we found that the DUE technique and tool were effective and efficient in identifying usability problems, we saw the need to investigate their acceptance in practitioners' work environments. This paper reports the results of a study evaluating the acceptance of the DUE technologies from the point of view of software engineers. We asked questions based on the indicators of the Technology Acceptance Model and found that a majority of the software engineers who participated in the study: (a) considered the DUE technologies useful and easy to use for supporting the usability inspection process; and (b) would regularly use the DUE technologies for future inspections in their jobs. Nevertheless, the practitioners indicated that the technique should be refined to reduce the ambiguity and repetition of some of its items, and that the tool should become more intuitive.
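The Technology Acceptance Model questions mentioned above are typically scored on Likert scales and aggregated per construct. A minimal sketch of that aggregation step, with illustrative item scores (not data from the DUE study):

```python
# Hypothetical sketch: aggregating Likert responses for the two core TAM
# constructs (perceived usefulness, perceived ease of use). The responses
# below are made-up examples, not data from the DUE acceptance study.
from statistics import mean

responses = {
    "perceived_usefulness": [5, 4, 4, 5, 3],   # 1 = strongly disagree .. 5 = strongly agree
    "perceived_ease_of_use": [4, 4, 3, 5, 4],
}

# A common reading: a construct leans "accepted" when its mean agreement
# exceeds the scale midpoint (3 on a 5-point Likert scale).
for construct, scores in responses.items():
    m = mean(scores)
    print(f"{construct}: mean={m:.2f}, above_midpoint={m > 3}")
```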
Towards a tool for the subjective assessment of speech system interfaces (SASSI)
Applications of speech recognition are now widespread, but user-centred evaluation methods are necessary to ensure their success. Objective evaluation techniques are fairly well established, but previous subjective techniques have been unstructured and unproven. This paper reports on the first stage of the development of a questionnaire measure for the Subjective Assessment of Speech System Interfaces (SASSI). The aim of the research programme is to produce a valid, reliable and sensitive measure of users' subjective experiences with speech recognition systems. Such a technique could make an important contribution to theory and practice in the design and evaluation of speech recognition systems according to best human factors practice. A prototype questionnaire was designed, based on established measures for evaluating the usability of other kinds of user interface, and on a review of the research literature on speech system design. It consisted of 50 statements with which respondents rated their level of agreement. The questionnaire was given to users of four different speech applications, and an Exploratory Factor Analysis of 214 completed questionnaires was conducted. This suggested the presence of six main factors in users' perceptions of speech systems: System Response Accuracy, Likeability, Cognitive Demand, Annoyance, Habitability and Speed. The six factors have face validity and a reasonable level of statistical reliability. The findings form a useful theoretical and practical basis for the subjective evaluation of any speech recognition interface. However, further work is recommended to establish the validity and sensitivity of the approach before a final tool can be produced that warrants general use.
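The factor-extraction step behind such a questionnaire study can be sketched as follows. This is an illustrative example on synthetic data (not the 214 SASSI questionnaires): eigen-decompose the inter-item correlation matrix and retain factors whose eigenvalue exceeds 1, the Kaiser criterion commonly used in Exploratory Factor Analysis.

```python
# Illustrative factor-extraction sketch on synthetic questionnaire data.
import numpy as np

rng = np.random.default_rng(0)
# 200 respondents, 6 items; items 0-2 and 3-5 load on two latent factors.
latent = rng.normal(size=(200, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
items = latent @ loadings.T + 0.5 * rng.normal(size=(200, 6))

corr = np.corrcoef(items, rowvar=False)        # 6x6 inter-item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted in descending order
n_factors = int(np.sum(eigenvalues > 1.0))     # Kaiser criterion: keep eigenvalues > 1
print(n_factors)
```

With two planted latent factors, the criterion recovers two factors; a real study would follow this with rotation and interpretation of the loadings.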
A consumer perspective e-commerce websites evaluation model
Existing website evaluation methods have weaknesses such as neglecting consumer criteria, being unable to deal with qualitative criteria, and involving complex weight and score calculations. This research aims to develop a hybrid consumer-oriented e-commerce website evaluation model based on the Fuzzy Analytical Hierarchy Process (FAHP) and the Hardmard Method (HM). Four phases were involved in developing the model: requirements identification, empirical study, model construction, and model confirmation. The requirements identification and empirical study phases identified critical web-design criteria and gathered online consumers' preferences. Data collected from 152 Malaysian consumers using online questionnaires were used to identify critical e-commerce website features and their scale of importance. The new evaluation model comprises three components. First, the consumer evaluation criteria, consisting of the principles consumers consider important; second, the evaluation mechanisms, which integrate FAHP and HM and consist of mathematical expressions that handle subjective judgments and new formulas to calculate the weight and score for each criterion; and third, the evaluation procedures, comprising goal establishment, document preparation, and identification of website performance. The model was examined by six experts and applied to four case studies. The results show that the new model is practical and appropriate for evaluating e-commerce websites from consumers' perspectives, and is able to calculate weights and scores for qualitative criteria in a simple way. In addition, it is able to assist decision-makers in making decisions in a measured, objective way. The model also contributes new knowledge to the software evaluation field.
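The classical AHP weighting step that FAHP builds on can be sketched briefly. This is a minimal illustration with hypothetical criteria and judgments (not the model's actual formulas): derive criterion weights from a Saaty-scale pairwise-comparison matrix using the geometric-mean approximation.

```python
# Minimal AHP weighting sketch; criteria and judgments are hypothetical.
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale for three example criteria:
# design vs. security vs. price. A[i, j] = importance of i relative to j.
A = np.array([
    [1,   3,   5],    # design judged 3x as important as security, 5x as price
    [1/3, 1,   2],
    [1/5, 1/2, 1],
])

# Geometric mean of each row, normalised to sum to 1, approximates the
# principal-eigenvector weights for a consistent matrix.
geo_means = np.prod(A, axis=1) ** (1 / A.shape[0])
weights = geo_means / geo_means.sum()
print(weights.round(3))
```

The resulting weights sum to 1 and preserve the judged ordering (design > security > price); the fuzzy variant replaces the crisp 1-9 judgments with fuzzy numbers before a comparable defuzzification and normalisation.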
A Hybrid Data-Driven Web-Based UI-UX Assessment Model
Today, a large proportion of end-user information systems have their Graphical User Interfaces (GUIs) built with web-based technology (JavaScript, CSS, and HTML). Such web-based systems include the Internet of Things (IoT), infotainment (in vehicles), interactive display screens (for digital menu boards, information kiosks, digital signage at bus stops or airports, bank ATMs, etc.), and web applications/services (on smart devices). As such, a web-based UI must be evaluated in order to improve its ability to perform the technical task for which it was designed. This study develops a framework and a process for evaluating and improving the quality of a web-based user interface (UI), both overall and at a stratified level. The comprehensive framework is a conglomeration of algorithms: the multi-criteria decision-making method of the analytical hierarchy process (AHP) for coefficient generation, sentiment analysis, the K-means clustering algorithm, and explainable AI (XAI).
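The K-means stage of such a hybrid pipeline can be sketched as follows. The data and the choice of two clusters are illustrative assumptions, not the study's dataset: each row is a page's aggregated scores (e.g. an AHP-weighted criterion score and a sentiment score), and clustering stratifies pages into quality tiers.

```python
# Hedged sketch of K-means stratification of per-page UI scores.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-center assignment and
    center recomputation for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Hypothetical scores: two low-quality pages near 0.2, two high near 0.8.
scores = np.array([[0.2, 0.1], [0.25, 0.15], [0.8, 0.9], [0.85, 0.8]])
labels, centers = kmeans(scores, k=2)
print(labels)
```

On well-separated data like this, the two tiers are recovered regardless of which points seed the centers; XAI techniques would then be applied to explain what drives a page's tier membership.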
Software languages engineering: experimental evaluation
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Informatics Engineering. Domain-Specific Languages (DSLs) are programming languages that, through appropriate notations and abstractions, offer expressive control over a particular problem domain, for more restricted use. They are expected to improve productivity, reliability, maintainability and portability, when compared with General Purpose Programming Languages (GPLs). However, as with any software product, skipping any of the development stages, namely Domain Analysis, Design, Implementation and Evaluation, may prevent some of a DSL's alleged advantages from being achieved with a significant level of satisfaction. This may lead to the production of inadequate or inefficient languages. This dissertation focuses on the Evaluation phase.
To characterize the DSL community's commitment to Evaluation, we conducted a systematic review. The review covered publications in the main fora dedicated to DSLs from 2001 to 2008, and allowed us to analyse and classify papers with respect to the validation efforts conducted by DSL producers, revealing reduced concern for this matter. Another important outcome was the identification of the absence of a concrete approach to the evaluation of DSLs that would allow a sound assessment of the actual improvements brought by their usage. Therefore, the main goal of this dissertation is the production of a Systematic Evaluation Methodology for DSLs. To achieve this objective, the major techniques used in Experimental Software Engineering and Usability Engineering were surveyed and applied. The proposed methodology was validated through its use in several case studies, in which DSL evaluations were carried out in accordance with the methodology.
Design rules and guidelines for generic condition-based maintenance software's Graphic User Interface
The task of selecting and developing a method of Human Computer Interaction (HCI) for a Condition Based Maintenance (CBM) system is investigated in this thesis. Efficiently and accurately communicating machinery health information extracted from Condition Monitoring (CM) equipment, to aid and assist plant and machinery maintenance decisions, is the crux of the problem being researched.
Challenges facing this research include: the multitude of different CM techniques, developed for measuring different component and machinery condition parameters; the multitude of different methods of HCI; and the multitude of different ways of communicating machinery health conditions to CBM practitioners. Each challenge will be considered whilst pursuing the objective of identifying a generic set of design and development principles, applicable to the design and development of a CBM system's Human Machine Interface (HMI). [Continues.]