    Combating e-discrimination in the North West - final report

    The Combating eDiscrimination in the North West project examined over 100 websites advertising job opportunities both regionally and nationally, and found the vast majority to be largely inaccessible. Professional standards, such as using valid W3C code and adhering to the W3C Web Content Accessibility Guidelines, were largely not followed. The project also conducted interviews with both public and private sector web professionals, and focus groups of disabled computer users, to draw a broader picture of the accessibility of jobs websites. Interviews with leading web development companies in the Greater Manchester region showed a view that making websites accessible should carry no additional cost, since the expertise to create a site professionally should be in place from the start, and accessibility will follow from applying professional standards. However, in the course of creating a website for the project with one such company, it was found that following professional standards is not sufficient to catch all potential problems, and that user testing is an essential adjunct to professional practice. The main findings of the project are thus that:
    • Most websites in the job opportunities sector do not follow professional standards of web development and are largely inaccessible
    • Professional standards of web development need to be augmented with user testing to ensure proper accessibility.
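Part of the auditing the project describes can be automated. As a minimal sketch (the class and sample markup are hypothetical, not the project's actual tooling), the following flags `<img>` tags lacking a non-empty `alt` attribute, one of the WCAG checks machines can cover, using only Python's standard library:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Counts <img> tags that lack a non-empty alt attribute --
    one of the WCAG checks that automated tooling can cover."""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0
        self.total_imgs = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_imgs += 1
            if not dict(attrs).get("alt"):   # absent or empty alt text
                self.missing_alt += 1

page = '<p><img src="logo.png"><img src="chart.png" alt="Sales chart"></p>'
auditor = AltTextAuditor()
auditor.feed(page)
print(auditor.missing_alt, "of", auditor.total_imgs, "images lack alt text")
# → 1 of 2 images lack alt text
```

As the project found, such automated checks are necessary but not sufficient; user testing with disabled users catches problems no parser can.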

    Uniform: The Form Validation Language

    Digital forms are becoming increasingly prevalent, but creating them is not getting any easier. Web forms are difficult to produce and validate, and this design project seeks to simplify the process. The project comprises two parts: a logical programming language (Uniform) and a web application. Uniform is a language that lets its users define logical relationships between web elements and apply simple rules to individual inputs, both to validate the form and to manipulate its components depending on user input. Uniform provides an extra layer of abstraction over complex coding. The web app implements Uniform to provide business-level programmers with an interface for building and managing forms. Users can create form templates, manage form instances, and cooperatively complete forms through the web app. Uniform's development is ongoing; it will receive continued support and is available as open source. The web application is software owned and maintained by HP Inc. and will be developed further before going to market.
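Uniform's concrete syntax is not shown in the abstract, but its core idea, per-input rules combined with cross-field relationships, can be sketched in plain Python (every rule, field name, and predicate here is a hypothetical illustration, not Uniform's actual semantics):

```python
# Per-input rules: each maps a field name to a validity predicate.
rules = {
    "age":   lambda v: v.isdigit() and 0 < int(v) < 130,
    "email": lambda v: "@" in v and "." in v.split("@")[-1],
}

# Cross-field relation: a shipping address is required only when
# the order is not a digital download.
relations = [
    lambda form: form["digital"] == "yes" or bool(form["address"]),
]

def validate(form):
    """Return the names of failed rules and relations (empty = valid)."""
    errors = [f for f, ok in rules.items() if not ok(form.get(f, ""))]
    errors += [f"relation {i}" for i, rel in enumerate(relations)
               if not rel(form)]
    return errors

print(validate({"age": "34", "email": "a@b.com",
                "digital": "yes", "address": ""}))   # → []
print(validate({"age": "-1", "email": "nope",
                "digital": "no", "address": ""}))
# → ['age', 'email', 'relation 0']
```

The appeal of a dedicated language like Uniform is that such rules become declarations rather than hand-written control flow.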

    A Hybrid Data-Driven Web-Based UI-UX Assessment Model

    Today, a large proportion of end-user information systems have their graphical user interfaces (GUIs) built with web-based technology (JavaScript, CSS, and HTML). Such web-based systems include the Internet of Things (IoT), in-vehicle infotainment, interactive display screens (digital menu boards, information kiosks, digital signage at bus stops or airports, bank ATMs, etc.), and web applications/services on smart devices. As such, a web-based UI must be evaluated in order to improve its ability to perform the technical task for which it was designed. This study develops a framework and a process for evaluating and improving the quality of a web-based user interface (UI), both overall and at a stratified level. The framework is a conglomeration of algorithms: the multi-criteria decision-making method of the analytic hierarchy process (AHP) for coefficient generation, sentiment analysis, K-means clustering, and explainable AI (XAI).
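The abstract does not detail the pipeline, but one of its named ingredients, AHP coefficient generation, is standard enough to sketch: criterion weights are derived from a pairwise-comparison matrix, here via the common geometric-mean approximation of the principal eigenvector (the three UI criteria are hypothetical examples, not the paper's):

```python
import math

# Hypothetical pairwise comparisons of three UI quality criteria
# (learnability vs. efficiency vs. aesthetics) on Saaty's 1-9 scale.
# M[i][j] states how much more important criterion i is than criterion j.
M = [
    [1,   3,   5],
    [1/3, 1,   3],
    [1/5, 1/3, 1],
]

def ahp_weights(matrix):
    """Geometric-mean approximation of the AHP priority vector."""
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(M)
print([round(w, 3) for w in weights])   # → [0.637, 0.258, 0.105]
```

The resulting weights can then scale the per-criterion scores that sentiment analysis and clustering produce downstream.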

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
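The role of RSS in the approach described above is that the feed supplies structured anchors (titles, links) for each post. A minimal sketch of that first step, using Python's standard XML parser on an inline RSS 2.0 fragment (a stand-in feed, not BlogForever's actual pipeline):

```python
import xml.etree.ElementTree as ET

# A tiny inline RSS 2.0 feed standing in for a blog's real feed;
# in practice the feed would be fetched over HTTP.
rss = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Post one</title><link>http://example.org/1</link></item>
  <item><title>Post two</title><link>http://example.org/2</link></item>
</channel></rss>"""

def extract_items(feed_xml):
    """Pull (title, link) pairs from an RSS feed -- the structured
    anchors that can then guide extraction from the HTML pages."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(extract_items(rss))
# → [('Post one', 'http://example.org/1'), ('Post two', 'http://example.org/2')]
```

The unsupervised step the report describes would then compare each linked HTML page against these known titles to locate the post content.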

    Website Personalization Based on Demographic Data

    This study focuses on website personalization based on users' demographic data. The main demographic data used in this study are age, gender, race, and occupation. These data are obtained through a user-profiling technique conducted during the study. The gathered data are analyzed to find the relationship between users' demographic data and their preferences for a website design, and are then used as a guideline for developing a website that fulfils visitors' needs. The topic chosen was obesity. HCI issues, specifically effectiveness and satisfaction, are considered among the important factors in this study. The methodologies used are the website personalization process, the incremental model, a combination of these two methods, and Cascading Style Sheets (CSS), which are discussed in detail in Chapter 3. After that, we discuss the effectiveness and evaluation of the personalized website that has been built. Finally, we present the results of the respondents' evaluation of the websites.
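The mechanism implied above, mapping a demographic profile to a CSS presentation, can be sketched very simply (the profile attributes, thresholds, and stylesheet names are all hypothetical illustrations, not the study's actual mapping):

```python
# Hypothetical mapping from profiled demographics to a stylesheet,
# sketching the CSS-based personalization step the study describes.
STYLESHEETS = {
    "senior":  "large-text.css",   # bigger fonts, higher contrast
    "child":   "playful.css",      # brighter palette, simpler layout
    "default": "standard.css",
}

def pick_stylesheet(profile):
    """Choose a CSS file from a user's profiled demographic data."""
    age = profile.get("age", 0)
    if age >= 60:
        return STYLESHEETS["senior"]
    if age <= 12:
        return STYLESHEETS["child"]
    return STYLESHEETS["default"]

print(pick_stylesheet({"age": 67, "gender": "F", "occupation": "retired"}))
# → large-text.css
```

Keeping the rules in CSS means the same HTML content serves every group; only the presentation layer is swapped per profile.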

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather the large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media, and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential of cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques, originally designed to work in a given domain, in other domains.