
    Measuring University Web Site Quality: A Development of a User-Perceived Instrument and its Initial Implementation to Web sites of Accounting Departments in New Zealand's Universities

    The emergent popularity of Web technologies and their applications has created vast opportunities for organisations, including institutions of higher education, to reach broader customer bases and build greater networking relationships. The global and far-reaching nature of the Web, its various interactive capabilities, and the rapid growth of Web use worldwide have made university Web sites increasingly essential for promotional and commercial purposes. However, it has been acknowledged that a well-designed Web site is needed in order to gain the benefits of Web utilisation. Previous studies on the quality of Web sites are not lacking, but most have focussed mainly on business Web sites; empirical research on the Web site quality of institutions of higher education has been scarce. In this study, an instrument for measuring university Web site quality was developed and validated by taking into account both the perspectives of the users and the importance of informational content. The instrument was subsequently put to the test by using it to measure and rank the quality of the Web sites of Accounting Departments in New Zealand's universities. The results from this initial application substantiated the validity and reliability of the instrument.
    University Web sites, Web site quality, Instrument development, Accounting Department Web sites ranking

    MEASURING QUALITY OF THE SERVICES PROVIDED BY THE COMMERCIAL WEB SITES

    The increasingly systematic use of the Internet in consumers' decision processes leads vendors to turn ever more frequently to the advantages of this instrument. A site must, above all, be capable of meeting the expectations of the consumers it addresses. Researchers have identified a number of criteria that consumers consider when evaluating Web sites in general, and especially the quality of the services they provide.
    Internet, electronic services quality, online shopping, electronic commerce

    Using the Global Web as an Expertise Evidence Source

    This paper describes the details of our participation in the expert search task of the TREC 2007 Enterprise track. The presented study demonstrates the predictive potential of expertise evidence that can be found outside the organization. We discovered that combining the ranking built solely on the Enterprise data with the Global Web-based ranking may produce significant increases in performance. However, our main goal was to explore whether this result can be further improved by using various quality measures to distinguish among web result items. While it was indeed beneficial to use some of these measures, especially those measuring the relevance of URL strings and titles, it remained unclear whether they are decisively important.

    Measuring E-Commerce Web Site Quality: An Empirical Examination


    Empirical Methodology for Crowdsourcing Ground Truth

    The process of gathering ground truth data through human annotation is a major bottleneck in the use of information extraction methods for populating the Semantic Web. Crowdsourcing-based approaches are gaining popularity in the attempt to solve the issues related to the volume of data and the lack of annotators. Typically these practices use inter-annotator agreement as a measure of quality. However, in many domains, such as event detection, there is ambiguity in the data, as well as a multitude of perspectives on the information examples. We present an empirically derived methodology for efficiently gathering ground truth data in a diverse set of use cases covering a variety of domains and annotation tasks. Central to our approach is the use of CrowdTruth metrics that capture inter-annotator disagreement. We show that measuring disagreement is essential for acquiring a high-quality ground truth. We achieve this by comparing the quality of data aggregated with CrowdTruth metrics against majority vote, over a set of diverse crowdsourcing tasks: Medical Relation Extraction, Twitter Event Identification, News Event Extraction, and Sound Interpretation. We also show that an increased number of crowd workers leads to growth and stabilization in the quality of annotations, going against the usual practice of employing a small number of annotators.
    Comment: in publication at the Semantic Web Journal
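    The contrast the abstract draws between majority vote and disagreement-aware aggregation can be illustrated with a minimal sketch (this is an illustrative simplification, not the actual CrowdTruth metrics): majority vote collapses each item to a single winning label, while keeping a per-label score vector preserves how contested the item was.

    ```python
    from collections import Counter

    def majority_vote(labels):
        # Collapse to the single most frequent label, discarding
        # any information about how contested the item was.
        return Counter(labels).most_common(1)[0][0]

    def label_scores(labels):
        # Disagreement-preserving alternative (simplified sketch):
        # score each label by the fraction of workers who chose it.
        counts = Counter(labels)
        total = len(labels)
        return {label: n / total for label, n in counts.items()}

    # An ambiguous item: majority vote reports "event" with no hint
    # that a third of the workers disagreed; the score vector keeps it.
    ann = ["event", "event", "event", "no_event", "event", "no_event"]
    print(majority_vote(ann))   # majority label: "event"
    print(label_scores(ann))    # event 2/3, no_event 1/3
    ```

    On genuinely ambiguous items such as event mentions, the score vector is what lets low-agreement items be down-weighted or flagged rather than silently resolved.
    
    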

    WebQual: A Web quality instrument

    There does not exist a comprehensive measure to assess the quality of a Web site. To date many companies have based Web design on trial-and-error, gut instinct, and feedback from customers. A more effective approach is to develop an instrument to measure Web site quality and to compare alternative designs. Both academic and popular trade publications have emphasized a variety of measures. Previous studies have attempted to measure the quality of a Web site via less direct measures such as the number of “hits” (Berthon, Pitt et al. 1996). The purpose of this research is: 1) to develop a multiple-item instrument for measuring Web site quality (called WEBQUAL) and 2) to report norms for some classes of Web sites.
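    How a multiple-item instrument like the one described yields a single quality figure can be sketched as follows. The dimension names, weights, and 7-point scale below are hypothetical illustrations, not WEBQUAL's actual items: respondents rate several items per dimension, item ratings are averaged within each dimension, and dimension means are combined into a weighted composite score.

    ```python
    def composite_quality(item_ratings, weights):
        """Sketch of scoring a multi-item quality instrument.

        item_ratings: dimension name -> list of per-item ratings (e.g. 1-7 Likert).
        weights: dimension name -> relative importance, summing to 1.
        """
        # Average the items within each dimension, then weight the dimensions.
        dim_means = {dim: sum(r) / len(r) for dim, r in item_ratings.items()}
        return sum(weights[dim] * dim_means[dim] for dim in dim_means)

    # Hypothetical dimensions and equal weights, for illustration only.
    ratings = {"ease_of_use": [6, 5, 6], "information_quality": [4, 5, 4, 5]}
    weights = {"ease_of_use": 0.5, "information_quality": 0.5}
    print(round(composite_quality(ratings, weights), 2))  # 5.08
    ```

    Reporting such composites for many sites in a class is one way the "norms" mentioned in the abstract could be tabulated.
    
    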

    Web service quality-based profiling and selection

    Guaranteeing quality of service has recently been labeled as one of the major research challenges in the service-oriented architecture. In effect, selecting a Web service from a set of matched services that offer the same functional requirements and make certain quality-of-service claims about themselves is not enough. A need emerges for a trusted third party that monitors Web service quality indicators, yet in a way that does not interfere with the normal operation of the Web service itself. The third party will eventually provide consumers with guarantees about Web service quality. In this research we propose a policy-based third-party system for dynamically measuring relevant QoS metrics of a Web service, maintaining these measures for subsequent look-up requests, and responding to QoS-aware service requests initiated by clients. The system relies on a QoS-based architecture and a scheduling policy.
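    The non-intrusive monitoring idea in the abstract can be sketched as an external probe: the third party invokes the service like any client and records latency and availability from the outside, without instrumenting the service itself. This is a minimal illustration, not the paper's actual policy-based system; `fake_service` is a stand-in for a real endpoint.

    ```python
    import statistics
    import time

    def probe(service, n=5):
        """External QoS probe (sketch): call the service n times as an
        ordinary client would, recording latency and success rate."""
        latencies, failures = [], 0
        for _ in range(n):
            start = time.perf_counter()
            try:
                service()
            except Exception:
                failures += 1
                continue
            latencies.append(time.perf_counter() - start)
        return {
            "avg_latency": statistics.mean(latencies) if latencies else None,
            "availability": (n - failures) / n,
        }

    # Hypothetical stand-in for a real Web service invocation.
    def fake_service():
        time.sleep(0.001)

    print(probe(fake_service))
    ```

    The measured values would then be cached and served to clients issuing QoS-aware look-up requests, which is the role the abstract assigns to the third party.
    
    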

    Measurement of User Perceived Web Quality

    Web sites are now considered an extension of the entire business, not just an additional channel, storefront, or simple information portal for the company. Creating an effective web site that gives customers and visitors a positive overall experience is important in business today. Measuring the quality of a web site from the users’ perspective gives fast and early feedback to the firm and enables it to take corrective actions and improve its operations. Several instruments and methodologies have been developed to measure web site performance, usability, and quality in the information systems, marketing, and operations management literature. This study reviews the literature on web quality measurement and employs a 25-item instrument developed by Aladwani and Palvia to measure user-perceived web quality. It attempts to test the factorial validity of the instrument in the Australian context using the Structural Equation Modelling technique. Analysis revealed that the data set does not fit Aladwani and Palvia’s model well enough.

    The approaches to quantify web application security scanners quality: A review

    A web application security scanner is a computer program that assesses web application security using penetration testing techniques. The benefits of automated web application penetration testing are substantial: a scanner not only reduces the time, cost, and resources required for penetration testing but also reduces reliance on the test engineer's human knowledge. Nevertheless, web application security scanners suffer from low test coverage and can generate inaccurate test results. Consequently, experiments are frequently conducted to quantitatively measure a scanner's quality and investigate its strengths and limitations. However, it has been found that neither a standard methodology nor standard criteria are available for quantifying a scanner's quality. Hence, in this paper a systematic review is conducted to analyse the methodologies and criteria used for quantifying the quality of web application security scanners. The experiment methodologies and criteria that have been used are classified and reviewed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The objectives are to give practitioners an understanding of the methodologies and criteria available for measuring scanners' test coverage, attack coverage, and vulnerability detection rate, while providing critical hints for the development of the next testing framework, model, methodology, or criteria for measuring web application security scanner quality.
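    The vulnerability detection rate mentioned as an evaluation criterion is typically computed against a benchmark application seeded with known vulnerabilities. A minimal sketch, with hypothetical vulnerability identifiers: the scanner's findings are compared against the seeded ground truth to obtain the detection rate (true positives over all seeded vulnerabilities) and the false-positive rate (spurious findings over all findings).

    ```python
    def scanner_metrics(known_vulns, reported):
        """Benchmark metrics for a scanner run (sketch).

        known_vulns: set of vulnerability IDs seeded into the test application.
        reported: set of vulnerability IDs the scanner reported.
        """
        tp = len(known_vulns & reported)   # seeded vulns the scanner found
        fn = len(known_vulns - reported)   # seeded vulns it missed
        fp = len(reported - known_vulns)   # findings with no seeded counterpart
        return {
            "detection_rate": tp / (tp + fn) if (tp + fn) else 0.0,
            "false_positive_rate": fp / len(reported) if reported else 0.0,
        }

    # Hypothetical benchmark: four seeded vulnerabilities, three findings.
    seeded = {"xss-1", "sqli-1", "sqli-2", "csrf-1"}
    found = {"xss-1", "sqli-1", "bogus-1"}
    print(scanner_metrics(seeded, found))  # detection 0.5, false positives 1/3
    ```

    Test coverage and attack coverage, the other criteria the review names, would be measured analogously against the set of pages or attack vectors the benchmark exposes.
    
    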