Design and implementation of Web-based GIS for forest fragmentation analysis
The advantages and limitations of current web GIS software for forest fragmentation information and analysis were investigated using Landsat Thematic Mapper data from 1987 to 1999 for a test site in northern West Virginia. ESRI's ArcIMS technology was used to build a Web-based forest fragmentation analysis system to query, represent, and analyze the status of forest fragmentation using landscape metrics. Both ArcIMS HTML and Java fragmentation analysis tools were constructed. The web GIS was evaluated with respect to accessibility, navigation, interactive cartographic functionality, and spatial analysis functionality. The current ArcIMS approach was found to offer only limited support for the spatial analysis functions required for fragmentation analysis. A variety of enhancements to the current web GIS software are recommended, including support for polygon-based spatial query, interactive representation and operation of raster data, and the integration of client-side and server-side data for spatial analysis.
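The abstract does not specify which landscape metrics were used, but a common starting point for fragmentation analysis is the number of distinct forest patches in a classified raster. The sketch below is a minimal, hypothetical illustration: it counts 4-connected patches of forest cells (value 1) in a binary grid using breadth-first search; the function name and sample data are assumptions, not the paper's implementation.

```python
from collections import deque

def count_forest_patches(grid):
    """Count 4-connected patches of forest cells (value 1) in a binary raster."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    patches = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                patches += 1            # found an unvisited patch; flood-fill it
                seen[r][c] = True
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return patches

# Toy raster: 1 = forest, 0 = non-forest
raster = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
print(count_forest_patches(raster))  # 3
```

Richer fragmentation metrics (mean patch size, edge density, and so on) build on the same patch labeling step.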
The evaluation of educational service integration in integrated virtual courses
The effectiveness of an integrated virtual course is determined by factors such as the navigability of the system. We argue that in a virtual course, which offers different educational services for different learning activities, the integration of services is a good indicator of the effectiveness of a virtual course infrastructure. We develop a set of metrics to measure the degree of integration of a virtual course, combining structural metrics with an analysis of students' usage of the system.
Benchmarking performance management systems
The Balanced Scorecard and associated performance management approaches have become widely practiced and popular management reporting methods in recent times. Moreover, enabling technology, which assists in the delivery and personalisation of corporate performance information, is having a deeper and more rapid impact than ever before. This paper presents a brief comparative benchmarking study of leading enterprise performance management systems. The author also discusses the merits of bespoke internet technology development versus out-of-the-box portal functionality. An analysis of the key business drivers and implementation risks of such approaches, illustrated via a case study example, concludes the paper.
Top 10 Law School Home Pages of 2012
For a fourth consecutive year, the website home page of every ABA-accredited law school is evaluated and ranked based on objective criteria. The goal is to identify well-executed sites adopting best practices. For the 2012 report, twenty-six elements are evaluated across three categories: Design Patterns and Metadata; Accessibility and Validation; and Marketing and Communications. For 2012, there are four new elements, two prior elements have been combined, and one element was dropped.
For 2012, forty-six schools now use the HTML5 doctype, which is up from thirteen in 2011 and just one in 2010. Eighteen schools achieve perfect scores in an adjusted web accessibility evaluation, which is a slight increase over previous years. One of the new elements awards points for use of Responsive Web Design practices, which is a page layout method that shifts the order and number of elements on a page, based on the screen size displaying the content. Our survey discovered fourteen home pages using responsive web design.
As has been the case since this annual study launched in 2009, there is still no objective way to account for good taste. In interpreting these results, please decide for yourself whether any home page is greater or less than the sum of its evaluated elements.
Combination of automatic and manual testing for web accessibility
Master's thesis, Information and Communication Technology IKT590, University of Agder, 2018.

Web accessibility is an indispensable medium for online communication and digital inclusion. With the recent adoption of the Web Accessibility Directive, making Internet resources accessible has become a legal obligation, creating a need for more detailed and reliable ways of evaluating the web accessibility of websites.

Over the years, many tools have been developed for testing web accessibility, along with a plethora of metrics intended to convey the results. Unfortunately, in most cases the findings are incomplete, since studies rely on only one testing method, i.e., automatic or manual. This study set itself the goal of contributing knowledge toward three research questions. First, how can results from automated and manual evaluation of web accessibility be combined? Second, how can the integration results be expressed quantitatively? Finally, what is the impact of dynamic content on the integration results when the content of a website is frequently updated and personalized?

This thesis proposes a novel approach to the integration of manual and automated accessibility testing, in which the results of the evaluations are combined on the basis of accessibility guidelines. Additionally, a quantitative metric, the Union Score, together with a graphical visualization called the Accessibility Pie Chart, are proposed as means for expressing the outcomes of an accessibility evaluation using the combined approach.

The research is grounded in a mixed-method approach, embedding the findings of the conducted interviews into a quantitative study. To find empirically the most suitable method for combining manual and automated testing, fifteen web pages selected from two websites were evaluated with two testing tools: the WTKollen Checker and the WTKollen User-Testing Tool.

The findings of the analysis show that WCAG 2.0 may serve as a bridge between manual and automated evaluation outcomes and result in increased coverage of the Success Criteria. The proposed metric has been preliminarily validated with regard to its application for benchmarking and has been supplemented with a graphical way of presenting accessibility testing results. Furthermore, it is concluded that the suggested integration approach can be deployed, although the challenge of dynamic content evaluation requires more research attention.

The study contributes to the current state of knowledge about web accessibility evaluation, and the results are expected to be used to implement the novel approach. For future work, a more extensive study of the proposed metric's properties is advised, and the importance of further research in the area of dynamic content evaluation is highlighted.
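The abstract does not give a formal definition of the Union Score. The sketch below is a hypothetical reading of the idea: combining automated and manual findings per WCAG Success Criterion and scoring the fraction of criteria covered by at least one method. The function name, data, and scoring formula are assumptions for illustration, not the thesis's actual metric.

```python
def union_score(automated, manual, all_criteria):
    """Hypothetical Union Score: the fraction of WCAG Success Criteria
    covered by at least one of the two evaluation methods."""
    covered = (set(automated) | set(manual)) & set(all_criteria)
    return len(covered) / len(all_criteria)

# Toy example: six Success Criteria under test
criteria = ["1.1.1", "1.3.1", "1.4.3", "2.4.4", "3.1.1", "4.1.2"]
auto_hits = ["1.1.1", "4.1.2"]              # covered by the automated checker
manual_hits = ["1.3.1", "2.4.4", "4.1.2"]   # covered by user testing

print(round(union_score(auto_hits, manual_hits, criteria), 3))  # 0.667
```

The union captures the thesis's central observation: each method alone covers fewer criteria (2/6 and 3/6 here) than their combination (4/6).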
Top 10 Law School Home Pages of 2011
For the third consecutive year, the website home pages for all ABA-accredited law schools are evaluated and ranked based on objective criteria. For 2011, law school home pages advanced in some areas. For instance, there are now thirteen sites using the HTML5 doctype, up from a single site in 2010. In addition, seventeen schools achieved a perfect score for three tests focused on website accessibility, up from eight in 2010. Nonetheless, there’s enough diversity in coding practices and content to help separate the great from the good.
For this year’s survey, twenty-four elements of each home page are assessed across three broad categories: Design Patterns & Metadata; Accessibility & Validation; and Marketing & Communications. Most elements require no special design skills, sophisticated technology, or significant expense. In interpreting these results, the author does not try to decide whether any whole is greater or less than the sum of its parts.
Challenges to describe QoS requirements for web services quality prediction to support web services interoperability in electronic commerce
Quality of service (QoS) is significant and necessary for quality assurance of web service applications. Furthermore, web services quality has contributed to the successful implementation of Electronic Commerce (EC) applications. However, QoS remains a major issue for web services research and one of the main research questions that need to be explored. We believe that QoS should not only be measured but should also be predicted during the development and implementation stages. However, there are challenges and constraints in determining and choosing QoS requirements for high-quality web services. This paper therefore highlights the challenges of QoS requirements prediction, as such requirements are not easy to identify; moreover, web services have many different perspectives and purposes, and various prediction techniques exist for describing QoS requirements. Additionally, the paper introduces a metamodel as a concept of what makes a good web service.
Measuring and comparing the reliability of the structured walkthrough evaluation method with novices and experts
Effective evaluation of websites for accessibility remains problematic. Automated evaluation tools still require a significant manual element. There is also a significant expertise and evaluator effect. The Structured Walkthrough method is the translation of a manual, expert accessibility evaluation process adapted for use by novices. The method is embedded in the Accessibility Evaluation Assistant (AEA), a web accessibility knowledge management tool. Previous trials examined the pedagogical potential of the tool when incorporated into an undergraduate computing curriculum. The results of the evaluations carried out by novices yielded promising, consistent levels of validity and reliability. This paper presents the results of an empirical study that compares the reliability of accessibility evaluations produced by two groups (novices and experts). The main results of this study indicate that overall reliability of expert evaluations was 76% compared to 65% for evaluations produced by novices. The potential of the Structured Walkthrough method as a useful and viable tool for expert evaluators is also examined. Copyright 2014 ACM
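The abstract reports reliability figures (76% for experts, 65% for novices) without stating the computation. A simple way such figures can arise is percent agreement between evaluators over a shared set of checkpoints; the sketch below illustrates that idea only, and the function name and verdict data are assumptions, not the study's actual method.

```python
def percent_agreement(eval_a, eval_b):
    """Share of checkpoints on which two evaluators gave the same verdict."""
    if len(eval_a) != len(eval_b):
        raise ValueError("evaluations must cover the same checkpoints")
    matches = sum(a == b for a, b in zip(eval_a, eval_b))
    return matches / len(eval_a)

# Toy verdicts from two evaluators over four accessibility checkpoints
evaluator_1 = ["pass", "fail", "pass", "pass"]
evaluator_2 = ["pass", "fail", "fail", "pass"]
print(percent_agreement(evaluator_1, evaluator_2))  # 0.75
```

In practice, chance-corrected statistics such as Cohen's kappa are often preferred over raw agreement, since some agreement occurs by chance alone.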
FAIRness and Usability for Open-access Omics Data Systems
Omics data sharing is crucial to the biological research community, and the last decade or two has seen a huge rise in collaborative analysis systems, databases, and knowledge bases for omics and other systems biology data. We assessed the FAIRness of NASA's GeneLab Data Systems (GLDS) along with four similar systems in the research omics data domain, using 14 FAIRness metrics. The range of overall FAIRness scores was 6-12 (out of 14), with an average of 10.1 and a standard deviation of 2.4. The range of Pass ratings for the metrics was 29-79%, Partial Pass 0-21%, and Fail 7-50%. The systems we evaluated performed best in the areas of data findability and accessibility, and worst in the area of data interoperability. Reusability of metadata, in particular, was frequently not well supported. We relate our experiences implementing semantic integration of omics data from some of the assessed systems for federated querying and retrieval, given their shortcomings in data interoperability. Finally, we propose two new principles that Big Data system developers, in particular, should consider for maximizing data accessibility.
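The abstract reports per-metric Pass / Partial Pass / Fail ratings rolled up into an overall score out of 14, but does not state the weighting. The sketch below assumes one plausible scheme (Pass = 1, Partial Pass = 0.5, Fail = 0); the weights, function name, and sample ratings are illustrative assumptions, not the paper's published rubric.

```python
# Assumed weighting; the paper's exact rubric is not given in the abstract.
WEIGHTS = {"Pass": 1.0, "Partial Pass": 0.5, "Fail": 0.0}

def fairness_score(ratings):
    """Aggregate per-metric FAIRness ratings into a single overall score."""
    return sum(WEIGHTS[r] for r in ratings)

# Hypothetical ratings for one system across the 14 FAIRness metrics
ratings = ["Pass"] * 9 + ["Partial Pass"] * 2 + ["Fail"] * 3
print(fairness_score(ratings))  # 10.0 (out of 14)
```

Under this weighting, a system's score falls between 0 and 14, matching the reported 6-12 range and the 10.1 average.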