17,098 research outputs found

    Energy Counselling and Modern IT. Drawing on Web 2.0 for a Greener World

    The aim of this article is to explore how modern IT solutions for collaborative knowledge evolution could lead to more effective energy counselling and increased energy knowledge among the public. Comparative studies have been performed with a focus on the prerequisites for effective use of Web 2.0-type collaboration and wikis. The research is primarily aimed at actors within the energy sector, although similar developments also take place in other sectors. Targeted investments employing collaborative IT to involve the public in energy counselling could lead to lower energy consumption and an increased consciousness of environmental issues in society. A conclusion is that Web 2.0-like initiatives could play a valuable role in knowledge development and exchange between energy counsellors, and further the knowledge exchange between the counsellors, the regional energy agencies and the public. They could also help channel public interest in energy into collaborative knowledge production and contribute to a sound factual basis for the conceptions that develop in society. This would strengthen both the energy counselling and the energy counsellor corps.
    Keywords: communities, sustainability, sector transcendence, energy counselling, web 2.0.

    Detecting Deceptive Dark-Pattern Web Advertisements for Blind Screen-Reader Users

    Advertisements have become commonplace on modern websites. While ads are typically designed for visual consumption, it is unclear how they affect blind users who interact with the ads using a screen reader. Existing research studies on non-visual web interaction predominantly focus on general web browsing; the specific impact of extraneous ad content on blind users' experience remains largely unexplored. To fill this gap, we conducted an interview study with 18 blind participants; we found that blind users are often deceived by ads that contextually blend in with the surrounding web page content. While ad blockers can address this problem via a blanket filtering operation, many websites are increasingly denying access if an ad blocker is active. Moreover, ad blockers often do not filter out internal ads injected by the websites themselves. Therefore, we devised an algorithm to automatically identify contextually deceptive ads on a web page. Specifically, we built a detection model that leverages a multi-modal combination of handcrafted and automatically extracted features to determine if a particular ad is contextually deceptive. Evaluations of the model on a representative test dataset and 'in-the-wild' random websites yielded F1 scores of 0.86 and 0.88, respectively.
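
    The abstract does not specify the model's features or classifier, so the following is only a minimal sketch of the general idea: combine a text representation of the ad with handcrafted cues and train a binary classifier. The toy data, the three handcrafted cues (context similarity, presence of an "Ad" label, DOM depth), the TF-IDF representation and the scikit-learn logistic regression are all illustrative assumptions, not the authors' detection model.

```python
# Hedged sketch of a multi-modal deceptive-ad classifier (not the paper's model).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy ad texts; label 1 = contextually deceptive, 0 = clearly marked ad.
ads_text = [
    "Download now to continue reading this article",
    "Sponsored: best deals on laptops today",
    "Related stories from around the web",
    "Advertisement - new credit card offer",
] * 10
# Hypothetical handcrafted cues: [context similarity, has "Ad" label, DOM depth].
handcrafted = np.array([
    [0.9, 0, 5],
    [0.2, 1, 2],
    [0.8, 0, 4],
    [0.1, 1, 2],
] * 10, dtype=float)
labels = np.array([1, 0, 1, 0] * 10)

idx = np.arange(len(labels))
train_idx, test_idx = train_test_split(idx, test_size=0.25, random_state=0, stratify=labels)

# Fit the text vectorizer on training ads only, then fuse text and handcrafted features.
vec = TfidfVectorizer().fit([ads_text[i] for i in train_idx])
X_train = hstack([vec.transform([ads_text[i] for i in train_idx]), csr_matrix(handcrafted[train_idx])])
X_test = hstack([vec.transform([ads_text[i] for i in test_idx]), csr_matrix(handcrafted[test_idx])])

clf = LogisticRegression(max_iter=1000).fit(X_train, labels[train_idx])
print("F1:", round(f1_score(labels[test_idx], clf.predict(X_test)), 2))
```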

    Structured Review of Code Clone Literature

    This report presents the results of a structured review of code clone literature. The aim of the review is to assemble a conceptual model of clone-related concepts which helps us to reason about clones. This conceptual model unifies clone concepts from a wide range of literature, so that findings about clones can be compared with each other.

    Access to Online Databases: Predicate for Faculty Research Output

    The study examined the role of access to online databases as the basis for faculty research output in six universities (comprising two each of federal, state and private institutions) in two Southwestern states in Nigeria. A descriptive research design guided the study. Multistage sampling procedures, including purposive, stratified, random and proportionate sampling techniques, were employed to select the 339 faculty members who provided the data for the study. The data were collected using a structured questionnaire. Of the 339 copies of the questionnaire administered, 89 per cent were retrieved fully completed and found usable. The research questions that guided the study were analyzed using inferential statistics. Findings revealed that HINARI, ProQuest, JSTOR, and EBSCOhost were the most regularly accessible online databases. Erratic power supply and the lack of downloadable full text posed the greatest threats to online database access. Similarly, the study found that the provision of the full text of the most relevant research materials, steady power supply and the acquisition of information literacy skills were the most effective ways of addressing online database access constraints. Accordingly, the study recommended adequate funding of university libraries, provision of alternative means of power generation and increased user education for maximum exploitation of subscribed databases.
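
    As a rough illustration of the proportionate allocation step in a multistage design like the one described above, the sketch below draws a sample of 339 across strata. Only the total of 339 comes from the abstract; the strata and headcounts per university type are invented for illustration.

```python
# Hedged sketch of proportionate stratified allocation (illustrative numbers only).
import random

random.seed(42)

# Hypothetical faculty headcounts per university type (stratum).
strata = {"federal": 1200, "state": 900, "private": 400}
total_sample = 339

population = sum(strata.values())
# Allocate the sample to each stratum in proportion to its size.
allocation = {k: round(total_sample * n / population) for k, n in strata.items()}

for stratum, n_draw in allocation.items():
    # Simple random draw within each stratum.
    members = [f"{stratum}_{i}" for i in range(strata[stratum])]
    sample = random.sample(members, n_draw)
    print(stratum, n_draw, sample[:3])
```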

    Chapter 9: Quality Assurance

    The OTiS (Online Teaching in Scotland) programme, run under the now defunct Scotcit programme, held an International e-Workshop on Developing Online Tutoring Skills from 8 to 12 May 2000. It was organised by Heriot-Watt University, Edinburgh, and The Robert Gordon University, Aberdeen, UK. Out of this workshop came the seminal Online Tutoring E-Book, a generic primer on e-learning pedagogy and methodology, full of practical implementation guidelines. Although the Scotcit programme ended some years ago, the E-Book has been copied to the SONET site as a series of PDF files, which are now available via the ALT Open Access Repository. The editor, Carol Higgison, is currently working in e-learning at the University of Bradford (see her staff profile) and is the Chair of the Association for Learning Technology (ALT).

    Quality measures for ETL processes: from goals to implementation

    Extraction, transformation, loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and there is a need for a more human-centric approach to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
    Peer reviewed. Postprint (author's final draft).
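
    To give a rough sense of how quantitative indicators can support the comparison of alternative ETL designs, the sketch below scores two invented design options against weighted quality indicators. The characteristics, weights, candidate designs and numbers are assumptions for illustration and do not reproduce the paper's quality model or goal-modeling notation.

```python
# Hedged sketch: scoring alternative ETL designs with quantitative quality
# indicators (illustrative only; not the paper's model).
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    weight: float           # relative importance of the quality characteristic
    higher_is_better: bool

INDICATORS = [
    Indicator("freshness_minutes", 0.4, higher_is_better=False),
    Indicator("record_error_rate", 0.4, higher_is_better=False),
    Indicator("throughput_rows_per_s", 0.2, higher_is_better=True),
]

# Two hypothetical design alternatives with measured (made-up) indicator values.
designs = {
    "batch_nightly":   {"freshness_minutes": 720, "record_error_rate": 0.01, "throughput_rows_per_s": 50000},
    "micro_batch_15m": {"freshness_minutes": 15,  "record_error_rate": 0.02, "throughput_rows_per_s": 20000},
}

def score(metrics: dict) -> float:
    """Weighted sum of min-max normalised indicator values (simple illustrative scoring)."""
    total = 0.0
    for ind in INDICATORS:
        values = [d[ind.name] for d in designs.values()]
        lo, hi = min(values), max(values)
        norm = 0.5 if hi == lo else (metrics[ind.name] - lo) / (hi - lo)
        total += ind.weight * (norm if ind.higher_is_better else 1.0 - norm)
    return total

for name, metrics in designs.items():
    print(name, round(score(metrics), 3))
```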

    Evolution of statistical analysis in empirical software engineering research: Current state and steps forward

    Software engineering research is evolving and papers are increasingly based on empirical data from a multitude of sources, using statistical tests to determine if and to what degree empirical evidence supports their hypotheses. To investigate the practices and trends of statistical analysis in empirical software engineering (ESE), this paper presents a review of a large pool of papers from top-ranked software engineering journals. First, we manually reviewed 161 papers; in the second phase of our method, we conducted a more extensive semi-automatic classification of 5,196 papers spanning the years 2001–2015. Results from both review steps were used to: i) identify and analyze the predominant practices in ESE (e.g., using t-test or ANOVA), as well as relevant trends in the usage of specific statistical methods (e.g., nonparametric tests and effect size measures), and ii) develop a conceptual model for a statistical analysis workflow with suggestions on how to apply different statistical methods as well as guidelines to avoid pitfalls. Lastly, we confirm existing claims that current ESE practices lack a standard for reporting the practical significance of results. We illustrate how practical significance can be discussed in terms of both the statistical analysis and the practitioner's context.
    Comment: journal submission, 34 pages, 8 figures.
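
    One common way to report practical significance alongside a hypothesis test, in the spirit of the workflow this abstract describes, is to pair a nonparametric test with an effect-size measure. The sketch below uses synthetic data, a Mann-Whitney U test and Cliff's delta; it is an illustration of the general practice, not the paper's analysis pipeline.

```python
# Hedged sketch: significance test plus effect size on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical defect counts observed under two development practices.
group_a = rng.poisson(4.0, size=40)
group_b = rng.poisson(5.5, size=40)

# Nonparametric test (no normality assumption).
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Cliff's delta as a nonparametric effect size: P(a > b) - P(a < b).
greater = sum((a > b) for a in group_a for b in group_b)
less = sum((a < b) for a in group_a for b in group_b)
cliffs_delta = (greater - less) / (len(group_a) * len(group_b))

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
print(f"Cliff's delta = {cliffs_delta:.2f}")
```

    Reporting the effect size next to the p-value lets a practitioner judge whether a statistically significant difference is large enough to matter in their context.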

    2023 Projects Day Booklet

    https://scholarworks.seattleu.edu/projects-day/1002/thumbnail.jp