22 research outputs found

    Industry attitudes and behaviour towards web accessibility in general and age-related change in particular and the validation of a virtual third-age simulator for web accessibility training for students and professionals

    Get PDF
    While the need for web accessibility for people with disabilities is widely accepted, the accessibility needs of older adults do not enjoy the same visibility. This research initially explored how developers presented accessibility on their websites and their own accessibility practices: the presentation of accessibility statements, the mention of accessibility as a selling point to potential clients, and the homepage accessibility of company websites. From this starting point, the research focused on web accessibility for ageing in particular. A questionnaire was developed to explore the differences between developer views of general accessibility and of accessibility for older people. The findings indicated that a majority of developers do not see ageing as an accessibility issue. Awareness of ageing-related accessibility documentation was also very low, highlighting the need to raise awareness of accessibility practices for ageing. Current age-related documentation developed by the Web Accessibility Initiative was then examined and critiqued. The findings show a tension between the machine-centric Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the needs of older people. Examination of the guidelines against research-derived findings reveals that the Assistive Technology (AT) centric structure of the documentation does not highlight accessibility practices in a context that matches the observed behaviour of older people. The documentation also fails to address the psycho-social ramifications of how older people choose to interact with technology and how they identify themselves in relation to any conditions they have which may be considered disabling. The need for a novel, engaging and awareness-raising tool led to the development of what is essentially a "Virtual third-age simulator". This ageing simulator is the first to combine multiple impairments in an active simulation, and it uses eye-tracking technology to increase the fidelity of conditions resulting in partial sightedness. It also allows developers to view their own web content in addition to the lessons provided within the software's simulations. The simulator was then validated in terms of its ability to raise awareness and its ability to affect web industry professionals' intentions towards accessible practices that benefit older people.
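
    As a rough illustration of the kind of impairment simulation the abstract describes (not the thesis software itself), the sketch below degrades a web-page screenshot around a gaze point, as an eye tracker might report it, to mimic a central scotoma. The file name, gaze coordinate and blur parameters are invented for the example.

```python
# Illustrative sketch only, not the thesis software: degrade a web-page
# screenshot around a gaze point (as an eye tracker might report it) to mimic
# a central scotoma. File name, gaze coordinate and blur radius are invented.
from PIL import Image, ImageDraw, ImageFilter

def simulate_central_scotoma(screenshot_path, gaze_xy, radius=120):
    """Blur a circular region centred on the current gaze point."""
    page = Image.open(screenshot_path).convert("RGB")
    blurred = page.filter(ImageFilter.GaussianBlur(radius=12))

    # Mask that is opaque only inside the circle around the gaze point.
    mask = Image.new("L", page.size, 0)
    draw = ImageDraw.Draw(mask)
    x, y = gaze_xy
    draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=255)

    # Paste the blurred layer through the mask so only central vision degrades.
    page.paste(blurred, (0, 0), mask)
    return page

# Example (hypothetical file and fixation point):
# simulate_central_scotoma("homepage.png", gaze_xy=(640, 360)).show()
```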

    Combating Robocalls to Enhance Trust in Converged Telephony

    Get PDF
    Telephone scams are on the rise, and without effective countermeasures there is no stopping them. The number of scam and spam calls people receive is increasing every day. YouMail estimates that June 2021 saw 4.4 billion robocalls in the United States, and the Federal Trade Commission (FTC) phone complaint portal receives millions of complaints about such fraudulent and unwanted calls each year. Voice scams have become such a serious problem that people often no longer pick up calls from unknown callers. In several widely reported scams, the telephony channel is either used directly to reach potential victims or used to monetize scams that are advertised online, as in the case of tech support scams. The vision of this research is to bring trust back to the telephony channel. We believe this can be done by stopping unwanted and fraudulent calls and by leveraging smartphones to offer a novel interaction model that can help enhance trust in voice interactions. Our research therefore explores defenses against unwanted calls that include blacklisting known fraudulent callers, detecting robocalls in the presence of caller ID spoofing, and proposing a novel virtual assistant that can stop more sophisticated robocalls without user intervention. We first explore phone blacklists that stop unwanted calls based on the caller ID received when a call arrives. We study how to automatically build blacklists from multiple data sources and evaluate their effectiveness in stopping current robocalls. We also use the insights gained from this process to improve detection of more sophisticated robocalls and to make our defense system more robust against malicious callers who can use techniques like caller ID spoofing. To address the threat model in which the caller ID is spoofed, we introduce the notion of a virtual assistant. To this end, we developed a smartphone-based app named RobocallGuard, which can pick up calls from unknown callers on behalf of the user and detect and filter out unwanted calls. We conducted a user study showing that users are comfortable with a virtual assistant stopping unwanted calls on their behalf; moreover, most users reported that such a virtual assistant is beneficial to them. Finally, we expand our threat model and introduce RobocallGuardPlus, which can effectively block targeted robocalls. RobocallGuardPlus also picks up calls from unknown callers on behalf of the callee and engages in a natural conversation with the caller, using a combination of NLP-based machine learning models to determine whether the caller is a human or a robocaller. To the best of our knowledge, we are the first to develop such a defense system that can interact with the caller and detect robocalls even when robocallers use caller ID spoofing and voice activity detection to bypass the defense mechanism. Our security analysis shows that such a system is capable of stopping the more sophisticated robocallers that might emerge in the near future. Through these contributions, we believe we can bring trust back to the telephony channel and provide a better call experience for everyone.
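
    As a rough illustration of the first defence described above (not the RobocallGuard implementation), the sketch below normalises incoming caller IDs and checks them against a blacklist aggregated from multiple complaint feeds; the feed names and phone numbers are invented for the example.

```python
# Illustrative sketch only, not the RobocallGuard implementation: normalise
# incoming caller IDs and check them against a blacklist aggregated from
# several complaint feeds. Feed names and numbers below are invented.
import re

def normalize(number: str) -> str:
    """Keep digits only (last 10, US-style) so formatting differences don't matter."""
    return re.sub(r"\D", "", number)[-10:]

def build_blacklist(*feeds):
    """Union the caller IDs reported by multiple complaint data sources."""
    return {normalize(n) for feed in feeds for n in feed}

def should_block(caller_id: str, blacklist: set) -> bool:
    return normalize(caller_id) in blacklist

# Hypothetical feeds standing in for complaint portals and honeypot observations.
ftc_feed = ["(404) 555-0100", "404-555-0199"]
honeypot_feed = ["+1 404 555 0100"]

blacklist = build_blacklist(ftc_feed, honeypot_feed)
print(should_block("404.555.0100", blacklist))   # True: matches a reported number
print(should_block("678-555-0123", blacklist))   # False: unknown caller
```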

    Application of Common Sense Computing for the Development of a Novel Knowledge-Based Opinion Mining Engine

    Get PDF
    The ways people express their opinions and sentiments have radically changed in the past few years thanks to the advent of social networks, web communities, blogs, wikis and other online collaborative media. The distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organisation. These online social data, however, remain hardly accessible to computers, as they are specifically meant for human consumption. The automatic analysis of online opinions, in fact, involves a deep understanding of natural language text by machines, from which we are still very far. Hitherto, online information retrieval has been mainly based on algorithms relying on the textual representation of web pages. Such algorithms are very good at retrieving texts, splitting them into parts, checking the spelling and counting their words. But when it comes to interpreting sentences and extracting meaningful information, their capabilities are known to be very limited. Existing approaches to opinion mining and sentiment analysis, in particular, can be grouped into three main categories: keyword spotting, in which text is classified into categories based on the presence of fairly unambiguous affect words; lexical affinity, which assigns arbitrary words a probabilistic affinity for a particular emotion; and statistical methods, which calculate the valence of affective keywords and word co-occurrence frequencies on the basis of a large training corpus. Early works aimed to classify entire documents as containing overall positive or negative polarity, or to predict the rating scores of reviews. Such systems were mainly based on supervised approaches relying on manually labelled samples, such as movie or product reviews where the opinionist’s overall positive or negative attitude was explicitly indicated. However, opinions and sentiments do not occur only at document level, nor are they limited to a single valence or target. Contrary or complementary attitudes toward the same topic or multiple topics can be present across the span of a document. In more recent works, text analysis granularity has been taken down to segment and sentence level, e.g., by using the presence of opinion-bearing lexical items (single words or n-grams) to detect subjective sentences, or by exploiting association rule mining for a feature-based analysis of product reviews. These approaches, however, are still far from being able to infer the cognitive and affective information associated with natural language, as they mainly rely on knowledge bases that are still too limited to efficiently process text at sentence level. In this thesis, common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions conveyed by these. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques on two common sense knowledge bases was exploited to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data.
The engine was tested on three different resources, namely a Twitter hashtag repository, a LiveJournal database and a PatientOpinion dataset, and its performance was compared both with results obtained using standard sentiment analysis techniques and with results obtained using different state-of-the-art knowledge bases such as Princeton’s WordNet, MIT’s ConceptNet and Microsoft’s Probase. Unlike most currently available opinion mining services, the developed engine does not base its analysis on a limited set of affect words and their co-occurrence frequencies, but rather on common sense concepts and the cognitive and affective valence conveyed by these. This allows the engine to be domain-independent and, hence, to be embedded in any opinion mining system for the development of intelligent applications in multiple fields such as the Social Web, HCI and e-health. Looking ahead, the combined novel use of different knowledge bases and of common sense reasoning techniques for opinion mining proposed in this work will eventually pave the way for the development of more bio-inspired approaches to the design of natural language processing systems capable of handling knowledge, retrieving it when necessary, making analogies and learning from experience.
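
    As a toy illustration of the concept-level idea described above (not the sentic computing engine itself), the sketch below splits an opinion into clauses, matches multi-word concepts rather than single affect words, and averages their valences. The concept-polarity table is invented for the example; the real engine draws on common sense knowledge bases.

```python
# Toy sketch of the concept-level idea, not the sentic computing engine: split
# an opinion into clauses, match multi-word concepts rather than single affect
# words, and average their valences. The concept-polarity table is invented;
# the real engine draws on common sense knowledge bases such as ConceptNet.
import re

CONCEPT_POLARITY = {          # hypothetical concept-level valences in [-1, 1]
    "slow room service": -0.7,
    "friendly staff": 0.8,
    "clean room": 0.6,
}

def clauses(text: str):
    """Very rough clause segmentation on punctuation and a coordinating 'but'."""
    return [c.strip() for c in re.split(r"[,;.]| but ", text.lower()) if c.strip()]

def clause_polarity(clause: str) -> float:
    """Score a clause by the concepts it mentions, not by individual words."""
    hits = [v for concept, v in CONCEPT_POLARITY.items() if concept in clause]
    return sum(hits) / len(hits) if hits else 0.0

review = "The friendly staff gave us a clean room, but slow room service ruined the evening."
for c in clauses(review):
    print(f"{c!r:50} -> {clause_polarity(c):+.2f}")
```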

    How WEIRD is Usable Privacy and Security Research? (Extended Version)

    Full text link
    In human factors fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries. This WEIRD skew may hinder understanding of diverse populations and their cultural differences. The usable privacy and security (UPS) field has inherited many of its research methodologies from these human factors fields. We conducted a literature review to understand the extent to which participant samples in UPS papers were drawn from WEIRD countries, and to characterize the methodologies and research topics of each user study recruiting Western or non-Western participants. We found that the skew toward WEIRD countries in UPS is greater than that in HCI. Geographic and linguistic barriers in study and recruitment methods may cause researchers to conduct user studies locally. In addition, many papers did not report participant demographics, which could hinder replication of the reported studies and lead to low reproducibility. To improve geographic diversity, we provide suggestions that include facilitating replication studies, addressing geographic and linguistic issues in study and recruitment methods, and facilitating research on topics relevant to non-WEIRD populations. Comment: This paper is the extended version of the paper presented at USENIX SECURITY 202

    Human Factors Considerations in Designing Home-Based Video Telemedicine Systems for the Geriatric Population

    Get PDF
    Telemedicine is the process of providing healthcare services when large distances separate the patient and the doctor, with the use of communication technology. Telemedicine serves as a substitute for in-person hospital visits and largely reduces the need to travel and wait in line to visit the doctor. It is predicted to help the geriatric population in managing their healthcare requirements. For telemedicine to effectively help the older population, it is essential to understand their needs and the issues they face with telemedicine systems. A study with 40 participants was conducted to understand the usability issues of telemedicine systems for the geriatric population. Four telemedicine video platforms (Doxy.me, Polycom, Vidyo and VSee) were used to investigate these issues in a between-subjects experimental design. Participants completed a demographic survey, followed by a telemedicine session, a retrospective think-aloud discussion session to understand their issues and needs, and a post-test survey. The survey included general questions about using the system, followed by the NASA-TLX workload measure and the IBM Computer System Usability Questionnaire (IBM-CSUQ). Some of the issues identified included lengthy email invitations with multiple web links, application download and registration requirements, and problems with the icons used. A Cognitive Task Analysis (CTA) is a method for understanding the cognitive or mental demands involved in performing a task; a CTA was conducted for each platform to help identify potential cognitive issues when interacting with telemedicine systems. The proposed solutions include providing a single necessary link in the email, eliminating the need to download and register, and improving the contrast, placement and labelling of icons. As suggested by the participants, detailed step-wise instructions on navigating through a session will also be provided. Future work in this area would be to develop such a system, which, in theory, should increase the efficiency of using telemedicine systems.

    The People Inside

    Get PDF
    Our collection begins with an example of computer vision that cuts through time and bureaucratic opacity to help us meet real people from the past. Buried in thousands of files in the National Archives of Australia is evidence of the exclusionary “White Australia” policies of the nineteenth and twentieth centuries, which were intended to limit and discourage immigration by non-Europeans. Tim Sherratt and Kate Bagnall decided to see what would happen if they used a form of face-detection software made ubiquitous by modern surveillance systems and applied it to a security system of a century ago. What we get is a new way to see the government documents, not as a source of statistics but, Sherratt and Bagnall argue, as powerful evidence of the people affected by racism.
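
    For readers curious how little code such an experiment requires, the sketch below applies OpenCV's stock Haar-cascade face detector to a scanned record and crops each detected portrait. The input file name is a placeholder, and this is only an approximation of the tooling Sherratt and Bagnall used, not their actual pipeline.

```python
# Rough approximation of the kind of off-the-shelf face detection described
# above, not Sherratt and Bagnall's actual pipeline: run OpenCV's stock Haar
# cascade over a scanned record and crop each detected portrait. The input
# file name is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("naa_certificate_scan.jpg")  # placeholder path to a digitised record
if image is None:
    raise SystemExit("substitute a real scanned image before running")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Save each detected face so the person, not the paperwork, is what we see.
for i, (x, y, w, h) in enumerate(faces):
    cv2.imwrite(f"face_{i}.jpg", image[y:y + h, x:x + w])
```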

    Practical, appropriate, empirically-validated guidelines for designing educational games

    Get PDF
    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper will both focus educational games designers on the features of games that are genuinely useful for education and introduce a successful form of teaching with which this audience may not yet be familiar.