5,685 research outputs found

    I Know Why You Went to the Clinic: Risks and Realization of HTTPS Traffic Analysis

    Revelations of large-scale electronic surveillance and data mining by governments and corporations have fueled increased adoption of HTTPS. We present a traffic analysis attack against over 6000 webpages spanning the HTTPS deployments of 10 widely used, industry-leading websites in areas such as healthcare, finance, legal services, and streaming video. Our attack identifies individual pages in the same website with 89% accuracy, exposing personal details including medical conditions, financial and legal affairs, and sexual orientation. We examine evaluation methodology and reveal accuracy variations as large as 18% caused by assumptions affecting caching and cookies. We present a novel defense reducing attack accuracy to 27% with a 9% traffic increase, and demonstrate significantly increased effectiveness of prior defenses in our evaluation context, inclusive of enabled caching, user-specific cookies, and pages within the same website.
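    The page-identification step described in this abstract can be sketched as a nearest-neighbor classifier over coarse packet-size histograms. This is a minimal illustration of the general technique, not the paper's actual attack; the page paths and traces below are hypothetical.

```python
from collections import Counter

def size_histogram(trace):
    """Bucket observed TLS record sizes into coarse 64-byte bins."""
    return Counter(size // 64 for size in trace)

def distance(h1, h2):
    """L1 distance between two histograms: dissimilarity of two traces."""
    return sum(abs(h1[k] - h2[k]) for k in set(h1) | set(h2))

def classify(trace, labeled_traces):
    """Nearest-neighbor attack: label a victim trace with the training
    page whose size histogram is closest."""
    hist = size_histogram(trace)
    return min(labeled_traces,
               key=lambda page: distance(hist, size_histogram(labeled_traces[page])))

# Hypothetical training traces (record sizes in bytes) for two pages of one site.
training = {
    "/conditions/diabetes": [1500, 1500, 980, 420, 1500],
    "/billing/overview":    [600, 600, 350, 1500],
}
print(classify([1500, 1500, 990, 400, 1500], training))  # closest to the first page
```

    Even this crude classifier distinguishes the two pages, which illustrates why same-site page identification is feasible when sizes leak through HTTPS; the paper's defense works by reducing exactly this kind of size distinguishability.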

    Enhancing the Engagement of Disabled Students with Disability Support Services: A Digital Story Approach

    Initiatives that recognise diversity within the student population and understand the range of learner variation have been found to help institutions better recognise and reduce barriers to learning for disabled people. This paper describes the development of a ‘digital story’ designed to inform all students about the disability support services offered by the University of Hertfordshire. The development and utilisation of a digital story to inform students about disability-related issues and services was also designed to foster a more informed and tolerant learning community. The findings of the pilot evaluation study indicate that the digital story increased students’ understanding of what is recognised as a disability, and that it increased the likelihood of them approaching their disabled students’ coordinator. However, it is notable that a majority of students expressed a desire for an in-person talk on disability services. It is concluded that although technology is not necessarily a replacement for the ‘personal touch’, new methods should be found to increase the personal feel of the digital story. It is suggested that the final evaluation questionnaire be modified to capture information about why students want an in-person talk and how this can best be achieved in a digital format when personal engagement may not be possible. Peer reviewed.

    Type-Directed Program Transformations for the Working Functional Programmer

    We present preliminary research on Deuce+, a set of tools integrating plain text editing with structural manipulation that brings the power of expressive and extensible type-directed program transformations to everyday, working programmers without a background in computer science or mathematical theory. Deuce+ comprises three components: (i) a novel set of type-directed program transformations; (ii) support for syntax constraints for specifying "code style sheets" as a means of flexibly ensuring the consistency of both the concrete and abstract syntax of the output of program transformations; and (iii) a domain-specific language for specifying program transformations that can operate at a high level on the abstract (and/or concrete) syntax tree of a program and interface with syntax constraints to expose end-user options and alleviate tedious and potentially mutually inconsistent style choices. Currently, Deuce+ is in the design phase of development, and discovering the right usability choices for the system is of the highest priority.

    Towards a Cloud-Based Service for Maintaining and Analyzing Data About Scientific Events

    We propose the new cloud-based service OpenResearch for managing and analyzing data about scientific events such as conferences and workshops in a persistent and reliable way. This includes data about scientific articles, participants, acceptance rates, submission numbers, and impact values, as well as organizational details such as program committees, chairs, fees, and sponsors. OpenResearch is a centralized repository for scientific events and supports researchers in collecting, organizing, sharing, and disseminating information about scientific events in a structured way. An additional feature currently under development is the possibility to archive web pages along with the extracted semantic data in order to lift the burden of maintaining new and old conference websites from public research institutions. However, the main advantage is that this cloud-based repository enables a comprehensive analysis of conference data. Based on extracted semantic data, it is possible to determine quality estimations, scientific communities, and research trends, as well as the development of acceptance rates, fees, and numbers of participants in a continuous way, complemented by projections into the future. Furthermore, data about research articles can be systematically explored using a content-based analysis as well as citation linkage. All data maintained in this crowd-sourcing platform is made freely available through an open SPARQL endpoint, which allows for analytical queries in a flexible and user-defined way. Comment: A completed version of this paper has been accepted at the SAVE-SD 2017 workshop at the WWW conference.

    SUNCAT Impact and Satisfaction Survey Report


    Key skills by design: Adapting a central Web resource to departmental contexts

    Web‐based delivery of support materials for students has proved to be a popular way of helping small teams to implement key skills policies within universities. The development of ‘key’ or ‘transferable’ skills is now encouraged throughout education, but resources (both in terms of staffing and budget) tend to be limited. It is difficult for key skills teams to see learners face to face, and not feasible to print or distribute large amounts of paper‐based material. Web‐based delivery presents a means of overcoming these problems, but it can result in generic study skills material simply being published online without due consideration of the needs of different groups of learners within different subject disciplines. Therefore, although a centralized Website for skills provision can overcome logistical problems, it may be perceived as irrelevant or unusable by the student population. This paper presents a model for Web‐based delivery of support for key skills which incorporates two separate approaches to the design of these resources. The model was implemented as part of a wider key skills pilot project at University College London, over a period of one year. It includes a ‘core’ Website, containing information and resources for staff and students. These can also be accessed via customized, departmental key skills homepages. This paper presents the basis for the design choices made in preparing these materials, and the evaluation of some of the pilot departments using them. It then draws some wider conclusions about the effectiveness of this design for supporting skills development.

    Intelligent spider for Internet searching

    As World Wide Web (WWW) based Internet services become more popular, information overload also becomes a pressing research problem. Difficulties with searching on the Internet get worse as the amount of available information increases. A scalable approach to supporting Internet search is critical to the success of Internet services and other current or future national information infrastructure (NII) applications. A new approach to building an intelligent personal spider (agent), based on automatic textual analysis of Internet documents, is proposed. Best-first search and a genetic algorithm were tested in developing the intelligent spider. These personal spiders are able to dynamically and intelligently analyze the contents of the user's selected homepages as the starting point to search for the most relevant homepages based on links and indexing. An intelligent spider must have the capability to make adjustments according to the progress of the search in order to be an intelligent agent. However, current search engines do not support communication between the users and the robots. The spider presented in the paper uses Java for the user interface, such that users can adjust the control parameters according to the progress of the search and observe intermediate results. The performances of the genetic algorithm based and best-first search based spiders are also reported.
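    The best-first strategy described in this abstract can be sketched as a priority-queue crawl over a link graph, always expanding the page that scores highest against the user's query. The toy pages, links, and keyword-overlap scoring below are illustrative assumptions, not the paper's implementation.

```python
import heapq

def keyword_score(text, keywords):
    """Score a page by how many query keywords appear in its text."""
    words = set(text.lower().split())
    return sum(1 for k in keywords if k in words)

def best_first_crawl(start, pages, links, keywords, limit=5):
    """Best-first search: repeatedly expand the highest-scoring unvisited page.
    `pages` maps URL -> text and `links` maps URL -> outgoing URLs (a toy web)."""
    visited = set()
    frontier = [(-keyword_score(pages[start], keywords), start)]  # max-heap via negation
    results = []
    while frontier and len(results) < limit:
        neg_score, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        results.append((url, -neg_score))
        for nxt in links.get(url, []):
            if nxt not in visited:
                heapq.heappush(frontier, (-keyword_score(pages[nxt], keywords), nxt))
    return results

# Invented three-page web for demonstration.
pages = {
    "a": "genetic algorithm search",
    "b": "cooking recipes",
    "c": "search agents and genetic methods",
}
links = {"a": ["b", "c"], "b": [], "c": []}
print(best_first_crawl("a", pages, links, ["genetic", "search"]))
```

    Note how the crawl visits the relevant page "c" before the irrelevant "b" even though both are linked from the start page; this greedy, score-driven ordering is what distinguishes best-first search from breadth-first crawling.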

    A smart itsy bitsy spider for the Web

    Artificial Intelligence Lab, Department of MIS, University of Arizona. As part of the ongoing Illinois Digital Library Initiative project, this research proposes an intelligent agent approach to Web searching. In this experiment, we developed two Web personal spiders based on best-first search and genetic algorithm techniques, respectively. These personal spiders can dynamically take a user's selected starting homepages and search for the most closely related homepages on the Web, based on links and keyword indexing. A graphical, dynamic, Java-based interface was developed and is available for Web access. A system architecture for implementing such an agent-based spider is presented, followed by detailed discussions of benchmark testing and user evaluation results. In benchmark testing, although the genetic algorithm spider did not outperform the best-first search spider, we found the two sets of results to be comparable and complementary. In user evaluation, the genetic algorithm spider obtained a significantly higher recall value than the best-first search spider, although their precision values were not statistically different. The mutation process introduced in the genetic algorithm allows users to find other potentially relevant homepages that cannot be explored via a conventional local search process. In addition, we found the Java-based interface to be a necessary component for the design of a truly interactive and dynamic Web agent.
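    The mutation process this abstract credits with escaping local link-following can be illustrated with a toy genetic search over pages: the fitter half of a population survives each generation, and mutation occasionally jumps a survivor to a random linked page. The fitness scores, link graph, and parameters below are invented for illustration and are not the paper's spider.

```python
import random

def evolve(population, fitness, neighbors, generations=10, mutation_rate=0.3, seed=0):
    """Toy genetic search over pages: keep the fitter half each generation,
    and mutate some survivors by jumping to a random linked page, which lets
    the search escape purely local link-following."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: max(1, len(population) // 2)]
        children = []
        for url in survivors:
            child = url
            if neighbors.get(url) and rng.random() < mutation_rate:
                child = rng.choice(neighbors[url])  # mutation: random jump
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Invented data: page "c" is the best page but is only reachable from "a".
scores = {"a": 1, "b": 0, "c": 3}
links = {"a": ["c"], "b": ["a"], "c": []}
print(evolve(["a", "b"], scores.get, links))
```

    A purely greedy survivor-selection loop would keep returning "a"; it is the random mutation jump that eventually discovers the higher-scoring "c", mirroring the recall advantage the abstract reports for the genetic algorithm spider.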