25 research outputs found

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is to push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet—in all its glory of information and misinformation—is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day

    Gender responses towards online social engineering attacks amongst young adult students in South Africa.

    Master's Degree. University of KwaZulu-Natal, Pietermaritzburg. Online-based attacks have become prevalent and continue to rise as technology advances. The complexity of the internet has posed a cybersecurity concern across various online channels. As a result, online social engineering has become an important aspect of information security in the usage of the internet. Young adults, mainly students, who have the necessary social engineering knowledge to protect their personally identifiable information (PII) are less likely to fall victim. Therefore, social engineering awareness is seen as an important defense mechanism that enables students to protect their PII. Due to the lack of social engineering awareness initiatives conducted in higher academic institutions, social engineers succeed in luring students. This study applied a quantitative research approach, distributing 379 questionnaires to both female and male students. The questionnaire tested students on their social engineering knowledge, information security attitudes, social engineering perceptions, and online behaviour. The results of this study showed a gender difference in online behaviour when reacting to online social engineering. The male students' responses revealed that they have more social engineering knowledge than their female counterparts. The findings also indicated which online behaviours potentially increase students' susceptibility. The findings validate the need for social engineering awareness initiatives that address how students can improve their identification of online social engineering and their information security. The study concludes by recommending attainable solutions for increasing awareness levels of social engineering knowledge. Appendix F: Descriptive Statistics on pages 134-136.

    Detecting, Modeling, and Predicting User Temporal Intention

    The content of social media has grown exponentially in recent years and its role has evolved from narrating life events to actually shaping them. Unfortunately, content posted and shared in social networks is vulnerable and prone to loss or change, rendering the context associated with it (a tweet, post, status, or other) meaningless. There is an inherent value in maintaining the consistency of such social records, as in some cases they take over the task of being the first draft of history: collections of these social posts narrate the pulse of the street during historic events, protests, riots, elections, wars, disasters, and others, as shown in this work. The user sharing the resource has an implicit temporal intent: either the state of the resource at the time of sharing, or the current state of the resource at the time of the reader clicking. In this research, we propose a model to detect and predict the temporal intention of the author upon sharing content in the social network and of the reader upon resolving this content. To build this model, we first examine the three aspects of the problem: the resource, time, and the user. For the resource, we start by analyzing the content on the live web and its persistence. We noticed that a portion of the resources shared in social media disappear, and with further analysis we unraveled a relationship between this disappearance and time. We lose around 11% of the resources after one year of sharing and a steady 7% every following year. With this, we turn to the public archives, and our analysis reveals that not all posted resources are archived; even when they are, an average of 8% per year disappears from the archives, and in some cases the archived content is heavily damaged. These observations show that the archives are not well-enough populated to consistently and reliably reconstruct a missing resource as it existed at the time of sharing.
To analyze the concept of time, we devised several experiments to estimate the creation date of shared resources. We developed Carbon Date, a tool which successfully estimated the correct creation dates for 76% of the test sets. Beyond the resources' creation, we wanted to measure if and how they change with time. We conducted a longitudinal study on a data set of very recently published tweet-resource pairs, recording observations hourly. We found that after just one hour, ~4% of the resources had changed by ≥30%, while after a day the change rate slowed: ~12% of the resources had changed by ≥40%. For the third and final component of the problem, we conducted user behavioral analysis experiments and built a data set of 1,124 instances manually assigned by test subjects. Temporal intention proved to be a difficult concept for average users to understand. We developed our Temporal Intention Relevancy Model (TIRM) to transform the highly subjective temporal intention problem into the more easily understood idea of relevancy between a tweet and the resource it links to, and of change in the resource through time. On our collected data set, TIRM produced a 90.27% success rate. Furthermore, we extended TIRM and used it to build a time-based model to predict temporal intention change or steadiness at the time of posting with 77% accuracy. We built a service API around this model to provide predictions, along with a few prototypes. Future tools could implement TIRM to assist users in pushing copies of shared resources into public web archives to ensure the integrity of the historical record. Additional tools could assist the mining of the existing social media corpus by dereferencing the intended version of the shared resource based on the intention strength and the time between tweeting and mining.
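The reported decay rates lend themselves to a quick back-of-the-envelope projection. The sketch below is not the authors' code; it simply reads the "steady 7% every following year" as a share of the originally shared set (a linear-loss assumption) and estimates the fraction of shared resources still reachable on the live web after a given number of years:

```python
def surviving_fraction(years: int) -> float:
    """Estimated fraction of originally shared resources still on the
    live web, using the rates reported above: ~11% lost in the first
    year, then ~7% of the original set lost each following year."""
    if years <= 0:
        return 1.0
    lost = 0.11 + 0.07 * (years - 1)  # linear loss, as described
    return max(0.0, 1.0 - lost)

# Projection for the first five years after sharing.
print([round(surviving_fraction(n), 2) for n in range(6)])
# → [1.0, 0.89, 0.82, 0.75, 0.68, 0.61]
```

Under this reading, roughly 39% of shared resources would be gone within five years, which illustrates why the authors argue for proactively pushing copies into public web archives.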

    An Examination of Interactions of U.S. Players with Offshore Gambling Sites and Online Casino Reviews: Are They Offenders or Victims?

    While the recent liberal stance of the United States has facilitated the proliferation of legal online gambling venues, the ‘black market’ of online gambling remains prevalent. Illegal online gambling poses concerns for the legal market and risks of gambling addiction and to consumer protection. Numerous law violations occur in this domain, produced by the interactions between players, gambling sites, and online casino reviews. Exploring the interactions of players with online gambling sites and online casino reviews is vital to enhancing our understanding of illegal online gambling activity, and it allows us to develop preventive measures that can reduce the illegal market; however, few criminological studies have empirically investigated this subject. This study examined how structural and operational factors of offshore gambling sites affect players’ decisions to use such sites, drawing on routine activities theory as the primary theoretical framework. In addition, this study employed framing theory coupled with neutralization techniques to examine how online casino reviews interpret the use of offshore gambling sites in the U.S. Findings indicate that high visibility of offshore sites on the Internet leads to high usage of the sites by U.S. players. In addition, online casino reviews providing a blacklist of online gambling sites serve as informal guardians, helping players avoid rogue gambling sites that pose a risk to their customers. However, online casino reviews affiliated with offshore sites not only present misleading information about U.S. gambling laws to encourage players to use their affiliated gambling sites but also employ various frames to justify the use of offshore sites in the U.S., which leads to players unknowingly depositing at illegal gambling sites while falsely believing that their behavior is legitimate.
Policy implications are suggested based on these findings and provide insights toward more effective online gambling regulation.

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabit/s range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing, and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are more suited towards high-bandwidth DDoS attacks than others. For example, using a local filtering appliance means that all the attack traffic still passes through the owner's network. This inherently limits the maximum capacity of such a device to the bandwidth that is available. BGP blackholing does not have such limitations but can, as a side effect, cause service disruptions to end-users. A different strategy, one that has not attracted much attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate their service across different physical locations while keeping that service addressable with just a single IP address. It relies on BGP to load balance users. In practice, it is combined with other mitigation strategies to allow those to scale up: operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two case studies. In the first, we focus on an anycast service during normal operations, namely Google Public DNS, and show that the routing within this service is far from optimal, for example in terms of distance between the client and the server.
In the second case study, we observe the root DNS while it is under attack and show that even though the aggregate bandwidth available to this service exceeded the attack we observed, clients still experienced service degradation. This degradation was caused by some sites of the anycast service receiving a much higher share of traffic than others. For operators to improve their anycast networks, and optimize them in terms of resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in terms of distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of client to Point-of-Presence (PoP) mapping, i.e., the anycast catchment. This method does not rely on external vantage points, is free of bias, and offers a much higher resolution than any previous method. We validated this methodology by deploying it on a locally developed testbed, as well as on the B root DNS. We showed that the increased resolution of this methodology improved our ability to assess the impact of changes in the network configuration, compared to previous methodologies. As final validation, we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 global Points-of-Presence and an aggregate bandwidth of 30 Tbit/s. Through three real-world use cases, we demonstrate the benefits of our methodology: Firstly, we show that changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements.
Secondly, we show that Verfploeter largely reinstates the ping to its former glory, showing how it can be used to troubleshoot network connectivity issues in an anycast context. Thirdly, we demonstrate how accurate anycast catchment maps offer operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make the datasets collected over the course of the research in this thesis available as open-access data. The two best (open) dataset awards that these datasets received confirm that they are a valued contribution. In summary, we have investigated two large anycast services and have shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and able to obtain highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering, and mitigating DDoS attacks.
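The aggregation step behind a Verfploeter-style catchment measurement can be sketched as follows. Probes are sent from the shared anycast address to many client prefixes, and each ICMP echo reply arrives at whichever PoP BGP routes it to; pairing probed prefixes with the receiving PoP yields the catchment map and per-PoP load shares. The sketch below is illustrative only, not code from the thesis; the reply records, prefixes, and PoP names are all hypothetical:

```python
from collections import Counter

# Hypothetical reply log: (probed client prefix, PoP where the ICMP
# echo reply arrived). In a real deployment these records come from
# packet captures taken at every anycast site.
replies = [
    ("192.0.2.0/24", "AMS"),
    ("198.51.100.0/24", "AMS"),
    ("203.0.113.0/24", "LAX"),
    ("198.18.0.0/24", "FRA"),
]

def catchment(replies):
    """Map each probed client prefix to the PoP that caught its reply."""
    return {prefix: pop for prefix, pop in replies}

def load_share(mapping):
    """Fraction of probed prefixes falling into each PoP's catchment."""
    counts = Counter(mapping.values())
    total = sum(counts.values())
    return {pop: n / total for pop, n in counts.items()}

shares = load_share(catchment(replies))
print(shares)  # → {'AMS': 0.5, 'LAX': 0.25, 'FRA': 0.25}
```

Because every probed prefix is a vantage point, the resolution of the resulting map is limited only by how many prefixes respond to ping, which is what distinguishes this approach from platforms with a fixed set of external probes.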

    DATA-DRIVEN STORYTELLING FOR CASUAL USERS

    Today’s overwhelming volume of data has made effective analysis virtually inaccessible for the general public. The emerging practice of data-driven storytelling is addressing this by framing data using familiar mechanisms such as slideshows, videos, and comics to make even highly complex phenomena understandable. However, current data stories still do not utilize the full potential of the storytelling domain. One reason for this is that current data-driven storytelling practice does not leverage the full repertoire of media that can be used for storytelling, such as speech, e-learning, and video games. In this dissertation, we propose a taxonomy focused specifically on media types, with the purpose of widening the purview of data-driven storytelling by putting more tools in the hands of designers. We expand the idea of data-driven storytelling to the group of casual users, that is, consumers of information and non-professionals with limited time, skills, and motivation, in order to bridge the data gap between advanced data analytics tools and everyday internet users. To demonstrate the effectiveness and wide acceptance of our taxonomy and of data-driven storytelling among casual users, we found, reviewed, and classified ninety-one examples of data-driven storytelling. Using our taxonomy as a generative tool, we also explored two novel storytelling mechanisms: live-streaming analytics videos (DataTV) and sequential art (comics) that dynamically incorporates visual representations (Data Comics). Meanwhile, we widened the genres we explored to fill gaps in the literature. We also evaluated Data Comics and DataTV with user studies and expert reviews. The results show that Data Comics facilitates data-driven storytelling in terms of inviting reading, aiding memory, and being viewed as a story. The results also show that an integrated system such as DataTV encourages authors to create and present data stories.

    Web accessibility and mental disorders

    Background: Mental disorders are a significant public health issue due to the restrictions they place on participation in all areas of life and the resulting disruption to the families and societies of those affected. People with these disorders often use the Web as an informational resource, a platform for convenient self-directed treatment, and a means for many other kinds of support. However, some features of the Web can potentially erect barriers for this group that limit their access to these benefits, and there is a lack of research looking into this eventuality. Therefore, it is important to identify gaps in knowledge about “what” barriers exist and “how” they could be addressed, so that this knowledge can inform Web professionals who aim to ensure the Web is inclusive of this population. Objective: The objective of this work was to identify the barriers people with mental disorders, especially those with depression and anxiety, experience when using the Web, and the facilitation measures used to address such barriers. Methods: This work involved three studies: first, (1) a systematic review of studies that have considered the difficulties people with mental disorders experience when using digital technologies. A synthesis was performed by categorizing data according to the 4 foundational principles of Web accessibility as proposed by the World Wide Web Consortium. Facilitation measures recommended by studies were then summarized into a set of minimal recommendations. This work also relied on data triangulation, using (2) a face-to-face semistructured interview study with participants affected by depression and anxiety and a comparison group, as well as (3) a persona-based expert online survey study with mental health practitioners. Framework analysis was used for study 2 and study 3. Results: A total of 16 publications were included in study 1’s review, comprising 13 studies and 3 international guidelines.
Findings suggest that people with mental disorders experience barriers that limit how they perceive, understand, and operate websites. Identified facilitation measures target these barriers in addition to ensuring that Web content can be reliably interpreted by a wide range of user applications. In study 2, 167 difficulties identified from the experiences of participants in the depression and anxiety group were discussed within the context of 81 Web activities, services, and features. Sixteen difficulties identified from the experiences of participants in the comparison group were discussed within the context of 11 Web activities, services, and features. In study 3, researchers identified 3 themes and 10 subthemes describing the difficulties people with depression and anxiety are likely to experience online, as reported by mental health practitioners. Conclusions: People with mental disorders encounter barriers on the Web, and attempts have been made to remove or reduce these barriers. This investigation has contributed to a fuller understanding of these difficulties and provides innovative guidance on how to remove and reduce them for people with depression and anxiety when using the Web. More rigorous research is still needed to be exhaustive and to have a larger impact on improving the Web for people with mental disorders.