2,951 research outputs found

    On the Privacy Practices of Just Plain Sites

    In addition to visiting high-profile sites such as Facebook and Google, web users often visit more modest sites, such as those operated by bloggers or by local organizations such as schools. Such sites, which we call "Just Plain Sites" (JPSs), are likely to inadvertently represent greater privacy risks than high-profile sites by virtue of being unable to afford privacy expertise. To assess the prevalence of the privacy risks to which JPSs may inadvertently be exposing their visitors, we analyzed a number of easily observed privacy practices of such sites. We found that many JPSs collect a great deal of information from their visitors, share a great deal of information about their visitors with third parties, permit a great deal of tracking of their visitors, and use deprecated or unsafe security practices. Our goal in this work is not to scold JPS operators, but to raise awareness of these facts among both JPS operators and visitors, possibly encouraging the operators of such sites to take greater care in their implementations, and visitors to take greater care in how, when, and what they share.
    Comment: 10 pages, 7 figures, 6 tables, 5 authors, and a partridge in a pear tree
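The "easily observed privacy practices" the abstract refers to can be probed with simple static checks. Below is a minimal sketch, not the authors' actual methodology, of one such check: counting the third-party hosts from which a page loads scripts, a rough proxy for tracking exposure. The sample HTML and all host names are invented for illustration.

```python
# Toy check for one observable privacy practice: third-party script loading.
# Host names below are invented; this is not the paper's measurement code.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcCollector(HTMLParser):
    """Collects the src attribute of every <script> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def third_party_script_hosts(html, first_party_host):
    """Return the set of external hosts a page loads scripts from."""
    parser = ScriptSrcCollector()
    parser.feed(html)
    hosts = set()
    for src in parser.srcs:
        host = urlparse(src).netloc
        if host and host != first_party_host:
            hosts.add(host)
    return hosts

sample = """
<html><head>
<script src="https://example-blog.org/js/main.js"></script>
<script src="https://tracker.example-ads.com/t.js"></script>
<script src="https://cdn.example-analytics.net/a.js"></script>
</head></html>
"""
print(sorted(third_party_script_hosts(sample, "example-blog.org")))
```

A real survey of this kind would also need to follow redirects, inspect cookies, and check TLS configuration, but the principle, observing what a page discloses without any privileged access, is the same.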

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments in implementing data extraction, and extends the inquiry through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting the semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
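To illustrate the kind of structured semantics the report suggests exploiting, here is a minimal sketch of extracting microdata (`itemprop` name/value pairs) from a blog post's HTML. This is an illustrative toy, not the BlogForever extraction pipeline; the sample markup and class names are invented.

```python
# Toy microdata extractor: collect the text of elements that carry an
# itemprop attribute. Sample markup is invented for illustration.
from html.parser import HTMLParser

class MicrodataExtractor(HTMLParser):
    """Maps each itemprop name to the text content of its element."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.items = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.items[self._current] = data.strip()
            self._current = None

post = """
<article itemscope itemtype="https://schema.org/BlogPosting">
  <h1 itemprop="headline">Preserving Blogs at Scale</h1>
  <span itemprop="author">A. Blogger</span>
  <time itemprop="datePublished">2012-06-01</time>
</article>
"""
extractor = MicrodataExtractor()
extractor.feed(post)
print(extractor.items)
```

The benefit the report points to is visible even in this toy: when a blog exposes schema.org microdata, the headline, author, and publication date can be recovered without any per-site extraction rules.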

    A Privacy-Preserving, Context-Aware, Insider Threat Prevention and Prediction model (PPCAITPP)

    The insider threat problem is extremely challenging to address because it is committed by insiders who are trusted and authorized to access the organization's information resources. The problem is further complicated by the multifaceted nature of insiders, as human beings have varied motivations and fluctuating behaviours. Additionally, typical monitoring systems may violate the privacy of insiders. Consequently, a comprehensive approach is needed to mitigate insider threats. This research presents a novel insider threat prevention and prediction model that combines several approaches, techniques and tools from computer science and criminology: the Privacy-Preserving, Context-Aware, Insider Threat Prevention and Prediction model (PPCAITPP). The model is predicated on the Fraud Diamond, a theory from criminology which holds that four elements must be present for a criminal to commit maleficence: pressure (i.e. motive), opportunity, ability (i.e. capability) and rationalization. According to the Fraud Diamond, malicious employees need a motive, an opportunity and the capability to commit fraud, and criminals tend to rationalize their malicious actions to ease their cognitive dissonance. To mitigate the insider threat comprehensively, all four elements must be considered, because insider threat crime involves the same elements as crimes committed in the physical world. The model is designed to act in context: when it predicts a threat, it reacts immediately to prevent the threat from materializing. Because the motives and behaviours of humans are transient, prediction requires collecting current information about insiders.
Context-aware systems are used in the model to collect current information about insiders related to motive and ability, and to determine whether insiders exploit any opportunity presented to commit a crime (i.e. entrapment). Furthermore, they are used to neutralize any rationalizations the insider may have via neutralization mitigation, thus preventing the insider from committing a future crime. However, the model collects private information and involves entrapment, practices that may be deemed unethical. A model that does not preserve the privacy of insiders may make them feel distrusted, which in turn may negatively affect their workplace productivity. Hence, this thesis argues that an insider prediction model must be privacy-preserving in order to prevent further cybercrime. The model is not intended to be punitive, but rather to be a strategy that prevents current insiders from being tempted to commit a crime in future. The model involves four major components: context awareness, opportunity facilitation, neutralization mitigation and privacy preservation. It implements a context analyser to collect information about an insider who may be motivated to commit a crime and about his or her ability to carry out an attack plan. The context analyser collects only metadata, such as search behaviour, file access, logins, keystroke usage and linguistic features, excluding content, in order to preserve the privacy of insiders. The model also employs keystroke and linguistic features based on typing patterns to detect changes in an insider's emotional and stress levels, which are indirectly related to the motivation to commit a cybercrime. Research demonstrates that most insiders who have committed a crime experienced negative emotion or pressure resulting from dissatisfaction with employment measures such as termination, transfer without consent or denial of a wage increase.
There may also be personal problems, such as a divorce. The typing pattern analyser and other resource usage behaviours help identify an insider who may be motivated to commit a cybercrime, based on his or her stress levels and emotions as well as changes in resource usage behaviour. The model does not identify the motive itself; rather, it identifies individuals who may be motivated to commit a crime by reviewing their computer-based actions. The model also assesses the capability of insiders to commit a planned attack, based on their usage of computer applications: it measures their sophistication in terms of range of knowledge, depth of knowledge and skill, and counts the system errors and warnings generated while they use the applications. The model facilitates an opportunity to commit a crime by using honeypots to determine whether a motivated and capable insider will exploit an opportunity in the organization to commit a criminal act. Based on the insider's reaction to the opportunity presented via a honeypot, the model deploys an implementation strategy based on neutralization mitigation, the process of nullifying the rationalizations the insider may have had for committing the crime. All information about insiders is anonymized to remove any identifiers, for the purpose of preserving their privacy. The model is also intended to identify any new behaviour that may emerge during implementation. This research contributes to existing scientific knowledge in the insider threat domain and can serve as a point of departure for future researchers in the area. Organizations could use the model as a framework to design and develop a comprehensive security solution for insider threat problems, and the model concept can be integrated into existing information security systems that address the insider threat problem.
Information Science, D.Phil. (Information Systems)
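The anonymization step described above can be sketched as pseudonymization: replacing insider identifiers in monitoring events with keyed hashes before analysis. This is a minimal illustration under assumed field names, not the thesis's actual implementation.

```python
# Sketch of privacy-preserving event collection: identifiers are replaced
# with keyed pseudonyms, and only metadata (no content) is retained.
# The event layout and field names are assumptions for illustration.
import hashlib
import hmac
import os

SECRET_SALT = os.urandom(16)  # kept separate from the analysis pipeline

def pseudonymize(user_id: str) -> str:
    """Deterministically map an identifier to a pseudonym via keyed hashing."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:12]

def anonymize_event(event: dict) -> dict:
    """Strip direct identifiers from a monitoring event, keeping metadata only."""
    return {
        "user": pseudonymize(event["user"]),
        "action": event["action"],        # e.g. "login", "file_access"
        "timestamp": event["timestamp"],  # coarse metadata, never content
    }

event = {"user": "jsmith", "action": "file_access", "timestamp": "2023-05-01T09:14"}
anon = anonymize_event(event)
print(anon)
```

Because the hash is keyed and deterministic, the same insider's events remain linkable for behavioural analysis while the raw identifier never enters the analysis pipeline; discarding the salt later would sever even that link.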

    Evaluating the semantic web: a task-based approach

    The increased availability of online knowledge has led to the design of several algorithms that solve a variety of tasks by harvesting the Semantic Web, i.e. by dynamically selecting and exploring a multitude of online ontologies. Our hypothesis is that the performance of such novel algorithms implicitly provides an insight into the quality of the ontologies used, and thus opens the way to a task-based evaluation of the Semantic Web. We have investigated this hypothesis by studying the lessons learnt about online ontologies when used to solve three tasks: ontology matching, folksonomy enrichment, and word sense disambiguation. Our analysis leads to a suite of conclusions about the status of the Semantic Web, which highlight a number of strengths and weaknesses of the semantic information available online and complement the findings of other analyses of the Semantic Web landscape.
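Of the three tasks, word sense disambiguation is the easiest to illustrate compactly. The sketch below implements a toy Lesk-style disambiguator that picks the sense whose gloss shares the most words with the context. The paper's algorithms instead harvest online ontologies dynamically; the senses and glosses here are invented for illustration.

```python
# Toy Lesk-style word sense disambiguation: choose the sense whose gloss
# has the largest word overlap with the sentence context. Senses and
# glosses are invented; this is not the algorithm evaluated in the paper.
def lesk(context: str, senses: dict) -> str:
    """Return the sense key whose gloss overlaps the context most."""
    ctx = set(context.lower().split())

    def overlap(gloss: str) -> int:
        return len(ctx & set(gloss.lower().split()))

    return max(senses, key=lambda s: overlap(senses[s]))

senses = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "the sloping land alongside a river or stream",
}
print(lesk("she sat on the grassy land beside the river", senses))
```

A task-based evaluation in the paper's sense would run such an algorithm with glosses drawn from online ontologies and read its accuracy as indirect evidence about the quality of those ontologies.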

    Isolation mechanisms for browser-based software architectures

    Traditional backend-oriented web applications are increasingly being replaced by frontend applications, which execute directly in the user's browser. Web application performance has been shown to directly affect business performance, and frontend applications enable unique performance improvements. However, building complex applications within the browser is still a new and poorly understood field, and engineering efforts in the field are often plagued by quality issues. This thesis addresses the current research gap around frontend applications by investigating the applicability of the isolation mechanisms available in browsers to frontend application architecture. We review the important publications on the topic, forming an overview of current research and current best practices in the field. We use this understanding, combined with relevant industry experience, to categorize the available isolation mechanisms into four classes: state and variable isolation, isolation from the DOM, isolation within the DOM, and execution isolation. For each class, we provide background and concrete examples of the related quality issues, as well as tools for their mitigation. Finally, we use the ISO 25010 quality standard to evaluate the impact of these isolation mechanisms on frontend application quality. Our results suggest that applying these isolation mechanisms has the potential to significantly improve several key areas of frontend application quality, most importantly compatibility and maintainability, but also performance and security. Many of the mechanisms also imply trade-offs with other quality attributes, most commonly performance. Future work could include developing frontend application architectures that leverage these isolation mechanisms to their full potential.

    A psychometric analysis of the Comprehensive School Survey (CSS) middle school student version

    School climate is increasingly recognized by scholars and policymakers as a crucial factor associated with students' educational experiences. Hence, practitioners endeavor to equitably measure and improve school climate to promote favorable student academic and behavior outcomes. Unfortunately, school climate research is fragmented, and a research-practice gap exists in best practices for scale development and validity testing. The result is a proliferation of practitioner-developed school climate measures that lack solid theoretical grounding and evidence to support intended score interpretations and uses. In response, Whitehouse et al. (2021) proposed a validity testing framework for practitioner-developed instruments aimed at supporting culturally responsive school climate measurement. Their framework, however, suffers from key limitations regarding the transparency of content validity assessment, the breadth of validity evidence reported, and the methods used to examine measurement invariance. Therefore, this study sought to replicate and extend their validity testing framework by using a standardized rubric to assess content validity, examining measurement invariance via the alignment method, and analyzing the predictive validity of group mean scores. By applying the extended validity testing framework to a practitioner-developed school climate student survey, the study also aimed to provide useful evidence to a large urban district with respect to the validity of comparing survey scores across Black and White middle school student groups and of using scores to inform continuous improvement of student learning. Results suggest the extended framework is superior to the original for obtaining general content, factorial, and predictive validity evidence and for assessing measurement invariance across racial subgroups, provided the number and size of groups are adequate.
Findings suggest the district's middle school student survey is culturally responsive, although it may not sufficiently address all critical school climate dimensions. To improve the survey, the district must settle on a clear definition and taxonomy of school climate to facilitate a program of validity testing, and publicly document all available validity evidence. Future studies should clarify alignment sample size and simulation study requirements, and should extend the framework to assess additional validity concerns and to support person-centered approaches.