7 research outputs found

    On-line trust perception: what really matters

    Trust is an essential ingredient in our daily activities. The fact that these activities are increasingly carried out using the large number of services available on the Internet makes it necessary to understand how users perceive trust in the online environment. A wide body of literature on trust perception and ways to model it already exists. A trust perception model generally lists a set of factors influencing a person trusting another person, a computer, or a website. Different models define different sets of factors, but a single unifying model, applicable to multiple scenarios in different settings, is still missing. Moreover, there is no consensus on the importance each factor has on trust perception. In this paper, we review the existing literature and provide a general trust perception model that is able to measure the trustworthiness of a website. The model takes into account a comprehensive set of trust factors, ranking them by importance, and can be easily adapted to different application domains. A user study was used to determine the importance, or weight, of each factor. The results of the study show evidence that these weights differ from one application domain (e.g. e-banking or e-health) to another. We also demonstrate that the weight of certain factors is related to the user's knowledge of the IT security field. This paper constitutes a first step towards the ability to measure the trustworthiness of a website, helping developers create more trustworthy websites and users make their trust decisions when using on-line services.
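
    The abstract describes a weighted, domain-dependent combination of trust factors. The sketch below only illustrates that idea: the factor names, per-domain weights, and normalization are invented for the example and are not the values derived from the paper's user study.

```python
# Minimal sketch of a weighted trust-perception score. Factor names and weights
# below are illustrative assumptions, not the study's actual results.
from typing import Dict

# Hypothetical per-domain weights (the paper finds that weights differ per domain).
DOMAIN_WEIGHTS: Dict[str, Dict[str, float]] = {
    "e-banking": {"security_indicators": 0.35, "brand_reputation": 0.30,
                  "look_and_feel": 0.15, "privacy_statement": 0.20},
    "e-health":  {"security_indicators": 0.25, "brand_reputation": 0.20,
                  "look_and_feel": 0.20, "privacy_statement": 0.35},
}

def trustworthiness(factor_scores: Dict[str, float], domain: str) -> float:
    """Combine per-factor scores (0..1) into one weighted trustworthiness score."""
    weights = DOMAIN_WEIGHTS[domain]
    total = sum(weights.values())
    return sum(w * factor_scores.get(f, 0.0) for f, w in weights.items()) / total

if __name__ == "__main__":
    scores = {"security_indicators": 0.9, "brand_reputation": 0.6,
              "look_and_feel": 0.8, "privacy_statement": 0.4}
    # The same website scores differently depending on the application domain.
    print(f"e-banking: {trustworthiness(scores, 'e-banking'):.2f}")
    print(f"e-health:  {trustworthiness(scores, 'e-health'):.2f}")
```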

    What websites know about you : privacy policy analysis using information extraction

    No full text
    The need for privacy protection on the Internet is well recognized. Every day, users are asked to release personal information in order to use online services and applications. Service providers do not always need all the data they gather to be able to offer a service. Thus, users should be aware of what data is collected by a provider in order to judge whether this is too much for the services offered. Providers are obliged to describe how they treat personal data in privacy policies. By reading the policy, users could discover, amongst other things, what personal data they agree to give away when choosing to use a service. Unfortunately, privacy policies are long legal documents that users notoriously refuse to read. In this paper we propose a solution that automatically analyzes privacy policy text and shows what personal information is collected. Our solution is based on the use of Information Extraction techniques and represents a step towards the more ambitious aim of automated grading of privacy policies.
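
    As a rough illustration of the extraction step, the sketch below scans policy sentences for collection verbs and matches them against a small list of personal-data types. The verb patterns and data-type list are assumptions made for the example; the paper relies on Information Extraction techniques rather than plain keyword matching.

```python
# Simplified sketch: find sentences containing "collect"-style verbs and report
# which personal-data types they mention. Patterns and data types are illustrative.
import re

DATA_TYPES = ["email address", "name", "phone number", "postal address",
              "ip address", "location", "payment information", "cookies"]
COLLECT_VERBS = r"(collect|gather|store|obtain|receive|process)"

def extract_collected_data(policy_text: str) -> set:
    found = set()
    for sentence in re.split(r"(?<=[.!?])\s+", policy_text.lower()):
        if re.search(COLLECT_VERBS, sentence):
            for dtype in DATA_TYPES:
                if dtype in sentence:
                    found.add(dtype)
    return found

if __name__ == "__main__":
    policy = ("We collect your name, email address and IP address when you "
              "register. We may also store cookies to improve the service.")
    print(extract_collected_data(policy))
    # -> {'name', 'email address', 'ip address', 'cookies'}
```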

    A machine learning solution to assess privacy policy completeness

    No full text
    A privacy policy is a legal document used by websites to communicate how the personal data that they collect will be managed. By accepting it, the user agrees to release their data under the conditions stated by the policy. Privacy policies should provide enough information to enable users to make informed decisions. Privacy regulations support this by specifying what kind of information has to be provided. As privacy policies can be long and difficult to understand, users tend not to read them. Because of this, users generally agree with a policy without knowing what it states and whether the aspects important to them are covered at all. In this paper we present a solution to assist the user by providing a structured way to browse the policy content and by automatically assessing the completeness of a policy, i.e. the degree of coverage of the privacy categories important to the user. The privacy categories are extracted from privacy regulations, while text categorization and machine learning techniques are used to verify which categories are covered by a policy. The results show the feasibility of our approach: an automatic classifier, able to associate the right category to paragraphs of a policy with an accuracy approximating that obtainable by a human judge, can be effectively created.
    Keywords: privacy, privacy policy, natural language, machine learning
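
    A minimal sketch of the text-categorization idea follows, using scikit-learn as an assumed implementation choice (the abstract does not name a library). The training paragraphs, category labels, and the set of required categories are toy placeholders; completeness is computed as the fraction of required categories covered by at least one classified paragraph.

```python
# Sketch: classify policy paragraphs into privacy categories, then measure how
# many required categories the policy covers. Toy data and labels for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled paragraphs; real categories would be derived from privacy regulations.
train_texts = [
    "We collect your email address and browsing history.",
    "You may request deletion of your personal data at any time.",
    "We share aggregated data with third-party advertisers.",
    "Cookies are used to remember your preferences.",
]
train_labels = ["collection", "user_rights", "sharing", "cookies"]

REQUIRED_CATEGORIES = {"collection", "user_rights", "sharing", "cookies", "retention"}

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

def completeness(policy_paragraphs):
    """Fraction of required categories covered by at least one paragraph."""
    covered = set(clf.predict(policy_paragraphs))
    return len(covered & REQUIRED_CATEGORIES) / len(REQUIRED_CATEGORIES)

print(completeness([
    "We gather usage data and your email address.",
    "Data can be erased on request by contacting support.",
]))
```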

    A hybrid framework for data loss prevention and detection

    No full text
    Data loss, i.e. the unauthorized or unwanted disclosure of data, is a major threat for modern organizations. Data Loss Protection (DLP) solutions in use nowadays either employ patterns of known attacks (signature-based) or try to find deviations from normal behavior (anomaly-based). While signature-based solutions provide accurate identification of known attacks and are thus suitable for preventing these attacks, they cannot cope with unknown attacks, nor with attackers who follow unusual paths (like those known only to insiders) to carry out their attack. On the other hand, anomaly-based solutions can find unknown attacks but typically have a high false positive rate, limiting their applicability to the detection of suspicious activities. In this paper, we propose a hybrid DLP framework that combines signature-based and anomaly-based solutions, enabling both detection and prevention. The framework uses an anomaly-based engine that automatically learns a model of normal user behavior, allowing it to flag when insiders carry out anomalous transactions. Typically, anomaly-based solutions stop at this stage. Our framework goes further in that it exploits an operator's feedback on alerts to automatically build and update attack signatures, which are used to block undesired transactions in a timely manner, before they can cause any damage.
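
    The sketch below illustrates the hybrid control flow described above, under the assumption that a transaction can be summarized as a (user, action, table) triple: known-bad signatures block immediately, deviations from the learned profile raise alerts, and operator feedback either adds a signature or extends the profile. The representation is a simplification for illustration, not the framework's actual model.

```python
# Sketch of the hybrid DLP idea: anomaly detection for unknown behavior plus
# operator-confirmed signatures for prevention. Data structures are illustrative.
class HybridDLP:
    def __init__(self):
        self.normal_profiles = set()   # (user, action, table) triples seen in training
        self.signatures = set()        # operator-confirmed malicious triples

    def learn(self, transactions):
        """Build the 'normal behavior' profile from historical transactions."""
        self.normal_profiles.update(transactions)

    def handle(self, tx):
        if tx in self.signatures:
            return "BLOCK"             # prevention: matches a known-bad signature
        if tx not in self.normal_profiles:
            return "ALERT"             # detection: anomalous, sent to the operator
        return "ALLOW"

    def operator_feedback(self, tx, malicious: bool):
        """Confirmed alerts become signatures; false positives extend the profile."""
        (self.signatures if malicious else self.normal_profiles).add(tx)

dlp = HybridDLP()
dlp.learn({("alice", "SELECT", "customers"), ("bob", "UPDATE", "orders")})
tx = ("alice", "EXPORT", "customers")
print(dlp.handle(tx))                  # ALERT (anomalous)
dlp.operator_feedback(tx, malicious=True)
print(dlp.handle(tx))                  # BLOCK (now covered by a signature)
```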

    A white-box anomaly-based framework for database leakage detection

    No full text
    Data leakage is at the heart of most privacy breaches worldwide. In this paper we present a white-box approach to detect potential data leakage by spotting anomalies in database transactions. We refer to our solution as white-box because it builds self-explanatory profiles that are easy to understand and update, as opposed to black-box systems, which create profiles that are hard to interpret and maintain (e.g., neural networks). In this paper we introduce our approach and demonstrate that it is a major leap forward with respect to previous work on the topic in several aspects: (i) it significantly decreases the number of false positives, which is orders of magnitude lower than in comparable state-of-the-art approaches (we demonstrate this using an experimental dataset consisting of millions of real enterprise transactions); (ii) it creates profiles that are easy to understand and update, and therefore provides an explanation of the origins of an anomaly; (iii) it allows the introduction of a feedback mechanism that makes it possible for the system to improve based on its own mistakes; and (iv) feature aggregation and transaction flow analysis allow the system to detect threats that span multiple features and multiple transactions.
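
    To make the notion of a white-box profile concrete, the sketch below keeps per-user histograms over a handful of transaction features and reports which features make a transaction unusual, which doubles as the explanation attached to an alert. The feature names and the support threshold are assumptions for the example, not the paper's actual feature set.

```python
# Sketch of a readable, directly updatable profile: per-user counts of feature
# values. Rarely seen values are reported as the explanation of an anomaly.
from collections import defaultdict

class WhiteBoxProfile:
    def __init__(self, min_support=2):
        # profile[user][feature_name][feature_value] -> observation count
        self.profile = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
        self.min_support = min_support

    def train(self, user, features):
        for name, value in features.items():
            self.profile[user][name][value] += 1

    def anomalous_features(self, user, features):
        """Return the features that make this transaction unusual (the explanation)."""
        return [name for name, value in features.items()
                if self.profile[user][name][value] < self.min_support]

p = WhiteBoxProfile()
for _ in range(5):
    p.train("alice", {"table": "customers", "rows_bucket": "1-10", "hour": "office"})

tx = {"table": "customers", "rows_bucket": ">10000", "hour": "night"}
print(p.anomalous_features("alice", tx))   # ['rows_bucket', 'hour'] -> explainable alert
```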

    From system specification to anomaly detection (and back)

    No full text
    Industrial control systems have stringent safety and security demands. High safety assurance can be obtained by specifying the system together with its possible faults and monitoring it to ensure these faults are properly addressed. Addressing security requires considering unpredictable attacker behavior. Anomaly detection, with its data-driven approach, can detect simple unusual behavior and system-based attacks like the propagation of malware; on the other hand, it is less suitable for detecting more complex process-based attacks and provides little actionability in the presence of an alert. The alternative to anomaly detection is specification-based intrusion detection, which is more suitable for detecting process-based attacks but is typically expensive to set up and less scalable. We propose to combine a lightweight formal system specification with anomaly detection, providing data-driven monitoring. The combination is based on mapping elements of the specification to elements of the network traffic. This allows extracting locations to monitor and relevant context information from the formal specification, thus semantically enriching the raised alerts and making them actionable. It also allows under-specification of data-based properties in the formal model: some predicates can be left uninterpreted, and the monitoring can be used to learn a model for them. We demonstrate our methodology on a smart manufacturing use case.
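
    The following sketch illustrates the mapping idea under simplified assumptions: a tiny specification associates each monitored predicate with either an explicit check or a marker meaning "left uninterpreted", an interval is learned from benign traffic for the uninterpreted predicate, and alerts are enriched with the context string taken from the specification. The predicates, bounds, and messages are invented for the example and do not reflect the paper's formalism.

```python
# Sketch: specified predicates are checked directly; uninterpreted ones get a
# model learned from observed traffic. Alerts carry context from the specification.
SPEC = {
    "tank_level":    {"check": lambda v: 0.0 <= v <= 100.0,   # fully specified bound
                      "context": "level sensor of tank T1"},
    "valve_latency": {"check": "learn",                        # left uninterpreted
                      "context": "actuation delay of valve V3"},
}

learned = {}  # predicate -> (min, max) observed during a benign training window

def train(pred, value):
    lo, hi = learned.get(pred, (value, value))
    learned[pred] = (min(lo, value), max(hi, value))

def monitor(pred, value):
    rule = SPEC[pred]["check"]
    ok = rule(value) if callable(rule) else (learned[pred][0] <= value <= learned[pred][1])
    if not ok:
        # Alert enriched with context taken from the specification -> actionable.
        print(f"ALERT on {pred} ({SPEC[pred]['context']}): value {value} out of model")

for v in (0.8, 1.1, 0.9):
    train("valve_latency", v)

monitor("tank_level", 140.0)      # violates the specified bound
monitor("valve_latency", 5.0)     # outside the learned interval
```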

    Encryption in ICS networks : a blessing or a curse?

    No full text
    Nowadays, the internal network communication of Industrial Control Systems (ICS) usually takes place in unencrypted form. This, however, seems bound to change in the future: as we write, encryption of network traffic is seriously being considered as a standard for future ICS. In this paper we take a critical look at the pros and cons of traffic encryption in ICS. We come to the conclusion that encrypting this kind of network traffic may actually result in a reduction of security and overall safety. As such, sensible versus non-sensible use of encryption needs to be carefully considered both in developing ICS standards and systems.