
    Online advertising: analysis of privacy threats and protection approaches

    Online advertising, the pillar of the “free” content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in this scenario of online advertising. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and overview the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web. Peer reviewed. Postprint (author's final draft).

    End-user composition of interactive applications through actionable UI components

    Developing interactive systems to access and manipulate data is a very tough task. In particular, the development of user interfaces (UIs) is one of the most time-consuming activities in the software lifecycle. This is even more demanding when data have to be retrieved by flexibly accessing different online resources. Indeed, software development is moving more and more toward composite applications that aggregate specific Web services and APIs on the fly. In this article, we present a mashup model that describes the integration, at the presentation layer, of UI components. The goal is to allow non-technical end users to visualize and manipulate (i.e., to perform actions on) the data displayed by the components, which thus become actionable UI components. This article shows how the model has guided the development of a mashup platform through which non-technical end users can create component-based interactive workspaces via the aggregation and manipulation of data fetched from distributed online resources. Given the abundance of online data sources, facilitating the creation of such interactive workspaces addresses a very relevant need that emerges in different contexts. A utilization study was performed to assess the benefits of the proposed model and of the actionable UI components; participants were required to perform real tasks using the mashup platform. The study results are reported and discussed.

    Standards in Disruptive Innovation: Assessment Method and Application to Cloud Computing

    This dissertation proposes a conceptual information model and a method for assessing technology standards in the context of disruptive innovation. The conceptual information model provides the foundation for structuring the relevant information. The method defines a process model that describes how the information model is instantiated for different domains, and supports stakeholders in classifying and evaluating technology standards.

    Detecting Prominent Features and Classifying Network Traffic for Securing Internet of Things Based on Ensemble Methods

    Rapid growth of the Internet and connected devices, ranging from cloud systems to the Internet of Things, has raised critical concerns about securing these systems. In the recent past, security attacks on different kinds of devices have evolved in complexity and diversity. One of the challenges is establishing secure communication among the various devices and systems in the network. Despite being protected with authentication and encryption, the network still needs to be protected against cyber-attacks. For this, the network traffic has to be closely monitored so that anomalies and intrusions can be detected. Intrusion detection can be cast as a network traffic classification problem in machine learning. Existing network traffic classification methods require a lot of training and data preprocessing, and this problem is more serious when the dataset is huge. In addition, the machine learning and deep learning methods used so far were trained on datasets that contain obsolete attacks. In this thesis, these problems are addressed by applying ensemble methods to an up-to-date network attacks dataset. Ensemble methods combine multiple learning algorithms to obtain better classification accuracy than any of the constituent algorithms achieves alone. The dataset used for network traffic classification covers recent attack scenarios and contains over fifteen attacks. This approach shows that ensemble methods can classify network traffic and detect intrusions with shorter training times and less pre-processing, without feature selection. The thesis also shows that using less than ten percent of the input dataset's features leads to accuracy similar to that achieved on the whole dataset, which can greatly reduce training times and classification duration in real-time scenarios. Masters thesis, Computer Science.
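The core idea, that a majority vote over several weak learners can outperform each learner alone, can be sketched in a few lines of plain Python. The flow features, thresholds, and sample records below are invented for illustration; the thesis itself applies full ensemble algorithms to a real network-attacks dataset.

```python
# Toy majority-vote ensemble over three hypothetical rule-based detectors.
# Each sample: (features, label) with label 1 = attack, 0 = benign.
SAMPLES = [
    ({"pkt_rate": 900, "avg_size": 60,  "syn_ratio": 0.9}, 1),
    ({"pkt_rate": 300, "avg_size": 60,  "syn_ratio": 0.9}, 1),  # low rate
    ({"pkt_rate": 900, "avg_size": 400, "syn_ratio": 0.9}, 1),  # large packets
    ({"pkt_rate": 900, "avg_size": 60,  "syn_ratio": 0.3}, 1),  # few SYNs
    ({"pkt_rate": 100, "avg_size": 800, "syn_ratio": 0.1}, 0),
    ({"pkt_rate": 700, "avg_size": 800, "syn_ratio": 0.1}, 0),  # bursty but benign
    ({"pkt_rate": 100, "avg_size": 50,  "syn_ratio": 0.1}, 0),  # small but benign
]

# Three weak "classifiers", each keying on a single feature.
CLASSIFIERS = {
    "rate": lambda f: int(f["pkt_rate"] > 500),
    "size": lambda f: int(f["avg_size"] < 100),
    "syn":  lambda f: int(f["syn_ratio"] > 0.5),
}

def accuracy(predict):
    """Fraction of samples the given predictor labels correctly."""
    return sum(predict(f) == y for f, y in SAMPLES) / len(SAMPLES)

def majority_vote(features):
    """Predict attack when at least 2 of the 3 detectors agree."""
    votes = sum(clf(features) for clf in CLASSIFIERS.values())
    return int(votes >= 2)

individual = {name: accuracy(clf) for name, clf in CLASSIFIERS.items()}
ensemble = accuracy(majority_vote)
```

Each single-feature rule misclassifies at least one sample, but because their mistakes fall on different samples, the vote recovers the correct label everywhere, which is exactly the error-decorrelation effect ensemble methods rely on.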

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
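Exploiting microdata, as the report suggests, can be illustrated with the Python standard library alone. The sketch below pulls `itemprop` values out of a schema.org-style BlogPosting; the sample markup and property names are invented for the example, and real blog pages would require the full microdata processing model (nested `itemscope` items, itemref, link/URL properties) rather than this flat pass.

```python
from html.parser import HTMLParser

class MicrodataSketch(HTMLParser):
    """Collect flat itemprop -> value pairs (no nested itemscope handling)."""
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None
        self._buf = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" not in attrs:
            return
        if tag == "meta":                      # value lives in content=""
            self.props[attrs["itemprop"]] = attrs.get("content", "")
        else:                                  # value is the element's text
            self._current = attrs["itemprop"]
            self._buf = []

    def handle_data(self, data):
        if self._current:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if self._current:
            self.props[self._current] = "".join(self._buf).strip()
            self._current = None

SAMPLE = """
<article itemscope itemtype="https://schema.org/BlogPosting">
  <h1 itemprop="headline">Preserving Blogs</h1>
  <meta itemprop="datePublished" content="2012-05-01">
  <span itemprop="author">A. Blogger</span>
</article>
"""

parser = MicrodataSketch()
parser.feed(SAMPLE)
```

Because the properties are declared in the markup itself, no per-blog wrapper or learned template is needed for these fields, which is the benefit the report attributes to such standards.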

    Models, Values, and Disasters

    Decision-support models have values embedded in them and are subjective to varying degrees. Philosophical and ethical perspectives on operations research models are used to describe this subjectivity. Approaches to model building are then suggested that take subjectivity and values into account. For decisions to reflect the right values, the model must align with the decision-maker’s values. I argue that it is appropriate and important for Christians applying mathematical models to be keenly aware of the decision-maker’s values and to seek to reflect them in the model. Disaster response planning is presented as an example where incorporating values is challenging. The responding organizations have multifaceted goals. How is equity balanced with efficiency? How are cost and donor interest considered? I report on a study of how Christian relief organizations differ from non-faith-based organizations in ways that can be reflected in their logistics procedures and in these models.

    Fact or Fiction

    Fake news is increasingly pervasive, and we address its problematic aspects to help people consume news intelligently. In this project, we research machine learning models that extract objective sentences, encouraging unbiased discussions based on facts. The most accurate model, a convolutional neural network, achieves an accuracy of 85.69%. The team implemented an end-to-end web system that highlights objective sentences in user input to make our model publicly accessible. The system also provides additional information about the user’s input, such as links to related web pages. We evaluated our system both qualitatively, by interviewing users, and quantitatively, with surveys consisting of rating-scale questions. The positive feedback we received indicates that the platform is usable.
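The extraction task itself is easy to demonstrate with a deliberately simple baseline: flag sentences containing subjectivity cues and keep the rest. The cue list and sample text below are invented, and this rule-based filter stands in for the project's CNN purely to show the input/output shape of objective-sentence extraction.

```python
import re

# Tiny, hand-picked subjectivity cues -- illustrative only, not features
# learned by the project's convolutional neural network.
SUBJECTIVE_CUES = {"think", "believe", "feel", "amazing", "terrible",
                   "best", "worst", "awful", "wonderful"}

def objective_sentences(text):
    """Return the sentences that contain no subjectivity cue."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    kept = []
    for sentence in sentences:
        words = {w.lower() for w in re.findall(r"[A-Za-z']+", sentence)}
        if not words & SUBJECTIVE_CUES:
            kept.append(sentence)
    return kept

TEXT = ("The company reported revenue of 3 million dollars. "
        "I think their product is amazing. "
        "The annual report was filed in March.")

result = objective_sentences(TEXT)
```

A deployed system would replace the cue check with the trained classifier's per-sentence prediction, but the surrounding plumbing, sentence splitting and filtering, stays the same.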

    Detection of distributed denial-of-service attacks at the source

    Year after year, new records are set for the amount of traffic in a single attack, demonstrating not only the constant presence of distributed denial-of-service attacks but also their evolution, which sets them apart from other network threats. The growing importance of resource availability keeps the debate on the security of network devices and infrastructures alive, given their preponderant role in both the home and corporate domains. In the face of this constant threat, the latest network security systems have been applying pattern-recognition techniques to infer, detect, and react more quickly and assertively. This dissertation proposes methodologies to infer patterns of network activity from its traffic: whether it follows behavior previously defined as normal, or whether there are deviations that raise suspicions about the normality of the action on the network. Everything indicates that the future of network defense systems will continue in this direction, drawing not only on the increasing amount of traffic but also on the diversity of actions, services, and entities that exhibit distinct patterns, thereby contributing to the detection of anomalous activity on the network. The methodologies propose collecting metadata, up to the transport layer of the OSI model, which is then processed by machine-learning algorithms in order to classify the underlying action. So that the contribution extends beyond denial-of-service attacks and the network domain, the methodologies are described in a generic way, allowing them to be applied in other scenarios of greater or lesser complexity. The third chapter presents a proof of concept with attack vectors that marked history, together with evaluation metrics that allow the different classifiers to be compared in terms of success rate, given the various activities on the network and their inherent dynamics. The various tests demonstrate the flexibility, speed, and accuracy of the various classification algorithms, setting the bar between 90 and 99 percent. Master's thesis in Computer and Telematics Engineering.
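The metadata-collection step described above, aggregating per-source transport-layer features before handing them to a classifier, can be sketched as follows. The packet records, feature set, and thresholds are invented for illustration; the dissertation feeds such feature vectors to machine-learning classifiers rather than to a fixed rule.

```python
from collections import defaultdict

# Hypothetical captured packet metadata (transport layer and below, no payload).
PACKETS = [
    {"src": "10.0.0.5", "dport": 80,  "size": 60,   "syn": True},
    {"src": "10.0.0.5", "dport": 80,  "size": 60,   "syn": True},
    {"src": "10.0.0.5", "dport": 80,  "size": 60,   "syn": True},
    {"src": "10.0.0.7", "dport": 443, "size": 1200, "syn": False},
    {"src": "10.0.0.7", "dport": 443, "size": 800,  "syn": False},
]

def per_source_features(packets):
    """Aggregate raw packet metadata into one feature vector per source."""
    groups = defaultdict(list)
    for pkt in packets:
        groups[pkt["src"]].append(pkt)
    features = {}
    for src, pkts in groups.items():
        features[src] = {
            "pkt_count": len(pkts),
            "mean_size": sum(p["size"] for p in pkts) / len(pkts),
            "syn_ratio": sum(p["syn"] for p in pkts) / len(pkts),
        }
    return features

def looks_like_syn_flood(feat):
    # Stand-in rule; a trained classifier would consume the same vector.
    return feat["syn_ratio"] > 0.8 and feat["mean_size"] < 100

features = per_source_features(PACKETS)
suspects = [src for src, f in features.items() if looks_like_syn_flood(f)]
```

Keeping the extraction generic over any grouping key and feature list is what lets the same pipeline be reused in the other, non-DDoS scenarios the dissertation mentions.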