
    A literature survey of active machine learning in the context of natural language processing

    Active learning is a supervised machine learning technique in which the learner is in control of the data used for learning. That control is exercised by the learner to ask an oracle, typically a human with extensive knowledge of the domain at hand, about the classes of the instances for which the model learned so far makes unreliable predictions. The active learning process takes as input a set of labeled examples, as well as a larger set of unlabeled examples, and produces a classifier and a relatively small set of newly labeled data. The overall goal is to create as good a classifier as possible without having to mark up and supply the learner with more data than necessary. The learning process aims at keeping the human annotation effort to a minimum, asking for advice only where the training utility of the result of such a query is high. Active learning has been successfully applied to a number of natural language processing tasks, such as information extraction, named entity recognition, text categorization, part-of-speech tagging, parsing, and word sense disambiguation. This report is a literature survey of active learning from the perspective of natural language processing.
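
    The query loop described above is straightforward to sketch in code. Below is a minimal, illustrative pool-based loop using least-confidence sampling; it is not taken from the surveyed work, and the model, synthetic dataset, and query budget are arbitrary stand-ins.

```python
# Minimal pool-based active learning loop with least-confidence sampling.
# Illustrative sketch only; dataset, model, and budget are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)

# Seed set: one labeled example per class; the rest form the unlabeled pool.
labeled = [int(np.argmax(y == 0)), int(np.argmax(y == 1))]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # query budget
    model.fit(X[labeled], y[labeled])
    # Least confidence: query the pool instance the model is least sure about.
    probs = model.predict_proba(X[pool])
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)  # the "oracle" supplies y[query]
    pool.remove(query)

print("final training-set size:", len(labeled))
```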

    Web Tracking: Mechanisms, Implications, and Defenses

    This article surveys the existing literature on the methods currently used by web services to track users online, as well as their purposes, implications, and possible defenses. A significant majority of the reviewed articles and web resources are from the years 2012-2014. Privacy seems to be the Achilles' heel of today's web. Web services make continuous efforts to obtain as much information as they can about the things we search, the sites we visit, the people we contact, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, and other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users (price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft). For each of the tracking methods, we present possible defenses. Apart from describing the methods and tools for keeping personal data from being tracked, we also present several tools that were used for research purposes: their main goal is to discover how and by which entities users are being tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way. Finally, we present currently proposed future approaches to user tracking and show that they can potentially pose significant threats to users' privacy.
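
    As an illustration of the fingerprinting family of methods surveyed here, the toy sketch below shows how a handful of individually common request headers can combine into a near-unique identifier. It is a hypothetical example with an assumed attribute set, not any real tracker's code.

```python
# Toy header-based fingerprinting: several individually common attributes
# combine into a nearly unique identifier. Attribute set is an assumption.
import hashlib

def fingerprint(headers: dict) -> str:
    # Concatenate stable, passively observable request attributes.
    parts = [headers.get(k, "") for k in
             ("User-Agent", "Accept-Language", "Accept-Encoding")]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]

visitor = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "Accept-Language": "en-US,en;q=0.9,pl;q=0.8",
    "Accept-Encoding": "gzip, deflate, br",
}
print(fingerprint(visitor))  # same browser -> same ID across sites
```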

    Applying Machine Learning to Advance Cyber Security: Network Based Intrusion Detection Systems

    Many new devices, such as phones and tablets, as well as traditional computer systems, rely on wireless connections to the Internet and are susceptible to attacks. Two important types of attack are the use of malware and the exploitation of Internet protocol vulnerabilities in devices and network systems. These attacks pose a threat on many levels, and any approach to dealing with them must therefore combine several methods. In this research, we use machine learning to detect and classify malware; to visualize, detect, and classify worms; and to detect deauthentication attacks, a form of denial of service (DoS). This work also includes two prevention mechanisms for DoS attacks: a one-time password (OTP) scheme and a machine learning approach. Furthermore, we focus on an exploit of the widely used IEEE 802.11 protocol for wireless local area networks (WLANs). The work proposed here presents a threefold approach to intrusion detection that remedies the effects of malware and an Internet protocol exploit, employing machine learning as the primary tool. We conclude with a comparison of dimensionality reduction methods against a deep learning classifier, demonstrating the effectiveness of these methods without compromising classification accuracy.
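
    To make the concluding comparison concrete, the sketch below contrasts a classifier trained on PCA-reduced features with a small neural network on raw features, using synthetic data as a stand-in for network-traffic records. The specific models and dimensions are assumptions for illustration, not the study's actual setup.

```python
# Sketch of the kind of comparison described: PCA-reduced features fed to a
# simple classifier versus a small neural network on the raw features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labeled network-traffic records.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca_rf = make_pipeline(StandardScaler(), PCA(n_components=10),
                       RandomForestClassifier(random_state=0))
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                  random_state=0))

for name, clf in [("PCA + random forest", pca_rf), ("MLP", mlp)]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```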

    The Research of Design Based on Social Commerce

    Previous design theories focus only on artifacts; here, we study the factors of social commerce design together with the application environment and human capabilities. By comparing a social commerce design model with an information model, we develop a new social commerce design model and further explore user requirements after shopping, including exploration of brand communities, sharing offline social shopping experiences, and the improvement of users' social skills. Based on the new model, we identify the common features of social commerce design across the individual, conversation, community, commerce, and management levels. Finally, this paper points out open problems for future social commerce design research.

    A survey on web tracking: mechanisms, implications, and defenses

    Privacy seems to be the Achilles' heel of today's web. Most web services make continuous efforts to track their users and to obtain as much personal information as they can from the things they search, the sites they visit, the people they contact, and the products they buy. This information is mostly used for commercial purposes, which go far beyond targeted advertising. Although many users are already aware of the privacy risks involved in the use of internet services, the particular methods and technologies used to track them are much less known. In this survey, we review the existing literature on the methods used by web services to track users online, as well as their purposes, implications, and possible user defenses. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, and other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users. For each of the tracking methods, we present possible defenses; some are specific to a particular tracking approach, while others are more universal and block more than one threat. Finally, we present future trends in user tracking and show that they can potentially pose significant threats to users' privacy.
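
    To illustrate the session-based category of tracking methods, the toy model below shows how a single third-party cookie, sent with embedded-resource requests from unrelated sites, links visits into one cross-site profile. Sites and identifiers are invented for the example; no real tracker's logic is implied.

```python
# Toy model of third-party-cookie tracking: the same tracker cookie is sent
# with embedded-resource requests from every site, linking visits together.
import uuid
from collections import defaultdict

tracker_profiles = defaultdict(list)   # cookie ID -> pages seen

def serve_tracking_pixel(cookie_id, referer):
    if cookie_id is None:              # first contact: mint a fresh cookie
        cookie_id = str(uuid.uuid4())
    tracker_profiles[cookie_id].append(referer)
    return cookie_id                   # would be Set-Cookie in a real response

# One browser visiting three unrelated sites that embed the same tracker:
cid = serve_tracking_pixel(None, "https://news.example/article")
cid = serve_tracking_pixel(cid, "https://shop.example/cart")
cid = serve_tracking_pixel(cid, "https://health.example/symptoms")
print(tracker_profiles[cid])           # full cross-site browsing profile
```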

    Predictive response mail campaign

    Direct marketing is becoming a crucial part of companies' advertising strategy and includes various approaches to presenting products or services to selected customers. A reliable targeted customer database is critical to the success of direct marketing. The main objective of response modelling is to identify the customers most likely to respond to a direct advertisement. There are two challenges commonly faced when dealing with marketing data: imbalanced data, where the number of non-responding customers is significantly larger than that of responding customers; and large training datasets with high dimensionality, due to the significant variety of features that are usually collected. This thesis describes the whole process of developing an efficient response prediction model while presenting and studying several techniques and methods throughout its many steps, from data balancing and feature selection to model development and evaluation. Additionally, an ensemble feature selection technique that combines multiple random forests to yield a more robust result is proposed. The results show that the proposed feature selection method, combined with random under-sampling for class balancing and the newer prediction technique Extreme Gradient Boosting, known as XGBoost, provides the best performance.
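
    A minimal sketch of the pipeline the abstract describes follows, under the assumption that it resembles the standard scikit-learn/imbalanced-learn/XGBoost workflow; the synthetic data, seeds, and the top-20 feature cutoff are illustrative choices, not the thesis's actual parameters.

```python
# Sketch: random under-sampling, feature selection by averaging importances
# over several random forests, then an XGBoost classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from imblearn.under_sampling import RandomUnderSampler
from xgboost import XGBClassifier

# Imbalanced stand-in for a response dataset: ~5% responders.
X, y = make_classification(n_samples=5000, n_features=60, weights=[0.95],
                           random_state=0)
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X, y)

# Ensemble feature selection: average importances across differently seeded
# forests for a more stable ranking than a single forest provides.
importances = np.mean(
    [RandomForestClassifier(random_state=s).fit(X_bal, y_bal)
         .feature_importances_ for s in range(5)],
    axis=0)
top = np.argsort(importances)[-20:]    # keep the 20 strongest features

model = XGBClassifier(eval_metric="logloss", random_state=0)
model.fit(X_bal[:, top], y_bal)
print("train accuracy:", round(model.score(X_bal[:, top], y_bal), 3))
```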

    Reverse Engineering Static Content and Dynamic Behaviour of E-Commerce Websites for Fun and Profit

    Nowadays, electronic commerce websites are one of the main transaction tools between online merchants and consumers or businesses. These e-commerce websites rely heavily on summarizing and analyzing the behavior of customers, making an effort to influence user actions towards the optimization of success metrics such as CTR (Click-through Rate), CPC (Cost per Conversion), Basket and Lifetime Value, and User Engagement. Knowledge extraction from existing e-commerce website datasets, using data mining and machine learning techniques, has been greatly influencing Internet marketing activities. When faced with a new e-commerce website, the machine learning practitioner starts a web mining process by collecting historical and real-time data from the website and analyzing and transforming this data in order to extract information about the website's structure and content and its users' behavior. Only after this process are data scientists able to build relevant models and algorithms to enhance marketing activities. This process is expensive in resources and time, since it always depends on the condition in which the data is presented to the data scientist: higher-quality data (i.e., no incomplete data) makes the data scientist's work easier and faster. On the other hand, in most cases, data scientists must resort to tracking domain-specific events throughout a user's visit to the website in order to discover the users' behavior, and this requires code modifications to the pages themselves, which increases the risk of not capturing all the relevant information by failing to enable tracking mechanisms on certain pages. For example, we may not know a priori that a visit to a Delivery Conditions page is relevant to predicting a user's willingness to buy, and would therefore not enable tracking on those pages. Within this problem context, the proposed solution consists of a methodology capable of extracting and combining information about an e-commerce website through a process of web mining, covering both the structure and the content of the website's pages and relying mostly on identifying dynamic content and semantic information in predefined locations. This is complemented with the capability of extracting, from users' access logs, more accurate models to predict users' future behavior. This allows for the creation of a data model representing an e-commerce website and its archetypal users, which can be useful, for example, in simulation systems.
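
    The first step of the web mining process described above, fetching a page and extracting its link structure and visible text, might look like the sketch below; the URL is a placeholder and the extraction rules are far simpler than a production crawler's.

```python
# Minimal web-mining sketch: fetch a page, extract its outgoing links and
# visible text. Uses requests and BeautifulSoup; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def mine_page(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Resolve relative hrefs so the link set can seed further crawling.
    links = {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}
    text = soup.get_text(" ", strip=True)
    return links, text

links, text = mine_page("https://shop.example/")  # placeholder URL
print(len(links), "outgoing links;", len(text), "characters of content")
```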