
    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction, and extends the inquiry through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting the semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
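    The report itself contains no code, but the microdata-harvesting idea it advocates can be sketched in a few lines. The snippet below is illustrative only, not from D2.6: it assumes the beautifulsoup4 library, and the sample schema.org markup is hypothetical.

```python
# A minimal sketch (not from the report) of harvesting microdata from a blog
# post's HTML, of the kind the BlogForever methodology advocates.
from bs4 import BeautifulSoup

html = """
<article itemscope itemtype="http://schema.org/BlogPosting">
  <h1 itemprop="headline">A post title</h1>
  <time itemprop="datePublished" datetime="2012-06-01">1 June 2012</time>
  <div itemprop="articleBody">Post body text.</div>
</article>
"""

soup = BeautifulSoup(html, "html.parser")
post = {}
for node in soup.find_all(attrs={"itemprop": True}):
    name = node["itemprop"]
    # Prefer machine-readable attributes (e.g. <time datetime=...>) over text.
    post[name] = node.get("datetime") or node.get_text(strip=True)
print(post)  # {'headline': 'A post title', 'datePublished': '2012-06-01', ...}
```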

    Web Data Extraction For Content Aggregation From E-Commerce Websites

    The World Wide Web has become an unlimited source of data, and search engines have made this information available to the everyday Internet user. Still, some information remains that is not easily accessible through existing search engines, so there is a continuing need to build new search engines that present information in new ways, better than has been done before. In order to present data in a way that creates added value, it must first be collected, then processed and analysed. This master's thesis focuses on the data collection phase of that process. We present ZedBot, a modern information extraction system that transforms the semi-structured data found on web pages into structured form with high accuracy. The system satisfies most of the requirements set for a modern data extraction system: it is platform independent, and it has a powerful rule description system, a semi-automatic wrapper generation system, and an easy-to-use interface for annotating data. A specially designed web crawler allows extraction to be performed across an entire website without human interaction. We show that the presented tool is suitable for extracting highly accurate data from a large number of websites, and that the dataset it produces can be used for product information aggregation and for creating new added value.
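    ZedBot's rule language is not reproduced in the abstract, but the kind of selector-based extraction rule such systems generate can be sketched as follows. This is a hand-written stand-in, not ZedBot: it assumes beautifulsoup4, and the selectors and markup are hypothetical.

```python
# Illustrative sketch of a selector-based wrapper turning a semi-structured
# product page into a structured record, as a data extraction system would.
from bs4 import BeautifulSoup

RULES = {  # field name -> CSS selector (the "wrapper" for one site)
    "name":  "h1.product-title",
    "price": "span.price",
    "sku":   "span.sku",
}

def apply_wrapper(html: str, rules: dict) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    record = {}
    for field, selector in rules.items():
        node = soup.select_one(selector)
        record[field] = node.get_text(strip=True) if node else None
    return record

page = '<h1 class="product-title">Kettle</h1><span class="price">19.99</span>'
print(apply_wrapper(page, RULES))
# {'name': 'Kettle', 'price': '19.99', 'sku': None}
```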

    Towards Comparative Web Content Mining using Object Oriented Model

    Web content data are heterogeneous in nature, usually composed of different types of content and data structures, which makes the extraction and mining of web content a challenging branch of data mining. Traditional web content extraction and mining techniques fall into three categories: programming-language-based wrappers, wrapper (data extraction program) induction techniques, and automatic wrapper generation techniques. The first category builds data extraction systems around specialized pattern specification languages; the second is supervised learning of data extraction rules; and the third is a fully automatic extraction process. All of these techniques rely on the presentation structure of web documents, requiring complicated matching and tree alignment algorithms and routine maintenance; they are hard to unify across the vast variety of websites and fail to capture heterogeneous data together. To capture more of the diversity of web documents, a feasible implementation of an automatic data extraction technique based on an object-oriented data model, OOWeb, was proposed by Annoni and Ezeife (2009). This thesis implements, materializes and extends that structured automatic data extraction technique. We developed a system, called WebOMiner, for the extraction and mining of structured web content based on the object-oriented data model. The thesis extends the extraction algorithms proposed by Annoni and Ezeife (2009) and develops an automata-based automatic wrapper generation algorithm for extracting and mining structured web content data. Our algorithm identifies data blocks in a flat array data structure and generates Non-Deterministic Finite Automaton (NFA) patterns for the different types of content data to be extracted. The objective of this thesis is to extract and mine heterogeneous web content while relieving the hard effort of matching, tree alignment and routine maintenance. Experimental results show that our system is highly effective, performing the mining task with 100% precision and a 96.22% recall value.
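    The NFA idea can be illustrated compactly. The sketch below is not the thesis code: once page content is flattened into a sequence of typed tokens, a product data block is recognised by a finite-automaton pattern over the type codes. Python's re engine is NFA-based, so a regex stands in for the generated NFA; the token alphabet is hypothetical.

```python
# Recognising product data blocks in a flat, typed token sequence with an
# NFA-style pattern, in the spirit of WebOMiner's wrapper generation.
import re

# One letter per content type: I=image, T=title, P=price, D=description, X=other
tokens = "XITPDITPITPDX"  # flattened content sequence for a product-list page

# Pattern for one product block: image, title, price, optional description.
PRODUCT = re.compile(r"ITPD?")

blocks = [m.span() for m in PRODUCT.finditer(tokens)]
print(blocks)  # [(1, 5), (5, 8), (8, 12)] -- three product blocks recognised
```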

    Fave: a proposal for the verification of data extractors for HTML pages

    The constant growth of online services, for example price and product comparison and content aggregators, among others, drives the demand for data extraction solutions. In order for information from the Internet to be compared or grouped, the relevant data must first be extracted from web pages in a structured format. The techniques that perform this data extraction are known as wrappers. Each wrapper is developed for a particular HTML page and produces a set of structured information; when that HTML page is modified, the wrapper may stop working or work incorrectly. There are currently several studies on automatically adjusting the data extraction system, a procedure known as wrapper maintenance. This work presents some wrapper maintenance techniques and proposes an improvement to the extractor automation method based on the techniques presented.
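    The verification step behind wrapper maintenance can be sketched simply: after a page changes, the extractor's output is checked against per-field validity rules, and failures flag the wrapper for repair. This is not the Fave code; the rules and records below are hypothetical.

```python
# Detecting a broken wrapper by validating its extracted record.
import re

FIELD_RULES = {
    "price": re.compile(r"^\d+([.,]\d{2})?$"),   # e.g. "19.99"
    "name":  re.compile(r"\S"),                  # non-empty
}

def wrapper_is_healthy(record: dict) -> bool:
    """Return False if any extracted field is missing or malformed."""
    for field, rule in FIELD_RULES.items():
        value = record.get(field)
        if value is None or not rule.search(value):
            return False
    return True

print(wrapper_is_healthy({"name": "Kettle", "price": "19.99"}))        # True
print(wrapper_is_healthy({"name": "Kettle", "price": "Add to cart"}))  # False
# A False result would trigger the (re-)generation of the extraction rules.
```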

    Temporal Information in Data Science: An Integrated Framework and its Applications

    Data science is a well-known buzzword that is in fact composed of two distinct keywords, i.e., data and science. Data itself is of great importance: every analysis task begins from a set of examples. Based on such a consideration, the present work starts with the analysis of a real case scenario, considering the development of a data-warehouse-based decision support system for an Italian contact center company. Then, relying on the information collected in the developed system, a set of machine-learning-based analysis tasks have been developed to answer specific business questions, such as employee work anomaly detection and automatic call classification. Although such initial applications rely on already available algorithms, as we shall see, some clever analysis workflows also had to be developed. Afterwards, continuously driven by real data and real-world applications, we turned to the question of how to handle temporal information within classical decision tree models. Our research led to the development of J48SS, a decision tree induction algorithm based on Quinlan's C4.5 learner, which is capable of dealing with temporal (e.g., sequential and time series) as well as atemporal (such as numerical and categorical) data during the same execution cycle. The decision tree has been applied to some real-world analysis tasks, proving its worth. A key characteristic of J48SS is its interpretability, an aspect that we specifically addressed through the study of an evolutionary decision tree pruning technique. Next, since a lot of work concerning the management of temporal information has already been done in the automated reasoning and formal verification fields, a natural direction in which to proceed was to investigate how such solutions may be combined with machine learning, following two main tracks. First, we show, through the development of an enriched decision tree capable of encoding temporal information by means of interval temporal logic formulas, how a machine learning algorithm can successfully exploit temporal logic to perform data analysis. Then, we focus on the opposite direction, i.e., employing machine learning techniques to generate temporal logic formulas, considering a natural language processing scenario. Finally, as a conclusive development, the architecture of a system is proposed in which formal methods and machine learning techniques are seamlessly combined to perform anomaly detection and predictive maintenance tasks. Such an integration represents an original, thrilling research direction that may open up new ways of dealing with complex, real-world problems.
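    One way a decision tree can test a time-series attribute at a node, as trees in the J48SS family do, is via a shapelet: the minimum distance between the series and a short candidate shape is thresholded like any numeric attribute. The sketch below assumes that formulation; the shapelet, threshold and values are made up for illustration.

```python
# A shapelet-style node test for a time-series-aware decision tree.
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between the shapelet and any window."""
    windows = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return float(np.min(np.linalg.norm(windows - shapelet, axis=1)))

series = np.array([0.0, 0.1, 0.9, 1.1, 0.2, 0.0])
shapelet = np.array([1.0, 1.0])   # candidate local shape
threshold = 0.5                   # learned split threshold

# Node routing: left branch if the series contains the shape, else right.
branch = "left" if shapelet_distance(series, shapelet) <= threshold else "right"
print(branch)  # left (distance is about 0.14)
```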

    Automated retrieval and extraction of training course information from unstructured web pages

    Web Information Extraction (WIE) is the discipline dealing with the discovery, processing and extraction of specific pieces of information from semi-structured or unstructured web pages. The World Wide Web comprises billions of web pages, and there is much need for systems that will locate, extract and integrate the acquired knowledge into organisations' practices. Some commercial, automated web extraction software packages exist; however, their success comes from heavily involving their users in the process of finding the relevant web pages, preparing the system to recognise items of interest on these pages, and manually dealing with the evaluation and storage of the extracted results. This research has explored WIE, specifically with regard to the automation of the extraction and validation of online training information. The work also includes research and development in the area of automated Web Information Retrieval (WIR), more specifically in Web Searching (or Crawling) and Web Classification. Different technologies were considered; after much consideration, Naïve Bayes networks were chosen as the most suitable for the development of the classification system. The extraction part of the system used Genetic Programming (GP) to generate web extraction solutions. Specifically, GP was used to evolve regular expressions, which were then used to extract specific training course information from the web, such as course names, prices, dates and locations. The experimental results indicate that all three aspects of this research perform very well, with the Web Crawler outperforming existing crawling systems, the Web Classifier performing with an accuracy of over 95% and a precision of over 98%, and the Web Extractor achieving an accuracy of over 94% for the extraction of course titles and an accuracy of just under 67% for the extraction of other course attributes such as dates, prices and locations. Furthermore, the overall work is of great significance to the sponsoring company, as it simplifies and improves the existing time-consuming, labour-intensive and error-prone manual techniques, as will be discussed in this thesis. The prototype developed in this research works in the background and requires very little, often no, human assistance.
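    At the heart of evolving regular expressions with GP is a fitness evaluation: each candidate regex is scored against hand-annotated pages, and the score drives selection, crossover and mutation. The sketch below shows only that evaluation step, not the thesis system; the training pairs are hypothetical.

```python
# Scoring candidate regexes against annotated examples, as GP-based
# regex evolution requires.
import re

TRAINING = [  # (page text, expected extraction)
    ("Course: Advanced Python. Price: GBP 450", "Advanced Python"),
    ("Course: Data Mining. Price: GBP 300", "Data Mining"),
]

def fitness(pattern: str) -> float:
    """Fraction of training examples the candidate regex extracts exactly."""
    try:
        regex = re.compile(pattern)
    except re.error:
        return 0.0  # malformed individuals score worst
    hits = 0
    for text, expected in TRAINING:
        match = regex.search(text)
        if match and match.group(1) == expected:
            hits += 1
    return hits / len(TRAINING)

print(fitness(r"Course: ([^.]+)\."))   # 1.0 -- a fit individual survives
print(fitness(r"Price: (\w+)"))        # 0.0 -- selected against
```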

    Anomaly detection in file sharing in enterprise environments

    File sharing is the activity of making archives (documents, videos, photos) available to other users. Enterprises use file sharing to make archives available to their employees or clients. These files can be made available through an internal network, an (external) cloud service, or even Peer-to-Peer (P2P). Most of the time, the files within the file sharing service hold sensitive information that cannot be disclosed. The Equifax data breach exploited a zero-day vulnerability that allowed arbitrary code execution, leading to a huge breach in which the information of over 143 million users was presumed compromised. Ransomware is a type of malware that encrypts computer data (documents, media, ...), making it inaccessible to the user and demanding a ransom for its decryption. This type of malware has been a serious threat to enterprises; WannaCry and NotPetya are examples of ransomware that had a huge impact, with WannaCry alone collecting more than $142,361.51 in ransoms. In this dissertation, we propose a system that can detect file sharing anomalies such as ransomware (WannaCry, NotPetya) and data theft (the Equifax breach), as well as their propagation. The solution consists of monitoring the enterprise network, building a communication profile for each user/machine, an analysis algorithm using machine learning, and a countermeasure mechanism that blocks the affected machine when an anomaly is detected.
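    The profiling idea can be sketched briefly: each user/machine is summarised by per-window features of its file-sharing traffic, a model is fitted on normal behaviour, and new windows are scored against it. The abstract does not name the learning algorithm; scikit-learn's IsolationForest stands in here, and the feature values are made up.

```python
# Per-user traffic profile with an anomaly score for each new time window.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per 5-minute window: [files touched, bytes written, distinct shares]
normal_windows = np.array([
    [12, 3.1e6, 2], [9, 2.4e6, 1], [15, 4.0e6, 2], [11, 2.9e6, 2],
    [10, 2.6e6, 1], [14, 3.8e6, 3], [8, 2.2e6, 1], [13, 3.3e6, 2],
])
profile = IsolationForest(random_state=0).fit(normal_windows)

burst = np.array([[900, 6.5e8, 40]])      # ransomware-like mass rewrite
print(profile.predict(burst))             # [-1] -> anomaly: block the machine
print(profile.predict([[11, 3.0e6, 2]]))  # [1]  -> normal window
```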

    Automated anomaly recognition in real time data streams for oil and gas industry.

    There is a growing demand for computer-assisted real-time anomaly detection, from the identification of suspicious activities in cyber security to the monitoring of engineering data for various applications across the oil and gas, automotive and other engineering industries. To reduce the reliance on field experts' knowledge for the identification of these anomalies, this thesis proposes a deep-learning anomaly-detection framework that can help to create an effective real-time condition-monitoring framework. The aim of this research is to develop a real-time, re-trainable, generic anomaly-detection framework capable of predicting and identifying anomalies with a high level of accuracy, even when a specific anomalous event has no precedent. Machine-based condition monitoring is preferable in many practical situations where fast data analysis is required, and where there are harsh climates or otherwise life-threatening environments; automated condition-monitoring systems are ideal, for example, in deep-sea exploration studies, offshore installations and space exploration. This thesis first reviews studies of anomaly detection using machine learning. It then adopts the best practices from those studies to propose a multi-tiered framework for anomaly detection with heterogeneous input sources, which can deal with unseen anomalies in a real-time, dynamic problem environment. The thesis then applies the developed generic multi-tiered framework to two fields of engineering: data analysis and malicious cyber-attack detection. Finally, the framework is further refined based on the outcomes of those case studies and is used to develop a secure cross-platform API, capable of re-training and data classification on a real-time data feed.
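    The re-trainable, real-time idea can be sketched without the thesis's deep-learning machinery: each point in a stream is scored against a rolling profile of recent normal behaviour, and the profile keeps learning from points judged normal. The window size and threshold below are hypothetical, and a rolling z-score stands in for the framework's learned models.

```python
# A streaming detector with an on-line, re-trainable profile of normality.
from collections import deque

class StreamingDetector:
    def __init__(self, window: int = 50, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)   # rolling profile of normal data
        self.z_threshold = z_threshold

    def is_anomaly(self, x: float) -> bool:
        """Flag x if it deviates strongly from the rolling profile."""
        if len(self.history) < 10:            # warm-up: assume normal
            self.history.append(x)
            return False
        mean = sum(self.history) / len(self.history)
        var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
        anomalous = abs(x - mean) > self.z_threshold * (var ** 0.5 or 1e-9)
        if not anomalous:
            self.history.append(x)            # re-train on normal points only
        return anomalous

detector = StreamingDetector()
stream = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 9.9, 10.0, 55.0]
print([detector.is_anomaly(x) for x in stream])
# [..., False, True] -- the final spike is flagged against the learned profile
```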