
    A Survey on Phishing Website Detection Using Hadoop

    Phishing is an activity carried out by phishers with the aim of stealing internet users' personal data, such as user IDs, passwords, and banking accounts; this data is then used for the phishers' own gain. The average internet user is easily trapped because phishing websites closely resemble the original websites they imitate. Because several attributes must be considered, most internet users find it difficult to distinguish an authentic website from a fraudulent one. There are many ways to detect a phishing website, but existing detection systems are time-consuming and heavily dependent on their databases. In this research, Hadoop MapReduce is used to quickly retrieve the website attributes that play an important role in identifying a phishing website, and then to inform users whether the website is a phishing website or not.
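
    The detection logic described above can be sketched as a map/reduce pair. The following is a minimal plain-Python sketch, not actual Hadoop code; the attribute list and the two-hit threshold are illustrative assumptions rather than the paper's exact feature set:

        import re

        def mapper(url):
            # Map phase: emit (attribute, suspicious?) pairs for one URL.
            yield ("uses_ip_address", bool(re.match(r"https?://\d+\.\d+\.\d+\.\d+", url)))
            yield ("has_at_symbol", "@" in url)
            yield ("long_url", len(url) > 75)
            yield ("many_subdomains", url.split("//")[-1].split("/")[0].count(".") > 3)

        def reducer(pairs):
            # Reduce phase: count suspicious attributes and decide.
            hits = sum(1 for _, suspicious in pairs if suspicious)
            return "phishing" if hits >= 2 else "legitimate"

        print(reducer(list(mapper("http://192.0.2.7/login@secure-bank.example/verify"))))  # -> phishing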

    Mining Authoritativeness of Collaborative Innovation Partners

    The global marketplace over the past decade has called for innovative products and cost reduction. This perplexing duality has led companies to seek external collaborations to effectively deliver innovative products to market. External collaboration often leads to innovation at reduced research and development expenditure. This is especially true of companies that find the most authoritative entity (usually a company or even a person) to work with. Authoritativeness accelerates development and the research-to-product transformation because of the authoritative entity's inherent knowledge. This paper offers a novel approach to automatically determine the authoritativeness of entities for collaboration: it automatically discovers an authoritative entity in a domain of interest. The methodology utilizes web mining, text mining, and the generation of an authoritativeness metric. The concepts discussed in the paper are illustrated with a case study of mining the authoritativeness of collaboration partners for microelectromechanical systems (MEMS).
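
    As a rough illustration of how such a metric could be assembled from web- and text-mining output (the paper's actual metric is not reproduced here, and the candidate entities and scoring rule below are hypothetical), consider:

        from collections import defaultdict

        def authoritativeness(corpus, entities):
            # corpus: (source_url, text) pairs already filtered to the domain.
            mentions, sources = defaultdict(int), defaultdict(set)
            for url, text in corpus:
                low = text.lower()
                for e in entities:
                    if e.lower() in low:
                        mentions[e] += low.count(e.lower())
                        sources[e].add(url)
            # Score: mention frequency scaled by source diversity.
            return {e: mentions[e] * len(sources[e]) for e in entities}

        corpus = [
            ("http://a.example", "Bosch presented a MEMS gyroscope; Bosch leads production."),
            ("http://b.example", "MEMSCAP and Bosch announced a MEMS foundry partnership."),
        ]
        print(authoritativeness(corpus, ["Bosch", "MEMSCAP"]))  # Bosch scores highest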

    Web modelling for web warehouse design

    Doctoral thesis in Informatics (Computer Engineering), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2007.
    Users require applications to help them obtain knowledge from the web. However, the specific characteristics of web data make it difficult to create these applications. One possible solution is to extract information from the web and transform and load it into a Web Warehouse, which provides uniform access methods for automatic processing of the data. Web Warehousing is conceptually similar to the Data Warehousing approaches used to integrate relational information from databases. However, the structure of the web is very dynamic, cannot be controlled by the Warehouse designers, and web models frequently do not reflect its current state. Thus, Web Warehouses must often be redesigned at a late stage of development; these changes have high costs and may jeopardize entire projects. This thesis addresses the problem of modelling the web and its influence on the design of Web Warehouses. A model of a web portion was derived and, based on it, a Web Warehouse prototype was designed. The prototype was validated in several real-usage scenarios. The obtained results show that web modelling is a fundamental step of the web data integration process.
    Funding: Fundação para Computação Científica Nacional (FCCN); LaSIGE (Laboratório de Sistemas Informáticos de Grande Escala); Fundação para a Ciência e Tecnologia (FCT), grant SFRH/BD/11062/2002.
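
    As a toy illustration of the extract-transform-load step such a Web Warehouse performs (the thesis's own prototype and web model are not reproduced here; the one-table schema below is an assumption), a page title can be extracted and loaded into a relational store:

        import sqlite3, urllib.request
        from html.parser import HTMLParser

        class TitleParser(HTMLParser):
            # Extracts the <title> element as a stand-in for richer extraction.
            def __init__(self):
                super().__init__()
                self.in_title, self.title = False, ""
            def handle_starttag(self, tag, attrs):
                if tag == "title": self.in_title = True
            def handle_endtag(self, tag):
                if tag == "title": self.in_title = False
            def handle_data(self, data):
                if self.in_title: self.title += data

        def load_into_warehouse(urls, db_path="webwarehouse.db"):
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, title TEXT)")
            for url in urls:  # extract
                html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
                parser = TitleParser(); parser.feed(html)  # transform
                con.execute("INSERT OR REPLACE INTO pages VALUES (?, ?)",
                            (url, parser.title.strip()))   # load
            con.commit(); con.close()

        load_into_warehouse(["https://example.com"])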

    Parallel programming paradigms and frameworks in big data era

    With Cloud Computing emerging as a promising new approach for ad-hoc parallel data processing, major companies have started to integrate frameworks for parallel data processing into their product portfolios, making it easy for customers to access these services and to deploy their programs. We have entered the era of Big Data. The explosion and profusion of available data in a wide range of application domains raise new challenges and opportunities in a plethora of disciplines, ranging from science and engineering to biology and business. One major challenge is how to take advantage of the unprecedented scale of data, typically of heterogeneous nature, in order to acquire further insights and knowledge for improving the quality of the offered services. To exploit this new resource, we need to scale up and scale out both our infrastructures and our standard techniques. Our society is already data-rich, but the question remains whether or not we have the conceptual tools to handle it. In this paper we discuss and analyze opportunities and challenges for efficient parallel data processing. Big Data is the next frontier for innovation, competition, and productivity, and many solutions continue to appear, partly supported by the considerable enthusiasm around the MapReduce paradigm for large-scale data analysis. We review various parallel and distributed programming paradigms, analyzing how they fit into the Big Data era, and present modern emerging paradigms and frameworks. To better support practitioners interested in this domain, we end with an analysis of ongoing research challenges towards the true fourth generation of data-intensive science. Peer reviewed. Postprint (author's final draft).
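
    The data-parallel style these frameworks share can be shown in a few lines. The following sketch imitates the MapReduce pattern with Python's standard multiprocessing module; it stands in for a real framework purely for illustration:

        from multiprocessing import Pool
        from collections import Counter

        def map_partition(lines):
            # Map phase: count words within one partition.
            counts = Counter()
            for line in lines:
                counts.update(line.split())
            return counts

        if __name__ == "__main__":
            data = ["big data needs parallel processing",
                    "parallel frameworks process big data"]
            partitions = [data[i::2] for i in range(2)]  # naive 2-way split
            with Pool(2) as pool:
                partials = pool.map(map_partition, partitions)
            total = sum(partials, Counter())             # reduce phase: merge counts
            print(total.most_common(3))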

    Exploring the Relevance of Europeana Digital Resources: Preliminary Ideas on Europeana Metadata Quality

    Europeana is a European project that aims to become the modern “Alexandria Digital Library”: it targets providing access to thousands of European cultural heritage resources contributed by more than fifteen hundred institutions such as museums, libraries, archives, and cultural centers. This article explores Europeana digital resources as open learning repositories, with the goal of re-using digital resources to improve the learning process in the domain of arts and cultural heritage. To this end, we present metadata quality results based on a case study, together with recommendations and suggestions for this type of initiative in our educational context, in order to improve access to digital resources according to specific knowledge areas.
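
    One common metadata-quality dimension such a study measures is completeness. The sketch below checks it for a single record; the required-field list is an assumption based on typical Europeana/Dublin Core elements, not the article's actual criteria:

        REQUIRED_FIELDS = ["title", "creator", "subject", "description", "rights"]

        def completeness(record):
            # Fraction of required fields that are present and non-empty.
            filled = sum(1 for f in REQUIRED_FIELDS if record.get(f))
            return filled / len(REQUIRED_FIELDS)

        record = {
            "title": "Mona Lisa (reproduction)",
            "creator": "Leonardo da Vinci",
            "subject": "",  # present but empty, so it counts as missing
            "description": "Digitized painting from a partner museum.",
        }
        print(f"completeness = {completeness(record):.0%}")  # -> 60%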

    Professional Search in Pharmaceutical Research

    In the mid-90s, visiting libraries as a means of retrieving the latest literature was still a common necessity among professionals. Nowadays, professionals simply access information by ‘googling’. Indeed, the name of the Web search engine market leader, Google, has become a synonym for searching and retrieving information. Despite the increased popularity of search as a method for retrieving relevant information, search engines at the workplace still do not deliver satisfactory results to professionals. Search engines, for instance, ignore that the relevance of answers (the satisfaction of a searcher's needs) depends not only on the query (the information request) and the document corpus, but also on the working context (the user's personal needs, education, etc.). In effect, an answer that is appropriate for one user might not be appropriate for another, even though the query and the document corpus are the same for both. Personalization services that address this context are therefore becoming more and more popular and are an active field of research. This is only one of several challenges encountered in ‘professional search’: How can the working context of the searcher be incorporated into the ranking process? How can unstructured free-text documents be enriched with semantic information so that the information need can be expressed precisely at query time? How, and to what extent, can a company's knowledge be exploited for search purposes? How should data from distributed sources be accessed through a single entry point? This thesis is devoted to ‘professional search’, i.e. search at the workplace, especially in industrial research and development. We contribute by compiling and developing several approaches for facing the challenges mentioned above. The approaches are implemented in the prototype YASA (Your Adaptive Search Agent), which provides meta-search, adaptive ranking of search results, and guided navigation, and which uses domain knowledge to drive the search processes. YASA is deployed in the pharmaceutical research department of Roche, a major pharmaceutical company, in Penzberg, where the applied methods were empirically evaluated. Being confronted with mostly unstructured free-text documents and barely any explicit metadata, we faced a serious challenge: incorporating semantics (i.e. formal knowledge representation) into the search process can only be as good as the underlying data. Nonetheless, we demonstrate that this issue can be largely compensated for by incorporating automatic metadata extraction techniques. The metadata we were able to extract automatically was not perfectly accurate, nor did the ontology we applied contain particularly rich semantics. Nonetheless, our results show that even the little semantics incorporated into the search process suffices to achieve a significant improvement in search and retrieval. We thus contribute to the research field of context-based search by incorporating the working context into the search process, an area which so far has not been well studied.
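
    To make the working-context idea concrete, here is a deliberately simplified re-ranking sketch; YASA's actual ranking model is not described in this abstract, and the boost factors and metadata fields below are invented for illustration:

        def rerank(results, context):
            # results: (doc_id, base_score, metadata) triples from meta-search.
            def score(item):
                doc_id, base, meta = item
                boost = 1.0
                if meta.get("department") == context.get("department"):
                    boost *= 1.5  # assumed context boost, purely illustrative
                if meta.get("doc_type") in context.get("preferred_types", ()):
                    boost *= 1.2
                return base * boost
            return sorted(results, key=score, reverse=True)

        results = [
            ("doc1", 0.80, {"department": "chemistry", "doc_type": "report"}),
            ("doc2", 0.70, {"department": "pharma-research", "doc_type": "protocol"}),
        ]
        context = {"department": "pharma-research", "preferred_types": ("protocol",)}
        print(rerank(results, context))  # doc2 (0.70 * 1.5 * 1.2 = 1.26) now ranks first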

    QueueLinker: A Parallel and Distributed Processing Framework for Data Streams

    Waseda University degree certificate number: Shin 6373. Waseda University.