    Personalized large scale classification of public tenders on Hadoop

    This project was completed as part of an innovation partnership between Fujitsu Canada and Université Laval. The needs and objectives of the project were centered on a business problem defined jointly with Fujitsu. Our project aimed to classify a corpus of electronic public tenders using state-of-the-art Hadoop big data technology. The objective was to identify, with very high recall, the public tenders relevant to the IT services business of Fujitsu Canada. A small-scale prototype based on the BNS (Bi-Normal Separation) algorithm was empirically shown to classify the public tender corpus with high recall (93%). The prototype was then re-implemented on a full-scale Hadoop cluster, using Apache Pig for the data preparation pipeline and Apache Mahout for classification. Our experiments show that the large-scale system not only maintains high recall (91%) on the classification task, but also readily takes advantage of the massive scalability gains made possible by Hadoop's distributed architecture.
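
    The abstract gives no implementation details, so the following is only a minimal sketch of the Bi-Normal Separation feature-scoring idea the project relies on; the function name and the clipping constant are illustrative assumptions, not code from the thesis.

        # Minimal sketch of BNS feature scoring; names and the clipping value
        # are assumptions for illustration, not taken from the thesis.
        from scipy.stats import norm

        def bns_score(tp, fp, pos, neg, eps=0.0005):
            """BNS = |F^-1(tpr) - F^-1(fpr)| with rates clipped away from 0 and 1."""
            tpr = min(max(tp / pos, eps), 1 - eps)
            fpr = min(max(fp / neg, eps), 1 - eps)
            return abs(norm.ppf(tpr) - norm.ppf(fpr))

        # Example: a term seen in 90 of 100 relevant tenders and 50 of 900 others
        print(bns_score(tp=90, fp=50, pos=100, neg=900))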

    Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems

    This thesis presents an approach for the design time analysis of energy efficiency for static and self-adaptive software systems. The quality characteristics of a software system, such as performance and operating costs, strongly depend upon its architecture. Software architecture is a high-level view on software artifacts that reflects essential quality characteristics of a system under design. Design decisions made on an architectural level have a decisive impact on the quality of a system. Revising architectural design decisions late into development requires significant effort. Architectural analyses allow software architects to reason about the impact of design decisions on quality, based on an architectural description of the system. An essential quality goal is the reduction of cost while maintaining other quality goals. Power consumption accounts for a significant part of the Total Cost of Ownership (TCO) of data centers. In 2010, data centers contributed 1.3% of the world-wide power consumption. However, reasoning on the energy efficiency of software systems is excluded from the systematic analysis of software architectures at design time. Energy efficiency can only be evaluated once the system is deployed and operational. One approach to reduce power consumption or cost is the introduction of self-adaptivity to a software system. Self-adaptive software systems execute adaptations to provision costly resources dependent on user load. The execution of reconfigurations can increase energy efficiency and reduce cost. If performed improperly, however, the additional resources required to execute a reconfiguration may exceed their positive effect. Existing architecture-level energy analysis approaches offer limited accuracy or only consider a limited set of system features, e.g., the used communication style. Predictive approaches from the embedded systems and Cloud Computing domain operate on an abstraction that is not suited for architectural analysis. The execution of adaptations can consume additional resources. The additional consumption can reduce performance and energy efficiency. Design time quality analyses for self-adaptive software systems ignore this transient effect of adaptations. This thesis makes the following contributions to enable the systematic consideration of energy efficiency in the architectural design of self-adaptive software systems: First, it presents a modeling language that captures power consumption characteristics on an architectural abstraction level. Second, it introduces an energy efficiency analysis approach that uses instances of our power consumption modeling language in combination with existing performance analyses for architecture models. The developed analysis supports reasoning on energy efficiency for static and self-adaptive software systems. Third, to ease the specification of power consumption characteristics, we provide a method for extracting power models for server environments. The method encompasses an automated profiling of servers based on a set of restrictions defined by the user. A model training framework extracts a set of power models specified in our modeling language from the resulting profile. The method ranks the trained power models based on their predicted accuracy. Lastly, this thesis introduces a systematic modeling and analysis approach for considering transient effects in design time quality analyses. The approach explicitly models inter-dependencies between reconfigurations, performance and power consumption. 
We provide a formalization of the execution semantics of the model. Additionally, we discuss how our approach can be integrated with existing quality analyses of self-adaptive software systems. We validated the accuracy, applicability, and appropriateness of our approach in a variety of case studies. The first two case studies investigated the accuracy and appropriateness of our modeling and analysis approach. The first study evaluated the impact of design decisions on the energy efficiency of a media hosting application. The energy consumption predictions achieved an absolute error lower than 5.5% across different user loads. Our approach predicted the relative impact of the design decision on energy efficiency with an error of less than 18.94%. The second case study used two variants of the Spring-based community case study system PetClinic. The case study complements the accuracy and appropriateness evaluation of our modeling and analysis approach. We were able to predict the energy consumption of both variants with an absolute error of no more than 2.38%. In contrast to the first case study, we derived all models automatically, using our power model extraction framework, as well as an extraction framework for performance models. The third case study applied our model-based prediction to evaluate the effect of different self-adaptation algorithms on energy efficiency. It involved scientific workloads executed in a virtualized environment. Our approach predicted the energy consumption with an error below 7.1%, even though we used coarse-grained measurement data of low accuracy to train the input models. The fourth case study evaluated the appropriateness and accuracy of the automated model extraction method using a set of Big Data and enterprise workloads. Our method produced power models with prediction errors below 5.9%. A secondary study evaluated the accuracy of extracted power models for different Virtual Machine (VM) migration scenarios. The results of the fifth case study showed that our approach for modeling transient effects improved the prediction accuracy for a horizontally scaling application. Leveraging the improved accuracy, we were able to identify design deficiencies of the application that otherwise would have remained unnoticed.
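
    As a rough illustration of the power model extraction step described above, the sketch below fits a few candidate regression models of power draw over CPU utilization to profiled samples and ranks them by prediction error; the candidate model family, the function names, and the sample data are assumptions, not the thesis's modeling language or training framework.

        # Illustrative sketch: fit candidate power models P(u) to profiling
        # samples and rank them by mean absolute error (lowest first).
        import numpy as np

        def fit_and_rank(utilization, power_watts, degrees=(1, 2, 3)):
            """Fit polynomial power models and rank them by prediction error."""
            ranked = []
            for d in degrees:
                coeffs = np.polyfit(utilization, power_watts, deg=d)
                predicted = np.polyval(coeffs, utilization)
                mae = float(np.mean(np.abs(predicted - power_watts)))
                ranked.append((mae, d, coeffs))
            return sorted(ranked)

        # Synthetic profiling samples (utilization in [0, 1], power in watts)
        u = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
        p = np.array([95.0, 130.0, 160.0, 185.0, 205.0, 220.0])
        best_error, best_degree, best_model = fit_and_rank(u, p)[0]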

    Search engine bias: the structuration of traffic on the World-Wide Web

    Search engines are essential components of the World Wide Web; both commercially and in terms of everyday usage, their importance is hard to overstate. This thesis examines the question of why there is bias in search engine results – bias that invites users to click on links to large websites, commercial websites, websites based in certain countries, and websites written in certain languages. In this thesis, the historical development of the search engine industry is traced. Search engines first emerged as prototypical technological startups emanating from Silicon Valley, followed by the acquisition of search engine companies by major US media corporations and their development into portals. The subsequent development of pay-per-click advertising is central to the current industry structure, an oligarchy of virtually integrated companies managing networks of syndicated advertising and traffic distribution. The study also shows a global landscape in which search production is concentrated in and caters for large global advertising markets, leaving the rest of the world with patchy and uneven search results coverage. The analysis of interviews with senior search engine engineers indicates that issues of quality are addressed in terms of customer service and relevance in their discourse, while the analysis of documents, interviews with search marketers, and participant observation within a search engine marketing firm showed that producers and marketers had complex relationships that combine aspects of collaboration, competition, and indifference. The results of the study offer a basis for the synthesis of insights from the political economy of media and communication and the social studies of technology tradition, emphasising the importance of culture in constructing and maintaining both local structures and wider systems. In the case of search engines, the evidence indicates that the culture of the technological entrepreneur is very effective in creating a new megabusiness, but less successful in encouraging a debate on issues of the public good or public responsibility as they relate to the search engine industry.

    Quantitative analysis of networked environments to improve performance of information systems

    In this thesis we encounter networks in three contexts: i) as the citation networks between documents in the citation databases CiteSeer and DBLP, ii) as the link structure of e-government websites navigated by users, and iii) as the social network of users of the photo-sharing site Flickr and the social networking site Yahoo!360. We study the properties of networks present in real datasets, the effects of their structure, and how this structure can be exploited. We analyze the citation networks between computer science publications and compare them to those described for the physics community. We also demonstrate the bias of citation databases collected autonomously and present mathematical models of this bias. We then analyze the link structure of three websites extracted by exhaustive crawls. We perform a lab-based user study with 134 participants on these websites. We discuss the structure of the link networks and the performance of subjects in locating information on these websites. We finally exploit knowledge of users' social networks to provide higher-quality recommendations than current collaborative filtering techniques and demonstrate the performance benefit on two real datasets.
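
    The abstract does not describe the recommendation algorithm itself, so the following is only a hedged sketch of the general idea of blending collaborative-filtering evidence with signals from a user's social contacts; the weighting scheme and all names are illustrative assumptions, not the thesis's method.

        # Illustrative sketch: score unseen items by combining taste overlap
        # with other users and membership in the target user's social circle.
        from collections import defaultdict

        def recommend(user, ratings, friends, alpha=0.5, top_n=10):
            """ratings: {user: {item: rating}}, friends: {user: set(users)}."""
            scores = defaultdict(float)
            seen = set(ratings.get(user, {}))
            for other, their_ratings in ratings.items():
                if other == user:
                    continue
                overlap = len(seen & set(their_ratings))
                is_friend = other in friends.get(user, set())
                weight = alpha * overlap + (1 - alpha) * is_friend
                for item, rating in their_ratings.items():
                    if item not in seen:
                        scores[item] += weight * rating
            return sorted(scores, key=scores.get, reverse=True)[:top_n]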

    Exploring determinants of attraction and helpfulness of online product review: a consumer behaviour perspective

    To assist in filtering and sorting massive volumes of review messages, this paper examines the determinants of review attraction and helpfulness. Our analysis divides consumers’ reading process into a “notice stage” and a “comprehend stage” and considers the impact of “explicit information” and “implicit information” on review attraction and review helpfulness. 633 online product reviews were collected from Amazon China, and a mixed-method approach was employed to test the conceptual model proposed for examining the influencing factors of review attraction and helpfulness. The empirical results show that reviews with negative extremity, more words, and higher reviewer rank gain attraction more easily, and that reviews with negative extremity, higher reviewer rank, mixed subjective properties, and mixed sentiment tend to be more helpful. The research findings provide important insights that will help online businesses encourage consumers to write good-quality reviews and take more active steps to maximise the value of online reviews.
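
    As a purely illustrative sketch of how determinants such as rating extremity, review length, and reviewer rank can be related to helpfulness, the snippet below fits a simple linear regression on toy data; the features, data, and model choice are assumptions and not the authors' mixed-method analysis.

        # Toy example: relate review attributes to helpfulness votes.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Columns: negative_extremity (0/1), word_count, reviewer_rank
        X = np.array([[1, 250, 8], [0, 40, 2], [1, 120, 5], [0, 300, 9]])
        y = np.array([34, 2, 15, 20])  # helpfulness votes (synthetic)

        model = LinearRegression().fit(X, y)
        print(dict(zip(["negative_extremity", "word_count", "reviewer_rank"],
                       model.coef_)))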

    Towards generic domain-specific information retrieval

    Ph.D. (Doctor of Philosophy)

    Adaptation-Aware Architecture Modeling and Analysis of Energy Efficiency for Software Systems

    This work presents an approach for the architecture-level analysis of energy efficiency for static and self-adaptive software systems. It introduces a modeling language that captures power consumption characteristics on an architectural level. The outlined analysis predicts the energy efficiency of systems described with this language. Lastly, this work introduces an approach for considering transient effects in design-time architecture analyses.

    A Language Model based Job Recommender

    Matching candidates to job openings is a hard real-world problem of economic interest that has thus far defied researchers' attempts to tackle it. Collaborative filtering methods, which have proven highly effective in other domains, have a difficult time finding success when applied to Human Resources. Aside from the well-known cold-start issue, there are other problems specific to the recruitment world that explain the poor results attained. In particular, fresh job openings arrive all the time and have relatively short expiration periods. In addition, there is a large volume of passive users who are not actively looking for a job, but who would consider a change if a suitable offer came their way. The two constraints combined suggest that content-based models may be advantageous. Previous attempts to attack the problem have tried to infer relevance from a variety of sources: indirect signals captured from web server and search engine logs, as well as direct feedback elicited from users or recruiters, have all been used to construct models. In contrast, this thesis departs from previous methods and tries to exploit resume databases as a primary source of relevance information, a rich resource that in my view remains greatly underutilized. Relevance models are adapted for the task at hand, and a formulation is derived to model job transitions as a Markov process, with the justification based on David Ricardo's principle of comparative advantage. Empirical results are compiled following the Cranfield benchmarking methodology and compared against several standard competing algorithms.
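
    The following is a hedged sketch of one building block behind language-model based retrieval, query-likelihood scoring with Jelinek-Mercer smoothing, applied here to scoring a job posting against resume terms; it is not the thesis's relevance-model or Markov-transition formulation, and all names are illustrative.

        # Score a job posting by the smoothed likelihood of the resume terms
        # under the posting's unigram language model.
        import math
        from collections import Counter

        def lm_score(resume_terms, job_text, collection_counts, collection_len, lam=0.5):
            """log P(resume | job model), mixed with a background collection model."""
            job_counts = Counter(job_text.split())
            job_len = sum(job_counts.values())
            score = 0.0
            for term in resume_terms:
                p_doc = job_counts[term] / job_len if job_len else 0.0
                p_coll = collection_counts.get(term, 0) / collection_len
                p = lam * p_doc + (1 - lam) * p_coll
                score += math.log(p) if p > 0 else math.log(1e-12)
            return score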

    Analysis, Modeling, and Algorithms for Scalable Web Crawling

    This dissertation presents a modeling framework for the intermediate data generated by external-memory sorting algorithms (e.g., merge sort, bucket sort, hash sort, replacement selection) that are well known, yet lack accurate models of the data volume they produce. The motivation comes from the IRLbot crawl experience in June 2007, where a collection of scalable and high-performance external sorting methods was used to handle such problems as URL uniqueness checking, real-time frontier ranking, budget allocation, and spam avoidance, all of them monumental tasks, especially when limited to the resources of a single machine. We discuss this crawl experience in detail, use novel algorithms to collect data from the crawl image, and then advance to a broader problem – sorting arbitrarily large-scale data using limited resources while accurately capturing the required cost (e.g., time and disk usage). To solve these problems, we present an accurate model of uniqueness probability, the probability of encountering previously unseen data, and use it to analyze the amount of intermediate data generated by the above-mentioned sorting methods. We also demonstrate how the intermediate data volume and runtime vary based on the input properties (e.g., frequency distribution), hardware configuration (e.g., main memory size, CPU and disk speed), and the choice of sorting method, and that our proposed models accurately capture such variation. Furthermore, we propose a novel hash-based method for replacement selection sort, together with its model in the case of duplicate data, where the existing literature is limited to random or mostly unique data. The classic replacement selection method can increase the length of sorted runs and reduce their number, both of which directly benefit the merge step of external sorting; but because its priority-queue-assisted sort operation is inherently slow, the application of replacement selection has been limited. Our hash-based design solves this problem by making the sort phase significantly faster than existing methods, making replacement selection a preferred choice. The presented models also enable exact analysis of Least-Recently-Used (LRU) and Random Replacement caches (i.e., their hit rates), which are used as part of the algorithms presented here. These cache models are more accurate than those in the existing literature, since existing models mostly assume an infinite stream of data, while our models also work accurately on finite streams (e.g., sampled web graphs, click streams). In addition, we present accurate models for various crawl characteristics of random graphs, which can forecast a number of aspects of the crawl experience based on graph properties (e.g., degree distribution). All these models are presented under a unified umbrella to analyze a set of large-scale information processing algorithms that are streamlined for high performance and scalability.
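
    For context, the sketch below shows classic replacement selection, the priority-queue baseline that the dissertation's hash-based design is meant to speed up; it is illustrative code, not the dissertation's implementation.

        # Classic replacement selection: a min-heap of (run_id, key) pairs emits
        # sorted runs that are typically about twice the memory size on random
        # input.
        import heapq

        def replacement_selection(items, memory_size):
            """Yield (run_id, key) pairs; keys within each run come out sorted."""
            it = iter(items)
            heap = []
            for x in it:
                heap.append((0, x))
                if len(heap) == memory_size:
                    break
            heapq.heapify(heap)
            while heap:
                run, key = heapq.heappop(heap)
                yield run, key
                try:
                    nxt = next(it)
                except StopIteration:
                    continue
                # A smaller incoming key must wait for the next run to stay sorted.
                heapq.heappush(heap, (run, nxt) if nxt >= key else (run + 1, nxt))

        # Example: group emitted keys by run id
        runs = {}
        for run_id, key in replacement_selection([5, 1, 9, 3, 7, 2, 8, 6, 4], 3):
            runs.setdefault(run_id, []).append(key)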

    Music similarity analysis using the big data framework Spark

    A parameterizable recommender system based on the Big Data processing framework Spark is introduced, which takes multiple tonal properties of music into account and is capable of recommending music based on a user's personal preferences. The implemented system is fully scalable: more songs can be added to the dataset, the cluster size can be increased, and different kinds of audio features and more state-of-the-art similarity measurements can be added. This thesis also deals with the extraction of the required audio features in parallel on a computer cluster. The extracted features are then processed by the Spark-based recommender system, and song recommendations for a dataset of approximately 114,000 songs are retrieved in less than twelve seconds on a 16-node Spark cluster, combining eight different audio feature types and similarity measurements.
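
    The sketch below illustrates only the general pattern of scoring songs against a seed song's feature vector in parallel with Spark and keeping the closest matches; the feature vectors, dataset, and similarity choice are assumptions, whereas the thesis combines eight audio feature types with its own similarity measures.

        # Rank songs by cosine similarity to a seed song's feature vector
        # using a Spark RDD; the data here is a tiny synthetic stand-in.
        import numpy as np
        from pyspark.sql import SparkSession

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        spark = SparkSession.builder.appName("music-similarity-sketch").getOrCreate()
        sc = spark.sparkContext

        songs = sc.parallelize([("song_%d" % i, np.random.rand(20)) for i in range(1000)])
        seed = np.random.rand(20)

        top10 = (songs.map(lambda kv: (kv[0], cosine(kv[1], seed)))
                      .takeOrdered(10, key=lambda kv: -kv[1]))
        spark.stop()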