8,764 research outputs found

    The Archigram Archive

    The Archigram archival project made the works of the seminal experimental architecture group Archigram available free online for an academic and general audience. It was a major archival undertaking and a new kind of digital academic archive, displaying material held in different places around the world and under various ownership. It was aimed at a wide online design community, which discovers it through Google or social media, as well as at a traditional academic audience, and has been widely acclaimed in both fields. The project had three distinct but interlinked aims: first, to assess, catalogue and present the vast range of Archigram's prolific work, of which only a small portion was previously available; second, to provide reflective academic material on Archigram and on the wider picture of their work presented; third, to develop a new type of non-ownership online archive, suitable both for academic research at the highest level and for casual public browsing. The project hybridised several existing methodologies, combining practical archival and editorial methods for the recovery, presentation and contextualisation of Archigram's work with digital web design and the provision of reflective academic and scholarly material. It was designed by the EXP Research Group in the Department of Architecture, in collaboration with Archigram and their heirs and with the Centre for Parallel Computing, School of Electronics and Computer Science, also at the University of Westminster. It was rated 'outstanding' in the AHRC's own final report and was shortlisted for the RIBA research awards in 2010. The site received 40,000 users and more than 250,000 page views in its first two weeks live, taking it into Twitter's top 1,000 sites, with a steady flow of visitors thereafter. Further statistics are included in the accompanying portfolio. This output will also be returned by Murray Fraser for UCL.

    A novel approach for analysis of attack graph


    The contribution of data mining to information science

    The information explosion poses a serious challenge for today's information institutions. Data mining, the search for valuable information in large volumes of data, is one way to meet this challenge, and over the past several years it has made a significant contribution to the field of information science. This paper examines the impact of data mining by reviewing existing applications, including personalized environments, electronic commerce, and search engines, and discusses how data mining can enhance the functions of each of these three types of application. The reader should gain an overview of the state-of-the-art research associated with these applications. Furthermore, we identify the limitations of current work and suggest several directions for future research.
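    As a concrete illustration of the kind of technique such a review covers, the sketch below mines simple association rules from purchase baskets, a classic data-mining method behind e-commerce recommendations. The baskets, items and thresholds are hypothetical examples, not data from the paper.

```python
# Illustrative association-rule mining over toy purchase baskets
# (hypothetical data; "customers who buy a also buy b" rules).
from collections import Counter
from itertools import combinations

baskets = [
    {"laptop", "mouse", "keyboard"},
    {"laptop", "mouse"},
    {"phone", "charger"},
    {"laptop", "keyboard"},
    {"phone", "charger", "case"},
]
min_support = 0.4     # pair must appear in >= 40% of baskets
min_confidence = 0.6  # P(b in basket | a in basket)

item_counts = Counter()
pair_counts = Counter()
for basket in baskets:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

n = len(baskets)
for (a, b), count in pair_counts.items():
    support = count / n
    confidence = count / item_counts[a]
    if support >= min_support and confidence >= min_confidence:
        # Rule "a -> b": customers who buy a tend to also buy b.
        print(f"{a} -> {b} (support {support:.2f}, confidence {confidence:.2f})")
```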

    Adding dimensions to the analysis of the quality of health information of websites returned by Google. Cluster analysis identifies patterns of websites according to their classification and the type of intervention described.

    Background and aims: Most of the instruments used to assess the quality of health information on the Web (e.g. the JAMA criteria) analyze only one dimension of information quality: trustworthiness. We compare these characteristics with the type of treatment each website describes, whether evidence-based or not, and correlate this with the established criteria.

    Methods: We searched Google for “migraine cure” and analyzed the first 200 websites for: 1) JAMA criteria (authorship, attribution, disclosure, currency); 2) class of website (commercial, health portal, professional, patient group, non-profit); and 3) type of intervention described (approved drugs, alternative medicine, food, procedures, lifestyle, drugs still at the research stage). We used hierarchical cluster analysis to assess associations between classes of websites and the types of intervention described, and performed a subgroup analysis of the first 10 websites returned.

    Results: Google returned mostly health portals (44%), followed by commercial websites (31%) and journalism websites (11%). The intervention mentioned most often was alternative medicine (55%), followed by procedures (49%), lifestyle (42%), food (41%) and approved drugs (35%). Cluster analysis indicated that health portals are more likely to describe more than one type of treatment, while commercial websites most often describe only one. The average JAMA score of commercial websites was significantly lower than that of health portals or journalism websites, mainly because of missing information on the authors of the text and the date the information was written. Among the first 10 websites returned by Google, commercial websites were under-represented and approved drugs over-represented.

    Conclusions: This approach allows appraisal of the quality of health-related information on the Internet with a focus on the types of therapy and prevention methods shown to the patient.
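    For readers unfamiliar with the method, the sketch below shows a hierarchical cluster analysis of the general kind the study describes: websites encoded as binary vectors over the intervention types they mention, then clustered with Jaccard distance. The toy matrix and row labels are hypothetical and do not reproduce the paper's data.

```python
# Hierarchical clustering of websites by the interventions they
# describe (toy binary data, not the paper's dataset).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

interventions = ["approved_drugs", "alt_medicine", "food", "procedures", "lifestyle"]
# Rows: websites; columns: 1 if the site describes that intervention.
X = np.array([
    [1, 1, 1, 1, 1],  # health portal: covers many treatment types
    [1, 1, 0, 1, 1],  # health portal
    [0, 1, 0, 0, 0],  # commercial site: single alternative remedy
    [0, 0, 1, 0, 0],  # commercial site: single food supplement
    [1, 0, 0, 1, 0],  # professional site
])

# Jaccard distance suits binary presence/absence profiles.
dist = pdist(X, metric="jaccard")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # e.g. multi-treatment portals vs single-treatment sites
```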

    The problem of peak loads in web applications and its solutions

    In this thesis we analyse the problems that peak loads cause in web applications and the possible solutions to them. Overloads have become a recurring problem in many areas as the Internet has grown more accessible, and online commerce sites are among the worst cases in which they can occur. The analysis is based on the different methods currently known for dealing with this problem. The main objective of the thesis is to collect and compare these methods so as to produce a guide for choosing which of them is most suitable for a given type of application. Tests carried out for some of these cases are also included.
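    One family of overload-protection methods a survey like this typically covers is admission control: shedding excess requests during a peak so that the requests actually admitted can still be served. The sketch below implements a simple token-bucket limiter; the rate and burst parameters are illustrative assumptions, not values from the thesis.

```python
# Token-bucket admission control: a minimal sketch of one way to
# shed excess requests during demand peaks (parameters illustrative).
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst the bucket allows
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request shed (or queued/redirected instead)

bucket = TokenBucket(rate=100.0, capacity=20)  # ~100 req/s, bursts of 20
for i in range(30):
    status = "served" if bucket.allow() else "shed"
    print(f"request {i}: {status}")
```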

    PhD-SNPg: a webserver and lightweight tool for scoring single nucleotide variants

    One of the major challenges in human genetics is to identify the functional effects of coding and non-coding single nucleotide variants (SNVs). Several methods have been developed to identify disease-related single amino acid changes, but only a few tools can score the impact of non-coding variants. Among the most popular algorithms, CADD and FATHMM predict the effect of SNVs in non-coding regions by combining sequence conservation with several functional features derived from ENCODE project data; as a consequence, running CADD or FATHMM locally requires downloading a large set of pre-calculated information. To facilitate variant annotation, we developed PhD-SNPg, a new easy-to-install and lightweight machine-learning method that depends only on sequence-based features. Despite this simplicity, PhD-SNPg performs as well as or better than more complex methods, making it ideal for quick SNV interpretation and as a benchmark for tool development.
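    The sketch below illustrates the general recipe of such tools: training a supervised classifier, here gradient boosting, on conservation-style features around a variant. The random features and labels are synthetic placeholders; PhD-SNPg's actual feature set, model and training data differ.

```python
# Toy SNV scorer: gradient boosting on synthetic conservation-style
# features (placeholder data, not PhD-SNPg's pipeline).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: conservation scores in a window around the SNV.
X = rng.normal(size=(n, 7))
# Synthetic labels: pathogenic variants skew toward conserved positions.
y = (X[:, 3] + 0.5 * X.mean(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
# model.predict_proba(...) would yield a pathogenicity-style score.
```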