10 research outputs found

    An Empirical Study of CSS Code Smells in Web Frameworks

    Cascading Style Sheets (CSS) has become essential to front-end web development for the specification of style. But despite its simple syntax and the theoretical advantages attained through the separation of style from content and behavior, CSS authoring today is regarded as a complex task. As a result, developers are increasingly turning to CSS preprocessor languages and web frameworks to aid in development. However, previous studies show that even highly popular websites known to be developed with web frameworks contain CSS code smells such as duplicated rules and hard-coded values. Such code smells can adversely affect websites and complicate maintenance. It is therefore important to investigate whether web frameworks may be encouraging the introduction of CSS code smells into websites. In this thesis, we investigate the prevalence of CSS code smells in websites built with different web frameworks and attempt to recognize a pattern of CSS behavior in these frameworks. We collect a dataset of several hundred websites produced by each of 19 different frameworks, collect code smells and other metrics present in the CSS code of each website, train a classifier to predict which framework each website was built with, and perform various clustering tasks to gain insight into the correlations between code smells. Our results show that CSS code smells are highly prevalent in websites built with web frameworks; we achieve an accuracy of 39% in correctly classifying the frameworks based on CSS code smells and metrics; and we find interesting correlations between code smells.
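The classification step described above can be illustrated with a minimal sketch. This is not the thesis's actual pipeline; the feature names (duplicated rules, hard-coded values, unused selectors), the toy data, and the choice of a nearest-centroid classifier are all illustrative assumptions.

```python
# Hypothetical sketch: predicting a site's web framework from CSS code-smell
# metrics using a nearest-centroid classifier. Features and data are made up.
from collections import defaultdict
import math

def train_centroids(samples):
    """samples: list of (framework_label, feature_vector) pairs."""
    sums = defaultdict(list)
    counts = defaultdict(int)
    for label, vec in samples:
        if not sums[label]:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    # Centroid = per-feature mean of all samples with that label.
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, vec):
    """Return the label whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Toy feature vectors: [duplicated_rules, hard_coded_values, unused_selectors]
training = [
    ("bootstrap", [12.0, 40.0, 5.0]),
    ("bootstrap", [10.0, 38.0, 7.0]),
    ("foundation", [3.0, 15.0, 20.0]),
    ("foundation", [5.0, 17.0, 22.0]),
]
centroids = train_centroids(training)
print(predict(centroids, [11.0, 39.0, 6.0]))  # -> bootstrap
```

A real study would use a stronger classifier and cross-validation, but the shape of the task, mapping per-site smell counts to a framework label, is the same.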

    Towards secure web browsing on mobile devices

    The Web is increasingly being accessed by portable, multi-touch wireless devices. Despite the popularity of platform-specific (native) mobile apps, a recent study of smartphone usage shows that more people (81%) browse the Web than use native apps (68%) on their phone. Moreover, many popular native apps such as BBC depend on browser-like components (e.g., Webview) for their functionality. The popularity and prevalence of web browsers on modern mobile phones warrants characterizing existing and emerging threats to mobile web browsing, and building solutions for the same. Although a range of studies have focused on the security of native apps on mobile devices, efforts in characterizing the security of web transactions originating at mobile browsers are limited. This dissertation presents three main contributions: First, we show that porting browsers to mobile platforms leads to new vulnerabilities previously not observed in desktop browsers. The solutions to these vulnerabilities require careful balancing between usability and security and might not always be equivalent to those in desktop browsers. Second, we empirically demonstrate that the combination of reduced screen space and an independent selection of security indicators not only makes it difficult for experts to determine the security standing of mobile browsers, but actually makes mobile browsing more dangerous for average users, as it provides a false sense of security. Finally, we experimentally demonstrate the need for mobile-specific techniques to detect malicious webpages. We then design and implement kAYO, the first mobile-specific static tool to detect malicious webpages in real time.
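Static detection of the kind the abstract describes typically starts by extracting features from a page without executing it. The sketch below is illustrative only and is not kAYO's actual implementation; the chosen features (iframe, script, and external-link counts) are assumptions for the example.

```python
# Illustrative sketch (not kAYO itself): extracting a few static features
# from raw HTML that a malicious-webpage classifier might consume.
from html.parser import HTMLParser

class FeatureExtractor(HTMLParser):
    """Counts a handful of static HTML features while parsing."""
    def __init__(self):
        super().__init__()
        self.features = {"iframes": 0, "scripts": 0, "external_links": 0}

    def handle_starttag(self, tag, attrs):
        if tag == "iframe":
            self.features["iframes"] += 1
        elif tag == "script":
            self.features["scripts"] += 1
        elif tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.features["external_links"] += 1

page = ('<html><body><iframe src="x"></iframe><script></script>'
        '<a href="http://example.com">link</a></body></html>')
extractor = FeatureExtractor()
extractor.feed(page)
print(extractor.features)  # {'iframes': 1, 'scripts': 1, 'external_links': 1}
```

The resulting feature vector would then be fed to a trained classifier; doing this statically keeps detection fast enough for the real-time use the dissertation targets.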

    Crystallographic fragment screening - improvement of workflow, tools and procedures, and application for the development of enzyme and protein-protein interaction modulators

    One of the great societal challenges of today is the fight against diseases which reduce life expectancy and lead to high economic losses. Both the understanding and the addressing of these diseases require research activities at all levels. One aspect of this is the discovery and development of tool compounds and drugs. Tool compounds support disease research and the development of drugs. For about 20 years, the discovery of new compounds has been attempted by screening small organic molecules with high-throughput methods. More recently, X-ray crystallography has emerged as the most promising method to conduct such screening. Crystallographic fragment screening (CFS) generates binding information as well as 3D structural information of the target protein in complex with the bound fragment. This doctoral research project is focused primarily on the optimization of the crystallographic fragment screening workflow. Investigated were the requirements for more successful screening campaigns with respect to the crystal system studied, the fragment libraries, the handling of the crystalline samples, as well as the handling of the data associated with a screening campaign. The improved CFS workflow was presented as a detailed protocol and as an accompanying video to train future CFS users in a streamlined and accessible way. Together, these improvements make CFS campaigns a more high-throughput method, offering the ability to screen larger fragment libraries and allowing higher numbers of campaigns performed per year. The protein targets throughout the project were two enzymes and a spliceosomal protein-protein complex. The enzymes comprised the aspartic protease Endothiapepsin and the SARS-CoV-2 main protease. The protein-protein complex was the RNaseH-like domain of Prp8, a vital structural protein in the spliceosome, together with its nuclear shuttling factor Aar2.
    By performing the CFS campaigns against disease-relevant targets, the resulting fragment hits could be used directly to develop tool compounds or drugs. The first steps of optimization of fragment hits into higher-affinity binders were also investigated for improvements. In summary, a plethora of novel starting points for tool compound and drug development was identified.

    Hacking the web 2.0: user agency and the role of hackers as computational mediators

    This thesis studies the contested reconfigurations of computational agency within the domain of practices and affordances involved in the use of the Internet in everyday life (here labelled lifeworld Internet), through the transition of the Internet to a much deeper reliance on computation than at any previous stage. Computational agency is here considered not only in terms of capacity to act enabled (or restrained) by the computational layer, but also as the recursive capacity to reconfigure the computational layer itself, therefore in turn affecting one's own and others' computational agency. My research is based on multi-sited and diachronic ethnographic fieldwork: an initial (2005–2007) autoethnographic case study focused on the negotiations of computational agency within the development of a Web 2.0 application; later (2010–2011) fieldwork interviews focused on processes through which users make sense of the increasing pervasiveness of the Internet and of computation in everyday life; and a review (2010–2015) of hacker discourses focused on tracing the processes through which hackers constitute themselves as a recursive public able to inscribe counter-narratives in the development of technical form and to reproduce itself as a public of computational mediators with capacity to operate at the intersection of the technical and the social. By grounding my enquiry in the specific context of the lifeworlds of individual end users while following computational agency through global hacker discourses, my research explores the role of computation, computational capacity, and computational mediators in the processes through which users 'hack' their everyday Internet environments for practical utility, or develop independent alternatives to centralized Internet services as part of their contestation of values inscribed in the materiality of the mainstream Internet.

    Data and the city – accessibility and openness. a cybersalon paper on open data

    This paper showcases examples of bottom-up open data and smart city applications and identifies lessons for future such efforts. Examples include Changify, a neighbourhood-based platform for residents, businesses, and companies; Open Sensors, which provides APIs to help businesses, startups, and individuals develop applications for the Internet of Things; and Cybersalon's Hackney Treasures, a location-based mobile app that uses Wikipedia entries geolocated in Hackney borough to map notable local residents. Other experiments with sensors and open data by Cybersalon members include Ilze Black and Nanda Khaorapapong's The Breather, a "breathing" balloon that uses high-end, sophisticated sensors to make air quality visible; and James Moulding's AirPublic, which measures pollution levels. Based on Cybersalon's experience to date, getting data to the people is difficult, circuitous, and slow, requiring an intricate process of leadership, public relations, and perseverance. Although there are myriad tools and initiatives, there is no one solution for the actual transfer of that data.

    Promoting Andean children's learning of science through cultural and digital tools

    Conference Theme: "To see the world and a grain of sand: Learning across levels of space, time, and scale". In Peru, there is a large achievement gap in rural schools. In order to overcome this problem, the study aims to design environments that enhance science learning through the integration of ICT with cultural artifacts, respecting the Andean culture and empowering rural children to pursue lifelong learning. This investigation employs the Cultural-Historical Activity Theory (CHAT) framework and the Design-Based Research (DBR) methodology, using an iterative process of design, implementation, and evaluation of the innovative practice.

    XXIII Congreso Argentino de Ciencias de la Computación - CACIC 2017 : Libro de actas

    Papers presented at the XXIII Argentine Congress of Computer Science (CACIC), held in the city of La Plata from 9 to 13 October 2017, organized by the Network of Universities with Degree Programs in Informatics (Red de Universidades con Carreras en Informática, RedUNCI) and the Faculty of Informatics of the Universidad Nacional de La Plata (UNLP).

    Actes de la 9ème conférence des Technologies de l’Information et de la Communication pour l’Enseignement (TICE 2014)

    The TICE conference series aims, every two years, to take stock of research results, new applications, current usage, and lessons learned in the field of digital higher education. The TICE 2014 conference is organized by the IUT de Béziers, a unit of the Université Montpellier 2. This ninth edition of the TICE conference will bring together the scientific and industrial TICE community in Béziers, from 18 to 20 November 2014, around the theme "New pedagogies and digital sciences and technologies".