CSS Minification via Constraint Solving
Minification is a widely accepted technique that aims to reduce the size
of the code transmitted over the web. We study the problem of minifying
Cascading Style Sheets (CSS) --- the de facto language for styling web
documents. Traditionally, CSS minifiers focus on simple syntactic
transformations (e.g. shortening colour names). In this paper, we propose a new
minification method based on merging similar rules in a CSS file.
We consider safe transformations of CSS files, i.e., transformations that
preserve the semantics of the CSS file. These semantics are sensitive to the ordering of
rules in the file. To automatically identify a rule merging opportunity that
best minimises file size, we reduce the rule-merging problem to a problem on
CSS-graphs, i.e., node-weighted bipartite graphs with a dependency ordering on
the edges, where weights capture the number of characters (e.g. in a selector
or in a property declaration). Roughly speaking, the corresponding CSS-graph
problem concerns minimising the total weight of a sequence of bicliques
(complete bipartite subgraphs) that covers the CSS-graph and respects the edge
order.
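To make the size trade-off concrete, here is a small illustrative sketch (ours, not the paper's implementation): merging two rules that share declarations replaces duplicated declaration text with a longer selector list, and the CSS-graph weights capture exactly this character count.

    # Illustrative sketch: character cost of CSS rules before and after merging.
    # The merged rule ".a,.b{...}" corresponds to a biclique covering both
    # selectors and both declarations in the CSS-graph described above.
    def rule_size(selectors, declarations):
        # e.g. ".a,.b{color:red;margin:0}" -> selectors + braces + declarations
        return len(",".join(selectors)) + 2 + len(";".join(declarations))

    decls = ["color:red", "margin:0"]
    before = rule_size([".a"], decls) + rule_size([".b"], decls)  # 44 chars
    after = rule_size([".a", ".b"], decls)                        # 25 chars
    print(before - after)  # 19 characters saved by one merge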
We provide the first full formalisation of CSS3 selectors and reduce
dependency detection to satisfiability of quantifier-free integer linear
arithmetic, for which highly-optimised SMT-solvers are available. To solve the
above NP-hard graph optimisation problem, we show how Max-SAT solvers can be
effectively employed. We have implemented our algorithms using Max-SAT and
SMT-solvers as backends, and tested against approximately 70 real-world
examples (including the top 20 most popular websites). In our benchmarks, our
tool yields larger savings than six well-known minifiers (which do not perform
rule-merging, but support many other optimisations). Our experiments also
suggest that better savings can be achieved in combination with one of these
six minifiers.
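As a generic illustration of the dependency-detection reduction (the encoding below is a toy, not the paper's actual formalisation of CSS3 selectors), a quantifier-free linear integer arithmetic query can decide whether two nth-child-style selectors can ever match the same element, here using Z3's Python bindings:

    # Toy satisfiability check in quantifier-free linear integer arithmetic
    # (requires the z3-solver package). Can a sibling index i match both
    # :nth-child(2n+1) and :nth-child(2n)? If not, the rules are independent.
    from z3 import Ints, Solver, sat

    i, a, b = Ints("i a b")
    s = Solver()
    s.add(i >= 1, a >= 0, b >= 1, i == 2 * a + 1, i == 2 * b)
    print(s.check() == sat)  # False: no overlap, hence no ordering dependency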
Automatically identifying potential regressions in the layout of responsive web pages
Providing a good user experience on the ever-increasing number and variety of devices being used to browse the web is a difficult, yet critical, task. With Responsive Web Design (RWD), front-end web developers design web pages so that they dynamically resize and rearrange content to best fit the dimensions of a device’s screen. However, when making code modifications to a responsive page, developers can easily introduce regressions from the correct layout that have detrimental effects at unpredictable screen sizes.
For instance, the source code change that a developer makes to improve the layout at one screen size may obscure a page’s content at other sizes. Current approaches to testing are often insufficient because they rely on limited tools and error-prone manual inspections of a web page. As such, many unintended regressions in web page layout often go undetected and ultimately manifest in production web sites. To address the challenge of detecting regressions in responsive web pages, this paper presents an automated approach that extracts the responsive layout of two versions of a page and compares them, alerting developers to the differences in layout that they may wish to investigate further. We implemented the approach and
empirically evaluated it on 15 real-world responsive web pages. Leveraging code mutations that a tool automatically injected into the pages as a systematic simulation of developer changes, the experiments show that the approach was highly effective. When compared with manual and automated baseline testing techniques, it detected 12.5% and 18.75% more injected changes, respectively. Along with identifying the best parameters for the method that extracts the responsive layout, the experiments show that the approach surpasses the baselines across changes that vary in their impact, but works particularly well for subtle, hard-to-detect mutants, showing the benefits of automatically identifying regressions in web page layout.
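A minimal sketch of the comparison step (a hypothetical representation; the paper's layout model is richer than plain bounding boxes) is to record each element's box at sampled viewport widths for both versions and report disagreements:

    # Hypothetical sketch: a layout maps viewport width -> {element: (x, y, w, h)}.
    def layout_diff(old, new):
        diffs = []
        for width in sorted(set(old) & set(new)):
            for elem, box in old[width].items():
                if new[width].get(elem) != box:
                    diffs.append((width, elem, box, new[width].get(elem)))
        return diffs  # each entry is a candidate layout regression

    old = {320: {"#nav": (0, 0, 320, 50)}, 768: {"#nav": (0, 0, 200, 600)}}
    new = {320: {"#nav": (0, 0, 320, 50)}, 768: {"#nav": (0, 0, 320, 50)}}
    print(layout_diff(old, new))  # flags the change at width 768 only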
Full Stack Application Generation for Insurance Sales based on Product Models
The insurance market is segregated into various lines-of-business such as Life, Health, and Property &
Casualty, among others. This segregation allows product engineers to focus on the rules and details of a
specific insurance area. However, having different conceptual models leads to additional complexity
when a generic presentation-layer application has to be continuously adapted to work with these distinct
models.
With the objective of streamlining these continuous adaptations to an existing presentation layer, this
work investigates and proposes the use of code generators to allow complete application generation,
able to communicate with the given insurance product model. To this end, this work compares and
combines different code generation tools to accomplish the desired application generation.
During this project, an existing framework is chosen to create several software layers and their respective
components: the classes necessary to represent the Domain Model; database mappings; the Service layer;
a REST Application Program Interface (API); and a rich JavaScript-based presentation layer.
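As a generic illustration of this kind of model-driven generation (a hypothetical sketch; the project's actual framework and templates are not shown here), a generator can walk a conceptual product model and emit the corresponding domain class:

    # Hypothetical sketch: emit a domain class from a conceptual product model.
    model = {
        "entity": "LifePolicy",
        "fields": [("holder", "str"), ("premium", "float"), ("term_years", "int")],
    }

    def generate_domain_class(m):
        lines = [f"class {m['entity']}:"]
        params = ", ".join(f"{name}: {typ}" for name, typ in m["fields"])
        lines.append(f"    def __init__(self, {params}):")
        lines.extend(f"        self.{name} = {name}" for name, _ in m["fields"])
        return "\n".join(lines)

    # Database mappings, services, the REST API, and the presentation layer
    # would be generated from the same model in the same fashion.
    print(generate_domain_class(model))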
In conclusion, this project demonstrates that the proposed tool can generate the application already
adapted to, and able to communicate with, the provided conceptual model, proving that this autonomous
process is faster than the current manual development processes used to adapt a presentation layer to an
insurance product model.
Reengineering a Content Manager for Humanoid Robots with Web Technology
This project, carried out at PAL Robotics, aims to reengineer a content manager for humanoid robots with web technology in order to abandon the current Adobe Flash implementation. The software runs on the robot, displays content applications, and handles user interaction.
Automatic Identification of Presentation Failures in Responsive Web Pages
With the increasing number and variety of devices being used to access the World Wide Web, providing a good browsing experience to all users, regardless of device, is a critical task. To do this, many web developers now use responsive web design (RWD) to build web pages that provide a bespoke layout tailored to the specific characteristics of the device in use, normally the viewport width. However, implementing responsive web pages is an error-prone task, as web page elements can behave in unpredictable ways as the viewport expands and contracts. This leads to presentation failures — errors in the visual appearance of the web page. As well-designed responsive web pages can have an array of benefits, identifying presentation failures quickly and accurately is an important task.
Unfortunately, current approaches to detecting presentation failures in web pages are insufficient. The huge number of different viewport widths that require support makes thorough checking of the layout on all devices infeasible. Furthermore, the current range of developer tools only provide limited support for testing responsive web pages.
This thesis tackles these problems by making the following contributions. First, it proposes the responsive layout graph (RLG), a model of the dynamic layout of modern responsive web pages. Then, it explores how the RLG can be used to automatically detect potentially unseen side-effects of small changes to the source code of a web page. Next, it investigates the detection of several common types of layout failure, leveraging implicit oracle information in place of an explicit oracle. Experiments showed both the approach for detecting potentially unseen side-effects and the approach for identifying common types of layout failure to be highly effective. The manual effort required by the user is further reduced by an approach that automatically groups related failures together. Finally, a case study of 33 real-world responsive layout failures investigates how difficult such failures are to fix. These approaches have all been implemented in a software tool, ReDeCheck, which helps web developers create better responsive web pages.
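As a purely hypothetical sketch of the kind of structure an RLG captures (the names and fields below are assumptions, not ReDeCheck's API), elements become nodes and layout relationships carry the viewport-width range over which they hold:

    # Hypothetical sketch of a responsive-layout-graph-like structure.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Relation:
        parent: str      # e.g. "#page"
        child: str       # e.g. "#sidebar"
        kind: str        # e.g. "contains" or "left-of"
        min_width: int   # viewport range over which the relation holds
        max_width: int

    rlg = {
        Relation("#page", "#sidebar", "contains", 320, 1400),
        Relation("#sidebar", "#content", "left-of", 768, 1400),
    }
    # Comparing two such graphs (before and after a code change) surfaces
    # relations that appear, disappear, or change range: candidate failures.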
Interactive computer vision through the Web
Computer vision is the computational science that aims to reproduce and improve the ability of human vision to understand its environment. In this thesis, we focus on two fields of computer vision, namely image segmentation and visual odometry, and we show the positive impact that interactive Web applications provide on each.
The first part of this thesis focuses on image annotation and segmentation. We introduce the image annotation problem and the challenges it brings for large, crowdsourced datasets. Many interactions have been explored in the literature to help segmentation algorithms. The most common consist in designating contours, bounding boxes around objects, or interior and exterior scribbles. When crowdsourcing, annotation tasks are delegated to a non-expert public, sometimes on cheaper devices such as tablets. In this context, we conducted a user study showing the advantages of the outlining interaction over scribbles and bounding boxes. Another challenge of crowdsourcing is the distribution medium. While evaluating an interaction in a small user study does not require a complex setup, distributing an annotation campaign to thousands of potential users is quite another matter. We therefore describe how the Elm programming language helped us build a reliable image annotation Web application. A guided tour of its functionalities and architecture is provided, as well as a guide on how to deploy it to crowdsourcing services such as Amazon Mechanical Turk. The application is completely open-source and available online.
In the second part of this thesis we present our open-source direct visual odometry library. In that endeavor, we provide an evaluation of other open-source RGB-D camera tracking algorithms and show that our approach performs as well as the currently available alternatives. The visual odometry problem relies on geometric tools and optimization techniques that traditionally require substantial processing power to run at real-time frame rates. Since we aspire to run those algorithms directly in the browser, we review past and present technologies enabling high-performance computation on the Web. In particular, we detail how to target a new standard called WebAssembly from the C++ and Rust programming languages. Our library was built from scratch in the Rust programming language, which then allowed us to easily port it to WebAssembly. Thanks to this property, we are able to showcase a visual odometry Web application with multiple types of interaction available. A timeline enables one-dimensional navigation along the video sequence. Pairs of image points can be picked on two 2D thumbnails of the image sequence to realign cameras and correct drift. Colors are also used to identify parts of the 3D point cloud, which can be selected to reinitialize camera positions. Combining those interactions enables improvements to the tracking and 3D point reconstruction results.
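For the visual odometry part, a schematic sketch of the standard direct (photometric) objective may help; this is the textbook formulation, not this library's actual code, and the helper functions are assumed:

    # Schematic sketch of the direct visual odometry objective: find the
    # camera motion `pose` that minimizes photometric error between frames.
    def photometric_error(I_ref, I_cur, pixels, depths, pose,
                          backproject, transform, project):
        # backproject: pixel + depth -> 3D point in the reference camera frame
        # transform:   apply the candidate camera motion `pose` to a 3D point
        # project:     3D point -> pixel coordinates in the current camera
        err = 0.0
        for (u, v), d in zip(pixels, depths):
            u2, v2 = project(transform(pose, backproject((u, v), d)))
            err += (I_cur[round(v2)][round(u2)] - I_ref[v][u]) ** 2
        return err  # minimized over `pose` by non-linear least squares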
Investigating the detection of stored scripting attacks using machine learning
Web applications now play an essential role in our daily lives; through them we can make bank transfers, purchase products, and make bookings on the Internet. This makes them a target for attackers, who attempt to exploit security vulnerabilities in web applications in order to obtain access to sensitive user information or gain unauthorized privileges. One of the most common attacks aimed at stealing user information is Cross-Site Scripting (XSS), which is ranked among the top 10 security vulnerabilities in web applications. Traditional defense systems rely on a signature database describing known attacks; however, XSS attacks written in JavaScript are highly variable and do not exist in a single form. The most common cause of XSS security vulnerabilities is weak validation of user input. This provides the motivation for finding a method for identifying malicious code, written in JavaScript, that an attacker attempts to have executed on the server.
Machine learning has contributed to the security of web applications. Several studies have been conducted in relation to Intrusion Detection Systems (IDS), which detect and prevent attacks against web applications. Cross-Site Scripting is one of the attacks that has been studied using a number of methods: for example, using features to identify obfuscated scripts or JavaScript keywords, and evaluating machine learning algorithms such as random forest and SVM in terms of detecting attacks against web applications. These studies have achieved highly accurate results by using machine learning to detect XSS attacks, often attaining better results than dynamic and static analysis in terms of acting as a protection layer for web applications.
The present study demonstrates the use of machine learning methods incorporated into a web application at the user input validation stage, before the request is passed to the application server. Classifiers are used to prevent persistent or stored XSS attacks, which are caused by malicious code injections via an input point in the web application. This study relies on supervised machine learning and the application of Boolean feature sets in order to achieve ease and speed of classification. Furthermore, this study examines the use of such methods on two other types of injection attack: SQL injection and LDAP injection. Cascading classifiers and ensemble techniques are used to reduce complexity while maintaining accuracy and speed. To understand how a decision is made in the classifier, an approximate Boolean function is extracted, based on techniques that have been employed to extract rules from black-box classifiers.
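As a rough illustration of the Boolean-feature idea (the keyword list and classifier configuration below are assumptions for the sketch, not the study's actual feature set):

    # Rough sketch: Boolean keyword features for script classification,
    # fed to a random forest (one of the algorithms mentioned above;
    # scikit-learn implementation assumed).
    from sklearn.ensemble import RandomForestClassifier

    KEYWORDS = ["<script", "eval(", "document.cookie", "onerror=", "fromcharcode"]

    def features(payload: str):
        p = payload.lower()
        return [int(k in p) for k in KEYWORDS]  # Boolean feature vector

    X = [features("<script>alert(1)</script>"),
         features("eval(String.fromCharCode(88,83,83))"),
         features("hello world"),
         features("book a room for two nights")]
    y = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([features("<script>document.cookie</script>")]))  # likely [1]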
Don’t forget to save! User experience principles for video game narrative authoring tools.
Interactive Digital Narratives (IDNs) are a natural evolution of traditional storytelling melded with technological improvements brought about by the rapidly advancing digital revolution. This has enhanced, and continues to enhance, the complexity and functionality of the stories that we can tell. Video game narratives, both old and new, are considered close relatives of IDN and, due to their enhanced interactivity and presentational methods, further complicate the creation process. Authoring tool software aims to alleviate these complexities by abstracting underlying data models into accessible user interfaces that creatives, even those with limited technical experience, can use to author their stories. Unfortunately, despite the vast array of authoring tools in this space, user experience is often overlooked, even though it is arguably one of the most vital components. This has resulted in a focus on the audience within IDN research rather than the authors, and consequently our knowledge and understanding of the impacts of user experience design decisions in authoring tools are limited. This thesis tackles the modeling of complex video game narrative structures and investigates how user experience design decisions within IDN authoring tools may impact the authoring process. I first introduce my concept of Discoverable Narrative, which establishes a vocabulary for the analysis, categorization, and comparison of aspects of video game narrative that are discovered, observed, or experienced by players — something that existing models struggle to detail. I also develop and present my Novella Narrative Model, which provides support for video game narrative elements and makes several novel innovations that set it apart from existing narrative models. This thesis then builds upon these models by presenting two bespoke user studies that examine the user experience of the state-of-the-art in IDN authoring tool design, together building a listing of seven general Themes and five principles (Metaphor Testing, Fast Track Testing, Structure, Experimentation, Branching) that highlight evidenced behavioral trends of authors based on different user experience design factors within IDN authoring tools. This represents some of the first work in this space to investigate the relationships between the user experience design of IDN authoring tools and the impacts that they can have on authors. Additionally, a generalized multi-stage pipeline for the design and development of IDN authoring tools is introduced, informed by professional industry-standard design techniques, in an effort both to ensure quality user experience within my own work and to raise awareness of the importance of following proper design processes when creating authoring tools, also serving as a template for doing so.