
    Semantics and Security Issues in JavaScript

    There is a plethora of research articles describing the deep semantics of JavaScript. Nevertheless, such articles are often difficult to grasp for readers not familiar with formal semantics. In this report, we propose a digest of the semantics of JavaScript centered around security concerns. The document gives an overview of the JavaScript language and of the misleading semantic points in its design. The first part describes the main characteristics of the language itself. The second part presents how those characteristics can lead to problems. It finishes by showing some coding patterns that avoid certain traps and by presenting some new ECMAScript 5 features.
    Comment: Deliverable Resilience FUI 12: 7.3.2.1 Failles de sécurité en JavaScript / JavaScript security issue
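    One well-known example of the ES5 defensive patterns this kind of report covers is freezing shared objects so that, under strict mode, silent tampering becomes a visible error. The snippet below is a minimal illustration of that idea (the object and function names are invented for the example, not taken from the report):

```typescript
"use strict";

// ES5's Object.freeze prevents writes to an object's properties.
// In strict mode, a write to a frozen property throws a TypeError
// instead of failing silently as it would in sloppy mode.
const config = Object.freeze({ apiUrl: "https://example.org", retries: 3 });

function tryTamper(obj: { retries: number }): boolean {
  try {
    obj.retries = 99; // throws TypeError: config is frozen and we are in strict mode
    return true;
  } catch {
    return false;
  }
}

const tampered = tryTamper(config);
console.log(tampered);       // false: the frozen object rejected the write
console.log(config.retries); // 3: the original value survives
```

    Without `"use strict"`, the same write would be ignored silently, which is exactly the kind of misleading semantics such digests warn about.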

    Proof-of-Concept Application - Annual Report Year 2

    This document first gives an introduction to Application Layer Networks and subsequently presents the catallactic resource allocation model and its integration into the middleware architecture of the developed prototype. Furthermore, use cases for the employed service models in such scenarios are presented, both as general application scenarios and as two detailed cases: query services and data mining services. The work concludes by describing the middleware implementation and evaluation, as well as future work in this area. Keywords: Grid Computing

    Automatic Discovery, Association Estimation and Learning of Semantic Attributes for a Thousand Categories

    Attribute-based recognition models, due to their impressive performance and their ability to generalize well to novel categories, have been widely adopted for many computer vision applications. However, usually both the attribute vocabulary and the class-attribute associations have to be provided manually by domain experts or by a large number of annotators. This is very costly, not necessarily optimal for recognition performance, and, most importantly, it limits the applicability of attribute-based models to large-scale data sets. To tackle this problem, we propose an end-to-end unsupervised attribute learning approach. We utilize online text corpora to automatically discover a salient and discriminative vocabulary that correlates well with the human concept of semantic attributes. Moreover, we propose a deep convolutional model to optimize class-attribute associations with a linguistic prior that accounts for noise and missing data in text. In a thorough evaluation on ImageNet, we demonstrate that our model is able to efficiently discover and learn semantic attributes at a large scale. Furthermore, we demonstrate that our model outperforms the state of the art in zero-shot learning on three data sets: ImageNet, Animals with Attributes, and aPascal/aYahoo. Finally, we enable attribute-based learning on ImageNet and will share the attributes and associations for future research.
    Comment: Accepted as a conference paper at CVPR 201
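    The core mechanism behind attribute-based zero-shot recognition can be sketched very simply: an unseen class is predicted by comparing detected attribute scores against each class's attribute associations. The toy example below is illustrative only; the classes, attributes, and scores are invented, and the paper's actual model learns both the vocabulary and the associations automatically:

```typescript
// Class-attribute associations (in the paper these are discovered from text).
const classAttributes: Record<string, string[]> = {
  zebra: ["striped", "four-legged", "hooved"],
  tiger: ["striped", "four-legged", "clawed"],
  parrot: ["winged", "colorful"],
};

// Attribute scores an attribute detector might emit for one unseen image.
const detected: Record<string, number> = {
  striped: 0.9, "four-legged": 0.8, hooved: 0.7,
  clawed: 0.1, winged: 0.05, colorful: 0.2,
};

// Zero-shot prediction: pick the class whose associated attributes
// have the strongest average evidence in the image.
function predict(scores: Record<string, number>): string {
  let best = "";
  let bestScore = -Infinity;
  for (const [cls, attrs] of Object.entries(classAttributes)) {
    const s = attrs.reduce((sum, a) => sum + (scores[a] ?? 0), 0) / attrs.length;
    if (s > bestScore) { bestScore = s; best = cls; }
  }
  return best;
}

console.log(predict(detected)); // "zebra": 0.8 average vs 0.6 (tiger), 0.125 (parrot)
```

    No training image of a zebra is needed: only the attribute associations, which is what makes the automatic discovery of those associations the bottleneck the paper addresses.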

    Observations and Recommendations on the Internationalisation of Software

    As computer programs enter the lives of more and more people worldwide, it is becoming increasingly unacceptable to assume that software with a user interface designed for an indigenous English-speaking market will be acceptable outside its country of origin simply by changing the currency symbol. Developers who are serious about expanding sales into new markets must consider many issues when creating new software or modifying existing software to work within the linguistic and cultural constraints of those markets. The purpose of this paper is to examine the task of preparing software to be used in countries and cultures other than the one in which it is created. We do this by reviewing some of the most important localisation issues that have been identified, together with some of the tools and practices available to the software designer for dealing with them. We also consider some areas of the software development process that are currently less well understood and supported. Our major emphasis is on non-graphical applications targeted at European markets. Keywords: Internationalisation, I18N, Localising, Enabling, Multi-lingual
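    The "changing the currency symbol is not enough" point is easy to demonstrate with the standard ECMAScript `Intl` API (a modern facility, not one discussed in this paper): digit grouping, the decimal separator, and even the position of the currency symbol all vary by locale:

```typescript
// Locale-aware currency formatting: the same amount renders differently
// per locale, well beyond a swapped symbol.
function formatPrice(amount: number, locale: string, currency: string): string {
  return new Intl.NumberFormat(locale, { style: "currency", currency }).format(amount);
}

console.log(formatPrice(1234.5, "en-GB", "GBP")); // £1,234.50
console.log(formatPrice(1234.5, "de-DE", "EUR")); // 1.234,50 € (separators swapped, symbol trails)
```

    Hard-coding any of these conventions into format strings is precisely the kind of enabling problem internationalised software has to avoid.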

    Analysis of GraphQL performance: a case study

    Web applications today play a significant role: a large number of devices are connected to the Internet, and data is transmitted across disparate platforms at an unprecedented rate. Many systems and platforms of different types, such as web and mobile, require applications to adapt quickly and efficiently to the needs of consumers. In 2000, Representational State Transfer (REST) was introduced, and developers quickly adopted it. However, due to the growth in consumers and their differing needs, this architectural style, in the way it is commonly used, revealed some weaknesses related to the performance and flexibility of applications. These are, or can be, addressed with GraphQL. Despite being a recent technology, it is already used by big companies such as Facebook, Netflix, GitHub, and PayPal. Recently, an INESC TEC platform called IRIS faced the same performance problems, and the adoption of GraphQL was considered. Several alternatives with GraphQL were studied and analyzed to see whether they could benefit IRIS in terms of performance and flexibility. One conclusion of this study is that all of the alternatives tested show, overall, better performance results, taking into account response time and response size. However, the use of an alternative consisting solely of GraphQL proved to be the best solution for improving the performance and flexibility of an application.
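    The response-size advantage the study measures comes from GraphQL's selection sets: a REST endpoint typically returns the whole resource, while a GraphQL query names exactly the fields the client needs. The sketch below is illustrative only (the resource shape, query, and field names are invented, and the GraphQL server is simulated in-process):

```typescript
// What a typical REST endpoint returns: the full user resource.
const restResponse = {
  id: 42, name: "Ada", email: "ada@example.org",
  createdAt: "2015-01-01", bio: "(long text)", avatarUrl: "(url)",
  settings: { theme: "dark" },
};

// A hypothetical GraphQL query: only the two fields the view renders.
const query = `{ user(id: 42) { name email } }`;

// Simulate a GraphQL server resolving that selection set.
const graphqlResponse = {
  user: { name: restResponse.name, email: restResponse.email },
};

const restBytes = JSON.stringify(restResponse).length;
const gqlBytes = JSON.stringify(graphqlResponse).length;
console.log(restBytes > gqlBytes); // true: the tailored payload is smaller
```

    The same mechanism gives the flexibility benefit: a new client view changes its query, not the server's endpoints.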

    Web news mining in an evolving framework

    Online news has become one of the major channels through which Internet users get their news. News websites are overwhelmed daily with plenty of new articles: huge amounts of online news are generated and updated every day, and the processing and analysis of this large corpus of data is an important challenge. It needs to be tackled with big data techniques that process large volumes of data within limited run times. Also, since we are heading into a social-media data explosion, techniques such as text mining or social network analysis need to be seriously taken into consideration. In this work we focus on one of the most common daily activities: web news reading. News websites produce thousands of articles covering a wide spectrum of topics or categories, which can be considered a big data problem. In order to extract useful information, these news articles need to be processed using big data techniques. In this context, we present an approach for classifying huge amounts of news articles into various categories (topic areas) based on the text content of the articles. Since these categories are constantly updated with new articles, our approach is based on Evolving Fuzzy Systems (EFS). An EFS can update in real time the model that describes a category according to changes in the content of the corresponding articles. The novelty of the proposed system lies in the treatment of the web news articles fed to these systems and in the implementation and adjustment of the systems for this task. Our proposal not only classifies news articles, but also creates human-interpretable models of the different categories. The approach has been successfully tested using real online news. (C) 2015 Elsevier B.V. All rights reserved. This work has been supported by the Spanish Government under the i-Support (Intelligent Agent Based Driver Decision Support) Project (TRA2011-29454-C03-03).
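    The "evolving" idea, stripped to its essence, is that each category keeps a model that is updated incrementally as new articles arrive, so the description of a topic can drift over time. The sketch below is not the paper's EFS algorithm; the "model" here is just a running term-frequency centroid, and the word lists are invented:

```typescript
// A category model that evolves with every new article.
type Model = { counts: Map<string, number>; total: number };

function update(model: Model, articleWords: string[]): void {
  for (const w of articleWords) {
    model.counts.set(w, (model.counts.get(w) ?? 0) + 1);
    model.total += 1;
  }
}

// Average relative frequency of an article's words under the category model.
function score(model: Model, articleWords: string[]): number {
  const sum = articleWords.reduce((s, w) => s + (model.counts.get(w) ?? 0), 0);
  return sum / model.total;
}

const sports: Model = { counts: new Map(), total: 0 };
update(sports, ["match", "goal", "league"]);
update(sports, ["goal", "coach"]); // the model evolves as each article arrives

console.log(score(sports, ["goal", "match"]) > score(sports, ["parliament"])); // true
```

    A real EFS additionally adapts the *structure* of the fuzzy rules, not just their statistics, which is what makes the category models human-interpretable.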

    Combining SAWSDL, OWL-DL and UDDI for Semantically Enhanced Web Service Discovery

    UDDI registries are included as a standard offering within the product suite of any major SOA vendor, serving as the foundation for establishing design-time and run-time SOA governance. Despite the success of the UDDI specification and its rapid uptake by industry, the capabilities of its service discovery facilities are rather limited. The lack of machine-understandable semantics in the technical specifications and classification schemes used for retrieving services prevents UDDI registries from supporting fully automated, and thus truly effective, service discovery. This paper presents the implementation of a semantically enhanced registry that builds on the UDDI specification and augments its service publication and discovery facilities to overcome the aforementioned limitations. The proposed solution combines the use of SAWSDL for creating semantically annotated descriptions of service interfaces with the use of OWL-DL for modelling service capabilities and for performing matchmaking via DL reasoning.
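    The matchmaking step can be pictured as a subsumption test: a service advertisement matches a request if the concept it is annotated with is the requested concept or a subclass of it. The toy below replaces the paper's OWL-DL reasoner with a hand-written walk up an invented mini-ontology, purely to show the shape of the check:

```typescript
// Invented mini-ontology: child concept -> parent concept.
const superclassOf: Record<string, string | undefined> = {
  CityHotelBooking: "HotelBooking",
  HotelBooking: "Booking",
  FlightBooking: "Booking",
};

// Does the general concept subsume the specific one?
// (A DL reasoner derives this from the ontology's axioms;
// here the hierarchy is an explicit table.)
function subsumes(general: string, specific: string): boolean {
  for (let c: string | undefined = specific; c !== undefined; c = superclassOf[c]) {
    if (c === general) return true;
  }
  return false;
}

console.log(subsumes("Booking", "CityHotelBooking"));   // true: plug-in match
console.log(subsumes("FlightBooking", "HotelBooking")); // false: no match
```

    In the actual registry, the concepts come from SAWSDL `modelReference` annotations on the service interface, and the hierarchy is inferred by DL reasoning rather than stored as a table.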

    Multimodalities in Metadata: Gaia Gate

    Metadata is information about objects. Existing metadata standards seldom describe an object’s context within its environment. This thesis proposes a new concept, external contextual metadata (ECM), examining metadata, digital photography, and mobile interface theory as context for a proposed multimodal framework of media that expresses the internal and external qualities of a digital object, and considers how these might be employed in various use cases. The framework is bound to a digital image as a singular object. Information contained in these ‘images’ can then be processed by a renderer application to reinterpret the context in which the image was captured, including non-visually. Two prototypes are developed in the course of designing a renderer for the new multimodal data framework: a proof-of-concept application and a demonstration of ‘figurative’ execution (titled ‘Gaia Gate’), followed by a critical design analysis of the resulting products.