34 research outputs found

    A decade of Portuguese research in e-government: Evolution, current standing, and ways forward

    In this paper, we present an investigation of Portuguese research on e-government. Bibliometric techniques are used to explore all the documents published by researchers affiliated with Portuguese institutions from 2005 to 2014 and listed in the Scopus® database. Research production, impact, source types, language used, subject areas, topics, scopes, methods, authors, institutions, networks, and international cooperation are analysed and discussed. We conclude that, for Portuguese e-government research to evolve, more researchers should be involved, international cooperation should be developed, and more attention should be given to studying the reasons behind the country's very good results in the provision of e-government services, as measured by international rankings. By establishing the evolution and current standing of e-government research in Portugal and exploring the ways forward, our conclusions may prove useful to e-government researchers, research managers, and research policy makers. © Copyright 2016 Inderscience Enterprises Ltd

    Assisted Interaction for Improving Web Accessibility: An Approach Driven and Tested by Users with Disabilities

    An ever-growing share of the world's population depends on the Web to work, socialize, or find information, among many other activities. The benefits of the Web are even more crucial for people with disabilities, as it allows them to carry out countless tasks that are restricted in the physical world by various accessibility barriers. Despite these advantages, most web pages ignore the special needs of people with disabilities and adopt a single design for all users. Several methods exist to combat this problem, such as transcoding systems, which automatically transform inaccessible web pages into accessible ones. To improve web accessibility for specific groups of people, these methods require information about the most suitable adaptation techniques to apply. This thesis presents a series of studies on the suitability of various adaptation techniques for improving web navigation for two different groups of people with disabilities: people with reduced mobility in their upper limbs and people with low vision. Based on literature reviews and observational studies, different web interface adaptations and alternative interaction techniques were developed and subsequently evaluated in several studies with users with special needs. Through qualitative and quantitative analyses of participants' performance and satisfaction, the interface adaptations and alternative interaction methods were assessed. The results show that the tested techniques improve access to the Web and that the benefits vary with the assistive technology used to access the computer.
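
    The transcoding approach mentioned in this abstract can be illustrated concretely. The following is a minimal sketch in Python (the thesis itself is not tied to this code): a transcoder that injects a low-vision adaptation, larger text and higher contrast, into an existing page. The CSS values are illustrative assumptions, not adaptations prescribed by the thesis.

        # Minimal transcoding sketch: rewrite a page so that it carries a
        # low-vision adaptation (larger text, high contrast). The CSS is an
        # illustrative assumption.
        from bs4 import BeautifulSoup

        LOW_VISION_CSS = "body { font-size: 150%; background: #000; color: #fff; }"

        def transcode(html: str) -> str:
            soup = BeautifulSoup(html, "html.parser")
            style = soup.new_tag("style")
            style.string = LOW_VISION_CSS
            (soup.head or soup).append(style)  # fall back if the page has no <head>
            return str(soup)

        print(transcode("<html><head></head><body><p>Hello</p></body></html>"))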

    Linked data as medium for distributed Multi-Agent Systems

    The conceptual design and discussion of multi-agent systems (MAS) typically focus on agents and their models, and on the elements and effects in the environment which they perceive. This view, however, leaves out potential pitfalls in the later implementation of the system that may stem from limitations in the data models, interfaces, or protocols by which agents and environments exchange information. Today, the research community agrees that the environment should be understood as an abstraction layer by which agents access, interpret, and modify elements within it. This, however, blurs the line between the environment as the sum of interactive elements and phenomena perceivable by agents, and the underlying technology by which this information and these interactions are offered to agents. This thesis proposes as a remedy to consider, besides agents and environments, a third component of multi-agent systems: the digital medium by which the environment is provided to agents. "Medium" then refers to exactly this technological component via which environment data is published interactively towards the agents, and via which agents perceive, interpret, and finally modify the underlying environment data. Furthermore, this thesis details how MAS may use the capabilities of a properly chosen medium to achieve coordinated system behaviour. A suitable candidate technology for digital agent media comes from the Semantic Web in the form of Linked Data. In addition to conceptual discussions of the notion of digital agent media, this thesis provides a detailed specification of a Linked Data agent medium and details the means to implement MAS around Linked Data media technologies.
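
    The role of Linked Data as an agent medium can be sketched as a simple perceive-act cycle over HTTP: the agent dereferences an environment resource, interprets the RDF it receives, and writes a modified state back. The following Python sketch uses requests and rdflib; the resource URL and vocabulary are hypothetical placeholders, and a real Linked Data medium would add content negotiation and concurrency control.

        # Hedged sketch of an agent's perceive-act cycle over a Linked Data medium.
        import requests
        from rdflib import Graph, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/env#")    # hypothetical vocabulary
        RESOURCE = "http://example.org/env/room1"    # hypothetical environment resource

        def perceive(url: str) -> Graph:
            """Dereference the resource and parse its RDF representation."""
            resp = requests.get(url, headers={"Accept": "text/turtle"})
            resp.raise_for_status()
            g = Graph()
            g.parse(data=resp.text, format="turtle")
            return g

        def act(url: str, g: Graph) -> None:
            """Write the modified state back, making it visible to other agents."""
            requests.put(url, data=g.serialize(format="turtle"),
                         headers={"Content-Type": "text/turtle"}).raise_for_status()

        graph = perceive(RESOURCE)
        graph.set((URIRef(RESOURCE), EX.lightState, Literal("on")))  # modify the state
        act(RESOURCE, graph)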

    Agile managing of web requirements with WebSpec

    Web application development is a complex and time-consuming process that involves different stakeholders (ranging from customers to developers); these applications have some unique characteristics like navigational access to information, sophisticated interaction features, etc. However, there have been few proposals to represent those requirements that are specific to Web applications. Consequently, validation of requirements (e.g. in acceptance tests) is usually informal and, as a result, troublesome. To overcome these problems, this PhD thesis proposes WebSpec, a domain-specific language for specifying the most relevant and characteristic requirements of Web applications: those involving interaction and navigation. We describe WebSpec diagrams, discussing their abstraction and expressive power. As part of this work, we have created a test-driven, model-based approach called WebTDD that gives a good framework for the language. Using the language with this approach, we have tested several of its features, such as automatic test generation, management of changes in requirements, and improving the understanding of the diagrams through application simulation. This PhD thesis is composed of a set of published and submitted papers. In order to write this PhD thesis as a collection of papers, several requirements must be taken into account, as stated by the University of Alicante. With regard to the content of the PhD thesis, it must specifically include a summary devoted to the description of the initial hypotheses, the research objectives, and the collection of publications itself, thus justifying its coherence. It should be underlined that this summary must also include research results and final conclusions. This summary corresponds to Part I of this PhD thesis (chapter 1 has been written in Spanish, while chapter 2 is in English). This work has been partially supported by the following projects: MANTRA (GV/2011/035) from the Valencia Ministry, MANTRA (GRE09-17) from the University of Alicante, and the MESOLAP (TIN2010-14860) project from the Spanish Ministry of Education and Science.
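
    The abstract does not reproduce WebSpec's concrete syntax, but the kind of artifact that automatic test generation from interaction-and-navigation requirements produces can be sketched. Below is a hedged Python/Selenium illustration of such a generated acceptance test; the URL and element IDs are hypothetical and are not taken from the thesis.

        # Sketch of an auto-generable acceptance test for a navigation requirement:
        # "submitting a search navigates to the results page". All identifiers are
        # hypothetical.
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Chrome()
        try:
            driver.get("http://example.org/store")                        # entry page
            driver.find_element(By.ID, "search-box").send_keys("shoes")   # interaction
            driver.find_element(By.ID, "search-button").click()           # interaction
            # Navigation requirement: the click must lead to the results page.
            assert "/results" in driver.current_url, "navigation requirement violated"
        finally:
            driver.quit()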

    Validating a sentiment dictionary for German political language - a workbench note

    Automated sentiment scoring offers relevant empirical information for many political science applications. However, apart from English-language resources, validated dictionaries are rare. This note introduces a German sentiment dictionary and assesses its performance against human intuition on parliamentary speeches, party manifestos, and media coverage. The tool published with this note is indeed able to discriminate between positive and negative political language. But the validation exercises indicate that positive language is easier to detect than negative language, and that the scores are numerically biased toward zero. This warrants caution when interpreting sentiment scores as interval or even ratio scales in applied research.
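
    The mechanics behind the reported bias toward zero can be made concrete with a minimal dictionary scorer: when hits are normalised by the total number of tokens, texts with few sentiment-bearing words inevitably score near zero. The Python sketch below uses hypothetical stand-ins for the dictionary's entries.

        # Minimal sketch of dictionary-based sentiment scoring.
        POSITIVE = {"gut", "erfolgreich", "stark"}   # hypothetical positive entries
        NEGATIVE = {"schlecht", "krise", "schwach"}  # hypothetical negative entries

        def sentiment_score(text: str) -> float:
            """Return (pos - neg) / tokens; normalising by all tokens pulls
            scores toward zero, one source of the bias the note reports."""
            tokens = text.lower().split()
            if not tokens:
                return 0.0
            pos = sum(t in POSITIVE for t in tokens)
            neg = sum(t in NEGATIVE for t in tokens)
            return (pos - neg) / len(tokens)

        print(sentiment_score("die lage ist gut aber die krise bleibt"))  # 0.0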

    Modelo de acesso a fontes em linguagem natural no governo electrónico (A model for accessing natural-language sources in e-government)

    For e-government to truly exist, it is necessary and crucial to provide public information and documentation and to make access to it simple for citizens. A portion, not necessarily small, of these documents is unstructured and in natural language, and consequently beyond what current search systems can generally handle effectively. The thesis, then, is that it is possible to improve access to these contents using systems that process natural language and create structured information, particularly if supported by semantics. To put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model integrating the creation of structured information and its provision to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and natural-language access; (3) assessment of the usability and acceptability of querying information through the prototype, and in consequence through the conceptual model, by users in a realistic scenario, including a comparison with existing forms of access. In addition to this evaluation, and at another level more related to technology assessment than to the model, the performance of the subsystem responsible for information extraction was evaluated. The evaluation results show that the proposed model was perceived as more effective and more useful than the alternatives. Together with the prototype's performance in extracting information from documents, which is comparable to the state of the art, these results demonstrate the feasibility and the advantages, with current technology, of using natural language processing and semantic information integration to improve access to unstructured natural-language contents. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are better suited to e-government. For transparency in governance, active citizenship, and greater agility in interactions with the public administration, among other goals, citizens and businesses need quick and easy access to official information, even if it was originally created in natural language.
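
    The ontology-based extraction phase can be illustrated with a small sketch: hand-written patterns stand in for the example-driven extractors of the prototype, and the extracted values are stored as RDF so they can later be queried. The vocabulary and the sample text below are hypothetical.

        # Hedged sketch of ontology-based information extraction into RDF.
        import re
        from rdflib import Graph, Literal, Namespace, URIRef

        GOV = Namespace("http://example.org/gov#")   # hypothetical ontology
        doc = URIRef("http://example.org/doc/42")
        text = "Decree no. 17/2009 was published on 2009-03-05."

        g = Graph()
        # Each (property, pattern) pair maps one ontology slot to a textual cue.
        patterns = {
            GOV.decreeNumber: r"Decree no\. (\S+)",
            GOV.publicationDate: r"published on (\d{4}-\d{2}-\d{2})",
        }
        for prop, pattern in patterns.items():
            match = re.search(pattern, text)
            if match:
                g.add((doc, prop, Literal(match.group(1))))

        print(g.serialize(format="turtle"))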

    A Framework for Model-Driven Development of Mobile Applications with Context Support

    Model-driven development (MDD) of software systems has been a serious trend in different application domains over the last 15 years. While technologies, platforms, and architectural paradigms have changed several times since model-driven development processes were first introduced, their applicability and usefulness are discussed every time a new technological trend appears. Looking at the rapid market penetration of smartphones, software engineers are curious about how model-driven development technologies can deal with this novel and emergent domain of software engineering (SE). Indeed, software engineering of mobile applications provides many challenges that model-driven development can address. Model-driven development uses a platform-independent model as a crucial artifact. Such a model usually follows a domain-specific modeling language and separates the business concerns from the technical concerns. These platform-independent models can be reused for generating native program code for several mobile software platforms. However, a major drawback of model-driven development is that infrastructure developers must provide a fairly sophisticated model-driven development infrastructure before mobile application developers can create mobile applications in a model-driven way. Hence, the first part of this thesis deals with designing a model-driven development infrastructure for mobile applications. We follow a rigorous design process comprising a domain analysis, the design of a domain-specific modeling language, and the development of the corresponding model editors. To ensure that the code generators produce high-quality application code and that the resulting mobile applications follow a proper architectural design, we analyze several representative reference applications beforehand. Thus, the reader will get an insight into both the features of mobile applications and the steps required to design and implement a model-driven development infrastructure. As a result of the domain analysis and the analysis of the reference applications, we identified context-awareness as a further important feature of mobile applications. Current software engineering tools do not sufficiently support the design and implementation of context-aware mobile applications. Although these tools (e.g., middleware approaches) support the definition and collection of contextual information, the adaptation of the mobile application must often be implemented by hand, at a low abstraction level, by the mobile application developers. Thus, the second part of this thesis demonstrates how context-aware mobile applications can be designed more easily by using a model-driven development approach. Techniques such as model transformation and model interpretation are used to adapt mobile applications to different contexts at design time or at runtime. Moreover, model analysis and model-based simulation help mobile application developers to evaluate a designed mobile application (i.e., an app model), with respect to certain contexts, prior to its generation and deployment. We demonstrate the usefulness and applicability of the model-driven development infrastructure we developed with seven case examples, which showcase the design of mobile applications in different domains. We demonstrate the scalability of our model-driven development infrastructure with several performance tests focusing on the generation time of mobile applications as well as their runtime performance. Moreover, the usability was successfully evaluated during several hands-on training sessions with real mobile application developers of different skill levels.
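
    The central MDD mechanism described here, one platform-independent model feeding code generators for several mobile platforms, can be sketched in a few lines of Python. The model schema and the templates below are hypothetical illustrations, not the DSL developed in the thesis.

        # Hedged sketch of template-based code generation from a
        # platform-independent app model.
        from dataclasses import dataclass

        @dataclass
        class Screen:            # platform-independent model element
            name: str
            title: str

        ANDROID = 'class {name}Activity : Activity() {{ /* title: "{title}" */ }}'
        IOS = 'class {name}ViewController: UIViewController {{ /* "{title}" */ }}'

        def generate(model: list[Screen], template: str) -> str:
            """One model, many targets: the same screens feed every template."""
            return "\n".join(template.format(name=s.name, title=s.title) for s in model)

        app_model = [Screen("Login", "Sign in"), Screen("Feed", "News")]
        print(generate(app_model, ANDROID))
        print(generate(app_model, IOS))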

    Prometheus: a generic e-commerce crawler for the study of business markets and other e-commerce problems

    The continuous social and economic development has led over time to an increase in consumption, as well as to greater demand from consumers for better and cheaper products. Hence, the selling price of a product plays a fundamental role in the consumer's purchase decision. In this context, online stores must carefully analyse and define the best price for each product, based on several factors such as the production/acquisition cost, the positioning of the product (e.g. as an anchor product), and the strategies of competing companies. The work done by market analysts has changed drastically over the last years. As the number of websites increases exponentially, the number of e-commerce websites has also grown. Web page classification is becoming more important in fields like Web mining and information retrieval. Traditional classifiers are usually hand-crafted and non-adaptive, which makes them inappropriate for use in a broader context. We introduce an ensemble of methods, and a subsequent study of their results, to create a more generic and modular crawler and scraper for detecting and extracting information from e-commerce web pages. The collected information may then be processed and used in the pricing decision. This framework goes by the name Prometheus and has the goal of extracting knowledge from e-commerce websites. The process requires crawling an online store and gathering product pages. This implies that, given a web page, the framework must be able to determine whether it is a product page. To achieve this, we classify pages into three categories: catalogue, product, and "spam". The page classification stage was addressed based on the HTML text as well as on the visual layout, featuring both traditional methods and deep learning approaches. Once a set of product pages has been identified, we proceed to the extraction of the pricing information. This is not a trivial task, due to the disparity of approaches used to create web pages. Furthermore, most product pages are dynamic in the sense that they are truly a page for a family of related products. For instance, in a shoe store, a particular model is probably available in a number of sizes and colours. Such a model may be displayed in a single dynamic web page, making it necessary for our framework to explore all the relevant combinations. This process is called scraping and is the last stage of the Prometheus framework.
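
    The crawl-classify-scrape pipeline can be sketched with simple heuristics standing in for the traditional and deep-learning classifiers of the thesis. In the hedged Python sketch below, the URL, the cues used for classification, and the price pattern are all hypothetical.

        # Sketch of Prometheus-style crawling: fetch a page, classify it as
        # catalogue / product / spam, and scrape the price from product pages.
        import re
        import requests

        PRICE = re.compile(r"(\d+[.,]\d{2})\s*(?:€|EUR|\$)")

        def classify(html: str) -> str:
            if "add to cart" in html.lower() and PRICE.search(html):
                return "product"
            if html.lower().count("href") > 50:   # link-heavy: likely a catalogue
                return "catalogue"
            return "spam"

        def scrape(url: str) -> dict:
            html = requests.get(url, timeout=10).text
            result = {"url": url, "kind": classify(html)}
            if result["kind"] == "product":
                result["price"] = PRICE.search(html).group(1)
            return result

        print(scrape("http://example.org/shop/shoe-42"))  # hypothetical store page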

    Mediating skills on risk management for improving the resilience of Supply Networks by developing and using a serious game

    Given their importance, the need for resilience and for risk management within supply networks means that engineering students need a solid understanding of these issues. An innovative way of meeting this need is through the use of serious games. Serious games allow an active experience of how different factors influence the flexibility, vulnerability, and capabilities of supply networks, and allow students to apply knowledge and methods acquired from theory. This supports their ability to understand, analyse, and evaluate how different factors contribute to resilience. The experience gained within the game will contribute to the students' ability to construct new knowledge based on their active observation of, and reflection on, the environment when they later work in a dynamic industrial environment. This game, Beware, was developed for use in a blended learning environment. It is part of a course for engineering master's students at the University of Bremen. It was found that the game was effective in mediating the topic of risk management to the students, especially in supporting their ability to apply methods, to analyse the different interactions and the gameplay, and to assess how their decision-making affected the simulated network.

    Spatial and Temporal Sentiment Analysis of Twitter data

    The public has used Twitter worldwide to express opinions. This study focuses on the spatio-temporal variation of georeferenced Tweets' sentiment polarity, with a view to understanding how opinions evolve on Twitter over space and time and across communities of users. More specifically, the question this study tested is whether sentiment polarity on Twitter exhibits specific time-location patterns. The aim of the study is to investigate the spatial and temporal distribution of georeferenced Twitter sentiment polarity within a 1 km buffer around the Curtin Bentley campus boundary in Perth, Western Australia. Tweets posted on campus were assigned to six spatial zones and four time periods. A sentiment analysis was then conducted for each zone using the sentiment analyser tool in the Starlight Visual Information System software. The Feature Manipulation Engine was employed to convert non-spatial files into spatial and temporal feature classes. The spatial and temporal distribution of Twitter sentiment polarity patterns was mapped using Geographic Information Systems (GIS). Some interesting results were identified. For example, the highest percentage of positive Tweets occurred in the social science area, while the science and engineering and dormitory areas had the highest percentages of negative postings. The number of negative Tweets increases in the library and the science and engineering areas as the end of the semester approaches, reaching a peak around the exam period, while the percentage of negative Tweets drops at the end of the semester in the entertainment and sport and dormitory areas. This study provides some insights into understanding students' and staff's sentiment variation on Twitter, which could be useful for university teaching and learning management.
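
    The zoning step of the analysis can be sketched without GIS software: assign each georeferenced Tweet to a named campus zone and aggregate sentiment polarity per zone. The zone rectangles, coordinates, and polarity values in the Python sketch below are hypothetical placeholders for the Bentley campus data.

        # Hedged sketch: assign Tweets to spatial zones, aggregate polarity.
        from collections import defaultdict

        # zone name -> (min_lon, min_lat, max_lon, max_lat), hypothetical bounds
        ZONES = {
            "library":   (115.893, -32.007, 115.895, -32.005),
            "dormitory": (115.896, -32.004, 115.898, -32.002),
        }

        def zone_of(lon: float, lat: float) -> str:
            for name, (x0, y0, x1, y1) in ZONES.items():
                if x0 <= lon <= x1 and y0 <= lat <= y1:
                    return name
            return "off-campus"

        tweets = [  # (lon, lat, polarity in {-1, 0, +1}), hypothetical sample
            (115.894, -32.006, 1), (115.894, -32.006, -1), (115.897, -32.003, -1),
        ]

        counts = defaultdict(lambda: {"pos": 0, "neg": 0})
        for lon, lat, polarity in tweets:
            zone = zone_of(lon, lat)
            if polarity > 0:
                counts[zone]["pos"] += 1
            elif polarity < 0:
                counts[zone]["neg"] += 1

        print(dict(counts))  # {'library': {'pos': 1, 'neg': 1}, 'dormitory': ...}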