
    Integration of decision support systems to improve decision support performance

    Decision support systems (DSS) are a well-established research and development area. Traditional isolated, stand-alone DSSs have recently been facing new challenges. To improve the performance of DSS and meet these challenges, research has been actively carried out to develop integrated decision support systems (IDSS). This paper reviews current research efforts on the development of IDSS. The focus of the paper is on the integration aspect of IDSS, viewed from multiple perspectives, and on the technologies that support this integration. More than 100 papers and software systems are discussed. Current research efforts and the development status of IDSS are explained, compared, and classified. In addition, future trends and challenges in integration are outlined. The paper concludes that, by addressing integration, better support will be provided to decision makers, with the expectation of both better decisions and improved decision-making processes.

    Types to the rescue: verification of REST APIs Consumer Code

    Master's thesis, Informatics Engineering (Software Engineering), Universidade de Lisboa, Faculdade de Ciências, 2019. Software architectures are fundamental to the development of reliable, scalable, and maintainable software. With the creation and growth of the internet came the need for software standards that allow information to be exchanged in this new environment. Among the standards that emerged, the SOAP protocol and the REST architecture are the most widely used. Over the last decades, and owing to the enormous growth of the World Wide Web, the REST architecture has established itself as the most important and the most used by the community. REST (Representational State Transfer) takes advantage of the characteristics of the HTTP protocol to describe the messages exchanged between clients and servers. In the REST architecture, data is represented by resources, which are identified by a unique identifier (e.g. a URI) and may have several representations (in several formats), which are the concrete data of a resource. Interaction with resources is carried out using the HTTP methods: GET to obtain a resource, POST to add a new resource, PUT to update a resource, and DELETE to remove a resource, among others, these being the main methods for CRUD applications. RESTful applications, that is, applications that provide their services through the REST architecture, must specify their services clearly so that clients can use them without errors. To this end, several REST API specification languages exist, such as the Open API Specification or API Blueprint, in which the operations provided by a service can be formally described, including the format of each operation's requests and the respective responses. However, these languages are limited in the formal conditions that can be placed on request parameters and in the impact those parameters have on the format and content of the response. For this reason, a new specification language for REST applications, HeadREST, was introduced, adding the expressiveness needed to fill the gaps left by the other languages. This expressiveness comes from the use of refinement types, which make it possible to restrict the values of a given type. In addition, an operation is introduced that checks whether a given expression belongs to a given type. In HeadREST, each operation is specified using one or more assertions. Each assertion consists of an HTTP method, a URI template for the operation, a precondition that defines the conditions under which the operation is accepted, and a postcondition that establishes the results of the operation when the precondition is met. These conditions make it possible to express the data sent in requests and received in responses, as well as the state of the set of resources before and after the REST request. Because of the use of refinement types, the subtyping relation cannot be resolved syntactically when validating a HeadREST specification. A semantic approach is therefore needed: the subtyping relation is translated into first-order logic formulas, and an SMT solver is then used to solve the formulas and thus decide the subtyping relation. On the other hand, it is also important to ensure that calls to REST APIs comply with their specifications.
Mainstream programming languages cannot guarantee that calls to a REST service conform to the service specification, namely that the URL of a call is valid and that the request and response are well formed with respect to the values exchanged. As a result, a client only finds out at run time whether its calls are correct. Few solutions exist for static analysis of this kind of call (RESType is a rare example), and they tend to be limited and tied to a single specification language. Moreover, REST service clients are mostly developed in JavaScript, which offers weak static analysis, further aggravating the problem. As a first step towards solving this problem, the SafeScript language was developed: a subset of JavaScript equipped with a strong type system. The type system is highly expressive thanks to the addition of refinement types and of an operator that checks whether an expression belongs to a type. SafeScript features flow typing, that is, the type of an expression depends on its location in the control flow of the program. As in HeadREST, type validation cannot be done by simple syntactic analysis. In this case, however, the language is imperative and flow-typed, so the same approach of direct translation to an SMT solver is not trivial. Type validation is therefore done by translating SafeScript code into the Boogie intermediate language, where the required validations are expressed as assertions; Boogie internally uses the Z3 SMT solver to discharge the assertions semantically. Thanks to this semantic validation, the SafeScript compiler can statically detect several common runtime errors, such as division by zero or out-of-bounds array accesses, that cannot be detected by similar languages such as TypeScript. SafeScript compiles to JavaScript so that it can be used alongside it. Thanks to its expressive type system, the SafeScript program validator is also a static verifier: it can prove that a program meets a given specification described with refinement types. In this work, the proving capability of the SafeScript validator was demonstrated by solving some of the challenges proposed by the Verification Benchmarks Challenge. Building on SafeScript, the SafeRESTScript extension was developed, which adds REST requests to the SafeScript syntax and validates them statically against a HeadREST specification. For each REST call, two main validations are performed. First, it is checked whether the URL is a valid address of the service for the HTTP method of the request, that is, whether the specification contains a triple with the request's method and URL. Then, with the imported HeadREST specification translated to Boogie, it is checked whether the REST calls comply with the triples of the specification, namely, if the preconditions hold then the postconditions must hold as well. For example, if a postcondition whose corresponding precondition is true for a given call asserts that the response body contains an object with an id field, then an access to that field in the response body is validated.
As an illustrative example of the language's capabilities, this work developed a SafeRESTScript client of the REST API of the well-known GitHub repository service. Both languages have a compiler and an editor available as an Eclipse IDE plug-in, in addition to a command-line version. The two languages still have several limitations, so much work remains to be done. However, SafeScript and SafeRESTScript do not aim to be production languages, but rather to contribute to improving the static analysis of programs and to show that it is possible to support the reliable development of client code for REST services. REST is the architectural style most used on the web to exchange data. RESTful applications must be well documented so that clients can use their services without doubts or errors. There are several specification languages for describing REST APIs, e.g. the Open API Specification, but they lack the expressiveness to describe the exchanged data. The HeadREST specification language was introduced to address this gap, providing an expressive type system that allows the request and response formats of a service endpoint to be described rigorously. On the other hand, it is also important to ensure that REST calls in client code meet the service specification. This challenge is all the more important considering that most REST clients are written in JavaScript, a weakly typed language. To address this problem, we first developed SafeScript, a subset of JavaScript equipped with a strong type system. SafeScript has an expressive type system thanks to refinement types and to an operator that checks whether an expression belongs to a type. A semantic subtyping analysis is necessary; type validation is done by translating the code to the Boogie intermediate language, which uses the Z3 SMT solver for the semantic evaluation. SafeScript compiles directly to JavaScript. SafeRESTScript is an extension of SafeScript that adds REST calls, making it a client-side language for consuming REST services. It uses HeadREST specifications to verify REST calls: whether the URL of a call is a valid endpoint and whether the data exchanged match the pre- and postconditions declared in the specification. In creating these new languages, we do not intend them to be production languages, but to show that it is possible to contribute better verification and correctness to an area where software reliability is weak.
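    The abstract above describes the discipline SafeRESTScript enforces statically: a REST call must target a (method, URL) pair declared in the HeadREST specification, and a response field may only be accessed when a postcondition whose precondition holds guarantees it. The sketch below illustrates that idea in Python with runtime checks; it is not HeadREST or SafeRESTScript syntax, and the endpoint, field names, and toy specification structure are assumptions made for illustration.

```python
# A minimal sketch (not HeadREST or SafeRESTScript syntax) of the discipline the
# thesis enforces statically: a call is only accepted if its (method, URL) pair
# appears in the specification, and a response field may only be accessed if the
# postcondition tied to a satisfied precondition guarantees it. The endpoint,
# fields, and spec structure below are illustrative assumptions.
import requests  # third-party HTTP client, assumed available

# Toy "specification": triples of (method, URI template, precondition, postcondition).
SPEC = [
    {
        "method": "GET",
        "template": "/users/{username}",
        "pre": lambda req: isinstance(req.get("username"), str) and req["username"] != "",
        # Postcondition: a 200 response body is an object carrying an integer 'id'.
        "post": lambda resp: resp.status_code != 200
                or isinstance(resp.json().get("id"), int),
    }
]

def checked_get(base_url: str, template: str, **params):
    """GET only if the spec lists the endpoint and the precondition holds,
    then check the postcondition before handing the response to the caller."""
    triple = next((t for t in SPEC
                   if t["method"] == "GET" and t["template"] == template), None)
    if triple is None:
        raise ValueError(f"GET {template} is not an endpoint of the specified service")
    if not triple["pre"](params):
        raise ValueError("precondition of the specification is not satisfied")
    resp = requests.get(base_url + template.format(**params))
    assert triple["post"](resp), "response violates the specified postcondition"
    return resp

# Usage: the access to body["id"] is justified by the checked postcondition.
# resp = checked_get("https://api.github.com", "/users/{username}", username="octocat")
# print(resp.json()["id"])
```

    The tools described in the thesis discharge these obligations at compile time, via translation to Boogie and the Z3 SMT solver, rather than at run time as in this sketch.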

    Use of Human–Computer Interaction Devices and Web 3.0 Skills Among Engineers

    Despite massive company investments in human–computer interaction devices and software, such as Web 3.0 technologies, engineers are not demonstrating measurable performance and productivity increases. There is a lack of knowledge and understanding of what motivates engineers to use Web 3.0 technologies, including the semantic web and cloud applications, for increased performance. The purpose of this quantitative correlational study was to investigate whether the use of human–computer interaction devices predicts Web 3.0 skills among engineers. Solow's information technology productivity paradox was the theoretical foundation for the study. Convenience sampling yielded a sample of 214 participants from metropolitan areas of Georgia. Multiple linear regression was used to develop a predictive model and evaluate the influence on Web 3.0 skills of 10 independent variables measuring self-reported reliance on and competence with five human–computer interaction devices, two aggregate indices of reliance and competence, and two-factor interactions. Results indicated a significant linear relationship between several predictors (laptop reliance, tablet reliance, desktop competence, wearable competence, and five interactions) and the dependent variable (Web 3.0 skills). The results may enable engineering managers to make more informed, strategic decisions regarding the types of technology to invest in to improve engineer skills and productivity. The results of this study have potential implications for positive social change by helping engineering organizations overcome the information technology productivity paradox and reap the benefits of engineers who are more motivated and skilled.
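    As a rough illustration of the analysis described above, the sketch below fits a multiple linear regression with main effects and two-factor interaction terms using the statsmodels formula interface. The column names and data are hypothetical; the study's actual survey items and measurements are not reproduced here.

```python
# A minimal sketch of fitting a multiple linear regression with two-factor
# interaction terms, as the study describes. Column names and data are
# hypothetical; the actual survey instrument and variables are not shown here.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: self-reported reliance/competence per device (1-5)
# and a Web 3.0 skills score as the dependent variable.
df = pd.DataFrame({
    "laptop_reliance":     [4, 5, 3, 2, 5, 4, 3, 5, 2, 4],
    "tablet_reliance":     [2, 3, 4, 1, 5, 2, 3, 4, 1, 3],
    "desktop_competence":  [5, 4, 3, 4, 5, 3, 2, 4, 3, 5],
    "wearable_competence": [1, 2, 3, 1, 4, 2, 3, 5, 1, 2],
    "web3_skills":         [3.2, 4.1, 3.0, 2.2, 4.8, 3.1, 2.7, 4.4, 2.0, 3.6],
})

# Main effects plus selected two-factor interactions (the ':' operator adds an
# interaction term in the statsmodels/patsy formula syntax).
model = smf.ols(
    "web3_skills ~ laptop_reliance + tablet_reliance + desktop_competence"
    " + wearable_competence + laptop_reliance:desktop_competence"
    " + tablet_reliance:wearable_competence",
    data=df,
).fit()

print(model.summary())  # coefficients, p-values, and R-squared of the fitted model
```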

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
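    The abstract does not detail the model, so the following is only a minimal sketch of the kind of fuzzy appraisal used by FLAME-style models: in-game observations are fuzzified, a small rule base maps them to satisfaction levels, and a crisp estimate is obtained by defuzzification. The membership functions, rules, and input variables are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of fuzzy-logic emotion appraisal in the spirit of FLAME-style
# models: game events are fuzzified, rules map them to satisfaction levels, and
# a crisp estimate is produced by defuzzification. The membership functions,
# rules, and event variables are illustrative assumptions, not the authors' model.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_satisfaction(progress_rate: float, damage_taken: float) -> float:
    """Estimate satisfaction in [0, 1] from two normalized in-game observations."""
    # Fuzzification of the inputs.
    progress_good = tri(progress_rate, 0.3, 1.0, 1.7)   # ~1 when progressing well
    progress_poor = tri(progress_rate, -0.7, 0.0, 0.7)  # ~1 when stalled
    damage_high   = tri(damage_taken, 0.3, 1.0, 1.7)
    damage_low    = tri(damage_taken, -0.7, 0.0, 0.7)

    # Rule base (min for AND); each rule supports a satisfaction level.
    rules = [
        (min(progress_good, damage_low), 0.9),   # doing well, unhurt -> satisfied
        (min(progress_good, damage_high), 0.6),  # winning but hurt -> moderately satisfied
        (min(progress_poor, damage_high), 0.1),  # losing and hurt -> frustrated
        (min(progress_poor, damage_low), 0.4),   # stalled but safe -> bored
    ]

    # Weighted-average (centroid-style) defuzzification.
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total > 0 else 0.5

# Example: good progress and low damage yield a high estimated satisfaction.
print(round(estimate_satisfaction(progress_rate=0.8, damage_taken=0.2), 2))
```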

    An Agent-based Approach for Improving the Performance of Distributed Business Processes in Maritime Port Community

    In recent years, the concept of the “port community” has been adopted by the maritime transport industry in order to achieve a higher degree of coordination and cooperation amongst the organizations involved in the transfer of goods through the port area. The business processes of the port community supply chain form a complicated process involving several process steps, multiple actors, and numerous information exchanges. One of the widely used applications of ICT in ports is the Port Community System (PCS), which is implemented in ports to reduce paperwork and to facilitate the information flow related to port operations and cargo clearance. However, existing PCSs are limited in functionalities that facilitate the management and coordination of material, financial, and information flows within the port community supply chain. This research programme addresses the use of agent technology to introduce business process management functionalities, which are vital for port communities, with the aim of enhancing the performance of the port community supply chain. The investigation begins with an examination of the current state from both a business perspective and a technical perspective. The business perspective focuses on understanding the nature of the port community, its main characteristics, and its problems; accordingly, a number of requirements are identified as essential amendments to information systems in seaports. The technical perspective focuses on technologies suitable for solving problems in business process management within port communities. The research focuses on three technologies: workflow technology, agent technology, and service orientation. An analysis of information systems across port communities enables an examination of current PCSs with regard to their coordination and workflow management capabilities. The most important finding of this analysis is that the performance of the business processes, and in particular the performance of the port community supply chain, is not within the scope of the examined PCSs. Accordingly, the Agent-Based Middleware for Port Community Management (ABMPCM) is proposed as an approach to providing essential functionalities that would facilitate collaborative planning and business process management. As a core component of the ABMPCM, the Collaborative Planning Facility (CPF) is described in further detail. A CPF prototype has been developed as an agent-based system for the domain of inland transport of containers to demonstrate its practical effectiveness. To evaluate the practical application of the CPF, a simulation environment is introduced to facilitate the evaluation process. The research started with the definition of a multi-agent simulation framework for the port community supply chain. A prototype was then implemented and employed for the evaluation of the CPF. The results of the simulation experiments demonstrate that our agent-based approach effectively enhances the performance of business processes in the port community.
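    As a rough sketch of the kind of agent-based coordination the CPF targets for inland container transport, the example below has a planner agent collect bids from carrier agents and award each task to the cheapest bidder. The agent roles, messages, and cost model are assumptions for illustration and do not reproduce the ABMPCM or CPF design.

```python
# A minimal sketch of agent-based coordination for inland container transport:
# a planner agent requests transport of containers and carrier agents bid for
# each task, with the cheapest bid winning. Roles, message shapes, and the cost
# model are illustrative assumptions, not the ABMPCM/CPF design.
from dataclasses import dataclass

@dataclass
class TransportTask:
    container_id: str
    distance_km: float

class CarrierAgent:
    def __init__(self, name: str, cost_per_km: float, capacity: int):
        self.name, self.cost_per_km, self.capacity = name, cost_per_km, capacity

    def bid(self, task: TransportTask):
        """Return a price offer for the task, or None if no capacity is left."""
        if self.capacity <= 0:
            return None
        return task.distance_km * self.cost_per_km

    def award(self, task: TransportTask) -> str:
        self.capacity -= 1
        return f"{self.name} transports {task.container_id}"

class PlannerAgent:
    """Collects bids from carriers and awards each task to the cheapest bidder."""
    def __init__(self, carriers):
        self.carriers = carriers

    def plan(self, tasks):
        schedule = []
        for task in tasks:
            offers = [(c.bid(task), c) for c in self.carriers]
            offers = [(price, c) for price, c in offers if price is not None]
            if not offers:
                schedule.append(f"{task.container_id}: no carrier available")
                continue
            price, carrier = min(offers, key=lambda o: o[0])
            schedule.append(f"{carrier.award(task)} for {price:.2f}")
        return schedule

carriers = [CarrierAgent("truck-1", 1.2, 2), CarrierAgent("barge-1", 0.8, 1)]
tasks = [TransportTask("CONT-001", 40), TransportTask("CONT-002", 55),
         TransportTask("CONT-003", 30)]
print("\n".join(PlannerAgent(carriers).plan(tasks)))
```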

    User-Specific Bicluster-based Collaborative Filtering

    Master's thesis, Data Science, Universidade de Lisboa, Faculdade de Ciências, 2020. Collaborative Filtering is one of the most popular and successful approaches for Recommender Systems. However, some challenges limit the effectiveness of Collaborative Filtering approaches when dealing with recommendation data, mainly due to the vast amounts of data and their sparse nature. In order to improve the scalability and performance of Collaborative Filtering approaches, several authors have proposed successful approaches combining Collaborative Filtering with clustering techniques. In this work, we study the effectiveness in Collaborative Filtering of biclustering, an advanced clustering technique that groups rows and columns simultaneously. When applied to the classic user-item (U-I) interaction matrices, biclustering considers the duality relations between users and items, creating clusters of users who are similar under a particular group of items. We propose USBCF, a novel biclustering-based Collaborative Filtering approach that creates user-specific models to improve the scalability of traditional CF approaches. Using a real-world dataset, we conduct a set of experiments to objectively evaluate the performance of the proposed approach, comparing it against baseline and state-of-the-art Collaborative Filtering methods. Our results show that the proposed approach successfully overcomes the main limitation of the previously proposed state-of-the-art biclustering-based Collaborative Filtering approach (BBCF), since BBCF can only output predictions for a small subset of the system's users and items (lack of coverage). Moreover, USBCF produces rating predictions with quality comparable to the state-of-the-art approaches.
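    The abstract does not spell out the USBCF algorithm, so the sketch below only illustrates the general bicluster-based prediction idea it builds on: group users and items simultaneously, then predict a missing rating from the known ratings inside the corresponding bicluster. It uses scikit-learn's SpectralCoclustering as a stand-in biclustering method and a tiny hypothetical rating matrix.

```python
# A generic sketch of bicluster-based rating prediction (not the USBCF algorithm):
# the user-item matrix is biclustered so that rows and columns are grouped
# simultaneously, and a missing rating is predicted from the known ratings of the
# user's row cluster restricted to the item's column cluster.
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Tiny user-item rating matrix; 0 marks an unknown rating.
R = np.array([
    [5, 4, 0, 1, 1],
    [4, 5, 4, 1, 2],
    [5, 5, 4, 2, 1],
    [1, 2, 1, 5, 4],
    [2, 1, 1, 4, 5],
])

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(R)
row_labels, col_labels = model.row_labels_, model.column_labels_

def predict(user: int, item: int) -> float:
    """Predict R[user, item] as the mean of the known ratings inside the user's
    row cluster and the item's column cluster; fall back to the global mean."""
    rows = row_labels == row_labels[user]
    cols = col_labels == col_labels[item]
    block = R[np.ix_(rows, cols)]
    known = block[block > 0]
    return float(known.mean()) if known.size else float(R[R > 0].mean())

print(round(predict(0, 2), 2))  # estimate user 0's unknown rating for item 2
```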

    Cognitive Maps


    Strategies to Improve Data Quality for Forecasting Repairable Spare Parts

    Poor input data quality used in repairable spare parts forecasting by aerospace small and midsize enterprise (SME) suppliers results in poor inventory practices that manifest in higher costs and critical supply shortage risks. Guided by data quality management (DQM) theory as the conceptual framework, the purpose of this exploratory multiple case study was to identify the key strategies that aerospace SME repairable spares suppliers use to maximize the quality of the input data used in forecasting repairable spare parts. The multiple case study comprised a census sample of 6 forecasting business leaders from aerospace SME repairable spares suppliers located in the states of Florida and Kansas. Data were collected via semistructured interviews with the consenting participants and supporting documentation from organizational websites. Eight core themes emanated from the application of the content data analysis process coupled with methodological triangulation: establish data governance, identify quality forecast input data sources, develop sustainable relationships and collaboration with customers and vendors, utilize a strategic data quality system, conduct continuous input data quality analysis, identify input data quality measures, incorporate continuous improvement initiatives, and engage in data quality training and education. Of the 8 core themes, 6 aligned with the DQM theory's conceptual constructs while 2 surfaced as outliers. The key implications of the research for positive social change include increased situational awareness for SME forecasting business leaders seeking to enhance business practices for input data quality, forecast repairable spare parts more accurately, and attain sustainable profits.

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of Telecommunications Networks. It identifies and examines the most pressing research issues in Telecommunications and contains chapters written by leading researchers, academics, and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, quality of service, etc. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing.

    An XML Messaging Service for Mobile Devices

    In recent years, XML has been accepted as the message format for several applications. Prominent examples include SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This use of XML is understandable, as the format itself is a well-accepted standard for structured data and has excellent support in many popular programming languages, so inventing an application-specific format no longer seems worth the effort. Simultaneously with XML's rise to prominence, there has been an upsurge in the number and capabilities of various mobile devices. These devices are connected through various wireless technologies to larger networks, and a goal of current research is to integrate them seamlessly into these networks. These two developments seem to be at odds with each other. XML, as a fully text-based format, takes up more processing power and network bandwidth than binary formats would, whereas the battery-powered nature of mobile devices dictates that energy, both in processing and in transmission, be used efficiently. This thesis presents the work we have performed to reconcile these two worlds. We present a message transfer service that we have developed to address what we have identified as the three key issues: XML processing at the application level, a more efficient XML serialization format, and the protocol used to transfer messages. Our presentation includes both a high-level architectural view of the whole message transfer service and detailed descriptions of the three new components. These components consist of an API, and an associated data model, for XML processing designed for messaging applications; a binary serialization format for the data model of the API; and a message transfer protocol providing two-way messaging capability with support for client mobility. We also present relevant performance measurements for the service and its components. As a result of this work, we do not consider XML to be inherently incompatible with mobile devices. As the fixed networking world moves toward XML for interoperable data representation, so should the wireless world, to provide a better-integrated networking infrastructure. However, the problems that XML adoption brings touch all of the higher layers of application programming, so instead of concentrating simply on the serialization format we conclude that improvements need to be made in an integrated fashion across all of these layers.
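    To make the size argument concrete, the toy sketch below serializes the same small element tree both as XML text and in a naive length-prefixed binary token form and compares the byte counts. This is not the serialization format developed in the thesis, only an illustration of why a binary encoding of the XML data model can be more compact.

```python
# A toy illustration of the size argument behind a binary XML serialization:
# the same element tree is written once as XML text and once in a naive
# length-prefixed binary token form. This is not the format developed in the
# thesis, only a sketch of the idea.
import struct
import xml.etree.ElementTree as ET

def encode_binary(elem: ET.Element) -> bytes:
    """Encode an element as: name, attribute count, attributes, text, children."""
    def enc_str(s: str) -> bytes:
        data = s.encode("utf-8")
        return struct.pack(">H", len(data)) + data  # 2-byte length prefix
    out = enc_str(elem.tag)
    out += struct.pack(">B", len(elem.attrib))
    for key, value in elem.attrib.items():
        out += enc_str(key) + enc_str(value)
    out += enc_str(elem.text or "")
    out += struct.pack(">H", len(list(elem)))
    for child in elem:
        out += encode_binary(child)
    return out

# A small message, similar in spirit to an XMPP chat stanza.
msg = ET.Element("message", {"to": "alice@example.org", "type": "chat"})
body = ET.SubElement(msg, "body")
body.text = "hello"

text_form = ET.tostring(msg, encoding="utf-8")
binary_form = encode_binary(msg)
print(len(text_form), "bytes as XML text")
print(len(binary_form), "bytes in the naive binary form")
```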