13,472 research outputs found

    To Develop a Database Management Tool for Multi-Agent Simulation Platform

    Get PDF
    Recently, Agent-Based Modeling and Simulation (ABMS) has shifted from a model-driven approach to a data-driven approach. This trend aims at using the growing volume of data available from observation systems in simulation models (Edmonds and Moss, 2005; Hassan, 2009).
In a data-driven approach, the empirical data collected from the target system are used not only for the design of simulation models but also for their initialization, calibration, and the evaluation of their outputs, e.g., the water resource management and assessment system of the French Adour-Garonne Basin (Gaudou et al., 2013) and the invasion of the Brown Plant Hopper in the rice fields of the Mekong River Delta in Vietnam (Nguyen et al., 2012d). This raises the question of how to manage empirical and simulation data in such agent-based simulation platforms. The basic observation we can make is that, while the design and simulation of models have benefited from advances in computer science through popular simulation platforms such as Netlogo (Wilensky, 1999) or GAMA (Taillandier et al., 2012), this is not yet the case for data management, which is still often handled in an ad hoc manner. Data management is one of the current limitations of agent-based simulation platforms: a data management tool is needed when building agent-based simulation systems, and managing the corresponding databases is an important issue for these systems. In this thesis, I first propose a logical framework for data management in multi-agent based simulation platforms. The proposed framework, called CFBM (Combination Framework of Business intelligence and Multi-agent based platform), combines a Business Intelligence solution with a multi-agent based platform, and it serves several purposes: (1) model and execute multi-agent simulations, (2) manage the input and output data of simulations, (3) integrate data from different sources, and (4) analyze high volumes of data. Secondly, I address the need for data management in ABMS through an implementation of CFBM in the GAMA platform. This implementation also demonstrates a software architecture that combines Data Warehouse (DWH) and Online Analytical Processing (OLAP) technologies in a multi-agent based simulation system.
Finally, I evaluate CFBM for data management in the GAMA platform through the development of Brown Plant Hopper Surveillance Models (BSMs), where CFBM is used not only to manage and integrate the empirical data collected from the target system and the data produced by the simulation model, but also to calibrate and validate the models. The value of CFBM lies not only in remedying the data management weaknesses of agent-based modeling and simulation platforms, but also in supporting the development of complex simulation systems with large amounts of input and output data, following a data-driven approach.
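The abstract describes a recurring pattern: simulation outputs flow into a warehouse-style store and are then queried analytically (the OLAP role). A minimal sketch of that pattern, using SQLite and entirely hypothetical table and column names (not CFBM's actual schema):

```python
import sqlite3

# Warehouse-style fact table receiving simulation outputs.
# All names below are illustrative assumptions, not CFBM's design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE simulation_output (
        run_id INTEGER,      -- which simulation run produced the row
        step   INTEGER,      -- simulation time step
        region TEXT,         -- spatial dimension
        bph_density REAL     -- measured output (e.g. hopper density)
    )
""")

# Imagine each simulation step appending its observations:
rows = [
    (1, 0, "north", 10.0), (1, 1, "north", 12.5),
    (1, 0, "south", 8.0),  (1, 1, "south", 9.5),
    (2, 0, "north", 11.0), (2, 1, "north", 13.0),
]
conn.executemany("INSERT INTO simulation_output VALUES (?, ?, ?, ?)", rows)

# OLAP-style roll-up: average output per region across runs and steps.
for region, avg in conn.execute(
    "SELECT region, AVG(bph_density) FROM simulation_output "
    "GROUP BY region ORDER BY region"
):
    print(region, avg)
```

The same fact table can hold empirical observations alongside simulated ones, which is what makes calibration and validation queries (simulated vs. observed) straightforward in this style of architecture.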

    An Open Source Based Data Warehouse Architecture to Support Decision Making in the Tourism Sector

    Get PDF
    In this paper, an alternative tourism-oriented data warehousing architecture is proposed that makes use of recent free and open-source technologies such as Java, PostgreSQL and XML. The aim of this architecture is to support the decision-making process and to give an integrated view of the whole tourism reality in a given context (local, regional, national, etc.) without requiring large investments to acquire the necessary software.
    Tourism, Data warehousing architecture
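Data warehouses of this kind are typically organized as a star schema: a central fact table joined to dimension tables for analysis. A toy sketch of that layout, with table and column names invented for illustration (the paper's actual design is not shown in the abstract):

```python
import sqlite3

# Illustrative star schema: a fact table of overnight stays joined to
# date and location dimensions. Names are assumptions, not the paper's.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE dim_location (loc_id  INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_stays   (date_id INTEGER, loc_id INTEGER, nights INTEGER);

    INSERT INTO dim_date     VALUES (1, 2023, 7), (2, 2023, 8);
    INSERT INTO dim_location VALUES (1, 'coast'), (2, 'mountains');
    INSERT INTO fact_stays   VALUES (1, 1, 120), (1, 2, 40), (2, 1, 200), (2, 2, 30);
""")

# Decision-support query: total nights per region and month.
query = """
    SELECT l.region, d.month, SUM(f.nights)
    FROM fact_stays f
    JOIN dim_date d     ON f.date_id = d.date_id
    JOIN dim_location l ON f.loc_id  = l.loc_id
    GROUP BY l.region, d.month
    ORDER BY l.region, d.month
"""
for region, month, nights in conn.execute(query):
    print(region, month, nights)
```

SQLite stands in here for PostgreSQL purely to keep the sketch self-contained; the schema and query carry over unchanged.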


    Business Intelligence and Big Data in Higher Education: Status of a Multi-Year Model Curriculum Development Effort for Business School Undergraduates, MS Graduates, and MBAs

    Get PDF
    Business intelligence (BI), “big data”, and analytics solutions are being deployed in an increasing number of organizations, yet recent predictions point to severe shortages in the number of graduates prepared to work in the area. A new model curriculum is needed that can properly introduce BI and analytics topics into existing programs. That curriculum needs to incorporate current big data developments, even as dedicated analytics programs become more prominent throughout the world. This paper contributes to the BI field by providing the first BI model curriculum guidelines. It focuses on adding appropriate elective courses to existing curricula in order to foster the development of BI skills, knowledge, and experience for undergraduate majors, master of science in business information systems students, and MBAs. A new curriculum must strike a delicate balance between covering each topic at a level appropriate to students’ expertise and background and reflecting industry workforce needs. Our approach to model curriculum development for business intelligence courses follows the structure of Krathwohl’s (2002) revised taxonomy, and we incorporated multiple rounds of feedback from faculty and industry experts. Overall, this long-term effort resulted in model curriculum guidelines.

    IS THERE STILL A NEED FOR MULTIDIMENSIONAL DATA MODELS?

    Get PDF
    Organizational and technical changes challenge established standards of data warehouse design and are prompting a redesign of contemporary Business Intelligence and Analytics environments. As a result, the use of multidimensional models for performance-oriented reasons can no longer be taken for granted: simple data models or operational structures are emerging as a basis for complex analyses. The paper therefore conducts a laboratory experiment to examine, from a non-technical perspective, the influence of different data modeling types on the representational information quality experienced by end users. The multidimensional model is compared with the transactional model and the flat-file model. The experiment involves 78 participants and compares perceived and observed representational information quality in ad hoc analyses across data modeling types. The results indicate a higher observed quality for multidimensionally modeled data, while the type of data model does not influence end users' perception of representational information quality.

    A comprehensive IVR (Interactive Voice Response) analysis model using online analytical processing (OLAP) on a multidimensional data cube

    Get PDF
    Private Branch eXchange (PBX) systems are an indispensable tool in the business world. Telephone exchanges allow employees to make internal calls between telephones or to place calls to the external network, also known as the Public Switched Telephone Network (PSTN). With increasing Internet usage, there is interest in understanding which services are offered over IP networks. Enterprise Courier is a commercial Internet Protocol Private Branch eXchange (IP PBX) based on the open-source Asterisk web-based PBX software for Linux; it supports multiple protocols and services, such as Interactive Voice Response (IVR). Cisco Unified Communications Manager (CUCM), or CallManager, is a software-based call-processing system (IP PBX) developed by Cisco Systems. CUCM tracks all active Voice over IP (VoIP) network components, including phones, gateways, and conference bridges, among others. IVR is part of the Academic Services customer contact and ticketing system of the University of Beira Interior (UBI). IVR monitoring and analysis are essential for effective operation and resource management; in particular, multidimensional analysis of long-term data is necessary for a comprehensive understanding of trends, the quality of customer service, and the customer experience. In this paper, we propose a new IVR analysis model for large volumes of IVR data accumulated over a long period of time. The proposed IVRCube is an analysis model using online analytical processing (OLAP) on a multidimensional data cube, providing an easy and fast way to build a multidimensional IVR analysis system for comprehensive and detailed evaluation of long-term data. Its feasibility and applicability are validated, as the proposed IVRCube analysis model is implemented and applied to the Academic Services customer contact and ticketing IVR data of UBI.
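The core idea of an OLAP cube over call records can be sketched in a few lines: facts (calls) are grouped along dimensions (here month and chosen menu option) into cells holding aggregate measures. Field names and data below are illustrative only, not the IVRCube's actual schema:

```python
from collections import defaultdict

# Toy IVR call records: (month, menu option chosen, duration in seconds).
# These names and values are invented for illustration.
calls = [
    ("Jan", "enrolment", 95), ("Jan", "fees", 40), ("Jan", "enrolment", 120),
    ("Feb", "fees", 60), ("Feb", "fees", 55), ("Feb", "enrolment", 80),
]

# Group facts by their dimension coordinates.
cells = defaultdict(list)
for month, option, duration in calls:
    cells[(month, option)].append(duration)

# Each cube cell holds aggregate measures: (call count, mean duration).
cube = {dims: (len(ds), sum(ds) / len(ds)) for dims, ds in cells.items()}

# Slicing the cube along one dimension, e.g. all of January:
jan = {opt: v for (m, opt), v in cube.items() if m == "Jan"}
print(jan)
```

Real OLAP engines add hierarchies (day, month, year) and precomputed roll-ups, but the cell structure is the same.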

    Designing algorithms to aid discovery by chemical robots

    Get PDF
    Recently, automated robotic systems have become very efficient, thanks to improved coupling between sensor systems and algorithms, the latter gaining significance with the increase in computing power over the past few decades. However, intelligent automated chemistry platforms for discovery-oriented tasks need to be able to cope with the unknown, which is a profoundly hard problem. In this Outlook, we describe how recent advances in the design and application of algorithms, together with the increased amount of chemical data available and with automation and control systems, may allow more productive chemical research and the development of chemical robots able to target discovery. This is shown through examples of workflow and data processing with automation and control, and through the use of both well-established and cutting-edge algorithms, illustrated with recent studies in chemistry. Finally, several algorithms are presented in relation to chemical robots and chemical intelligence for knowledge discovery.

    Technology pillars for digital transformation of cities based on open software architecture for end2end data streaming

    Get PDF
    The goal of this paper is to expose the pillars of urban digital infrastructure and their role in the development of smart, data-driven applications in the end-to-end (e2e) model. Today, no one contests the need for a general-purpose infrastructure in the city that improves quality of life across the various types of socio-economic activity of residents. However, the digital side of fostering this activity is tied to a very dynamic and ubiquitous element of technology, and the quality and stability of solutions based on advanced technology is a key parameter of the quality of life and development that a city should provide to its residents. This pace of technology development means that our environment, including cities, is subject to a process of continuous digital transformation. Starting from examples of such transformation and top trends in technology development, we focus on transformation pillars that can ensure sustainable growth of the supply of innovation in a city. We give three examples of platforms designed for the City of Poznan. We conclude that a key factor benefiting this multidimensional transformation process is the provision of an open data-transmission and processing infrastructure with the support of which all smart-city stakeholders can pursue their social, scientific, economic, or political goals in a sustainable manner.

    A Data Quality Multidimensional Model for Social Media Analysis

    Get PDF
    Social media platforms have become a new source of useful information for companies. Ensuring the business value of social media first requires an analysis of the quality of the relevant data and then the development of practical business intelligence solutions. This paper aims at building high-quality datasets for social business intelligence (SoBI). The proposed method offers an integrated and dynamic approach to identify the relevant quality metrics for each analysis domain. It employs a novel multidimensional data model for the construction of cubes with impact measures for various quality metrics. In this model, quality metrics and indicators are organized along two main axes. The first concerns the kind of facts to be extracted, namely posts, users, and topics. The second refers to the quality perspectives to be assessed, namely credibility, reputation, usefulness, and completeness. Additionally, quality cubes include a user-role dimension so that quality metrics can be evaluated in terms of users' business roles. To demonstrate the usefulness of this approach, the authors have applied their method to two separate domains: automotive business and natural disaster management. Results show that the trade-off between quantity and quality for social media data is concentrated in a small percentage of relevant users. Thus, data filtering can easily be performed by ranking the posts according to the quality metrics identified with the proposed method. As far as the authors know, this is the first approach that integrates both the extraction of analytical facts and the assessment of social media data quality in the same framework.
    Funding for open access charge: CRUE-Universitat Jaume
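The two-axis organisation described above (fact kind × quality perspective, plus a user-role dimension) can be sketched as a cube of quality scores. The scoring function below is a placeholder invented for illustration, not the paper's actual metrics:

```python
# Axes of the quality cube, as named in the abstract.
FACTS = ("post", "user", "topic")
PERSPECTIVES = ("credibility", "reputation", "usefulness", "completeness")
ROLES = ("analyst", "marketer")  # example user-role dimension values

def score(fact, perspective, role, records):
    # Placeholder metric: fraction of records flagged acceptable for
    # this (fact, perspective, role) combination. A real system would
    # compute domain-specific indicators here.
    flags = [r.get((fact, perspective, role), False) for r in records]
    return sum(flags) / len(flags) if records else 0.0

# Toy annotated social-media records (invented data).
records = [
    {("post", "credibility", "analyst"): True},
    {("post", "credibility", "analyst"): False},
    {("post", "credibility", "analyst"): True},
]

# One cell per coordinate in the three-dimensional quality cube.
quality_cube = {
    (f, p, r): score(f, p, r, records)
    for f in FACTS for p in PERSPECTIVES for r in ROLES
}
print(quality_cube[("post", "credibility", "analyst")])
```

Ranking posts by a cell's metric, as the abstract suggests, then amounts to sorting records by their contribution to that cell.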