
    A Business Intelligence Solution, based on a Big Data Architecture, for processing and analyzing the World Bank data

    The rapid growth in data volume and complexity has necessitated the adoption of advanced technologies to extract valuable insights for decision-making. This project addresses that need by developing a comprehensive framework that combines Big Data processing, analytics, and visualization techniques to enable effective analysis of World Bank data. The problem addressed in this study is the need for a scalable and efficient Business Intelligence solution that can handle the vast amounts of data generated by the World Bank. To that end, a Big Data architecture is implemented on a real use case for the International Bank for Reconstruction and Development. The findings demonstrate the effectiveness of the proposed solution. Through the integration of Apache Spark and Apache Hive, data is processed with Extract, Transform and Load (ETL) techniques, allowing for efficient data preparation. Apache Kylin enables the construction of a multidimensional model, facilitating fast, interactive queries on the data. Moreover, data visualization techniques are employed to create intuitive and informative visual representations of the analysed data. The implemented framework shows improved scalability, performance, and flexibility compared to traditional approaches. In conclusion, this bachelor thesis presents a Business Intelligence solution based on a Big Data architecture for processing and analysing World Bank data; its findings emphasize the importance of scalable and efficient data processing techniques, multidimensional modelling, and data visualization for deriving valuable insights. The application of these techniques demonstrates the potential of Big Data Business Intelligence solutions in addressing the challenges of large-scale data analysis.
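    The Extract, Transform and Load steps described in the abstract can be sketched as follows. This is a minimal illustration in plain Python (the thesis itself uses Apache Spark and Apache Hive); the indicator names and records below are hypothetical.

    ```python
    # Minimal ETL sketch: plain Python dicts stand in for Spark DataFrames.

    def extract(raw_rows):
        """Extract: parse raw CSV-like tuples into records."""
        fields = ("country", "indicator", "year", "value")
        return [dict(zip(fields, row)) for row in raw_rows]

    def transform(records):
        """Transform: drop incomplete rows and cast types."""
        cleaned = []
        for rec in records:
            if rec["value"] in ("", None):
                continue  # discard rows with missing measurements
            rec = dict(rec, year=int(rec["year"]), value=float(rec["value"]))
            cleaned.append(rec)
        return cleaned

    def load(records):
        """Load: build a small fact table keyed by (country, year)."""
        facts = {}
        for rec in records:
            facts[(rec["country"], rec["year"])] = rec["value"]
        return facts

    raw = [
        ("PT", "gdp_growth", "2020", "-8.3"),
        ("PT", "gdp_growth", "2021", "5.5"),
        ("ES", "gdp_growth", "2021", ""),  # incomplete row, dropped in transform
    ]
    facts = load(transform(extract(raw)))
    print(facts[("PT", 2021)])  # 5.5
    ```

    In the actual architecture, the loaded fact table would feed an Apache Kylin cube, which precomputes aggregations along the model's dimensions to serve interactive queries.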

    Discovering Utility-driven Interval Rules

    In artificial intelligence, high-utility sequential rule mining (HUSRM) is a knowledge discovery method that can reveal associations between events in sequences. Many methods have recently been proposed to discover high-utility sequential rules, but all of them operate on point-based sequences. Interval events, which persist for some time, are common; traditional knowledge discovery tasks on interval-event sequences mainly focus on pattern discovery, yet patterns cannot reveal the correlations between interval events well. Moreover, existing HUSRM algorithms cannot be directly applied to interval-event sequences, since the relations in interval-event sequences are much more intricate than those in point-based sequences. In this work, we propose a utility-driven interval rule mining algorithm (UIRMiner) that extracts all utility-driven interval rules (UIRs) from an interval-event sequence database. UIRMiner first introduces a numeric encoding of the relation representation, which saves considerable time in relation computation and space in relation storage. Furthermore, to shrink the search space, we propose a complement pruning strategy that incorporates a utility upper bound with the relation. Finally, extensive experiments on both real-world and synthetic datasets verify that UIRMiner is an effective and efficient algorithm. Comment: Preprint. 11 figures, 5 tables.
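    To illustrate the idea of a numeric relation encoding, the sketch below maps the temporal relation between two interval events to a single integer. The paper's exact encoding is not given in the abstract; this Allen-style mapping of endpoint comparisons is an assumption for illustration only.

    ```python
    # Encode the relation between intervals A = [a_start, a_end) and
    # B = [b_start, b_end) as one integer, so relations can be compared
    # and stored cheaply instead of as symbolic labels.

    def relation_code(a_start, a_end, b_start, b_end):
        """Pack two endpoint comparisons into a code in 0..8.

        Each comparison (start vs start, end vs end) yields -1, 0, or +1;
        treating the pair as two base-3 digits distinguishes nine coarse
        interval relations with a single integer.
        """
        def sign(x):
            return (x > 0) - (x < 0)
        s = sign(a_start - b_start)   # -1: A starts first, 0: same, +1: B starts first
        e = sign(a_end - b_end)       # -1: A ends first,   0: same, +1: B ends first
        return (s + 1) * 3 + (e + 1)  # pack the two ternary digits

    # A = [1, 4) starts and ends before B = [2, 6): code 0.
    print(relation_code(1, 4, 2, 6))  # 0
    # Identical intervals map to the middle code.
    print(relation_code(3, 5, 3, 5))  # 4
    ```

    Because equal relations compare as equal integers, such an encoding lets a miner test and hash interval relations in constant time rather than recomputing endpoint logic.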

    Queensland University of Technology: Handbook 2023

    The Queensland University of Technology handbook outlines the faculties and subject offerings available at QUT in 2023.

    Sensing the Cultural Significance with AI for Social Inclusion

    Social inclusion has been a growing goal in heritage management. Whereas the 2011 UNESCO Recommendation on the Historic Urban Landscape (HUL) called for tools of knowledge documentation, social media already functions as a platform where online communities actively involve themselves in heritage-related discussions. Such discussions happen both in “baseline scenarios”, when people calmly share their experiences of the cities they live in or travel to, and in “activated scenarios”, when radical events trigger their emotions. To organize, process, and analyse the massive unstructured, multi-modal (mainly images and texts) user-generated data from social media efficiently and systematically, Artificial Intelligence (AI) has proven indispensable. This thesis explores the use of AI in a methodological framework that includes the contributions of a larger and more diverse group of participants through user-generated data. It is an interdisciplinary study integrating methods and knowledge from heritage studies, computer science, social sciences, network science, and spatial analysis. AI models were applied, nurtured, and tested to analyse the massive information content and derive the knowledge of cultural significance perceived by online communities. The framework was tested on case study cities including Venice, Paris, Suzhou, Amsterdam, and Rome for the baseline and/or activated scenarios. The AI-based methodological framework proposed in this thesis is shown to be able to collect information in cities and map the communities' knowledge of cultural significance, fulfilling the expectations and requirements of HUL and informing future socially inclusive heritage management processes.

    Time-, Graph- and Value-based Sampling of Internet of Things Sensor Networks


    Effective Natural Language Interfaces for Data Visualization Tools

    How many Covid cases and deaths are there in my hometown? How much money was invested in renewable energy projects across states in the last 5 years? How large was the biggest investment in solar energy projects in the previous year? These and other questions are of interest to users and can often be answered by data visualization tools (e.g., COVID-19 dashboards) provided by governmental organizations or other institutions. However, users with limited expertise in data visualization tools, whether in organizations or in private life (hereafter referred to as end users), are also interested in these topics but do not necessarily know how to use these tools effectively to answer such questions. This challenge is highlighted by previous research providing evidence that, while business analysts and other experts can use data visualization tools effectively, end users with limited expertise are still impeded in their interactions. One approach to tackle this problem is natural language interfaces (NLIs), which give end users a more intuitive way of interacting with data visualization tools: end users could interact both through the graphical user interface (GUI) elements and by simply typing or speaking a natural language (NL) input. While NLIs for data visualization tools have been regarded as a promising approach to improving this interaction, two design challenges remain. First, existing NLIs for data visualization tools still target users who are familiar with the technology, such as business analysts. Consequently, the unique design end users require, one that addresses their specific characteristics and would enable their effective use of data visualization tools, is absent from existing NLIs.
    Second, developers of NLIs for data visualization tools cannot foresee all NL inputs and tasks that end users will want to perform. Consequently, errors still occur in current NLIs for data visualization tools, and end users therefore need to be enabled to continuously improve and personalize the NLI themselves by addressing these errors. However, only limited work exists that focuses on enabling end users to teach NLIs for data visualization tools how to respond correctly to new NL inputs. This thesis addresses these design challenges and provides insights into the related research questions. Furthermore, it contributes prescriptive knowledge on how to design effective NLIs for data visualization tools. Specifically, it provides insights into how data visualization tools can be extended with NLIs to improve their effective use by end users, and into how end users can be enabled to effectively teach NLIs to respond to new NL inputs. Finally, this thesis provides high-level guidance that developers and providers of data visualization tools can use as a blueprint for developing data visualization tools with NLIs for end users, and outlines future research opportunities for supporting end users in effectively using data visualization tools.
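    The core idea of an NLI for a data visualization tool, mapping a typed question to an operation over the underlying data, can be sketched as a deliberately tiny rule-based matcher. Real systems use far richer language understanding; the trigger words and dataset below are hypothetical.

    ```python
    # Toy NL-to-query mapping: pick an aggregation from trigger words,
    # then pick the data series mentioned in the question.

    DATA = {  # hypothetical investment figures per energy series
        "solar": [120, 340, 95],
        "wind": [210, 180],
    }

    def answer(nl_query):
        """Map a natural-language question to an aggregate over DATA."""
        q = nl_query.lower()
        # choose the aggregation task from trigger words
        if "largest" in q or "biggest" in q:
            agg = max
        elif "total" in q or "how much" in q:
            agg = sum
        else:
            return None  # unrecognized task: a teaching dialog could start here
        # choose the data series the question refers to
        for series, values in DATA.items():
            if series in q:
                return agg(values)
        return None

    print(answer("How large was the biggest investment in solar projects?"))  # 340
    print(answer("What was the total investment in wind projects?"))  # 390
    ```

    The `None` branch marks exactly the gap the second design challenge describes: when no rule matches, a teachable NLI would ask the end user to demonstrate the intended mapping and store it for future queries.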

    Development of a Business Intelligence functionality management system on a low-code platform

    Data is currently one of an organization's most important and critical assets, and its exploration and analysis is a valuable support for decision-making. Data volume is growing exponentially, and timely access to this information can make all the difference in the organizational context. Organizations must therefore be agile and frequently make complex, less intuitive, information-based decisions, whether strategic, tactical, or operational. In this master's thesis in Software Engineering, a Business Intelligence (BI) functionality management system was developed to support the strategic management of the open-source DISME ("Dynamic Information System Modeler and Executer") software developed in the Enterprise Engineering Lab at ARDITI. The system integrates dashboards and data analysis and exploration components, providing users with a simple and user-friendly interface. The results of the study demonstrated the system's effectiveness in integrating and managing different BI tools, resulting in more informed decision-making. This solution is well suited to organizations looking to integrate BI capabilities, especially on low-code platforms, providing a scalable and functional solution.

    Intelligent computing : the latest advances, challenges and future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing followed separate paths of evolution and development for a long time but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing.