5 research outputs found

    ALIGNING IS CURRICULUM WITH INDUSTRY SKILL EXPECTATIONS: A TEXT MINING APPROACH

    Digitalization offers great opportunities as well as new challenges and uncertainties. In particular, students in their role as future employees will have to cope with new digital environments, which makes lifelong learning and up-to-date skills even more important than they already are. Key players in this long-term development are the universities as providers of the necessary skills and knowledge. By now, it is clear that digitalization will have a broad impact on the future conditions of universities. But are they already prepared for it? Against this backdrop, we present an approach that combines universities' offerings with the skills requested in industry job advertisements to identify potential curricular gaps at course level, gaps that arise through ongoing digitalization and, as a consequence, changing skill requirements for employees. We identify an appropriate set of methods for our project, including text mining methods, an expert survey and an interview phase for evaluation. We illustrate our approach using a large data set of German IS curricular module descriptions and job offers for IS job starters.
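    The abstract does not disclose the authors' concrete text mining pipeline; the following is only a minimal sketch of one plausible ingredient, matching module descriptions against job-ad texts with TF-IDF cosine similarity. The module and job-ad texts are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): score how well curricular module
# descriptions cover the skills mentioned in job ads, using TF-IDF vectors and
# cosine similarity. Low maximum scores for a job ad hint at a curricular gap.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

modules = [  # hypothetical module descriptions
    "Introduction to databases: relational model, SQL, normalization",
    "Business process management: modeling, BPMN, process mining",
]
job_ads = [  # hypothetical job-starter advertisements
    "IS graduate role: SQL, data warehousing, ETL, cloud databases",
    "Junior analyst: process mining, BPMN, stakeholder workshops",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(modules + job_ads)

# Similarity of each module (rows) to each job ad (columns).
sim = cosine_similarity(tfidf[: len(modules)], tfidf[len(modules):])

for j, ad in enumerate(job_ads):
    best = sim[:, j].argmax()
    print(f"Job ad {j}: best covered by module {best} (score {sim[best, j]:.2f})")
```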

    Analytical Study on Building a Comprehensive Big Data Management Maturity Framework

    Harnessing big data in organizations today yields benefits for competitive advantage: the profound insights generated are reflected in informed decision making, better business plans, and improved service delivery. Yet organizations often do not recognize how mature their big data management capabilities are, and there is no structured approach to assess and build the capabilities needed to exploit big data, one that draws a clear improvement pathway. Existing solutions lack a consistent perception of big data management capabilities, a reliable assessment, and a rigorous improvement scheme. This paper contributes an analytical study of existing key works on assessing and building big data management capabilities. Drawing on the results and gaps revealed by this analytical study, the main requirements for building a comprehensive big data management maturity framework are defined. This framework will enable organizations to assess and improve their current capabilities towards effective big data management. DOR: https://dorl.net/dor/20.1001.1.20088302.2022.20.1.13
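    The paper defines requirements for a maturity framework rather than a concrete scoring rule; the sketch below is purely illustrative. The capability dimensions, the 1-to-5 level scale, and the simple averaging rule are hypothetical assumptions, not the framework itself.

```python
# Illustrative sketch only: a toy maturity self-assessment over a few assumed
# big data management capability dimensions, scored on a 1 (initial) to
# 5 (optimized) scale, with the weakest dimension flagged as the starting
# point of an improvement pathway.
from statistics import mean

assessment = {  # hypothetical dimensions and self-assessed levels
    "data governance": 2,
    "data architecture": 3,
    "analytics capability": 2,
    "data quality management": 1,
    "organizational culture": 3,
}

overall = mean(assessment.values())
weakest = min(assessment, key=assessment.get)
print(f"Overall maturity: {overall:.1f} / 5")
print(f"Improvement pathway starts with the weakest capability: {weakest}")
```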

    Improving software project management quality through the use of analytics on project management data

    Software project management has been less effective as a result of being focused on resource management and on completing projects within allocated resources and other constraints. There has not been much focus on improving software project management quality through improved decision-making or through software project management standards and methodologies; hence this study explores the possibility of using data analytics together with project management standards and methodologies to improve software project management quality. The main question addressed is: can the use of data analytics in software project management improve decision-making and project management quality? The study explores and provides insight into the use of data analytics by means of a survey completed by software project managers. A questionnaire was used to collect data from software project managers. The gathered data was captured and analysed using the Statistical Package for the Social Sciences (SPSS); the analysed data was used for validity testing, while the reliability of the measurement items was tested using Cronbach's Alpha. A hypothesis was used to evaluate the effect of data analytics use on software project management quality. The research followed a positivist research method. The study established that data analytics has not yet been widely adopted by software project managers and organisations alike, as neither project managers nor organisations have done enough to promote training in, and the adoption of, data analytics. The research also established that data analytics can improve software project management quality through improved decision-making and by complementing software project management standards. The findings will benefit software project managers, researchers and organisations, as they reveal the factors necessary to use data analytics effectively in software project management and highlight how data analytics improves software project management quality. School of Computing, M.Sc. (Computing)
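    The study reports reliability testing with Cronbach's Alpha in SPSS; as a minimal sketch of that check (not the study's actual analysis), the snippet below computes the coefficient from a hypothetical matrix of Likert-scale questionnaire responses.

```python
# Cronbach's alpha over questionnaire items:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
# The response matrix below is hypothetical example data.
import numpy as np

# rows = respondents, columns = questionnaire items (e.g. 5-point Likert)
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")  # values above ~0.7 are usually read as acceptable
```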

    Big Data - Characterizing an Emerging Research Field Using Topic Models


    Reescrita de operações de limpeza de dados (Rewriting of Data Cleaning Operations)

    This work arises in a context in which technologies are present on a large scale in the daily life of organizations and individuals, producing a high volume of data. That data contains quality problems that need to be addressed in a simple and effective manner. The problem addressed in this work is the rewriting of data cleaning operations and their applicability across different data repositories and data cleaning tools. This need arises because, when a user defines an operation, it is tied to the data repository and language it uses. The goal of this work is to specify operations at a high conceptual level (making them accessible to users other than specialists) and to enable the automation of their rewriting. The approach consists of developing a solution that implements a generic rewriting process, based on configuration content defined according to the context in which it will be used. The configuration content consists of capturing, as ontologies, the domain (a description of the structure) of the data repository used to define the operations as well as the domain of the repository chosen for the rewriting; the vocabulary used to define the operations as well as the vocabulary describing the language used by the data cleaning tool for which the operation will be rewritten; alignments between these ontologies, in order to obtain the correspondences between their elements; and finally a grammar that defines the structure of the operations. From the grammar and the content created by the rewriting process at the conceptual level, it is possible to construct the final operation, properly formatted for use in the target data cleaning tool. The results achieved allow simple operations, which cover most reuse cases, to be rewritten automatically or semi-automatically between different data repositories and data cleaning tools. Operations that remain at the semi-automatic level of rewriting require parameters defined explicitly by the user. Although simple operations can be rewritten, there are limitations both in the complexity of the operations that can be handled and in the coverage the base vocabulary provides for the functionality of the data cleaning tools.
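    As a hedged sketch of the core idea only (not the thesis implementation), the snippet below rewrites a conceptually specified cleaning operation into two different tool syntaxes via a vocabulary mapping. The conceptual vocabulary, the target templates, and the tool names are hypothetical stand-ins for the ontologies, alignments, and grammar described in the abstract.

```python
# Conceptual operation: a tool-independent description of a cleaning step.
operation = {"action": "trim_whitespace", "attribute": "customer_name"}

# "Alignment": maps the conceptual action onto templates of two hypothetical tools.
target_templates = {
    "tool_sql": {"trim_whitespace": "UPDATE {table} SET {attribute} = TRIM({attribute});"},
    "tool_refine": {"trim_whitespace": 'value.trim()  # applied to column "{attribute}"'},
}

def rewrite(op, target, table="customers"):
    """Rewrite a conceptual operation into the target tool's syntax."""
    template = target_templates[target][op["action"]]
    return template.format(table=table, attribute=op["attribute"])

print(rewrite(operation, "tool_sql"))     # SQL-style rewriting
print(rewrite(operation, "tool_refine"))  # expression-style rewriting
```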