59 research outputs found
Internship Report - Roaming Data Science (RoaDS) BI Solution in a Vodafone Environment
A telecom company (Vodafone) needed to implement a Business Intelligence solution for Roaming data drawn from a wide set of different data sources. Based on the data visualizations of this solution, its key users with decision-making power can perform business analyses and assess infrastructure and software expansion needs. This document presents the scientific papers produced during the various stages of the solution's development (state of the art, architecture design, and implementation results). The Business Intelligence solution was designed and implemented with OLAP methodologies and technologies in a Data Warehouse composed of Data Marts arranged in a constellation; the visualization layer was custom made in JavaScript (VueJS). To assess the results, a questionnaire was created to be filled in by the solution's key users. Based on this questionnaire, it was possible to ascertain that user acceptance was satisfactory. The proposed objectives for the implementation of the BI solution, with all its requirements, were achieved, with the infrastructure itself created from scratch on Kubernetes. This BI platform can be expanded using column-storage databases created specifically with OLAP workloads in mind, removing the need for an OLAP cube layer. Based on Machine Learning algorithms, the platform will then be able to perform the predictions needed to make decisions about Vodafone's Roaming infrastructure.
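As a minimal illustration of the constellation arrangement of Data Marts described above, the sketch below builds two hypothetical fact tables sharing a conformed dimension in SQLite; all table and column names are invented for illustration and do not come from the Vodafone solution.

```python
# Minimal sketch of a "fact constellation": two fact tables sharing one
# conformed dimension. Names are hypothetical, not the actual RoaDS model.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Conformed dimension shared by both fact tables
cur.execute("CREATE TABLE dim_country (country_id INTEGER PRIMARY KEY, name TEXT)")
# Two fact tables -> a constellation rather than a single star schema
cur.execute("CREATE TABLE fact_roaming_traffic (country_id INTEGER, mb_used REAL)")
cur.execute("CREATE TABLE fact_roaming_revenue (country_id INTEGER, revenue REAL)")

cur.executemany("INSERT INTO dim_country VALUES (?, ?)", [(1, "PT"), (2, "DE")])
cur.executemany("INSERT INTO fact_roaming_traffic VALUES (?, ?)",
                [(1, 120.0), (1, 80.0), (2, 50.0)])
cur.executemany("INSERT INTO fact_roaming_revenue VALUES (?, ?)",
                [(1, 30.0), (2, 12.5)])

# A typical OLAP-style rollup: join one fact table with the shared dimension
rows = cur.execute("""
    SELECT d.name, SUM(t.mb_used)
    FROM fact_roaming_traffic t JOIN dim_country d USING (country_id)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('DE', 50.0), ('PT', 200.0)]
```

Because the dimension is conformed, the same rollup could join `fact_roaming_revenue` instead, which is what makes the constellation layout convenient for cross-mart analysis.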
Growth of relational model: Interdependence and complementary to big data
A database management system is an enduring application of computer science that provides a platform for the creation, movement, and use of voluminous data. The area has witnessed a series of developments and technological advancements, from the conventional structured database to the recent buzzword, big data. This paper aims to provide a complete picture of the relational database model, which is still widely used because of its well-known ACID properties, namely atomicity, consistency, isolation, and durability. Specifically, the objective of this paper is to highlight the adoption of relational model approaches by big data techniques. To explain the reasons for this incorporation, the paper qualitatively studies the advancements made to the relational data model over time. First, the variations in data storage layout are illustrated based on the needs of the application. Second, quick data retrieval techniques such as indexing, query processing, and concurrency control methods are described. The paper provides vital insights for appraising the efficiency of the structured database in an unstructured environment, particularly when both consistency and scalability become an issue in the working of hybrid transactional and analytical database management systems.
Database Principles and Technologies – Based on Huawei GaussDB
This open access book contains eight chapters that deal with database technologies, including the development history of databases, database fundamentals, an introduction to SQL syntax, the classification of SQL syntax, database security fundamentals, the database development environment, database design fundamentals, and the application of Huawei's cloud database product GaussDB. This book can be used as a textbook for database courses in colleges and universities, and is also suitable as a reference book for the HCIA-GaussDB V1.5 certification examination. The Huawei GaussDB (for MySQL) used in the book is a Huawei cloud-based, high-performance, highly available relational database that fully supports the syntax and functionality of the open-source database MySQL. All the experiments in this book can be run on this database platform. As the world's leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.
Antecedents of business intelligence system use
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Organisational reliance on information has become vital for organisational competitiveness. With increasing data volumes, Business Intelligence (BI) becomes a cornerstone of the decision-support system. However, employee resistance to using Business Intelligence Systems (BIS) is evident. This creates a problem for organisations in realising the benefits of BIS. It is thus important to study the enablers of sustained BIS use amongst employees.
This thesis identifies existing theories that can be used to study BI system use. It integrates and extends technology use theories through a framework focusing on Business Intelligence System Use (BISU). Empirical research is then conducted in Kuwait's telecom and banking industries through a closed-ended, self-administered questionnaire using a five-point Likert scale. Responses were received from 211 BI users. The data was analysed using SmartPLS to assess convergent and discriminant validity and reliability. Partial least squares structural equation modelling (PLS-SEM) was used to study the direct and indirect relationships between constructs and to test the hypotheses. In addition to SmartPLS, SPSS was used for descriptive analysis.
The results indicated that UTAUT factors consisting of performance expectancy, effort expectancy and social influence positively impact BI system use. Voluntariness of use was found to positively moderate the relationship between social influence and BI system use. Furthermore, BI system quality positively impacts both performance expectancy and effort expectancy. The BI user’s self-efficacy also positively impacts effort expectancy. In addition, social influence was found to be positively influenced by organisational factors, namely top management support and information culture.
The findings of this research contribute to the literature by determining and quantifying the factors that influence BISU through the lens of employee perspectives. This thesis also explains how employees' object-based beliefs about BI affect their behavioural beliefs, which in turn impact BISU. Limitations of this research include the omission of UTAUT's facilitating conditions and the limited variance of respondent demographics.
Use of multidimensional systems and data mining algorithms for implementing the Time Driven Activity Based Costing (TDABC) method in project-oriented organisations
JEL: M150 and M410

The ABC (Activity Based Costing) method was introduced in order to organize the way costs should
be partitioned among enterprise management activities, and caused a deep change in the way
this division used to be made.
The huge advantages of employing such a method, as well as the challenges associated with it, soon became quite clear.
The TDABC method (Time Driven Activity Based Costing) was designed to overcome the operational difficulties of using ABC. Rather than employing estimates of the time spent on each management activity, typically provided by the company's employees, TDABC suggests two pivotal changes in comparison with its predecessor.
First, TDABC accounts for idle time relative to the total potential working hours.
Second, TDABC calculates the cost per hour of work.
Therefore, the overall activity cost is reached by simply multiplying this hourly cost by the number of work hours required by the activity. When employed in a company, TDABC produces a fundamental output: the set of time equations for the management activities. Through these equations, it is possible to calculate the time spent on each activity at its different levels of complexity. This result is possible only thanks to ERP (Enterprise Resource Planning) systems that record every action performed within the company.
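The cost mechanics described above can be sketched with a minimal, hypothetical example; the capacity figures and time-equation coefficients below are invented for illustration, not data from the thesis's case studies.

```python
# TDABC sketch: a capacity cost rate plus a per-activity time equation.
# All figures are illustrative, not taken from the thesis.

# Capacity cost rate = cost of capacity supplied / practical capacity (hours),
# where practical capacity already excludes idle time.
total_cost = 42_000.0      # e.g. monthly cost of a department
practical_hours = 1_400.0  # total hours minus idle time
cost_per_hour = total_cost / practical_hours  # 30.0 per hour

def activity_time(base_hours, complexity_terms):
    """Time equation: base time plus extra time for each complexity driver."""
    return base_hours + sum(complexity_terms)

# A hypothetical order-handling activity: 0.5 h base,
# +0.25 h if the order is custom, +0.1 h if it is an export order.
hours = activity_time(0.5, [0.25, 0.1])  # 0.85 h
cost = cost_per_hour * hours
print(round(cost, 2))  # 25.5
```

The complexity terms are exactly what the time equations capture: the same activity costs more when executed at a higher level of complexity.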
In this thesis, two main initiatives are suggested concerning the usage of TDABC in enterprises.
The first is to employ a Business Intelligence (BI) system associated with an ERP system, rather than a plain ERP system, in order to track the time spent on management activities.
The second initiative is a consequence of the first: the use of data mining algorithms (mainly induction-tree and cluster-analysis algorithms), available in BI suites, to detect the complexity levels within the time equations.
As justification for the first initiative, it is shown that ERP systems were never designed to detect patterns within their databases; without a BI module, an ERP system alone would struggle to detect the complexity levels in the execution of a management activity.
For the second initiative, it is shown that an enterprise produces a large number of management activities, and tracking these activities generates a huge amount of data. This volume of information makes it impossible to identify the levels of complexity inside the time equations without an automatic procedure to support the analysis.
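The need for an automatic procedure can be illustrated with a toy sketch that groups recorded activity durations into complexity levels. A simple quantile split stands in for the induction-tree and cluster-analysis algorithms the thesis actually proposes, and the duration data is invented.

```python
# Toy sketch of automatic complexity-level detection: group recorded activity
# durations (hours) into three levels using quantile cut-offs. This is a
# simplified stand-in for the data mining algorithms discussed in the thesis.
import statistics

durations = [0.4, 0.5, 0.45, 1.2, 1.1, 1.3, 3.0, 2.8, 3.2]  # made-up ERP logs

# Two cut-offs splitting the observed durations into three groups
q1, q2 = statistics.quantiles(durations, n=3)

def complexity_level(hours):
    if hours <= q1:
        return "low"
    if hours <= q2:
        return "medium"
    return "high"

levels = [complexity_level(h) for h in durations]
print(levels.count("low"), levels.count("medium"), levels.count("high"))
```

Each detected level would then receive its own term in the activity's time equation, which is the link between the data mining step and TDABC.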
The first part of this work reviews the ABC and TDABC methods. Next, the concepts of projects and project management are introduced, along with Business Intelligence systems and the multidimensional data architecture. The work also introduces the data mining algorithms that enable the detection of complexity levels in management activities.
The MDX (Multidimensional Expressions) language for building reports in BI systems is also introduced, as a way to generate the proper data sets for such detection, and the difficulty of performing this type of analysis in pure ERP systems is reinforced. To illustrate these results, a case study performed in three project management companies is reported, together with the BI-based generation of time equations.
Data aggregation for multi-instance security management tools in telecommunication network
Communication Service Providers employ multiple instances of network monitoring tools within extensive networks that span large geographical regions, encompassing entire countries. By collecting monitoring data from various nodes and consolidating it in a central location, a comprehensive control dashboard is established, presenting an overall network status categorized under different perspectives.
In order to achieve this centralized view, we evaluated three architectural options: polling data from individual nodes to a central node, asynchronous push of data from individual nodes to a central node, and a cloud-based Extract, Transform, Load (ETL) approach. Our analysis leads us to the conclusion that the third option is most suitable for the telecommunication system use case.
Remarkably, we observed that the quantity of monitoring results is approximately 30 times greater than the total number of devices monitored within the network. Implementing the ETL-based approach, we achieved favorable performance times of 2.23 seconds, 7.16 seconds, and 27.96 seconds for small, medium, and large networks, respectively. Notably, the extraction operation required the most time, followed by the load and processing phases. In terms of average memory consumption, the small, medium, and large networks required 323.59 MB, 497.34 MB, and 1668.59 MB, respectively. It is worth noting that the relationship between the total number of devices in the system and both performance and memory consumption is linear.
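The ETL-style consolidation described above can be sketched as three small steps: extract monitoring results from each node, transform them into per-status counts, and load them into a central dashboard store. Node names, payloads, and status values are invented for illustration and do not reflect the authors' implementation.

```python
# Sketch of ETL-based aggregation of multi-node monitoring results.
# All node names and status values are hypothetical.
from collections import Counter

def extract(nodes):
    # In a real deployment this would pull results from each regional node.
    for node in nodes:
        yield from node["results"]

def transform(results):
    # Consolidate raw results into counts per status category.
    return Counter(r["status"] for r in results)

def load(summary, dashboard):
    # Merge the consolidated counts into the central dashboard store.
    for status, count in summary.items():
        dashboard[status] = dashboard.get(status, 0) + count

nodes = [
    {"name": "region-a", "results": [{"status": "ok"}, {"status": "warn"}]},
    {"name": "region-b", "results": [{"status": "ok"}, {"status": "crit"},
                                     {"status": "ok"}]},
]
dashboard = {}
load(transform(extract(nodes)), dashboard)
print(dashboard)  # {'ok': 3, 'warn': 1, 'crit': 1}
```

Because each node contributes independently and the merge is additive, the work grows linearly with the number of devices, consistent with the scaling behaviour reported above.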