    Service-oriented architecture for big data and business intelligence analytics in the cloud

    © 2017 by Taylor & Francis Group, LLC. Service-oriented architecture (SOA) has emerged, supporting scalability and service reuse. At the same time, Big Data analytics has had an impact on business services and business process management. However, there is a lack of a systematic engineering approach to Big Data analytics. This chapter provides a systematic approach to SOA design strategies and business processes for Big Data analytics. Our approach is based on an SOA reference architecture and service component model for Big Data applications, known as softBD, and includes a large-scale, real-world case study demonstrating our approach to SOA for Big Data analytics. The SOA Big Data architecture is scalable, generic, and customizable for a variety of data applications. The main contributions of this chapter include a unique, innovative, and generic softBD framework, a service component model, and a generic SOA architecture for large-scale Big Data applications. This chapter also contributes to Big Data metrics, which allow measurement and evaluation when analyzing data.

    Impact of Big Data Analytics on Banking: A Case Study

    Purpose – The paper aims to help enterprises gain valuable knowledge about big data implementation in practice and improve their information management ability so that, as they accumulate experience, they can reuse or adapt the proposed method to achieve a sustainable competitive advantage. Design/methodology/approach – Guided by the theory of technological frames of reference (TFR) and transaction cost theory (TCT), this paper describes a real-world case study in the banking industry to explain how to help enterprises leverage big data analytics for change. Through close integration with the bank's daily operations and strategic planning, the case study shows how the analytics team framed the challenge and analyzed the data with two analytic models – customer segmentation (unsupervised) and product affinity prediction (supervised) – to initiate the adoption of big data analytics in precision marketing. Findings – The study reported relevant findings from a longitudinal data analysis and identified some key success factors. First, non-technical factors, for example intuitive analytics results, an appropriate evaluation baseline, multiple-wave implementation, and selection of marketing channels, critically influence big data implementation progress in organizations. Second, a successful campaign also relies on technical factors. For example, the clustering analytics could raise customers' response rates, and the product affinity prediction model could boost transaction efficiency and lower time costs. Originality/value – For theoretical contribution, this paper verified that the outstanding characteristics of online mutual fund platforms brought up by Nagle, Seamans and Tadelis (2010) could not guarantee organizations' competitive advantages from the perspective of TCT.
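    As a rough illustration of the two analytic models named in the abstract, the sketch below pairs k-means customer segmentation (unsupervised) with a logistic-regression product affinity predictor (supervised). The features, labels, and model choices are assumptions made for illustration only; the study does not publish its code or data.

    ```python
    # Illustrative sketch only: features, labels and model choices are invented
    # to mirror the described approach, not taken from the study.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical customer features: balance, monthly transactions, tenure
    X = rng.normal(size=(1000, 3))

    # Unsupervised step: segment customers for targeted campaigns
    segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    # Supervised step: predict affinity to a fund product (placeholder labels)
    y = rng.integers(0, 2, size=1000)
    X_aug = np.column_stack([X, segments])      # segment id as an extra feature
    X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))
    ```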

    Revisiting Ralph Sprague’s Framework for Developing Decision Support Systems

    Ralph H. Sprague Jr. was a leader in the MIS field and helped develop the conceptual foundation for decision support systems (DSS). In this paper, I pay homage to Sprague and his DSS contributions. I take a personal perspective based on my years of working with Sprague. I explore the history of DSS and its evolution. I also present and discuss Sprague’s DSS development framework with its dialog, data, and models (DDM) paradigm and characteristics. At its core, the development framework remains valid in today’s world of business intelligence and big data analytics. I present and discuss a contemporary reference architecture for business intelligence and analytics (BI/A) in the context of Sprague’s DSS development framework. The practice of decision support continues to evolve and can be described by a maturity model with DSS, enterprise data warehousing, real-time data warehousing, big data analytics, and the emerging cognitive generation as successive generations. I use a DSS perspective to describe and provide examples of what the forthcoming cognitive generation will bring.

    Big Data Analytics and Electronic Resource Usage in Academic Libraries: A Case Study of a Private University in Kenya

    The purpose of the study was to apply Big Data analytics as a tool for evaluating electronic resource usage in the academic library setup in Kenya, with reference to the library of one private university. Log files of postgraduate students were mined from the server where the offsite access platform (EZproxy) is installed. Descriptive statistical techniques such as mean, standard deviation, and percentages were computed. Data was transferred to the Statistical Package for Social Sciences (SPSS) software, which aided in the analysis. Results revealed that, in terms of usage intensity, the total URL count was 2,352, the highest user made 283 downloads, and the mean URL count was 49 downloads. Further findings revealed that no user utilized more than 5 databases over a period of one year. The mean usage intensity score for respondents who were trained or orientated on e-resource usage was above average at 69.0, while those who had not received training were below average at 29.8. It was concluded that big data analytics is a necessary and powerful tool for investigating electronic resource seeking and usage trends and patterns within Kenyan university libraries. Through big data analysis and data mining, usage patterns and trends such as usage intensity that might not have been accurately revealed through other tools are unearthed. Big data analytics revealed user preferences and the intensity of utilization of various databases and helped in the detection of redundant databases. From the usage patterns, it was clear that the level of utilization of the University library’s e-resource platform was very low. Most of the databases accessible through the platform were redundant. Further, only two databases, namely Ebook Central and EBSCOhost, were popular among users, while the rest were barely being utilized, if at all. For most students, just one or two databases were sufficient to meet their research needs. An integrated data analytics model for investigating university libraries’ e-resource usage is proposed.
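    A minimal sketch of the kind of descriptive log analysis reported above; the study mined EZproxy logs and used SPSS, so the pandas-based approach, the column names, and the toy data below are assumptions for illustration only.

    ```python
    # Illustrative sketch: per-user usage intensity and database preferences
    # from a pre-parsed proxy log. Column names and toy data are invented.
    import pandas as pd

    # Hypothetical export of an EZproxy log: one row per URL request
    log = pd.DataFrame({
        "user":     ["u1", "u1", "u2", "u3", "u3", "u3"],
        "database": ["ebscohost", "ebook-central", "ebscohost",
                     "ebscohost", "ebook-central", "ebscohost"],
    })

    per_user = log.groupby("user").size()            # usage intensity per user
    print("total URL count:", len(log))
    print("mean URL count per user:", round(per_user.mean(), 1))
    print("highest user download count:", per_user.max())
    print("standard deviation:", round(per_user.std(), 2))

    # Share of requests per database, to spot redundant subscriptions
    print((log["database"].value_counts(normalize=True) * 100).round(1))
    ```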

    A Reference Model for Data-Driven Business Model Innovation Initiatives in Incumbent Firms

    In the past decade, we have witnessed the rise of big data analytics into a well-established phenomenon in business and academic fields. Novel opportunities appear for organizations to maximize the value of data through improved decision making, enhanced value propositions, and new business models. The latter two are investigated by scholars as part of an emerging research field of data-driven business model (DDBM) innovation. Aiming to deploy DDBM innovation, companies start initiatives to either renovate their existing BM or develop a new DDBM. Responding to recent calls for further research on design knowledge for DDBM innovation, we developed a reference model for DDBM innovation initiatives. Building upon a design science research approach, the Work System Theory as a kernel theory, and a set of design principles, we propose a reference model comprising a static and a dynamic view. Our results are based on a research study with empirical insights from 18 companies, 19 cases, and 16 expert interviews, as well as theoretical grounding from a systematic literature review on key concepts of DDBM innovation. The developed reference model fills a gap mentioned in the DDBM innovation literature and provides practical guidance for companies.

    Managing Distributed Cloud Applications and Infrastructure

    The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only within cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are now such that it is no longer realistic for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly inter-dependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. This book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimization, and simulation can meet quality of service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.
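    As a toy illustration of the placement problem the book addresses, the sketch below greedily assigns applications to the cheapest node that satisfies their latency and capacity needs. The node list, costs, and constraints are invented, and the rule is far simpler than the analytics, optimization, and simulation techniques the book actually discusses.

    ```python
    # Illustrative sketch only: naive greedy placement of applications onto
    # cloud/edge nodes, minimising cost subject to latency and capacity.
    # Node names, costs and latencies are invented for this example.
    nodes = [  # (name, latency_ms, cost_per_hour, free_cpu)
        ("edge-a",   5, 0.30,  4),
        ("edge-b",   8, 0.25,  2),
        ("cloud-1", 40, 0.10, 64),
    ]

    apps = [  # (name, max_latency_ms, cpu_needed)
        ("video-analytics", 10, 2),
        ("batch-report",   100, 8),
    ]

    placement = {}
    for app, max_lat, cpu in apps:
        candidates = [n for n in nodes if n[1] <= max_lat and n[3] >= cpu]
        if not candidates:
            placement[app] = None                  # no feasible node
            continue
        name, lat, cost, free = min(candidates, key=lambda n: n[2])
        placement[app] = name
        # update remaining capacity on the chosen node
        nodes = [(nm, l, c, f - cpu if nm == name else f) for nm, l, c, f in nodes]

    print(placement)  # e.g. {'video-analytics': 'edge-b', 'batch-report': 'cloud-1'}
    ```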

    SmartAQnet: remote and in-situ sensing of urban air quality

    our time. However, it is very difficult for many cities to take measures to accommodate today’s needs concerning, e.g., mobility, housing and work, because consistent, fine-granular data and information on causal chains are largely missing. This has the potential to change, as today both large-scale basic data and promising new measurement approaches are becoming available. The project “SmartAQnet”, funded by the German Federal Ministry of Transport and Digital Infrastructure (BMVI), is based on a pragmatic, data-driven approach which, for the first time, combines existing data sets with a networked mobile measurement strategy in the urban space. By connecting open data, such as weather data or development plans, remote sensing of influencing factors, and new mobile measurement approaches, such as participatory sensing with low-cost sensor technology, “scientific scouts” (autonomous, mobile smart dust measurement devices that are auto-calibrated against a high-quality reference instrument within an intelligent monitoring network), and demand-oriented measurements by lightweight UAVs, a novel measuring and analysis concept is created within the model region of Augsburg, Germany. In addition to novel analytics, a prototypical technology stack is planned which, through modern analytics methods and Big Data and IoT technologies, enables application in a scalable way.
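    The auto-calibration of the “scientific scouts” against a high-quality reference instrument can be illustrated with a minimal sketch; a least-squares linear correction on co-located readings is assumed here, which is a common approach for low-cost sensors but is not necessarily the method SmartAQnet uses. The readings are invented.

    ```python
    # Illustrative sketch: calibrating a low-cost PM sensor against a co-located
    # reference instrument with an ordinary least-squares linear fit. The data
    # and the choice of a linear model are assumptions, not SmartAQnet's method.
    import numpy as np

    # Hypothetical co-located readings (µg/m³): low-cost sensor vs. reference
    raw = np.array([12.0, 18.5, 25.1, 33.7, 41.2, 55.9])
    ref = np.array([10.2, 15.8, 21.9, 29.5, 36.8, 49.1])

    # Fit ref ≈ a * raw + b
    a, b = np.polyfit(raw, ref, deg=1)
    print(f"calibration: corrected = {a:.3f} * raw + {b:.3f}")

    def calibrate(reading: float) -> float:
        """Apply the fitted linear correction to a new raw reading."""
        return a * reading + b

    print("corrected value for raw 30.0:", round(calibrate(30.0), 1))
    ```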

    A contribution for data processing and interoperability in Industry 4.0

    Master’s dissertation in Systems Engineering. Industry 4.0 is expected to drive a significant change in companies’ growth. The idea is to aggregate important information from across the company’s supply chain, enabling valuable decision-making while permitting interactions between machines and humans in real time. Autonomous systems powered by Information Technologies, such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), and Big Data and analytics, are enablers of Industry 4.0. IoT gathers information from every piece of the big puzzle that is the manufacturing process. Cloud Computing stores all that information in one place. People share information across the company, its supply chain, and its hierarchical levels through the integration of systems. Finally, Big Data and analytics provide the intelligence that will improve Industry 4.0. Methods and tools in Industry 4.0 are designed to increase interoperability across industrial stakeholders. In order to make the complete process possible, standardisation must be implemented across the company. Two reference models for Industry 4.0 were studied: RAMI 4.0 and IIRA. RAMI 4.0, a German initiative, focuses on industrial digitalization, while IIRA, an American initiative, focuses on the “Internet of Things” world, i.e., energy, healthcare and transportation. The two initiatives aim to obtain intelligence data from processes while enabling interoperability among systems. Representatives of the two reference models are working together on the technological interface standards that could be used by companies joining this new era. This study focuses on interoperability between systems. Even though there must be a model to guide a company into Industry 4.0, this model ought to be mutable and flexible enough to handle differences in manufacturing processes; for example, automotive Industry 4.0 will not follow the same approach as aviation Industry 4.0.

    A Cloud-Edge Orchestration Platform for the Innovative Industrial Scenarios of the IoTwins Project

    The concept of digital twins has attracted growing interest, not only in the academic field but also in industrial environments, thanks to the fact that the Internet of Things has enabled its cost-effective implementation. Digital twins (or digital models) refer to a virtual representation of a physical product or process that integrates data from various sources such as data APIs, historical data, embedded sensors and open data, giving manufacturers an unprecedented view of how their products are performing. The EU-funded IoTwins project plans to build testbeds for digital twins in which real-time computation runs as close to the data origin as possible (e.g., on IoT gateways or edge nodes), while batch-wise tasks such as Big Data analytics and Machine Learning model training run on the Cloud, where computing resources are abundant. In this paper, the basic concepts of the IoTwins project, its reference architecture, functionalities and components are presented and discussed.
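    A toy sketch of the edge/cloud split described above: latency-sensitive work is dispatched to an edge node near the data origin, while heavy batch analytics and model training go to the cloud. The task attributes and the dispatch rule are invented for illustration and are not the IoTwins platform’s actual orchestration logic.

    ```python
    # Illustrative sketch only: a toy dispatcher mirroring the edge/cloud split
    # described in the abstract. Names and attributes are invented, not taken
    # from the IoTwins platform.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        latency_sensitive: bool   # must run close to the data source?
        heavy_batch: bool         # large batch analytics / ML training?

    def dispatch(task: Task) -> str:
        if task.latency_sensitive:
            return "edge"      # e.g. IoT gateway or edge node near the machine
        if task.heavy_batch:
            return "cloud"     # abundant compute for Big Data / training jobs
        return "edge"          # default: keep small jobs close to the data

    tasks = [
        Task("anomaly-detection-stream", latency_sensitive=True, heavy_batch=False),
        Task("digital-twin-model-training", latency_sensitive=False, heavy_batch=True),
    ]
    for t in tasks:
        print(t.name, "->", dispatch(t))
    ```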