
    Onsite/offsite social commerce adoption for SMEs using fuzzy linguistic decision making in complex framework

    There has been a growing trend of social commerce adoption among SMEs over the past few years. However, choosing the right type of social commerce is often a challenging strategic task for SMEs, which typically have limited budgets, technical skills and resources and want to maximise productivity with what they have. Much of the literature discusses social commerce adoption strategies for SMEs, but no existing work enables SMEs to choose between an onsite, offsite or hybrid strategy. Moreover, very few studies allow decision-makers to handle the uncertain, complex nonlinear relationships among social commerce adoption factors. To address this problem, the paper proposes a fuzzy linguistic multi-criteria group decision-making approach in a complex framework for onsite and offsite social commerce adoption. The proposed approach is a novel hybrid that combines FAHP, FOWA and the selection criteria of the technology–organisation–environment (TOE) framework. Unlike previous methods, it incorporates the decision-maker's attitudinal characteristics and makes intelligent recommendations using the OWA operator. The approach further characterises decision-makers' behaviour with Fuzzy Minimum (FMin), Fuzzy Maximum (FMax), the Laplace and Hurwicz criteria, and the FWA, FOWA and FPOWA operators. The framework enables SMEs to choose the right type of social commerce, considering TOE factors, and thereby build stronger relationships with current and potential customers. The approach's applicability is demonstrated through a case study of three SMEs seeking to adopt a social commerce type. The results indicate the proposed approach's effectiveness in handling uncertain, complex nonlinear decisions in social commerce adoption.
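
    For context, the ordered weighted averaging (OWA) operator that drives FOWA-style aggregation is well defined in the literature; the following is a minimal sketch of it, with illustrative weights rather than the paper's calibrated values, showing how the weight vector encodes the decision-maker's attitude (FMax-, FMin- and Laplace-style behaviour).

        # Minimal OWA sketch; weights are illustrative, not the paper's values.
        def owa(values, weights):
            """Aggregate `values` with OWA: weights apply to values sorted in descending order."""
            assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
            ordered = sorted(values, reverse=True)
            return sum(w * v for w, v in zip(weights, ordered))

        scores = [0.6, 0.9, 0.3]              # criterion scores for one adoption option
        print(owa(scores, [1.0, 0.0, 0.0]))   # optimistic attitude: fuzzy maximum -> 0.9
        print(owa(scores, [0.0, 0.0, 1.0]))   # pessimistic attitude: fuzzy minimum -> 0.3
        print(owa(scores, [1/3, 1/3, 1/3]))   # neutral attitude: Laplace-style mean -> 0.6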

    Analysing Cloud QoS Prediction Approaches and Its Control Parameters: Considering Overall Accuracy and Freshness of a Dataset

    Service level agreement (SLA) management is one of the key issues in cloud computing. The primary goal of a service provider is to minimize the risk of service violations, as these result in penalties, both monetary and as a decrease in trustworthiness. To avoid SLA violations, the service provider needs to predict the likelihood of violation for each SLO and its measurable characteristics (QoS parameters) and take immediate action to prevent violations from occurring. Several approaches in the literature predict service violations; however, none of them explores how changes in control parameters and the freshness of the data affect prediction accuracy, and hence the effective management of a cloud service provider's SLA. The contribution of this paper is two-fold. First, we analyzed the accuracy of six widely used prediction algorithms (simple exponential smoothing, simple moving average, weighted moving average, Holt-Winters double exponential smoothing, extrapolation, and the autoregressive integrated moving average) by varying their individual control parameters. Each approach was compared across 10 different datasets at time intervals ranging from 5 min to 4 weeks. Second, we analyzed the prediction accuracy of the simple exponential smoothing method with respect to the freshness of the data, i.e., how accuracy in the initial period of prediction compares with later periods. To achieve this, we divided the cloud QoS dataset into input sets of 100 to 500 intervals: 1-100, 1-200, 1-300, 1-400, and 1-500. From the analysis, we observed that different prediction methods behave differently depending on the control parameter and the nature of the dataset. The analysis helps service providers choose a suitable prediction method with optimal control parameters so that they can obtain accurate predictions, manage the SLA intelligently, and avoid violation penalties.
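
    As an illustration of the kind of predictor and control parameter being varied, the following is a minimal sketch of simple exponential smoothing, one of the six methods compared; the QoS series and alpha values are made-up placeholders, not the paper's datasets or tuned settings.

        # Minimal simple exponential smoothing (SES) sketch; the series and
        # alpha values are hypothetical, not the paper's datasets or settings.
        def ses_forecast(series, alpha):
            """One-step-ahead SES forecast: level = alpha*x + (1 - alpha)*level."""
            level = series[0]                 # initialise with the first observation
            for x in series[1:]:
                level = alpha * x + (1 - alpha) * level
            return level

        qos = [0.92, 0.95, 0.91, 0.97, 0.93]  # hypothetical availability samples
        for alpha in (0.2, 0.5, 0.8):         # vary the single control parameter
            print(alpha, round(ses_forecast(qos, alpha), 4))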

    Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models

    Financial time series prediction, whether for classification or regression, has been an intensely active research topic over the last decade. While traditional machine learning algorithms have achieved only mediocre results, deep learning has contributed substantially to raising prediction performance. An up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and practitioners to determine which models potentially perform better, what techniques and components are involved, and how such models can be designed and implemented. This review article provides an overview of techniques, components and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023. These include standalone models, such as convolutional neural networks (CNN), which extract spatial dependencies within data, and long short-term memory (LSTM) networks, which are designed to handle temporal dependencies, as well as hybrid models integrating CNN, LSTM, attention mechanisms (AM) and other techniques. For illustration and comparison, models proposed in recent studies are mapped to the relevant elements of a generalised framework comprising input, output, feature extraction, prediction and related processes. Among the state-of-the-art models, hybrids such as CNN-LSTM and CNN-LSTM-AM have generally been reported to outperform standalone models such as CNN-only models. Remaining challenges are discussed, including unfriendliness to finance domain experts, delayed prediction, neglect of domain knowledge, a lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare and summarise technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
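
    To make the hybrid architecture concrete, the following is a minimal Keras sketch of a CNN-LSTM model of the kind the review surveys; the window length, feature count and layer sizes are placeholders, not values taken from any reviewed study.

        # Minimal CNN-LSTM sketch: Conv1D extracts local (spatial) patterns from
        # the input window, LSTM models temporal dependencies, Dense emits the
        # forecast. All sizes are placeholders, not values from a reviewed study.
        from tensorflow.keras import layers, models

        window, n_features = 30, 5            # 30 time steps, 5 indicators per step
        model = models.Sequential([
            layers.Input(shape=(window, n_features)),
            layers.Conv1D(32, kernel_size=3, activation="relu"),
            layers.MaxPooling1D(pool_size=2),
            layers.LSTM(64),
            layers.Dense(1),                  # next-step return (regression head)
        ])
        model.compile(optimizer="adam", loss="mse")
        model.summary()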

    Evaluating interpretable machine learning predictions for cryptocurrencies

    This study explores various machine learning and deep learning applications in financial data modelling, analysis and prediction. The main focus is to test the prediction accuracy of hourly cryptocurrency returns and to explore, analyse and showcase the interpretability features of the ML models. The study considers the six most dominant cryptocurrencies in the market: Bitcoin, Ethereum, Binance Coin, Cardano, Ripple and Litecoin. The experimental settings cover the construction of the corresponding datasets from technical, fundamental and statistical analysis. The paper compares various existing and enhanced algorithms and explains their results, features and limitations. The algorithms include decision trees, random forests and other ensemble methods, SVM, neural networks, single- and multiple-feature N-BEATS, ARIMA and Google AutoML. The experimental results show that predicting cryptocurrency returns is possible; however, prediction algorithms may not generalise across different assets and markets over long periods. There is no clear winner that satisfies all requirements, and the choice of algorithm will be tied to the user's needs and available resources.
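
    As one concrete example of the interpretability features such a study can showcase for tree-based models (the study's own feature set and methods may differ), the following sketch prints impurity-based feature importances from a random forest on synthetic data; the indicator names are hypothetical.

        # Interpretability sketch: impurity-based feature importances from a
        # random forest. The data is synthetic and the indicator names are
        # hypothetical, not the study's actual feature set.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))                   # 4 synthetic indicators
        y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # up/down hourly return label

        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        for name, score in zip(["rsi", "macd", "volume", "sentiment"],
                               model.feature_importances_):
            print(f"{name:10s} {score:.3f}")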

    Seventeen Years of the ACM Transactions on Multimedia Computing, Communications and Applications: A Bibliometric Overview

    ACM Transactions on Multimedia Computing, Communications, and Applications has been dedicated to advancing multimedia research, fostering discoveries, innovations and practical applications since 2005. The journal consistently publishes high-quality, original research in emerging fields through open submissions, calls for articles, special issues, rigorous review processes and diverse research topics. This study presents an extensive bibliometric analysis of the journal, utilising various bibliometric indicators to unveil the latent patterns within the journal's scholarly landscape from 2005 to 2022. The data are drawn primarily from the Web of Science Core Collection database. The analysis encompasses diverse viewpoints, including yearly publication and citation counts, highly cited articles, and the most prolific authors, institutions and countries. The article employs VOSviewer-generated graphical maps, effectively illustrating networks of co-citations, keyword co-occurrences, and institutional and national bibliographic couplings. Furthermore, the study conducts a comprehensive global and temporal examination of co-occurrences of author keywords, revealing the emergence of numerous novel keywords over this period.

    Forecasting with Machine Learning Techniques

    The decision-maker is increasingly utilising machine learning (ML) techniques to find patterns in huge quantities of real-time data [...]

    Revolutionising healthcare with artificial intelligence: A bibliometric analysis of 40 years of progress in health systems

    The development of artificial intelligence (AI) has revolutionised the medical system, empowering healthcare professionals to analyse complex nonlinear big data and identify hidden patterns, facilitating well-informed decisions. Over the last decade, there has been a notable trend of research into AI, machine learning (ML) and their associated algorithms in health and medical systems. These approaches have transformed healthcare, enhancing efficiency, accuracy, personalised treatment and decision-making. Recognising the importance and growing volume of research in this area, this paper presents a bibliometric analysis of AI in health and medical systems. The paper utilises the Web of Science (WoS) Core Collection database, considering documents published in the topic area over the last four decades: a total of 64,063 papers from 1983 to 2022. The paper evaluates the bibliometric data from various perspectives, such as annual papers published, annual citations, highly cited papers, and the most productive institutions and countries, and it visualises the relationships among scientific actors by presenting bibliographic couplings and co-occurrences of author keywords. The analysis indicates that the field began to grow in the late 1970s and early 1980s, with especially rapid growth since 2019. The most influential institutions are in the USA and China. The study also reveals that the scientific community's top keywords include 'ML', 'Deep Learning' and 'Artificial Intelligence'.

    E-learning: Closing the Digital Gap between Developed and Developing Countries

    Among the many gaps between developed and developing countries, the digital gap is a prominent one, and research has raised the question of whether e-learning can close it. This research has identified, compared, evaluated and reviewed the issue from the angles of both the literature and quantitative research. The focus has been to assess the potential of e-learning to provide quality education through electronic means and to review the extent to which this is feasible. ICT infrastructure, channels of communication, learning styles, the role of the teacher and the classroom, and blended learning are discussed.