
    Agents and E-commerce: Beyond Automation

    Fast-growing information and communication technologies have reshaped contemporary commerce in both its information and market spaces, and businesses demand a new generation of agile and adaptive commerce systems. Towards this end, software agents, a type of autonomous artifact, have been viewed as a promising solution, and they have played an increasingly important part in facilitating e-commerce operations over the last two decades. This article presents a systematized overview of the diversity of agent applications in commerce. The paper argues that agents are starting to play a more substantial role in shaping social affairs, and that they have strong potential for building future highly responsive and smart e-commerce systems. The opportunities and challenges presented by the proliferation of agent technologies in e-commerce necessitate the development of insights into their place in information systems research, as well as practical implications for management.

    Challenges for the comprehensive management of cloud services in a PaaS framework

    The 4CaaSt project aims at developing a PaaS framework that enables flexible definition, marketing, deployment, and management of Cloud-based services and applications. The major innovations proposed by 4CaaSt are the blueprint and its lifecycle management, a one-stop shop for Cloud services, and PaaS-level resource management featuring elasticity. 4CaaSt also provides a portfolio of ready-to-use Cloud-native services and Cloud-aware immigrant technologies.

    Sensor data fusion for the industrial artificial intelligence of things

    The emergence of smart sensors, artificial intelligence, and deep learning technologies yields the artificial intelligence of things (AIoT). Sophisticated cooperation among these technologies is vital for the effective processing of industrial sensor data. This paper introduces a new framework for addressing the different challenges of AIoT applications. The proposed framework is an intelligent combination of multi-agent systems, knowledge graphs, and deep learning. Deep learning architectures are used to create models from different sensor-based data. Multi-agent systems are used to simulate the collective behaviours of the smart sensors in IoT settings, and the communication among different agents is realized by integrating knowledge graphs. Different optimizations based on constraint satisfaction as well as evolutionary computation are also investigated. An experimental analysis compares the presented methodology to state-of-the-art AIoT technologies and shows that the designed framework achieves good performance relative to baseline solutions.
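    As a rough illustration of how such a combination might fit together (a sketch under assumed names, not the paper's actual framework), the Python fragment below pairs a per-sensor model with a toy knowledge-graph triple store through which agents share their inferences:

    # Illustrative sketch only: SensorAgent, TripleStore, and the threshold
    # "model" are invented stand-ins for the paper's deep learning models,
    # multi-agent system, and knowledge graph.
    class TripleStore:
        """A toy knowledge graph: a set of (subject, predicate, object) triples."""
        def __init__(self):
            self.triples = set()

        def add(self, s, p, o):
            self.triples.add((s, p, o))

        def query(self, s=None, p=None, o=None):
            return [t for t in self.triples
                    if (s is None or t[0] == s)
                    and (p is None or t[1] == p)
                    and (o is None or t[2] == o)]

    class SensorAgent:
        """Wraps one sensor's learned model and publishes inferences to the graph."""
        def __init__(self, name, model, store):
            self.name = name
            self.model = model  # any callable mapping a reading to a label
            self.store = store

        def observe(self, reading):
            label = self.model(reading)
            self.store.add(self.name, "reports", label)  # share via the graph

    store = TripleStore()
    agent = SensorAgent("vibration-sensor-1",
                        lambda x: "anomaly" if x > 0.8 else "normal", store)
    agent.observe(0.93)
    print(store.query(p="reports"))  # [('vibration-sensor-1', 'reports', 'anomaly')]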

    Exploring Evaluation Factors and Framework for the Object of Automated Trading System

    An automated trading system (ATS) is a computer program that combines different trading rules to find optimal trading opportunities. The objects of an ATS, which are financial assets, need evaluation, because their evaluation is of great significance for stakeholders and for market order. From the perspectives of dealers, agents, the external environment, and the objects themselves, this study explored factors for evaluating and choosing the object of an ATS. Based on design science research (DSR), we presented a preliminary evaluation framework and conducted semi-structured interviews with twelve trading participants engaged in different occupations. By analyzing the collected data, we validated eight factors from the literature and identified four new factors and fifty-four sub-factors. Additionally, this paper developed a relationship model of the factors. The results could be used in future work to explore and validate more evaluation factors using data mining.

    Trading With A Day Job: Can Automated Trading Strategies Be Profitable?

    The focus of this research is the profitability of automated trading strategies; in other words, can trading strategies that are automatically executed in financial markets be profitable? In this study, three strategies are traded in a simulated environment under two different types of market conditions and on two different underlying assets. The trading strategies are based on a moving average crossover system with 5-, 10-, and 20-day moving averages. The first strategy uses only this moving average crossover system. The second strategy uses the same moving average system but requires increasing volume to confirm a trade. The final strategy uses the moving average crossover system but requires confirmation by a relative strength index to make a trade. The two market conditions used are an upward-trending market and a consolidating market. The assets traded are the NASDAQ 100 (i.e., QQQQ) and the S&P Depositary Receipts Trust (SPY); these assets tend to have different levels of volatility over time. The automated trading strategies are simulated on historical data using the trading software TradeStation, which allows trading strategies to be implemented and tested on historic data at various time intervals and using a variety of time charts. TradeStation also calculates a number of metrics, including the number of trades and the profit or loss those trades produce. The simulation results indicate that for both assets, in markets that trend upwards, the moving average strategy with confirmation by the relative strength index dominated the other two strategies in terms of profits. During consolidating market periods, the results are less clear: the magnitude of the profits when trading the relatively stable S&P varied across the three strategies and time charts, while for the more volatile NASDAQ 100, profits tended to be greater for the simple moving average strategy than for the other two.
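    To make the third strategy concrete, here is a minimal Python sketch of a moving average crossover gated by a relative strength index. The 5/20-day crossover windows, the 14-day RSI period, and the RSI > 50 confirmation threshold are common defaults assumed for illustration; the study's exact TradeStation rules are not reproduced.

    import pandas as pd

    def rsi(close: pd.Series, period: int = 14) -> pd.Series:
        """Classic RSI from average gains and losses over a rolling window."""
        delta = close.diff()
        gain = delta.clip(lower=0).rolling(period).mean()
        loss = (-delta.clip(upper=0)).rolling(period).mean()
        return 100 - 100 / (1 + gain / loss)

    def crossover_signals(close: pd.Series) -> pd.Series:
        """+1 = enter long on an RSI-confirmed crossover, -1 = exit, 0 = hold."""
        fast = close.rolling(5).mean()
        slow = close.rolling(20).mean()
        cross_up = (fast > slow) & (fast.shift(1) <= slow.shift(1))
        cross_down = (fast < slow) & (fast.shift(1) >= slow.shift(1))
        entry = cross_up & (rsi(close) > 50)  # only trade confirmed crossovers
        return entry.astype(int) - cross_down.astype(int)

    Applied to a daily close series for SPY or QQQQ, this yields +1 on confirmed entries, -1 on crossover exits, and 0 otherwise.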

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals’ lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic “black box” to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as “meaningful information about the logic of processing” may not be provided by the kind of ML “explanations” computer scientists have developed, partially in response. ML explanations are restricted both by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, “subject-centric explanations” (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers’ worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy.” But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure (“right to be forgotten”) and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
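    To make the pedagogical/decompositional distinction concrete, a minimal sketch follows: instead of taking the black box apart, an interpretable surrogate (here a shallow decision tree) is trained on the black box's own predictions, and its rules are read off as an approximate explanation. The models and synthetic data are stand-ins assumed for illustration, not artifacts of the paper.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # Fit the surrogate to the black box's *predictions*, not the true labels:
    # the tree learns the model's behaviour from outside, without dissecting it.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # The tree's rules are a global, human-readable approximation of the model.
    print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

    A subject-centric variant would weight or sample the training points near a particular query, so that the surrogate is faithful in that region rather than globally.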

    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications but also complete business processes can be hosted on virtual cloud infrastructures. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
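    As a toy illustration of the resource-allocation decision such a system faces (invented for this summary, not taken from the paper), the fragment below leases or releases virtual machines based on the backlog of pending process steps; the per-VM capacity is an assumed parameter.

    def scaling_decision(queued_steps: int, running_vms: int,
                         steps_per_vm: int = 20) -> int:
        """Return the change in VM count: positive = lease, negative = release."""
        needed = -(-queued_steps // steps_per_vm)  # ceiling division
        return max(needed, 1) - running_vms        # always keep one VM alive

    print(scaling_decision(95, 3))   # 2: lease two more VMs for 95 queued steps
    print(scaling_decision(10, 3))   # -2: release two VMs, one suffices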

    A Systematic Literature Review of Peer-to-Peer, Community Self-Consumption, and Transactive Energy Market Models

    Capper, T., Gorbatcheva, A., Mustafa, M. A., Bahloul, M., Schwidtal, J. M., Chitchyan, R., Andoni, M., Robu, V., Montakhabi, M., Scott, I., Francis, C., Mbavarira, T., Espana, J. M., & Kiesling, L. (2021). A Systematic Literature Review of Peer-to-Peer, Community Self-Consumption, and Transactive Energy Market Models. Social Science Research Network (SSRN), Elsevier. https://doi.org/10.2139/ssrn.3959620
    Peer-to-peer and transactive energy markets, and community or collective self-consumption, offer new models for trading energy locally. Over the past 10 years there has been significant growth in the academic literature and in trial projects examining how these energy trading models might function. This systematic literature review of 139 peer-reviewed journal articles examines the market designs used in these energy trading models. The Business Ecosystem Architecture Modelling framework is used to extract information about the market models used in the literature and to identify differences and similarities between the models. This paper identifies six archetypal market designs and three archetypal auction mechanisms used in the markets presented in the reviewed literature. It classifies the types of commodities being traded, the benefits of the markets, and other features such as the types of grid models. Finally, this paper identifies five evidence gaps which need future research before these markets can be widely adopted.
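    As a concrete example of one auction mechanism commonly proposed for such markets (a generic uniform-price double auction, assumed here for illustration rather than drawn from the review), the sketch below matches sorted bids and offers and clears every trade at the midpoint of the marginal matched pair:

    def clear_double_auction(bids, offers):
        """bids, offers: lists of (price, quantity), e.g. EUR/kWh and kWh.
        Returns (clearing_price, total_quantity_traded)."""
        bids = sorted(bids, key=lambda x: -x[0])     # buyers, highest first
        offers = sorted(offers, key=lambda x: x[0])  # sellers, lowest first
        traded, price, i, j = 0.0, None, 0, 0
        b_left = bids[0][1] if bids else 0
        o_left = offers[0][1] if offers else 0
        while i < len(bids) and j < len(offers) and bids[i][0] >= offers[j][0]:
            q = min(b_left, o_left)
            traded += q
            price = (bids[i][0] + offers[j][0]) / 2  # midpoint of marginal pair
            b_left -= q
            o_left -= q
            if b_left == 0:
                i += 1
                b_left = bids[i][1] if i < len(bids) else 0
            if o_left == 0:
                j += 1
                o_left = offers[j][1] if j < len(offers) else 0
        return price, traded

    print(clear_double_auction([(0.30, 5), (0.25, 3)], [(0.20, 4), (0.28, 6)]))
    # (0.29, 5.0): 5 kWh trade; all trades settle at the marginal midpoint price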