962 research outputs found

    Introduction to the special issue on neural networks in financial engineering

    There are several phases that an emerging field goes through before it reaches maturity, and computational finance is no exception. There is usually a trigger for the birth of a field. In our case, new techniques such as neural networks, significant progress in computing technology, and the need for results that rely on more realistic assumptions inspired researchers to revisit the traditional problems of finance, problems that had often been tackled in the past by introducing simplifying assumptions. The result has been a wealth of new approaches to these time-honored problems, with significant improvements in many cases.

    mt5se: An Open Source Framework for Building Autonomous Traders

    Autonomous trading robots have been studied in the artificial intelligence field for quite some time. Many AI techniques have been tested for building autonomous agents able to trade financial assets, including traditional neural networks, fuzzy logic and reinforcement learning, as well as more recent approaches such as deep neural networks and deep reinforcement learning. Many developers claim success in creating robots with great performance when simulating execution on historical price series, so-called backtesting. However, when these robots are used in real markets, they frequently perform poorly in terms of risk and return. In this paper, we propose an open source framework, called mt5se, that supports the development, backtesting, live testing and real operation of autonomous traders. We built and tested several traders using mt5se. The results indicate that it may help the development of better traders. Furthermore, we discuss the simple architecture used in many studies and propose an alternative multiagent architecture. This architecture separates the two main concerns of a portfolio manager (PM): price prediction and capital allocation. More than achieving high accuracy, a PM should increase profits when it is right and reduce losses when it is wrong. Furthermore, price prediction is highly dependent on an asset's nature and history, while capital allocation depends only on the analyst's prediction performance and the assets' correlation. Finally, we discuss some promising technologies in the area. Comment: This paper replaces an old version of the framework, called mt5b3, which is now deprecated.
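    The capital-allocation concern of the proposed PM can be illustrated with a small sketch. The function name and the weighting rule below are hypothetical, not mt5se's API; the sketch assumes analysts report per-asset predicted returns together with their historical hit rates.

```python
import numpy as np

def allocate_capital(predicted_returns, analyst_hit_rate, total_capital=1.0):
    """Allocate capital across assets from analyst signals.

    predicted_returns : per-asset predicted returns (analyst output)
    analyst_hit_rate  : each analyst's historical accuracy in [0, 1]
    Only assets with a positive confidence-weighted signal receive capital,
    reflecting the idea that allocation depends on prediction performance,
    not on the raw prediction alone.
    """
    predicted_returns = np.asarray(predicted_returns, dtype=float)
    analyst_hit_rate = np.asarray(analyst_hit_rate, dtype=float)
    # Weight each prediction by how reliable its analyst has been.
    score = np.clip(predicted_returns * analyst_hit_rate, 0.0, None)
    if score.sum() == 0.0:
        return np.zeros_like(score)  # stay in cash when no positive signal
    return total_capital * score / score.sum()

# Asset 1 has a negative signal, so it gets no capital; assets 0 and 2
# end up with equal confidence-weighted scores and split the capital.
weights = allocate_capital([0.02, -0.01, 0.03], [0.6, 0.5, 0.4])
```

    A real PM would also account for asset correlations when normalizing the weights; this sketch omits that step for brevity.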

    Deep Reinforcement Learning for Power Trading

    The Dutch power market includes a day-ahead market and an auction-like intraday balancing market. The varying supply and demand of power, and their uncertainty, induce an imbalance, which causes power prices to differ across these two markets and creates an opportunity for arbitrage. In this paper, we present collaborative dual-agent reinforcement learning (RL) for bi-level simulation and optimization of European power arbitrage trading. Moreover, we propose two novel practical implementations specifically addressing the electricity market. Leveraging the concept of imitation learning, the RL agent's reward is reshaped to take prior domain knowledge into account, which results in better convergence during training and, moreover, improves and generalizes performance. In addition, tranching of orders improves the bidding success rate and significantly raises the P&L. We show that each method contributes significantly to the overall performance uplift: the integrated methodology achieves roughly a three-fold improvement in cumulative P&L over the original agent and outperforms the highest benchmark policy by around 50%, while exhibiting efficient computational performance.
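    The two practical ideas, reward shaping with domain knowledge and order tranching, can be sketched as follows. The function names, the distance-based penalty and the equal-split rule are illustrative assumptions, not the paper's exact formulation.

```python
def shaped_reward(pnl, agent_bid, heuristic_bid, beta=0.1):
    """Imitation-style reward shaping for a power-arbitrage agent.

    pnl           : raw profit-and-loss from the environment step
    heuristic_bid : bid suggested by a domain-knowledge rule, e.g. a
                    price-spread heuristic (the prior the reward leans on)
    beta          : strength of the shaping term
    The penalty shrinks as the agent's bid approaches the heuristic's,
    nudging early training toward sensible bids; beta can be annealed
    toward zero so the prior does not cap later performance.
    """
    return pnl - beta * abs(agent_bid - heuristic_bid)

def tranche(volume_mwh, n_tranches=4):
    """Split one large order into equal tranches.

    Smaller orders are more likely to clear individually in an
    auction-like market, which is the intuition behind tranching.
    """
    return [volume_mwh / n_tranches] * n_tranches

r = shaped_reward(10.0, agent_bid=52.0, heuristic_bid=50.0, beta=0.5)
parts = tranche(8.0)
```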

    Methodologies for innovation and best practices in Industry 4.0 for SMEs

    Today, cyber-physical systems are transforming the way in which industries operate; we call this Industry 4.0, or the fourth industrial revolution. Industry 4.0 involves the use of technologies such as Cloud Computing, Edge Computing, the Internet of Things, Robotics and, above all, Big Data. Big Data is the very basis of the Industry 4.0 paradigm, because it can provide crucial information on all the processes that take place within manufacturing (which helps optimize processes and prevent downtime), as well as on employees (performance, individual needs, workplace safety) and on clients/customers (their needs and wants, trends, opinions), which helps businesses become competitive and expand on the international market. Current processing capabilities, thanks to technologies such as the Internet of Things, Cloud Computing and Edge Computing, mean that data can be processed much faster and with greater security. The implementation of Artificial Intelligence techniques, such as Machine Learning, can help machines make certain decisions autonomously, or help humans make decisions much faster. Furthermore, data can be used to feed predictive models which help businesses and manufacturers anticipate future changes and needs, and address problems before they cause tangible harm.

    Building Efficient Smart Cities

    Current technological developments offer promising solutions to the challenges faced by cities, such as crowding, pollution, housing, the search for greater comfort, better healthcare, optimized mobility and other urban services that must be adapted to the fast-paced life of citizens. Cities that deploy technology to optimize their processes and infrastructure fit under the concept of a smart city. An increasing number of cities strive to become smart, and some are already recognized as such, including Singapore, London and Barcelona. Our society has an ever-greater reliance on technology for its sustenance. This will continue into the future, as technology rapidly penetrates all facets of human life, from daily activities to the workplace and industries. A myriad of data is generated by all these digitized processes, which can be used to further enhance all smart services, increasing their adaptability, precision and efficiency. However, dealing with large amounts of data coming from different types of sources is a complex process; this impedes many cities from taking full advantage of their data, or, even worse, a lack of control over the data sources may lead to serious security issues, leaving cities vulnerable to cybercrime. Given that smart city infrastructure is largely digitized, a cyberattack could have fatal consequences for a city’s operation, leading to economic loss, citizen distrust and the shutdown of essential city services and networks. This is a threat to the efficiency that smart cities strive for.

    Safety-guided deep reinforcement learning via online gaussian process estimation

    An important facet of reinforcement learning (RL) is how the agent goes about exploring the environment. Traditional exploration strategies typically focus on efficiency and ignore safety. However, for practical applications, ensuring the agent's safety during exploration is crucial, since performing an unsafe action or reaching an unsafe state could result in irreversible damage to the agent. The main challenge of safe exploration is that characterizing unsafe states and actions is difficult for large continuous state or action spaces and unknown environments. In this paper, we propose a novel approach that incorporates estimates of safety to guide exploration and policy search in deep reinforcement learning. Using a cost function to capture trajectory-based safety, our key idea is to formulate the state-action value function of this safety cost as a candidate Lyapunov function and extend control-theoretic results to approximate its derivative using online Gaussian Process (GP) estimation. We show how to use these statistical models to guide the agent in unknown environments to obtain high-performance control policies with provable stability certificates.
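    A minimal sketch of the GP ingredient, assuming observed (state, action) pairs with their safety costs. The kernel choice, helper names and the greedy action selection are illustrative simplifications; the paper's method additionally enforces a Lyapunov-style decrease condition rather than simply picking the lowest predicted cost.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between row-stacked points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior_mean(X_train, y_train, X_query, noise=1e-4):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_query, X_train)
    return Ks @ np.linalg.solve(K, y_train)

def safest_action(state, candidate_actions, X_train, y_train):
    """Pick the candidate whose predicted safety cost is lowest.

    X_train rows are (state, action) pairs; y_train holds the observed
    safety costs. The GP posterior stands in for the safety value
    function whose derivative the paper bounds.
    """
    X_query = np.array([np.concatenate([state, a]) for a in candidate_actions])
    mu = gp_posterior_mean(X_train, y_train, X_query)
    return candidate_actions[int(np.argmin(mu))]

# Toy data: action 0.0 was observed to be safe (cost 0), action 1.0 unsafe.
X = np.array([[0.0, 0.0], [0.0, 1.0]])
y = np.array([0.0, 1.0])
choice = safest_action(np.array([0.0]), [np.array([0.0]), np.array([1.0])], X, y)
```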

    Artificial Intelligence, social changes and impact on the world of education

    The way in which humans acquire and share knowledge has been in constant evolution throughout time. Since the appearance of the first computers, education has changed dramatically. Now, as disruptive technologies are in full development, new opportunities arise for taking education to levels never seen before. Since the coronavirus pandemic, the use of online teaching modalities has become widespread all over the world, and the situation has made the development of robust digital learning solutions an urgent need. At present, primary, secondary and third-level teaching, and all sorts of courses, may be delivered online, either in real time or recorded for later viewing. Classes can be complemented with videos, documents or even interactive exercises. However, institutions that used little or no technology prior to Covid-19 have found this situation overwhelming. A lack of knowledge regarding the digital teaching/learning tools available on the market, and/or regarding their use, means that educational institutions are unable to take full advantage of the opportunities offered; poor use of technology in online classrooms may hinder students’ progress.

    AIoT for Achieving Sustainable Development Goals

    Artificial Intelligence of Things (AIoT) is a relatively new concept that merges Artificial Intelligence (AI) with the Internet of Things (IoT). It emerged from the realization that Internet of Things networks could be further enhanced if they were also provided with Artificial Intelligence, improving data extraction and network operation. Prior to AIoT, the Internet of Things consisted of networks of sensors embedded in a physical environment that collected data and sent them to a remote server. Upon reaching the server, the data would be analyzed, normally through the application of a series of Artificial Intelligence techniques by experts. However, as Internet of Things networks expand in smart cities, this workflow makes optimal operation unfeasible, because the volume of data captured by IoT networks is continually increasing. Sending such amounts of data to a remote server is costly, time-consuming and resource inefficient. Moreover, dependence on a central server means that a server failure, which would be imminent if it were overloaded with data, would halt the operation of the smart service for which the IoT network had been deployed. Thus, decentralizing the operation becomes a crucial element of AIoT. This is done through the Edge Computing paradigm, which takes the processing of data to the edge of the network. Artificial Intelligence is placed at the edge of the network so that the data may be processed, filtered and analyzed there. It is even possible to equip the edge of the network with the ability to make decisions through the implementation of AI techniques such as Machine Learning. The speed of decision making at the edge of the network means that many social, environmental, industrial and administrative processes may be optimized, as crucial decisions can be taken faster.
Deep Intelligence is a tool that employs disruptive Artificial Intelligence techniques for data analysis, i.e. classification, clustering, forecasting, optimization and visualization. Its strength lies in its ability to extract data from virtually any type of source. This is a very important feature given the heterogeneity of the data being produced in the world today. Another very important characteristic is its intuitiveness and its ability to operate almost autonomously. The user is guided through the process, which means that anyone can use the platform without any knowledge of the technical, technological and mathematical aspects of the processes it performs. This means that the Deepint.net platform integrates functionalities that would normally take years to implement in any individual sector, and that would normally require a group of experts in data analysis and related technologies [1-322]. The Deep Intelligence platform can be used to easily operate Edge Computing architectures and IoT networks. The joint characteristics of a well-designed Edge Computing platform (that is, one which brings computing resources to the edge of the network) and of the advanced Deepint.net platform deployed in a cloud environment mean that high speed, real-time response, effective troubleshooting and management, as well as precise forecasting, can be achieved. Moreover, the low cost of the solution, in combination with the availability of low-cost sensors, devices and Edge Computing hardware, means that deployment becomes a possibility for developing countries, where such solutions are needed most.
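    The edge-side workflow described above (process, filter and analyze data at the edge, forwarding only what the cloud needs) can be sketched as follows. The function name, summary fields and threshold rule are illustrative assumptions, not part of any cited platform.

```python
from statistics import mean

def edge_summarize(readings, anomaly_threshold):
    """Edge-side preprocessing for an AIoT sensor node.

    Instead of streaming every raw reading to a central server, the edge
    node forwards a compact summary plus any readings that breach the
    threshold, keeping bandwidth use and central-server load low and
    avoiding the single-point-of-failure overload described above.
    """
    anomalies = [r for r in readings if r > anomaly_threshold]
    return {
        "count": len(readings),
        "mean": mean(readings) if readings else None,
        "max": max(readings) if readings else None,
        "anomalies": anomalies,  # only these need immediate attention
    }

# One batch of local sensor readings; only the summary leaves the edge.
summary = edge_summarize([1.0, 2.0, 10.0], anomaly_threshold=5.0)
```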