
    Applications of Deep Learning Models in Financial Forecasting

    In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting. The crux of the research problem lies in the application of predictive models within financial domains characterised by high volatility and uncertainty. This thesis investigated advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series into images. Through this analysis, methodologies such as transfer learning, generative modelling, and image encoding of time series data were examined, collectively offering a comprehensive toolkit for extracting meaningful insights from financial data. The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change (DC) framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored; leveraging their capacity to learn compact representations of financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model approach was developed that harnessed the power of pre-trained computer vision models. This innovative approach aimed to predict the VVIX (the volatility-of-volatility index), utilising the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
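
    As an illustration of the kind of CNN-LSTM classifier referred to above, the sketch below combines 1-D convolutions with an LSTM to score windows of a multivariate price series for an upcoming significant Directional Change event. The layer sizes, window length, and feature count are illustrative assumptions and do not reproduce the thesis's actual architecture.

    # Minimal CNN-LSTM sketch (assumed sizes, not the thesis's model)
    import torch
    import torch.nn as nn

    class CNNLSTM(nn.Module):
        def __init__(self, n_features=5, hidden=32):
            super().__init__()
            # 1-D convolutions extract local patterns from the multivariate window
            self.conv = nn.Sequential(
                nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            # The LSTM models the temporal order of the convolutional features
            self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)  # score for a significant DC event

        def forward(self, x):                      # x: (batch, window, n_features)
            z = self.conv(x.transpose(1, 2))       # -> (batch, 32, window)
            out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, window, hidden)
            return torch.sigmoid(self.head(out[:, -1]))  # use the last time step

    model = CNNLSTM()
    probs = model(torch.randn(8, 64, 5))  # 8 windows of 64 steps, 5 features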

    Information actors beyond modernity and coloniality in times of climate change:A comparative design ethnography on the making of monitors for sustainable futures in Curaçao and Amsterdam, between 2019-2022

    In his dissertation, Mr. Goilo developed a cutting-edge theoretical framework for an Anthropology of Information. This study compares information in the context of modernity in Amsterdam and coloniality in Curaçao through the making process of monitors, and develops five theses on how information can act towards sustainable futures. The research also discusses how the two contexts, modernity and coloniality, have been in informational symbiosis for centuries, a symbiosis that is producing negative informational side effects in the age of the Anthropocene. By exploring this modernity-coloniality symbiosis of information, the author explains how scholars, policymakers, and data analysts can act on the historical and structural roots of contemporary global inequities related to the production and distribution of information. Ultimately, the five theses propose conditions for the collective production of knowledge towards a more sustainable planet.

    Graduate Catalog of Studies, 2023-2024


    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are distinct causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. In our time, however, combined natural and man-made hazards have also emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    The Application of Data Analytics Technologies for the Predictive Maintenance of Industrial Facilities in Internet of Things (IoT) Environments

    In industrial production environments, the maintenance of equipment has a decisive influence on costs and on the plannability of production capacities. In particular, unplanned failures during production times cause high costs, unplanned downtimes, and possibly additional collateral damage. Predictive Maintenance starts here and tries to predict a possible failure and its cause early enough that its prevention can be prepared and carried out in time. In order to be able to predict malfunctions and failures, the industrial plant with its characteristics, as well as wear and ageing processes, must be modelled. Such modelling can be done by replicating its physical properties. However, this is very complex and requires enormous expert knowledge about the plant and about the wear and ageing processes of each individual component. Neural networks and machine learning make it possible to train such models using data and offer an alternative, especially when very complex and non-linear behaviour is evident. In order for models to make predictions, as much data as possible about the condition of a plant, its environment, and production planning is needed. In Industrial Internet of Things (IIoT) environments, the amount of available data is constantly increasing. Intelligent sensors and highly interconnected production facilities produce a steady stream of data. The sheer volume of data, but also the steady stream in which data is transmitted, place high demands on the data processing systems. If a participating system wants to perform live analyses on the incoming data streams, it must be able to process the incoming data at least as fast as the continuous data stream delivers it. If this is not the case, the system falls further and further behind in processing and thus in its analyses. This also applies to Predictive Maintenance systems, especially if they use complex and computationally intensive machine learning models. If sufficiently scalable hardware resources are available, this may not be a problem at first. However, if this is not the case or if the processing takes place on decentralised units with limited hardware resources (e.g. edge devices), the runtime behaviour and resource requirements of the type of neural network used can become important criteria. This thesis addresses Predictive Maintenance systems in IIoT environments using neural networks and Deep Learning, where the runtime behaviour and the resource requirements are relevant. The question is whether it is possible to achieve better runtimes with similar result quality using a new type of neural network. The focus is on reducing the complexity of the network and improving its parallelisability. Inspired by projects in which complexity was distributed to less complex neural subnetworks by upstream measures, two hypotheses emerged, which are presented in this thesis: a) the distribution of complexity into simpler subnetworks leads to faster processing overall, despite the overhead this creates, and b) if a neural cell has a deeper internal structure, this leads to a less complex network. Within the framework of a qualitative study, an overall impression of Predictive Maintenance applications in IIoT environments using neural networks was developed. Based on the findings, a novel model layout, the Sliced Long Short-Term Memory Neural Network (SlicedLSTM), was developed. The SlicedLSTM implements the assumptions made in the aforementioned hypotheses in its inner model architecture.
    Within the framework of a quantitative study, the runtime behaviour of the SlicedLSTM was compared with that of a reference model in the form of laboratory tests. The study uses synthetically generated data from a NASA project to predict failures of modules of aircraft gas turbines. The dataset contains 1,414 multivariate time series with 104,897 samples of test data and 160,360 samples of training data. As a result, it could be shown for the specific application and the data used that the SlicedLSTM delivers faster processing times with similar result accuracy and thus clearly outperforms the reference model in this respect. The hypotheses about the influence of complexity in the internal structure of the neural cells were confirmed by the study carried out in the context of this thesis.
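
    The general idea behind hypothesis (a), distributing complexity across simpler subnetworks, can be sketched as follows: the input features are split into slices, each slice is processed by a small LSTM, and the slice outputs are merged for the prediction. This is only a minimal illustration of that concept under assumed dimensions; it is not the SlicedLSTM architecture developed in the thesis, and the 24-channel input merely echoes a turbofan-style sensor setting.

    # Illustrative sketch of parallel feature-sliced LSTM subnetworks (not the thesis's SlicedLSTM)
    import torch
    import torch.nn as nn

    class ParallelSlicedLSTM(nn.Module):
        def __init__(self, n_features=24, n_slices=4, hidden=16):
            super().__init__()
            assert n_features % n_slices == 0
            self.slice_size = n_features // n_slices
            # One small LSTM per feature slice; the slices can be processed in parallel
            self.subnets = nn.ModuleList(
                nn.LSTM(self.slice_size, hidden, batch_first=True)
                for _ in range(n_slices)
            )
            self.head = nn.Linear(hidden * n_slices, 1)  # e.g. a failure-risk score

        def forward(self, x):  # x: (batch, time, n_features)
            parts = []
            for i, lstm in enumerate(self.subnets):
                sl = x[:, :, i * self.slice_size:(i + 1) * self.slice_size]
                out, _ = lstm(sl)
                parts.append(out[:, -1])          # last hidden state of each slice
            return self.head(torch.cat(parts, dim=-1))

    model = ParallelSlicedLSTM()
    pred = model(torch.randn(32, 50, 24))  # 32 sequences, 50 time steps, 24 sensor channels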

    Advances and Challenges of Multi-task Learning Method in Recommender System: A Survey

    Multi-task learning has been widely applied in computer vision, natural language processing, and other fields, and has achieved strong performance. In recent years, a substantial body of work on multi-task learning for recommender systems has been produced, but no previous literature has summarized these works. To bridge this gap, we provide a systematic literature survey of multi-task recommender systems, aiming to help researchers and practitioners quickly understand the current progress in this direction. In this survey, we first introduce the background and motivation of multi-task learning-based recommender systems. Then we provide a taxonomy of multi-task learning-based recommendation methods according to the different stages of multi-task learning techniques, including task relationship discovery, model architecture, and optimization strategy. Finally, we discuss applications and promising future directions in this area.
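
    A minimal shared-bottom model illustrates the kind of architecture such a taxonomy covers: one representation shared across tasks feeds separate task towers (here, hypothetical click and conversion heads). All names and dimensions are assumptions for illustration and are not taken from the survey.

    # Shared-bottom multi-task sketch for a recommender (assumed names and sizes)
    import torch
    import torch.nn as nn

    class SharedBottomMTL(nn.Module):
        def __init__(self, n_users=1000, n_items=5000, dim=32):
            super().__init__()
            self.user_emb = nn.Embedding(n_users, dim)
            self.item_emb = nn.Embedding(n_items, dim)
            # Shared representation used by every task
            self.shared = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU())
            self.click_tower = nn.Linear(64, 1)    # task 1: click-through prediction
            self.convert_tower = nn.Linear(64, 1)  # task 2: conversion prediction

        def forward(self, user_ids, item_ids):
            h = self.shared(torch.cat([self.user_emb(user_ids),
                                       self.item_emb(item_ids)], dim=-1))
            return torch.sigmoid(self.click_tower(h)), torch.sigmoid(self.convert_tower(h))

    model = SharedBottomMTL()
    p_click, p_convert = model(torch.tensor([1, 2]), torch.tensor([10, 20]))
    # Training would typically minimise a weighted sum of the two binary cross-entropy losses.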

    Writing Facts: Interdisciplinary Discussions of a Key Concept in Modernity

    "Fact" is one of the most crucial inventions of modern times. Susanne Knaller discusses the functions of this powerful notion in the arts and the sciences, its impact on aesthetic models and systems of knowledge. The practice of writing provides an effective procedure to realize and to understand facts. This concerns preparatory procedures, formal choices, models of argumentation, and narrative patterns. By considering "writing facts" and "writing facts", the volume shows why and how "facts" are a result of knowledge, rules, and norms as well as of description, argumentation, and narration. This approach allows new perspectives on »fact« and its impact on modernity

    Perceptions and Practicalities for Private Machine Learning

    Organizations increasingly want to derive value from data they and their partners hold while maintaining data subjects' privacy. In this thesis I show that private computation, such as private machine learning, can increase end-users' acceptance of data sharing practices, but not unconditionally. There are many factors that influence end-users' privacy perceptions in this space, including the number of organizations involved and the reciprocity of any data sharing practices. End-users emphasized the importance of detailing the purpose of a computation and clarifying that inputs to private computation are not shared across organizations. End-users also struggled with the notion of protections not being guaranteed 100%, as in statistically based schemes, demonstrating a need for a thorough understanding of the risk from attacks in such applications. When training a machine learning model on private data, it is critical to understand the conditions under which that data can be protected, and when it cannot. For instance, membership inference attacks aim to violate privacy protections by determining whether specific data was used to train a particular machine learning model. Further, the successful transition of private machine learning from theoretical research to practical use must account for gaps in achieving these properties that arise from the realities of concrete implementations, threat models, and use cases, which is not currently the case.
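
    As a concrete illustration of the membership inference attacks mentioned above, the sketch below implements a simple loss-threshold heuristic: a record whose model loss is unusually low, relative to known non-members, is guessed to have been in the training set. The losses here are synthetic, and this threshold rule is only one of several attack strategies from the literature, not the specific attack studied in the thesis.

    # Loss-threshold membership inference heuristic on synthetic per-example losses
    import numpy as np

    def membership_guess(per_example_losses, nonmember_losses, quantile=0.1):
        """Guess membership: loss below the q-th quantile of non-member losses."""
        threshold = np.quantile(nonmember_losses, quantile)
        return per_example_losses < threshold  # True = guessed training member

    # Synthetic example: training members tend to have lower loss than non-members
    members = np.random.gamma(1.0, 0.2, size=100)     # hypothetical member losses
    nonmembers = np.random.gamma(2.0, 0.5, size=100)  # hypothetical non-member losses
    guesses = membership_guess(np.concatenate([members, nonmembers]), nonmembers)
    labels = np.r_[np.ones(100), np.zeros(100)].astype(bool)
    accuracy = (guesses == labels).mean()  # how often the heuristic is right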

    Frivolous Floodgate Fears

    When rejecting plaintiff-friendly liability standards, courts often cite a fear of opening the floodgates of litigation. Namely, courts point to either a desire to protect the docket of federal courts or a burden on the executive branch. But there is little empirical evidence exploring whether the adoption of a stricter standard can, in fact, decrease the filing of legal claims in this circumstance. This Article empirically analyzes and theoretically models the effect of adopting arguably stricter liability standards on litigation by investigating the context of one of the Supreme Court’s most recent reliances on this argument when adopting a stricter liability standard for causation in employment discrimination claims. In 2013, the Supreme Court held that a plaintiff proving retaliation under Title VII of the Civil Rights Act must prove that their participation in a protected activity was a but-for cause of the adverse employment action they experienced. Rejecting the arguably more plaintiff-friendly motivating-factor standard, the Court stated, “[L]essening the causation standard could also contribute to the filing of frivolous claims, which would siphon resources from efforts by employer[s], administrative agencies, and courts to combat workplace harassment.” Univ. of Tex. Sw. Med. Ctr. v. Nassar, 570 U.S. 338, 358 (2013). And over the past ten years, the Court has overturned the application of motivating-factor causation as applied to at least four different federal antidiscrimination statutes. Contrary to the Supreme Court’s concern that motivating-factor causation encourages frivolous charges, many employment law scholars worry that the heightened but-for standard will deter legitimate claims. This Article empirically explores these concerns, in part using data received from the Equal Employment Opportunity Commission (EEOC) through a Freedom of Information Act (FOIA) request. Specifically, it empirically tests whether the adoption of the but-for causation standard for claims filed under the Age Discrimination in Employment Act and by federal courts of appeals under the Americans with Disabilities Act has impacted the filing of discrimination claims and the outcome of those claims in federal court. Consistent with theory detailed in this Article, the empirical analysis provides evidence that the stricter standard may have increased the docket of the federal courts by decreasing settlement within the EEOC and during litigation. The empirical results weigh in on concerns surrounding the adoption of the but-for causation standard and provide evidence that the floodgates argument, when relied on to deter frivolous filings by changing liability standards, in fact, may do just the opposite by decreasing the likelihood of settlement in the short term, without impacting the filing of claims or other case outcomes.