Forecasting stock prices using a novel filtering-combination technique: Application to the Pakistan stock exchange
Predicting stock market values is a subject of enduring interest to traders and investors, since accurate projections yield high financial returns and protect investors from market risk. This research proposes a novel filtering-combination approach to increase forecast accuracy. The first step filters the original series of stock market prices into two new series, a long-run nonlinear trend series and a stochastic (cyclical) component, using the Hodrick-Prescott filter. Next, all possible filtered-combination models are considered, producing forecasts of each filtered series with linear and nonlinear time series forecasting models. The forecasts of the filtered series are then combined to obtain the final forecasts. The proposed filtering-combination technique is applied to Pakistan's daily stock market price index data from January 2, 2013 to February 17, 2023. To assess the proposed forecasting methodology's consistency, efficiency and accuracy, we analyze the models across different data-split ratios and compute four mean errors, correlation coefficients and directional mean accuracy. Finally, the authors recommend testing the proposed filtering-combination approach on further complicated financial time series data to achieve highly accurate, efficient and consistent forecasts.
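The filter-then-combine idea can be sketched in a few lines. This is a minimal toy illustration on synthetic data using the Hodrick-Prescott filter from statsmodels; the per-component forecasting models (a linear trend fit and a naive last-value forecast) and the smoothing parameter are placeholder choices, not the paper's actual models.

```python
# Sketch of the filtering-combination idea on a synthetic price index.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
t = np.arange(500)
prices = 100 + 0.05 * t + np.cumsum(rng.normal(0, 0.5, 500))  # toy price series

# Step 1: split the series into a long-run trend and a stochastic component.
cycle, trend = hpfilter(prices, lamb=129600)  # lamb is a tuning choice

# Step 2: forecast each filtered series separately (placeholder models:
# a linear fit for the trend, a naive last-value forecast for the cycle).
coef = np.polyfit(t, trend, 1)
trend_fc = np.polyval(coef, t[-1] + 1)
cycle_fc = cycle[-1]

# Step 3: combine the component forecasts into the final forecast.
final_fc = trend_fc + cycle_fc
print(float(final_fc))
```

Note that the two filtered series sum back to the original by construction, so combining their forecasts is additive.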
A foundation for synthesising programming language semantics
Programming or scripting languages used in real-world systems are seldom designed
with a formal semantics in mind from the outset. Therefore, the first step for developing well-founded analysis tools for these systems is to reverse-engineer a formal
semantics. This can take months or years of effort.
Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging,
as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning
desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version, whose semantics are much easier to write. The present thesis
contains an analysis of their challenge, as well as the first steps towards a solution.
Scaling methods with the size of the language is very difficult due to state space
explosion, so this thesis proposes an incremental approach to learning the translation
rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al. and re-formulates the problem, shifting the focus to the
conditions for incremental learning. The central definition of the new formalisation is
the desugaring extension problem, i.e. extending a set of established translation rules
by synthesising new ones.
In a synthesis algorithm, the choice of search space is important and non-trivial,
as it needs to strike a good balance between expressiveness and efficiency. The rest
of the thesis focuses on defining search spaces for translation rules via typing rules.
Two prerequisites are required for comparing search spaces. The first is a series of
benchmarks, a set of source and target languages equipped with intended translation
rules between them. The second is an enumerative synthesis algorithm for efficiently
enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected
from a type system for ensuring that typed programs be efficiently enumerable.
The thesis presents and empirically evaluates two search spaces. A baseline search
space yields the first practical solution to the challenge. The second search space is
based on a natural heuristic for translation rules, restricting variables so that
each is used exactly once. I present a linear type system, designed for efficient enumeration of translation rules, in which this heuristic is enforced. Through informal analysis
and empirical comparison to the baseline, I then show that using linear types can speed
up the synthesis of translation rules by an order of magnitude.
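A desugaring translation rule of the kind the thesis aims to synthesise can be illustrated with a toy sketch. The AST encoding, the constructor names, and the `while`-to-`loop` rule below are all invented for illustration; they are not the thesis's benchmarks.

```python
# Hypothetical desugaring rule: surface-language `while` loops are rewritten
# into a core language with only `if`, sequencing, and a recursive `loop`.
def desugar(node):
    if isinstance(node, tuple) and node[0] == "while":
        _, cond, body = node
        # while c do b  ~~>  loop f. if c then (b; call f) else skip
        return ("loop", "f",
                ("if", desugar(cond),
                 ("seq", desugar(body), ("call", "f")),
                 ("skip",)))
    if isinstance(node, tuple):
        # Other constructors pass through with desugared children.
        return (node[0],) + tuple(desugar(c) for c in node[1:])
    return node  # atoms (variables, literals) are unchanged

core = desugar(("while", ("lt", "i", 10), ("incr", "i")))
print(core)
```

Synthesising such a rule automatically means searching a space of candidate right-hand sides, which is why the choice of search space (and the typing discipline that constrains it) is central.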
Automated Creation of Predictive Maintenance Models in the Automotive Industry
New technologies in the automotive industry are driving rapid vehicle development, with an ever-stronger focus on software. This goes hand in hand with increasing vehicle complexity, which makes maintenance more difficult. However, new technologies such as connected vehicles also enable new vehicle-maintenance concepts. With predictive maintenance, vehicle defects are forecast using prediction models. Connected vehicles can provide the data required for this via remote access.
Previous approaches to implementing predictive maintenance in the automotive industry focus on individual components and rely on expert knowledge to develop a prediction model. However, because vehicles are continuously developed and highly individualised, an already developed prediction model is no longer valid once components change, requiring further manual rework. With this procedure, a holistic implementation of predictive maintenance for the entire vehicle therefore cannot be realised.
This doctoral thesis presents an implementation of predictive maintenance that requires no expert knowledge and can be applied automatically to all defects in the vehicle.
To this end, data from the entire vehicle are used, collected via vehicle-diagnostics mechanisms.
The presented procedure comprises preparing the data basis, identifying components and defects suitable for prediction, selecting relevant measurements, and creating the prediction model. The presented concepts are then applied and verified on a real vehicle fleet of approximately 2.2 million vehicles.
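Two of the described steps, selecting relevant measurements and creating the prediction model, can be sketched as a generic pipeline. The sketch below uses synthetic diagnostic data and standard scikit-learn components; the thesis's actual data sources, component-identification step, and models are not specified here.

```python
# Hedged sketch: measurement selection followed by defect-prediction model
# creation, on synthetic per-vehicle diagnostic readings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))  # 20 diagnostic measurements per vehicle
# Synthetic defect label driven by two of the measurements plus noise.
y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.5, 1000) > 1).astype(int)

model = Pipeline([
    ("select", SelectKBest(f_classif, k=5)),          # pick relevant measurements
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```

Because the feature selection is data-driven rather than expert-driven, the same pipeline can in principle be re-run unchanged when components or measurements change.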
Separately, Connectedly: Exploring Trauma Through Ekphrasis in Contemporary Novels
This thesis examines ekphrasis as a rhetorical tool to explore, represent, and contemplate trauma affect in contemporary novels. From the Greek for "description," ekphrasis is part of a long and ancient literary tradition, dating as far back as early depictions of art on urns and weaponry, as well as more disambiguated descriptions of scenes and people. The uses of ekphrasis as a literary device are broad and complex, but its use is under-researched in contemporary novels, and there is a near-total absence of investigation into ekphrasis within the novel as a means of contemplating and understanding the affect of a condition that is inherently abstract and disorienting. Literary trauma theory has evolved considerably in recent years. In keeping with important findings in psychology and psychiatric research, there is broad recognition that rethinking trauma representation, beyond the recitation and reliving of events and into textured descriptions of trauma affect, is essential for thoughtful, nuanced explorations of an experience that resists narrative convenience. As a result, there are increased calls to accept and represent its inherently fractured nature and to resist the authorial temptation to forge a story around it that fits neatly into a cohesive whole. This thesis proposes a framework for considering how various aspects of ekphrastic descriptions of real and imagined art, as well as their connotative and denotative significance in the novel, reveal nuance in the representation of trauma affect through the activation of language and image. The contemporary novels explored herein are The Goldfinch by Donna Tartt, What I Loved by Siri Hustvedt, and How to Be Both by Ali Smith. Each of these novels presents ekphrasis and affect differently, which enables broader testing of the flexibility of the proposed framework.
Interpretable Machine Learning Architectures for Efficient Signal Detection with Applications to Gravitational Wave Astronomy
Deep learning has seen rapid evolution in the past decade, accomplishing tasks that were previously unimaginable. At the same time, researchers strive to better understand and interpret the underlying mechanisms of deep models, which are often justifiably regarded as "black boxes". Overcoming this deficiency will not only suggest better learning architectures and training methods, but also extend deep learning to scenarios where interpretability is key to the application. One such scenario is signal detection and estimation, with gravitational wave detection as a specific example, where classic methods are often preferred for their interpretability. Nonetheless, while classic statistical detection methods such as matched filtering excel in their simplicity and intuitiveness, they can be suboptimal in terms of both accuracy and computational efficiency. It is therefore appealing to have methods that achieve "the best of both worlds", enjoying simultaneously excellent performance and interpretability.
In this thesis, we aim to bridge this gap between modern deep learning and classic statistical detection by revisiting the signal detection problem from a new perspective. First, to address the perceived distinction in interpretability between classic matched filtering and deep learning, we identify the intrinsic connections between the two families of methods and show how trainable networks can address the structural limitations of matched filtering. Based on these ideas, we propose two trainable architectures that are constructed from matched filtering but have learnable templates and adaptivity to unknown noise distributions, and therefore higher detection accuracy. We next turn our attention to improving the computational efficiency of detection, aiming to design architectures that leverage structure within the problem for efficiency gains. By leveraging the statistical structure of class imbalance, we integrate hierarchical detection into trainable networks and use a novel loss function that explicitly encodes both detection accuracy and efficiency. Furthermore, by leveraging the geometric structure of the signal set, we consider signal-space optimization as an alternative computational primitive for detection, which is intuitively more efficient than covering with a template bank. We theoretically prove the efficiency gain by analyzing Riemannian gradient descent on the signal manifold, which reveals an exponential improvement in efficiency over matched filtering. We also propose a practical trainable architecture for template optimization, which makes use of signal embedding and kernel interpolation.
We demonstrate the performance of all proposed architectures on the task of gravitational wave detection in astrophysics, where matched filtering is the current method of choice. The architectures are also widely applicable to general signal or pattern detection tasks, which we exemplify with the handwritten digit recognition task using the template optimization architecture. Together, we hope this work proves useful to scientists and engineers seeking machine learning architectures with high performance and interpretability, and that it contributes to our understanding of deep learning as a whole.
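For readers unfamiliar with the classical baseline, a minimal matched-filtering detector can be sketched as follows. The template shape, white-noise model, and injection amplitude are illustrative choices for this sketch, not the thesis's gravitational-wave setup.

```python
# Minimal matched filter: slide a known, unit-norm template across noisy data
# and flag the location of the largest inner product.
import numpy as np

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 128))  # known waveform
template /= np.linalg.norm(template)

data = rng.normal(0, 1, 1024)       # white Gaussian noise
data[400:528] += 8 * template       # inject a scaled copy of the template

def matched_filter(x, tmpl):
    # Inner product of the template with every length-128 window of the data;
    # for white noise this is proportional to the log-likelihood ratio.
    return np.correlate(x, tmpl, mode="valid")

stat = matched_filter(data, template)
detected = int(np.argmax(np.abs(stat)))
print(detected)
```

The cost of this scheme scales with the size of the template bank times the data length, which is exactly the inefficiency that template-space optimization, as discussed above, aims to avoid.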
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Solving forward and inverse problems in a non-linear 3D PDE via an asymptotic expansion based approach
This paper concerns the use of asymptotic expansions for the efficient
solving of forward and inverse problems involving a nonlinear singularly
perturbed time-dependent reaction–diffusion–advection equation. By using an
asymptotic expansion with the local coordinates in the transition-layer region,
we prove the existence and uniqueness of a smooth solution with a sharp
transition layer for a three-dimensional partial differential equation.
Moreover, with the help of asymptotic expansion, a simplified model is derived
for the corresponding inverse source problem, which is close to the original
inverse problem over the entire region except for a narrow transition layer. We
show that such simplification does not reduce the accuracy of the inversion
results when the measurement data contain noise. Based on this simpler
inversion model, an asymptotic-expansion regularization algorithm is proposed
for efficiently solving the inverse source problem in the three-dimensional
case. A model problem demonstrates the feasibility of the proposed numerical approach.
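For orientation, equations of the class described above are often written in a form like the following. This is a generic template only; the paper's exact equation, coefficients, and boundary conditions are not reproduced here.

```latex
\varepsilon \left( \Delta u - \frac{\partial u}{\partial t} \right)
  = \mathbf{A}(u, x) \cdot \nabla u + B(u, x, t),
  \qquad x \in \Omega \subset \mathbb{R}^{3}, \quad 0 < \varepsilon \ll 1,
```

where the small parameter $\varepsilon$ multiplying the highest-order terms is what produces the sharp transition layer, and the asymptotic expansion is carried out in powers of $\varepsilon$ with stretched local coordinates inside the layer.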