Synthetic Aperture Radar (SAR) Meets Deep Learning
This reprint focuses on the combination of synthetic aperture radar and deep learning, and aims to further promote the development of intelligent SAR image interpretation. Synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather imaging capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in computer vision, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of many applications. This reprint provides a platform for researchers to address these challenges and present their innovative, cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.
A predictive equation for wave setup using genetic programming
We applied machine learning to improve the accuracy of present predictors of wave setup. Namely, we used an evolutionary-based genetic programming model and a previously published dataset, which includes various beach and wave conditions. Here, we present two new wave setup predictors: a simple predictor, which is a function of wave height, wavelength, and foreshore beach slope, and a fitter, but more complex, predictor, which is also a function of sediment diameter. The results show that the new predictors outperform existing formulas. We conclude that machine learning models are capable of improving predictive capability (when compared to existing predictors) and also of providing a physically sound description of wave setup.
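The calibration step behind such predictors can be sketched in a few lines. The power-law form, the coefficients, and the synthetic data below are illustrative assumptions, not the published predictor; the sketch only shows how a simple setup formula can be fitted against observations:

```python
import numpy as np

# Toy illustration only: fit a *hypothetical* power-law wave-setup predictor
# eta = a * H^b * L^c * tan(beta)^d. This is NOT the paper's published formula;
# it merely shows how a simple predictor can be calibrated against data.

rng = np.random.default_rng(0)
n = 200
H = rng.uniform(0.5, 4.0, n)       # significant wave height [m]
L = rng.uniform(50.0, 300.0, n)    # wavelength [m]
slope = rng.uniform(0.01, 0.2, n)  # foreshore beach slope tan(beta)

# Synthetic "observations" generated from assumed coefficients plus noise.
eta = 0.3 * H**1.0 * L**0.1 * slope**0.2 * np.exp(rng.normal(0.0, 0.05, n))

# Log-transforming turns the power law into a linear least-squares problem.
X = np.column_stack([np.ones(n), np.log(H), np.log(L), np.log(slope)])
coef, *_ = np.linalg.lstsq(X, np.log(eta), rcond=None)
a, b, c, d = np.exp(coef[0]), coef[1], coef[2], coef[3]

pred = a * H**b * L**c * slope**d
rmse = np.sqrt(np.mean((pred - eta) ** 2))
print(a, b, c, d, rmse)
```

A genetic programming model differs in that it searches over functional forms as well as coefficients, but the fitting-versus-complexity trade-off it navigates is the same.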
Self-Learning Longitudinal Control for On-Road Vehicles
Advanced driver assistance systems are an important selling point for passenger cars, but they incur high development costs. In particular, parameterizing the longitudinal control, a key building block of driver assistance systems, requires considerable time and money to strike the right balance between passenger comfort and control performance. Reinforcement learning appears to be a promising approach to automating this. So far, however, this class of algorithms has mainly been applied to simulated tasks that take place under ideal conditions and allow nearly unlimited training time. Among the greatest challenges for applying reinforcement learning in a real vehicle are trajectory-tracking control and incomplete state information due to only partially observed dynamics. Furthermore, an algorithm deployed on real systems must deliver a result within minutes. In addition, the control objective may change arbitrarily at runtime, which poses an additional difficulty for reinforcement learning methods. This work presents two computationally lightweight algorithms that overcome these hurdles. On the one hand, a model-free reinforcement learning approach is proposed that builds on the actor-critic architecture and uses a special structure of the state-action value function so that it can be applied to partially observed systems. To learn a feedforward control, a controller is proposed that relies on a projection and on training-data manipulation. On the other hand, a model-based algorithm based on policy search is proposed, complemented by an automated design method for an inversion-based feedforward control. The proposed algorithms are compared in a series of scenarios in which they learn online, i.e., while driving and in closed loop, in a real vehicle. Although the algorithms respond somewhat differently to different operating conditions, both learn robustly and quickly and are able to adapt to different operating points, such as speeds and gears, even when disturbances act during training. To the best of the author's knowledge, this is the first successful application of a reinforcement learning algorithm that learns online in a real vehicle.
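The model-based policy-search idea can be illustrated with a toy sketch. The controller structure, vehicle model, and all constants below are illustrative assumptions, not the thesis algorithms: a proportional speed-tracking gain is tuned by finite-difference gradient descent on a simulated rollout cost.

```python
import numpy as np

# Toy illustration (not the thesis algorithms): model-based policy search that
# tunes a proportional gain for longitudinal speed tracking. A known first-order
# vehicle model is rolled out, and the gain is improved by finite-difference
# gradient descent on the accumulated tracking error.

DT, STEPS, TARGET = 0.1, 100, 20.0  # step [s], horizon, target speed [m/s]

def rollout_cost(k):
    """Simulate speed tracking with gain k; return the summed squared error."""
    v, cost = 0.0, 0.0
    for _ in range(STEPS):
        u = k * (TARGET - v)                   # proportional control law
        a = np.clip(u, -3.0, 3.0) - 0.05 * v   # acceleration limits plus drag
        v += a * DT
        cost += (TARGET - v) ** 2
    return cost

k, lr, eps = 0.1, 1e-4, 1e-3
for _ in range(50):
    grad = (rollout_cost(k + eps) - rollout_cost(k - eps)) / (2 * eps)
    k -= lr * grad  # gradient step on the simulated rollout cost

print(k, rollout_cost(k))
```

Learning on a real vehicle replaces the known model with measured rollouts, which is precisely where the sample-efficiency and robustness requirements discussed above become binding.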
On the Challenges of Software Performance Optimization with Statistical Methods
Most recent programming languages, such as Java, Python and Ruby, include a collection framework as part of their standard library (or runtime). The Java Collection Framework provides a number of collection classes, some of which implement the same abstract data type, making them interchangeable. Developers can therefore choose between several functionally equivalent options. Since collections have different performance characteristics, and can be allocated in thousands of program locations, the choice of collection has an important impact on performance. Unfortunately, programmers often make sub-optimal choices when selecting their collections.

In this thesis, we consider the problem of building automated tools that would help the programmer choose between different collection implementations. We divide this problem into two sub-problems. First, we need to measure the performance of a collection, and use relevant statistical methods to make meaningful comparisons. Second, we need to predict the performance of a collection with as little benchmarking as possible.

To measure and analyze the performance of Java collections, we identify problems with the established methods, and suggest the need for more appropriate statistical methods, borrowed from Bayesian statistics. We use these statistical methods in a reproduction of two state-of-the-art dynamic collection selection approaches: CoCo and CollectionSwitch. Our Bayesian approach allows us to make sound comparisons between the previously reported results and our own experimental evidence. We find that we cannot reproduce the original results, and report on possible causes for the discrepancies between our results and theirs.

To predict the performance of a collection, we consider an existing tool called Brainy. Brainy suggests collections to developers for C++ programs, using machine learning. One particularity of Brainy is that it generates its own training data, by synthesizing programs and benchmarking them. As a result, Brainy can automatically learn about new collections and new CPU architectures, whereas other approaches require an expert to teach the system about collection performance. We adapt Brainy to the Java context, and investigate whether Brainy's adaptability also holds for Java. We find that Brainy's benchmark synthesis methods do not apply well to the Java context, as they introduce significant biases. We propose a new generative model for collection benchmarks and present the challenges that porting Brainy to Java entails.
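The measurement half of the problem can be illustrated with a toy micro-benchmark. The workload and repeat counts below are arbitrary assumptions, not the thesis tooling: two functionally equivalent collections are timed repeatedly, and distributions rather than single runs are compared.

```python
import timeit
from collections import deque
from statistics import median

# Toy illustration of the measurement problem (not the thesis tooling):
# two interchangeable collections with different performance characteristics
# are benchmarked several times, and summary statistics are compared
# instead of trusting a single noisy run.

def build_list(n=5000):
    xs = []
    for i in range(n):
        xs.insert(0, i)   # O(n) left-insert on a dynamic array
    return xs

def build_deque(n=5000):
    xs = deque()
    for i in range(n):
        xs.appendleft(i)  # O(1) left-append on a deque
    return xs

# Repeated samples: any single timing is noisy, so collect a distribution.
list_times = timeit.repeat(build_list, number=1, repeat=7)
deque_times = timeit.repeat(build_deque, number=1, repeat=7)
print(median(list_times), median(deque_times))
```

A Bayesian treatment, as argued in the thesis, would go further and model the full timing distributions rather than comparing point summaries.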
Self-Learning Longitudinal Control for On-Road Vehicles
Reinforcement learning is a promising tool for automating controller tuning. However, significant extensions are required for real-world applications to enable fast and robust learning. This work proposes several additions to the state of the art and demonstrates their capability in a series of real-world experiments.
The Simplest Inflationary Potentials
Inflation is a highly favoured theory for the early Universe. It is compatible with current observations of the cosmic microwave background and large-scale structure and is a driver in the quest to detect primordial gravitational waves. It is also, given the current quality of the data, highly under-determined, with a large number of candidate implementations. We use a new method in symbolic regression to generate all possible simple scalar field potentials for one of two possible basis sets of operators. Treating these as single-field, slow-roll inflationary models, we then score them with an information-theoretic metric ("minimum description length") that quantifies their efficiency in compressing the information in the Planck data. We explore two possible priors on the parameter space of potentials, one related to the functions' structural complexity and one that uses a Katz back-off language model to prefer functions that may be theoretically motivated. This enables us to identify the inflaton potentials that optimally balance simplicity with accuracy at explaining the Planck data, which may subsequently find theoretical motivation. Our exploratory study opens the door to extraction of fundamental physics directly from data, and may be augmented with more refined theoretical priors in the quest for a complete understanding of the early Universe.
Comment: 13+4 pages, 4 figures; submitted to Physical Review
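The scoring idea can be sketched on toy data. The candidate set, the operator counts, and the penalty form below are illustrative assumptions, not the paper's actual pipeline; the sketch just ranks simple functional forms by a two-part description length.

```python
import numpy as np

# Toy illustration of MDL-style model selection (not the paper's method):
# candidate functional forms are ranked by a two-part description length,
# a residual-fit term plus a crude structural-complexity penalty.

rng = np.random.default_rng(1)
x = np.linspace(0.1, 2.0, 50)
y = x**2 + rng.normal(0.0, 0.01, x.size)  # data secretly generated by x^2

# Each candidate: (function, rough count of operators it uses).
candidates = {
    "x": (lambda x: x, 1),
    "x^2": (lambda x: x**2, 2),
    "x^3": (lambda x: x**3, 2),
    "sin(x)": (np.sin, 2),
    "exp(x)": (np.exp, 2),
}

def mdl(f, n_ops):
    resid = np.mean((y - f(x)) ** 2)
    # Code length for the residuals plus a penalty per operator.
    return 0.5 * x.size * np.log(resid) + n_ops * np.log(2.0)

scores = {name: mdl(f, ops) for name, (f, ops) in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

The full method additionally weights candidates by priors over functional forms, e.g. the language-model prior described above, before ranking them.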
Simplification of genetic programs: a literature survey
Genetic programming (GP), a widely used evolutionary computing technique, suffers from bloat, the problem of excessive growth in individuals' sizes. As a result, its ability to efficiently explore complex search spaces is reduced, the resulting solutions are less robust and generalisable, and models which contain bloat are difficult to understand and explain. This phenomenon is well researched, primarily from the angle of controlling bloat; our focus in this paper is instead to review the literature from an explainability point of view, by looking at how simplification can make GP models more explainable by reducing their sizes. Simplification is a code-editing technique whose primary purpose is to make GP models more explainable; however, it can offer bloat control as an additional benefit when implemented and applied with caution. Researchers have proposed several simplification techniques and adopted various strategies to implement them. We organise the literature along multiple axes to identify the relative strengths and weaknesses of simplification techniques and to identify emerging trends and areas for future exploration. We highlight design and integration challenges and propose several avenues for research. One of them is to consider simplification as a standalone operator, rather than an extension of the standard crossover or mutation operators; its role is then more clearly complementary to other GP operators, and it can be integrated as an optional feature into an existing GP setup. Another proposed avenue is to explore the lack of utilisation of complexity measures in simplification: so far, size is the most discussed measure, with only two pieces of prior work pointing out the benefits of using time as a measure when controlling bloat.
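Simplification as a standalone operator can be sketched on a tiny expression-tree representation. The tuple encoding and the rule set below are illustrative assumptions, not drawn from any particular surveyed system:

```python
# Toy illustration of GP simplification (not a surveyed system): expression
# trees are tuples ('op', left, right), and algebraic identities shrink the
# tree without changing the function it computes.

def simplify(t):
    """Recursively apply x*1 -> x, x*0 -> 0, x+0 -> x, x-x -> 0."""
    if not isinstance(t, tuple):
        return t                       # leaf: variable name or constant
    op, a, b = t[0], simplify(t[1]), simplify(t[2])
    if op == "*":
        if a == 1: return b
        if b == 1: return a
        if a == 0 or b == 0: return 0
    if op == "+":
        if a == 0: return b
        if b == 0: return a
    if op == "-" and a == b:
        return 0
    return (op, a, b)

# ((x * 1) + (y - y)) simplifies all the way down to x.
tree = ("+", ("*", "x", 1), ("-", "y", "y"))
print(simplify(tree))  # -> x
```

Used as an optional operator alongside crossover and mutation, such a pass reduces tree size (and hence improves explainability) while leaving semantics intact, which is the complementary role the survey advocates.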
Applied Cognitive Sciences
Cognitive science is an interdisciplinary field studying the mind and intelligence. The term cognition refers to a variety of mental processes, including perception, problem solving, learning, decision making, language use, and emotional experience. The cognitive sciences build on the contributions of philosophy and computing to the study of cognition. Computing is particularly important because computer-aided research helps to model mental processes, and computers are used to test scientific hypotheses about mental organization and functioning. This book provides a platform for reviewing these disciplines and presenting cognitive research as a discipline in its own right.