COMPARATIVE STUDY OF THE DIFFERENT VERSIONS OF THE GENERAL IMAGE QUALITY EQUATION
The General Image Quality Equation (GIQE) is an analytical tool, derived by regression modelling, that is routinely employed to gauge the interpretability of raw and processed images. It computes the most popular quantitative metric for evaluating image quality: the National Image Interpretability Rating Scale (NIIRS). There are three known versions of this equation: GIQE 3, GIQE 4 and GIQE 5, the last of which is scarcely known. The variety of versions, with their subtleties, discontinuities and incongruences, generates confusion and problems among users. The first objective of this paper is to identify typical sources of confusion in the use of the GIQE, suggesting novel solutions to the main problems found in its application and presenting the derivation of a continuous form of GIQE 4, denominated GIQE 4C, that provides better correlation with GIQE 3 and GIQE 5. The second objective is to compare the predictions of GIQE 4C and GIQE 5 regarding the maximum image quality rating that can be achieved by image processing techniques. It is concluded that the transition from GIQE 4 to GIQE 5 is a major paradigm shift in image quality metrics, because it reduces the benefit of image processing techniques and enhances the importance of the raw image and its signal-to-noise ratio.
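To illustrate the kind of regression formula involved, here is a minimal sketch of the widely published GIQE 4 form. The coefficients are the commonly cited ones; the function name, parameter names, and defaults are illustrative assumptions, and this is the discontinuous GIQE 4, not the GIQE 4C variant derived in the paper:

```python
import math

def giqe4_niirs(gsd_in, rer, snr, g=1.0, h=1.0):
    """Estimate NIIRS via the (discontinuous) GIQE 4 regression.

    gsd_in : ground sample distance in inches
    rer    : relative edge response
    snr    : signal-to-noise ratio
    g      : post-processing noise gain
    h      : edge overshoot due to sharpening
    """
    # GIQE 4 switches coefficient sets at RER = 0.9; this jump is the
    # kind of discontinuity that a continuous form must smooth out.
    if rer >= 0.9:
        a, b = 3.32, 1.559
    else:
        a, b = 3.16, 2.817
    return (10.251 - a * math.log10(gsd_in)
            + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)
```

Note how sharpening enters twice with opposite effects: it raises `rer` (helping) but also raises `h` and `g` (hurting), which is why the maximum achievable rating under processing is a meaningful question.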
Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing
Modern society is undergoing a fundamental change in the way we interact with technology.
More and more devices are becoming "smart" by gaining advanced computation capabilities
and communication interfaces, from household appliances through transportation systems to large-scale
networks like the power grid. Recording, processing, and exchanging digital information
is thus becoming increasingly important. As a growing share of devices is nowadays mobile
and hence battery-powered, a particular interest in efficient digital signal processing techniques
emerges.
This thesis contributes to this goal by demonstrating methods for finding efficient algebraic
solutions to various applications of multi-channel digital signal processing. These may not
always result in the best possible system performance. However, they often come close while
being significantly simpler to describe and to implement. The simpler description facilitates a
thorough analysis of their performance which is crucial to design robust and reliable systems.
The fact that they rely only on standard algebraic methods allows their rapid implementation
and test under real-world conditions.
We demonstrate this concept in three different application areas. First, we present a semi-algebraic
framework to compute the Canonical Polyadic (CP) decompositions of multidimensional
signals, a very fundamental tool in multilinear algebra with applications ranging from
chemistry through communications to image compression. Compared to state-of-the-art iterative
solutions, our framework offers a flexible control of the complexity-accuracy trade-off and
is less sensitive to badly conditioned data. The second application area is multidimensional
subspace-based high-resolution parameter estimation with applications in RADAR, wave propagation
modeling, or biomedical imaging. We demonstrate that multidimensional signals can be represented by tensors, providing a convenient description and allowing the multidimensional structure to be exploited more effectively than with matrices alone. Based on this idea,
we introduce the tensor-based subspace estimate which can be applied to enhance existing
matrix-based parameter estimation schemes significantly. We demonstrate the enhancements
by choosing the family of ESPRIT-type algorithms as an example and introducing enhanced
versions that exploit the multidimensional structure (Tensor-ESPRIT), non-circular source
amplitudes (NC ESPRIT), and both jointly (NC Tensor-ESPRIT). To objectively judge the
resulting estimation accuracy, we derive a framework for the analytical performance assessment
of arbitrary ESPRIT-type algorithms by virtue of an asymptotic first-order perturbation
expansion. Our results are more general than existing analytical results since we do not need
any assumptions about the distribution of the desired signal and the noise and we do not
require the number of samples to be large. In the end, we obtain simplified expressions for the mean square estimation error that provide insights into the efficiency of the methods under various
conditions. The third application area is bidirectional relay-assisted communications. Due to
its particularly low complexity and its efficient use of the radio resources, we choose two-way relaying with a MIMO amplify-and-forward relay. We demonstrate that the required channel
knowledge can be obtained by a simple algebraic tensor-based channel estimation scheme. We
also discuss the design of the relay amplification matrix in such a setting. Existing approaches
are either based on complicated numerical optimization procedures or on ad-hoc solutions
that do not perform well in terms of the bit error rate or the sum-rate. Therefore, we propose
algebraic solutions that are inspired by these performance metrics and therefore perform well
while being easy to compute. For the MIMO case, we introduce the algebraic norm maximizing
(ANOMAX) scheme, which achieves a very low bit error rate, and its extension Rank-Restored
ANOMAX (RR-ANOMAX) that achieves a sum-rate close to an upper bound. Moreover, for
the special case of single antenna terminals we derive the semi-algebraic RAGES scheme which
finds the sum-rate optimal relay amplification matrix based on generalized eigenvectors. Numerical
simulations evaluate the resulting system performance in terms of bit error rate and
system sum rate, which demonstrates the effectiveness of the proposed algebraic solutions.
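As a concrete reference point for the matrix-based baseline that the tensor extensions improve upon, here is a minimal textbook sketch of 1-D ESPRIT for direction finding with a uniform linear array. Half-wavelength element spacing is an assumption, and this is the standard algorithm, not the Tensor-ESPRIT or NC variants proposed in the thesis:

```python
import numpy as np

def esprit_doa(X, d):
    """Standard 1-D ESPRIT for a uniform linear array (ULA) with
    half-wavelength spacing.  X is the M x N snapshot matrix and d
    the number of sources; returns direction estimates in radians."""
    # signal subspace: dominant left singular vectors of the data
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Us = U[:, :d]
    # shift invariance between the two maximally overlapping subarrays:
    # Us[:-1] @ Psi ~= Us[1:], solved in the least-squares sense
    Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    # the eigenvalues of Psi carry the spatial frequencies exp(j*pi*sin(theta))
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.arcsin(phases / np.pi)
```

As the abstract describes, the tensor-based subspace estimate can replace the SVD step here while the shift-invariance solution is reused unchanged, which is why it enhances existing matrix-based schemes so directly.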
Studies of a prototype of an Electro-Optic Beam Position Monitor at the CERN Super Proton Synchrotron
The commissioning, development and results of an Electro-Optic Beam Position Monitor (EO-BPM) prototype installed in the CERN Super Proton Synchrotron (SPS) are reported in this thesis. This diagnostic technique aims to measure the transverse intra-bunch position in 1 ns proton bunches with a time resolution of less than 100 ps, in order to meet the requirements of the High Luminosity Large Hadron Collider (HL-LHC). The thesis details the mechanism that generates the electro-optic signal resulting from the interaction of the Coulomb field with a lithium niobate crystal via the Pockels effect. The theoretical background leads to the introduction of the EO-BPM concept, based on vacuum-integrated EO crystals, in the context of the SPS machine. In conjunction with this, an analytical framework has been developed to estimate the EO pickup signal for the SPS beam parameters. This study also presents two different opto-mechanical pickup designs, pickup zero and pickup one. Numerical electromagnetic simulations have been carried out to predict more precisely the performance of both proposals in relation to the modulating field. In addition, a detailed description of the experimental optical setup adjacent to the prototype and of the acquisition system is presented. Further simulations have been applied to incorporate the response of the detection system and calculate the final signal delivered by the prototype. Results from measurements in December 2016 for pickup zero and over the summer of 2017 for pickup one are reported and constitute the first detection ever of a proton beam by electro-optic means. Analysis verifies that the signal at a radial distance of 66.5 mm scales correctly as a function of the beam conditions and the pickup model, and is also sensitive to the transverse beam position. These results provide the first proof of concept, in preparation for future developments of the technology towards the LHC upgrade.
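For orientation, the Pockels-effect relation underlying the pickup signal can be written, in one common convention for lithium niobate with the field component $E$ along the optic axis (symbols and sign conventions here are generic, not necessarily those of the thesis):

```latex
\Delta n_e \;\approx\; -\tfrac{1}{2}\, n_e^{3}\, r_{33}\, E ,
\qquad
\Gamma \;=\; \frac{2\pi}{\lambda}\, \Delta n_e\, L ,
```

where $r_{33}$ is the relevant electro-optic coefficient, $L$ the crystal length, and $\lambda$ the laser wavelength. The phase retardation $\Gamma$ is thus linear in the bunch's Coulomb field, which is what makes a fast intra-bunch position measurement possible.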
Functional nucleic acids as substrate for information processing
Information processing applications driven by the self-assembly and conformation dynamics of nucleic acids are possible. These underlying paradigms (self-assembly and conformation
dynamics) are essential for natural information processors, as illustrated by proteins. A key advantage of utilising nucleic acids as information processors is the availability of computational tools to support the design process. This provides us with a platform to develop an integrated environment in which an orchestration of molecular building blocks can be realised. Strict arbitrary control over the design of these computational nucleic acids is not feasible; the microphysical behaviour of these molecular materials must be taken into consideration during the design phase. This thesis investigated to what extent the construction of molecular building blocks for a particular purpose is possible with the support of a software environment. In this work we developed a computational protocol that functions on a multi-molecular level, which enables us to directly
incorporate the dynamic characteristics of nucleic acid molecules. To implement this computational protocol, we developed a designer that is able to solve the
nucleic acid inverse prediction problem, not only at the level of multi-stable states, but also including the interactions among molecules that occur in each meta-stable state. The realisation of our computational protocol is evaluated by generating computational nucleic acid units that resemble synthetic RNA devices that have been successfully implemented in the laboratory. Furthermore, we demonstrated the feasibility of the
protocol for designing various types of computational units. The accuracy and diversity of the generated candidates are significantly better than those of the best candidates produced by conventional designers. With this computational protocol, the design of nucleic acid information processors using a network of interconnecting nucleic acids is now feasible.
Large strain computational modeling of high strain rate phenomena in perforating gun devices by Lagrangian/Eulerian FEM simulations
The present doctoral thesis deals with the study and the analysis of large strain and high strain rate behavior of materials and components. Theoretical, experimental and computational aspects are taken into consideration. Particular reference is made to the modeling of metallic materials, although other kinds of materials are considered as well. The work may be divided into three main parts.
The first part of the work consists of a critical review of the constitutive modeling of materials subjected to large strains and high to very high strain rates. Specific attention is paid to the opportunity of adopting so-called strength models and equations of state. Damage and failure modeling is discussed as well. In this part, specific interest is addressed to reviewing the so-called Johnson-Cook strength model, critically highlighting its positive and negative aspects. One of the main issues tackled is a reasoned assessment of the various procedures that can be adopted to calibrate the parameters of the model. This phase is enriched and clarified by applying different calibration strategies to a real case, i.e. the evaluation of the model parameters for a structural steel. The consequences of each calibration approach are then carefully evaluated and compared.
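The multiplicative structure of the Johnson-Cook strength model discussed here can be sketched as follows; the formula is the standard published one, but the parameter values used in the self-check below are purely illustrative, not the calibrated structural-steel values from the thesis:

```python
import math

def johnson_cook_stress(eps_p, eps_dot, T,
                        A, B, n, C, m,
                        eps_dot0=1.0, T_room=293.0, T_melt=1793.0):
    """Johnson-Cook flow stress: (strain hardening) x (strain-rate
    sensitivity) x (thermal softening).  Stresses in Pa, strain rates
    in 1/s, temperatures in K."""
    T_star = (T - T_room) / (T_melt - T_room)  # homologous temperature
    return ((A + B * eps_p ** n)
            * (1.0 + C * math.log(eps_dot / eps_dot0))
            * (1.0 - T_star ** m))
```

At the reference strain rate and room temperature the rate and thermal factors reduce to one, a convenient sanity check when calibrating `A`, `B` and `n` from quasi-static tests; the coupling of the remaining factors is exactly where the calibration strategies compared in this part diverge.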
The second part of the work introduces a new strength model that generalizes the Johnson-Cook model. The motivations for introducing this model are first exposed and discussed. The features of the new strength model are then described. Afterwards, the various procedures that can be adopted for determining the material parameters are presented. The new strength model is then applied to a real case, i.e. the same structural steel as above, and the results are compared to those obtained from the original Johnson-Cook model. The outcomes show that the new model reproduces the experimental data better. Results are discussed and commented upon.
The third and final part of the work deals with an application of the studied topics to a real industrial case of interest. A device called a perforating gun is analyzed in its structural problems and critical aspects. This challenging application involves the modeling of several types of material, large strains, very high strain rate phenomena, high temperatures, explosions, hypervelocity impacts, damage, fracture and phase changes. In this regard, computational applications of the studied theories are presented and their outcomes are assessed and discussed. Several finite element techniques are considered; in particular, three-dimensional Eulerian simulations are presented. The obtained results appear very promising in terms of fruitful use in the design process of the device, in particular to optimize its key features.
Path planning for mobile robots in the real world: handling multiple objectives, hierarchical structures and partial information
Autonomous robots in real-world environments face a number of challenges even to accomplish apparently simple tasks like moving to a given location. We present four realistic scenarios in which robot navigation takes into account partial information, hierarchical structures, and multiple objectives. We start by discussing navigation in indoor environments shared with people, where routes are characterized by effort, risk, and social impact. Next, we improve navigation by computing optimal trajectories and implementing human-friendly local navigation behaviors. Finally, we move to outdoor environments, where robots rely on uncertain traversability estimations and need to account for the risk of getting stuck or having to change route
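A scalarised multi-objective search of the kind described above can be sketched with a plain Dijkstra over a grid, where each cell carries an (effort, risk, social impact) triple combined by user-chosen weights. The weights and the grid encoding are illustrative assumptions, not the thesis' actual planner:

```python
import heapq

def plan(grid_cost, start, goal, weights=(1.0, 2.0, 0.5)):
    """Dijkstra over a 4-connected grid.  grid_cost[r][c] is an
    (effort, risk, social) triple; the scalar cost of entering a cell
    is the weighted sum of the three objectives."""
    rows, cols = len(grid_cost), len(grid_cost[0])

    def cell_cost(r, c):
        e, k, s = grid_cost[r][c]
        return weights[0] * e + weights[1] * k + weights[2] * s

    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                nd = d + cell_cost(*v)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    # walk the predecessor chain back from the goal
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

Raising the risk weight makes the planner detour around hazardous cells even when that lengthens the route, which is the basic trade-off between effort, risk, and social impact that the indoor scenarios exercise.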
Bayesian Variational Regularisation for Dark Matter Reconstruction with Uncertainty Quantification
Despite the great wealth of cosmological knowledge accumulated since the early 20th century, the nature of dark-matter, which accounts for ~85% of the matter content of the universe, remains elusive. Unfortunately, though dark-matter is scientifically interesting, with implications for our fundamental understanding of the Universe, it cannot be directly observed. Instead, dark-matter may be inferred from e.g. the optical distortion (lensing) of distant galaxies which, at linear order, manifests as a perturbation to the apparent magnitude (convergence) and ellipticity (shearing). Ensemble observations of the shear are collected and leveraged to construct estimates of the convergence, which can be directly related to the universal dark-matter distribution. Imminent stage IV surveys are forecast to accrue an unprecedented quantity of cosmological information, a discriminative partition of which is accessible through the convergence and is disproportionately concentrated at high angular resolutions, where the echoes of cosmological evolution under gravity are most apparent. Capitalising on advances in probability concentration theory, this thesis merges the paradigms of Bayesian inference and optimisation to develop hybrid convergence inference techniques which are scalable, statistically principled, and operate over the Euclidean plane, celestial sphere, and 3-dimensional ball. Such techniques can quantify the plausibility of inferences at one-millionth the computational overhead of competing sampling methods. These Bayesian techniques are applied to the hotly debated Abell-520 merging cluster, concluding that observational catalogues contain insufficient information to determine the existence of dark-matter self-interactions. Further, these techniques were applied to all public lensing catalogues, recovering what was then the largest global dark-matter mass-map.
The primary methodological contributions of this thesis depend only on posterior log-concavity, paving the way towards a potentially revolutionary complete hybridisation with artificial intelligence techniques. These next-generation techniques are the first to operate over the full 3-dimensional ball, laying the foundations for statistically principled universal dark-matter cartography and the cosmological insights such advances may provide.
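For background, the planar shear-to-convergence step that such regularised estimators build on is the classical Kaiser-Squires inversion, which a few lines of FFT code can sketch. This is the textbook direct inversion on a periodic grid, not the Bayesian variational approach developed in the thesis:

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Classical planar Kaiser-Squires inversion: recover the
    convergence kappa from shear maps (gamma1, gamma2) via the
    Fourier-space relation gamma = D * kappa, with |D| = 1."""
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[None, :]
    k2 = np.fft.fftfreq(ny)[:, None]
    k_sq = k1 ** 2 + k2 ** 2
    k_sq[0, 0] = 1.0           # dummy value; the k = 0 mode is unconstrained
    g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    # multiply by D* = (k1^2 - k2^2 - 2i k1 k2) / |k|^2
    kappa_hat = ((k1 ** 2 - k2 ** 2) - 2j * k1 * k2) / k_sq * g_hat
    kappa_hat[0, 0] = 0.0      # mean convergence is lost (mass-sheet degeneracy)
    return np.real(np.fft.ifft2(kappa_hat))
```

Because |D| = 1 away from k = 0, this inversion is exact on the idealised periodic plane up to the unconstrained mean; noise, masks, and spherical or 3-dimensional geometry are what motivate the statistically principled estimators the thesis develops.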
The total reaction cross section of heavy-ion reactions induced by stable and unstable exotic beams: The low-energy regime
In this review paper we present a detailed account of the extraction and the
calculation of the total reaction cross section of strongly bound and weakly
bound, stable and unstable, exotic, nuclei. We discuss the optical model and
the more general coupled channels model of direct reactions. The effect of
long-range absorption due to the coupling to excited states in the target and
to the breakup continuum in the projectile is also discussed. The generalized
optical theorem for charged particle scattering and the resulting sum-of-differences
method is then discussed. The so-called "quarter-point recipe" is
discussed next, and the quarter-point angle is introduced as a simple and rapid
means of obtaining the total reaction cross section. The last topic discussed is
the reduction of the total reaction cross section that would allow a large body
of data to sit on a single universal function. Such a universal function exists
in the case of the fusion data, and the aim of this last topic of the review is
to extend the fusion case to the total reaction, by adding the direct reaction
contribution. Also discussed is the inclusive breakup cross section and how it
can be used to extract the total reaction cross section of the interacting
fragment with the target. This method is also known as the Surrogate method and
represents a case of hybrid reactions. The sum of the integrated inclusive
breakup cross section with the complete fusion cross section supplies the total
fusion cross section. The main experimental methods to determine the total
reaction cross section are also discussed, with emphasis on recent techniques
developed to deal with reactions induced by unstable beams.
Comment: 43 pages, 27 figures, submitted to EPJA (under review).