2,270 research outputs found

    Least-biased correction of extended dynamical systems using observational data

    Full text link
    We consider dynamical systems evolving near an equilibrium statistical state, where the interest is in modelling long-term behavior that is consistent with thermodynamic constraints. We adjust the distribution using an entropy-optimizing formulation that can be computed on-the-fly, making possible partial corrections using incomplete information, for example measured data or data computed from a different model (or the same model at a different scale). We employ a thermostatting technique to sample the target distribution, with the aim of capturing the relevant statistical features while introducing only a mild dynamical perturbation. The method is tested on a point vortex fluid model on the sphere, and we demonstrate both convergence of equilibrium quantities and the ability of the formulation to balance stationary and transient-regime errors. Comment: 27 pages
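
    The entropy-optimizing correction sketched in the abstract above can be illustrated with a toy reweighting step: given samples from a reference model and a few measured moment constraints, the least-biased adjustment is the exponential tilt whose multipliers are fitted numerically. This is only a minimal sketch under assumed constraints; the function name, the use of scipy's BFGS optimizer, and the moment targets are illustrative, not the paper's on-the-fly formulation or its thermostatted sampler.

    import numpy as np
    from scipy.optimize import minimize

    def max_entropy_weights(samples, constraint_fns, targets):
        """Reweight model samples so that selected observables match measured
        target values while staying as close as possible (in relative entropy)
        to the original ensemble.  Illustrative sketch only."""
        G = np.column_stack([g(samples) for g in constraint_fns])   # (N, m)
        targets = np.asarray(targets, dtype=float)

        def dual(lam):
            # Log-partition function of the exponentially tilted ensemble.
            logw = G @ lam
            logZ = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
            return logZ - lam @ targets

        res = minimize(dual, x0=np.zeros(G.shape[1]), method="BFGS")
        logw = G @ res.x
        w = np.exp(logw - logw.max())
        return w / w.sum()   # normalized weights for the corrected ensemble

    # Example: correct the mean and second moment of a surrogate ensemble.
    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    w = max_entropy_weights(x, [lambda s: s, lambda s: s**2], [0.1, 1.2])
    print(np.sum(w * x), np.sum(w * x**2))   # close to 0.1 and 1.2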

    A first-order phase transition at the random close packing of hard spheres

    Full text link
    Randomly packing spheres of equal size into a container consistently results in a static configuration with a density of ~64%. The ubiquity of random close packing (RCP), rather than the optimal crystalline array at 74%, raises the question of the physical law behind this empirically deduced state. Indeed, there is no signature of a discontinuity in any macroscopic quantity associated with the observed packing limit. Here we show that RCP can be interpreted as a manifestation of a thermodynamic singularity, which defines it as the "freezing point" in a first-order phase transition between ordered and disordered packing phases. Despite the athermal nature of granular matter, we show the thermodynamic character of the transition in that it is accompanied by sharp discontinuities in volume and entropy. This occurs at a critical compactivity, which is the intensive variable that plays the role of temperature in granular matter. Our results predict the experimental conditions necessary for the formation of a jammed crystal by calculating an analogue of the "entropy of fusion". This approach is useful because it maps out-of-equilibrium problems in complex systems onto simpler, established frameworks in statistical mechanics. Comment: 33 pages, 10 figures
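
    A minimal sketch of the thermodynamic analogy the abstract invokes, written in standard Edwards-ensemble notation rather than the paper's own formulas: with the volume V playing the role of energy and the compactivity X = dV/dS playing the role of temperature, equating the granular free energy Y = V - XS of the disordered and ordered packing phases at the freezing point X_f gives the analogue of the entropy of fusion,

    \[
      Y_{\mathrm{dis}} = Y_{\mathrm{ord}}
      \quad\Longrightarrow\quad
      \Delta S_{\mathrm{fus}} = S_{\mathrm{dis}} - S_{\mathrm{ord}}
      = \frac{V_{\mathrm{dis}} - V_{\mathrm{ord}}}{X_f}
      = \frac{\Delta V_{\mathrm{fus}}}{X_f},
    \]

    so simultaneous jumps in volume and entropy at a single compactivity are exactly what a first-order transition requires.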

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    Full text link
    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper aims to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel Conjugate Gradients. The structures of the best-performing up-to-date algorithms are presented, based on an operator scheme that permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing, and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can be investigated jointly by a Gibbs-sampling process. Such a method can be applied to correct for the redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field. Comment: 40 pages, 11 figures
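
    The Wiener filter at the core of the schemes above amounts to solving a large linear system, (S^{-1} + R^T N^{-1} R) s = R^T N^{-1} d, which is exactly where FFT-based operators and Krylov solvers enter. The sketch below is a 1-D toy with R = I, an assumed power spectrum, white noise, and scipy's conjugate-gradient routine; it illustrates the operator idea only and is not the ARGO implementation.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    n = 256                                        # 1-D toy problem
    k = np.fft.fftfreq(n) * n
    S = 1.0 / (1.0 + (np.abs(k) / 10.0) ** 2)      # assumed signal power spectrum
    N = 0.5 * np.ones(n)                           # diagonal (white) noise variance

    def apply_A(s):
        # A = S^{-1} + N^{-1}: the prior term is applied in Fourier space via FFTs.
        prior_term = np.real(np.fft.ifft(np.fft.fft(s) / S))
        return prior_term + s / N

    rng = np.random.default_rng(1)
    signal = np.real(np.fft.ifft(np.fft.fft(rng.normal(size=n)) * np.sqrt(S)))
    data = signal + rng.normal(scale=np.sqrt(N))

    A = LinearOperator((n, n), matvec=apply_A)
    s_wf, info = cg(A, data / N)                   # Wiener-filter reconstruction
    print("CG info:", info, " residual std:", np.std(s_wf - signal))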

    2D Reconstruction of Small Intestine's Interior Wall

    Full text link
    Examining and interpreting a large number of wireless endoscopic images of the gastrointestinal tract is a tiresome task for physicians. A practical solution is to automatically construct a two-dimensional representation of the gastrointestinal tract for easy inspection. However, little has been done on wireless endoscopic image stitching, let alone a systematic investigation. The proposed new wireless endoscopic image stitching method consists of two main steps to improve the accuracy and efficiency of image registration. First, keypoints are extracted by the Principal Component Analysis and Scale Invariant Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood Estimation SAmple Consensus (MLESAC) outlier removal to find the most reliable keypoints. Second, the optimal transformation parameters obtained from the first step are fed to the Normalised Mutual Information (NMI) algorithm as an initial solution. With a modified Marquardt-Levenberg search strategy in a multiscale framework, NMI can find the optimal transformation parameters in the shortest time. The proposed methodology has been tested on two different datasets - one with real wireless endoscopic images and another with images obtained from Micro-Ball (a new wireless cubic endoscopy system with six image sensors). The results demonstrate the accuracy and robustness of the proposed methodology both visually and quantitatively. Comment: Journal draft
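
    A minimal sketch of the two-stage registration idea (feature matching with outlier rejection to seed an intensity-based refinement), using stock OpenCV components: plain SIFT and RANSAC stand in for PCA-SIFT and MLESAC, the file names are placeholders, and the NMI refinement stage is only indicated because it is not part of OpenCV's standard API.

    import cv2
    import numpy as np

    def initial_homography(img1, img2):
        """Stage 1: keypoint-based initial alignment (SIFT + RANSAC stand in
        for the PCA-SIFT + MLESAC combination described in the abstract)."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H   # would seed the NMI-based refinement (stage 2), not shown here

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    H0 = initial_homography(img1, img2)
    stitched = cv2.warpPerspective(img1, H0, (img2.shape[1], img2.shape[0]))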

    Hybrid solutions to instantaneous MIMO blind separation and decoding: narrowband, QAM and square cases

    Get PDF
    Future wireless communication systems are expected to support high data rates and high-quality transmission, given the growth of multimedia applications. The drive to increase channel throughput has led in recent years to multiple-input multiple-output (MIMO) and blind equalization techniques, and blind MIMO equalization has therefore attracted great interest. Both system performance and computational complexity play important roles in real-time communications: reducing the computational load while providing accurate performance is the main challenge in present systems. In this thesis, a hybrid method that offers affordable complexity with good performance for blind equalization in large-constellation MIMO systems is proposed first. Computational cost is saved in both the signal separation part and the signal detection part. First, based on quadrature amplitude modulation (QAM) signal characteristics, an efficient and simple nonlinear function for Independent Component Analysis (ICA) is introduced. Second, using the idea of sphere decoding, we restrict the soft channel information to a sphere, which overcomes the so-called curse of dimensionality of the Expectation Maximization (EM) algorithm and enhances the final results simultaneously. Mathematically, we demonstrate that in digital communication cases the EM algorithm shows Newton-like convergence. Despite the widespread use of forward-error coding (FEC), most MIMO blind channel estimation techniques ignore its presence and instead make the simplifying assumption that the transmitted symbols are uncoded. However, FEC induces code structure in the transmitted sequence that can be exploited to improve blind MIMO channel estimates. In the final part of this work, we exploit iterative channel estimation and decoding for blind MIMO equalization. Experiments show the improvements achievable by exploiting the coding structure, and that the method can approach the performance of a BCJR equalizer with perfect channel information in a reasonable SNR range. All results are confirmed experimentally for the example of blind equalization in block-fading MIMO systems.
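
    As a rough illustration of the blind-separation stage only (not the thesis' QAM-tailored nonlinearity, sphere-constrained EM, or coded iterations): an instantaneous complex MIMO mixture of QAM sources can be separated by mapping it to an equivalent real-valued model and applying a standard ICA routine, here sklearn's FastICA as a stand-in. All names and parameters below are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)

    # 2x2 instantaneous MIMO with 16-QAM sources (toy example).
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    n = 5000
    s = rng.choice(levels, size=(2, n)) + 1j * rng.choice(levels, size=(2, n))
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))     # unknown channel
    x = H @ s + 0.05 * (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n)))

    # Equivalent real-valued model: [Re(x); Im(x)] mixes the independent I/Q rails.
    x_real = np.vstack([x.real, x.imag])                            # (4, n)

    ica = FastICA(n_components=4, random_state=0)
    s_est = ica.fit_transform(x_real.T).T                           # separated streams

    # Up to the usual permutation/scaling/sign ambiguities of blind separation,
    # each recovered stream corresponds to one I or Q rail of a QAM source.
    print(s_est.shape)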

    Efficient material characterization by means of the Doppler effect in microwaves

    Get PDF
    The subject of this thesis is efficient material characterization and defect detection by means of the Doppler effect with microwaves. The first main goal of the work is to develop a prototype of a microwave Doppler system for Non-Destructive Testing (NDT) purposes. The Doppler system therefore has to satisfy the following requirements: it should be inexpensive, easily integrated into industrial processes, and capable of fast measurements. The Doppler system needs to include software for hardware control, measurements, and fast signal processing. The second main goal of the thesis is to establish and experimentally confirm possible practical applications of the Doppler system. The Doppler system consists of two parts. The hardware part is designed to ensure fast measurements and easy adaptation to different radar types. The software part of the system contains tools for hardware control, data acquisition, signal processing, and presenting the data to the user. In this work, a new type of 2D Doppler amplitude imaging is first developed and formalized. This technique is used to derive information about the measured object from several viewing angles. Special attention is paid to the frequency analysis of the measured signals as a means to improve the spatial resolution of the radar. In the context of frequency analysis, we present 2D Doppler frequency imaging and compare it with amplitude imaging. The spatial resolution of CW radars is examined and improved: we show that joint frequency and amplitude signal processing allows the spatial resolution of the radar to be increased significantly.
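
    A rough sketch of the kind of processing behind the Doppler frequency analysis described above, assumed here to be FFT-based spectral analysis of a demodulated CW-radar I/Q signal; the sampling rate, the synthetic 40 Hz target, and the function name are illustrative, not the thesis' system.

    import numpy as np

    def doppler_spectrum(iq, fs):
        """Return Doppler frequencies (Hz) and the amplitude spectrum of a
        demodulated CW-radar I/Q signal.  Illustrative sketch only."""
        window = np.hanning(len(iq))
        spec = np.fft.fftshift(np.fft.fft(iq * window))
        freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), d=1.0 / fs))
        return freqs, np.abs(spec)

    # Synthetic example: a moving reflector producing a 40 Hz Doppler shift.
    fs = 1000.0                              # sampling rate of the demodulated signal
    t = np.arange(0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(5)
    iq = (np.exp(2j * np.pi * 40.0 * t)
          + 0.1 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size)))
    freqs, amp = doppler_spectrum(iq, fs)
    print("Estimated Doppler shift: %.1f Hz" % freqs[np.argmax(amp)])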

    Object Tracking from Audio and Video data using Linear Prediction method

    Get PDF
    Microphone arrays and video surveillance cameras are widely used for the detection and tracking of a moving speaker. In this project, object tracking is performed using multimodal fusion, i.e., audio-visual perception. Source localisation can be done with GCC-PHAT or GCC-ML for time delay estimation. These methods are based on the spectral content of the speech signals, which can be affected by noise and reverberation. Video tracking can be done using a Kalman filter or a particle filter. Therefore, the linear prediction method is used here for both audio and video tracking. Linear prediction in source localisation uses features related to the excitation source information of speech, which are less affected by noise. Using this excitation source information, time delays are estimated and the results are compared with the GCC-PHAT method. The dataset obtained from [20] is used for video tracking of a single moving object captured by a stationary camera. For object detection, projection histograms are computed, followed by linear prediction for tracking, and the corresponding results are compared with the Kalman filter method.
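
    A minimal GCC-PHAT sketch for the time-delay estimation step mentioned above; the sampling rate and the synthetic microphone signals are placeholders, not the project's data or code, and the excitation-source/linear-prediction variant is not reproduced here.

    import numpy as np

    def gcc_phat(sig, ref, fs, max_tau=None):
        """Estimate the delay (seconds) of `sig` relative to `ref` using the
        Generalized Cross-Correlation with Phase Transform (GCC-PHAT)."""
        n = sig.size + ref.size
        SIG = np.fft.rfft(sig, n=n)
        REF = np.fft.rfft(ref, n=n)
        R = SIG * np.conj(REF)
        cc = np.fft.irfft(R / (np.abs(R) + 1e-15), n=n)
        max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / float(fs)

    # Synthetic check: one channel delayed by 25 samples relative to the other.
    fs = 16000
    rng = np.random.default_rng(3)
    ref = rng.normal(size=4096)
    sig = np.roll(ref, 25) + 0.1 * rng.normal(size=4096)
    print(gcc_phat(sig, ref, fs))   # about 25 / 16000 s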

    Advanced maximum entropy approaches for medical and microscopy imaging

    Get PDF
    The maximum entropy framework is a cornerstone of statistical inference, which is employed at a growing rate for constructing models capable of describing and predicting biological systems, particularly complex ones, from empirical datasets. In these high-yield applications, determining exact probability distribution functions with only minimal information about data characteristics and without utilizing human subjectivity is of particular interest. In this thesis, an automated procedure of this kind for univariate and bivariate data is employed to reach this objective by combining the maximum entropy method with an appropriate optimization method. The only necessary characteristics of the random variables are continuity and the ability to be approximated as independent and identically distributed. In this work, we concisely present two numerical probabilistic algorithms and apply them to estimate the univariate and bivariate models of the available data. In the first case, a combination of the maximum entropy method, Newton's method, and the Bayesian maximum a posteriori approach leads to the estimation of the kinetic parameters together with arterial input functions (AIFs) in cases without any measurement of the AIF. The results show that the AIF can reliably be determined from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data by the maximum entropy method; the kinetic parameters can then be obtained. Using the developed method, a good data fit and thus a more accurate prediction of the kinetic parameters are achieved, which in turn leads to a more reliable application of DCE-MRI. In the bivariate case, we consider colocalization as a quantitative analysis in fluorescence microscopy imaging. The method proposed in this case is obtained by combining the Maximum Entropy Method (MEM) and a Gaussian copula, which we call the Maximum Entropy Copula (MEC). This novel method is capable of measuring the spatial and nonlinear correlation of signals to obtain the colocalization of markers in fluorescence microscopy images. Based on the results, MEC is able to identify co- and anti-colocalization even in high-background situations. The main point here is that determining the joint distribution from its marginals is an important inverse problem, which has a unique solution when a proper copula is chosen, according to Sklar's theorem. The developed combination of a Gaussian copula and univariate maximum entropy marginal distributions enables the determination of a unique bivariate distribution. A colocalization parameter can therefore be obtained via Kendall's tau, which is commonly employed in the copula literature. In general, the importance of applying these algorithms to biological data lies in their higher accuracy, faster computation, and lower cost in comparison with other solutions. The extensive application and success of these algorithms in various contexts depend on their conceptual simplicity and mathematical validity. Afterward, a probability density is estimated by iteratively refining trial cumulative distribution functions, where the better estimates are identified using a scoring function that recognizes irregular fluctuations. This criterion resists under- and over-fitting the data and serves as an alternative to the Bayesian criterion. Uncertainty induced by statistical fluctuations in random samples is reflected by multiple estimates for the probability density.
In addition, scaled quantile residual plots are introduced as a useful diagnostic for visualizing the quality of the estimated probability densities. The Kullback-Leibler divergence is an appropriate measure to indicate the convergence of the estimates of the probability density function (PDF) to the actual PDF as the sample size grows. The findings indicate the general applicability of this method to high-yield statistical inference.
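
    The copula step in the abstract above can be illustrated with a small sketch: for a Gaussian copula the correlation parameter is tied to Kendall's tau by rho = sin(pi*tau/2), so a rank correlation measured between two channels fixes the copula. The toy intensities, the empirical rank transform, and the function names below are illustrative assumptions, not the thesis' MEC pipeline with maximum-entropy marginals.

    import numpy as np
    from scipy.stats import kendalltau, norm

    def gaussian_copula_from_tau(x, y):
        """Fit a bivariate Gaussian copula via Kendall's tau and return
        (tau, rho), where rho = sin(pi * tau / 2) is the copula correlation."""
        tau, _ = kendalltau(x, y)
        return tau, np.sin(0.5 * np.pi * tau)

    def to_uniform_ranks(x):
        """Empirical probability-integral transform (pseudo-observations)."""
        ranks = np.argsort(np.argsort(x)) + 1
        return ranks / (len(x) + 1.0)

    # Toy two-channel intensities with a positive dependence.
    rng = np.random.default_rng(4)
    x = rng.gamma(2.0, size=2000)
    y = 0.6 * x + rng.gamma(2.0, size=2000)
    tau, rho = gaussian_copula_from_tau(x, y)
    z1, z2 = norm.ppf(to_uniform_ranks(x)), norm.ppf(to_uniform_ranks(y))
    print(f"Kendall tau = {tau:.3f}, copula correlation rho = {rho:.3f}")
    print("normal-scores correlation:", np.corrcoef(z1, z2)[0, 1])   # roughly rho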