2,846 research outputs found

    Mutual Information in the Frequency Domain for Application in Biological Systems

    Biological systems comprise multiple components that typically interact nonlinearly and produce multiple outputs (time series/signals) with specific frequency characteristics. Although the underlying mechanisms often remain unknown, the outputs observed from these systems can reveal dependency relations through quantitative methods and increase our understanding of the original systems. Nonlinear relations at specific frequencies require advanced dependency measures that capture generalized interactions beyond typical correlation in the time domain or coherence in the frequency domain. Mutual information, from information theory, is such a quantity: it measures statistical dependency between random variables. Herein, we develop a model-free methodology for detecting nonlinear relations between time series with respect to frequency, which quantifies dependency under a general probabilistic framework. Classic nonlinear dynamical systems and their coupled forms (Lorenz, bidirectionally coupled Lorenz, and unidirectionally coupled Mackey–Glass systems) are employed to generate artificial data and to test the proposed methodology. Comparisons between the performance of this measure and a conventional linear measure are presented for the artificial data. These results indicate that the proposed methodology better captures the dependency between the variables of the systems. The measure is also applied to a real-world electrophysiological dataset for emotion analysis to study brain stimulus–response functional connectivity. The results reveal distinct brain regions and specific frequencies that are involved in emotional processing.
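
    To make the frequency-resolved dependency idea concrete, a minimal sketch (not the authors' estimator) is to band-pass filter both signals around the band of interest and then estimate mutual information between the filtered samples with a simple histogram (plug-in) estimator; all function names and parameter values below are illustrative assumptions.

        # Illustrative sketch (not the authors' exact estimator): estimate mutual
        # information between two signals after isolating a frequency band.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass(x, lo, hi, fs, order=4):
            """Zero-phase band-pass filter around the band of interest."""
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def mutual_information(x, y, bins=32):
            """Histogram (plug-in) estimate of mutual information, in nats."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

        # Toy example: two coupled signals sampled at fs Hz, dependency probed at 8-12 Hz.
        fs = 256.0
        t = np.arange(0, 60, 1 / fs)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        y = np.tanh(x) + 0.5 * np.random.randn(t.size)   # nonlinear coupling
        mi_band = mutual_information(bandpass(x, 8, 12, fs), bandpass(y, 8, 12, fs))
        print(f"MI in the 8-12 Hz band: {mi_band:.3f} nats")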

    Design of Indoor Positioning Systems Based on Location Fingerprinting Technique

    Positioning systems enable location awareness for mobile computers in ubiquitous and pervasive wireless computing. By utilizing location information, location-aware computers can render location-based services to mobile users. Indoor positioning systems based on location fingerprints of wireless local area networks have been suggested as a viable solution where the global positioning system does not work well. Instead of depending on accurate estimates of angle or distance to derive the location geometrically, the fingerprinting technique associates location-dependent characteristics, such as received signal strength, with a location and uses these characteristics to infer the location. The advantage of this technique is that it is simple to deploy, with no specialized hardware required at the mobile station beyond the wireless network interface card. Any existing wireless local area network infrastructure can be reused for this kind of positioning system. While empirical results and performance studies of such positioning systems are presented in the literature, analytical models that can be used as a framework for efficiently designing the positioning systems are not available. This dissertation develops an analytical model as a design tool and recommends a design guideline for such positioning systems in order to expedite the deployment process. A system designer can use this framework to strike a balance between accuracy, precision, location granularity, the number of access points, and the location spacing. A systematic study is used to analyze the location fingerprint and discover its unique properties. The location fingerprint based on received signal strength is investigated, and both deterministic and probabilistic representations of location fingerprints are considered. The main objectives of this work are to predict the performance of such systems using a suitable model and to perform sensitivity analyses that are useful for selecting proper system parameters such as the number of access points and the minimum spacing between any two distinct locations.
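
    As a rough illustration of the deterministic fingerprinting approach described above, the sketch below matches an observed received-signal-strength vector against a small offline radio map by Euclidean distance; the number of access points, the RSS values, and the coordinates are invented for the example.

        # Hypothetical illustration of deterministic fingerprint matching:
        # an offline radio map stores the mean RSS (dBm) from each access point
        # at each surveyed location; online, the nearest fingerprint wins.
        import numpy as np

        # radio_map[location] = mean RSS from each of 3 access points (assumed values)
        radio_map = {
            (0.0, 0.0): np.array([-40.0, -65.0, -70.0]),
            (0.0, 5.0): np.array([-55.0, -50.0, -72.0]),
            (5.0, 0.0): np.array([-60.0, -68.0, -48.0]),
            (5.0, 5.0): np.array([-66.0, -57.0, -52.0]),
        }

        def locate(observed_rss):
            """Return the surveyed location whose fingerprint is closest
            (in Euclidean distance) to the observed RSS vector."""
            return min(radio_map, key=lambda loc: np.linalg.norm(radio_map[loc] - observed_rss))

        print(locate(np.array([-42.0, -63.0, -71.0])))   # -> (0.0, 0.0)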

    Permutation entropy and its main biomedical and econophysics applications: a review

    Entropy is a powerful tool for the analysis of time series, as it describes the probability distribution of the possible states of a system and therefore the information encoded in it. Nevertheless, important information may also be encoded in the temporal dynamics, an aspect which is not usually taken into account. The idea of calculating entropy based on permutation patterns (that is, permutations defined by the order relations among values of a time series) has received considerable attention in recent years, especially for the understanding of complex and chaotic systems. Permutation entropy directly accounts for the temporal information contained in the time series; furthermore, it has the qualities of simplicity, robustness, and very low computational cost. To celebrate the tenth anniversary of the original work, here we analyze the theoretical foundations of permutation entropy, as well as its main recent applications to the analysis of economic markets and to the understanding of biomedical systems.
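
    For reference, a minimal sketch of the permutation entropy computation is given below: count the Bandt-Pompe ordinal patterns of length D (with delay tau) in the series, take the Shannon entropy of their relative frequencies, and normalise by log(D!). The parameter choices are illustrative only.

        # Minimal sketch of permutation entropy (Bandt-Pompe ordinal patterns).
        import math
        from collections import Counter
        import numpy as np

        def permutation_entropy(x, D=3, tau=1):
            patterns = Counter(
                tuple(np.argsort(x[i:i + D * tau:tau]))
                for i in range(len(x) - (D - 1) * tau)
            )
            total = sum(patterns.values())
            H = -sum((c / total) * math.log(c / total) for c in patterns.values())
            return H / math.log(math.factorial(D))   # normalised to [0, 1]

        rng = np.random.default_rng(0)
        print(permutation_entropy(rng.normal(size=5000)))          # ~1 for white noise
        print(permutation_entropy(np.sin(np.arange(5000) * 0.1)))  # much lower for a regular signal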

    A Methodology for Project Risk Analysis using Bayesian Belief Networks within a Monte Carlo Simulation Environment

    Projects are commonly over budget and behind schedule, partly because uncertainties are not accounted for in cost and schedule estimates. Research and practice are now addressing this problem, often by using Monte Carlo methods to simulate the effect of variances in work package costs and durations on total cost and date of completion. However, many such project risk approaches ignore the large impact of probabilistic correlation on work package cost and duration predictions. This dissertation presents a risk analysis methodology that integrates schedule and cost uncertainties while considering the effect of correlations. Current approaches typically handle correlation by using a correlation matrix on the input parameters. This is conceptually correct, but the number of correlation coefficients to be estimated grows combinatorially with the number of variables. Moreover, if historical data are unavailable, the analyst is forced to elicit values for both the variances and the correlations from expert opinion; most experts are not trained in probability and have difficulty quantifying correlations. An alternative is the integration of Bayesian belief networks (BBNs) within an integrated cost-schedule Monte Carlo simulation (MCS) model. BBNs can be used to implicitly generate dependency among risk factors and to examine non-additive impacts. The MCS is used to model independent events, which are propagated through the BBNs to assess dependent posterior probabilities of cost and time to completion. BBNs can also incorporate qualitative considerations and project characteristics when soft evidence is acquired. The approach builds on emerging methods of systems reliability.
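
    The toy sketch below illustrates the general idea of conditioning Monte Carlo cost samples on a belief-network node rather than eliciting a correlation matrix: a single root risk factor, sampled each iteration, shifts the cost distributions of two work packages and thereby makes them dependent. All probabilities and cost figures are invented for the example and are not taken from the dissertation.

        # Hedged sketch (illustrative numbers): a common risk factor, sampled per
        # Monte Carlo iteration, conditions two work packages, so their costs
        # become dependent without an explicit correlation matrix.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 10_000
        total_cost = np.empty(N)

        for i in range(N):
            # Root node of a tiny belief network: is the supplier market "tight"?
            tight_market = rng.random() < 0.3
            # Conditional cost distributions for two work packages (assumed values).
            mean_a = 120.0 if tight_market else 100.0
            mean_b = 250.0 if tight_market else 220.0
            cost_a = rng.normal(mean_a, 10.0)
            cost_b = rng.normal(mean_b, 20.0)
            total_cost[i] = cost_a + cost_b

        print(f"mean total cost: {total_cost.mean():.1f}")
        print(f"P(total cost > 360): {(total_cost > 360).mean():.3f}")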

    Parameter dependencies for reusable performance specifications of software components

    To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages and corresponding model transformations that allow a reusable description of how the performance of software components depends on their usage profile. Predictions based on these new methods can support performance-related design decisions.

    A mathematical model for breath gas analysis of volatile organic compounds with special emphasis on acetone

    Recommended standardized procedures for determining exhaled lower respiratory nitric oxide and nasal nitric oxide have been developed by task forces of the European Respiratory Society and the American Thoracic Society. These recommendations have paved the way for the measurement of nitric oxide to become a diagnostic tool for specific clinical applications. It would be desirable to develop similar guidelines for the sampling of other trace gases in exhaled breath, especially volatile organic compounds (VOCs), which reflect ongoing metabolism. The concentrations of water-soluble, blood-borne substances in exhaled breath are influenced by: (i) breathing patterns affecting gas exchange in the conducting airways; (ii) the concentrations in the tracheo-bronchial lining fluid; (iii) the alveolar and systemic concentrations of the compound. The classical Farhi equation takes only the alveolar concentrations into account. Real-time measurements of acetone in end-tidal breath under an ergometer challenge show characteristics which cannot be explained within the Farhi setting. Here we develop a compartment model that reliably captures these profiles and is capable of relating breath concentrations to the systemic concentrations of acetone. By comparison with experimental data it is inferred that the major part of the variability in breath acetone concentrations (e.g., in response to moderate exercise or altered breathing patterns) can be attributed to airway gas exchange, with minimal changes of the underlying blood and tissue concentrations. Moreover, it is deduced that end-tidal breath concentrations of acetone measured during resting conditions and free breathing will be rather poor indicators of endogenous levels. In particular, the current formulation includes the classical Farhi equation and the Scheid series inhomogeneity model as special limiting cases.
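
    For orientation, the classical Farhi relation mentioned above can be written as C_A = C_v / (lambda_b:air + V_A/Q_c); the snippet below evaluates it with rough, assumed parameter values for an acetone-like compound. It is not the paper's compartment model, which generalizes this limiting case.

        # Illustration of the classical Farhi relation referenced in the abstract;
        # parameter values are assumed for the example only.
        def farhi_alveolar_concentration(c_venous, lambda_blood_air, va, qc):
            """Alveolar (end-tidal) concentration of an inert gas:
            C_A = C_v / (lambda_b:air + V_A/Q_c)."""
            return c_venous / (lambda_blood_air + va / qc)

        # Acetone-like example: high blood:air partition coefficient (~340),
        # ventilation-perfusion ratio ~1 at rest (V_A = 6 L/min, Q_c = 6 L/min).
        c_breath = farhi_alveolar_concentration(c_venous=1.0, lambda_blood_air=340.0, va=6.0, qc=6.0)
        print(f"breath/venous concentration ratio: {c_breath:.5f}")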

    Evaluating stream CO2 outgassing via drifting and anchored flux chambers in a controlled flume experiment

    Carbon dioxide (CO2) emissions from running waters represent a key component of the global carbon cycle. However, quantifying CO2 fluxes across air-water boundaries remains challenging due to practical difficulties in the estimation of reach-scale standardized gas exchange velocities (k600) and water equilibrium concentrations. Whereas craft-made floating chambers equipped with internal CO2 sensors represent a promising technique to estimate CO2 fluxes from rivers, the existing literature lacks rigorous comparisons among differently designed chambers and deployment techniques. Moreover, the uncertainty of k600 estimates from chamber data has not yet been evaluated. Here, these issues were addressed by analysing the results of a flume experiment carried out in the summer of 2019 at the Lunzer Rinnen Experimental Facility (Austria). During the experiment, 100 runs were performed using two different chamber designs (namely, a standard chamber and a flexible foil chamber with an external floating system and a flexible sealing) and two different deployment modes (drifting and anchored). The runs were performed using various combinations of discharge and channel slope, leading to variable turbulent kinetic energy dissipation rates (1.5 × 10⁻³ < ε < 1 × 10⁻¹ m² s⁻³). Estimates of gas exchange velocities were in line with the existing literature (4 < k600 < 32 m d⁻¹), with a general increase in k600 for larger turbulent kinetic energy dissipation rates. The flexible foil chamber gave consistent k600 patterns in response to changes in the slope and/or the flow rate. Moreover, acoustic Doppler velocimeter measurements indicated a limited increase in the turbulence induced by the flexible foil chamber on the flow field (22 % increase in ε, leading to a theoretical 5 % increase in k600). The uncertainty in the estimate of gas exchange velocities was then quantified using a generalized likelihood uncertainty estimation (GLUE) procedure. Overall, uncertainty in k600 was moderate to high, with enhanced uncertainty in high-energy set-ups. For the anchored mode, the standard deviations of k600 were between 1.6 and 8.2 m d⁻¹, whereas significantly higher values were obtained in drifting mode. Interestingly, for the standard chamber the uncertainty was larger (+20 %) than for the flexible foil chamber. Our study suggests that a flexible foil design and anchored deployment may be useful for enhancing the robustness and accuracy of CO2 measurements in low-order streams. Furthermore, the study demonstrates the value of analytical and numerical tools in obtaining accurate estimates of gas exchange velocities. These findings have important implications for improving estimates of greenhouse gas emissions and reaeration rates in running waters.
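
    A hedged sketch of the generic floating-chamber calculation is given below: the CO2 flux is derived from the headspace accumulation rate, the gas exchange velocity from the air-water pCO2 gradient, and k600 from Schmidt-number scaling. The solubility, chamber geometry, and accumulation rate are assumed values, and this is the standard chamber workflow rather than the exact processing chain used in the study.

        # Hedged sketch (illustrative parameter values) of deriving k600 from
        # floating-chamber data: flux from headspace pCO2 accumulation, gas
        # exchange velocity from the pCO2 gradient, Schmidt-number scaling to 600.
        R = 8.314   # J mol-1 K-1

        def chamber_flux(dpco2_dt, volume, area, temp_c):
            """CO2 flux (mol m-2 s-1) from the headspace accumulation rate (Pa s-1)."""
            return dpco2_dt * volume / (R * (temp_c + 273.15) * area)

        def k600_from_flux(flux, pco2_water, pco2_air, temp_c, kh=3.3e-4):
            """Gas exchange velocity normalised to Sc = 600 (m d-1).
            kh: CO2 solubility in mol m-3 Pa-1 (assumed value)."""
            k = flux / (kh * (pco2_water - pco2_air))                           # m s-1
            sc = 1911.1 - 118.11 * temp_c + 3.4527 * temp_c**2 - 0.04132 * temp_c**3
            return k * (sc / 600.0) ** 0.5 * 86400.0                            # m d-1

        # Example: pCO2 rising by ~4 Pa per minute in a 10 L chamber covering
        # 0.08 m2 of water at 18 degC, with water at 150 Pa and air at 40 Pa pCO2.
        f = chamber_flux(dpco2_dt=4.0 / 60, volume=0.010, area=0.08, temp_c=18.0)
        print(f"k600 ~ {k600_from_flux(f, pco2_water=150.0, pco2_air=40.0, temp_c=18.0):.1f} m d-1")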

    Using mutual information to measure time lags from nonlinear processes in astronomy

    Measuring time lags between time series or light curves at different wavelengths from a variable or transient source in astronomy is an essential probe of the physical mechanisms causing multiwavelength variability. Time lags are typically quantified using discrete correlation functions (DCFs), which are appropriate for linear relationships. However, in variable sources such as x-ray binaries, active galactic nuclei (AGNs), and other accreting systems, the radiative processes and the resulting multiwavelength light curves often have nonlinear relationships. For such systems it is more appropriate to use nonlinear, information-theoretic measures of causation such as mutual information, routinely used in other disciplines. We demonstrate with toy models the limitations of the standard DCF and show improvements when using a discrete mutual information function (DMIF). For nonlinear correlations, the latter accurately and sharply identifies the lag components, whereas the DCF can be erroneous. Following that, we apply the DMIF to the multiwavelength light curves of the AGN NGC 4593. We find that x-ray fluxes lead UVW2 fluxes by ∼0.2 days, closer to model predictions from reprocessing by the accretion disk than the DCF estimate. The uncertainties with the current light curves are, however, too large to rule out negative lags. Additionally, we find another delay component at approximately −1 day, i.e., UVW2 leading x rays, consistent with inward-propagating fluctuations in the accretion disk scenario; this component is not detected by the DCF. Given the nonlinear relation between the x-ray and UVW2 emission, it is worthy of further theoretical investigation. From both the toy models and the real observations, it is clear that the mutual-information-based estimator is highly sensitive to complex nonlinear relations. With sufficiently high temporal resolution and signal-to-noise ratio, we will be able to precisely detect each of the lag features corresponding to these relations.
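
    The core of a lagged mutual-information scan can be sketched as below for evenly sampled series: mutual information is evaluated between one light curve and the other shifted over a range of lags, and the lag maximizing it is the candidate delay. This shows only the basic idea, with a histogram MI estimate and a toy nonlinear (quadratic) coupling; the published DMIF additionally handles the irregular sampling and uncertainties of real light curves.

        # Illustrative lagged mutual-information scan for evenly sampled series.
        import numpy as np

        def mutual_information(x, y, bins=24):
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

        def mi_lag_scan(x, y, max_lag):
            """MI between x(t) and y(t + lag) for lag = -max_lag .. +max_lag."""
            lags = np.arange(-max_lag, max_lag + 1)
            mi = []
            for lag in lags:
                if lag >= 0:
                    mi.append(mutual_information(x[:len(x) - lag], y[lag:]))
                else:
                    mi.append(mutual_information(x[-lag:], y[:lag]))
            return lags, np.array(mi)

        # Toy example: y responds nonlinearly (quadratically) to x with a 5-sample delay,
        # a case where a linear correlation function struggles.
        rng = np.random.default_rng(2)
        x = rng.normal(size=4000)
        y = np.roll(x, 5) ** 2 + 0.1 * rng.normal(size=4000)
        lags, mi = mi_lag_scan(x, y, max_lag=20)
        print("recovered lag:", lags[np.argmax(mi)])   # expected ~ +5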

    Architectural level delay and leakage power modelling of manufacturing process variation

    The effect of manufacturing process variations has become a major issue in the estimation of circuit delay and power dissipation, and will gain more importance in the future as device scaling continues in order to satisfy marketplace demands for circuits with greater performance and functionality per unit area. Statistical modelling and analysis approaches have been widely used to reflect the effects of a variety of variational process parameters on system performance factors, which are described as probability density functions (PDFs). At present, most investigations into statistical models have been limited to small circuits such as a logic gate. However, the massive size of present-day electronic systems precludes the use of design techniques which consider a system to comprise these basic gates, as this level of design is very inefficient and error prone. This thesis proposes a methodology to bring the effects of process variation from the transistor level up to the architectural level in terms of circuit delay and leakage power dissipation. Using a first-order canonical model and a statistical analysis approach, a statistical cell library has been built which comprises not only basic gate cell models but also more complex functional blocks such as registers, FIFOs, counters, and ALUs. Furthermore, other factors to which overall system performance is sensitive, such as input signal slope, output load capacitance, different signal switching cases and transition types, are also taken into account for each cell in the library, which makes it adaptive to incremental circuit design. The proposed methodology enables an efficient analysis of process variation effects on system performance with significantly reduced computation time compared to the Monte Carlo simulation approach. As a demonstration vehicle for this technique, the delay and leakage power distributions of a 2-stage asynchronous micropipeline circuit have been simulated using this cell library. The experimental results show that the proposed method can predict the delay and leakage power distributions with less than 5 % error and at least 50,000 times faster computation compared to a 5000-sample SPICE-based Monte Carlo simulation. The methodology presented here for modelling process variability plays a significant role in Design for Manufacturability (DFM) by quantifying the direct impact of process variations on system performance. The advantages of being able to undertake this analysis at a high level of abstraction, and thus early in the design cycle, are twofold. First, if the predicted effects of process variation render the circuit performance outwith specification, design modifications can readily be incorporated to rectify the situation. Second, knowing the acceptable limits of process variation needed to maintain design performance within its specification, informed choices can be made regarding the implementation technology and the manufacturer selected to fabricate the design.
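
    A hedged sketch of the first-order canonical form underlying such a statistical cell library is shown below: each cell delay is a nominal value plus linear sensitivities to shared global process parameters plus an independent random term, and series composition stays in the same form, which is what keeps architectural-level analysis far cheaper than sample-by-sample Monte Carlo. The coefficients are invented for illustration and are not taken from the thesis.

        # Hedged sketch of a first-order canonical delay model (assumed coefficients):
        # d = d0 + sum_i a_i * dX_i + a_r * dR, where dX_i are global process
        # parameters shared between cells and dR is an independent term.
        import numpy as np

        class CanonicalDelay:
            def __init__(self, nominal, sensitivities, independent):
                self.d0 = nominal                      # nominal delay, ps
                self.a = np.asarray(sensitivities)     # sensitivities to each global dX_i
                self.r = independent                   # sigma of the independent term

            def series(self, other):
                """Delay of two cells in series: canonical coefficients simply add."""
                return CanonicalDelay(self.d0 + other.d0,
                                      self.a + other.a,
                                      np.hypot(self.r, other.r))

            def mean_and_sigma(self):
                sigma = np.sqrt(np.sum(self.a ** 2) + self.r ** 2)
                return self.d0, sigma

        # Two hypothetical cells sensitive to (Vth, Leff) variation, in picoseconds.
        nand = CanonicalDelay(42.0, [3.0, 2.0], 1.0)
        dff = CanonicalDelay(95.0, [5.0, 4.5], 2.5)
        mu, sigma = nand.series(dff).mean_and_sigma()
        print(f"path delay ~ N({mu:.1f} ps, {sigma:.2f} ps)")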