28 research outputs found

    Extraction of reliable information from time-domain pressure and flow signals measured by means of forced oscillation techniques

    This paper provides a proof of concept for applying the forced oscillation lung function test to assess the viscoelastic properties of the airways and tissue. In particular, a novel signal processing algorithm is applied to non-stationary, noisy, relatively short time series of respiratory pressure and flow signals. The technique is used to extract the useful information from signals acquired under two measurement conditions: pseudo-functional residual capacity (PFRC), measured at the lowest lung volume under maximum deflation, and pseudo-total lung capacity (PTLC), measured at the maximum lung volume under maximum inflation. The results suggest that the proposed technique can extract information on the viscoelastic properties of the lung tissue at a macroscopic level. The conclusion of this preliminary study is that the proposed combination of signal processing method and lung function test is suitable for application to a large database in order to deliver reference values and to perform further statistical analysis.
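
    The abstract does not spell out the novel algorithm itself; purely as background, the sketch below shows the conventional forced-oscillation analysis, in which the respiratory impedance is estimated from the pressure-flow cross-spectrum and the flow auto-spectrum. All function names, parameters and test data are illustrative assumptions, not taken from the paper.

```python
# Minimal background sketch (illustrative only): conventional forced-oscillation
# analysis. The respiratory impedance Z(f) = S_pq(f) / S_qq(f) is the ratio of
# the pressure-flow cross-spectrum to the flow auto-spectrum, estimated with
# Welch averaging to cope with noisy, short recordings.
import numpy as np
from scipy.signal import csd, welch

def respiratory_impedance(pressure, flow, fs, nperseg=1024):
    """Estimate Z(f) from simultaneously sampled pressure and flow signals."""
    f, s_pq = csd(pressure, flow, fs=fs, nperseg=nperseg)  # cross-spectrum
    _, s_qq = welch(flow, fs=fs, nperseg=nperseg)          # flow auto-spectrum
    z = s_pq / s_qq                                         # complex impedance
    return f, z.real, z.imag                                # resistance, reactance

# Example on synthetic data: a 5 Hz forcing oscillation buried in noise.
fs = 200.0
t = np.arange(0, 16, 1 / fs)
flow = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
pressure = 2.0 * np.sin(2 * np.pi * 5 * t + 0.3) + 0.1 * np.random.randn(t.size)
f, resistance, reactance = respiratory_impedance(pressure, flow, fs)
```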

    Self-similarity principle: the reduced description of randomness

    A new general fitting method based on the self-similar (SS) organization of random sequences is presented. The proposed analytical function helps to fit the response of many complex systems whose recorded data form a self-similar curve. The verified SS principle opens new possibilities for fitting economic, meteorological and other complex data when a mathematical model is absent but a reduced description in terms of some universal set of fitting parameters is needed. The fitting function is verified on economic data (the price of a commodity versus time) and weather data (the Earth’s mean surface temperature versus time), and in these nontrivial cases a very good fit of the initial data set is obtained. The general conditions for applying this fitting method to the response of many complex systems, as well as its forecasting possibilities, are discussed.
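
    As an illustration of this kind of fit, the sketch below uses a generic log-periodic power law, a form commonly associated with discrete self-similarity; the paper’s exact analytical function may differ, and all names, starting values and test data are assumptions.

```python
# Hedged sketch: fitting a generic log-periodic power law, the kind of function
# associated with discrete self-similarity (the paper's exact form may differ).
import numpy as np
from scipy.optimize import curve_fit

def log_periodic_power_law(x, a0, a1, c, beta, log_xi, phi):
    """y(x) = x**beta * (a0 + a1*cos(2*pi*ln(x)/log_xi + phi)) + c."""
    return x**beta * (a0 + a1 * np.cos(2 * np.pi * np.log(x) / log_xi + phi)) + c

# Synthetic "price-like" series with a trend plus noise (for illustration only).
x = np.linspace(1.0, 100.0, 500)
y = log_periodic_power_law(x, 1.0, 0.2, 5.0, 0.6, np.log(2.5), 0.4)
y = y + 0.3 * np.random.randn(x.size)

p0 = [1.0, 0.1, 0.0, 0.5, 1.0, 0.0]       # rough initial guess for the fit
params, _ = curve_fit(log_periodic_power_law, x, y, p0=p0, maxfev=20000)
```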

    Self-Similar Property of Random Signals: Solution of Inverse Problem

    Many random signals with clearly expressed trends can have self-similar properties. To expose this self-similarity, a new presentation of the signals is suggested. A novel algorithm for the inverse solution of the scaling equation is developed; it finds the scaling parameters, the corresponding power-law exponent and the unknown log-periodic function from the fitting procedure. The effectiveness of the algorithm is tested on financial data revealing seasonal fluctuations of annual, monthly and weekly prices. General recommendations are given that allow this algorithm to be verified on arbitrary data series.
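
    One plausible, simplified way to invert a scaling equation of the form F(ax) ≈ a^β F(x) is sketched below: grid-search the scaling factor and exponent, then read off the residual log-periodic factor. This is an illustrative reconstruction under those assumptions, not the paper’s algorithm; all names and grids are invented for the example.

```python
# Hedged sketch of one way to invert the scaling equation F(a*x) ~ a**beta * F(x):
# grid-search the scaling factor a and exponent beta, then recover the residual
# log-periodic factor K = F(x) / x**beta.  Parameter names are illustrative.
import numpy as np

def scaling_mismatch(x, F, a, beta):
    """Mean squared mismatch between F(a*x) and a**beta * F(x) on the overlap."""
    Fa = np.interp(a * x, x, F, left=np.nan, right=np.nan)  # resample F at a*x
    ok = ~np.isnan(Fa)
    return np.mean((Fa[ok] - a**beta * F[ok]) ** 2)

def fit_scaling(x, F, a_grid, beta_grid):
    best = (np.inf, None, None)
    for a in a_grid:
        for beta in beta_grid:
            err = scaling_mismatch(x, F, a, beta)
            if err < best[0]:
                best = (err, a, beta)
    _, a, beta = best
    K = F / x**beta                  # what remains should be log-periodic in ln x
    return a, beta, K

# Synthetic self-similar test signal: x**0.5 modulated log-periodically.
x = np.linspace(1.0, 50.0, 2000)
F = x**0.5 * (1.0 + 0.2 * np.cos(2 * np.pi * np.log(x) / np.log(2.0)))
a, beta, K = fit_scaling(x, F, np.linspace(1.5, 3.0, 31), np.linspace(0.1, 1.0, 46))
```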

    Detection of quasi-periodic processes in complex systems: how do we quantitatively describe their properties?

    It has been shown that at least two general scenarios of data structuring are possible: (a) a self-similar (SS) scenario, in which the measured data form an SS structure, and (b) a quasi-periodic (QP) scenario, in which the repeated (strongly correlated) data form random sequences that are almost periodic with respect to each other. In the second case it becomes possible to describe their behavior and to express part of their randomness quantitatively in terms of the deterministic amplitude-frequency response belonging to the generalized Prony spectrum. This possibility allows us to re-examine the conventional concept of measurements and opens a new way to describe a wide set of different data. In particular, it concerns complex systems for which no ‘best-fit’ model of the measured data exists, yet the data must still be described in terms of a reduced number of quantitative parameters. The capabilities of the proposed approach and of the detection algorithm for QP processes were demonstrated on actual data: spectroscopic data recorded for pure water and acoustic data for a test hole. The suggested methodology allows the accepted classification of incommensurable and self-affine spatial structures to be revised and provides an accurate interpretation of generalized Prony spectroscopy, which includes Fourier spectroscopy as a particular case.
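
    For orientation, the sketch below implements the classical least-squares Prony decomposition, a simplified relative of the generalized Prony spectrum discussed above: the signal is modelled as a sum of damped complex exponentials, which reduces to a Fourier-type description when the damping vanishes. The component count and the test signal are assumptions, not data from the paper.

```python
# Hedged sketch: classical least-squares Prony decomposition (a simplified
# relative of the generalized Prony spectrum).  The signal is modelled as a sum
# of possibly damped complex exponentials y[n] ~ sum_k c[k] * z[k]**n.
import numpy as np

def prony(y, p):
    """Fit y[n] ~ sum_k c[k] * z[k]**n with p exponential components."""
    N = len(y)
    # 1) linear-prediction coefficients from the autoregressive system
    rows = np.array([y[i:i + p][::-1] for i in range(N - p)])
    a, *_ = np.linalg.lstsq(rows, -y[p:N], rcond=None)
    # 2) poles z_k are the roots of the prediction polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) complex amplitudes by linear least squares on the Vandermonde system
    V = np.vander(z, N, increasing=True).T
    c, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return z, c

# Example: two damped oscillations plus a little noise.
n = np.arange(200)
y = (np.exp(-0.01 * n) * np.cos(0.3 * n)
     + 0.5 * np.exp(-0.02 * n) * np.cos(0.8 * n)
     + 0.01 * np.random.randn(n.size))
z, c = prony(y, p=4)                       # 4 poles: two complex-conjugate pairs
freqs, damps = np.angle(z), -np.log(np.abs(z))
```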

    Application of the Complex Moments for Selection of an Optimal Sensor

    For the first time, we apply the statistics of complex moments to the selection of an optimal pressure sensor (from an available set of sensors) based on their statistical/correlation characteristics. The complex moments contain an additional source of information and therefore make it possible to compare random sequences registered by almost identical devices. The proposed general algorithm calculates 12 key correlation parameters in the significance space, and these parameters enable the desired comparison. The algorithm is rather general and can be applied to other data sets, provided they are presented in the form of rectangular matrices, each containing N data points in M columns associated with repeated measurement cycles. In addition, we emphasize that the correlation values evaluated with the Pearson correlation coefficient (PCC) have a relative character. One can also introduce external correlations based on the statistics of the fractional/complex moments, which form a complete picture of the correlations. To the PCC value of the internal correlations one can add at least 7 additional external correlators evaluated in the space of fractional and complex moments, in order to make a justified choice. We expect that the proposed algorithm (containing an additional source of information in the complex space) can find wide application in the treatment of various data wherever it is necessary to select the “best” sensors/chips based on their measured data, usually presented in the form of random rectangular matrices.
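
    The paper’s 12 specific correlation parameters are not reproduced here; the sketch below only illustrates the raw ingredients, namely fractional/complex moments of the form <|x|^p> with complex order p and the ordinary Pearson correlation, applied to two simulated N x M sensor matrices. All names, orders and data are illustrative.

```python
# Hedged sketch: fractional/complex moments m(p) = <|x|**p> with complex p,
# plus the ordinary Pearson correlation, as raw ingredients for comparing two
# sensors whose repeated measurements are stored as columns of an N x M matrix.
import numpy as np

def complex_moment(x, p):
    """Generalized moment <|x|**p> for a complex order p (x must be nonzero)."""
    ax = np.abs(x).astype(float)
    return np.mean(np.exp(p * np.log(ax)))       # |x|**p = exp(p * ln|x|)

def moment_curve(x, orders):
    return np.array([complex_moment(x, p) for p in orders])

def compare_sensors(data_a, data_b, orders):
    """Two complementary similarity scores: PCC of the mean records and PCC of
    the (absolute) complex-moment curves of the two sensors."""
    pcc = np.corrcoef(data_a.mean(axis=1), data_b.mean(axis=1))[0, 1]
    ma = np.abs(moment_curve(data_a.ravel(), orders))
    mb = np.abs(moment_curve(data_b.ravel(), orders))
    pcc_moments = np.corrcoef(ma, mb)[0, 1]
    return pcc, pcc_moments

# Example: two simulated pressure sensors, N = 1000 samples, M = 5 repeated cycles.
rng = np.random.default_rng(0)
sensor_a = 1.0 + 0.05 * rng.standard_normal((1000, 5))
sensor_b = 1.0 + 0.06 * rng.standard_normal((1000, 5))
orders = [0.5, 1.0, 1.5, 1 + 0.5j, 1 + 1j]       # fractional and complex orders
print(compare_sensors(sensor_a, sensor_b, orders))
```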

    Can Self-Similarity Processes Be Reflected by the Power-Law Dependencies?

    This work was greatly influenced by the opinions of one of the authors (JS), who demonstrated in a recent book that it is important to distinguish between “fractal models” and “fractal” (power-law) behaviors. Following the self-similarity principle (SSP), the authors of this study draw a clear distinction between independent “fractal” (power-law) behavior and “fractal models”, which result from the solution of equations incorporating non-integer differentiation/integration operators. It can be demonstrated that many random curves resemble one another and can be described by functions with real and complex-conjugated power-law exponents. Bellman’s inequality can be used to show that the global fitting minimum corresponds to the generalized geometric mean rather than to the arithmetic mean, which is usually taken as the fundamental criterion in signal processing. To highlight the efficiency of the proposed algorithms, they are applied to two data sets: one without a clearly expressed power-law behavior and one containing a clear power-law dependence.
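
    To make the contrast concrete, the sketch below fits the same noisy power-law data by minimising the arithmetic mean of squared residuals (ordinary least squares) and, alternatively, a plain geometric mean of squared residuals; the paper’s generalized geometric-mean criterion may differ in detail, and the model, data and optimiser settings here are assumptions.

```python
# Hedged sketch contrasting two cost functions: the usual arithmetic mean of
# squared residuals (least squares) versus a plain geometric mean of squared
# residuals, minimised via the mean of their logarithms.
import numpy as np
from scipy.optimize import minimize

def model(params, x):
    A, beta = params
    return A * x**beta                           # simple power-law model

def arithmetic_cost(params, x, y):
    r = y - model(params, x)
    return np.mean(r**2)

def geometric_cost(params, x, y, eps=1e-12):
    r = y - model(params, x)
    return np.mean(np.log(r**2 + eps))           # log of the geometric mean of r**2

# Noisy power-law data with a few outliers (illustrative only).
rng = np.random.default_rng(1)
x = np.linspace(1.0, 20.0, 200)
y = 2.0 * x**0.7 + 0.2 * rng.standard_normal(x.size)
y[::40] += 5.0                                   # inject outliers

fit_am = minimize(arithmetic_cost, x0=[1.0, 0.5], args=(x, y), method="Nelder-Mead")
fit_gm = minimize(geometric_cost, x0=[1.0, 0.5], args=(x, y), method="Nelder-Mead")
```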

    Self-Similarity Principle and the General Theory of Fractal Elements: How to Fit a Random Curve with a Clearly Expressed Trend?

    The authors of this work found that the well-known power-law fractal element needs several important revisions. It can now be demonstrated that any scaling equation associated with a fractal element is actually K-fold degenerate and includes previously unknown but crucial adjustments. These findings have the potential to significantly alter the preexisting theory and to create new connections between it and its experimental support, particularly in measurements of the impedances of diverse metamaterials. Using the method of reduction to three invariant points (Ymx, Ymn, and Ymin), it is now easy to demonstrate that any random curve with a clearly expressed trend over a specific range of scales is self-similar: the chosen random curve, even after being compressed a certain number of times, still resembles the original curve. Based on this common peculiarity, it is possible to derive "a universal" fitting function that can be used in a variety of applied sciences, particularly those dealing with complex systems, to parametrize many initial curves when a fitting function derived from a simple model is not available. This function, derived from the self-similarity principle, demonstrates its effectiveness on data linked to photodiode noise and on the smoothed integral curves produced from the well-known transcendental numbers E and Pi, which are considered in the paper as an example.
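
    One plausible reading of the reduction-to-three-points check is sketched below: each window of the curve is reduced to its maximum, mean and minimum (Ymx, Ymn and Ymin are read here as max, mean and min), and the reduced description of a b-fold compressed copy is compared with that of the original curve. This is an illustration only, not the paper’s exact procedure.

```python
# Hedged sketch of a "three invariant points" self-similarity check: reduce each
# window of a curve to (max, mean, min) and compare the reduced description of a
# b-fold compressed copy with that of the original curve.
import numpy as np

def reduce_to_three_points(y, n_windows):
    """Per-window (max, mean, min) triples of a 1-D curve."""
    chunks = np.array_split(y, n_windows)
    return np.array([[c.max(), c.mean(), c.min()] for c in chunks])

def self_similarity_score(y, b=2, n_windows=20):
    """Correlation between the reduced original and its b-fold compressed copy."""
    compressed = y[::b]                          # simple b-fold compression
    r_full = reduce_to_three_points(y, n_windows)
    r_comp = reduce_to_three_points(compressed, n_windows)
    return np.corrcoef(r_full.ravel(), r_comp.ravel())[0, 1]

# Example on a random-walk-like curve with a clear trend.
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(4000)) + 0.05 * np.arange(4000)
print(self_similarity_score(y, b=2))
```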
