
    Novel Alternative Methods to Romberg Integration and Richardson’s Extrapolation with Matlab Package:Integral_Calculator

    This paper introduces new methods for numerical integration problems in science and engineering applications. It is shown that the exact results of these integrals can be obtained by these methods using only 2 segments, so no additional function or integrand evaluations are required at different levels of computation, overcoming the usual computational inefficiency. A new Matlab package, Integral_Calculator, is presented. Integral_Calculator provides a user-friendly computational platform that requires only 3 data entries from the user, performs the integration, and returns the result for any function to be integrated. The package has been tested on each of the numerical examples considered below.
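The classical baseline the paper positions itself against can be sketched as a textbook Romberg table: composite trapezoid estimates refined by Richardson extrapolation. This is an illustrative implementation of the standard method, not the paper's proposed alternative; all names are ours.

```python
def romberg(f, a, b, levels=5):
    """Estimate the integral of f on [a, b] with a Romberg table."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))          # one-panel trapezoid rule
    for i in range(1, levels):
        h *= 0.5
        # Composite trapezoid: halve the step, reuse the prior estimate,
        # and add only the newly introduced midpoints.
        new_points = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new_points
        # Richardson extrapolation: each column cancels the next error term.
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

print(romberg(lambda x: x * x, 0.0, 1.0))  # ≈ 1/3
```

Each new level doubles the number of function evaluations, which is precisely the cost the paper's two-segment approach claims to avoid.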

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
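As a toy illustration of the linear mixing model that underlies most of these algorithms, consider a pixel that is a convex combination of just two endmember spectra; the sum-to-one least-squares abundance then has a closed form. The three-band spectra below are invented for illustration; real unmixing handles many endmembers with full nonnegativity and sum-to-one constraints.

```python
# Linear mixing model, two endmembers: y = a*e1 + (1 - a)*e2 + noise.
def unmix_two_endmembers(y, e1, e2):
    """Least-squares abundance of e1 under the sum-to-one constraint,
    clipped to [0, 1] to enforce nonnegativity."""
    d = [u - v for u, v in zip(e1, e2)]                       # direction e1 - e2
    num = sum((yi - vi) * di for yi, vi, di in zip(y, e2, d))
    den = sum(di * di for di in d)
    return min(1.0, max(0.0, num / den))

e1 = [0.9, 0.8, 0.2]   # hypothetical "vegetation" signature
e2 = [0.1, 0.2, 0.7]   # hypothetical "soil" signature
mix = [0.3 * u + 0.7 * v for u, v in zip(e1, e2)]  # 30% e1, 70% e2
print(unmix_two_endmembers(mix, e1, e2))  # ≈ 0.3
```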

    Fuzzy integration using homotopy perturbation method

    Complicated fuzzy integrals are difficult to solve and often cannot be expressed in terms of elementary functions or analytical formulae. In this paper, we calculate fuzzy integrals using the homotopy perturbation method. Some examples are given, revealing its effectiveness and convenience.
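The homotopy perturbation method itself is beyond a short sketch, but the α-cut view of a fuzzy integral that such methods target can be illustrated with ordinary quadrature: integrate the lower and upper branches of a fuzzy-valued integrand separately. Everything below (the triangular fuzzy coefficient, the integrand) is invented for illustration and is not from the paper.

```python
def trapezoid(f, a, b, n=1000):
    """Composite trapezoid rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def fuzzy_integral(alpha, a=0.0, b=1.0):
    """∫ k·x² dx on [a, b], where k is the triangular fuzzy number (1, 2, 3):
    its α-cut is the interval [1 + α, 3 − α]. Returns (lower, upper) bounds."""
    lo, hi = 1 + alpha, 3 - alpha
    crisp = trapezoid(lambda x: x * x, a, b)   # ≈ 1/3 for [0, 1]
    return lo * crisp, hi * crisp

lo1, hi1 = fuzzy_integral(1.0)   # at α = 1 the interval collapses to k = 2
print(lo1, hi1)                  # both ≈ 2/3
```

At α = 0 the bounds spread to roughly [1/3, 1], recovering the full support of the fuzzy result.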

    Kinematic discrimination of ataxia in horses is facilitated by blindfolding

    BACKGROUND: Agreement among experienced clinicians is poor when assessing the presence and severity of ataxia, especially when signs are mild. Consequently, objective gait measurements might be beneficial for the assessment of horses with neurological disease. OBJECTIVES: To assess diagnostic criteria using motion capture to measure variability in spatial gait characteristics and swing duration in ataxic and non-ataxic horses, and to assess whether variability increases with blindfolding. STUDY DESIGN: Cross-sectional. METHODS: A total of 21 horses underwent measurements in a gait laboratory and live neurological grading by multiple raters. In the gait laboratory, the horses walked across a runway surrounded by a 12-camera motion capture system sampling at 240 Hz, both normally and blindfolded, in at least three trials each. Displacements of reflective markers on the head, fetlocks, hooves, fourth lumbar vertebra, tuber coxae and sacrum were derived from three to four consecutive strides; descriptive statistics, receiver operating characteristic (ROC) analysis to determine diagnostic sensitivity, specificity and area under the curve (AUC), and correlations between median ataxia grade and gait parameters were then computed. RESULTS: For horses with a median ataxia grade ≥2, the coefficient of variation of the location of maximum vertical displacement of the pelvic and thoracic distal limbs gave good diagnostic yield. The hooves of the thoracic limbs yielded an AUC of 0.81 with 64% sensitivity and 90% specificity. Blindfolding exacerbated the variation for ataxic horses compared to non-ataxic horses, the hoof marker reaching an AUC of 0.89 with 82% sensitivity and 90% specificity. MAIN LIMITATIONS: The low number of consecutive strides per horse obtained with motion capture could decrease diagnostic utility. CONCLUSIONS: Motion capture can objectively aid the assessment of horses with ataxia. Furthermore, blindfolding increases variation in distal pelvic limb kinematics, making it a useful clinical tool.
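The AUC figures above can be computed from a scalar score and binary labels via the Mann–Whitney statistic: the probability that a randomly chosen ataxic horse scores higher than a randomly chosen sound one. The coefficient-of-variation scores below are invented for illustration, not the study's data.

```python
def auc(scores, labels):
    """Mann-Whitney AUC: fraction of (positive, negative) pairs where the
    positive case scores higher; ties count half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical coefficient-of-variation scores: ataxic horses (label 1) vary more.
scores = [0.05, 0.07, 0.06, 0.12, 0.15, 0.11]
labels = [0,    0,    0,    1,    1,    1]
print(auc(scores, labels))  # → 1.0 (perfect separation in this toy data)
```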

    The Data Big Bang and the Expanding Digital Universe: High-Dimensional, Complex and Massive Data Sets in an Inflationary Epoch

    Recent and forthcoming advances in instrumentation, and giant new surveys, are creating astronomical data sets that are not amenable to the methods of analysis familiar to astronomers. Traditional methods are often inadequate not merely because of the size in bytes of the data sets, but also because of the complexity of modern data sets. Mathematical limitations of familiar algorithms and techniques in dealing with such data sets create a critical need for new paradigms for the representation, analysis and scientific visualization (as opposed to illustrative visualization) of heterogeneous, multiresolution data across application domains. Some of the problems presented by the new data sets have been addressed by other disciplines such as applied mathematics, statistics and machine learning, and have been taken up by other sciences such as the space-based geosciences. Unfortunately, valuable results pertaining to these problems are mostly to be found only in publications outside of astronomy. Here we offer brief overviews of a number of concepts, techniques and developments, some "old" and some new, that are generally unknown to most of the astronomical community but are vital to the analysis and visualization of complex data sets and images. In order for astronomers to take advantage of the richness and complexity of the new era of data, and to be able to identify, adopt, and apply new solutions, the astronomical community needs a certain degree of awareness and understanding of the new concepts. One of the goals of this paper is to help bridge the gap between applied mathematics, artificial intelligence and computer science on the one side and astronomy on the other. Comment: 24 pages, 8 figures, 1 table. Accepted for publication in Advances in Astronomy, special issue "Robotic Astronomy".

    Objective assessment of movement disabilities using wearable sensors

    The research presents a series of comprehensive analyses based on inertial measurements obtained from wearable sensors to quantitatively describe and assess human kinematic performance in tasks closely related to daily-life activities. This is not only a direct application of human movement analysis but is also pivotal in assessing the progression of patients undergoing rehabilitation services. Moreover, the detailed analysis provides clinicians with greater insight for capturing movement disorders and unique ataxic features, including axial abnormalities that clinicians cannot directly observe.

    Comparison of Numerical Solutions of the Romberg Method and Monte Carlo Simulation for Evaluating Integrals

    The simulation results show that, using 9 significant figures, the Romberg method with n = 4 can achieve an error of 0.000000003 for a rational algebraic function and 0.00000007 for an irrational algebraic function. For a fuzzy algebraic function, the computed value even equals the exact value. By contrast, for the Monte Carlo simulation method, even with n = 10000 iterations the resulting error is no smaller than that of the Romberg method. It can therefore be concluded that the Romberg method is far more accurate than Monte Carlo simulation, both for double integrals of algebraic functions and for single integrals of fuzzy functions.
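The gap in accuracy is easy to reproduce: plain Monte Carlo error decays like 1/√n, so even 10000 samples leave a far larger error than a small Romberg table. The sketch below estimates ∫₀¹ x² dx = 1/3; the seed and sample count are illustrative.

```python
import random

def mc_integral(f, a, b, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [a, b]
    from n uniform random samples (fixed seed for reproducibility)."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

est = mc_integral(lambda x: x * x, 0.0, 1.0, 10000)
print(abs(est - 1 / 3))  # typically on the order of 1e-3, far above Romberg's 1e-9
```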

    Local time in diffusive media and applications to imaging

    Local time measures how much time a random walk has spent at a given position. In multiple-scattering media, where waves are diffuse, local time measures the sensitivity of the waves to the local properties of the medium. Local variations of absorption, velocity and scattering between two measurements yield variations in the wave field. These variations are proportional to the local time of the volume where the change happened and to the amplitude of the variation. The wave-field variations are measured using correlations and can be used as input to an inversion algorithm to produce variation maps. The present article gives the expression of the local time in dimensions one, two and three, and an expression of its fluctuations, in order to perform such inversions and estimate their accuracy. Comment: 10 pages, 2 figures and 3 tables.
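In its discrete form, local time is simply the visit count of each site along a random walk, a toy stand-in for the diffuse-wave picture above. Nothing in this sketch comes from the article itself; step count and seed are arbitrary.

```python
import random

def local_time(steps, seed=1):
    """Local time of a simple 1-D random walk: a dict mapping each visited
    site to the number of times the walk occupied it."""
    rng = random.Random(seed)
    pos, visits = 0, {0: 1}           # the walk starts at (and so visits) 0
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        visits[pos] = visits.get(pos, 0) + 1
    return visits

lt = local_time(1000)
# Summed over all sites, local time equals total time: steps + 1 positions.
print(sum(lt.values()))  # → 1001
```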

    Compact representation of wall-bounded turbulence using compressive sampling

    Compressive sampling is a well-known tool for resolving the energetic content of signals that admit a sparse representation. The broadband temporal spectrum acquired from point measurements in wall-bounded turbulence has precluded the prior use of compressive sampling in this kind of flow; however, it is shown here that the frequency content of flow fields that have been Fourier transformed in the homogeneous spatial (wall-parallel) directions is approximately sparse, giving rise to a compact representation of the velocity field. As such, compressive sampling is an ideal tool for reducing the amount of information required to approximate the velocity field. Further, the success of the compressive sampling approach provides strong evidence that this representation is both physically meaningful and indicative of special properties of wall turbulence. Another advantage of compressive sampling over periodic sampling becomes evident at high Reynolds numbers, since the number of samples required to resolve a given bandwidth with compressive sampling scales as the logarithm of the dynamically significant bandwidth, rather than linearly as for periodic sampling. The combination of the Fourier decomposition in the wall-parallel directions, the approximate sparsity in frequency, and empirical bounds on the convection velocity leads to a compact representation of an otherwise broadband distribution of energy in the space defined by streamwise and spanwise wavenumber, frequency, and wall-normal location. The data storage requirements for reconstruction of the full field using compressive sampling are shown to be significantly less than for periodic sampling, in which the Nyquist criterion limits the maximum frequency that can be resolved. Conversely, compressive sampling maximizes the frequency range that can be recovered if the number of samples is limited, resolving frequencies up to several times higher than the mean sampling rate. It is proposed that the approximate sparsity in frequency and the corresponding structure in the spatial domain can be exploited to design simulation schemes for canonical wall turbulence with significantly reduced computational expense compared with current techniques.
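The core idea, recovering a frequency-sparse signal from far fewer random samples than the Nyquist criterion would demand, can be sketched in a minimal form: a single tone sampled at 40 random instants is identified by correlating against a frequency grid (one step of matching pursuit). All parameters below are illustrative and unrelated to the paper's data.

```python
import math
import random

def recover_tone(samples, times, freqs):
    """Return the grid frequency whose cosine best correlates
    (in a normalized sense) with the randomly timed samples."""
    def corr(f):
        c = sum(s * math.cos(2 * math.pi * f * t) for s, t in zip(samples, times))
        e = math.sqrt(sum(math.cos(2 * math.pi * f * t) ** 2 for t in times))
        return abs(c) / e
    return max(freqs, key=corr)

rng = random.Random(0)
f_true = 37.0                                   # tone well above the mean sample rate
times = [rng.random() for _ in range(40)]       # 40 random sample times in [0, 1)
samples = [math.cos(2 * math.pi * f_true * t) for t in times]
print(recover_tone(samples, times, range(1, 101)))  # → 37
```

Uniform sampling at 40 samples per unit time could not resolve a 37 Hz-equivalent tone unambiguously; the random sampling pattern is what makes the sparse recovery possible.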