
    An asynchronous leapfrog method II

    A second-order explicit one-step numerical method for the initial value problem of a general ordinary differential equation is proposed. It is obtained by natural modifications of the well-known leapfrog method, which is a second-order, two-step, explicit method. In the latter method, the input data for an integration step are two system states referring to different times. The use of two states instead of a single one can be seen as the reason for the method's robustness. Since the time step size is thus part of the step input data, it is complicated to change this size during the computation of a discrete trajectory. This is a serious drawback when one needs to implement automatic time step control. The proposed modification transforms one of the two input states into a velocity and thus removes the time step dependency from the step input data. For these new step input data, the leapfrog method gives a unique prescription for how to evolve them stepwise. The stability properties of this modified method are the same as for the original one: the set of absolute stability is the interval [-i,+i] on the imaginary axis. This implies exponential growth of trajectories in situations where the exact trajectory has an asymptote. By considering new evolution steps composed of two consecutive old evolution steps, we can average over the velocities of the sub-steps and obtain an integrator with a much larger set of absolute stability, which is immune to the asymptote problem. The method is exemplified with the equation of motion of a one-dimensional non-linear oscillator describing the radial motion in the Kepler problem. Comment: 41 pages, 25 figures.

    Plug & Test at System Level via Testable TLM Primitives

    With the evolution of Electronic System Level (ESL) design methodologies, we are experiencing an extensive use of Transaction-Level Modeling (TLM). TLM is a high-level approach to modeling digital systems in which the details of communication among modules are separated from those of the implementation of functional units. This paper represents a first step toward the automatic insertion of testing capabilities at the transaction level through the definition of testable TLM primitives. The use of testable TLM primitives should help designers easily obtain testable transaction-level descriptions, implementing what we call a "Plug & Test" design methodology. The proposed approach is intended to work with both hardware and software implementations. In particular, this paper focuses on the design of a testable FIFO communication channel to show how designers are given the freedom to trade off complexity, testability levels, and cost.
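As an illustration of the idea only (in Python rather than the paper's SystemC/TLM setting, with all names hypothetical), a FIFO channel that logs its transactions so a test harness can check them afterwards might look like:

```python
from collections import deque

class TestableFifo:
    """Toy FIFO channel with a built-in observability hook.

    A sketch of the 'testable primitive' idea, not the paper's TLM primitives:
    the channel records every transaction so a harness can verify behaviour.
    """
    def __init__(self, depth):
        self.depth = depth
        self._q = deque()
        self.log = []                       # transaction log read by the test harness

    def put(self, item):
        if len(self._q) >= self.depth:
            raise OverflowError("FIFO full")
        self._q.append(item)
        self.log.append(("put", item))

    def get(self):
        if not self._q:
            raise IndexError("FIFO empty")
        item = self._q.popleft()
        self.log.append(("get", item))
        return item

    def self_test(self):
        """Replay check: every value read was written earlier, in FIFO order."""
        puts = [v for op, v in self.log if op == "put"]
        gets = [v for op, v in self.log if op == "get"]
        return gets == puts[:len(gets)]
```

The logging hook is where the complexity/testability/cost trade-off shows up: richer logs give stronger checks at higher overhead.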

    Trailblazers in Electromechanical Computing

    Over the last six decades, electronic computing has spread so deeply into science and technology that it has become a fundamental tool for studying, researching, and designing. Passing through vacuum tubes, transistors, integrated circuits, and microprocessors, electronics has allowed an amazing growth in computing power [1], and the commissioning in 2016 of the all-Chinese Sunway TaihuLight, with a computing power of 93 PFLOPS (10^15 floating-point operations per second), two and a half times larger than that of the previous world top supercomputer, the Chinese Tianhe-2 of 2013 powered with Intel processors, suggests that the evolution is still far from saturation. It is quite intriguing to wonder what automatic computing was before electronics started such a boost in computing power. Indeed, the search for mechanical tools aimed at relieving people of the burden of computing goes far back into the past, at least to the ancient times when the abacus was built. However, it was with electricity that this possibility made a major step ahead.

    Automatic epilepsy detection using fractal dimensions segmentation and GP-SVM classification

    Objective: The most important part of signal processing for classification is feature extraction, as a mapping from the original input electroencephalographic (EEG) data space to a new feature space with the greatest class separability. Features are not only the most important but also the most difficult part of the classification process, as they define the input data and the classification quality. An ideal set of features would make the classification problem trivial. This article presents novel methods of feature extraction and automatic epileptic seizure classification combining machine learning methods with genetic evolution algorithms. Methods: Classification is performed on EEG data that represent electric brain activity. First, the signal is preprocessed with digital filtration and adaptive segmentation using fractal dimensions as the only segmentation measure. In the next step, a novel method using genetic programming (GP) combined with a support vector machine (SVM), with the confusion matrix as a fitness-function weight, is used to extract feature vectors compressed into a lower-dimensional space and classify the final result into ictal or interictal epochs. Results: The final GP-SVM method improves the discriminatory performance of a classifier while reducing feature dimensionality at the same time. Members of the GP tree structure represent the features themselves, and their number is decided automatically by the compression function introduced in this paper. This novel method improves the overall performance of the SVM classification by dramatically reducing the size of the input feature vector. Conclusion: According to the results, the accuracy of this algorithm is very high and comparable, or even superior, to other automatic detection algorithms. Combined with its great efficiency, this algorithm can be used in real-time epilepsy detection applications. From the results of the algorithm's classification, we observe high sensitivity and specificity, except for the Generalized Tonic-Clonic Seizure (GTCS). As the next step, the optimization of the compression stage and the final SVM evaluation stage is underway. More data need to be obtained on GTCS to improve the overall classification score for GTCS.

    Deriving and improving CMA-ES with Information geometric trust regions

    CMA-ES is one of the most popular stochastic search algorithms. It performs favourably in many tasks without the need for extensive parameter tuning. The algorithm has many beneficial properties, including automatic step-size adaptation, efficient covariance updates that incorporate the current samples as well as the evolution path, and its invariance properties. Its update rules are composed of well-established heuristics, and the theoretical foundations of some of these rules are also well understood. In this paper we fully derive all CMA-ES update rules within the framework of expectation-maximisation-based stochastic search algorithms using information-geometric trust regions. We show that the use of the trust region results in updates similar to CMA-ES for the mean and the covariance matrix, while it allows for the derivation of an improved update rule for the step-size. Our new algorithm, Trust-Region Covariance Matrix Adaptation Evolution Strategy (TR-CMA-ES), is fully derived from first-order optimization principles and performs favourably compared to the standard CMA-ES algorithm.
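A schematic of the step-size-adaptation ingredient the abstract refers to, using the standard cumulative step-size adaptation (CSA) heuristic with an isotropic search distribution. This is not the paper's trust-region derivation nor full CMA-ES (the covariance update is omitted), and all names are mine:

```python
import math
import random

def csa_es(f, x0, sigma0, iters=300, lam=12, seed=1):
    """(mu/mu_w, lambda)-ES with cumulative step-size adaptation.

    Minimal sketch of CSA only: no covariance matrix update, so this is
    an isotropic simplification of CMA-ES, not the full algorithm.
    """
    rng = random.Random(seed)
    n, mu = len(x0), lam // 2
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    s = sum(w)
    w = [wi / s for wi in w]                         # positive recombination weights
    mueff = 1.0 / sum(wi * wi for wi in w)           # variance-effective selection mass
    cs = (mueff + 2.0) / (n + mueff + 5.0)           # path learning rate
    ds = 1.0 + cs                                    # step-size damping
    chi_n = math.sqrt(n) * (1 - 1.0 / (4 * n) + 1.0 / (21 * n * n))
    m, sigma, ps = list(x0), sigma0, [0.0] * n
    for _ in range(iters):
        zs = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(lam)]
        xs = [[m[j] + sigma * z[j] for j in range(n)] for z in zs]
        order = sorted(range(lam), key=lambda k: f(xs[k]))  # minimisation
        zw = [sum(w[i] * zs[order[i]][j] for i in range(mu)) for j in range(n)]
        m = [m[j] + sigma * zw[j] for j in range(n)]        # move the mean
        c = math.sqrt(cs * (2.0 - cs) * mueff)
        ps = [(1.0 - cs) * ps[j] + c * zw[j] for j in range(n)]
        norm = math.sqrt(sum(p * p for p in ps))
        # lengthen sigma when the path is longer than a random walk's, else shrink
        sigma *= math.exp((cs / ds) * (norm / chi_n - 1.0))
    return m
```

The paper's contribution, in these terms, is to recover updates of this flavour (and an improved step-size rule) from information-geometric trust regions rather than from the heuristics above.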

    Juxta-Vascular Pulmonary Nodule Segmentation in PET-CT Imaging Based on an LBF Active Contour Model with Information Entropy and Joint Vector

    The accurate segmentation of pulmonary nodules is an important preprocessing step in the computer-aided diagnosis of lung cancers. However, existing segmentation methods may suffer from edge leakage and cannot segment juxta-vascular pulmonary nodules accurately. To address this problem, a novel automatic segmentation method based on an LBF active contour model with information entropy and a joint vector is proposed in this paper. Our method extracts the region of interest around pulmonary nodules via the standard uptake value (SUV) in Positron Emission Tomography (PET) images, and automatic threshold iteration is used to roughly construct an initial contour. The SUV information entropy and the gray-value joint vector of Positron Emission Tomography-Computed Tomography (PET-CT) images are calculated to drive the evolution of the contour curve. At the edge of pulmonary nodules, the evolution stops and accurate pulmonary nodule segmentation results are obtained. Experimental results show that our method achieves a 92.35% average Dice similarity coefficient, a 2.19 mm Hausdorff distance, and 3.33% false positives relative to the manual segmentation results. Compared with existing methods, our proposed method segments juxta-vascular pulmonary nodules in PET-CT images more accurately and efficiently.
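The two reported agreement metrics are standard and easy to compute; a sketch for binary masks represented as sets of pixel coordinates (function names are mine, and real evaluations would run on full image masks):

```python
import math

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks given as pixel sets."""
    return 2.0 * len(seg & ref) / (len(seg) + len(ref))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (contours)."""
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    h = lambda s, t: max(min(d(p, q) for q in t) for p in s)  # directed distance
    return max(h(a, b), h(b, a))
```

Dice rewards overlap in area, while the Hausdorff distance penalises the single worst boundary disagreement, which is why both are reported together.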