    Complexity Analysis of Surface Electromyography for Assessing the Myoelectric Manifestation of Muscle Fatigue: A Review

    Get PDF
    Surface electromyography (sEMG) records the electrical activity of muscle fibers during contraction. One of its uses is to assess the changes taking place within muscles in the course of a fatiguing contraction, providing insights into muscle fatigue for training protocols and rehabilitation medicine. Until recently, these myoelectric manifestations of muscle fatigue (MMF) were assessed essentially by linear sEMG analyses. However, sEMG exhibits complex behavior arising from many concurrent factors, and in recent years complexity-based methods have therefore been applied to the sEMG signal to better identify MMF onset during sustained contractions. In this review, after concisely describing the traditional linear methods employed to assess MMF, we present the complexity methods used for sEMG analysis, based on an extensive literature search. We show that some of these indices, such as those derived from recurrence plots, entropy, or fractal analysis, can detect MMF efficiently. However, more work remains to be done to compare the complexity indices in terms of reliability and sensitivity; to optimize the choice of embedding dimension, time delay, and threshold distance in reconstructing the phase space; and to elucidate the relationship between complexity estimators and the physiological phenomena underlying the onset of MMF in exercising muscles.
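    One of the entropy-based indices this review covers is sample entropy (SampEn). Below is a minimal sketch of how such an index could be computed on an sEMG epoch; the parameter names follow the abstract's terminology (m = embedding dimension, tau = time delay, r = threshold distance as a fraction of the epoch's standard deviation), but the default values and the NumPy implementation are illustrative assumptions, not prescriptions from the review.

```python
import numpy as np

def _embed(x, dim, tau):
    """Delay-embed x into vectors of length dim with delay tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def sample_entropy(x, m=2, r=0.2, tau=1):
    """SampEn of a 1-D signal; lower values indicate a more regular signal."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def match_pairs(dim):
        emb = _embed(x, dim, tau)
        # Chebyshev distance between every pair of embedded vectors.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        # Count matching pairs, excluding self-matches on the diagonal.
        return (np.sum(d <= tol) - len(emb)) / 2

    b, a = match_pairs(m), match_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

    Tracking SampEn over consecutive epochs of a sustained contraction is one way to look for MMF onset; several of the reviewed studies report that sEMG becomes more regular (lower SampEn) as fatigue develops. Note that the all-pairs distance matrix costs O(n^2) memory, so epochs should be kept short.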

    Drones in turbulence

    Get PDF

    Non Linear Modelling of Financial Data Using Topologically Evolved Neural Network Committees

    No full text
    Most artificial neural network modelling methods are difficult to use, as maximising or minimising an objective function in a non-linear context involves complex optimisation algorithms. Problems related to the efficiency of these algorithms are often compounded by the difficulty of estimating a priori a network's fixed topology for a specific problem, making it even harder to appreciate the real power of neural networks. In this thesis, we propose a method that overcomes these issues by using genetic algorithms to optimise a network's weights and topology simultaneously. The proposed method searches for virtually any kind of network, whether a simple feedforward network, a recurrent network, or even an adaptive network. When the data is high-dimensional, modelling its often sophisticated behaviour is a very complex task that requires the optimisation of thousands of parameters. To help optimisation techniques overcome their limitations or outright failure, practitioners use methods to reduce the dimensionality of the data space. However, some of these methods are forced to make unrealistic assumptions when applied to non-linear data, while others are very complex and require a priori knowledge of the intrinsic dimension of the system, which is usually unknown and very difficult to estimate. The proposed method is non-linear and reduces the dimensionality of the input space without any information on the system's intrinsic dimension. This is achieved by first searching in a low-dimensional space of simple networks and gradually making them more complex as the search progresses, by elaborating on existing solutions; the high-dimensional space of the final solution is only encountered at the very end of the search. This increases the system's efficiency by guaranteeing that the network becomes no more complex than necessary. The modelling performance of the system is further improved by searching not for one network as the ideal solution to a specific problem, but for a combination of networks. These committees of networks are formed by combining a diverse selection of network species from a population of networks derived by the proposed method. This approach automatically exploits the strengths of each member of the committee while compensating for their weaknesses, and it avoids having all members give the same bad judgements at the same time. In this thesis, the proposed method is used in the context of non-linear modelling of high-dimensional financial data. Experimental results are encouraging as far as both robustness and complexity are concerned.
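    The core search loop described above can be sketched compactly. The following is a minimal, illustrative neuroevolution sketch, assuming a plain feedforward genome, fitness defined as negative mean squared error, and a simple widen-a-layer topology mutation; the thesis's actual system searches a far richer space (recurrent and adaptive networks) and uses species-based selection to build diverse committees.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_genome(n_in, n_out):
    # Start minimal: a single two-unit hidden layer with small random weights.
    return [rng.normal(0, 0.5, (n_in, 2)), rng.normal(0, 0.5, (2, n_out))]

def forward(genome, X):
    h = X
    for W in genome[:-1]:
        h = np.tanh(h @ W)
    return h @ genome[-1]

def mutate(genome):
    g = [W.copy() for W in genome]
    for W in g:
        # Weight mutation: perturb a random fraction of the weights.
        mask = rng.random(W.shape) < 0.2
        W[mask] += rng.normal(0, 0.1, mask.sum())
    if rng.random() < 0.1:
        # Topology mutation: widen one hidden layer by a single unit,
        # so complexity grows only gradually as the search progresses.
        i = rng.integers(len(g) - 1)
        g[i] = np.hstack([g[i], rng.normal(0, 0.5, (g[i].shape[0], 1))])
        g[i + 1] = np.vstack([g[i + 1], rng.normal(0, 0.5, (1, g[i + 1].shape[1]))])
    return g

def evolve(X, y, pop_size=30, generations=200, committee_k=5):
    def fitness(g):
        return -np.mean((forward(g, X) - y) ** 2)
    pop = [init_genome(X.shape[1], y.shape[1]) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = elite + [mutate(elite[rng.integers(len(elite))])
                       for _ in range(pop_size - len(elite))]
    # Return the best k networks as a committee rather than a single winner.
    return sorted(pop, key=fitness, reverse=True)[:committee_k]

def committee_predict(committee, X):
    # Average the members' outputs; diverse members' errors tend to cancel.
    return np.mean([forward(g, X) for g in committee], axis=0)
```

    In this toy version the committee is simply the top k survivors; the thesis instead combines a diverse selection of network species, which is what protects the committee against correlated errors.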

    The knowledge transfer openness matrix facilitating accessibility in UK management education teaching

    Get PDF
    This is an empirical investigation of how the Knowledge Transfer Openness Matrix (KTOM) could facilitate accessibility and Knowledge Transfer (KT) in UK Higher Education (HE) Management Education Teaching when utilising learning technologies. Its focus is on where learning technology applications currently assist the KT process and support accessibility for the HE teacher and learner. It considers the philosophy of openness, focussing on its usefulness in supporting accessibility within UK HE Management Education Teaching, and discusses how the openness philosophy may assist the KT process for HE teachers and learners using learning technologies. In particular, the potential to support accessibility within HE Management Education Teaching environments is appraised. Several implications emerge for both teachers and learners; these are characterised in the proposed KTOM. The matrix organises KT events based on the principles of the openness philosophy, and the role of learning technologies in those events is illustrated with regard to teaching and learning accessibility.

    New Foundation in the Sciences: Physics without sweeping infinities under the rug

    Get PDF
    It is widely known at the frontiers of physics that the "sweeping under the rug" practice has been quite the norm rather than the exception. In other words, the leading paradigms have a strong tendency to be hailed as the only game in town. For example, renormalization group theory was hailed as a cure for the infinity problem in QED. A passage quoting Richard Feynman goes as follows: "What the three Nobel Prize winners did, in the words of Feynman, was to get rid of the infinities in the calculations. The infinities are still there, but now they can be skirted around . . . We have designed a method for sweeping them under the rug." [1] And Paul Dirac himself wrote in a similar tone: "Hence most physicists are very satisfied with the situation. They say: Quantum electrodynamics is a good theory, and we do not have to worry about it any more. I must say that I am very dissatisfied with the situation, because this so-called good theory does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small—not neglecting it just because it is infinitely great and you do not want it!" [2] Similarly, dark matter and dark energy were elevated as a plausible way to solve the crisis in the prevalent Big Bang cosmology. That is why we chose the theme New Foundations in the Sciences: to emphasize the necessity of introducing a new set of approaches in the sciences, be it physics, cosmology, consciousness, etc.

    Learning, monetary policy and asset prices

    Get PDF
    The dissertation examines several policy-related implications of relaxing the assumption that economic agents are guided by rational expectations. A first, introductory chapter presents the main technical issues related to adaptive learning. The second chapter studies the implications for monetary policy of positing that both the private sector and the central bank form their expectations through adaptive learning and that the central bank has private information on shocks to the economy but cannot credibly commit. The main finding of this chapter is that when agents learn adaptively, a bias against activist policy arises. The following chapter focuses on large, non-linear models, where no unambiguous linear approximation eligible as a perceived law of motion exists. Accordingly, expectations are heterogeneous and the system converges to a misspecification equilibrium, affected by the communication strategies of the central bank. The main results are: (1) the heterogeneity of expectations persists even when a large number of observations is available; (2) the monetary policymaker has no incentive to be an inflation hawk; (3) partial transparency enhances welfare somewhat, but full transparency does not. The final chapter adopts a model in which agents are fully informed and use Bayesian techniques to estimate the hidden states of the economy. The monetary policy stance is unobservable and state-independent, generating uncertainty among agents, who try to gauge it from inflation: a change in consumer prices that confirms beliefs reduces stock risk premia, while a change that contradicts beliefs drives the risk premia upward. This may generate a negative correlation between returns and inflation that explains the Fisher puzzle. The model is tested on US data. The econometric evidence suggests: (1) that a mimicking portfolio proxying for monetary policy uncertainty is a risk factor priced by financial markets; and (2) that conditioning on monetary uncertainty and fundamentals eliminates the Fisher puzzle.
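    A worked miniature of the adaptive-learning mechanism underlying the first chapters may help fix ideas. In the sketch below, agents re-estimate a perceived law of motion for inflation (assumed, purely for illustration, to be an AR(1)) by constant-gain recursive least squares each period; the gain value and the data-generating process are invented for the example and are not the dissertation's model or calibration.

```python
import numpy as np

def constant_gain_rls(pi, gain=0.02):
    """Track agents' beliefs (a, b) in pi_t = a + b * pi_{t-1} + noise."""
    beta = np.zeros(2)   # current belief vector (a, b)
    R = np.eye(2)        # estimate of the regressors' second-moment matrix
    path = []
    for t in range(1, len(pi)):
        x = np.array([1.0, pi[t - 1]])     # regressors: constant and lag
        R += gain * (np.outer(x, x) - R)   # update second moments
        err = pi[t] - beta @ x             # one-step-ahead forecast error
        beta = beta + gain * np.linalg.solve(R, x) * err
        path.append(beta.copy())
    return np.array(path)

# Beliefs gradually converge toward the true coefficients (0.5, 0.8).
rng = np.random.default_rng(1)
pi = np.zeros(500)
for t in range(1, 500):
    pi[t] = 0.5 + 0.8 * pi[t - 1] + rng.normal(0, 0.3)
print(constant_gain_rls(pi)[-1])
```

    Because the gain is constant rather than decreasing, beliefs never stop responding to recent data; this perpetual responsiveness is one channel through which learning, as opposed to rational expectations, can alter optimal policy.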

    Of evolution, information, vitalism and entropy: reflections of the history of science and epistemology in the works of Balzac, Zola, Queneau, and Houellebecq

    Full text link
    This dissertation proposes the application of rarely-used epistemological and scientific lenses to the works of four authors spanning two centuries: Honoré de Balzac, Émile Zola, Raymond Queneau, and Michel Houellebecq. Each of these novelists engaged closely with questions of science and epistemology, yet each approached that engagement from a different scientific perspective and epistemological moment. In Balzac's La Peau de chagrin, the limits of determinism and experimental method tend to demonstrate that there remains an inscrutable yet guided excess in the interactions between the protagonist Raphaël and his enchanted skin. This speaks to an embodiment of the esprit préscientifique, a framework that minimizes the utility of scientific practice in favor of the unresolved mystery of vitalism. With Zola comes a move away from undefinable mystery to a construction of the novel consistent with Claude Bernard's deterministic experimental medicine. Yet Zola's Roman expérimental project is only partially executed, in that the Newtonian framework underlying Bernard's method yields to contrary evidence in Zola's text of entropy, error, and loss of information consistent with the field of thermodynamics. In Queneau's texts, Zola's interest in current science not only remains but is updated to reflect the massive upheaval in scientific thought that took place in the latter half of the nineteenth and early part of the twentieth centuries. If Queneau's texts explicitly mention advances like relativity, however, they often do so in a humorously dismissive manner that values pre-entropic and even early geometric constructs like perpetual motion machines and squared circles. Queneau's apparent return to the pre-scientific ultimately yields to Houellebecq's textual abyss. For Houellebecq, science is not only to be embraced in its entropic and relativistic constructs; it is these very constructs, and the style typically used to present them, that serve as a reminder of the abjection, decay, and hopelessness of human existence. Gone is the mystery of life in its totality. In its place remain humans acting as a series of particles mechanically obeying deterministic laws. The parenthesis that opened with Balzac's positive coding of pre-scientific thought closes with Houellebecq's negative coding of modern scientific theory.

    All-Silicon-Based Photonic Quantum Random Number Generators

    Get PDF
    Random numbers are fundamental elements in different fields of science and technology, such as computer simulation (e.g., Monte Carlo methods), statistical sampling, cryptography, games and gambling, and other areas where unpredictable results are necessary. Random number generators (RNG) are generally classified as "pseudo" random number generators (PRNG) and "truly" random number generators (TRNG). Pseudo-random numbers are generated by computer algorithms from a (random) seed and a specific formula. The numbers produced in this way (with a small degree of unpredictability) are good enough for some applications, such as computer simulation. However, for other applications, like cryptography, they are not completely reliable: once the seed is revealed, the entire sequence of numbers can be reproduced. Periodicity is another undesirable property of PRNGs; it can be disregarded for most practical purposes if the sequence recurs only after a very long period, but predictability remains a fundamental disadvantage of this type of generator. Truly random numbers, on the other hand, can be generated from physical sources of randomness, like flipping a coin. However, approaches exploiting classical motion and classical physics to generate random numbers possess a deterministic nature that is transferred to the generated numbers. The best solution is to benefit from the intrinsic indeterminacy and randomness of quantum physics. According to quantum theory, the properties of a particle cannot be determined with arbitrary precision until a measurement is carried out; the result of a measurement therefore remains unpredictable and random. Optical phenomena involving photons, the quanta of light, offer various random, non-deterministic properties, including the polarization of the photons, the exact number of photons impinging on a detector, and the photon arrival times. Such intrinsically random properties can be exploited to generate truly random numbers. Silicon (Si) is an attractive material in integrated optics. Microelectronic chips made from Si are cheap, easy to mass-fabricate, and can be densely integrated. Si integrated optical chips, which can generate, modulate, process, and detect light signals, exploit the benefits of Si while also being fully compatible with electronics. Since many electronic components can be integrated into a single chip, Si is an ideal candidate for the production of small, powerful devices, and complementary metal-oxide-semiconductor (CMOS) technology makes it possible to fabricate compact, mass-manufacturable devices with integrated components on the Si platform. In this thesis we aim to model, study, and fabricate a compact photonic quantum random number generator (QRNG) on the Si platform that is able to generate high-quality, "truly" random numbers. The proposed QRNG is based on a Si light source (LED) coupled with a Si single-photon avalanche diode (SPAD) or an array of SPADs, called a Si photomultiplier (SiPM). Various implementations of the QRNG have been developed, reaching an ultimate geometry where both the source and the SPAD are integrated on the same chip and fabricated by the same process. This activity was performed within the project SiQuro (on-Si-chip quantum optics for quantum computing and secure communications), which aims to bring the quantum world into integrated photonics. By using the same successful paradigm of microelectronics (the study and design of very small electronic devices typically made from semiconductor materials), the vision is to have low-cost, mass-manufacturable integrated quantum photonic circuits for a variety of applications in quantum computing, measurement, sensing, secure communications, and services. The Si platform permits, in a natural way, the integration of quantum photonics with electronics. Two methodologies are presented to generate random numbers: one based on photon counting measurements and one based on photon arrival time measurements. The latter is robust, masks the drawbacks of afterpulsing, dead time, and jitter of the Si SPAD, and is effectively insensitive to ageing of the LED and to emission drifts related to temperature variations. The raw data pass all the statistical tests in the National Institute of Standards and Technology (NIST) test suite and the TestU01 Alphabit battery without any post-processing algorithm. The maximum demonstrated bit rate is 1.68 Mbps, with an efficiency of 4 bits per detected photon. In order to realize a small, portable QRNG, we have produced a compact configuration consisting of a Si nanocrystal (Si-NC) LED and a SiPM; the raw data pass all the statistical tests in the NIST suite at a maximum bit rate of 0.5 Mbps. We also prepared and studied a compact chip consisting of a Si-NC LED and an array of detectors, and an integrated chip composed of a Si p+/n junction working in the avalanche region together with a Si SPAD was produced as well. High-quality random numbers are produced through our robust methodology at a maximum speed of 100 kcps. Integration of the source of entropy and the detector on a single chip is an efficient way to produce a compact RNG. A small RNG is an essential element to guarantee the security of our everyday life: it can be readily implemented into electronic devices for data encryption, and the idea of "utmost security" would no longer be limited to particular organizations handling sensitive information but accessible to everyone in everyday life.
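    As a concrete illustration of the arrival-time approach, the sketch below turns (simulated) photon time stamps into bits. Inter-arrival times of a Poisson photon stream are exponentially distributed; when timed with a clock much finer than the mean gap, the low-order bits of the digitized gaps are nearly uniform. The photon rate, clock frequency, and extraction scheme shown here are illustrative assumptions; the abstract reports 4 bits per detected photon, but the thesis's actual method and its robustness to afterpulsing, dead time, and jitter are more involved.

```python
import numpy as np

def arrival_times_to_bits(timestamps, clock_hz=1e9, bits_per_photon=4):
    """Extract low-order bits of fine-clock inter-arrival counts."""
    ticks = np.floor(np.asarray(timestamps) * clock_hz).astype(np.int64)
    gaps = np.diff(ticks)                      # inter-arrival tick counts
    bits = []
    for g in gaps:
        for k in range(bits_per_photon):       # least-significant bits first
            bits.append((g >> k) & 1)
    return np.array(bits, dtype=np.uint8)

# Simulate a 100 kcps photon stream for one second, then extract bits.
rng = np.random.default_rng(42)
gaps_s = rng.exponential(1 / 1e5, size=100_000)   # mean gap: 10 microseconds
bits = arrival_times_to_bits(np.cumsum(gaps_s))
print(len(bits), bits.mean())   # the mean of unbiased bits approaches 0.5
```

    In practice the extracted stream would be validated, as in the thesis, against batteries such as the NIST test suite and TestU01 Alphabit rather than by a simple mean check.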