
    Maximizing the Switching Activity of Different Modules Within a Processor Core via Evolutionary Techniques

    One key aspect to be considered during device testing is the minimization of the switching activity of the circuit under test (CUT), thus avoiding possible problems stemming from overheating it. However, there are also scenarios where maximizing the switching activity of certain modules of the circuit can prove useful (e.g., during Burn-In), in order to exercise the circuit under extreme operating conditions in terms of temperature (and temperature gradients). Resorting to a functional approach based on Software-Based Self-Test guarantees that the high induced activity can neither damage the CUT nor produce any yield loss. However, the generation of suitable and effective test programs remains a challenging task. In this paper, we consider a scenario where the modules to be stressed are sub-modules of a fully pipelined processor. We present a technique, based on an evolutionary approach, able to automatically generate stress test programs, i.e., sequences of instructions achieving a high toggling activity in the target module. With respect to previous approaches, the generated sequences are short and repeatable, thus guaranteeing their easy usability to stress a module (and increase its temperature). The processor we used for our experiments is the OpenRISC 1200. Results demonstrate that the proposed method is effective in achieving a high value of sustained toggling activity with short (3 instructions) and repeatable sequences.
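    A minimal sketch of the kind of evolutionary loop the abstract describes, assuming a hypothetical fitness hook into an RTL simulator of the target module (replaced here by a toy stand-in); the instruction pool, operand encoding, and evolve() interface are illustrative, not the authors' tool.

```python
import random

# Hypothetical instruction pool and operand ranges; the real flow would drive an
# RTL simulator of the OpenRISC 1200 and measure toggling in the target module.
INSTRUCTIONS = ["l.add", "l.mul", "l.xor", "l.sll", "l.lwz", "l.sw"]

def random_sequence(length=3):
    # A candidate is a short, repeatable sequence of (opcode, rA, rB) tuples.
    return [(random.choice(INSTRUCTIONS), random.randrange(32), random.randrange(32))
            for _ in range(length)]

def mutate(seq):
    # Replace one instruction of the sequence with a freshly drawn one.
    seq = list(seq)
    seq[random.randrange(len(seq))] = (
        random.choice(INSTRUCTIONS), random.randrange(32), random.randrange(32))
    return seq

def evolve(fitness, generations=200, pop_size=20):
    # Simple (mu + lambda)-style loop: keep the best half, refill by mutation.
    population = [random_sequence() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

def toy_fitness(seq):
    # Stand-in for the real metric (sustained toggling activity reported by the
    # simulator); here it merely rewards opcode and operand diversity.
    return len({op for op, _, _ in seq}) + len({a for _, a, _ in seq})

print(evolve(toy_fitness))
```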

    Composite Materials in Design Processes

    The use of composite materials in the design process allows one to tailor a component’s mechanical properties, thus reducing its overall weight. On the one hand, the possible combinations of matrices, reinforcements, and technologies provide more options to the designer. On the other hand, they increase the fields that need to be investigated in order to obtain all the information required for a safe design. This Applied Sciences Special Issue, “Composite Materials in Design Processes”, collects recent advances in design methods for components made of composites and in composite material properties, at the laminate level or using a multi-scale approach.

    Machine Learning-Based Data and Model Driven Bayesian Uncertainty Quantification of Inverse Problems for Suspended Non-structural System

    Inverse problems involve extracting the internal structure of a physical system from noisy measurement data. In many fields, Bayesian inference is used to address the ill-conditioned nature of the inverse problem by incorporating prior information through a prior distribution. In the nonparametric Bayesian framework, surrogate models such as Gaussian Processes or Deep Neural Networks are used as flexible and effective probabilistic modeling tools to overcome the curse of dimensionality and reduce computational costs. In practical systems and computer models, uncertainties can be addressed through parameter calibration, sensitivity analysis, and uncertainty quantification, leading to improved reliability and robustness of decision and control strategies based on simulation or prediction results. However, preventing overfitting in the surrogate model and incorporating reasonable prior knowledge of the embedded physics and models remains a challenge. Suspended Nonstructural Systems (SNS) pose a significant challenge in the inverse problem setting; research on their seismic performance and mechanical models, particularly on the inverse problem and uncertainty quantification, is still lacking. To address this, the author conducts full-scale shaking-table dynamic experiments, monotonic and cyclic tests, and simulations of different types of SNS to investigate their mechanical behavior. To quantify the uncertainty of the inverse problem, the author proposes a new framework that adopts machine learning-based, data- and model-driven stochastic Gaussian process model calibration, quantifying the uncertainty via a new black-box variational inference that accounts for a geometric complexity measure, Minimum Description Length (MDL), through Bayesian inference. The framework is validated on SNS and yields optimal generalizability and computational scalability.
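    A minimal sketch of the generic ingredients mentioned above (a Gaussian-process surrogate for an expensive simulator plus Bayesian calibration of a parameter from noisy data); it is not the author's framework, which couples stochastic GP calibration with black-box variational inference and an MDL term, and the simulator, kernel settings, and prior are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for an expensive simulator mapping a parameter theta to a response.
def simulator(theta):
    return np.sin(3 * theta) + 0.5 * theta

# --- Gaussian-process surrogate (squared-exponential kernel, noise-free runs) ---
def kernel(a, b, ell=0.3, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

theta_train = np.linspace(0.0, 2.0, 8)          # a handful of simulator runs
y_train = simulator(theta_train)
K = kernel(theta_train, theta_train) + 1e-8 * np.eye(theta_train.size)
alpha = np.linalg.solve(K, y_train)

def surrogate_mean(theta_grid):
    # GP posterior mean evaluated on a grid of candidate parameters.
    return kernel(theta_grid, theta_train) @ alpha

# --- Bayesian calibration on a grid: prior times likelihood, then normalize ---
rng = np.random.default_rng(0)
theta_true, sigma = 1.2, 0.05
y_obs = simulator(theta_true) + rng.normal(0.0, sigma)

theta_grid = np.linspace(0.0, 2.0, 400)
dtheta = theta_grid[1] - theta_grid[0]
prior = np.exp(-0.5 * (theta_grid - 1.0) ** 2 / 0.5 ** 2)       # Gaussian prior
likelihood = np.exp(-0.5 * (y_obs - surrogate_mean(theta_grid)) ** 2 / sigma ** 2)
posterior = prior * likelihood
posterior /= posterior.sum() * dtheta
print("posterior mean of theta:", (theta_grid * posterior).sum() * dtheta)
```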

    Quantum Mind in TGD Universe

    The basic difficulties and challenges of the Quantum Mind program are analyzed. The conclusion is that the recent form of quantum theory is not enough to overcome the challenges posed by the philosophical problems of quantum physics and quantum mind theories, and by the puzzles of quantum biology and quantum neuroscience. Certain anomalies of present-day biology, giving hints about how quantum theory should be generalized, serve as an introduction to the summary of the aspects of quantum TGD especially relevant to the notion of Quantum Mind. These include the notions of many-sheeted space-time and field (magnetic) body, zero energy ontology, the identification of dark matter as a hierarchy of phases with a large value of the Planck constant, and p-adic physics proposed to define physical correlates for cognition and intentionality. Especially relevant is the number-theoretic generalization of Shannon entropy: this entropy is well defined for rational or even algebraic entanglement probabilities, and its minimum as a function of the prime defining the p-adic norm appearing in the definition of the entropy is negative. Therefore the notion of negentropic entanglement makes sense in the intersection of the real and p-adic worlds, where the entropy can be negative: this motivates the proposal that living matter resides in this intersection. TGD-inspired theory of consciousness is introduced as a generalization of quantum measurement theory. The notions of quantum jump and self, defining the generalization of the notion of observer, are introduced, and it is argued that the notion of self reduces to that of quantum jump. The Negentropy Maximization Principle reproduces standard quantum measurement theory for ordinary entanglement but respects negentropic entanglement, so that the outcome of state function reduction is not random for negentropic entanglement. The new view about the relationship between experienced time and geometric time, combined with zero energy ontology, is claimed to solve the basic philosophical difficulties of quantum measurement theory and consciousness theory. The identification of the quantum correlates of sensory qualia and Boolean cognition, emotions, cognition and intentionality, and the self-referentiality of consciousness is discussed.
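    A minimal numerical illustration of the number-theoretic entropy mentioned above, assuming the standard definition S_p = -sum_k P_k log|P_k|_p with |.|_p the p-adic norm of a rational probability; the probability values and the set of primes scanned are illustrative.

```python
from fractions import Fraction
from math import log

def p_adic_norm(q, p):
    # |q|_p = p**(-v), where v is the exponent of the prime p in the rational q.
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return Fraction(p) ** (-v)

def number_theoretic_entropy(probs, p):
    # S_p = -sum_k P_k * log(|P_k|_p); can be negative for a suitable prime p.
    return -sum(float(P) * log(float(p_adic_norm(P, p))) for P in probs)

# Four equal rational probabilities 1/4: the entropy is negative for p = 2
# (the prime appearing in the denominators) and zero for the other primes,
# so its minimum over primes is negative, as stated in the abstract.
probs = [Fraction(1, 4)] * 4
for p in (2, 3, 5, 7):
    print(p, round(number_theoretic_entropy(probs, p), 4))
```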

    Proceedings of the Twenty Second Nordic Seminar on Computational Mechanics


    Coherent algorithm for reconstructing the location of a coalescing binary system using a network of three gravitational interferometers

    The Virgo project is one of the ground-based interferometers that aim to detect gravitational waves. This thesis concerns data analysis for coalescing binary stars, which are among the most promising gravitational-wave sources, since the shape of their signal is well known. Gravitational-wave emission from a binary system of compact stars acts like a sort of feedback: the system radiates away its orbital energy, so the orbit shrinks and the emission becomes stronger. The signal is therefore called a chirp, owing to this characteristic increase of amplitude and frequency with time. The expected rate of double neutron star mergers is 3.4 · 10⁻⁵ per year; translated into a detection rate, this corresponds to a detected event every 125 years for the LIGO detectors, and one every 148 years for Virgo. For the advanced new generation of detectors, which will become operational in the coming years, the expected rate with the 2004 proposed configuration of advanced detectors is definitely better: 6 events per year for the so-called Enhanced LIGO, and 3 every two years for Advanced Virgo (updated detection-rate scenarios, with a more recent Advanced Virgo configuration, are under development). The technique best suited to the analysis of this kind of signal is the matched filter, which consists in computing the correlation between the data stream (the output of the gravitational-wave interferometer) and a set of theoretical templates. From this analysis, using a single detector, it is possible to determine the masses of the two stars and the so-called optimal-orientation distance, that is, the source distance under the assumption that the orbit has the best inclination with respect to the interferometer's line of sight. Reconstructing the source position, so as to draw a sky map of gravitational-wave sources, requires at least three non-coincident detectors in order to perform a triangulation. Another very good reason to use a network of gravitational-wave interferometers is that the detection rate can be improved by considering a network of three detectors (Virgo, Hanford and Livingston) and performing a coherent analysis, since in this case the expected rate corresponds to one event every 26 years. Two different methods are used for the network analysis: the coincident method and the coherent one. The first is the most intuitive: it simply consists of a separate single-detector analysis performed by each interferometer, followed by a comparison between the single-detector candidates in search of compatible events. After that process, only the coincidences remain as candidate events, and they can be used for source position reconstruction, using the time delays between detectors. The basic idea of the coherent method is to construct an ideal detector equivalent to the network, to which each real interferometer coherently contributes with its sensitivity, location, and orientation. For this purpose a so-called network statistic is first constructed and then maximized in order to extract the source parameters. In this thesis we have worked on coalescing-binary network analysis, trying to determine the best strategy for source position reconstruction. We have developed a pipeline that implements a fully coherent method, in a few different variations, and we have compared them with the classical time-of-flight coincidence analysis.
The coincident method has been optimized in order to make a fair comparison; in particular, we have adopted the reference time for implementing the coincidence, and we have further improved the arrival-time accuracy by fitting the shape of the matched-filter response. Among the coherent techniques tested, the simplest has been a direct maximization of the network likelihood. A fit of the likelihood, intended to improve the determination of its maximum, has also been attempted, but the fitting procedure proved unstable; instead, we have found it most effective to define the most likely declination and right ascension by means of an averaging procedure weighted by the corresponding network likelihood. This procedure removes the discretization effect due to the finite sampling rate of the analysis and provides results compatible with those obtained with the time-of-flight technique, in a relatively automatic way. The study of the accuracy problem, comparing the two methods of analysis, yields two important results: first, the determination of the best coherent strategy for reconstructing the source position among all the alternatives, both in terms of efficiency and of computational cost; second, it gives us the starting point for pushing the coincident method to its best, provided that all the correlator information is used. Looking to the future, since new interferometric gravitational-wave detectors are under construction and being planned, another important feature of the coherent method is its flexibility in adapting to a larger number of detectors. The coherent method can tell us how to combine them in order to obtain the source position with the best accuracy, instead of analyzing all the possible independent triangulations and thereby losing part of the astrophysical information about the event.
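    A minimal sketch of the matched-filter idea underlying the single-detector analysis described above: correlate a noisy data stream with a chirp-like template and read the arrival time off the correlation peak. The sampling rate, template, noise model, and amplitudes are illustrative assumptions; real pipelines work with frequency-domain templates and detector noise spectra.

```python
import numpy as np

fs = 4096.0                                   # sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)

def chirp(t, f0=50.0, k=200.0):
    # Toy chirp: instantaneous frequency f0 + k*t increases with time,
    # mimicking the frequency sweep of a coalescing-binary signal.
    return np.sin(2.0 * np.pi * (f0 * t + 0.5 * k * t ** 2))

template = chirp(t[: int(0.25 * fs)])         # quarter-second template
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, t.size)           # white detector noise (toy model)
t0 = int(0.4 * fs)                            # true arrival sample
data[t0 : t0 + template.size] += 0.5 * template

# Correlate the data stream against the template; the peak estimates arrival time.
corr = np.correlate(data, template, mode="valid")
t_hat = np.argmax(np.abs(corr)) / fs
print(f"estimated arrival time: {t_hat:.4f} s  (true: {t0 / fs:.4f} s)")
```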

    High Energy Density Propulsion Systems and Small Engine Dynamometer

    This study investigates all possible methods of powering small unmanned vehicles, provides reasoning for the propulsion system down-selection, and covers in detail the design and production of a dynamometer to confirm theoretical energy density calculations for small engines. Initial energy density calculations are based upon manufacturer data, pressure vessel theory, and ideal thermodynamic cycle efficiencies. Engine tests are conducted with a braking-type dynamometer for constant-load energy density tests, and show true energy densities in excess of 1400 Wh/lb of fuel. Theory predicts lithium polymer, the present energy storage device of choice for unmanned systems, to have much lower energy densities than other energy conversion sources. Small engines designed for efficiency, instead of maximum power, would provide the most advantageous method for powering small unmanned vehicles because these engines offer widely variable power output, lose mass during flight, and generate rotational power directly. Theoretical predictions for the energy density of small engines have been verified through testing; values up to 1400 Wh/lb can be achieved under proper operating conditions. The implementation of such a high energy density system will require a significant amount of follow-on design work to enable the engines to tolerate the higher temperatures of lean operation. Suggestions are proposed to enable a reliable small-engine propulsion system in future work. Performance calculations show that a mature system is capable of month-long flight times and unrefueled circumnavigation of the globe. Mechanical & Aerospace Engineering
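    A back-of-the-envelope check of the roughly 1400 Wh/lb figure quoted above, assuming a gasoline-like fuel of about 12 kWh/kg converted at an assumed 26% efficiency and comparing it with a typical lithium-polymer pack; the inputs are rounded textbook values, not the study's measurements.

```python
# Back-of-the-envelope check of the ~1400 Wh/lb figure quoted above.
# All inputs are assumed, rounded textbook values, not the study's measurements.
LB_PER_KG = 2.20462

fuel_wh_per_kg = 12_000        # gasoline-like fuel, roughly 12 kWh/kg heating value
engine_efficiency = 0.26       # assumed brake efficiency of a small lean-running engine
lipo_wh_per_kg = 180           # typical lithium-polymer pack

fuel_wh_per_lb = fuel_wh_per_kg / LB_PER_KG * engine_efficiency
lipo_wh_per_lb = lipo_wh_per_kg / LB_PER_KG

print(f"fuel + small engine: {fuel_wh_per_lb:7.0f} Wh/lb")   # about 1400 Wh/lb
print(f"lithium polymer:     {lipo_wh_per_lb:7.0f} Wh/lb")   # about 80 Wh/lb
```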

    An advanced study of an Application Technology Satellite /ATS-4/ mission, volume I, book 2 Final study report, May - Nov. 1966

    Application Technology Satellite /ATS/ SPACECRAFT tradeoff and analysis - configuration, paraboloid antenna, guidance and control, power, spacecraft design, and apogee motor selection

    Reliability Analysis of Electrotechnical Devices

    This is a book on practical approaches to the reliability of electrotechnical devices and systems. It covers electromagnetic effects, radiation effects, environmental effects, and the impact of the manufacturing process on electronic materials, devices, and boards.