1,305 research outputs found

    Development and evaluation of a Fault-Tolerant Multiprocessor (FTMP) computer. Volume 2: FTMP software

    The software developed for the Fault-Tolerant Multiprocessor (FTMP) is described. The FTMP executive is a timer-interrupt-driven dispatcher that schedules iterative tasks running at 3.125, 12.5, and 25 Hz. Major tasks that run under the executive include system configuration control, flight control, and display. The flight control task includes autopilot and autoland functions for a jet transport aircraft. System displays include status displays of all hardware elements (processors, memories, I/O ports, buses), failure log displays showing transient and hard faults, and an autopilot display. All software is written in a higher-order language (AED, an ALGOL derivative). The executive is a fully distributed general-purpose executive that automatically balances the load among available processor triads. Provisions for graceful performance degradation under processing overload are an integral part of the scheduling algorithms.
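    The rate-group structure described above (iterative tasks at 3.125, 12.5, and 25 Hz driven by a timer interrupt) can be sketched as follows. This is a minimal illustration of the scheduling idea, not the FTMP executive itself; all names are hypothetical. The base tick runs at the fastest rate (25 Hz), and the slower groups fire every 2nd and 8th tick (25/12.5 = 2, 25/3.125 = 8).

```python
# Hypothetical sketch of a timer-interrupt rate-group dispatcher.
# Task rates (3.125, 12.5, 25 Hz) come from the abstract; the
# structure and names are illustrative assumptions.

def make_dispatcher(tasks_25hz, tasks_12_5hz, tasks_3_125hz):
    """Return a handler to be invoked on each 25 Hz timer interrupt."""
    tick = 0

    def on_timer_interrupt():
        nonlocal tick
        # Fastest rate group runs on every tick.
        for task in tasks_25hz:
            task()
        # 12.5 Hz group runs on every 2nd tick.
        if tick % 2 == 0:
            for task in tasks_12_5hz:
                task()
        # 3.125 Hz group runs on every 8th tick.
        if tick % 8 == 0:
            for task in tasks_3_125hz:
                task()
        tick += 1

    return on_timer_interrupt
```

    A real executive of this kind would additionally checkpoint task state and rebalance the rate groups across processor triads; the sketch only shows the tick/divisor bookkeeping.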

    On developing mathematical software

    This paper is primarily a list of things we have found useful in developing mathematical software.

    Spartan Daily, October 26, 1964

    Volume 52, Issue 24

    Tannins

    Tannins are one of the polyphenol groups found in plants and are mainly studied because of their structural properties and bioactive behavior. Every year new findings concerning their properties and functions are made, and today attention is mainly focused on how they can be used efficiently in the wood, food, textile, health, and pharmaceutical industries. Thus, the aim of this book is to present the most up-to-date information on the structural properties of tannins, their food sources and variations, their biological properties, and their effects on health, among other important issues. In addition, the most recent methods used for their isolation, quantification, and industrial application will also be covered.

    Elucidating the role of cathepsin B in the lifecycle of influenza A virus

    Influenza virus type A (IVA) is the etiologic agent responsible for the febrile respiratory illness referred to as the flu. Seasonal and occasionally pandemic IVA-associated illness is a significant cause of morbidity and mortality worldwide, and presents a significant burden to the healthcare system. Our previous work showed that the propagation of IVA requires the lysosomal protease cathepsin B (CTSB), though the mechanism behind this dependency was not elucidated. This study further examined the role of CTSB by using CTSB-deficient (CTSB-/-) macrophages and the CTSB-specific chemical inhibitor CA-074 Me (CaMe) in human lung epithelial cells. CTSB-/- and CA-074 Me-treated cells showed no defect in either the uptake of virus particles or their replication following endosomal escape, compared to wildtype or non-treated cells, respectively. However, CTSB-/- and CA-074 Me-treated cells had significantly less hemagglutinin (HA) protein both inside and on the surface of infected cells, as determined by both Western blotting and confocal immunofluorescence microscopy. These results suggest that CTSB is required for a step or steps in the viral lifecycle following entry into host cells, either before or during the synthesis of viral proteins, and possibly during the transport of viral components to the host membrane. Further work is necessary to determine the mechanistic details of these observations, and may yield novel potential therapies for influenza infections.

    Development of self-adaptive back propagation and derivative free training algorithms in artificial neural networks

    Three new iterative, dynamically self-adaptive, derivative-free and training-parameter-free artificial neural network (ANN) training algorithms are developed. They are defined as the self-adaptive back propagation, multi-directional, and restart ANN training algorithms. The descent direction in self-adaptive back propagation training is determined implicitly by a central difference approximation scheme, which chooses its step size according to the convergence behavior of the error function. This approach trains an ANN when the gradient information of the corresponding error function is not readily available. The self-adaptive variable learning rates per epoch are determined dynamically using a constrained interpolation search. As a result, appropriate descent of the error function is achieved. The multi-directional training algorithm is self-adaptive and derivative free. It orients an initial search vector in a descent location at the early stage of training. Individual learning rates and momentum terms for all the ANN weights are determined optimally. The search directions are derived from rectilinear and Euclidean paths, which explore stiff ridges and valleys of the error surface to improve training. The restart training algorithm is derivative free. It redefines a degenerated simplex at a re-scale phase. This multi-parameter training algorithm updates ANN weights simultaneously instead of individually. The descent directions are derived from the centroid of a simplex along a reflection point opposite to the worst vertex. The algorithm is robust and has the ability to improve local search. These ANN training methods are appropriate when there is a discontinuity in the corresponding ANN error function, or when the Hessian matrix is ill-conditioned or singular. The convergence properties of the algorithms are proved where possible. All the training algorithms successfully train exclusive OR (XOR), parity, character-recognition, and forecasting problems.
The simulation results with the XOR, parity, and character-recognition problems suggest that all the training algorithms improve significantly over the standard back propagation algorithm in average number of epochs, function evaluations, and terminal function values. The multivariate ANN calibration problem, as a regression model with a small data set, is relatively difficult to train. In forecasting problems, an ANN is trained to extrapolate the data in a validation period. The extrapolation results are compared with the actual data. The trained ANN performs better than the statistical regression method in mean absolute deviation, mean squared error, and relative percentage error. The restart training algorithm succeeds in training a problem where other training algorithms face difficulty. It is shown that a seasonal time series problem possesses a Hessian matrix with a high condition number; convergence difficulties as well as slow training are therefore not atypical. The research exploits the geometry of the error surface to identify self-adaptive optimized learning rates and momentum terms. Consequently, the algorithms converge with a high success rate. These attributes make the training algorithms self-adaptive, automatic, parameter free, efficient, and easy to use.
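    The core derivative-free idea in the abstract above — estimating a descent direction from a central difference approximation when the analytic gradient of the error function is unavailable — can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's self-adaptive algorithm: the step size h and learning rate are fixed here, whereas the described method adapts them to the convergence behavior, and all names are hypothetical.

```python
# Hedged sketch of derivative-free descent via central differences.
# The real algorithm adapts h and the learning rate per epoch; here
# they are fixed constants for clarity.

def central_difference_gradient(f, w, h=1e-5):
    """Approximate the gradient of f at w component-wise:
    g_i = (f(w + h*e_i) - f(w - h*e_i)) / (2h),
    using function values only (no analytic derivatives)."""
    g = []
    for i in range(len(w)):
        w_plus = list(w)
        w_plus[i] += h
        w_minus = list(w)
        w_minus[i] -= h
        g.append((f(w_plus) - f(w_minus)) / (2 * h))
    return g

def train_step(f, w, lr=0.1):
    """One descent step on error function f using the approximate gradient."""
    g = central_difference_gradient(f, w)
    return [wi - lr * gi for wi, gi in zip(w, g)]
```

    On a smooth error surface such as f(w) = Σ wᵢ², repeated `train_step` calls drive the weights toward the minimum without ever evaluating a derivative, which is the property the abstract's back propagation variant relies on.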