
    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in readout-stage performance, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that, compared to a single perceptron using analog weights, this readout architecture can attain, even using the same number of binary-valued synapses, up to 3.3 times lower error on a two-class spike train classification problem and 2.4 times lower error on an input rate approximation task. Even with 60 times as many synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be implemented easily by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations.
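
    The abstract does not spell out the learning rule, but the core idea of greedy network rewiring over binary synapses on sparsely connected dendritic branches can be sketched as follows. This is a minimal illustration under assumed names and sizes (readout, train_nrw, the branch counts, and the quadratic branch nonlinearity are all hypothetical choices), not the authors' exact NRW algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    N_LIQUID = 100     # liquid (reservoir) neurons feeding the readout
    N_BRANCHES = 10    # dendritic branches per readout neuron
    K_SYNAPSES = 5     # binary synapses per branch (K << N_LIQUID)

    def readout(x, conn):
        # conn: (N_BRANCHES, K_SYNAPSES) indices into the liquid output x.
        # Each branch sums its K binary-synapse inputs, then applies a
        # lumped (here quadratic) nonlinearity; the soma sums the branches.
        branch_sums = x[conn].sum(axis=1)
        return (branch_sums ** 2).sum()

    def train_nrw(X, y, steps=2000):
        # Greedy network rewiring: propose replacing one synapse's source
        # liquid neuron; keep the swap only if the squared error drops.
        conn = rng.integers(0, N_LIQUID, size=(N_BRANCHES, K_SYNAPSES))
        def err(c):
            pred = np.array([readout(x, c) for x in X])
            return ((pred - y) ** 2).mean()
        best = err(conn)
        for _ in range(steps):
            b, s = rng.integers(N_BRANCHES), rng.integers(K_SYNAPSES)
            trial = conn.copy()
            trial[b, s] = rng.integers(N_LIQUID)
            e = err(trial)
            if e < best:
                conn, best = trial, e
        return conn, best

    # Toy usage: fit random binary liquid states to a scalar target.
    X = rng.integers(0, 2, size=(50, N_LIQUID)).astype(float)
    y = X[:, :3].sum(axis=1)       # arbitrary target for illustration
    conn, mse = train_nrw(X, y)
    print("final MSE:", mse)
    ```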

    Effects of SO(10) D-Terms on SUSY Signals at the Tevatron

    We study signals for the production of superparticles at the Tevatron in supergravity scenarios based on the Grand Unified group SO(10). The breaking of this group introduces extra contributions to the masses of all scalars, described by a single new parameter. We find that varying this parameter can considerably change the size of various expected signals studied in the literature, with different numbers of jets and/or charged leptons in the final state. The ratios of these signals can thus serve as a diagnostic to detect or constrain deviations from the much-studied scenario where all scalar masses are universal at the GUT scale. Moreover, under favorable circumstances some of these signals, and/or new signals involving hard b-jets, should be observable at the next run of the Tevatron collider even if the average scalar mass lies well above the gluino mass.
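
    The abstract does not give the mass formula, but the standard SO(10) D-term pattern has the following structure. This is a sketch in one common convention; the overall sign and normalization of the D-term vary between papers.

    ```latex
    % Under SO(10) -> SU(5) x U(1)_X, the matter 16-plet decomposes as
    % 10_{-1} + \bar{5}_{+3} + 1_{-5}, and the D-term shifts each soft
    % scalar mass in proportion to its U(1)_X charge (convention-dependent):
    \begin{align}
      m^2_{10}     &= m_{16}^2 + m_D^2, \\
      m^2_{\bar 5} &= m_{16}^2 - 3\, m_D^2, \\
      m^2_{1}      &= m_{16}^2 + 5\, m_D^2,
    \end{align}
    % so a single new parameter m_D^2 (of either sign) splits the otherwise
    % universal scalar masses at the GUT scale.
    ```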

    Time Delay in Rectification of Faults in Software Projects

    Software reliability models, such as the Basic (i.e., Exponential) Model and the Logarithmic Poisson Model, make the idealizing assumption that when a failure occurs during a program run, the corresponding fault in the program code is corrected without any loss of time. In practice, it takes time to rectify a fault. This is perhaps one reason why, when the cumulative number of faults is computed using such a model and plotted against time, the fit with observed failure data is often not very close. In this paper, we show how the average delay to rectify a fault can be incorporated as a parameter in the Basic Model, changing the defining differential equation to a differential-difference equation. When this is solved, the time delay for which the fit with observed data is closest can be found. The delay need not be constant during the course of testing, but can change slowly with time, giving a yet closer fit. The pattern of variation of the delay with time during testing can be related both to the learning acquired by the testing team and to the difficulty level of the faults that remain to be discovered in the package. This is likely to prove useful to managers of software projects in the deployment of staff.
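
    As a concrete illustration of the modification, consider the Basic Model in its usual form; the delayed version below is a sketch of the idea described in the abstract, not necessarily the paper's exact formulation.

    ```latex
    % Basic (exponential) model with instantaneous correction:
    %   d\mu(t)/dt = \lambda_0 ( 1 - \mu(t)/\nu_0 ),
    % where \mu(t) is the expected number of faults corrected by time t,
    % \nu_0 the total fault content, and \lambda_0 the initial failure
    % intensity. With an average rectification delay \tau, corrections at
    % time t can only reflect failures observed up to t - \tau, giving a
    % differential-difference equation:
    \begin{equation}
      \frac{d\mu(t)}{dt}
        = \lambda_0 \left( 1 - \frac{\mu(t-\tau)}{\nu_0} \right),
      \qquad \mu(t) = 0 \quad \text{for } t \le \tau .
    \end{equation}
    ```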

    The Mechanics of Internet Diffusion in India: Lessons for Developing Countries

    The issue of Internet diffusion in an economy over time is of interest to several stakeholders, including policy makers, regulators, investors, and businesses. It is particularly important in developing countries, which see the Internet as a major driver in achieving social and developmental goals. Concerns about the so-called "digital divide" also lend some urgency to the issue. However, Internet diffusion is driven by social as well as technical factors, and developing countries have distinctive characteristics that make their adoption process different from that in industrialized countries. This paper develops a causal model of Internet diffusion in developing countries, using the system dynamics methodology. The modeling approach allows us to combine standard contagion mechanisms inherent in diffusion, such as innovators and imitators, with the distinctive regulatory, economic, and social circumstances of developing countries. The structure of the model is first justified using India as a specific developing-country context. Next, the simulated values generated by this structural model are compared against actual values for Internet adoption in India for the period 1996–2001, and the fit is found to be reasonably good. These initial findings support model validity. Using a technique called dominant loop analysis, the model suggests that, among all the different drivers, poor telecommunications infrastructure and high telephone charges are the major barriers to diffusion. In conclusion, we discuss the issues to be addressed in the remainder of this ongoing work.
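
    The contagion core of such a model can be sketched with a Bass-style innovator/imitator structure modulated by barrier factors. The sketch below is illustrative only: the parameter names and values (infrastructure, affordability, p, q, M) are hypothetical stand-ins for the regulatory and economic constraints the paper models in full system dynamics form.

    ```python
    import numpy as np

    def simulate_adoption(years=6, dt=0.05, M=5e6, p=0.003, q=0.4,
                          infrastructure=0.3, affordability=0.4):
        # Bass-style contagion: innovators (p) adopt independently,
        # imitators (q) adopt in proportion to the installed base.
        # 'infrastructure' and 'affordability' (both in [0, 1]) are
        # hypothetical multipliers standing in for the telecom-capacity
        # and telephone-charge barriers discussed in the paper.
        barrier = infrastructure * affordability
        A = 0.0                        # cumulative adopters
        trajectory = []
        for _ in range(int(years / dt)):
            hazard = (p + q * A / M) * barrier
            A += hazard * (M - A) * dt
            trajectory.append(A)
        return np.array(trajectory)

    # With weak infrastructure the curve flattens well below potential M.
    print(simulate_adoption()[-1])                   # constrained diffusion
    print(simulate_adoption(infrastructure=0.9,
                            affordability=0.9)[-1])  # fewer barriers
    ```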

    The Dynamics of Organizational Information Security

    In recent times, it has become evident that information security is not achieved through technology alone. Rather, it depends on a complex interplay among technology, organizational and managerial issues, and events in the external environment. Senior management attention, training, and sound operating procedures are just as important as firewalls and virtual private networks in arriving at a robust security posture. In this paper, we represent the interactions among these technical and organizational drivers using the system dynamics methodology, to develop a high-level model of organizational information security. Since the basic system dynamics construct is the feedback loop, our model is able to expose the counteracting mechanisms that work to reinforce and erode security, respectively. By doing so, it can inform the process of crafting an appropriate level of security, a problem facing most organizations. Since the model is based on simulation, it is also possible to test what-if scenarios of how the security posture of the organization would fare under different levels of external threats and management policies.
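
    A toy stock-and-flow loop structure of the kind the abstract describes can be simulated in a few lines. All names and rates below are hypothetical, not the paper's model: incidents draw management attention, attention rebuilds the security stock (a balancing loop), and complacency erodes it over time.

    ```python
    def simulate_security(months=48, dt=1.0, threat=0.05,
                          attention_gain=0.8, decay=0.04):
        # security: the organization's security posture, kept in [0, 1].
        security = 0.5
        history = []
        for _ in range(int(months / dt)):
            incidents = threat * (1.0 - security)    # weaker posture -> more incidents
            investment = attention_gain * incidents  # incidents trigger spending
            erosion = decay * security               # complacency / drift
            security += (investment - erosion) * dt
            security = min(1.0, max(0.0, security))
            history.append(security)
        return history

    # What-if: doubling the external threat shifts the equilibrium posture.
    print(simulate_security(threat=0.05)[-1])
    print(simulate_security(threat=0.10)[-1])
    ```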

    The Development, Testing, and Release of Software Systems in the Internet Age: A Generalized Analytical Model

    A major issue in the production of software by a software company is the estimation of the total expenditure likely to be incurred in developing, testing, and debugging a new package or product. If the development cost and development schedule are assumed known, then the major cost factors are the testing cost, the risk cost for the errors that remain in the software at the end of testing, and the opportunity cost. The control parameters are the times at which testing begins and ends, and the time at which the package is released in the market (or the product is supplied to the customer). By adjusting the values of these parameters, the total expenditure can be minimized. Internet technology makes it possible to provide software patches, and this encourages early release. Here we examine the major cost factors and derive a canonical expression for the minimum total expenditure. We show analytically that when the minimum is achieved, (1) testing will continue beyond the time of release, and (2) the number of software errors in the package when testing ends will be a constant (i.e., the package will have a guaranteed reliability). We apply the model to a few special scenarios of interest and derive their properties. It is shown that incorporating, as a separate item, the cost incurred to fix the errors discovered during testing has only a marginal effect on the canonical expression derived earlier.
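
    An illustrative decomposition of the objective being minimized is shown below. This is a sketch of the structure described in the abstract under assumed symbols, not the paper's exact canonical expression.

    ```latex
    % Total expenditure as a function of the three control times:
    \begin{equation}
      E(t_b, t_e, t_r)
        = \underbrace{c_T\,(t_e - t_b)}_{\text{testing cost}}
        + \underbrace{c_R\,N(t_e)}_{\text{risk cost of residual errors}}
        + \underbrace{c_O\,(t_r - t_0)}_{\text{opportunity cost of late release}},
    \end{equation}
    % where t_b and t_e are the start and end of testing, t_r the release
    % time, and N(t) the expected number of errors remaining at time t.
    % Minimizing E over the three control times yields the properties
    % quoted in the abstract: t_e > t_r (testing continues past release)
    % and N(t_e) = const (guaranteed reliability at the end of testing).
    ```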

    Testing Gauge-Gravitino Coupling in Gauge-Mediated Supersymmetry Breaking Through Single Photon Events

    We show that the process e+e- → γ + missing energy, arising from the pair production of neutralinos, can probe the photon-photino-gravitino as well as the Z-zino-gravitino couplings in Gauge Mediated Supersymmetry Breaking models. This enables one to study the mutual relationship of the Goldstino couplings of the different gauginos, a feature whose testability has not been emphasized so far. The Standard Model backgrounds get suppressed with the use of a right-polarized electron beam. The energy and angular distribution of the emitted photon can distinguish such models from the minimal supersymmetric theory and its variants.

    LHC Signature of the Minimal SUGRA Model with a Large Soft Scalar Mass

    Thanks to the focus point phenomenon, it is quite natural for the minimal SUGRA model to have a large soft scalar mass m_0 > 1 TeV. A distinctive feature of this model is an inverted hierarchy, where the lighter stop has a significantly smaller mass than the other squarks and sleptons. Consequently, the gluino is predicted to decay dominantly via stop exchange into a channel containing 2b and 2W along with the LSP. We exploit this feature to construct a robust signature for this model at the LHC in leptonic channels with 3-4 b-tags and large missing E_T.
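
    The decay topology behind the signature can be written out as a sketch; the intermediate states depend on the spectrum and the stop may be on- or off-shell, so take this as illustrative rather than the paper's exact chain.

    ```latex
    % Gluino decay via (possibly virtual) lighter stop:
    \begin{equation}
      \tilde g \;\to\; t\,\bar t\,\tilde\chi_1^0
      \quad \text{via } \tilde t_1,
      \qquad t \to b\,W^+, \quad \bar t \to \bar b\,W^-,
    \end{equation}
    % so each gluino yields 2 b-quarks and 2 W's along with the LSP, and
    % gluino pair production gives the 3-4 b-tag, leptons-plus-missing-E_T
    % final states used in the analysis.
    ```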