
    Examining the Nonlinear Effects in Satisfaction-Loyalty-Behavioral Intentions Model

    Extant research has widely investigated linear functional forms in satisfaction and loyalty models. Although several researchers have suggested that the satisfaction-loyalty link is complex and nonlinear, few attempts have been made to examine this nonlinearity empirically. Moreover, researchers have used divergent functional forms to model nonlinearity, and their findings are often inconclusive. In this study we use a nonlinear functional form to describe the relationships between satisfaction, attitudinal loyalty, purchase loyalty, and customer behavioral intentions such as willingness to pay more and external and internal complaining responses, in the context of business-to-consumer e-commerce. We find modest empirical support for nonlinear effects: nonlinearity is supported only for the link between attitudinal loyalty and internal complaining responses. The results also provide evidence of the mediating role of attitudinal loyalty in the relationships among satisfaction, purchase loyalty, willingness to pay more, and internal complaining responses.
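
    A minimal sketch of how such a nonlinearity test is commonly set up (this is not the authors' model; the variable names, the quadratic functional form, and the simulated data are illustrative assumptions): fit a linear and a quadratic regression of loyalty on satisfaction and check whether the added squared term is significant.

```python
# Hedged sketch: compare a linear satisfaction -> loyalty regression against
# one with a quadratic term; a significant squared coefficient (and improved
# fit) is taken as evidence of nonlinearity. All names/data are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
satisfaction = rng.uniform(1, 7, size=500)                 # e.g. 7-point scale
loyalty = 0.4 * satisfaction + 0.08 * satisfaction**2 + rng.normal(0, 0.5, 500)

X_lin = sm.add_constant(satisfaction)                      # intercept + linear term
X_quad = sm.add_constant(np.column_stack([satisfaction, satisfaction**2]))

fit_lin = sm.OLS(loyalty, X_lin).fit()
fit_quad = sm.OLS(loyalty, X_quad).fit()

# p-value of the quadratic term and the change in R^2 between the two fits
print(fit_quad.pvalues[-1], fit_lin.rsquared, fit_quad.rsquared)
```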

    Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks

    Deep neural networks (DNNs) have become a widely deployed model for numerous machine learning applications. However, their fixed architecture, substantial training cost, and significant model redundancy make it difficult to efficiently update them to accommodate previously unseen data. To solve these problems, we propose an incremental learning framework based on a grow-and-prune neural network synthesis paradigm. When new data arrive, the network first grows new connections based on gradient information to increase its capacity to accommodate the new data. The framework then iteratively prunes away connections based on weight magnitude to enhance network compactness, and hence recover efficiency. Finally, the model settles into a lightweight DNN that is both ready for inference and suitable for future grow-and-prune updates. The proposed framework improves accuracy, shrinks network size, and significantly reduces the additional training cost for incoming data compared with conventional approaches such as training from scratch and network fine-tuning. For the LeNet-300-100 and LeNet-5 architectures derived for the MNIST dataset, the framework reduces training cost by up to 64% (63%) and 67% (63%) relative to training from scratch (network fine-tuning), respectively. For the ResNet-18 architecture derived for the ImageNet dataset and DeepSpeech2 for the AN4 dataset, the corresponding training cost reductions against training from scratch (network fine-tuning) are 64% (60%) and 67% (62%), respectively. Our derived models contain fewer network parameters but achieve higher accuracy than conventional baselines.
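
    A minimal sketch of the grow-and-prune idea described above (not the authors' released code): connections live behind a binary mask; growth activates the masked-out weights with the largest gradient magnitude on the new data, and pruning removes the smallest-magnitude active weights. The single-layer setup, fractions, and data are illustrative assumptions.

```python
# Hedged sketch of gradient-based connection growth and magnitude-based pruning.
import torch
import torch.nn as nn

layer = nn.Linear(784, 300)
mask = (torch.rand_like(layer.weight) < 0.5).float()       # start half-connected

def grow(layer, mask, x, y, grow_frac=0.05):
    """Activate inactive connections with the largest gradient magnitude."""
    loss = nn.functional.cross_entropy(layer(x), y)
    grad = torch.autograd.grad(loss, layer.weight)[0].abs() * (1 - mask)
    k = int(grow_frac * mask.numel())
    idx = torch.topk(grad.view(-1), k).indices
    mask.view(-1)[idx] = 1.0
    return mask

def prune(layer, mask, prune_frac=0.05):
    """Remove the smallest-magnitude active connections."""
    w = layer.weight.detach().abs().view(-1)
    active = mask.view(-1).nonzero(as_tuple=True)[0]
    k = int(prune_frac * active.numel())
    idx = active[torch.topk(w[active], k, largest=False).indices]
    mask.view(-1)[idx] = 0.0
    return mask

# One incremental update round on a new batch: grow, fine-tune with the mask
# applied (e.g. layer.weight.data *= mask after each optimizer step), then prune.
x_new, y_new = torch.randn(64, 784), torch.randint(0, 10, (64,))
mask = grow(layer, mask, x_new, y_new)
mask = prune(layer, mask)
```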

    SCANN: Synthesis of Compact and Accurate Neural Networks

    Deep neural networks (DNNs) have become the driving force behind recent artificial intelligence (AI) research. An important problem in implementing a neural network is the design of its architecture. Typically, such an architecture is obtained manually by exploring its hyperparameter space and is kept fixed during training. This approach is time-consuming and inefficient. Another issue is that modern neural networks often contain millions of parameters, whereas many applications and devices require small inference models. However, efforts to migrate DNNs to such devices typically entail a significant loss of classification accuracy. To address these challenges, we propose a two-step neural network synthesis methodology, called DR+SCANN, that combines two complementary approaches to design compact and accurate DNNs. At the core of our framework is the SCANN methodology, which uses three basic architecture-changing operations, namely connection growth, neuron growth, and connection pruning, to synthesize feed-forward architectures with arbitrary structure. SCANN encapsulates three synthesis methodologies that apply a repeated grow-and-prune paradigm to three architectural starting points. DR+SCANN combines the SCANN methodology with dataset dimensionality reduction to alleviate the curse of dimensionality. We demonstrate the efficacy of SCANN and DR+SCANN on image and non-image datasets: SCANN is evaluated on the MNIST and ImageNet benchmarks, and DR+SCANN on nine small-to-medium-size datasets. We also show that our synthesis methodology yields neural networks that are much better at navigating the accuracy vs. energy efficiency space. This would enable neural network-based inference even on Internet-of-Things sensors.
    Comment: 13 pages, 8 figures
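
    A minimal sketch of the DR+SCANN idea of pairing dataset dimensionality reduction with a compact feed-forward classifier (not the paper's implementation): the input is first projected into a low-dimensional space, so the downstream network needs far fewer parameters. The choice of PCA, the digits dataset, and the layer sizes are illustrative assumptions.

```python
# Hedged sketch: dimensionality reduction in front of a small feed-forward net.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                        # 64-dim stand-in dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=16).fit(X_tr)                        # dimensionality reduction
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)                          # compact downstream model

print("test accuracy:", clf.score(pca.transform(X_te), y_te))
```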

    Heat capacity and magnetoresistance in Dy(Co,Si)2 compounds

    Magnetocaloric effect and magnetoresistance have been studied in Dy(Co1-xSix)2 [x = 0, 0.075 and 0.15] compounds. The magnetocaloric effect has been calculated in terms of the adiabatic temperature change (ΔT_ad) as well as the isothermal magnetic entropy change (ΔS_M) using heat capacity data. The maximum values of ΔS_M and ΔT_ad for DyCo2 are found to be 11.4 J kg^-1 K^-1 and 5.4 K, respectively. Both ΔS_M and ΔT_ad decrease with Si concentration, reaching values of 5.4 J kg^-1 K^-1 and 3 K, respectively, for x = 0.15. The maximum magnetoresistance is found to be about 32% in DyCo2, and it decreases with increasing Si content. These variations are explained on the basis of the itinerant electron metamagnetism occurring in these compounds.
    Comment: Total 8 pages of text and figures
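
    For reference, the standard relations typically used to extract both magnetocaloric quantities from heat-capacity data C(T, H) are given below; the abstract does not specify the authors' exact numerical procedure, so this is the textbook form rather than their implementation.

```latex
\begin{align}
  S(T,H) &= \int_0^{T} \frac{C(T',H)}{T'}\,\mathrm{d}T' \\
  \Delta S_M(T,H) &= S(T,H) - S(T,0) \\
  \Delta T_{\mathrm{ad}}(T,H) &= T(S,H) - T(S,0) \quad \text{(evaluated at constant entropy } S\text{)}
\end{align}
```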