    Linking business analytics to decision making effectiveness: a path model analysis

    While business analytics is increasingly used to gain data-driven insights that support decision making, little research exists on the mechanism through which business analytics improves decision-making effectiveness (DME) at the organizational level. Drawing on the information processing view and contingency theory, this paper develops a research model linking business analytics to organizational DME. The model is tested using structural equation modeling on 740 responses collected from U.K. businesses. The key findings demonstrate that business analytics, mediated by a data-driven environment, positively influences information processing capability, which in turn has a positive effect on DME. The findings also show that the paths from business analytics to DME do not differ statistically between large and medium-sized companies, but do differ in some respects between manufacturing and professional service industries. Our findings contribute to the business analytics literature by providing useful insights into business analytics applications and the facilitation of data-driven decision making. They also contribute to managers' knowledge and understanding by demonstrating how business analytics should be implemented to improve DME. A minimal illustration of how such a mediation path model can be specified and fitted follows below.
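    The sketch below is a hypothetical illustration, not the authors' actual model, data, or variable names: it shows how a mediation chain of the kind described (business analytics -> data-driven environment -> information processing capability -> DME) could be specified in lavaan-style syntax and estimated with the Python semopy package. The construct labels BA, DDE, IPC, DME and the file survey_responses.csv are assumptions for the example.

```python
# Hypothetical path/mediation model in the spirit of the paper, fitted with
# semopy (assumed: a DataFrame with columns BA, DDE, IPC, DME scored from
# survey items; in the paper these are latent constructs tested via SEM).
import pandas as pd
import semopy

# Structural part: BA -> data-driven environment -> info processing -> DME.
desc = """
DDE ~ BA
IPC ~ DDE
DME ~ IPC
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file of 740 responses
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```

    Group differences such as large vs. medium-sized firms would typically be tested with a multi-group analysis, fitting the same model per subsample and comparing path estimates.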

    Neural Speech Synthesis with Transformer Network

    Although end-to-end neural text-to-speech (TTS) models such as Tacotron2 have been proposed and achieve state-of-the-art performance, they still suffer from two problems: 1) low efficiency during training and inference; 2) difficulty modeling long-range dependencies with current recurrent neural networks (RNNs). Inspired by the success of the Transformer network in neural machine translation (NMT), in this paper we introduce and adapt the multi-head attention mechanism to replace both the RNN structures and the original attention mechanism in Tacotron2. With multi-head self-attention, the hidden states in the encoder and decoder are constructed in parallel, which improves training efficiency. Meanwhile, any two inputs at different time steps are connected directly by the self-attention mechanism, which effectively solves the long-range dependency problem. Using phoneme sequences as input, our Transformer TTS network generates mel spectrograms, followed by a WaveNet vocoder that outputs the final audio. Experiments are conducted to test the efficiency and performance of the new network. For efficiency, our Transformer TTS network speeds up training by about 4.25x compared with Tacotron2. For performance, rigorous human tests show that our proposed model achieves state-of-the-art results (outperforming Tacotron2 by a gap of 0.048 in MOS) and is very close to human quality (4.39 vs. 4.44 in MOS). A small sketch of the core architectural substitution follows below.
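    The following is a minimal PyTorch sketch of the key idea, not the paper's full Transformer TTS architecture: multi-head self-attention computes all encoder hidden states in parallel and connects any two time steps in a single attention step, whereas an RNN must process the sequence position by position. All dimensions and module choices here are illustrative assumptions.

```python
# Toy comparison of a recurrent encoder vs. a self-attention encoder over an
# embedded phoneme sequence (assumed shapes; not the paper's configuration).
import torch
import torch.nn as nn

batch, seq_len, d_model, n_heads = 2, 50, 256, 4
x = torch.randn(batch, seq_len, d_model)  # embedded phoneme sequence

# Tacotron2-style RNN encoder: hidden states depend on the previous step,
# so they must be computed sequentially along the time axis.
rnn = nn.LSTM(d_model, d_model, batch_first=True)
rnn_out, _ = rnn(x)

# Multi-head self-attention: every position attends to every other position
# at once, so hidden states for all time steps are built in parallel and any
# two inputs are directly connected.
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
attn_out, attn_weights = attn(x, x, x)  # query = key = value -> self-attention

print(rnn_out.shape, attn_out.shape)  # both: torch.Size([2, 50, 256])
```

    In the full model, stacks of such attention layers (with positional encodings and feed-forward sublayers) replace the recurrent encoder and decoder, and the decoder's mel-spectrogram output is passed to a WaveNet vocoder to synthesize the waveform.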