
    A Spectral Method that Worked Well in the SPiCe'16 Competition

    We present the methods used in our submission to the Sequence Prediction ChallengE (SPiCe'16). The two methods used to solve the competition tasks were spectral learning and a count-based method. Spectral learning led to better results on most of the problems.
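    The abstract does not detail the count-based method, but a minimal sketch of the general idea, assuming a simple fixed-order context model (the function names and the `order` parameter are illustrative, not from the submission), could look like this:

    ```python
    from collections import defaultdict

    def train_counts(sequences, order=2):
        """Count how often each symbol follows each length-`order` context."""
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            padded = ("<s>",) * order + tuple(seq)  # pad so early symbols have a context
            for i in range(order, len(padded)):
                context = padded[i - order:i]
                counts[context][padded[i]] += 1
        return counts

    def rank_next_symbols(counts, prefix, order=2):
        """Rank candidate next symbols by count, given the observed prefix."""
        padded = ("<s>",) * order + tuple(prefix)
        context = tuple(padded[-order:])
        cands = counts.get(context, {})
        return sorted(cands, key=cands.get, reverse=True)
    ```

    SPiCe tasks were scored on ranked next-symbol predictions, which is why the sketch returns a ranking rather than a single symbol.
    
    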

    Predicting Sequential Data with LSTMs Augmented with Strictly 2-Piecewise Input Vectors

    Recurrent neural networks such as Long Short-Term Memory (LSTM) networks are often used to learn from various kinds of time-series data, especially those that involve long-distance dependencies. We introduce a vector representation for the Strictly 2-Piecewise (SP-2) formal languages, which encode certain kinds of long-distance dependencies using subsequences. These vectors are added to the LSTM architecture as an additional input. Through experiments with the problems in the SPiCe dataset…
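    The abstract does not give the exact encoding, but a plausible sketch, assuming the SP-2 vector is a binary indicator over ordered symbol pairs marking which length-2 subsequences have appeared in the prefix so far (the function name and this encoding choice are assumptions), is:

    ```python
    from itertools import product

    def sp2_vector(prefix, alphabet):
        """One binary entry per ordered pair (a, b) over the alphabet:
        1 if 'a' occurs somewhere before 'b' in the prefix (i.e. 'ab' is a
        subsequence of the prefix), else 0."""
        seen = set()    # symbols observed so far
        pairs = set()   # length-2 subsequences observed so far
        for sym in prefix:
            for earlier in seen:
                pairs.add((earlier, sym))
            seen.add(sym)
        return [1 if (a, b) in pairs else 0 for a, b in product(alphabet, repeat=2)]
    ```

    At each time step, such a vector could be concatenated to the ordinary symbol embedding as the additional LSTM input the abstract describes.
    
    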

    Relating RNN layers with the spectral WFA ranks in sequence modelling

    We analyse Recurrent Neural Networks (RNNs) to understand the significance of multiple LSTM layers. We argue that Weighted Finite-state Automata (WFA) trained using a spectral learning algorithm are helpful for analysing RNNs. Our results suggest that multiple LSTM layers help in learning distributed hidden states but have a smaller impact on the ability to learn long-term dependencies. The analysis is based on empirical results; however, relevant theory is discussed wherever possible to justify and support our conclusions.
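    The spectral learning algorithm referred to here factors a Hankel matrix of sequence values via a truncated SVD and recovers the WFA parameters by linear algebra. A minimal sketch of that standard recipe (not this paper's specific setup; the variable names and basis choice are illustrative) is:

    ```python
    import numpy as np

    def spectral_wfa(hankel, h_prefix, h_suffix, hankel_sigma, rank):
        """Spectral learning sketch: factor H ≈ P @ S with a truncated SVD,
        then recover WFA parameters by pseudo-inversion.

        hankel       : H[u, v]  = f(uv), prefixes × suffixes
        h_prefix     : h_P[u]   = f(u)   (column of H for the empty suffix)
        h_suffix     : h_S[v]   = f(v)   (row of H for the empty prefix)
        hankel_sigma : H_a[u,v] = f(u a v), one matrix per symbol a
        """
        U, s, Vt = np.linalg.svd(hankel, full_matrices=False)
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
        P = U * s                        # prefix factor, so H ≈ P @ Vt
        P_pinv = np.linalg.pinv(P)
        V_pinv = np.linalg.pinv(Vt)
        alpha = h_suffix @ V_pinv        # initial weight vector
        beta = P_pinv @ h_prefix         # final weight vector
        A = {a: P_pinv @ Ha @ V_pinv for a, Ha in hankel_sigma.items()}
        return alpha, A, beta

    def wfa_value(alpha, A, beta, word):
        """Evaluate the learned WFA: f(w) = alpha · A[w1] · ... · A[wn] · beta."""
        v = alpha
        for sym in word:
            v = v @ A[sym]
        return float(v @ beta)
    ```

    The `rank` of the truncation plays a role loosely analogous to the hidden-state size in the RNN comparison the abstract draws.
    
    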

    NICE 2023 Zero-shot Image Captioning Challenge

    In this report, we introduce the NICE project (https://nice.lgresearch.ai/) and share the results and outcomes of the NICE 2023 challenge. This project is designed to challenge the computer vision community to develop robust image captioning models that advance the state of the art in terms of both accuracy and fairness. Through the challenge, image captioning models were tested using a new evaluation dataset that includes a large variety of visual concepts from many domains. No specific training data was provided for the challenge, so entries had to adapt to new types of image descriptions that had not been seen during training. This report includes information on the newly proposed NICE dataset, evaluation methods, challenge results, and technical details of the top-ranking entries. We expect that the outcomes of the challenge will contribute to the improvement of AI models on various vision-language tasks.
    Comment: Tech report; project page: https://nice.lgresearch.ai