200 research outputs found

    An Algorithm for Computing the Ratliff-Rush Closure

    Let I\subset K[x,y] be an \mathfrak{m}-primary monomial ideal, where K is a field. This paper produces an algorithm for computing the Ratliff-Rush closure \widetilde{I} of the ideal I=(m_1,\dots,m_k) whenever each m_{i} is contained in the integral closure of the ideal generated by the remaining generators. This generalizes the work of Crispin \cite{Cri}. It also provides generalizations of, and answers to, some questions posed in \cite{HJLS}, and enables us to construct infinite families of Ratliff-Rush ideals.
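    As background (a standard definition, not stated in the abstract itself), the Ratliff-Rush closure being computed is

```latex
\widetilde{I} \;=\; \bigcup_{k \ge 1} \left( I^{k+1} : I^{k} \right),
```

    an ideal satisfying I \subseteq \widetilde{I} \subseteq \overline{I} for regular ideals; I is called a Ratliff-Rush ideal when \widetilde{I} = I.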

    Reduced Gr\"obner Bases of Certain Toric Varieties: A New Short Proof

    Let K be a field and let m_0,\dots,m_{n} be an almost arithmetic sequence of positive integers. Let C be the toric variety in affine (n+1)-space defined parametrically by x_0=t^{m_0},\dots,x_{n}=t^{m_{n}}. In this paper we produce a minimal Gr\"obner basis for the toric ideal defining C and give necessary and sufficient conditions for this basis to be the reduced Gr\"obner basis of C, correcting previous work of \cite{Sen} and giving a much simpler proof than that of \cite{Ayy}.
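    For context, the defining toric ideal of such a monomial curve is the kernel of the parametrization map, a binomial prime ideal; this characterization is standard and not specific to the paper:

```latex
I(C) \;=\; \ker\bigl( K[x_0,\dots,x_n] \to K[t], \;\; x_i \mapsto t^{m_i} \bigr)
     \;=\; \Bigl( x^{a} - x^{b} \;:\; a,b \in \mathbb{N}^{n+1},\ \sum_i a_i m_i = \sum_i b_i m_i \Bigr).
```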

    Normality of Monomial Ideals

    Let I=(x_1^{\alpha_1},\dots,x_{n}^{\alpha_{n}})\subset K[x_1,\dots,x_{n}] be a monomial ideal, where the \alpha_{i} are positive integers and K is a field, and let J be the integral closure of I. It is a challenging problem to translate the question of the normality of J into a question about the exponent set \Gamma(J) and the Newton polyhedron NP(J). A relaxed version of this problem is to give necessary or sufficient conditions on \alpha_1,\dots,\alpha_{n} for the normality of J. We show that if \alpha_{i}\in\{s,l\} for all i, where s and l are arbitrary positive integers, then J is normal.
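    As a reminder of the standard notions involved (again, background rather than the paper's results): the integral closure of a monomial ideal is itself monomial and is determined by the Newton polyhedron,

```latex
\overline{I} \;=\; \bigl( x^{a} \;:\; a \in NP(I) \cap \mathbb{Z}^{n} \bigr),
\qquad NP(I) \;=\; \operatorname{conv}\,\Gamma(I) + \mathbb{R}^{n}_{\ge 0},
```

    and J is normal precisely when \overline{J^{k}} = J^{k} for every k \ge 1.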

    Benchmarking open source deep learning frameworks

    Deep Learning (DL) is one of the hottest fields in machine learning. To foster its growth, several open-source frameworks have appeared, providing implementations of the most common DL algorithms. These frameworks vary in the algorithms they support and in the quality of their implementations. The purpose of this work is to provide a qualitative and quantitative comparison among three such frameworks: TensorFlow, Theano, and CNTK. To make our study as comprehensive as possible, we consider multiple benchmark datasets from different fields (image processing, NLP, etc.) and measure the performance of the frameworks' implementations of different DL algorithms. For most of our experiments, we find that CNTK's implementations are superior to those of the other two frameworks.
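    The timing side of such a comparison can be sketched in plain Python. The harness below is purely illustrative: the function names, the repeat count, and the stand-in workloads are assumptions, not the paper's benchmarking code.

```python
import time

def benchmark(fn, *args, repeats=5):
    """Run fn(*args) several times and report the best wall-clock time.

    Taking the minimum over repeats reduces noise from background load,
    a common convention when timing competing implementations.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return min(timings)

# Two stand-in "framework implementations" of the same task (hypothetical).
def impl_a(n):
    return sum(i * i for i in range(n))

def impl_b(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

t_a = benchmark(impl_a, 100_000)
t_b = benchmark(impl_b, 100_000)
print(f"impl_a: {t_a:.4f}s  impl_b: {t_b:.4f}s")
```

    In a real study the stand-ins would be each framework's training or inference loop on the same dataset, and one would also fix seeds and hardware so the comparison is apples-to-apples.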

    Atar: Attention-based LSTM for Arabizi transliteration

    A non-standard romanization of Arabic script, known as Arabizi, is widely used in Arabic online and SMS/chat communities. However, since state-of-the-art tools and applications for Arabic NLP expect Arabic to be written in Arabic script, handling content written in Arabizi requires special attention, either by building customized tools or by transliterating it into Arabic script. The latter approach is the more common one, and this work presents two significant contributions in this direction. The first is collecting and publicly releasing the first large-scale “Arabizi to Arabic script” parallel corpus, focusing on the Jordanian dialect and consisting of more than 25k pairs carefully created and inspected by native speakers to ensure the highest quality. Second, we present Atar, an attention-based encoder-decoder model for Arabizi transliteration. Training and testing this model on our dataset yields an impressive accuracy (79%) and BLEU score (88.49).
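    To make the task concrete, here is a toy rule-based Arabizi-to-Arabic baseline. It is a sketch only: Atar is a learned attention-based encoder-decoder, not a lookup table, and while the digit conventions (3 for ain, 7 for haa, etc.) are common Arabizi usage, the letter mappings and the greedy longest-match strategy are simplifying assumptions of this example.

```python
# Toy Arabizi-to-Arabic mapping (illustrative subset, not the paper's data).
ARABIZI_MAP = {
    "2": "ء", "3": "ع", "5": "خ", "7": "ح", "9": "ق",
    "sh": "ش", "kh": "خ", "gh": "غ",
    "b": "ب", "t": "ت", "s": "س", "m": "م", "n": "ن",
    "l": "ل", "r": "ر", "k": "ك", "d": "د", "h": "ه",
    "a": "ا", "i": "ي", "y": "ي", "o": "و", "u": "و", "w": "و",
}

def transliterate(text: str) -> str:
    """Greedy longest-match lookup; unknown characters pass through."""
    out, i = [], 0
    while i < len(text):
        # Try two-character digraphs (sh, kh, gh) before single characters.
        for size in (2, 1):
            chunk = text[i:i + size]
            if chunk in ARABIZI_MAP:
                out.append(ARABIZI_MAP[chunk])
                i += size
                break
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(transliterate("7ob"))  # maps 7, o, b symbol by symbol -> حوب
```

    Such rule-based baselines break down on ambiguous letters and omitted vowels, which is exactly why the paper trains a sequence-to-sequence model instead.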

    Neural Arabic Text Diacritization: State of the Art Results and a Novel Approach for Machine Translation

    In this work, we present several deep learning models for the automatic diacritization of Arabic text. Our models are built using two main approaches, viz. the Feed-Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), with several enhancements such as 100-hot encoding, embeddings, Conditional Random Fields (CRF), and Block-Normalized Gradient (BNG). The models are tested on the only freely available benchmark dataset, and the results show that our models are either better than or on par with other models, which, unlike ours, require language-dependent post-processing steps. Moreover, we show that diacritics in Arabic can be used to enhance models for NLP tasks such as Machine Translation (MT), via the proposed Translation over Diacritization (ToD) approach.
    Comment: 18 pages, 17 figures, 14 tables
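    Diacritization is naturally cast as per-character sequence labeling, which is how FFNN/RNN models of this kind are typically framed: each input letter receives a predicted diacritic label that is interleaved back into the text. The sketch below shows only that framing; the hand-supplied labels stand in for a trained model's predictions and are not the paper's method.

```python
# Common Arabic diacritic code points (Unicode Arabic block).
FATHA, DAMMA, KASRA, SUKUN = "\u064E", "\u064F", "\u0650", "\u0652"

def diacritize(text: str, labels: list) -> str:
    """Interleave one predicted diacritic label per input character.

    An empty-string label means "no diacritic" for that character.
    """
    assert len(text) == len(labels)
    return "".join(ch + d for ch, d in zip(text, labels))

# A trained model would predict `labels`; here they are supplied by hand.
word = "كتب"                       # the consonant skeleton k-t-b
labels = [FATHA, FATHA, FATHA]     # -> كَتَبَ ("kataba", he wrote)
print(diacritize(word, labels))
```

    The modeling work then lies entirely in predicting the label sequence, which is where the FFNN/RNN architectures, embeddings, and CRF output layers mentioned above come in.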