1,562 research outputs found

    Representations of the Quantum Algebra su_q(1,1) and Discrete q-Ultraspherical Polynomials

    We derive orthogonality relations for discrete q-ultraspherical polynomials and their duals by means of operators of representations of the quantum algebra su_q(1,1). Spectra and eigenfunctions of these operators are found explicitly. These eigenfunctions, when normalized, form an orthonormal basis in the representation space.
    Comment: Published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/journals/SIGMA
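    For orientation, su_q(1,1) is typically presented by generators K_0, K_+, K_- subject to q-deformed commutation relations. A minimal sketch in one common convention (an assumption, since normalizations vary across the literature):

        [K_0, K_\pm] = \pm K_\pm, \qquad [K_-, K_+] = [2K_0]_q,
        \text{where } [x]_q = \frac{q^x - q^{-x}}{q - q^{-1}},

    with the real form su_q(1,1) singled out by the *-structure K_0^* = K_0, K_+^* = K_-.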

    Degenerate Series Representations of the q-Deformed Algebra so'_q(r,s)

    The q-deformed algebra so'_q(r,s) is a real form of the q-deformed algebra U'_q(so(n,C)), n = r+s, which differs from the quantum algebra U_q(so(n,C)) of Drinfeld and Jimbo. We study representations of the most degenerate series of the algebra so'_q(r,s). Formulas are given for the action of the operators of these representations on the basis corresponding to the restriction of representations onto the subalgebra so'_q(r) × so'_q(s). Most of these representations are irreducible; reducible representations appear under certain conditions on the parameters determining the representations. All irreducible constituents appearing in reducible representations of the degenerate series are found, and all *-representations of so'_q(r,s) are singled out within the set of irreducible representations obtained in the paper.
    Comment: Published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/journals/SIGMA
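    For context, the nonstandard deformation U'_q(so(n,C)) is usually presented by generators I_{21}, I_{32}, ..., I_{n,n-1} obeying Serre-type trilinear relations rather than Drinfeld-Jimbo ones. A sketch in the convention of Gavrilik and Klimyk (an assumption about the exact normalization used here):

        I_{i,i-1} I_{i+1,i}^2 - (q + q^{-1})\, I_{i+1,i} I_{i,i-1} I_{i+1,i} + I_{i+1,i}^2 I_{i,i-1} = -I_{i,i-1},
        I_{i+1,i} I_{i,i-1}^2 - (q + q^{-1})\, I_{i,i-1} I_{i+1,i} I_{i,i-1} + I_{i,i-1}^2 I_{i+1,i} = -I_{i+1,i},
        [I_{i+1,i}, I_{j+1,j}] = 0 \quad \text{for } |i - j| > 1.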

    WILL THE INTERNATIONAL FINANCIAL REPORTING STANDARD (IFRS) SUPPORT THE MANAGEMENT ACCOUNTING SYSTEM FOR SMALL AND MEDIUM ENTERPRISES (SME)?

    The problem of reporting financial data useful to readers across countries and languages is receiving considerable attention with the implementation of the new financial reporting standards in the United States, Canada, Australia, Europe and Japan. The theoretical model of the new standard forms that would be produced in a particular country, especially for public and global companies, will expedite the search for and analysis of the usefulness of this reporting. The characteristic formulation of IFRS is implemented to obtain a common language for reporting financial data, capable of being interpreted by readers with the same meaning. There are many interferences, convergences and divergences between accounting and financial reporting that still need to be resolved for SMEs. Using a comparative method between management accounting in two countries, Canada and Romania, it will be possible to show how IFRS can resolve some of those differences.
    Keywords: IFRS, Management Accounting, SWOT

    Measuring reasoning capabilities of ChatGPT

    I quantify the logical faults generated by ChatGPT when it is applied to reasoning tasks. For the experiments, I use the 144 puzzles from the library \url{https://users.utcluj.ro/~agroza/puzzles/maloga}~\cite{groza:fol}. The library contains puzzles of various types, including arithmetic puzzles, logical equations, Sudoku-like puzzles, zebra-like puzzles, truth-telling puzzles, grid puzzles, strange numbers, and self-reference puzzles. The correct solutions for these puzzles were checked using the theorem prover Prover9~\cite{mccune2005release} and the finite model finder Mace4~\cite{mccune2003mace4}, based on human modelling in equational first-order logic. A first output of this study is a benchmark of 100 logical puzzles. On this dataset, ChatGPT provided both a correct answer and a correct justification for only 7% of the puzzles. Since the dataset appears challenging, researchers are invited to test it on models more advanced or better tuned than ChatGPT-3.5, with more carefully crafted prompts. A second output is a classification of the reasoning faults conveyed by ChatGPT. This classification forms a basis for a taxonomy of reasoning faults generated by large language models. I have identified 67 such logical faults, among which: inconsistencies, implications that do not hold, unsupported claims, lack of commonsense, and wrong justifications. The 100 solutions generated by ChatGPT contain 698 logical faults, that is, on average about 7 faults per reasoning task. A third output is the set of ChatGPT answers annotated with the corresponding logical faults. Each wrong statement within a ChatGPT answer was manually annotated, aiming to quantify the amount of faulty text generated by the language model. On average, 26.03% of the generated text was logically faulty.
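    As a rough illustration of the bookkeeping such an evaluation requires (a sketch, not the paper's actual pipeline; the record layout and field names below are hypothetical), the reported aggregates can be computed from per-answer annotations along these lines:

        from dataclasses import dataclass

        @dataclass
        class AnnotatedAnswer:
            """One ChatGPT answer, manually annotated with logical faults (hypothetical layout)."""
            correct: bool       # both answer and justification are correct
            faults: list[str]   # fault labels, e.g. ["inconsistency", "unsupported claim"]
            faulty_chars: int   # characters inside annotated fault spans
            total_chars: int    # characters in the whole answer

        def summarize(answers: list[AnnotatedAnswer]) -> dict[str, float]:
            n = len(answers)
            return {
                # share of puzzles with both a correct answer and justification (7% in the paper)
                "accuracy_pct": 100.0 * sum(a.correct for a in answers) / n,
                # mean number of logical faults per answer (about 7 in the paper)
                "faults_per_answer": sum(len(a.faults) for a in answers) / n,
                # share of generated text marked as faulty (26.03% in the paper)
                "faulty_text_pct": 100.0 * sum(a.faulty_chars for a in answers)
                                         / sum(a.total_chars for a in answers),
            }

        demo = [
            AnnotatedAnswer(True, [], 0, 400),
            AnnotatedAnswer(False, ["inconsistency", "wrong justification"], 180, 500),
        ]
        print(summarize(demo))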