
    Interpretations of Association Rules by Granular Computing

    We present interpretations for association rules. We first introduce Pawlak's method and the corresponding algorithm for finding decision rules (a kind of association rule). We then use extended random sets to present a new algorithm for finding interesting rules, and we prove that the new algorithm is faster than Pawlak's. The extended random sets can easily incorporate more than one criterion for determining interesting rules. We also provide two measures for dealing with uncertainties in association rules.
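
    To make the flavour of such rules concrete, here is a minimal sketch (not Pawlak's algorithm; the toy table, attribute name, and accuracy test are invented for illustration) that derives one-condition decision rules from a decision table and scores each by its accuracy:

```python
# Hedged illustration only: derive simple decision rules
# "attribute value -> decision" from a toy decision table, using the
# rule's accuracy as a crude interestingness criterion.
from collections import Counter, defaultdict

table = [  # (outlook, decision) pairs in a toy decision table
    ("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
    ("rain", "yes"), ("rain", "no"),
]

counts = defaultdict(Counter)
for value, decision in table:
    counts[value][decision] += 1

for value, decisions in counts.items():
    decision, hits = decisions.most_common(1)[0]
    accuracy = hits / sum(decisions.values())
    print(f"if outlook = {value} then {decision}  (accuracy {accuracy:.2f})")
```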

    Interactive Data Exploration with Smart Drill-Down

    We present smart drill-down, an operator for interactively exploring a relational table to discover and summarize "interesting" groups of tuples. Each group of tuples is described by a rule. For instance, the rule $(a, b, \star, 1000)$ tells us that there are a thousand tuples with value $a$ in the first column and $b$ in the second column (and any value in the third column). Smart drill-down presents an analyst with a list of rules that together describe interesting aspects of the table. The analyst can tailor the definition of interesting, and can interactively apply smart drill-down on an existing rule to explore that part of the table. We demonstrate that the underlying optimization problems are NP-hard, and describe an algorithm for finding the approximately optimal list of rules to display when the user invokes smart drill-down, as well as a dynamic sampling scheme for efficiently interacting with large tables. Finally, we perform experiments on real datasets with our experimental prototype to demonstrate the usefulness of smart drill-down and to study the performance of our algorithms.
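
    The rule format is easy to picture in code. The following is a hedged sketch (our illustration, not the authors' prototype): a rule is a tuple of column values in which a wildcard matches anything, and its count is the number of tuples it covers:

```python
# Illustrative sketch: a drill-down "rule" is a tuple of column values
# where None acts as the wildcard (the paper's star), and count_matches
# reports how many rows the rule covers, as in (a, b, *, 1000).
from typing import Optional, Sequence

def matches(rule: Sequence[Optional[str]], row: Sequence[str]) -> bool:
    """A row matches a rule if every non-wildcard value agrees."""
    return all(r is None or r == v for r, v in zip(rule, row))

def count_matches(rule, table):
    return sum(matches(rule, row) for row in table)

table = [("a", "b", "x"), ("a", "b", "y"), ("a", "c", "x")]
print(count_matches(("a", "b", None), table))  # -> 2
```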

    Interplay of the Chiral and Large N_c Limits in pi N Scattering

    Light-quark hadronic physics admits two useful systematic expansions, the chiral and 1/N_c expansions. Their respective limits do not commute, making cases where both expansions may be considered especially interesting. We first study pi N scattering lengths, showing that (as expected for such soft-pion quantities) the chiral expansion converges more rapidly than the 1/N_c expansion, although the latter nevertheless continues to hold. We also study the Adler-Weisberger and Goldberger-Miyazawa-Oehme sum rules of pi N scattering, finding that both fail if the large N_c limit is taken prior to the chiral limit.

    Categorization of interestingness measures for knowledge extraction

    Finding interesting association rules is an important and active research field in data mining. The algorithms of the Apriori family are based on two rule extraction measures, support and confidence. Although these two measures have the virtue of being algorithmically fast, they generate a prohibitive number of rules, most of which are redundant and irrelevant. It is therefore necessary to use further measures that filter out uninteresting rules. Many survey studies of interestingness measures have accordingly been carried out from several points of view, and various reported studies have sought to identify "good" properties of rule extraction measures; these properties have been assessed on 61 measures. The purpose of this paper is twofold: first, to extend the number of measures and properties studied, in addition to formalizing the properties proposed in the literature; second, in the light of this formal study, to categorize the studied measures. The paper thus identifies categories of measures in order to help users efficiently select one or more appropriate measures during the knowledge extraction process. Evaluating the properties on the 61 measures enabled us to identify 7 classes of measures, which we obtained using two different clustering techniques.
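
    For readers unfamiliar with the two Apriori measures named above, a small worked example (toy transactions of our own; the functions are illustrative, not from the paper) computes the support and confidence of a rule X -> Y:

```python
# Minimal sketch of the two classical rule extraction measures,
# computed for a rule X -> Y over a toy transaction database.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, db):
    """Fraction of transactions containing every item of itemset."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent, db) / support(antecedent, db)

rule_x, rule_y = {"bread"}, {"milk"}
print(support(rule_x | rule_y, transactions))    # 0.5
print(confidence(rule_x, rule_y, transactions))  # ~0.667
```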

    Endogenous fisheries management in a stochastic model: Why do fishery agencies use TAC

    The aim of this paper is to explain under which circumstances using TACs as an instrument to manage a fishery, along with fishing periods, may be interesting from a regulatory point of view. To do this, the deterministic analysis of Homans and Wilen (1997) and Anderson (2000) is extended to a stochastic scenario where the resource cannot be measured accurately. The resulting endogenous stochastic model is numerically solved to find the optimal control rules for the Iberian sardine stock. Three relevant conclusions can be highlighted from the simulations. First, the higher the uncertainty about the state of the stock, the lower the probability of closing the fishery. Second, the use of TACs as a management instrument in fisheries already regulated with fishing periods leads to: i) an increase in the optimal season length and harvests, especially for medium and high numbers of licences; ii) an improvement in the biological and economic variables when the size of the fleet is large; and iii) the elimination of the extinction risk for the resource. Third, the regulator would rather select the number of licences and not restrict the season length.
    Keywords: TAC, season lengths, fisheries management, stock uncertainty

    Islamic Banking Performance in the Middle East: A Case Study of Jordan

    Islamic banking in Jordan started around two decades ago. Since then it has played an important role in financing and contributing to different economic and social sectors in the country, in compliance with the principles of Shariah rules in Islamic banking practices. To date, however, there have been limited studies on the financial performance of Islamic banks in the country. The aim of this paper is to examine and analyse the Jordanian experience with Islamic banking, in particular the experience of the first and second Islamic banks in the country, Jordan Islamic Bank for Finance and Investment (JIBFI) and Islamic International Arab Bank (IIAB), in order to evaluate Islamic banks' performance in the country. The paper goes further to shed some light on the domestic as well as global challenges facing this sector. Methodologically, the paper uses a performance evaluation approach based on profit maximization, capital structure, and liquidity tests. It finds that the efficiency and ability of both banks have increased, and that both have expanded their investments and activities and have played an important role in financing projects in Jordan. Another interesting finding is that these banks have focused on short-term investment; perhaps this is the case in most Islamic banking practices. A further finding is that Jordan Islamic Bank for Finance and Investment (JIBFI) enjoys high profitability, which encourages other banks to practice the Islamic financial system. The paper also finds that Islamic banks have experienced high growth in both credit facilities and profitability.
    Keywords: Islamic banking, Performance, Efficiency, Challenges, Jordan

    Frequent Lexicographic Algorithm for Mining Association Rules

    Recent progress in computer storage technology has enabled many organisations to collect and store huge amounts of data, leading to a growing demand for new techniques that can intelligently transform massive data into useful information and knowledge. The concept of data mining has drawn the attention of the business community to techniques that can extract nontrivial, implicit, previously unknown and potentially useful information from databases. Association rule mining is one such data mining technique, discovering strong association or correlation relationships among data. Association rule algorithms are built around a two-phase procedure: in the first phase, all frequent patterns are found, and in the second phase these frequent patterns are used to generate all strong rules. The common precision measures used in these phases are support and confidence. Intensive investigation over the past few years has shown that the first phase involves a major computational task. Although the second phase seems more straightforward, it can be costly because the set of generated rules is normally large, while only a small fraction of these rules is typically useful and important.
    In response to these challenges, this study is devoted to finding faster methods for searching frequent patterns and to discovering association rules in concise form. An algorithm called Flex (Frequent lexicographic patterns) is proposed to achieve good performance in searching for frequent patterns. The algorithm constructs the nodes of a lexicographic tree that represent frequent patterns, using a depth-first strategy to mine the patterns and a vertical counting strategy to compute their support. The mined frequent patterns are then used to generate association rules. Three models are applied to this task: the traditional model, the constraint model and the representative model, which produce, respectively, all association rules, association rules with one consequent, and representative rules. As an additional utility in the representative model, this study proposes a set-theoretical intersection to assist users in finding duplicated rules.
    Four datasets, drawn from the UCI machine learning repository and domain theories except for the pumsb dataset, were used in the experiments. The Flex algorithm and two existing algorithms, Apriori and DIC, were tested on these datasets under the same specification, and their extraction times for mining frequent patterns were recorded and compared. The experimental results showed that the proposed algorithm outperformed both existing algorithms, especially in the case of long patterns, and also gave promising results for short patterns. Two of the datasets were then chosen for a further experiment on the scalability of the algorithms, increasing their number of transactions up to sixfold; this scale-up experiment showed that the proposed algorithm is more scalable than the existing ones. The implementation of the adopted theory of the representative model proved that this model is more concise than the other two, as shown by the number of rules generated by the chosen models. Besides yielding a small set of rules, the representative model also has the lossless-information and soundness properties, meaning that it covers all interesting association rules and forbids the derivation of weak rules. It is theoretically proven that the proposed set-theoretical intersection is able to assist users in detecting the duplicated rules that exist in the representative model.
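
    The two strategies described above are easy to sketch. The following is a hedged illustration of the idea only, not the author's implementation: patterns are extended depth-first in lexicographic order, and support is counted vertically via transaction-id sets (tidsets):

```python
# Depth-first, lexicographic frequent-pattern mining with vertical
# (tidset-based) support counting -- an illustration of the strategy.
def mine(db, minsup):
    items = sorted({i for t in db for i in t})
    tids = {i: {n for n, t in enumerate(db) if i in t} for i in items}
    frequent = {}

    def extend(prefix, prefix_tids, start):
        for k in range(start, len(items)):
            item = items[k]
            new_tids = prefix_tids & tids[item]
            if len(new_tids) >= minsup:           # vertical support count
                pattern = prefix + (item,)
                frequent[pattern] = len(new_tids)
                extend(pattern, new_tids, k + 1)  # depth-first, lexicographic

    extend((), set(range(len(db))), 0)
    return frequent

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(mine(db, 2))
```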

    The one dimensional semi-classical Bogoliubov-de Gennes Hamiltonian with PT symmetry: generalized Bohr-Sommerfeld quantization rules

    We present a method for computing first-order asymptotics of semiclassical spectra for the 1-D Bogoliubov-de Gennes (BdG) Hamiltonian from superconductivity, which models electron/hole scattering through two SNS junctions. This involves: 1) reducing the system to a Weber equation near the branching point at the junctions; 2) constructing local sections of the fibre bundle of microlocal solutions; 3) normalizing these solutions for the "flux norm" associated with the microlocal Wronskians; 4) finding the relative monodromy matrices in the gauge group that leaves the flux norm invariant; and 5) deducing from this the Bohr-Sommerfeld (BS) quantization rules, which hold precisely when the fibre bundle of microlocal solutions (depending on the energy parameter E) has trivial holonomy. Such a semi-classical treatment reveals interesting continuous symmetries related to monodromy.
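
    For orientation, the classical rule being generalized here is the textbook Bohr-Sommerfeld condition, quoted below for context (standard semiclassical analysis, not taken from the paper, where the Maslov-type correction is replaced by the holonomy condition on the bundle of microlocal solutions):

```latex
% Textbook Bohr-Sommerfeld rule for a 1-D well with two turning
% points: the action over the closed orbit at energy E_n is quantized.
\oint_{\gamma_{E_n}} p \, dx \;=\; 2\pi\hbar \left( n + \tfrac{1}{2} \right),
\qquad n = 0, 1, 2, \dots
```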

    Introduction to Pylog

    PyLog is a minimal experimental proof assistant based on linearised natural deduction for intuitionistic and classical first-order logic, extended with a comprehension operator. PyLog is interesting as a tool to be used in conjunction with other, more complex proof assistants and formal mathematics projects (such as Coq and Coq-based projects). Proof assistants based on dependent type theory are at once very different from and profoundly connected to the approach employed by PyLog, via the Curry-Howard correspondence. The tactic system of Coq presents a top-down approach to proofs (finding a term inhabiting a given type by applying the rules backwards, with typability and type inference automated), whilst the classical approach of PyLog follows how mathematical proofs are usually written. PyLog should be further developed along the lines of Coq, in particular through the introduction of many "micro-automatisations" and a nice IDE.
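
    To illustrate the contrast (in Lean rather than Coq or PyLog, purely as an analogy), the same lemma can be proved top-down with tactics or written directly as a proof term:

```lean
-- Hedged illustration of the contrast described above: modus ponens
-- proved top-down with tactics (the style of Coq's tactic system) and
-- directly as a term (closer to a written-out proof).
theorem mp_tactic (A B : Prop) (h : A → B) (a : A) : B := by
  apply h
  exact a

theorem mp_term (A B : Prop) (h : A → B) (a : A) : B := h a
```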

    Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature

    A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization is considered, which uses a model consisting of a Taylor expansion of arbitrary degree and a regularization term involving a possibly non-smooth norm. It is shown that the non-smoothness of the norm does not affect the $O(\epsilon_1^{-(p+1)/p})$ upper bound on evaluation complexity for finding first-order $\epsilon_1$-approximate minimizers using $p$ derivatives, and that this result does not hinge on the equivalence of norms in $\Re^n$. It is also shown that, if $p=2$, the bound of $O(\epsilon_2^{-3})$ evaluations for finding second-order $\epsilon_2$-approximate minimizers still holds for a variant of AR1pGN named AR2GN, despite the possibly non-smooth nature of the regularization term. Moreover, adapting the existing theory to handle the non-smoothness results in an interesting modification of the subproblem termination rules, leading to an even more compact complexity analysis; in particular, it is shown when Newton's step is acceptable for an adaptive regularization method. The approximate minimization of quadratic polynomials regularized with non-smooth norms is then discussed, and a new approximate second-order necessary optimality condition is derived for this case. A specialized algorithm is then proposed to enforce first- and second-order conditions that are strong enough to ensure the existence of a suitable step in AR1pGN (when $p=2$) and in AR2GN, and its iteration complexity is analyzed.
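
    As background, the standard AR$p$ model from the adaptive-regularization literature is sketched below for context (the paper's variant allows the regularizing norm to be non-smooth): each iteration approximately minimizes

```latex
% Generic ARp model: the degree-p Taylor expansion T_p of f about the
% iterate x_k, plus a norm regularization with adaptive weight sigma_k.
m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{p+1}\, \|s\|^{p+1}
```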