5,511 research outputs found

    Brain Cancer Antibody Display Classification

    This article explores real data on brain cancer. Biological data of this type has particular characteristics: a large number of attributes (antibodies and genes) but a rather small number of records, because the data must be obtained from real patients, a process that is both time-consuming and costly. Accordingly, this research provides a detailed description of the data and analyses their particularities, type and structure. Classification rules are correspondingly difficult to discover, and the research is dedicated to applying classification methods to determine interconnections that could be used to classify brain cancer. Working with such unique data has great practical value, because the results can be used to continue the research, applied in practical diagnostics, and offered to biologists for interpretation. To speed up the discovery of interconnections, only important attributes were used, and various methods of interconnection determination were employed. Conclusions are drawn about this type of data analysis, the derivation of classification rules and the precision of the rules obtained, and directions for future work are outlined.
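    The idea of restricting the analysis to important attributes before searching for classification rules can be illustrated with a toy OneR-style attribute ranking. This is a generic sketch, not the authors' method; the data, the binarisation, and the choice to keep the top two attributes are all invented for illustration.

```python
from collections import Counter

# Toy stand-in for antibody/gene data: each row is one patient, each
# column a binarised attribute, the last element the class label.
rows = [
    (1, 0, 1, "tumour"),
    (1, 1, 1, "tumour"),
    (0, 0, 1, "healthy"),
    (0, 1, 0, "healthy"),
    (1, 0, 0, "tumour"),
    (0, 1, 1, "healthy"),
]

def one_rule_accuracy(attr):
    """Accuracy of the best single rule 'if attr == v then majority class'."""
    correct = 0
    for value in (0, 1):
        labels = [r[-1] for r in rows if r[attr] == value]
        if labels:
            correct += Counter(labels).most_common(1)[0][1]
    return correct / len(rows)

# Keep only the attributes whose one-attribute rule is most predictive;
# rule discovery would then run on this reduced attribute set.
n_attrs = len(rows[0]) - 1
ranked = sorted(range(n_attrs), key=one_rule_accuracy, reverse=True)
important = ranked[:2]
print(important)
```

On this toy data, attribute 0 separates the classes perfectly, so it ranks first; with many attributes and few patients, such a filter shrinks the search space before any rule induction starts.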

    Aggregate disturbances, monetary policy, and the macroeconomy: the FRB/US perspective

    The FRB/US macroeconometric model of the U.S. economy was created at the Federal Reserve Board for use in policy analysis and forecasting. This article begins with an examination of the model's characterization of the monetary transmission mechanism -- the chain of relationships describing how monetary policy actions influence financial markets and, in turn, output and inflation. The quantitative nature of this mechanism is illustrated by estimates of the effect of movements in interest rates and other factors on spending in different sectors and by simulations of the effect of a change in the stance of policy on the economy as a whole. After the discussion of the transmission mechanism, the article considers the influence of monetary policy on the macroeconomic consequences of specific events by showing how the predicted effects of selected disturbances change under alternative policy responses. These examples illustrate an important policy tradeoff in the FRB/US model involving the variability (but not the level) of output and inflation: past some point, lower variability in inflation can be obtained only at the expense of greater fluctuations in output and interest rates.
    Keywords: Forecasting; Macroeconomics; Monetary policy
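    The variability tradeoff described at the end can be illustrated with a deliberately simplistic backward-looking model under the policy rule r_t = phi * pi_{t-1}. This is a toy sketch, not the FRB/US model; the equations and all coefficients are invented for illustration only.

```python
import random
import statistics

def simulate(phi, T=5000, a=1.0, b=0.5, seed=0):
    """Simulate a toy economy: output gap y_t = -a*r_t + shock,
    inflation pi_t = pi_{t-1} + b*y_t, policy r_t = phi*pi_{t-1}.
    Returns the variances of inflation and of the output gap."""
    rng = random.Random(seed)
    pi, pis, ys = 0.0, [], []
    for _ in range(T):
        r = phi * pi                  # policy reacts to last period's inflation
        y = -a * r + rng.gauss(0, 1)  # policy drag plus a demand shock
        pi = pi + b * y               # inflation accelerates with the output gap
        pis.append(pi)
        ys.append(y)
    return statistics.pvariance(pis), statistics.pvariance(ys)

var_pi_mild, var_y_mild = simulate(phi=0.5)
var_pi_aggr, var_y_aggr = simulate(phi=1.5)
# The more aggressive rule damps inflation variability but, by moving the
# interest rate harder, raises output variability -- the tradeoff in question.
print(var_pi_aggr < var_pi_mild, var_y_aggr > var_y_mild)
```

The same shock sequence is used for both rules (same seed), so the comparison isolates the effect of the policy coefficient.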

    Type-2 Fuzzy Logic for Edge Detection of Gray Scale Images

    Law, Logic and Communication

    The Fountains of Istanbul in The 18th Century and The Shadirvan of Saint Sofia

    [No abstract available]

    Tachyon Condensation on the Elliptic Curve

    We use the framework of matrix factorizations to study topological B-type D-branes on the cubic curve. Specifically, we elucidate how the brane RR charges are encoded in the matrix factors, by analyzing their structure in terms of sections of vector bundles in conjunction with equivariant R-symmetry. One particular advantage of matrix factorizations is that explicit moduli dependence is built in, thus giving us full control over the open-string moduli space. This allows one to study phenomena like discontinuous jumps of the cohomology over the moduli space, as well as the formation of bound states at threshold. One interesting aspect is that certain gauge symmetries inherent to the matrix formulation lead to a non-trivial global structure of the moduli space. We also investigate topological tachyon condensation, which enables us to construct, in a systematic fashion, higher-dimensional matrix factorizations out of smaller ones; this amounts to obtaining branes with higher RR charges as composites of ones with minimal charges. As an application, we explicitly construct all rank-two matrix factorizations.
    Comment: 69p, 6 figs, harvmac; v2: minor change
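    For orientation, the basic object the abstract relies on is a matrix factorization of the superpotential $W$. The following is a generic sketch in standard notation, not the specific factorizations or moduli dependence constructed in the paper; the Fermat form of the cubic is used here only as a common illustrative choice.

```latex
% A matrix factorization of a superpotential W is a pair of square
% matrices (J, E) with polynomial entries satisfying
J E \;=\; E J \;=\; W \cdot \mathbf{1}.
% For the cubic curve one may take the Fermat form
W \;=\; x^3 + y^3 + z^3,
% where the elementary one-variable factorization x \cdot x^2 = x^3,
% combined via tensor products of factorizations, yields candidate
% factorizations of the full three-variable W.
```

The D-brane data (in particular the RR charges discussed above) is then read off from the structure of the factors $J$ and $E$.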

    Dynamic Rule Covering Classification in Data Mining with Cyber Security Phishing Application

    Data mining is the process of discovering useful patterns in datasets using intelligent techniques to help users make decisions. A typical data mining task is classification, which involves predicting a target variable, known as the class, in previously unseen data based on models learnt from an input dataset. Covering is a well-known classification approach that derives models consisting of If-Then rules. Covering methods, such as PRISM, have predictive performance competitive with other classical classification techniques such as greedy methods, decision trees and associative classification. Covering models are therefore appropriate decision-making tools, and users favour them when making decisions. Despite the use of the Covering approach in different classification applications, it is acknowledged to suffer from a noticeable drawback: it induces massive numbers of rules, making the resulting model large and unmanageable. This issue is attributed to the way Covering techniques induce rules: they keep adding items to a rule's body, despite limited data coverage (the number of training instances the rule classifies), until the rule reaches zero error. This excessive learning overfits the training dataset and limits the applicability of Covering models in decision making, because managers normally prefer a summarised set of knowledge that they can control and comprehend rather than a high-maintenance model. In practice, there should be a trade-off between the number of rules a classification model offers and its predictive performance. Another issue with Covering models is the overlap of training data among rules, which arises because a rule's classified data are discarded during the rule discovery phase. Unfortunately, the impact of a rule's removed data on other potential rules is not considered by this approach.
    When training data linked with a rule are removed, the frequency and rank of other rules' items that appeared in the removed data should be updated. The impacted rules should maintain their true rank and frequency dynamically during the rule discovery phase, rather than keeping the frequencies initially computed from the original input dataset. In response to these issues, a new dynamic learning technique based on Covering and rule induction, called Enhanced Dynamic Rule Induction (eDRI), is developed. eDRI has been implemented in Java and embedded in the WEKA machine learning tool. The algorithm incrementally discovers rules using, primarily, frequency and rule strength thresholds. In practice, these thresholds limit the search space for both items and potential rules by discarding, as early as possible, any with insufficient data representation, resulting in an efficient training phase. More importantly, eDRI substantially cuts down the number of training example scans by continuously updating potential rules' frequency and strength parameters whenever a rule is inserted into the classifier. In particular, for each derived rule, eDRI adjusts on the fly the frequencies and ranks of the remaining potential rules' items, specifically those that appeared in the deleted training instances of the derived rule. This yields a more realistic model with minimal rule redundancy, and makes the rule induction process dynamic rather than static. Moreover, the proposed technique minimises the classifier's number of rules at preliminary stages by stopping learning when a rule does not meet the strength threshold, thereby minimising overfitting and ensuring a manageable classifier.
    Lastly, eDRI's prediction procedure not only prioritises the best-ranked rule for class forecasting of test data but also restricts the use of the default class rule, thus reducing the number of misclassifications. These improvements guarantee classification models of smaller size that do not overfit the training dataset, while maintaining predictive performance. The derived models particularly benefit users taking key business decisions, since they provide a rich knowledge base to support decision making: their predictive accuracies are high, and the models are easy to understand, controllable and robust, i.e. flexible enough to be amended without drastic change. eDRI's applicability has been evaluated on the hard problem of phishing detection. Phishing normally involves creating a fake, well-designed website that closely resembles an existing trusted business website, aiming to trick users into revealing credentials such as login information in order to access their financial assets. Experimental results on large phishing datasets revealed that eDRI is highly useful as an anti-phishing tool, since it derived models of manageable size compared with other traditional techniques without hindering classification performance. Further evaluation on several classification datasets from different domains, obtained from the University of California Data Repository, corroborated eDRI's competitive performance with respect to accuracy, size of the knowledge representation, training time and item space reduction. This makes the proposed technique not only efficient in inducing rules but also effective.
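    The covering loop with a frequency threshold and dynamic recounting described above can be sketched as follows. This is a toy illustration in the spirit of eDRI, not the published algorithm: the data, the threshold value, and the use of precision alone as the rule-strength measure are all simplifying assumptions.

```python
from collections import Counter

# Toy phishing-style data: each instance is (attribute dict, class label).
data = [
    ({"url_has_ip": 1, "https": 0}, "phish"),
    ({"url_has_ip": 1, "https": 1}, "phish"),
    ({"url_has_ip": 0, "https": 1}, "legit"),
    ({"url_has_ip": 0, "https": 0}, "phish"),
    ({"url_has_ip": 0, "https": 1}, "legit"),
]

MIN_FREQ = 2  # frequency threshold: prune items with too little coverage

def induce_rules(instances):
    """Covering loop: find one high-precision single-item rule, remove the
    instances it covers, recount item frequencies on the remainder (the
    'dynamic' step), and repeat until nothing passes the threshold."""
    rules = []
    remaining = list(instances)
    while remaining:
        best, best_prec = None, 0.0
        for attr in remaining[0][0]:
            for val in {inst[0][attr] for inst in remaining}:
                covered = [c for a, c in remaining if a[attr] == val]
                if len(covered) < MIN_FREQ:
                    continue  # insufficient data representation: discard early
                cls, hits = Counter(covered).most_common(1)[0]
                prec = hits / len(covered)
                if prec > best_prec:
                    best, best_prec = (attr, val, cls), prec
        if best is None:
            break  # no item meets the threshold: stop rather than overfit
        attr, val, cls = best
        rules.append(best)
        # Remove covered instances; the next iteration recounts on the rest.
        remaining = [(a, c) for a, c in remaining if a[attr] != val]
    return rules

print(induce_rules(data))
```

Because frequencies are recounted on the remaining instances after every rule, an item that looked strong on the full dataset can lose its rank once its supporting instances are covered, which is the behaviour the thesis argues static covering methods miss.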

    A Short Survey of Noncommutative Geometry

    We give a survey of selected topics in noncommutative geometry, with some emphasis on those directly related to physics, including our recent work with Dirk Kreimer on renormalization and the Riemann-Hilbert problem. We discuss at length two issues. The first is the relevance of the paradigm of geometric space, based on spectral considerations, which is central in the theory. As a simple illustration of the spectral formulation of geometry in the ordinary commutative case, we give a polynomial equation for geometries on the four-dimensional sphere with fixed volume. The equation involves an idempotent e, playing the role of the instanton, and the Dirac operator D. It expresses the gamma five matrix as the pairing between the operator theoretic Chern characters of e and D. It is of degree five in the idempotent and four in the Dirac operator, which only appears through its commutant with the idempotent. It determines both the sphere and all its metrics with fixed volume form. We also show, using the noncommutative analogue of the Polyakov action, how to obtain the noncommutative metric (in spectral form) on the noncommutative tori from the formal naive metric. We conclude on some questions related to string theory.
    Comment: Invited lecture for JMP 2000, 45
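    As background for the spectral paradigm the abstract refers to, the basic data of a geometry in this framework is a spectral triple. The following is a standard textbook-style sketch, not the specific constructions of the survey:

```latex
% A spectral triple: an involutive algebra A of coordinates represented
% on a Hilbert space H, together with a self-adjoint Dirac operator D
% whose commutators with the algebra are bounded:
(\mathcal{A}, \mathcal{H}, D), \qquad
[D, a] \ \text{bounded for all } a \in \mathcal{A}.
% The geodesic distance is then recovered purely spectrally:
d(x, y) \;=\; \sup\bigl\{\, |a(x) - a(y)| \;:\;
a \in \mathcal{A},\ \|[D, a]\| \le 1 \,\bigr\}.
```

The polynomial equation for the four-dimensional sphere mentioned in the abstract is a constraint on exactly this kind of data, tying the idempotent e and the operator D together.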