24,648 research outputs found
An automated approach to the design of decision tree classifiers
The classification of large-dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of 0.76, compared to the theoretically optimal 0.79 achieved by a full-dimensional Bayes classifier. Recommendations for future research are included.
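The node-by-node design can be illustrated with a toy sketch. This is not the paper's method (no canonical transforms or LANDSAT statistics here); the thresholds, class labels, and samples are illustrative assumptions, showing only how per-node threshold rules compose into a tree and how a global probability of correct classification can be estimated:

```python
# Hypothetical sketch: a two-level decision tree in which each node applies
# a simple threshold rule, plus an empirical estimate of the global
# probability of correct classification over labeled samples.

def node_rule(x, threshold):
    """Binary decision: route a sample by comparing one feature to a threshold."""
    return x >= threshold

def tree_classify(sample, t_root, t_left, t_right):
    """Two-level decision tree over a 2-feature sample (x0, x1)."""
    x0, x1 = sample
    if node_rule(x0, t_root):
        return "A" if node_rule(x1, t_right) else "B"
    return "C" if node_rule(x1, t_left) else "D"

def prob_correct(samples, labels, t_root, t_left, t_right):
    """Empirical estimate of the global probability of correct classification."""
    hits = sum(tree_classify(s, t_root, t_left, t_right) == y
               for s, y in zip(samples, labels))
    return hits / len(samples)

samples = [(2.0, 3.0), (2.5, 0.5), (-1.0, 4.0), (-2.0, -1.0)]
labels  = ["A", "B", "C", "D"]
print(prob_correct(samples, labels, 0.0, 1.0, 1.0))  # 1.0 on this toy set
```

In the paper the analogous figure (0.76 vs. the optimal 0.79) is computed analytically from class statistics rather than estimated from samples.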
On the automated extraction of regression knowledge from databases
The advent of inexpensive, powerful computing systems, together with the increasing amount of available data, constitutes one of the greatest challenges for next-century information science. Since it is apparent that much future analysis will be done automatically, a good deal of attention has been paid recently to the implementation of ideas and/or the adaptation of systems originally developed in machine learning and other computer science areas. This interest seems to stem both from the suspicion that traditional techniques are not well suited for large-scale automation and from the success of new algorithmic concepts in difficult optimization problems. In this paper, I discuss a number of issues concerning the automated extraction of regression knowledge from databases. By regression knowledge is meant quantitative knowledge about the relationship between a vector of predictors or independent variables (x) and a scalar response or dependent variable (y). A number of difficulties found in some well-known tools are pointed out, and a flexible framework avoiding many of these difficulties is described and advocated. Basic features of a new tool pursuing this direction are reviewed.
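The x → y setup the abstract describes can be sketched in its simplest form. The paper's framework is far more flexible; this assumed example only illustrates extracting the most basic kind of regression knowledge (a fitted line) from stored (x, y) records via ordinary least squares:

```python
# Minimal sketch: extract "regression knowledge" y ~ a + b*x from
# (x, y) records using univariate ordinary least squares.

def ols_fit(xs, ys):
    """Return intercept a and slope b minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # records generated exactly by y = 1 + 2x
a, b = ols_fit(xs, ys)
print(a, b)  # 1.0 2.0
```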
Automated construction of a hierarchy of self-organized neural network classifiers
This paper documents an effort to design and implement a neural network-based automatic classification system which dynamically constructs and trains a decision tree. The system combines neural network and decision tree technology: the decision tree is constructed to partition a large classification problem into smaller problems, and neural network modules then solve these smaller problems. We used a variant of the Fuzzy ARTMAP neural network which can be trained much more quickly than traditional neural networks. The research extends the concept of self-organization from within the neural network to the overall structure of the dynamically constructed decision hierarchy. The primary advantage is avoidance of the manual tedium and subjective bias involved in constructing decision hierarchies. Additionally, removing the need for manual construction of the hierarchy opens up a large class of potential classification applications. When tested on data from real-world images, the automatically generated hierarchies performed slightly better than an intuitive (hand-built) hierarchy. Because the neural networks at the nodes of the decision hierarchy solve smaller problems, generalization performance can be improved by reducing the number of features used to solve these problems. Algorithms for automatically selecting which features to use for each individual classification module were also implemented. We were able to achieve the same level of performance as in previous manual efforts, but in an efficient, automatic manner. The technology developed has great potential in a number of commercial areas, including data mining, pattern recognition, and intelligent interfaces for personal computer applications. Sample applications include: fraud detection, bankruptcy prediction, data mining agents, scalable object recognition systems, email agents, resource librarian agents, and decision aid agents.
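The divide-and-conquer structure can be sketched with assumed stand-ins: here the Fuzzy ARTMAP modules are replaced by trivial nearest-centroid classifiers, and the hierarchy is a single hand-set root split rather than a self-organized one, purely to show how a root decision routes a sample to a sub-classifier trained on a smaller class subset:

```python
# Hypothetical sketch of a classifier hierarchy: a root node partitions
# the class set, and a small "expert" module (nearest-centroid here,
# a neural module in the paper) solves each smaller subproblem.

def centroid_classify(sample, centroids):
    """Stand-in for a neural module: pick the class with the nearest centroid."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(sample, centroids[label]))
    return min(centroids, key=sq_dist)

def hierarchy_classify(sample, split_value, left_experts, right_experts):
    """Root decision partitions the problem; an expert handles the rest."""
    experts = left_experts if sample[0] < split_value else right_experts
    return centroid_classify(sample, experts)

left  = {"cat": (0.0, 0.0), "dog": (0.0, 2.0)}   # classes on the left branch
right = {"car": (5.0, 0.0), "bus": (5.0, 2.0)}   # classes on the right branch
print(hierarchy_classify((0.1, 1.9), 2.5, left, right))  # dog
```

Each expert sees only its branch's classes, which is what makes per-module feature selection (as in the paper) effective.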
ReCon: Revealing and Controlling PII Leaks in Mobile Network Traffic
It is well known that apps running on mobile devices extensively track and leak users' personally identifiable information (PII); however, these users have little visibility into PII leaked through the network traffic generated by their devices, and have poor control over how, when, and where that traffic is sent and handled by third parties. In this paper, we present the design, implementation, and evaluation of ReCon: a cross-platform system that reveals PII leaks and gives users control over them without requiring any special privileges or custom OSes. ReCon leverages machine learning to reveal potential PII leaks by inspecting network traffic, and provides a visualization tool to empower users with the ability to control these leaks via blocking or substitution of PII. We evaluate ReCon's effectiveness with measurements from controlled experiments using leaks from the 100 most popular iOS, Android, and Windows Phone apps, and via an IRB-approved user study with 92 participants. We show that ReCon is accurate, efficient, and identifies a wider range of PII than previous approaches.

Comment: Please use the MobiSys version when referencing this work: http://dl.acm.org/citation.cfm?id=2906392. 18 pages, recon.meddle.mob
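ReCon itself trains machine-learning classifiers on network traces; the toy sketch below is only an assumed illustration of the underlying idea of detecting known PII values in outgoing payloads and substituting them, the kind of control ReCon exposes to users:

```python
# Toy illustration (not ReCon's ML approach): flag outgoing HTTP payloads
# that contain known PII values, and scrub them via substitution.

PII = {"email": "alice@example.com", "imei": "356938035643809"}  # assumed values

def find_leaks(payload):
    """Return the names of PII fields whose values appear in the payload."""
    return [name for name, value in PII.items() if value in payload]

def scrub(payload):
    """Replace each leaked PII value with a same-length placeholder."""
    for value in PII.values():
        payload = payload.replace(value, "*" * len(value))
    return payload

req = "GET /track?e=alice@example.com&id=356938035643809 HTTP/1.1"
print(find_leaks(req))  # ['email', 'imei']
print(scrub(req))
```

Matching literal values only works when the PII is known in advance and sent in cleartext; ReCon's learned classifiers exist precisely because real leaks are often obfuscated or structured differently per app.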