
    Uncertainty Management of Intelligent Feature Selection in Wireless Sensor Networks

    Wireless sensor networks (WSN) are envisioned to revolutionize the paradigm of monitoring complex real-world systems at very high resolution. However, the deployment of large numbers of unattended sensor nodes in hostile environments, frequent changes in environment dynamics, and severe resource constraints introduce uncertainties and limit the potential use of WSN in complex real-world applications. Although uncertainty management in Artificial Intelligence (AI) is well developed and well investigated, its implications in wireless sensor environments are inadequately addressed. This dissertation addresses uncertainty management issues for spatio-temporal patterns generated from sensor data. It provides a framework for characterizing spatio-temporal patterns in WSN. Using rough set theory and temporal reasoning, a novel formalism has been developed to characterize and quantify the uncertainties in predicting spatio-temporal patterns from sensor data. This research also uncovers the trade-offs among the uncertainty measures, which can be used to develop a multi-objective optimization model for real-time decision making in sensor data aggregation and sampling.
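    The rough-set machinery the abstract refers to can be sketched in a few lines. The sensor attributes, discretized values, and target event below are invented for illustration; the boundary region (upper minus lower approximation) is one natural way to quantify the uncertainty of a predicted pattern.

```python
# Minimal rough-set approximation sketch over hypothetical sensor snapshots.
# Attribute names, values, and the target event are illustrative assumptions,
# not taken from the dissertation.

def equivalence_classes(objects, attrs):
    """Group objects that are indiscernible on the chosen attributes."""
    classes = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(name)
    return list(classes.values())

def approximations(objects, attrs, target):
    """Rough-set lower and upper approximations of a target set."""
    lower, upper = set(), set()
    for cls in equivalence_classes(objects, attrs):
        if cls <= target:        # class entirely inside the target
            lower |= cls
        if cls & target:         # class overlapping the target
            upper |= cls
    return lower, upper

# Hypothetical discretized sensor readings.
readings = {
    "s1": {"temp": "high", "humidity": "low"},
    "s2": {"temp": "high", "humidity": "low"},
    "s3": {"temp": "low",  "humidity": "high"},
    "s4": {"temp": "high", "humidity": "high"},
}
event = {"s1", "s4"}  # snapshots where the pattern of interest occurred
lo, up = approximations(readings, ["temp", "humidity"], event)
# The boundary region (up - lo) is where the pattern cannot be decided.
print(sorted(lo), sorted(up))  # ['s4'] ['s1', 's2', 's4']
```

Here s1 and s2 are indiscernible but only s1 exhibits the event, so both fall in the boundary region; the size of that region is one raw measure of predictive uncertainty.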

    Finding patterns in student and medical office data using rough sets

    Data have been obtained from King Khaled General Hospital in Saudi Arabia. In this project, I am trying to discover patterns in these data using algorithms implemented in an experimental tool called Rough Set Graphic User Interface (RSGUI). Several algorithms are available in RSGUI, each of which is based on rough set theory. My objective is to find short, meaningful, predictive rules. First, we need to find a minimum set of attributes that fully characterizes the data. Some of the rules generated from this minimum set will be obvious, and therefore uninteresting; others will be surprising, and therefore interesting. Usual measures of the strength of a rule, such as its length, certainty, and coverage, were considered. In addition, a measure of the interestingness of the rules has been developed, based on questionnaires administered to human subjects. There were bugs in the RSGUI Java code, and one algorithm in particular, the Inductive Learning Algorithm (ILA), missed some cases that were subsequently resolved in ILA2 but not updated in RSGUI. I fixed the ILA issue in RSGUI, so ILA now runs well and gives good results for all cases encountered in the hospital administration and student records data. (Master's thesis)
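    The certainty and coverage measures mentioned above are standard in rough-set rule mining. A minimal sketch over an invented decision table (not the hospital or student data) might look like this:

```python
# Certainty and coverage of a decision rule "IF condition THEN decision".
# The toy decision table below is invented for illustration only.

rows = [
    {"fever": "yes", "cough": "yes", "flu": "yes"},
    {"fever": "yes", "cough": "no",  "flu": "yes"},
    {"fever": "yes", "cough": "yes", "flu": "no"},
    {"fever": "no",  "cough": "no",  "flu": "no"},
]

def rule_stats(rows, condition, decision):
    """certainty = P(decision | condition); coverage = P(condition | decision)."""
    match_cond = [r for r in rows
                  if all(r[a] == v for a, v in condition.items())]
    match_both = [r for r in match_cond
                  if all(r[a] == v for a, v in decision.items())]
    match_dec = [r for r in rows
                 if all(r[a] == v for a, v in decision.items())]
    certainty = len(match_both) / len(match_cond)
    coverage = len(match_both) / len(match_dec)
    return certainty, coverage

cert, cov = rule_stats(rows, {"fever": "yes"}, {"flu": "yes"})
print(cert, cov)  # 0.666... 1.0
```

A rule with high certainty is reliable when it fires; high coverage means it accounts for most positive cases. Short rules scoring well on both are the "short meaningful predictive rules" the thesis is after.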

    Fredkin Gates for Finite-valued Reversible and Conservative Logics

    The basic principles and results of Conservative Logic, introduced by Fredkin and Toffoli on the basis of a seminal paper by Landauer, are extended to d-valued logics, with special attention to three-valued logics. Different approaches to d-valued logics are examined in order to determine some possible universal sets of logic primitives. In particular, we consider the typical connectives of Łukasiewicz and Gödel logics, as well as Chang's MV-algebras. As a result, some possible three-valued and d-valued universal gates are described which realize a functionally complete set of fundamental connectives. Comment: 57 pages, 10 figures, 16 tables, 2 diagrams
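    For orientation, the two-valued Fredkin gate that the paper generalizes is a controlled swap. A minimal sketch, with the reversibility and conservativity properties checked inline (the d-valued gates in the paper are more elaborate):

```python
# Classical (two-valued) Fredkin gate: a controlled swap of two data lines.

def fredkin(c, a, b):
    """If the control bit c is 1, swap a and b; otherwise pass through."""
    return (c, b, a) if c == 1 else (c, a, b)

for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    # Reversibility: the gate is its own inverse.
    assert fredkin(*fredkin(*bits)) == bits
    # Conservativity: the number of 1s is preserved (Landauer's concern:
    # no information-bearing bits are erased).
    assert sum(fredkin(*bits)) == sum(bits)

# Universality hint: fixing the third input to 0 makes the second output
# line carry (NOT c AND a)... and the third line carry (c AND a).
print(fredkin(1, 1, 0))  # control 1 swaps the data lines: (1, 0, 1)
```

The same two checks (a bijection on tuples, plus an invariant playing the role of "number of 1s") are what a d-valued conservative gate must satisfy.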

    Implications of uniformly distributed, empirically informed priors for phylogeographical model selection: A reply to Hickerson et al

    Establishing that a set of population-splitting events occurred at the same time can be a potentially persuasive argument that a common process affected the populations. Oaks et al. (2013) assessed the ability of an approximate-Bayesian method (msBayes) to estimate such a pattern of simultaneous divergence across taxa, to which Hickerson et al. (2014) responded. Both papers agree that the method is sensitive to prior assumptions and often erroneously supports shared divergences; the papers differ about the explanation and the solution. Oaks et al. (2013) suggested the method's behavior is caused by the strong weight of uniform priors on divergence times leading to smaller marginal likelihoods for models with more divergence-time parameters (Hypothesis 1); they proposed alternative priors to avoid strongly weighted posteriors. Hickerson et al. (2014) suggested numerical approximation error causes msBayes analyses to be biased toward models of clustered divergences (Hypothesis 2); they proposed using narrow, empirical uniform priors. Here, we demonstrate that the approach of Hickerson et al. (2014) does not mitigate the method's tendency to erroneously support models of clustered divergences, and often excludes the true parameter values. Our results also show that the tendency of msBayes analyses to support models of shared divergences is primarily due to Hypothesis 1. This series of papers demonstrates that if our prior assumptions place too much weight in unlikely regions of parameter space, such that the exact posterior supports the wrong model of evolutionary history, no amount of computation can rescue our inference. Fortunately, more flexible distributions that accommodate prior uncertainty about parameters, without placing excessive weight in vast regions of parameter space with low likelihood, increase the method's robustness and power to detect temporal variation in divergences. Comment: 24 pages, 4 figures, 1 table, 14 pages of supporting information with 10 supporting figures
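    The mechanism behind Hypothesis 1 can be seen in a toy one-parameter model (numbers invented, unrelated to msBayes itself): averaging a fixed likelihood over a wider uniform prior yields a smaller marginal likelihood, which is how each extra free divergence-time parameter penalizes a model.

```python
# Toy illustration: marginal likelihood under a Uniform(0, width) prior
# shrinks as the prior widens, because prior mass is spread over regions
# of low likelihood. Data value and widths are invented.
import math

def marginal_likelihood(x, width, grid=10_000):
    """Average a Normal(theta, 1) likelihood of x over a Uniform(0, width)
    prior on theta, via a midpoint-rule grid."""
    thetas = [width * (i + 0.5) / grid for i in range(grid)]
    likes = [math.exp(-0.5 * (x - t) ** 2) / math.sqrt(2 * math.pi)
             for t in thetas]
    # integral of likelihood * (1/width) d(theta) with d(theta) = width/grid
    return sum(likes) / grid

x = 1.0
narrow = marginal_likelihood(x, width=2.0)
wide = marginal_likelihood(x, width=20.0)
print(narrow > wide)  # True: broader prior -> smaller marginal likelihood
```

A model with several divergence-time parameters pays this "spread" penalty once per parameter, so uniform priors that are too wide tilt posterior model comparison toward the one-divergence (clustered) model regardless of the data.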

    Internet-based solutions to support distributed manufacturing

    With globalisation and constant changes in the marketplace, enterprises are adapting to face new challenges. Strategic corporate alliances to share knowledge, expertise and resources therefore represent an advantage in an increasingly competitive world. This has led to the integration of companies, customers, suppliers and partners using networked environments. This thesis presents three novel solutions in the tooling area, developed for Seco Tools Ltd, UK. These approaches implement a proposed distributed computing architecture that uses Internet technologies to assist geographically dispersed tooling engineers in process planning tasks. The systems are summarised as follows. TTS is a Web-based system to support engineers and technical staff in the task of providing technical advice to clients. Seco sales engineers access the system from remote machining sites and submit, retrieve and update the required tooling data located in databases at the company headquarters. The communication platform used for this system provides an effective mechanism to share information nationwide. The system implements efficient methods, such as data relaxation techniques, confidence scores and importance levels of attributes, to help the user find the closest solutions when specific requirements are not fully matched in the database. Cluster-F has been developed to assist engineers and clients in the assessment of cutting parameters for the tooling process. In this approach the Internet acts as a vehicle to transport data between users and the database. Cluster-F is a knowledge-discovery (KD) approach that makes use of clustering and fuzzy set techniques. The novel proposal in this system is the use of fuzzy set concepts to obtain the proximity matrix that guides the classification of the data; hierarchical clustering methods are then applied to link the closest objects. A general KD methodology applying rough set concepts is also proposed in this research. It covers data redundancy, identification of relevant attributes, detection of data inconsistency, and generation of knowledge rules. R-sets, the third proposed solution, has been developed using this KD methodology. This system evaluates the variables of the tooling database to analyse known and unknown relationships in the data generated after the execution of technical trials. The aim is to discover cause-effect patterns from selected attributes contained in the database. A fourth system, DBManager, was also developed to administer the system's user accounts, the sales engineers' accounts, and the monitoring of tool-trial data. It supports the implementation of the proposed distributed architecture and maintains the users' accounts and access restrictions for the systems running under this architecture.
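    The Cluster-F pipeline described above (fuzzy proximity matrix, then hierarchical agglomeration) can be sketched roughly as follows. The triangular membership function, the threshold, and the cutting-speed values are all assumptions for illustration; the thesis's actual membership functions and data are not reproduced here.

```python
# Sketch of fuzzy-proximity + single-linkage clustering (Cluster-F idea).
# Membership function, spread, threshold, and data are invented.

def fuzzy_proximity(a, b, spread=1.0):
    """Triangular membership: 1.0 for identical values, 0.0 beyond `spread`."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def single_linkage(points, threshold, spread=1.0):
    """Repeatedly merge the clusters whose closest members have
    fuzzy proximity >= threshold."""
    clusters = [{i} for i in range(len(points))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                prox = max(fuzzy_proximity(points[p], points[q], spread)
                           for p in clusters[i] for q in clusters[j])
                if prox >= threshold:
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Hypothetical cutting speeds (m/min, invented): two natural groups.
speeds = [100.0, 102.0, 101.0, 250.0, 252.0]
clusters = single_linkage(speeds, threshold=0.5, spread=10.0)
print(clusters)  # the three low speeds cluster apart from the two high ones
```

The fuzzy step matters because "closeness" of cutting parameters is graded rather than crisp; the proximity matrix encodes that grading before any linkage is computed.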

    Machine learning and statistical techniques : an application to the prediction of insolvency in Spanish non-life insurance companies

    Prediction of the insolvency of insurance companies has arisen as an important problem in the field of financial research. Most methods applied in the past to tackle this issue are traditional statistical techniques that use financial ratios as explanatory variables. However, these variables often do not satisfy statistical assumptions, which complicates the application of these methods. In this paper, a comparative study of the performance of two non-parametric machine learning techniques (See5 and Rough Set) is carried out. We have applied the two methods to the problem of predicting the insolvency of Spanish non-life insurance companies on the basis of a set of financial ratios. We also compare these methods with three classical and well-known techniques: one belonging to the field of machine learning (the Multilayer Perceptron) and two statistical ones (Linear Discriminant Analysis and Logistic Regression). Results indicate higher performance for the machine learning techniques. Furthermore, See5 and Rough Set provide easily understandable and interpretable decision models, which shows that these methods can be a useful tool for evaluating the insolvency of insurance firms.
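    As a rough sketch of the kind of comparison the paper performs: below, a one-split decision stump stands in for See5-style rule induction and a hand-rolled logistic regression for the statistical baseline. The solvency ratios are invented; none of this is the paper's data, variables, or code.

```python
# Interpretable rule model vs. logistic regression on invented solvency data.
import math

# (solvency_ratio, insolvent?) -- invented training sample
data = [(0.2, 1), (0.3, 1), (0.4, 1), (0.9, 0), (1.1, 0), (1.3, 0)]

def best_stump(data):
    """Pick the threshold maximizing training accuracy for the readable
    rule 'insolvent if ratio < threshold'."""
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in data}):
        acc = sum((x < t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def fit_logistic(data, lr=0.5, epochs=2000):
    """Plain SGD logistic regression on one feature."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

t, acc = best_stump(data)
w, b = fit_logistic(data)
p_low = 1 / (1 + math.exp(-(w * 0.2 + b)))  # insolvency prob., low ratio
print(t, acc)  # 0.9 1.0
```

Both models separate this toy sample perfectly, but only the stump yields a rule a regulator can read off directly ("insolvent if ratio < 0.9"), which is the interpretability advantage the paper attributes to See5 and Rough Set.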