
    Functional-bandwidth kernel for Support Vector Machine with Functional Data: An alternating optimization algorithm

    Functional Data Analysis (FDA) is devoted to the study of data which are functions. Support Vector Machine (SVM) is a benchmark tool for classification, in particular of functional data. SVM is frequently used with a kernel (e.g., Gaussian) which involves a scalar bandwidth parameter. In this paper, we propose to use kernels with functional bandwidths. In this way, accuracy may be improved, and the time intervals critical for classification are identified. Tuning the functional parameters of the new kernel is a challenging task, expressed as a continuous optimization problem and solved by means of a heuristic. Our experiments with benchmark data sets show the advantages of using functional parameters and the effectiveness of our approach.
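
    To make the functional-bandwidth idea concrete, the following is a minimal sketch, assuming a Gaussian-type kernel on discretized curves with a per-time-point bandwidth sigma(t); the sampling grid, the bandwidth profile and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def functional_gaussian_kernel(x, y, sigma, t):
    """Gaussian-type kernel between two discretized functions x(t), y(t).

    x, y  : arrays of function values sampled on the grid t
    sigma : array of per-time-point bandwidths sigma(t) > 0 (functional bandwidth)
    t     : uniform sampling grid, used to approximate the integral numerically
    """
    dt = t[1] - t[0]
    # Weighted squared difference integrated over the time domain:
    #   d2 ~ integral of ((x(t) - y(t)) / sigma(t))^2 dt
    d2 = np.sum(((x - y) / sigma) ** 2) * dt
    return np.exp(-d2)

# Illustrative usage: a bandwidth that is small (i.e. more discriminative)
# on an interval assumed to be critical for classification.
t = np.linspace(0.0, 1.0, 101)
x = np.sin(2 * np.pi * t)
y = np.sin(2 * np.pi * t) + 0.1 * (t > 0.5)
sigma = np.where((t > 0.4) & (t < 0.7), 0.05, 1.0)
print(functional_gaussian_kernel(x, y, sigma, t))
```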

    A continuous model for dynamic pricing under costly price modifications

    This paper presents a heuristic method to solve a dynamic pricing problem under costly price modifications. This is a remarkably difficult problem that is solvable exactly only in a few special cases. The method is applied to a more general form of the problem and is numerically tested for a variety of demand functions from the literature. The results show that the method is quite accurate, typically approximating the optimal profit to well within 1%. More importantly, the accuracy tends to improve as the number of price changes increases, precisely when the underlying optimization problem becomes much harder, which makes this approach particularly desirable.
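
    As a rough illustration of the kind of objective involved (not the paper's model), the sketch below evaluates the profit of a piecewise-constant price path when every price modification incurs a fixed cost; the linear demand function, the horizon and the cost value are invented placeholders.

```python
def profit_of_price_path(prices, breakpoints, horizon, change_cost,
                         demand=lambda p: max(0.0, 100.0 - 2.0 * p)):
    """Profit of a piecewise-constant price path on [0, horizon].

    prices      : price used on each segment
    breakpoints : times at which the price is changed (len(prices) - 1 of them)
    change_cost : fixed cost charged for every price modification
    demand      : demand rate as a function of price (assumed linear here)
    """
    edges = [0.0] + list(breakpoints) + [horizon]
    revenue = sum(p * demand(p) * (b - a)
                  for p, a, b in zip(prices, edges[:-1], edges[1:]))
    return revenue - change_cost * (len(prices) - 1)

# Two price changes over a horizon of 1.0, each costing 5.0.
print(profit_of_price_path([30.0, 25.0, 20.0], [0.4, 0.7], 1.0, 5.0))
```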

    A Domain Specific Approach to High Performance Heterogeneous Computing

    Users of heterogeneous computing systems face two problems: first, understanding the trade-off relationships between the observable characteristics of their applications, such as latency and quality of the result; and second, exploiting knowledge of these characteristics to allocate work to distributed computing platforms efficiently. A domain-specific approach addresses both of these problems. By considering a subset of operations or functions, models of the observable characteristics, or domain metrics, may be formulated in advance and populated at run time for task instances. These metric models can then be used to express the allocation of work as a constrained integer program, which can be solved using heuristics, machine learning or Mixed Integer Linear Programming (MILP) frameworks. These claims are illustrated using the example domain of derivatives pricing in computational finance, with the domain metrics of workload latency (or makespan) and pricing accuracy. For a large, varied workload of 128 Black-Scholes and Heston model-based option pricing tasks, running upon a diverse array of 16 multicore CPU, GPU and FPGA platforms, predictions made by models of both the makespan and accuracy are generally within 10% of the run-time performance. When these models are used as inputs to machine learning and MILP-based workload allocation approaches, a latency improvement of up to 24 and 270 times over the heuristic approach is seen. Comment: 14 pages, preprint draft, minor revision.
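
    A hedged sketch of how such an allocation could be posed as an integer program, here using the open-source PuLP modeller; the task set, per-platform latency and accuracy predictions, and the accuracy floor are invented placeholders, not values or formulations taken from the paper.

```python
import pulp

# Hypothetical per-(task, platform) predictions from the domain metric models.
tasks = ["bs_0", "bs_1", "heston_0"]
platforms = ["cpu", "gpu", "fpga"]
latency = {("bs_0", "cpu"): 4.0, ("bs_0", "gpu"): 1.0, ("bs_0", "fpga"): 2.0,
           ("bs_1", "cpu"): 5.0, ("bs_1", "gpu"): 1.5, ("bs_1", "fpga"): 2.5,
           ("heston_0", "cpu"): 9.0, ("heston_0", "gpu"): 3.0, ("heston_0", "fpga"): 6.0}
accuracy = {(t, p): 0.9 for t in tasks for p in platforms}

prob = pulp.LpProblem("workload_allocation", pulp.LpMinimize)

# x[t][p] = 1 if task t is assigned to platform p.
x = pulp.LpVariable.dicts("x", (tasks, platforms), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan  # objective: minimise the makespan

for t in tasks:  # every task runs on exactly one platform
    prob += pulp.lpSum(x[t][p] for p in platforms) == 1

for p in platforms:  # the load on each platform bounds the makespan
    prob += pulp.lpSum(latency[(t, p)] * x[t][p] for t in tasks) <= makespan

# Illustrative accuracy constraint: average predicted accuracy at least 0.85.
prob += pulp.lpSum(accuracy[(t, p)] * x[t][p]
                   for t in tasks for p in platforms) >= 0.85 * len(tasks)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    chosen = [p for p in platforms if pulp.value(x[t][p]) > 0.5]
    print(t, "->", chosen[0])
print("makespan:", pulp.value(makespan))
```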

    Modelling a Fractionated System of Deductive Reasoning over Categorical Syllogisms

    The study of deductive reasoning has been a major research paradigm in psychology for decades. Recent additions to this literature have focused heavily on neuropsychological evidence. Such a practice is useful for identifying regions associated with particular functions, but fails to clearly define the specific interactions and timescale of these functions. Computational modelling provides a method for creating different cognitive architectures for simulating deductive processes, and ultimately determining which architectures are capable of modelling human reasoning. This thesis details a computational model for solving categorical syllogisms utilizing a fractionated system of brain regions. Lesions are applied to the formal and heuristic systems to simulate accuracy and reaction-time data for bilateral parietal and frontotemporal patients. The model successfully combines belief bias and other known cognitive biases with a mental-models formal approach to recreate the congruency-by-group effect present in the human data. Implications are drawn for major theories of reasoning.
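
    Purely as an illustration of the dual-route idea (not the thesis's actual architecture), the toy sketch below lets a "formal" route judge logical validity, a "heuristic" route follow belief bias, and a lesion parameter degrade the formal route; all names, probabilities and parameter values are invented.

```python
import random

def respond(conclusion_is_valid, conclusion_is_believable,
            formal_reliability=0.9, heuristic_weight=0.3, formal_lesion=0.0):
    """Toy dual-route response: returns True if the model endorses the conclusion."""
    # A lesion reduces how often the formal (mental-models-like) route is consulted.
    formal_available = random.random() < formal_reliability * (1.0 - formal_lesion)
    if formal_available and random.random() > heuristic_weight:
        return conclusion_is_valid          # formal route decides
    return conclusion_is_believable         # fall back on belief bias

# Endorsement rates for a belief-incongruent item (valid but unbelievable),
# with and without a simulated lesion to the formal system.
for lesion in (0.0, 0.8):
    rate = sum(respond(True, False, formal_lesion=lesion) for _ in range(10_000)) / 10_000
    print(f"lesion={lesion}: endorsement rate {rate:.2f}")
```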

    Using a unified measure function for heuristics, discretization, and rule quality evaluation in Ant-Miner

    Ant-Miner is a classification rule discovery algorithm that is based on the Ant Colony Optimization (ACO) meta-heuristic. cAnt-Miner is the extended version of the algorithm that handles continuous attributes on-the-fly during the rule construction process, while ?Ant-Miner is an extension that selects the rule class prior to its construction and utilizes multiple pheromone types, one for each permitted rule class. In this paper, we combine these two algorithms to derive a new approach for learning classification rules using ACO. The proposed approach uses a single measure function 1) to compute the heuristics for rule term selection, 2) as a criterion for discretizing continuous attributes, and 3) to evaluate the quality of the constructed rule for the pheromone update. We explore the effect of using different measure functions on the output model in terms of predictive accuracy and model size. Empirical evaluations found that the hypothesis that different measure functions produce different results is accepted according to Friedman's statistical test.
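
    A hedged sketch of the general idea of one measure serving three roles; the coverage-weighted entropy-reduction measure and the way it is plugged into heuristic computation, threshold selection and rule quality below are illustrative assumptions, not the paper's definitions.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values()) if n else 0.0

def measure(covered_labels, all_labels):
    """A single coverage-weighted entropy-reduction measure, reused for the
    heuristics, the discretization and the rule quality (an illustrative choice)."""
    if not covered_labels:
        return 0.0
    coverage = len(covered_labels) / len(all_labels)
    return coverage * (entropy(all_labels) - entropy(covered_labels))

# 1) Heuristic for term selection: score each candidate term by the measure.
def term_heuristic(term, examples, labels):
    covered = [y for x, y in zip(examples, labels) if term(x)]
    return measure(covered, labels)

# 2) Discretization: pick the threshold of a continuous attribute maximizing the measure.
def best_threshold(values, labels):
    return max(sorted(set(values)),
               key=lambda v: measure([y for x, y in zip(values, labels) if x <= v], labels))

# 3) Rule quality for the pheromone update: the measure of the rule's covered set.
def rule_quality(rule, examples, labels):
    covered = [y for x, y in zip(examples, labels) if rule(x)]
    return measure(covered, labels)

# Tiny illustration: choosing a split point for a continuous attribute.
values = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
labels = ["a", "a", "a", "b", "b", "b"]
print("chosen threshold:", best_threshold(values, labels))
```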

    A Faster Method to Estimate Closeness Centrality Ranking

    Closeness centrality is one way of measuring how central a node is in a given network. The closeness centrality measure assigns a centrality value to each node based on its accessibility to the whole network. In real-life applications, we are mainly interested in ranking nodes by their centrality values. The classical method to compute the rank of a node first computes the closeness centrality of all nodes and then compares them to get its rank. Its time complexity is O(n·m + n), where n is the total number of nodes and m is the total number of edges in the network. In the present work, we propose a heuristic method to quickly estimate the closeness rank of a node in O(α·m) time, where α = 3. We also propose an improved method using a uniform sampling technique; it estimates the rank more accurately and has time complexity O(α·m), where α ≈ 10-100. This is an excellent improvement over the classical centrality ranking method. The efficiency of the proposed methods is verified on real-world scale-free social networks using absolute and weighted error functions.
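
    A hedged sketch of the sampling flavour of such an approach (not the paper's exact estimator): compute the target node's closeness with one BFS, compute closeness for a small uniform sample of nodes, and estimate the rank from the fraction of sampled nodes that are more central; the sample size and the estimator are assumptions.

```python
import random
import networkx as nx

def closeness(G, v):
    """Closeness of v in a connected graph: (n - 1) / sum of shortest-path distances."""
    dist = nx.single_source_shortest_path_length(G, v)  # one BFS, O(m)
    return (len(G) - 1) / sum(dist.values())

def estimate_closeness_rank(G, v, sample_size=100, seed=0):
    """Estimate the rank of v (1 = most central) by uniformly sampling nodes
    and counting how many of them have higher closeness than v."""
    rng = random.Random(seed)
    target = closeness(G, v)
    others = [u for u in G if u != v]
    sample = rng.sample(others, min(sample_size, len(others)))
    higher_fraction = sum(closeness(G, u) > target for u in sample) / len(sample)
    return 1 + round(higher_fraction * len(others))

# Illustrative usage on a synthetic scale-free network.
G = nx.barabasi_albert_graph(2000, 3, seed=1)
node = 42
print("estimated rank:", estimate_closeness_rank(G, node))
exact = nx.closeness_centrality(G)  # classical method: closeness of all nodes
print("exact rank:", 1 + sum(c > exact[node] for c in exact.values()))
```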