
    On the inference and management of macro-actions in forward-chaining planning

    In this paper we discuss techniques for the online generation of macro-actions as part of the planning process and demonstrate their use in a forward-chaining planning framework. The macro-actions learnt are created specifically at places in the search space where the heuristic is not informative. We present results showing that using macro-actions generated during planning can improve planning performance.
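    As a rough illustration of the idea only (not the authors' planner), the Python sketch below records the action sequence used to escape a heuristic plateau during forward search and stores it as a macro-action; the state space, heuristic, and action set are toy placeholders.

```python
# Illustrative sketch (not the paper's implementation): during forward-chaining
# search, record the action sequence used to escape a heuristic plateau and
# keep it as a macro-action for later reuse. State, heuristic and actions are
# toy placeholders.
from collections import deque

def escape_plateau(state, h, successors):
    """Breadth-first search for a successor with strictly better heuristic.
    Returns (new_state, action_sequence) or (None, []) if none is found."""
    start_h = h(state)
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        s, path = frontier.popleft()
        for action, s2 in successors(s):
            if s2 in seen:
                continue
            seen.add(s2)
            if h(s2) < start_h:
                return s2, path + [action]
            frontier.append((s2, path + [action]))
    return None, []

def plan_with_macros(start, goal, h, successors):
    macros = []                    # learnt macro-actions: lists of primitive actions
    state, plan = start, []
    while state != goal:
        nxt, seq = escape_plateau(state, h, successors)
        if nxt is None:
            return None, macros
        if len(seq) > 1:           # escaping took several steps: we were on a plateau
            macros.append(seq)     # remember the escape sequence as a macro
        plan += seq
        state = nxt
    return plan, macros

# Toy domain: states are integers, the goal is 0, the heuristic is flat on 3..5.
if __name__ == "__main__":
    h = lambda s: 3 if 3 <= s <= 5 else abs(s)
    succ = lambda s: [("dec", s - 1), ("inc", s + 1)]
    plan, macros = plan_with_macros(8, 0, h, succ)
    print("plan:", plan)
    print("learnt macros:", macros)
```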

    Rational Deployment of CSP Heuristics

    Heuristics are crucial tools for reducing search effort in many fields of AI. To be effective, a heuristic must be cheap to compute as well as provide useful guidance to the search algorithm. However, some well-known heuristics that are very effective at reducing backtracking are so computationally heavy that the gain from deploying them in a search algorithm can be outweighed by their overhead. We propose a rational metareasoning approach to deciding when to deploy heuristics, using CSP backtracking search as a case study. In particular, a value-of-information approach is taken to the adaptive deployment of solution-count estimation heuristics for value ordering. Empirical results show that the proposed mechanism successfully balances the trade-off between reduced backtracking and heuristic computational overhead, resulting in a significant overall reduction in search time.
    Comment: 7 pages, 2 figures; to appear in IJCAI-2011, http://www.ijcai.org
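    A minimal sketch of the value-of-information style decision, under assumed bookkeeping that is not from the paper: the expensive value-ordering heuristic is deployed only when its estimated time saving exceeds its estimated computation cost, with both estimates maintained as running averages.

```python
# Sketch only: deploy an expensive heuristic when its expected benefit
# outweighs its expected cost. The running averages are hypothetical
# statistics assumed to be gathered by the search, not the paper's estimator.
class HeuristicDeployer:
    def __init__(self):
        self.avg_heuristic_cost = 0.0   # mean seconds spent computing the heuristic
        self.avg_saving = 0.0           # mean seconds of search time it saved
        self.samples = 0

    def should_deploy(self):
        # With little data, explore: run the heuristic to gather statistics.
        if self.samples < 5:
            return True
        # Rational deployment: expected saving must exceed expected cost.
        return self.avg_saving > self.avg_heuristic_cost

    def record(self, heuristic_cost, time_saved):
        self.samples += 1
        n = self.samples
        self.avg_heuristic_cost += (heuristic_cost - self.avg_heuristic_cost) / n
        self.avg_saving += (time_saved - self.avg_saving) / n

# Hypothetical usage inside a backtracking search:
deployer = HeuristicDeployer()
if deployer.should_deploy():
    # ... run the solution-count heuristic, measure its cost and estimated saving ...
    deployer.record(heuristic_cost=0.02, time_saved=0.15)
```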

    Sensing Throughput Optimization in Fading Cognitive Multiple Access Channels With Energy Harvesting Secondary Transmitters

    The paper investigates the problem of maximizing expected sum throughput in a fading multiple-access cognitive radio network in which the secondary user (SU) transmitters have energy harvesting capability and perform cooperative spectrum sensing. We formulate the problem as maximization of the sum capacity of the cognitive multiple-access network over a finite time horizon, subject to a time-averaged interference constraint at the primary user (PU) and almost-sure energy causality constraints at the SUs. The problem is a mixed-integer non-linear program with respect to two decision variables, namely the spectrum access decision and the spectrum sensing decision, and the continuous variables sensing time and transmission power. In general, this problem is known to be NP-hard. For optimization over the two decision variables, we use an exhaustive search policy when the time horizon is short and a heuristic policy for longer horizons. For given values of the decision variables, the problem reduces to a joint optimization over the SU transmission power and sensing time, which is non-convex in nature. We solve the resulting optimization problem as an alternating convex optimization problem for both non-causal and causal channel state information and harvested energy information patterns at the SU base station (SBS) or fusion center (FC). We present an analytic solution for the non-causal scenario with infinite battery capacity for a general finite-horizon problem. We formulate the problem with causal information and finite battery capacity as a stochastic control problem and solve it using dynamic programming. Numerical results are presented to illustrate the performance of the various algorithms.
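    The alternating-optimization idea for the continuous variables can be illustrated as follows; the throughput model, detection-gain function, and constants in this sketch are made-up placeholders and not the paper's formulation.

```python
# Toy sketch of the alternating-optimization idea only (not the paper's model):
# with the binary sensing/access decisions fixed, alternately optimize the SU
# transmission power for a fixed sensing time, then the sensing time for a
# fixed power. All constants below are arbitrary placeholders.
import math

T = 1.0            # slot length (s)
E = 0.8            # harvested energy assumed available in the slot (J)
SNR = 5.0          # SU link SNR per unit power, placeholder

def detection_gain(tau):
    # Longer sensing -> better detection -> fewer interference penalties (toy model).
    return 1.0 - math.exp(-8.0 * tau)

def throughput(tau, p):
    if tau <= 0 or tau >= T or p <= 0:
        return 0.0
    return (1 - tau / T) * detection_gain(tau) * math.log2(1 + SNR * p)

def argmax_1d(f, lo, hi, steps=2000):
    """Crude 1-D grid maximization, standing in for the convex subproblem solver."""
    best_x, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

tau, p = 0.1, 0.1
for _ in range(20):                      # alternate until (approximately) stable
    p_max = E / (T - tau)                # energy-causality bound on power
    p = argmax_1d(lambda q: throughput(tau, q), 1e-3, p_max)
    tau = argmax_1d(lambda t: throughput(t, min(p, E / (T - t))), 1e-3, T - 1e-3)

print(f"sensing time ~ {tau:.3f} s, power ~ {p:.3f} W, "
      f"throughput ~ {throughput(tau, p):.3f} bits/s/Hz")
```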

    Optimized complex power quality classifier using one vs. rest support vector machine

    Nowadays, power quality issues are becoming a significant research topic because of the increasing penetration of very sensitive devices and considerable renewable energy sources. In general, most previous power quality classification techniques focused on single power quality events and did not include an optimal feature selection process. This paper presents a classification system that employs the Wavelet Transform and the RMS profile to extract the main features of the measured waveforms containing either single or complex disturbances. A data mining process is designed to select the optimal set of features that best describes each disturbance present in the waveform. Support Vector Machine binary classifiers organized in a "One vs. Rest" architecture are individually optimized to classify single and complex disturbances. The parameters that rule the performance of each binary classifier are also individually adjusted using a grid search algorithm that helps them achieve optimal performance. This specialized process significantly improves the total classification accuracy. Several single and complex disturbances were simulated in order to train and test the algorithm. The results show that the classifier is capable of identifying more than 99% of single disturbances and more than 97% of complex disturbances.
    Fil: de Yong, David Marcelo. Universidad Nacional de Río Cuarto. Facultad de Ingeniería. Departamento de Electricidad y Electrónica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba; Argentina
    Fil: Bhowmik, Sudipto. Nexant Inc; United States
    Fil: Magnago, Fernando. Universidad Nacional de Río Cuarto. Facultad de Ingeniería. Departamento de Electricidad y Electrónica; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba; Argentina
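    A hedged sketch of the one-vs.-rest arrangement with individually tuned binary SVMs, using scikit-learn on synthetic features rather than the wavelet/RMS features of real power-quality waveforms:

```python
# Sketch only: per-class binary SVMs tuned individually with a grid search,
# mimicking the "one vs. rest with individually optimized binary classifiers"
# idea. The features here are random placeholders, not power-quality features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1]}
binary_models = {}
for cls in np.unique(y_tr):
    yb = (y_tr == cls).astype(int)                 # one-vs-rest labels for this class
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
    search.fit(X_tr, yb)                           # each class gets its own C, gamma
    binary_models[cls] = search.best_estimator_

# Predict with the class whose binary SVM gives the largest decision score.
classes = sorted(binary_models)
scores = np.column_stack([binary_models[c].decision_function(X_te) for c in classes])
y_pred = np.array(classes)[scores.argmax(axis=1)]
print("accuracy on synthetic data:", (y_pred == y_te).mean())
```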

    Fuzzy Adaptive Tuning of a Particle Swarm Optimization Algorithm for Variable-Strength Combinatorial Test Suite Generation

    Combinatorial interaction testing is an important software testing technique that has attracted considerable recent interest. It can reduce the number of test cases needed by considering interactions between combinations of input parameters. Empirical evidence shows that it effectively detects faults, in particular for highly configurable software systems. In real-world software testing, the input variables may vary in how strongly they interact; variable-strength combinatorial interaction testing (VS-CIT) can exploit this for higher effectiveness. The generation of variable-strength test suites is an NP-hard computational problem [BestounKamalFuzzy2017]. Research has shown that stochastic population-based algorithms such as particle swarm optimization (PSO) can be efficient compared to alternatives for VS-CIT problems. Nevertheless, they require detailed control of the exploitation-exploration trade-off to avoid premature convergence (i.e. being trapped in local optima) and to enhance solution diversity. Here, we present a new variant of PSO based on a Mamdani fuzzy inference system [Camastra2015; TSAKIRIDIS2017257; KHOSRAVANIAN2016280] to permit adaptive selection of its global and local search operations. We detail the design of this combined algorithm and evaluate it through experiments on multiple synthetic and benchmark problems. We conclude that fuzzy adaptive selection of global and local search operations is, at least, feasible, as it performs only second-best to a discrete variant of PSO, called DPSO. In terms of the best mean test suite size, the fuzzy adaptation even outperforms DPSO occasionally. We discuss the reasons behind this performance and outline relevant areas of future work.
    Comment: 21 pages
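    The following simplified sketch (not the paper's Mamdani controller) shows a PSO loop whose inertia weight is adapted each iteration from fuzzy memberships of the current swarm diversity; the objective is a plain sphere function rather than a test-suite-generation problem.

```python
# Simplified sketch: PSO with a fuzzy-adapted inertia weight. Two triangular
# membership functions and a weighted defuzzification stand in for a full
# Mamdani system; all tuning constants are placeholders.
import random

def sphere(x):
    return sum(v * v for v in x)

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_inertia(diversity):
    # Rule 1: IF diversity is LOW  THEN inertia is HIGH (explore more).
    # Rule 2: IF diversity is HIGH THEN inertia is LOW  (exploit more).
    low = tri(diversity, -0.1, 0.0, 0.5)
    high = tri(diversity, 0.0, 0.5, 1.1)
    w_low, w_high = 0.4, 0.9
    total = low + high
    return (low * w_high + high * w_low) / total if total else 0.7

dim, n, rng = 5, 20, random.Random(1)
pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)

for _ in range(200):
    # Normalized diversity: mean distance of particles from the swarm centroid.
    centroid = [sum(p[d] for p in pos) / n for d in range(dim)]
    diversity = sum(sum((p[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5
                    for p in pos) / (n * 10.0)
    w = fuzzy_inertia(min(diversity, 1.0))
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)

print("best objective found:", sphere(gbest))
```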

    Prioritized Random MAC Optimization via Graph-based Analysis

    Motivated by the analogy between successive interference cancellation and iterative belief propagation on erasure channels, irregular repetition slotted ALOHA (IRSA) strategies have received a lot of attention in the design of medium access control protocols. IRSA schemes have mostly been analyzed in theoretical scenarios with homogeneous sources, where they are shown to substantially improve system performance compared to classical slotted ALOHA protocols. In this work, we consider generic systems where sources in different importance classes compete for a common channel. We propose a new prioritized IRSA algorithm and derive the probability of correctly resolving collisions for data from each source class. We then use our theoretical analysis to formulate a new optimization problem for selecting the transmission strategies of heterogeneous sources. We optimize both the replication probability per class and the source rate per class so that the overall system utility is maximized. We then propose a heuristic-based algorithm for selecting the transmission strategy, built on intrinsic characteristics of the iterative decoding methods adopted for recovering from collisions. Experimental results validate the accuracy of the theoretical study and show the gain of well-chosen prioritized transmission strategies for the transmission of data from heterogeneous classes over shared wireless channels.
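    A small Monte-Carlo sketch of the prioritized IRSA idea (not the paper's analysis): users in two priority classes repeat their packets a different number of times, and the receiver resolves collisions by iterative successive interference cancellation; class sizes and repetition degrees are arbitrary placeholders.

```python
# Illustrative Monte-Carlo sketch of prioritized IRSA: each class uses its own
# repetition degree, and decoding proceeds by repeatedly taking singleton slots
# and cancelling the decoded user's replicas elsewhere (SIC).
import random

def simulate(n_slots=100, users=((30, 3), (30, 2)), trials=200, seed=0):
    """users: tuple of (number_of_users, repetition_degree) per priority class."""
    rng = random.Random(seed)
    resolved_per_class = [0] * len(users)
    for _ in range(trials):
        slots = [set() for _ in range(n_slots)]   # slot -> set of user ids
        replicas = {}                             # user id -> (class, chosen slots)
        uid = 0
        for cls, (count, degree) in enumerate(users):
            for _ in range(count):
                chosen = rng.sample(range(n_slots), degree)
                for s in chosen:
                    slots[s].add(uid)
                replicas[uid] = (cls, chosen)
                uid += 1
        # Iterative SIC: decode singleton slots, cancel that user's replicas.
        progress = True
        while progress:
            progress = False
            for s in range(n_slots):
                if len(slots[s]) == 1:
                    u = next(iter(slots[s]))
                    cls, chosen = replicas[u]
                    resolved_per_class[cls] += 1
                    for s2 in chosen:
                        slots[s2].discard(u)
                    progress = True
    for cls, (count, _) in enumerate(users):
        print(f"class {cls}: P(resolved) ~ {resolved_per_class[cls] / (count * trials):.3f}")

simulate()
```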