20 research outputs found

    Upper bounds for the 2-hued chromatic number of graphs in terms of the independence number

    A 2-hued coloring of a graph $G$ (also known as conditional $(k,2)$-coloring and dynamic coloring) is a coloring such that for every vertex $v \in V(G)$ of degree at least 2, the neighbors of $v$ receive at least 2 colors. The smallest integer $k$ such that $G$ has a 2-hued coloring with $k$ colors is called the 2-hued chromatic number of $G$ and is denoted by $\chi_2(G)$. In this paper, we show that if $G$ is a regular graph, then $\chi_{2}(G) - \chi(G) \leq 2\log_{2}(\alpha(G)) + \mathcal{O}(1)$; if $G$ is a graph with $\delta(G) \geq 2$, then $\chi_{2}(G) - \chi(G) \leq 1 + \lceil \sqrt[\delta-1]{4\Delta^{2}} \rceil \left( 1 + \log_{\frac{2\Delta(G)}{2\Delta(G)-\delta(G)}}(\alpha(G)) \right)$; and in the general case, if $G$ is a graph, then $\chi_{2}(G) - \chi(G) \leq 2 + \min \left\lbrace \alpha^{\prime}(G), \frac{\alpha(G)+\omega(G)}{2} \right\rbrace$.
    Comment: Dynamic chromatic number; conditional (k, 2)-coloring; 2-hued chromatic number; 2-hued coloring; Independence number; Probabilistic method
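    As a concrete illustration of the definition above (not code from the paper), the following minimal Python sketch checks whether a given vertex coloring is a proper 2-hued coloring; the adjacency-list representation and the example colorings of the 5-cycle are assumptions made for the example.

```python
# Minimal sketch (not from the paper): verify the 2-hued coloring condition.
# The graph is given as an adjacency-list dict {vertex: list of neighbors}.

def is_2_hued_coloring(adj, coloring):
    """True iff `coloring` is a proper coloring in which every vertex of
    degree at least 2 sees at least 2 distinct colors among its neighbors."""
    for v, neighbors in adj.items():
        # Proper coloring: no edge joins two vertices of the same color.
        if any(coloring[v] == coloring[u] for u in neighbors):
            return False
        # 2-hued condition for vertices of degree >= 2.
        if len(neighbors) >= 2 and len({coloring[u] for u in neighbors}) < 2:
            return False
    return True


if __name__ == "__main__":
    # The 5-cycle C_5 with vertices 0..4.
    c5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [0, 3]}
    # A proper 3-coloring of C_5 that is NOT 2-hued (vertex 1 sees one color).
    print(is_2_hued_coloring(c5, {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}))  # False
    # Giving every vertex its own color is both proper and 2-hued.
    print(is_2_hued_coloring(c5, {v: v for v in c5}))              # True
```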

    Sigma Partitioning: Complexity and Random Graphs

    A sigma partitioning of a graph $G$ is a partition of the vertices into sets $P_1, \ldots, P_k$ such that for every two adjacent vertices $u$ and $v$ there is an index $i$ such that $u$ and $v$ have different numbers of neighbors in $P_i$. The sigma number of a graph $G$, denoted by $\sigma(G)$, is the minimum number $k$ such that $G$ has a sigma partitioning $P_1, \ldots, P_k$. Also, a lucky labeling of a graph $G$ is a function $\ell : V(G) \rightarrow \mathbb{N}$ such that for every two adjacent vertices $v$ and $u$ of $G$, $\sum_{w \sim v} \ell(w) \neq \sum_{w \sim u} \ell(w)$ (where $x \sim y$ means that $x$ and $y$ are adjacent). The lucky number of $G$, denoted by $\eta(G)$, is the minimum number $k$ such that $G$ has a lucky labeling $\ell : V(G) \rightarrow \mathbb{N}_k$. It was conjectured in [Inform. Process. Lett., 112(4):109--112, 2012] that it is $\mathbf{NP}$-complete to decide whether $\eta(G) = 2$ for a given 3-regular graph $G$. In this work, we prove this conjecture. Among other results, we give an upper bound of five for the sigma number of a uniformly random graph.
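    For illustration only (not taken from the paper), both definitions can be verified directly on small graphs; the sketch below assumes an adjacency-list representation and uses a path on three vertices as the example.

```python
# Hypothetical checkers for the two definitions above (not from the paper).
# The graph is an adjacency-list dict mapping each vertex to its neighbors.

def is_sigma_partitioning(adj, part):
    """`part` maps each vertex to a part index. Valid iff every pair of
    adjacent vertices differs in the number of neighbors in some part."""
    def profile(v):
        counts = {}
        for w in adj[v]:
            counts[part[w]] = counts.get(part[w], 0) + 1
        return counts

    return all(profile(u) != profile(v) for u in adj for v in adj[u])


def is_lucky_labeling(adj, ell):
    """`ell` maps each vertex to a positive integer. Valid iff adjacent
    vertices have different sums of labels over their neighborhoods."""
    def s(v):
        return sum(ell[w] for w in adj[v])

    return all(s(u) != s(v) for u in adj for v in adj[u])


if __name__ == "__main__":
    # Path on 3 vertices: 0 - 1 - 2.
    p3 = {0: [1], 1: [0, 2], 2: [1]}
    print(is_sigma_partitioning(p3, {0: 0, 1: 0, 2: 1}))  # True
    print(is_lucky_labeling(p3, {0: 1, 1: 1, 2: 2}))      # True
```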

    Optimal Sensor Deception to Deviate from an Allowed Itinerary

    In this work, we study a class of deception planning problems in which an agent aims to alter a security monitoring system's sensor readings so as to disguise its adversarial itinerary as an allowed itinerary in the environment. The adversarial itinerary set and the allowed itinerary set are captured by regular languages. To deviate without being detected, we investigate whether there exists a strategy for the agent to alter the sensor readings, at minimal cost, such that for any path it takes, the system believes the agent took a path within the allowed itinerary. Our formulation assumes an offline sensor alteration: the agent determines the sensor alteration strategy, implements it, and then carries out any path in its deviation itinerary. We prove that computing the optimal sensor alteration is NP-hard, by a reduction from the directed multi-cut problem. Further, we present an exact algorithm based on integer linear programming and demonstrate the correctness and efficacy of the algorithm in case studies.
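    The paper's exact ILP formulation is not reproduced here; the sketch below is only a generic minimum-cost covering ILP written with PuLP, in the spirit of choosing cheap sensor alterations that together hide every event that would reveal the deviation. The candidate alterations, costs, and "detection events" are invented placeholders, not the paper's model.

```python
# Toy ILP skeleton (PuLP), NOT the paper's formulation: choose a minimum-cost
# subset of candidate sensor alterations so that every detection event that
# would reveal the deviation is masked by at least one chosen alteration.
# All data below (alterations, costs, events) are invented placeholders.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, LpStatus

alterations = ["mask_s1", "spoof_s2", "mask_s3"]          # candidate alterations
cost = {"mask_s1": 2, "spoof_s2": 5, "mask_s3": 1}        # alteration costs
# Each detection event maps to the alterations that would hide it.
events = {"e1": ["mask_s1", "spoof_s2"], "e2": ["spoof_s2", "mask_s3"]}

prob = LpProblem("optimal_sensor_alteration", LpMinimize)
x = {a: LpVariable(f"x_{a}", cat=LpBinary) for a in alterations}

prob += lpSum(cost[a] * x[a] for a in alterations)        # minimize total cost
for e, covering in events.items():
    prob += lpSum(x[a] for a in covering) >= 1            # every event masked

prob.solve()
print(LpStatus[prob.status], {a: int(x[a].value()) for a in alterations})
```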

    Modeling of Steam Distillation Mechanism during Steam Injection Process Using Artificial Intelligence

    Steam distillation is one of the important mechanisms in oil recovery by thermal methods, so it is important to simulate this process both experimentally and theoretically. In this work, steam distillation is simulated on sixteen sets of crude oil data found in the literature. Artificial intelligence (AI) tools, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), are used as effective methods to simulate the distillate recoveries of these data sets. Thirteen sets of data were used to train the models and three sets were used to test them. The developed models are highly compatible with the input oil properties and can predict the distillate yield from a minimal set of inputs. To assess the performance of the proposed models, steam distillation is also simulated using a modified Peng-Robinson equation of state. A comparison of the distillates calculated by the ANFIS and neural network models with those of the equation-of-state-based method indicates that the errors of the ANFIS model on the training and test data sets are lower than those of the other methods.
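    As a rough illustration of this kind of data-driven regression model (not the paper's actual model, features, or data), the sketch below fits a small feedforward network with scikit-learn to placeholder crude-oil properties; the feature names and values are assumptions made for the example.

```python
# Illustrative sketch only: a small ANN fitted to placeholder data, standing
# in for the distillate-recovery models described in the abstract. The
# features and targets below are invented, not the paper's sixteen data sets.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder inputs: e.g. API gravity, viscosity (cP), steam temperature (C).
X = rng.uniform([10.0, 1.0, 150.0], [40.0, 500.0, 300.0], size=(13, 3))
# Placeholder target: distillate recovery fraction.
y = rng.uniform(0.1, 0.6, size=13)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)              # train on 13 samples, mirroring the 13/3 split
print(model.predict(X[:3]))  # predicted recoveries for the first few samples
```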