23 research outputs found

    Energy Efficient System for Wireless Sensor Networks using Modified RECHS Protocol

    Wireless sensor networks (WSNs) form one of the emerging and fastest growing fields in science, driving the development of low-cost, low-power, multi-function sensor nodes. Prolonged network lifetime, scalability, node mobility and load balancing are important requirements for many WSN applications, and clustering the sensor nodes is an effective technique for meeting them; cluster-based routing is currently an active research topic in wireless sensor networks. In this paper, we add criteria to the cluster-head selection of the Redundant and Energy-efficient Cluster Head Selection (RECHS) protocol and compare the results with the Energy-Aware Low Energy Adaptive Clustering Hierarchy (EA-LEACH) protocol. The modified RECHS significantly increases the lifetime and reliability of the network. Simulation results compare the two methods (modified RECHS and EA-LEACH) in terms of network lifetime (stability period), number of cluster heads per round, number of alive nodes per round, and throughput of data transfer in the network. DOI: 10.17762/ijritcc2321-8169.15016
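
    The abstract does not spell out the added selection criteria, so the sketch below is only a minimal, assumed illustration: a classic LEACH-style probabilistic cluster-head election in Python, with a residual-energy weighting standing in for the energy-aware criteria rather than reproducing the actual RECHS rule.

```python
import random

# Illustrative LEACH-style cluster-head election. The residual-energy factor
# is an assumption used to sketch "energy-aware" selection; it is not the
# modified RECHS criterion from the paper.

P = 0.05  # desired fraction of cluster heads per round

def threshold(round_no, residual_energy, initial_energy, was_ch_recently):
    """Classic LEACH threshold T(n), scaled here by residual energy (assumed)."""
    if was_ch_recently:
        return 0.0
    t = P / (1 - P * (round_no % int(1 / P)))
    return t * (residual_energy / initial_energy)

def elect_cluster_heads(nodes, round_no):
    """Each alive node independently decides whether to become a cluster head."""
    heads = []
    for n in nodes:
        if n["energy"] <= 0:
            continue  # dead nodes cannot participate
        if random.random() < threshold(round_no, n["energy"], n["E0"], n["was_ch"]):
            heads.append(n["id"])
    return heads

nodes = [{"id": i, "energy": 0.5, "E0": 0.5, "was_ch": False} for i in range(100)]
print(elect_cluster_heads(nodes, round_no=0))
```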

    Quantum algorithms for connectivity and related problems

    An important family of span programs, st-connectivity span programs, has been used to design quantum algorithms in various contexts, including a number of graph problems and formula evaluation problems. The complexity of the resulting algorithms depends on the largest positive witness size of any 1-input, and the largest negative witness size of any 0-input. Belovs and Reichardt first showed that the positive witness size is exactly characterized by the effective resistance of the input graph, but previously only rough upper bounds were known on the negative witness size. We show that the negative witness size in an st-connectivity span program is exactly characterized by the capacitance of the input graph. This gives a tight analysis for algorithms based on st-connectivity span programs on any set of inputs. We use this analysis to give a new quantum algorithm for estimating the capacitance of a graph. We also describe a new quantum algorithm for deciding if a graph is connected, which improves on the previous best quantum algorithm for this problem when we are promised that either the graph has at least k > 1 components, or the graph is connected and has small average resistance, which is upper bounded by the diameter. We also give an alternative algorithm for deciding if a graph is connected that can be better than our first algorithm when the maximum degree is small. Finally, using ideas from our second connectivity algorithm, we give an algorithm for estimating the algebraic connectivity of a graph, the second-smallest eigenvalue of the Laplacian.
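
    As a point of reference, the sketch below classically computes two of the graph quantities mentioned above: the s-t effective resistance (via the Laplacian pseudoinverse) and the algebraic connectivity (the second-smallest Laplacian eigenvalue). It is plain linear algebra on an arbitrary example graph, not the paper's quantum algorithm.

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of an undirected graph on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

def effective_resistance(n, edges, s, t):
    """R_eff(s, t) = (e_s - e_t)^T L^+ (e_s - e_t)."""
    Lp = np.linalg.pinv(laplacian(n, edges))
    x = np.zeros(n)
    x[s], x[t] = 1.0, -1.0
    return float(x @ Lp @ x)

def algebraic_connectivity(n, edges):
    """Second-smallest eigenvalue of L; positive exactly when the graph is connected."""
    return float(np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1])

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # a 4-cycle
print(effective_resistance(4, edges, 0, 2))   # 1.0: two parallel 2-edge paths
print(algebraic_connectivity(4, edges))       # 2.0 for the 4-cycle
```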

    The random K-satisfiability problem: from an analytic solution to an efficient algorithm

    We study the problem of satisfiability of randomly chosen clauses, each with K Boolean variables. Using the cavity method at zero temperature, we find the phase diagram for the K=3 case. We show the existence of an intermediate phase in the satisfiable region, where the proliferation of metastable states is at the origin of the slowdown of search algorithms. The fundamental order parameter introduced in the cavity method, which consists of surveys of local magnetic fields in the various possible states of the system, can be computed for a given sample. These surveys can be used to devise new types of algorithms for solving hard combinatorial optimization problems. One such algorithm is presented here for the 3-SAT problem, with very good performance.
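
    For context, the sketch below generates a random 3-SAT instance at a clause density in the satisfiable regime and runs a plain WalkSAT-style local search. It is a generic baseline of the kind whose slowdown the metastable states explain, not the survey-based algorithm introduced in the paper.

```python
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables with random signs."""
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

def walksat(clauses, n_vars, rng, max_flips=20_000, p=0.5):
    """Simple WalkSAT local search; returns a satisfying assignment or None."""
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            flip = abs(rng.choice(clause))  # random-walk move
        else:
            # greedy move: flip the variable leaving the fewest unsatisfied clauses
            def cost(v):
                assign[v] = not assign[v]
                bad = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return bad
            flip = min((abs(l) for l in clause), key=cost)
        assign[flip] = not assign[flip]
    return None

rng = random.Random(0)
n, alpha = 50, 3.5  # clause density below the K=3 satisfiability threshold (about 4.27)
formula = random_3sat(n, int(alpha * n), rng)
print(walksat(formula, n, rng) is not None)
```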

    My Early Interactions with Jan and Some of His Lost Papers

    It has been over 40 years since I got to know Jan. This period almost entirely overlaps my career as a psychometrician. During these years, I have had many contacts with him. This paper reviews some of my early interactions, focussing on the following topics: (1) an episode surrounding the inception of the ALSOS project, and (2) Jan's unpublished (and some lost) notes and papers that I cherished and quoted in my work, including (2a) the ELEGANT algorithm for squared distance scaling, (2b) the INDISCAL method for nonmetric multidimensional scaling (MDS), and (2c) notes on DEDICOM.

    Administrative Law in the Automated State

    In the future, administrative agencies will rely increasingly on digital automation powered by machine learning algorithms. Can U.S. administrative law accommodate such a future? Not only might a highly automated state readily meet longstanding administrative law principles, but the responsible use of machine learning algorithms might perform even better than the status quo in terms of fulfilling administrative law’s core values of expert decision-making and democratic accountability. Algorithmic governance clearly promises more accurate, data-driven decisions. Moreover, due to their mathematical properties, algorithms might well prove to be more faithful agents of democratic institutions. Yet even if an automated state were smarter and more accountable, it might risk being less empathic. Although the degree of empathy in existing human-driven bureaucracies should not be overstated, a large-scale shift to government by algorithm will pose a new challenge for administrative law: ensuring that an automated state is also an empathic one.

    Machine Learning in Predicting Printable Biomaterial Formulations for Direct Ink Writing

    Three-dimensional (3D) printing is emerging as a transformative technology for biomedical engineering. 3D printed products can be made patient-specific, as the technique allows customization and direct control of the architecture. The trial-and-error approach currently used to develop printable ink compositions is time- and resource-consuming because of the growing number of variables requiring expert knowledge. Artificial intelligence has the potential to reshape the ink development process by building a predictive model of printability from experimental data. In this paper, we constructed machine learning (ML) algorithms, including decision tree, random forest (RF), and deep learning (DL), to predict the printability of biomaterials. A total of 210 formulations, covering 16 different bioactive and smart materials and 4 solvents, were 3D printed and their printability assessed. All ML methods were able to learn and predict the printability of a variety of inks from their biomaterial formulations. In particular, the RF algorithm achieved the highest accuracy (88.1%), precision (90.6%), and F1 score (87.0%), the best overall performance of the 3 algorithms, while DL had the highest recall (87.3%). Furthermore, the ML algorithms predicted the printability window of biomaterials to guide ink development, with the printability map generated by DL showing finer granularity than the other algorithms. ML has proven to be an effective and novel strategy for developing biomaterial formulations with the desired 3D printability for biomedical engineering applications.
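
    A minimal sketch of the modeling step described above, assuming a tabular dataset of formulations with a binary printability label; the file name "formulations.csv" and the feature layout are placeholders, not the paper's actual data or pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per ink formulation, a "printable" column as label,
# and the remaining columns as features (e.g. material fractions, solvent, concentration).
df = pd.read_csv("formulations.csv")
X = df.drop(columns=["printable"])
y = df["printable"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Random forest, the model reported as best performing in the abstract
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Accuracy, precision, recall and F1, the metrics quoted in the abstract
print(classification_report(y_test, model.predict(X_test)))
```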

    A Framework for Certified Self-Stabilization

    We propose a general framework for building certified proofs of distributed self-stabilizing algorithms with the proof assistant Coq. We first define in Coq the locally shared memory model with composite atomicity, the most commonly used model in the self-stabilization literature. We then validate our framework by certifying a non-trivial part of an existing silent self-stabilizing algorithm that builds a k-hop dominating set of the network. We also certify a quantitative property of this algorithm's output: precisely, we show that the computed k-hop dominating set contains at most $\lfloor \frac{n-1}{k+1} \rfloor + 1$ nodes, where $n$ is the number of nodes in the network. To obtain these results, we also developed a library of general tools related to potential functions and cardinality of sets.
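
    To make the quantitative claim concrete, the sketch below checks the defining property of a k-hop dominating set on a small example and relates it to the stated bound; it is ordinary Python for illustration only, not the certified Coq development or the self-stabilizing algorithm itself.

```python
from collections import deque

def within_k_hops(adj, sources, k):
    """Set of nodes reachable from `sources` in at most k hops (breadth-first search)."""
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def is_k_hop_dominating_set(adj, dominators, k):
    """D is a k-hop dominating set if every node is within k hops of some node in D."""
    return within_k_hops(adj, dominators, k) == set(adj)

# Example: the path 0-1-2-3-4-5. The set {1, 4} is a 1-hop dominating set of size 2,
# within the stated bound floor((6-1)/(1+1)) + 1 = 3 for n = 6, k = 1.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(is_k_hop_dominating_set(path, {1, 4}, k=1))  # True
```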

    Polynomial-delay Enumeration Kernelizations for Cuts of Bounded Degree

    Enumeration kernelization was first proposed by Creignou et al. [TOCS 2017] and was later refined by Golovach et al. [JCSS 2022] into two variants: fully-polynomial enumeration kernelization and polynomial-delay enumeration kernelization. In this paper, we consider the d-CUT problem from the perspective of (polynomial-delay) enumeration kernelization. Given an undirected graph G = (V, E), a cut F = E(A, B) is a d-cut of G if every u in A has at most d neighbors in B and every v in B has at most d neighbors in A. Checking the existence of a d-cut in a graph is a well-known NP-hard problem and is well-studied in parameterized complexity [Algorithmica 2021, IWOCA 2021]. The problem also generalizes MATCHING CUT (the case d = 1), which has been a central problem in the literature on polynomial-delay enumeration kernelization. We study three enumeration variants of the problem, ENUM d-CUT, ENUM MIN-d-CUT and ENUM MAX-d-CUT, which enumerate all d-cuts, all minimal d-cuts and all maximal d-cuts, respectively. We consider various structural parameters of the input and provide polynomial-delay enumeration kernels for ENUM d-CUT and ENUM MAX-d-CUT, and fully-polynomial enumeration kernels of polynomial size for ENUM MIN-d-CUT.
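
    The d-cut definition is easy to state directly in code: the checker below verifies whether a given bipartition is a d-cut and recovers MATCHING CUT as the d = 1 case. It only illustrates the definition and is unrelated to the enumeration kernels developed in the paper.

```python
def is_d_cut(adj, A, B, d):
    """adj maps each vertex to its neighbor set; A, B must partition the vertex set."""
    A, B = set(A), set(B)
    if A & B or A | B != set(adj) or not A or not B:
        return False  # not a bipartition into two non-empty sides
    for side, other in ((A, B), (B, A)):
        for v in side:
            if len(adj[v] & other) > d:
                return False  # v has more than d neighbors across the cut
    return True

# Example: the 4-cycle 0-1-2-3. Splitting it into {0, 1} and {2, 3} gives a
# matching cut (d = 1): each vertex has exactly one neighbor on the other side.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_d_cut(cycle, {0, 1}, {2, 3}, d=1))  # True
```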

    Locality in Distributed Graph Algorithms

    A survey of core results on locality in distributed graph algorithms.