
    On the convergence of iterative voting: how restrictive should restricted dynamics be?

    No full text
    We study convergence properties of iterative voting procedures. Such procedures are defined by a voting rule and a (restricted) iterative process, where at each step one agent may modify his vote towards a better outcome for himself. It is already known that if the iteration dynamics (the manner in which voters are allowed to modify their votes) are unrestricted, then the voting process may not converge. For most common voting rules this can be observed even under the best-response dynamics limitation. It is therefore important to investigate whether and which natural restrictions on the dynamics of iterative voting procedures can guarantee convergence. To this end, we provide two general conditions on the dynamics based on iterative myopic improvements, each of which is sufficient for convergence. We then identify several classes of voting rules (including Positional Scoring Rules, Maximin, Copeland, and Bucklin), along with their corresponding iterative processes, for which at least one of these conditions holds.
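
    As a concrete illustration of the kind of dynamics discussed above, the sketch below simulates myopic best-response dynamics under the plurality rule. It is not the paper's formalism; the truthful starting profile, lexicographic tie-breaking, and one-mover-per-step schedule are illustrative assumptions.

        # Sketch: iterative plurality voting with myopic best-response dynamics.
        # Starting profile, tie-breaking, and scheduling are illustrative assumptions.

        def plurality_winner(ballots, candidates):
            """Plurality rule with lexicographic tie-breaking (an assumed convention)."""
            scores = {c: sum(1 for b in ballots if b == c) for c in candidates}
            return min(candidates, key=lambda c: (-scores[c], c))

        def best_response(i, ballots, prefs, candidates):
            """A ballot for voter i giving i's most preferred outcome, others held fixed."""
            rank = {c: r for r, c in enumerate(prefs[i])}   # lower rank = more preferred
            best_ballot = ballots[i]
            best_outcome = plurality_winner(ballots, candidates)
            for c in candidates:
                w = plurality_winner(ballots[:i] + [c] + ballots[i + 1:], candidates)
                if rank[w] < rank[best_outcome]:
                    best_ballot, best_outcome = c, w
            return best_ballot

        def iterate(prefs, candidates, max_steps=100):
            """Run the dynamics from truthful votes until no voter can improve."""
            ballots = [p[0] for p in prefs]                 # truthful top choices
            for _ in range(max_steps):
                movers = [i for i in range(len(prefs))
                          if best_response(i, ballots, prefs, candidates) != ballots[i]]
                if not movers:
                    return plurality_winner(ballots, candidates)   # converged
                i = movers[0]                               # one agent moves per step
                ballots[i] = best_response(i, ballots, prefs, candidates)
            return None                                     # no convergence within max_steps

        # Condorcet-cycle preferences: 3 voters over candidates a, b, c.
        prefs = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
        print(iterate(prefs, ["a", "b", "c"]))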

    Multi-hop Diffusion LMS for Energy-constrained Distributed Estimation

    Full text link
    We propose a multi-hop diffusion strategy for a sensor network to perform distributed least mean-squares (LMS) estimation under local and network-wide energy constraints. At each iteration of the strategy, each node can combine intermediate parameter estimates from nodes other than its physical neighbors via a multi-hop relay path. We propose a rule to select combination weights for the multi-hop neighbors, which can balance between the transient and the steady-state network mean-square deviations (MSDs). We study two classes of networks: simple networks with a unique transmission path from one node to another, and arbitrary networks utilizing diffusion consultations over at most two hops. We propose a method to optimize each node's information neighborhood subject to local energy budgets and a network-wide energy budget for each diffusion iteration. This optimization requires knowledge of the network topology and of the noise and data variance profiles of each node, and is performed offline before the diffusion process. In addition, we develop a fully distributed and adaptive algorithm that approximately optimizes the information neighborhood of each node with only local energy budget constraints in the case where diffusion consultations are performed over at most a predefined number of hops. Numerical results suggest that our proposed multi-hop diffusion strategy achieves the same steady-state MSD as the existing one-hop adapt-then-combine diffusion algorithm but with a lower energy budget. Comment: 14 pages, 12 figures. Submitted for publication.
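
    For context, the sketch below implements the standard one-hop adapt-then-combine (ATC) diffusion LMS that the proposed multi-hop strategy is compared against. The ring topology, uniform combination weights, and synthetic regression data are illustrative assumptions; the paper's multi-hop weight selection and energy-constrained neighborhood optimization are not reproduced here.

        import numpy as np

        # Sketch: one-hop adapt-then-combine (ATC) diffusion LMS baseline.
        # Topology, step size, and uniform combination weights are assumptions.

        rng = np.random.default_rng(0)
        N, M = 10, 4                          # number of nodes, parameter dimension
        w_true = rng.standard_normal(M)       # common parameter vector to estimate

        # Ring topology: each node's neighborhood is itself plus its two adjacent nodes.
        neighbors = [{k, (k - 1) % N, (k + 1) % N} for k in range(N)]
        A = np.zeros((N, N))                  # A[l, k]: weight node k gives node l's estimate
        for k in range(N):
            for l in neighbors[k]:
                A[l, k] = 1.0 / len(neighbors[k])

        mu = 0.01                             # LMS step size
        w = np.zeros((N, M))                  # local estimates
        for _ in range(2000):
            psi = np.empty((N, M))
            for k in range(N):                # adapt: local LMS update from new data
                u = rng.standard_normal(M)                       # regressor
                d = u @ w_true + 0.1 * rng.standard_normal()     # noisy measurement
                psi[k] = w[k] + mu * (d - u @ w[k]) * u
            for k in range(N):                # combine: average neighbors' intermediate estimates
                w[k] = sum(A[l, k] * psi[l] for l in neighbors[k])

        print("network MSD estimate:", np.mean(np.linalg.norm(w - w_true, axis=1) ** 2))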

    Visual Data Mining

    Get PDF
    Occlusion is one of the major problems for interactive visual knowledge discovery and data mining in the process of finding patterns in multidimensional data. This project proposes a hybrid method, called GLC-S, that combines visual and analytical means to deal with occlusion in visual knowledge discovery; it visualizes n-D data in 2D using a set of Shifted Paired Coordinates (SPC). A set of Shifted Paired Coordinates for n-D data consists of n/2 pairs of common Cartesian coordinates that are shifted relative to each other to avoid their overlap. Each n-D point A is represented as a directed graph A* in SPC, where each node is the 2D projection of A in a respective pair of the Cartesian coordinates. The proposed GLC-S method significantly decreases the cognitive load of analyzing n-D data and simplifies pattern discovery in n-D data. The GLC-S method iteratively splits n-D data into non-overlapping clusters (hyper-rectangles) around local centers and visualizes only the data within these clusters at each iteration. The requirement for these clusters is to contain cases of only one class and to be the largest cluster with this property in the SPC visualization. Such sequential splitting allows: (1) avoiding occlusion, (2) finding local classification patterns and rules visually, and (3) combining local sub-rules into a global rule that classifies all given data of two or more classes. Computational experiments with Wisconsin Breast Cancer data (9-D), User Knowledge Modeling data (6-D), and Letter Recognition data (17-D) from the UCI Machine Learning Repository confirm this capability. At each iteration, these data were split into training (70%) and validation (30%) sets. The process required 3 iterations for the Wisconsin Breast Cancer data, 4 for User Knowledge Modeling, and 5 for Letter Recognition, and respectively 3, 4, and 5 local sub-rules that covered over 95% of all n-D data points with 100% accuracy in both training and validation experiments. After each iteration, the data used in that iteration are removed and the remaining data are used in the next iteration; this removal also helps to decrease occlusion. The GLC-S algorithm refuses to classify remaining cases that are not covered by these rules, i.e., cases that do not belong to the found hyper-rectangles. The interactive visualization process in SPC allows adjusting the sides of a hyper-rectangle to maximize its size without overlap with the hyper-rectangles of the opposing classes. The GLC-S method splits data using a fixed assignment of the n coordinates to pairs. This hybrid visual and analytical approach avoids throwing all data of several classes into a single visualization plot, which typically ends up as a messy, highly occluded picture that hides useful patterns; the approach reveals these hidden patterns instead. The visualization process in SPC is reversible (lossless), i.e., all n-D information is visualized in 2D and can be restored from the 2D visualization for each n-D case. This hybrid visual analytics method allows classifying n-D data in a way that can be communicated to the user in an understandable, visual form.
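
    The sketch below illustrates only the Shifted Paired Coordinates representation described above: an n-D point is split into n/2 coordinate pairs, each drawn in its own shifted 2D frame, and the pair projections are connected into the directed graph A*. The fixed shift offsets are an illustrative choice, not the interactive placement or the hyper-rectangle splitting of GLC-S.

        import matplotlib.pyplot as plt

        # Sketch: drawing one n-D point as a directed graph in Shifted Paired Coordinates.
        # The shift offsets below are arbitrary illustrative values.

        def spc_nodes(point, shifts):
            """2D nodes of the SPC graph for an even-dimensional point."""
            assert len(point) % 2 == 0 and len(shifts) == len(point) // 2
            return [(point[2 * j] + dx, point[2 * j + 1] + dy)
                    for j, (dx, dy) in enumerate(shifts)]

        # A 6-D point and one shift per coordinate pair, chosen so the frames do not overlap.
        a = [0.2, 0.7, 0.5, 0.1, 0.9, 0.4]
        shifts = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0)]

        xs, ys = zip(*spc_nodes(a, shifts))
        plt.plot(xs, ys, marker="o")          # the directed graph A* drawn as a polyline
        plt.title("6-D point in Shifted Paired Coordinates (illustrative shifts)")
        plt.show()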

    Classifying BISINDO Alphabet using TensorFlow Object Detection API

    Get PDF
    Indonesian Sign Language (BISINDO) is one of the sign languages used in Indonesia. Classification of BISINDO can be done by utilizing advances in computer technology such as deep learning. This study builds a BISINDO letter classification system by applying the MobileNet V2 FPNLite SSD model through the TensorFlow Object Detection API. The purpose of the study is to classify the BISINDO letters A-Z and to measure the accuracy, precision, recall, and cross-validation performance of the model. The dataset used was 4054 images, consisting of 26 letter classes, which were taken by the researchers under several research scenarios and limitations. The steps carried out were: dividing the simulation dataset at an 80:20 ratio and applying cross-validation (k-fold = 5). In this study, real-time testing was conducted under 2 scenarios, namely bright light conditions of 500 lux and dim light of 50 lux, with an average processing rate of 30 frames per second (fps). With a simulation dataset ratio of 80:20, 5 iterations were performed: the first iteration yielded a precision of 0.758 and a recall of 0.790; the second iteration yielded a precision of 0.635, a recall of 0.77, and an accuracy of 0.712; the third iteration gave a recall of 0.746; the fourth iteration obtained a precision of 0.713 and a recall of 0.751; and the fifth iteration gave a precision of 0.742 for a fit score case and a recall of 0.773. Overall, the average precision is 0.712 and the average recall is 0.747, indicating that the model built performs very well.
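
    A minimal sketch of the evaluation protocol described above: an 80:20 split per fold with k = 5, averaging precision and recall across folds. The detector is replaced by a hypothetical train_and_predict stand-in and the features are placeholders; the study itself trains MobileNet V2 FPNLite SSD via the TensorFlow Object Detection API.

        import numpy as np
        from sklearn.model_selection import KFold
        from sklearn.metrics import precision_score, recall_score

        # Sketch: 5-fold cross-validation with macro-averaged precision and recall.
        # train_and_predict is a hypothetical stand-in for the SSD detector.

        def train_and_predict(X_train, y_train, X_test):
            # Placeholder: a real run would train the detector and classify letters.
            return np.random.default_rng(0).choice(np.unique(y_train), size=len(X_test))

        X = np.random.rand(4054, 10)               # placeholder features for 4054 images
        y = np.random.randint(0, 26, size=4054)    # 26 BISINDO letter classes A-Z

        precisions, recalls = [], []
        for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            y_pred = train_and_predict(X[train_idx], y[train_idx], X[test_idx])
            precisions.append(precision_score(y[test_idx], y_pred, average="macro", zero_division=0))
            recalls.append(recall_score(y[test_idx], y_pred, average="macro", zero_division=0))

        print(f"mean precision {np.mean(precisions):.3f}, mean recall {np.mean(recalls):.3f}")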

    Teaching the Scientific Method in Core Curricula Through Practical Lab Assignments

    Get PDF
    Most core curricula at liberal arts schools require a science component in the undergraduate education of non-science majors. It is possible to meet these requirements with modified forms of regular science classes. However, for many students these classes can feel like a monotonous throwaway rather than meaningful learning. At Lynn University, the core classes in “Dialogues of Scientific Literacy” incorporate opportunities for students to learn the scientific process via simple lab exercises without burdensome prerequisites. In one iteration, we implement a 3-part lab series meant to take students from observation to hypothesis formulation to experimental design. The starting point is a simple animal observation experiment in which data are collected and combined on an ongoing basis from each class. The final lab is a multistep experiment in which students perform the scientific method from start to finish, using their phones as instruments for collecting quantitative sound data.

    Research Mentor Program at UNH Manchester: Peer Learning Partnerships

    Get PDF
    At the University of New Hampshire at Manchester (UNH Manchester), the librarians, the Center for Academic Enrichment (CAE) professional staff, and the First-Year Writing Program faculty established a rich collaboration for supporting undergraduate students throughout the research process. This effort was realized by adapting a highly effective peer-tutoring program, integrating basic information literacy instruction skills into the tutor training curriculum, and incorporating the peer tutors within library instruction classes and activities. This chapter focuses on the current iteration of the Research Mentor Program, describes recent changes to the mentors’ information literacy training, and examines valuable lessons learned throughout the program’s evolution.

    Castle and Stairs to Learn Iteration: Co-Designing a UMC Learning Module with Teachers

    Get PDF
    This experience report presents a participatory process that involved primary school teachers and computer science education researchers. The objective of the process was to co-design a learning module to teach iteration to second graders using a visual programming environment and based on the Use-Modify-Create methodology. The co-designed learning module was piloted with three second-grade classes. We experienced that sharing and reconciling the different perspectives of researchers and teachers was doubly effective. On the one hand, it improved the quality of the resulting learning module; on the other hand, it constituted a very significant professional development opportunity for both teachers and researchers. We describe the co-designed learning module, discuss the most significant hinges in the process that led to such a product, and reflect on the lessons learned.