173 research outputs found

    Optimal LEACH protocol with improved bat algorithm in wireless sensor networks

    Full text link
    © 2019, Korean Society for Internet Information. All rights reserved. The low-energy adaptive clustering hierarchy (LEACH) protocol is a low-power adaptive cluster routing protocol for sensor networks proposed by Chandrakasan's group at MIT. In LEACH, cluster-head nodes are selected at random in each round, which may lead to an uneven distribution of node energy and shorten the lifetime of the entire network. Hence, we propose a new selection method to extend the network lifetime: the selection function simultaneously considers the energy consumed between nodes within a cluster and the energy consumed by transmissions between the cluster head and the base station. Meanwhile, an improved bat algorithm (FTBA) integrating a curve strategy is proposed to enhance local and global search capabilities. We then combine the improved BA with LEACH and use the intelligent algorithm to select the cluster heads. Experimental results show that the improved BA has stronger optimization ability than other optimization algorithms, and that the proposed method (FTBA-TC-LEACH) is superior to both LEACH and LEACH with the standard BA (SBA-LEACH). FTBA-TC-LEACH markedly reduces network energy consumption and extends the lifetime of wireless sensor networks (WSNs).
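    For orientation, the following minimal Python sketch shows the standard LEACH cluster-head threshold that the abstract builds on. The FTBA-TC fitness is not given in the abstract, so the fitness function below is only a hypothetical illustration of weighting the two energy terms mentioned above (intra-cluster energy and head-to-base-station energy).

    import random

    def leach_threshold(p, round_no, eligible):
        # Standard LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for nodes
        # that have not served as cluster head in the last 1/p rounds.
        if not eligible:
            return 0.0
        period = round(1 / p)
        return p / (1 - p * (round_no % period))

    def elects_itself_head(p, round_no, eligible):
        # A node becomes cluster head when a uniform random draw falls below T(n).
        return random.random() < leach_threshold(p, round_no, eligible)

    def candidate_fitness(intra_cluster_energy, head_to_bs_energy, alpha=0.5):
        # Hypothetical fitness a bat-algorithm search could minimise: a weighted
        # sum of the two energy terms named in the abstract. The weighted-sum form
        # and the value of alpha are assumptions, not the paper's formula.
        return alpha * intra_cluster_energy + (1.0 - alpha) * head_to_bs_energy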

    An ABC Algorithm with Recombination

    Get PDF
    Artificial bee colony (ABC) is an efficient swarm intelligence algorithm that has shown good exploration ability; however, its exploitation capacity needs to be improved. In this paper, a novel ABC variant with recombination (called RABC) is proposed to enhance exploitation. RABC first employs a new search model inspired by the updating equation of particle swarm optimization (PSO). Then, the new search model and the original ABC model are recombined to build a hybrid search model. The effectiveness of the proposed RABC is validated on ten well-known benchmark optimization problems. Experimental results show that RABC can significantly improve the quality of solutions and accelerate convergence.
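    A minimal Python sketch of the ingredients named above: the original ABC search equation, a PSO-inspired update, and a recombination of the two. The abstract does not specify the exact form of the PSO-inspired model or the recombination rule, so both are illustrative assumptions.

    import numpy as np

    def abc_update(x, x_k, j):
        # Original ABC search equation on dimension j:
        # v_j = x_j + phi * (x_j - x_k_j), with phi drawn uniformly from [-1, 1].
        v = x.copy()
        phi = np.random.uniform(-1.0, 1.0)
        v[j] = x[j] + phi * (x[j] - x_k[j])
        return v

    def pso_inspired_update(x, gbest, j):
        # PSO-inspired search on dimension j, pulling the candidate toward the
        # global best. The abstract only says the new model is inspired by the
        # PSO updating equation; this concrete form is an assumption.
        v = x.copy()
        v[j] = x[j] + np.random.uniform(0.0, 1.5) * (gbest[j] - x[j])
        return v

    def hybrid_update(x, x_k, gbest, j, p_mix=0.5):
        # Hypothetical recombination: pick one of the two models at random.
        # The actual recombination rule of RABC is not given in the abstract.
        if np.random.rand() < p_mix:
            return abc_update(x, x_k, j)
        return pso_inspired_update(x, gbest, j)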

    Evolving Multi-Resolution Pooling CNN for Monaural Singing Voice Separation

    Full text link
    Monaural Singing Voice Separation (MSVS) is a challenging task that has been studied for decades. Deep neural networks (DNNs) are the current state-of-the-art methods for MSVS. However, existing DNNs are usually designed manually, which is time-consuming and error-prone, and their architectures are typically pre-defined rather than adapted to the training data. To address these issues, we introduce a Neural Architecture Search (NAS) method for designing the structure of DNNs for MSVS. Specifically, we propose a new multi-resolution Convolutional Neural Network (CNN) framework for MSVS, named the Multi-Resolution Pooling CNN (MRP-CNN), which uses pooling operators of various sizes to extract multi-resolution features. Based on NAS, we then develop an evolving framework, the Evolving MRP-CNN (E-MRP-CNN), which automatically searches for effective MRP-CNN structures using genetic algorithms, optimized either for a single objective (separation performance only) or for multiple objectives (both separation performance and model complexity). The multi-objective E-MRP-CNN yields a set of Pareto-optimal solutions, each providing a trade-off between separation performance and model complexity. Quantitative and qualitative evaluations on the MIR-1K and DSD100 datasets demonstrate the advantages of the proposed framework over several recent baselines.
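    To make the core idea concrete, the following PyTorch-style sketch shows multi-resolution pooling in the spirit described above: pooling operators of several sizes applied in parallel and fused channel-wise. The kernel sizes, the 1x1 convolutions and the upsampling/fusion choices are assumptions for illustration, not the published E-MRP-CNN architecture, which is found by the genetic search.

    import torch
    import torch.nn as nn

    class MultiResolutionPooling(nn.Module):
        # Parallel pooling branches of different sizes extract features at
        # several resolutions; outputs are upsampled back and concatenated.
        def __init__(self, channels, pool_sizes=(1, 2, 4, 8)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Sequential(nn.AvgPool2d(k), nn.Conv2d(channels, channels, 1))
                for k in pool_sizes
            )

        def forward(self, x):
            h, w = x.shape[-2:]
            outs = [nn.functional.interpolate(b(x), size=(h, w), mode="nearest")
                    for b in self.branches]
            return torch.cat(outs, dim=1)  # channel-wise fusion of resolutions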

    Multi self-adapting particle swarm optimization algorithm (MSAPSO).

    Get PDF
    The performance and stability of the Particle Swarm Optimization (PSO) algorithm depend on parameters that are typically tuned manually or adapted based on knowledge from empirical parameter studies. Such parameter selection is ineffectual when faced with a broad range of problem types, which often hinders the adoption of PSO for real-world problems. This dissertation develops a dynamic self-optimization approach for the respective parameters (inertia weight, social and cognitive coefficients). The effect of self-adaptation on the optimal balance between superior performance (convergence) and robustness (divergence) of the algorithm is investigated on both simple and complex benchmark functions. This work creates a swarm variant that is parameter-less, meaning it is virtually independent of the underlying problem type. Since PSO variants always face the issue of getting stuck in local optima, the second main contribution is that the MSAPSO algorithm embeds a highly flexible escape-local-minimum strategy that works independently of the problem dimension. With these two major algorithmic elements (the parameter-less approach and the dimension-less escape strategy), MSAPSO outperforms other PSO variants as well as other swarm-inspired approaches such as the Memetic Firefly algorithm. The average performance increase in two dimensions is at least fifteen percent relative to the compared swarm variants; in higher dimensions (≥ 250) the gain accumulates to about fifty percent on average. At the same time, the error-proneness of MSAPSO when converging to the respective global optima is, on average, similar or even significantly better.
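    For reference, a minimal Python sketch of the standard PSO update whose parameters (inertia weight w, cognitive coefficient c1, social coefficient c2) are exactly what MSAPSO adapts automatically. The self-adaptation and escape-local-minimum rules themselves are not specified in the abstract and are therefore not sketched.

    import numpy as np

    def pso_step(x, v, pbest, gbest, w, c1, c2):
        # One standard PSO velocity/position update:
        # v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v_new, v_new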

    Multi-Object Shape Retrieval Using Curvature Trees

    Get PDF
    This work presents a geometry-based image retrieval approach for multi-object images. We begin by developing an effective shape matching method for closed boundaries. Then, a structured representation, called the curvature tree (CT), is introduced to extend the shape matching approach to handle images containing multiple objects with possible holes. We also propose an algorithm, based on Gestalt principles, to detect and extract high-level boundaries (or envelopes), which may emerge from the spatial arrangement of a group of image objects. First, a shape retrieval method using the triangle-area representation (TAR) is presented for non-rigid shapes with closed boundaries. This representation is effective in capturing both local and global characteristics of a shape, invariant to translation, rotation, scaling and shear, and robust against noise and moderate amounts of occlusion. For matching, two algorithms are introduced. The first algorithm matches concavity maxima points extracted from the TAR image obtained by thresholding the TAR. In the second matching algorithm, dynamic space warping (DSW) is employed to search efficiently for the optimal (least-cost) correspondence between the points of two shapes. Experimental results using the MPEG-7 CE-1 database of 1400 shapes show the superiority of our method over other recent methods. Then, a geometry-based image retrieval system is developed for multi-object images. We model both the shape and the topology of image objects, including holes, using the curvature tree (CT). To facilitate shape-based matching, the TAR of each object and hole is stored at the corresponding node in the CT. The similarity between two CTs is measured based on the maximum similarity subtree isomorphism (MSSI), where a one-to-one correspondence is established between the nodes of the two trees. Our matching scheme agrees with many recent findings in psychology about the human perception of multi-object images. Two algorithms are introduced to solve the MSSI problem, an approximate one and an exact one. Both algorithms have polynomial-time computational complexity and use the DSW as the similarity measure between the attributed nodes. Experiments on a database of 13500 medical images and a database of 1580 logo images have shown the effectiveness of the proposed method. The purpose of the last part is to allow for high-level shape retrieval in multi-object images by detecting and extracting the envelope of high-level object groupings in the image. Motivated by studies in Gestalt theory, a new algorithm for envelope extraction is proposed that works in two stages. The first stage detects the envelope (if one exists) and groups its objects using hierarchical clustering. In the second stage, each grouping is merged using morphological operations and then further refined using concavity tree reconstruction to eliminate odd concavities in the extracted envelope. Experiments on a set of 110 logo images demonstrate the feasibility of our approach.
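    As a concrete reference point, here is a minimal Python sketch of the triangle-area representation (TAR) named above: the signed area of the triangle formed by each boundary point and its two neighbours at distance t along a closed boundary. The sign distinguishes convex, concave and straight-line points; normalisation details of the published TAR may differ.

    import numpy as np

    def triangle_area_representation(boundary, t):
        # boundary: array of shape (N, 2) holding the closed contour points.
        # Returns the signed triangle area for every point n, using P[n-t], P[n], P[n+t].
        p_prev = np.roll(boundary, t, axis=0)
        p_next = np.roll(boundary, -t, axis=0)
        x1, y1 = p_prev[:, 0], p_prev[:, 1]
        x2, y2 = boundary[:, 0], boundary[:, 1]
        x3, y3 = p_next[:, 0], p_next[:, 1]
        return 0.5 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))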

    Flood Forecasting Using Machine Learning Methods

    Get PDF
    This book is a printed edition of the Special Issue "Flood Forecasting Using Machine Learning Methods" that was published in Water.

    A system for modeling social traits in realistic faces with artificial intelligence

    Full text link
    Humans have especially developed their perceptual capacity to process faces and to extract information from facial features. Using our behavioral capacity to perceive faces, we make attributions such as personality, intelligence or trustworthiness based on facial appearance, and these attributions often have a strong impact on social behavior in different domains. Therefore, faces play a central role in our relationships with other people and in our everyday decisions. With the popularization of the Internet, people participate in many kinds of virtual interactions, from social experiences, such as games, dating or communities, to professional activities, such as e-commerce, e-learning, e-therapy or e-health. These virtual interactions manifest the need for faces that represent the actual people interacting in the digital world: thus the concept of the avatar emerged. Avatars are used to represent users in different scenarios and scopes, from personal life to professional situations. In all these cases, the appearance of the avatar may have an effect not only on other people's opinions and perceptions but also on self-perception, influencing the subject's own attitude and behavior. In fact, avatars are often employed to elicit impressions or emotions through non-verbal expressions, and they can improve online interactions or even be useful for educational or therapeutic purposes. Therefore, the ability to generate realistic-looking avatars that elicit a certain set of desired social impressions constitutes a very interesting and novel tool, useful in a wide range of fields. This thesis proposes a novel method for generating realistic-looking faces with an associated social profile comprising 15 different impressions. For this purpose, several partial objectives were accomplished. First, facial features were extracted from a database of real faces and grouped by appearance in an automatic and objective manner employing dimensionality reduction and clustering techniques. This yielded a taxonomy that allows faces to be systematically and objectively codified according to the previously obtained clusters. Furthermore, the use of the proposed method is not restricted to facial features, and it could be extended to automatically group any other kind of images by appearance. Second, the existing relationships between the different facial features and the social impressions were found. This helps to establish how much a certain facial feature influences the perception of a given social impression, allowing the most important feature or features to be targeted when designing faces with a sought social perception. Third, an image editing method was implemented to generate a completely new, realistic face from just a face definition using the aforementioned facial feature taxonomy. Finally, a system to generate realistic faces with an associated social trait profile was developed, which fulfills the main objective of the present thesis. The main novelty of this work resides in the ability to work with several trait dimensions at a time on realistic faces. Thus, in contrast with previous works that use noisy images or cartoon-like or synthetic faces, the system developed in this thesis can generate realistic-looking faces choosing the desired levels of fifteen impressions, namely Afraid, Angry, Attractive, Babyface, Disgusted, Dominant, Feminine, Happy, Masculine, Prototypical, Sad, Surprised, Threatening, Trustworthy and Unusual. The promising results obtained in this thesis will allow further investigation of how to model social perception in faces using a completely new approach.
    Fuentes Hurtado, FJ. (2018). A system for modeling social traits in realistic faces with artificial intelligence [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/101943
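    As an illustration of the first step described above (grouping facial features by appearance with dimensionality reduction and clustering), here is a minimal Python sketch using PCA and k-means; the thesis does not name the concrete techniques or parameter values, so these choices are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def build_feature_taxonomy(feature_images, n_components=20, n_clusters=8):
        # Flatten each facial-feature image, reduce dimensionality, then cluster
        # by appearance; the cluster index per image acts as a taxonomy group.
        X = np.asarray([img.ravel() for img in feature_images], dtype=float)
        embedded = PCA(n_components=n_components).fit_transform(X)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedded)
        return labels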

    Resiliency Mechanisms for In-Memory Column Stores

    Get PDF
    The key objective of database systems is to reliably manage data, while high query throughput and low query latency are core requirements. To date, database research activities have mostly concentrated on the latter. However, due to the constant shrinking of transistor feature sizes, integrated circuits become more and more unreliable, and transient hardware errors in the form of multi-bit flips become more and more prominent. In a recent study (2013) of a large high-performance cluster with around 8500 nodes, a failure rate of 40 FIT per DRAM device was measured. For that system, this means a single- or multi-bit flip occurs every 10 hours, which is unacceptably high for enterprise and HPC scenarios. Causes include cosmic rays, heat, and electrical crosstalk, with the latter being exploited actively through the RowHammer attack. It has been shown that memory cells are more prone to bit flips than logic gates, and several surveys have found multi-bit flip events in main memory modules of today's data centers. Due to the shift towards in-memory data management systems, where all business-related data and query intermediate results are kept solely in fast main memory, such systems are in great danger of delivering corrupt results to their users. Hardware techniques cannot be scaled to compensate for the exponentially increasing error rates. In other domains, there is an increasing interest in software-based solutions to this problem, but the proposed methods come with huge runtime and/or storage overheads, which are unacceptable for in-memory data management systems. In this thesis, we investigate how to integrate bit flip detection mechanisms into in-memory data management systems. To achieve this goal, we first build an understanding of bit flip detection techniques and select two error codes, AN codes and XOR checksums, suited to the requirements of in-memory data management systems. The most important requirement is the effectiveness of the codes in detecting bit flips. We meet this goal through AN codes, which exhibit better and adaptable error detection capabilities than those found in today's hardware. The second most important goal is efficiency in terms of coding latency. We meet this by introducing fundamental performance improvements to AN codes and by vectorizing both chosen codes' operations. We integrate bit flip detection mechanisms into the lowest storage layer and the query processing layer in such a way that the remaining data management system and the user can stay oblivious to any error detection. This includes both base columns and pointer-heavy index structures such as the ubiquitous B-Tree. Additionally, our approach allows adaptable, on-the-fly bit flip detection during query processing, with only very little impact on query latency. AN coding allows intermediate results to be recoded with virtually no performance penalty. We support our claims by providing exhaustive runtime and throughput measurements throughout the whole thesis and with an end-to-end evaluation using the Star Schema Benchmark. To the best of our knowledge, we are the first to present such holistic and fast bit flip detection in a large software infrastructure such as in-memory data management systems.
Finally, most of the source code fragments used to obtain the results in this thesis are open source and freely available.
    Thesis outline:
    1 Introduction: 1.1 Contributions of this Thesis; 1.2 Outline
    2 Problem Description and Related Work: 2.1 Reliable Data Management on Reliable Hardware; 2.2 The Shift Towards Unreliable Hardware; 2.3 Hardware-Based Mitigation of Bit Flips; 2.4 Data Management System Requirements; 2.5 Software-Based Techniques For Handling Bit Flips (2.5.1 Operating System-Level Techniques; 2.5.2 Compiler-Level Techniques; 2.5.3 Application-Level Techniques); 2.6 Summary and Conclusions
    3 Analysis of Coding Techniques: 3.1 Selection of Error Codes (3.1.1 Hamming Coding; 3.1.2 XOR Checksums; 3.1.3 AN Coding; 3.1.4 Summary and Conclusions); 3.2 Probabilities of Silent Data Corruption (3.2.1 Probabilities of Hamming Codes; 3.2.2 Probabilities of XOR Checksums; 3.2.3 Probabilities of AN Codes; 3.2.4 Concrete Error Models; 3.2.5 Summary and Conclusions); 3.3 Throughput Considerations (3.3.1 Test Systems Descriptions; 3.3.2 Vectorizing Hamming Coding; 3.3.3 Vectorizing XOR Checksums; 3.3.4 Vectorizing AN Coding; 3.3.5 Summary and Conclusions); 3.4 Comparison of Error Codes (3.4.1 Effectiveness; 3.4.2 Efficiency; 3.4.3 Runtime Adaptability); 3.5 Performance Optimizations for AN Coding (3.5.1 The Modular Multiplicative Inverse; 3.5.2 Faster Softening; 3.5.3 Faster Error Detection; 3.5.4 Comparison to Original AN Coding; 3.5.5 The Multiplicative Inverse Anomaly); 3.6 Summary
    4 Bit Flip Detecting Storage: 4.1 Column Store Architecture (4.1.1 Logical Data Types; 4.1.2 Storage Model; 4.1.3 Data Representation; 4.1.4 Data Layout; 4.1.5 Tree Index Structures; 4.1.6 Summary); 4.2 Hardened Data Storage (4.2.1 Hardened Physical Data Types; 4.2.2 Hardened Lightweight Compression; 4.2.3 Hardened Data Layout; 4.2.4 UDI Operations; 4.2.5 Summary and Conclusions); 4.3 Hardened Tree Index Structures (4.3.1 B-Tree Verification Techniques; 4.3.2 Justification For Further Techniques; 4.3.3 The Error Detecting B-Tree); 4.4 Summary
    5 Bit Flip Detecting Query Processing: 5.1 Column Store Query Processing; 5.2 Bit Flip Detection Opportunities (5.2.1 Early Onetime Detection; 5.2.2 Late Onetime Detection; 5.2.3 Continuous Detection; 5.2.4 Miscellaneous Processing Aspects; 5.2.5 Summary and Conclusions); 5.3 Hardened Intermediate Results (5.3.1 Materialization of Hardened Intermediates; 5.3.2 Hardened Bitmaps); 5.4 Summary
    6 End-to-End Evaluation: 6.1 Prototype Implementation (6.1.1 AHEAD Architecture; 6.1.2 Diversity of Physical Operators; 6.1.3 One Concrete Operator Realization; 6.1.4 Summary and Conclusions); 6.2 Performance of Individual Operators (6.2.1 Selection on One Predicate; 6.2.2 Selection on Two Predicates; 6.2.3 Join Operators; 6.2.4 Grouping and Aggregation; 6.2.5 Delta Operator; 6.2.6 Summary and Conclusions); 6.3 Star Schema Benchmark Queries (6.3.1 Query Runtimes; 6.3.2 Improvements Through Vectorization; 6.3.3 Storage Overhead; 6.3.4 Summary and Conclusions); 6.4 Error Detecting B-Tree (6.4.1 Single Key Lookup; 6.4.2 Key Value-Pair Insertion); 6.5 Summary
    7 Summary and Conclusions: 7.1 Future Work
    A Appendix: A.1 List of Golden As; A.2 More on Hamming Coding (A.2.1 Code Examples; A.2.2 Vectorization)
    Back matter: Bibliography; List of Figures; List of Tables; List of Listings; List of Acronyms; List of Symbols; List of Definitions
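    For context, a minimal Python sketch of the two error codes selected above: AN coding stores A*n instead of n, so any stored value that is no longer a multiple of A reveals a bit flip, while an XOR checksum detects flips in a block of words. The constant A below is illustrative only (the thesis derives its own "golden As"), and real implementations operate on fixed-width integers with vectorized instructions rather than Python integers.

    from functools import reduce

    A = 641  # illustrative constant; the thesis selects its own "golden As"

    def an_encode(n):
        # AN coding: store A * n instead of n.
        return A * n

    def an_check(code):
        # A bit flip almost certainly makes the stored value stop being a multiple of A.
        return code % A == 0

    def an_decode(code):
        if not an_check(code):
            raise ValueError("bit flip detected")
        return code // A

    def xor_checksum(words):
        # XOR checksum over a block of integer words; recompute and compare to detect flips.
        return reduce(lambda a, b: a ^ b, words, 0)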