
    Deep Learning Aided Data-Driven Fault Diagnosis of Rotatory Machine: A Comprehensive Review

    This paper presents a comprehensive review of the developments made over the past decade in fault diagnosis of rotating bearings, a crucial component of rotary machines. A data-driven fault diagnosis framework consists of data acquisition, feature extraction/feature learning, and decision making based on shallow or deep learning algorithms. This review discusses the various signal processing techniques, classical machine learning approaches, and deep learning algorithms used for bearing fault diagnosis. It also highlights the public datasets widely used in bearing fault diagnosis experiments, such as Case Western Reserve University (CWRU), Paderborn University Bearing, PRONOSTIA, and Intelligent Maintenance Systems (IMS). Finally, it presents a comparison of machine learning techniques, such as support vector machines, k-nearest neighbors, and artificial neural networks, with deep learning algorithms, such as the deep convolutional neural network (CNN), auto-encoder-based deep neural network (AE-DNN), deep belief network (DBN), deep recurrent neural network (RNN), and other deep learning methods that have been used to diagnose rotary machine bearing faults.
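
    Below is a minimal sketch of the kind of data-driven pipeline the review surveys: time-domain feature extraction followed by a shallow classifier (an SVM). The synthetic vibration segments and the chosen features are illustrative assumptions, not the protocol of any of the reviewed datasets.

```python
# Minimal sketch of a data-driven bearing fault diagnosis pipeline:
# segment vibration signals, extract time-domain features, train a shallow classifier.
# The synthetic data and the chosen features are illustrative assumptions.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def extract_features(segment):
    """Common time-domain statistics used as hand-crafted features."""
    rms = np.sqrt(np.mean(segment ** 2))
    return [
        rms,
        kurtosis(segment),                 # impulsiveness, sensitive to localized defects
        skew(segment),
        np.max(np.abs(segment)) / rms,     # crest factor
    ]

# Synthetic stand-in for labeled vibration segments (0 = healthy, 1 = faulty).
rng = np.random.default_rng(0)
healthy = rng.normal(0, 1.0, size=(100, 2048))
faulty = rng.normal(0, 1.0, size=(100, 2048)) + 3.0 * (rng.random((100, 2048)) > 0.99)
X = np.array([extract_features(s) for s in np.vstack([healthy, faulty])])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```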

    Effects of Dynamically Weighting Autonomous Rules in a UAS Flocking Model

    Within the U.S. military, senior decision-makers and researchers alike have postulated that vast improvements could be made to current Unmanned Aircraft Systems (UAS) Concepts of Operation through the inclusion of autonomous flocking. Myriad implementation methods and desirable mission sets for this technology have been identified in the literature; however, this thesis posits that specific missions and behaviors are best suited for autonomous military flocking implementations. Building on Craig Reynolds' theory that three naturally observed rules can serve as building blocks for simulating flocking behavior, new rules are proposed and defined in the development of an autonomous flocking UAS model. Simulation validates that missions of military utility can be accomplished with this method through the incorporation of dynamic event- and time-based rule weights. Additionally, a methodology is proposed and demonstrated that iteratively improves simulated mission effectiveness. Quantitative analysis of data from 570 simulation runs verifies the hypothesis that iterative changes to rule parameters and weights yield significant improvement over baseline performance. For a 36-square-mile scenario, results show a 100% increase in targets found, a 40.2% reduction in time to find a target, and a 4.5% increase in area coverage, with a 0% attrition rate due to collisions and near misses.
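
    The following sketch illustrates the general idea of Reynolds-style flocking with dynamically weighted rules. The rule set, gains, and time-based weighting schedule are illustrative assumptions, not the specific rules or weights developed in the thesis.

```python
# Sketch of Reynolds-style flocking with dynamically weighted rules.
# Gains and the time-based weighting schedule are illustrative assumptions.
import numpy as np

N, DT = 30, 0.1
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (N, 2))
vel = rng.normal(0, 1, (N, 2))

def rule_vectors(pos, vel):
    center = pos.mean(axis=0)
    cohesion = center - pos                          # steer toward flock center
    alignment = vel.mean(axis=0) - vel               # match average heading
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    close = (dist < 5.0) & (dist > 1e-6)             # nearby neighbors, excluding self
    separation = (diff / dist[..., None] * close[..., None]).sum(axis=1)  # avoid crowding
    return cohesion, alignment, separation

def weights(t):
    # Dynamic, time-based weighting: e.g. emphasize separation early,
    # cohesion later. Purely illustrative schedule.
    w_sep = 2.0 if t < 10 else 1.0
    w_coh = 0.5 if t < 10 else 1.5
    return w_coh, 1.0, w_sep

for step in range(200):
    coh, ali, sep = rule_vectors(pos, vel)
    w_coh, w_ali, w_sep = weights(step * DT)
    vel += DT * (w_coh * 0.01 * coh + w_ali * 0.1 * ali + w_sep * 0.5 * sep)
    pos += DT * vel

print("final flock spread:", np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
```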

    Swarm Robotics

    Collectively working robot teams can solve a problem more efficiently than a single robot while also providing robustness and flexibility to the group. A key component of a swarm robotics model is the cooperative algorithm that controls the behaviors and interactions of all individuals. The robots in the swarm should have some basic functions, such as sensing, communicating, and monitoring, and satisfy the following properties.
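
    A hypothetical interface sketch of the basic functions listed above (sensing, communicating, monitoring); the class and method names are illustrative and not taken from any particular swarm robotics framework.

```python
# Hypothetical sketch of the basic functions a swarm robot is expected to expose:
# local sensing, neighbor-to-neighbor communication, and self-monitoring.
# Class and method names are illustrative, not from a specific framework.
from dataclasses import dataclass, field

@dataclass
class SwarmRobot:
    robot_id: int
    position: tuple = (0.0, 0.0)
    inbox: list = field(default_factory=list)

    def sense(self, environment):
        """Return local observations only (swarm robots rely on local sensing)."""
        return [obj for obj in environment if self._distance(obj) < 5.0]

    def communicate(self, neighbors, message):
        """Exchange state with nearby robots; no global broadcast."""
        for robot in neighbors:
            robot.inbox.append((self.robot_id, message))

    def monitor(self):
        """Report own status for the cooperative algorithm to act on."""
        return {"id": self.robot_id, "position": self.position, "messages": len(self.inbox)}

    def _distance(self, obj):
        return ((obj[0] - self.position[0]) ** 2 + (obj[1] - self.position[1]) ** 2) ** 0.5
```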

    Collective Information Processing and Criticality, Evolution and Limited Attention.

    In the first part, I focus on self-organization to criticality (here an order-disorder phase transition) and investigate whether evolution is a possible self-tuning mechanism. Does a simulated cohesive swarm that tries to avoid a pursuing predator tune itself by evolution to the critical point in order to optimize avoidance? It turns out that (i) group avoidance is best at criticality, but (ii) not due to an enhanced response; rather, it results from structural changes fundamentally linked to criticality; (iii) the group optimum is not an evolutionarily stable state; in fact, (iv) it acts as an evolutionary accelerator due to a maximal spatial self-sorting of individuals, which causes spatial selection. In the second part, I model experimentally observed differences in the collective behavior of fish groups subjected over multiple generations to different types of size-dependent selection. The real-world analogs of this experimental evolution are recreational fishing (small fish are released, large ones are consumed) and commercial fishing with large net widths (small/young individuals can escape). The results suggest that harvesting large individuals reduces the cohesion and risk-taking of individuals. I show that both findings can be mechanistically explained by an attention trade-off between social and environmental information. Furthermore, I numerically analyze how differently size-harvested groups perform in natural predation and fishing scenarios. In the last part of the thesis, I quantify collective information processing in the field. The study system is a fish species adapted to sulfidic water conditions with a collective escape behavior from aerial predators, which manifests in repeated collective escape dives. These fish measure about 2 centimeters, but the collective wave spreads across meters in dense shoals at the surface. I find that wave speed increases weakly with polarization, is fastest at an optimal density, and depends on the wave's direction relative to shoal orientation.
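
    A minimal sketch of the attention trade-off idea mentioned above: an agent's desired heading blends social and environmental cues through a single attention weight. This illustrates the concept under assumed cues; it is not the model fitted in the thesis.

```python
# Minimal sketch of an attention trade-off between social and environmental
# information: one weight splits an agent's limited attention between copying
# neighbors and responding to the environment. Illustrative only.
import numpy as np

def desired_direction(social_cue, environmental_cue, attention_social):
    """Blend the two cues; attention_social in [0, 1] is the trade-off parameter."""
    blended = attention_social * social_cue + (1.0 - attention_social) * environmental_cue
    norm = np.linalg.norm(blended)
    return blended / norm if norm > 0 else blended

neighbors_mean_heading = np.array([1.0, 0.0])   # social information (assumed cue)
food_gradient = np.array([0.0, 1.0])            # environmental information (assumed cue)

for w in (0.2, 0.5, 0.8):
    d = desired_direction(neighbors_mean_heading, food_gradient, w)
    print(f"attention to social info {w:.1f} -> heading {d.round(2)}")
```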

    Data fusion by using machine learning and computational intelligence techniques for medical image analysis and classification

    Data fusion is the process of integrating information from multiple sources to produce specific, comprehensive, unified data about an entity. Data fusion is categorized as low level, feature level, and decision level. This research focuses on investigating and developing feature- and decision-level data fusion for automated image analysis and classification. The common procedure for solving these problems can be described as: 1) process the image for region-of-interest detection, 2) extract features from the region of interest, and 3) create a learning model based on the feature data. Image processing techniques, including edge detection, histogram thresholding, and a color drop algorithm, were used to determine the region of interest. The extracted features were low-level features, including textural, color, and symmetry features. For image analysis and classification, feature- and decision-level data fusion techniques are investigated for model learning by using and integrating computational intelligence and machine learning techniques. These techniques include artificial neural networks, evolutionary algorithms, particle swarm optimization, decision trees, clustering algorithms, fuzzy logic inference, and voting algorithms. This work presents both the investigation and development of data fusion techniques for the application areas of dermoscopy skin lesion discrimination, content-based image retrieval, and graphic image type classification.
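
    The decision-level fusion described above can be illustrated with a simple majority-voting ensemble of base classifiers; the classifiers and the stand-in dataset below are assumptions for illustration, not the dermoscopy pipeline developed in this work.

```python
# Sketch of decision-level fusion: several base classifiers are trained on the same
# features and their predictions are combined by majority voting. The classifiers
# and dataset are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

fused = VotingClassifier(
    estimators=[
        ("ann", MLPClassifier(max_iter=1000, random_state=0)),   # artificial neural network
        ("tree", DecisionTreeClassifier(random_state=0)),        # decision tree
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",  # majority vote = decision-level fusion
)
fused.fit(X_tr, y_tr)
print("fused accuracy:", fused.score(X_te, y_te))
```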

    A Survey on Energy Optimization Techniques in UAV-Based Cellular Networks: From Conventional to Machine Learning Approaches

    Wireless communication networks have been witnessing unprecedented demand due to the increasing number of connected devices and emerging bandwidth-hungry applications. Despite many competent technologies for capacity enhancement, such as millimeter-wave communications and network densification, there is still room and need for further capacity enhancement in wireless communication networks, especially for unusual gatherings of people, such as sports competitions, music concerts, etc. Unmanned aerial vehicles (UAVs) have been identified as one of the promising options to enhance capacity due to their easy implementation, pop-up fashion of operation, and cost-effective nature. The main idea is to deploy base stations on UAVs and operate them as flying base stations, thereby bringing additional capacity to where it is needed. However, because UAVs mostly have limited energy storage, their energy consumption must be optimized to increase flight time. In this survey, we investigate different energy optimization techniques with a top-level classification in terms of the optimization algorithm employed: conventional and machine learning (ML). Such a classification helps in understanding the state of the art and the current trends in terms of methodology. In this regard, various optimization techniques are identified from the related literature and presented under the above-mentioned classes of optimization methods. In addition, for the purpose of completeness, we include a brief tutorial on the optimization methods and on the power supply and charging mechanisms of UAVs. Moreover, novel concepts, such as reflective intelligent surfaces and landing spot optimization, are also covered to capture the latest trends in the literature. Comment: 41 pages, 5 figures, 6 tables. Submitted to the Open Journal of the Communications Society (OJ-COMS).
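
    As a rough illustration of the energy budgeting such optimization builds on, the sketch below uses a conventional exhaustive search to pick a hover altitude that maximizes covered users per watt; the power and coverage models, constants, and user layout are illustrative assumptions, not taken from the surveyed works.

```python
# Sketch of a conventional (exhaustive search) placement choice for a UAV base
# station under an energy budget: evaluate candidate hover altitudes and pick the
# one covering the most users per watt. All models and constants are assumptions.
import math

BATTERY_WH = 200.0          # assumed usable battery energy
HOVER_POWER_BASE_W = 150.0  # assumed hover power at reference altitude
BEAMWIDTH_DEG = 60.0        # assumed antenna beamwidth
USERS = [(x, y) for x in range(0, 1000, 50) for y in range(0, 1000, 50)]

def hover_power(alt_m):
    # Assumed simple model: hover power grows mildly with altitude.
    return HOVER_POWER_BASE_W * (1.0 + 0.0005 * alt_m)

def covered(alt_m, cx=500.0, cy=500.0):
    # Coverage disk from a fixed beamwidth cone pointing straight down.
    radius = alt_m * math.tan(math.radians(BEAMWIDTH_DEG / 2))
    return sum(1 for x, y in USERS if math.hypot(x - cx, y - cy) <= radius)

best = max(range(50, 501, 10), key=lambda a: covered(a) / hover_power(a))
flight_time_h = BATTERY_WH / hover_power(best)
print(f"altitude {best} m: {covered(best)} users covered, {flight_time_h:.2f} h hover time")
```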

    The use of computational intelligence for security in named data networking

    Information-Centric Networking (ICN) has recently been considered a promising paradigm for the next-generation Internet, shifting from the sender-driven end-to-end communication paradigm to a receiver-driven content retrieval paradigm. In ICN, content (rather than hosts, as in IP-based designs) plays the central role in communication. This change from host-centric to content-centric has several significant advantages, such as reduced network load, low dissemination latency, and scalability. One of the main design requirements for ICN architectures, from the very beginning of their design, has been strong security. Named Data Networking (NDN) (also referred to as Content-Centric Networking (CCN) or Data-Centric Networking (DCN)) is one of these architectures; it is the focus of an ongoing research effort that aims to shape how the Internet will operate in the future. Existing research into the security of NDN is at an early stage, and many designs are still incomplete. To make NDN a fully working system at Internet scale, many missing pieces still have to be filled in. In this dissertation, we study the four most important security issues in NDN: anomaly detection, DoS/DDoS attacks, congestion control, and cache pollution attacks. The goal is to defend against new, potentially unknown, forms of attack, ensure privacy, achieve high availability, and block malicious network traffic, or at least limit its effectiveness. In order to protect the NDN infrastructure, we need flexible, adaptable, and robust defense systems that can make intelligent, real-time decisions and enable network entities to behave in an adaptive and intelligent manner. In this context, the characteristics of Computational Intelligence (CI) methods, such as adaptation, fault tolerance, high computational speed, and resilience to noisy information, make them suitable for application to the problem of NDN security and highlight promising new research directions. Hence, we propose new hybrid CI-based methods to make NDN a more reliable and viable architecture for the future Internet.
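
    One building block such defenses typically rely on is rate-based anomaly detection. The sketch below uses a simple adaptive (EWMA-based) threshold over Interest rates as a stand-in; it is not the hybrid CI method proposed in the dissertation, and the traffic values and thresholds are illustrative assumptions.

```python
# Sketch of a rate-based anomaly detector over NDN Interest rates using an
# adaptive EWMA threshold. A stand-in for illustration, not the dissertation's
# hybrid CI method; all constants are assumptions.
class InterestRateMonitor:
    def __init__(self, alpha=0.1, k=3.0):
        self.alpha = alpha      # EWMA smoothing factor
        self.k = k              # sensitivity: deviations beyond k sigma are anomalous
        self.mean = None
        self.var = 0.0

    def observe(self, interests_per_sec):
        """Return True if the current rate looks anomalous (possible Interest flooding)."""
        if self.mean is None:
            self.mean = interests_per_sec
            return False
        deviation = interests_per_sec - self.mean
        anomalous = abs(deviation) > self.k * (self.var ** 0.5) + 10.0  # floor avoids cold-start alarms
        if not anomalous:
            # Update running statistics only with benign samples so the baseline
            # is not dragged toward attack traffic.
            self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
            self.mean += self.alpha * deviation
        return anomalous

monitor = InterestRateMonitor()
traffic = [100, 105, 98, 102, 110, 3000, 101]   # sudden spike mimics an Interest flood
for rate in traffic:
    if monitor.observe(rate):
        print(f"anomalous Interest rate detected: {rate}/s")
```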

    Information Theory and Its Application in Machine Condition Monitoring

    Condition monitoring of machinery is one of the most important aspects of many modern industries. With the rapid advancement of science and technology, machines are becoming increasingly complex. Moreover, an exponential increase in demand is driving an ever-growing requirement for machine output. As a result, in most modern industries, machines have to work 24 hours a day. All these factors cause machine health to deteriorate at a higher rate than before. Breakdown of key components of a machine, such as bearings, gearboxes, or rollers, can have catastrophic effects in terms of both financial and human costs. From this perspective, it is important not only to detect a fault at its earliest point of inception but also to design the overall monitoring process, including fault classification, fault severity assessment, and remaining useful life (RUL) prediction, for better planning of the maintenance schedule. Information theory is one of the pioneering contributions of modern science and has evolved into various forms and algorithms over time. Due to its ability to address the non-linearity and non-stationarity of machine health deterioration, it has become a popular choice among researchers, and it is an effective technique for extracting features from machines under different health conditions. In this context, this book discusses the potential applications, research results, and latest developments of information theory-based condition monitoring of machinery.
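
    A concrete example of an information-theoretic feature often used in this setting is the permutation entropy of a vibration signal; the sketch below computes it for synthetic healthy and faulty signals, which are illustrative assumptions rather than data from the book.

```python
# Sketch of an information-theoretic feature for condition monitoring:
# normalized permutation entropy of a vibration signal. The synthetic
# healthy/faulty signals are illustrative assumptions.
import itertools
import math

import numpy as np

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy (0 = fully regular, 1 = maximally irregular)."""
    patterns = {p: 0 for p in itertools.permutations(range(order))}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i:i + order * delay:delay]
        patterns[tuple(np.argsort(window))] += 1          # count ordinal patterns
    probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
    return -np.sum(probs * np.log(probs)) / math.log(math.factorial(order))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
faulty = healthy + (rng.random(t.size) > 0.995) * 5.0     # sparse impulses mimic a defect

print("healthy PE:", round(permutation_entropy(healthy), 3))
print("faulty  PE:", round(permutation_entropy(faulty), 3))
```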