Development of a Parallel BAT and Its Applications in Binary-state Network Reliability Problems
Networks of many kinds are broadly and deeply embedded in real-life applications.
Reliability is the most important index for measuring the performance of all
network types. Among the various algorithms, only implicit enumeration
algorithms, such as depth-first search, breadth-first search, the universal
generating function methodology, binary-decision diagram, and
binary-addition-tree algorithm (BAT), can be used to calculate the exact
network reliability. However, implicit enumeration algorithms can only be used
to solve small-scale network reliability problems. The BAT was recently
proposed as a simple, fast, easy-to-code, and flexible make-to-fit
exact-solution algorithm. Based on the experimental results, the BAT and its
variants outperformed other implicit enumeration algorithms. Hence, to overcome
the scale limitation noted above, a new parallel BAT (PBAT) based on a
multithreaded compute architecture is proposed to improve the BAT for the
binary-state network reliability problem, which is fundamental to all types of
network reliability problems. From the analysis
of the time complexity and experiments conducted on 20 benchmarks of
binary-state network reliability problems, PBAT was able to efficiently solve
medium-scale network reliability problems.
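As a rough illustration of the enumeration core that BAT organizes (and that PBAT parallelizes), the sketch below is a naive, unpruned Python baseline, not the PBAT implementation: state vectors are generated by binary addition, and the probabilities of all vectors in which the source reaches the sink are summed. The bridge network and probabilities are hypothetical example data.

```python
from collections import deque

def next_vector(x):
    """Binary-addition step: add 1 to the 0/1 vector x (least significant
    coordinate first). Returns False once every vector has been visited."""
    for i in range(len(x)):
        if x[i] == 0:
            x[i] = 1
            return True
        x[i] = 0
    return False

def is_connected(arcs, states, source, sink):
    """BFS over the arcs whose state is 1 (working)."""
    adj = {}
    for (u, v), s in zip(arcs, states):
        if s:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == sink:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def bat_reliability(arcs, p, source, sink):
    """Sum Pr(X) over every state vector X in which source reaches sink."""
    x = [0] * len(arcs)
    rel = 0.0
    while True:
        if is_connected(arcs, x, source, sink):
            prob = 1.0
            for xi, pi in zip(x, p):
                prob *= pi if xi else (1.0 - pi)
            rel += prob
        if not next_vector(x):
            break
    return rel

# Hypothetical example: the classic bridge network, source 1, sink 4,
# every arc working with probability 0.9.
arcs = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
p = [0.9] * 5
rel = bat_reliability(arcs, p, 1, 4)  # ≈ 0.97848
```

The real BAT gains its speed from pruning and ordering rules on top of this enumeration; PBAT further splits the vector space across threads, which is straightforward here because the reliability contribution of each state vector is independent.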
Building Reliable Budget-Based Binary-State Networks
Everyday life is driven by various networks, such as supply chains for
distributing raw materials, semi-finished goods, and final products;
Internet of Things (IoT) for connecting and exchanging data; utility networks
for transmitting fuel, power, water, electricity, and 4G/5G; and social
networks for sharing information and connections. The binary-state network is a
basic network, where the state of each component is either success or failure,
i.e., the binary-state. Network reliability plays an important role in
evaluating the performance of network planning, design, and management. As more
networks are deployed in the real world, their reliability becomes increasingly
important, and it is necessary to build a reliable network within a limited
budget. However, existing studies focus on the budget limit for each
minimal path (MP) in networks without considering the total budget of the
entire network. We propose a novel concept to consider how to build a more
reliable binary-state network under the budget limit. In addition, we propose
an algorithm based on the binary-addition-tree algorithm (BAT) and stepwise
vectors to solve the problem efficiently.
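The total-budget design problem can be illustrated with a naive brute-force baseline, which is not the paper's BAT-and-stepwise-vector method: choose the subset of candidate arcs whose total cost stays within the budget and whose two-terminal reliability is highest. The candidate arcs, reliabilities, and costs below are hypothetical.

```python
from collections import deque
from itertools import combinations, product

def connected(arcs, states, s, t):
    """BFS over arcs whose state is 1 (working)."""
    adj = {}
    for (u, v), st in zip(arcs, states):
        if st:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def reliability(arcs, p, s, t):
    """Exact two-terminal reliability by full state enumeration."""
    rel = 0.0
    for states in product([0, 1], repeat=len(arcs)):
        if connected(arcs, states, s, t):
            pr = 1.0
            for st, pi in zip(states, p):
                pr *= pi if st else 1.0 - pi
            rel += pr
    return rel

def best_network(candidates, budget, s, t):
    """candidates: list of (arc, reliability, cost) triples.
    Brute-force search over all arc subsets within the TOTAL budget."""
    best, best_rel = None, -1.0
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            if sum(c for _, _, c in subset) > budget:
                continue
            arcs = [a for a, _, _ in subset]
            p = [q for _, q, _ in subset]
            rel = reliability(arcs, p, s, t)
            if rel > best_rel:
                best, best_rel = subset, rel
    return best, best_rel

# Hypothetical candidates: arc, working probability, installation cost.
candidates = [((1, 2), 0.9, 1), ((2, 3), 0.9, 1), ((1, 3), 0.8, 1)]
design, rel = best_network(candidates, budget=3, s=1, t=3)
```

Both loops are exponential, which is exactly why the paper turns to BAT-based search with stepwise vectors; the sketch only fixes the problem statement: the budget constrains the whole network, not each minimal path separately.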
Building an Improved Internet of Things Smart Sensor Network Based on a Three-Phase Methodology
© 2013 IEEE. In recent years, the Internet of Things (IoT) has allowed the easy, intelligent, and efficient connection of many devices used in daily life by means of numerous smart sensors that communicate with each other using wireless signals. The rapid development of the IoT has been a result of recent advances in sensing technology. This paper proposes a three-phase methodology to improve the quality of experience for IoT system technologies. The proposed method employs the concept of simple routing and two well-known multi-criteria decision-making (MCDM) techniques: the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). First, all simple routings are obtained using the proposed depth-first search (DFS) technique. In the second phase, AHP is applied to analyze the structure of the problem and to obtain weights for the selected criteria. In the third phase, TOPSIS is utilized to rank the simple routings, which are simple paths. A case study is provided to demonstrate the proposed three-phase methodology, and the results of the numerical experiments show that it successfully achieves the aim of this paper.
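The third phase can be sketched with a minimal textbook TOPSIS implementation, not the paper's code; the candidate routings, criteria, and weights below are hypothetical stand-ins for the AHP output.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: rows = alternatives, columns = criteria
    weights: criterion weights (e.g. from AHP), summing to 1
    benefit: per criterion, True if 'larger is better'."""
    m, n = len(matrix), len(matrix[0])
    # 1. Vector-normalize each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # 2. Ideal (best) and anti-ideal (worst) point per criterion.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    # 3. Closeness coefficient: distance to worst over total distance.
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical example: three candidate simple routings scored on
# hop count (cost), latency in ms (cost), and link reliability (benefit).
paths = [[3, 20.0, 0.95],
         [4, 15.0, 0.90],
         [2, 30.0, 0.99]]
w = [0.2, 0.3, 0.5]  # weights as they might come out of AHP
scores = topsis(paths, w, benefit=[False, False, True])
best = max(range(len(paths)), key=lambda i: scores[i])
```

The routing with the highest closeness coefficient is the recommended simple path; the AHP phase only has to supply `w`, which keeps the two MCDM stages cleanly decoupled.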
Machine Learning for Camera-Based Monitoring of Laser Welding Processes
The increasing use of automated laser welding processes places high demands on process monitoring. The goal is to guarantee high joint quality and the earliest possible defect detection. By using machine learning methods, less expensive and, in the best case, already existing sensors can be employed to monitor the entire process. This work presents methods that perform process monitoring before, during, and after the welding process with a camera integrated into the focusing optics coaxially to the laser beam. The contacting process of copper wires for the production of formed-coil windings is used to illustrate the methods. Pre-process monitoring comprises component position detection optimized by a convolutional neural network. A shape check of the detected joining components additionally allows preprocessing steps to be monitored and the welding of defective components to be avoided. In-process monitoring focuses on the detection of spatter, as spatter serves as an indicator of an unstable process. Machine learning algorithms perform a semantic segmentation that enables a clear distinction between smoke, process light, and material ejection. Post-process quality assessment involves extracting information about the size and shape of the joint area from the camera image. In addition, a method is proposed that computes height data from a single camera image using machine learning. A rule-based quality assessment of the weld seams is then carried out on the basis of the height map. For all algorithms, integrability into industrial processes is taken into account, including a small data basis, limited inference hardware from industrial production, and user acceptance.
Fate of Duplicated Neural Structures
Statistical mechanics determines the abundance of different arrangements of
matter depending on cost-benefit balances. Its formalism and phenomenology
percolate throughout biological processes and set limits to effective
computation. Under specific conditions, self-replicating and computationally
complex patterns become favored, yielding life, cognition, and Darwinian
evolution. Neurons and neural circuits sit at a crossroads between statistical
mechanics, computation, and (through their role in cognition) natural
selection. Can we establish a statistical physics of neural circuits?
Such theory would tell what kinds of brains to expect under set energetic,
evolutionary, and computational conditions. With this big picture in mind, we
focus on the fate of duplicated neural circuits. We look at examples from
central nervous systems, with a stress on computational thresholds that might
prompt this redundancy. We also study a naive cost-benefit balance for
duplicated circuits implementing complex phenotypes. From this we derive phase
diagrams and (phase-like) transitions between single and duplicated
circuits, which constrain evolutionary paths to complex cognition. Back to the
big picture, similar phase diagrams and transitions might constrain I/O and
internal connectivity patterns of neural circuits at large. The formalism of
statistical mechanics seems a natural framework for this worthy line of research.
Comment: Review with novel results. Position paper. 16 pages, 3 figures.
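The flavor of such a phase diagram can be conveyed with a toy cost-benefit balance. This is a hypothetical illustrative form, not the model derived in the paper: a single circuit yields fitness B - C, while a duplicated circuit doubles the maintenance cost but gains an extra redundancy benefit eps*B, so duplication is favored exactly when eps*B > C, giving the phase boundary C = eps*B in the (B, C) plane.

```python
def favored_phase(B, C, eps):
    """Toy cost-benefit balance (hypothetical form): compare the fitness of
    a single circuit, B - C, against a duplicated one, (1 + eps) * B - 2 * C,
    where eps is the extra benefit conferred by redundancy."""
    f_single = B - C
    f_duplicated = (1.0 + eps) * B - 2.0 * C
    return "duplicated" if f_duplicated > f_single else "single"

# Scan a small (B, C) grid for eps = 0.5; the 'duplicated' region lies
# below the straight boundary line C = eps * B.
eps = 0.5
grid = {(B, C): favored_phase(B, C, eps)
        for B in (0.5, 1.0, 2.0, 4.0, 8.0)
        for C in (0.5, 1.0, 2.0)}
```

Even in this caricature the transition is sharp and linear in the parameters; the paper's point is that richer benefit functions for complex phenotypes bend this boundary and thereby constrain which evolutionary paths to complex cognition are accessible.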