39 research outputs found

    Simulated Annealing Algorithm Combined with Chaos for Task Allocation in Real-Time Distributed Systems

    This paper addresses the problem of task allocation in real-time distributed systems with the goal of maximizing system reliability, a problem which has been shown to be NP-hard. We formulate the problem under a deadline constraint and propose an algorithm called chaotic adaptive simulated annealing (XASA) to solve it. First, XASA performs a chaotic optimization that takes a chaotic walk through the solution space and generates several local minima; second, it improves the SA algorithm via several adaptive schemes and continues to search for the optimum starting from the results of the chaotic phase. The effectiveness of XASA is evaluated by comparison with the traditional SA algorithm and an improved SA algorithm. The results show that XASA achieves a satisfactory speedup without loss of solution quality.
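The two-phase idea described above, a chaotic walk to seed candidate solutions followed by annealing refinement, can be sketched in a few lines. The logistic map, the toy one-dimensional objective, and all parameter values here are illustrative assumptions, not the paper's actual formulation for task allocation.

```python
import math
import random

random.seed(0)

def logistic_map(x, r=4.0):
    # Logistic map in its fully chaotic regime (r = 4): successive iterates
    # wander densely over (0, 1), which makes them useful for exploration.
    return r * x * (1.0 - x)

def chaotic_candidates(objective, n_steps=200, x0=0.3, keep=5):
    # Phase 1: take a chaotic walk through the solution space and keep
    # the few best points visited as starting solutions.
    x, visited = x0, []
    for _ in range(n_steps):
        x = logistic_map(x)
        visited.append((objective(x), x))
    visited.sort()
    return [p for _, p in visited[:keep]]

def simulated_annealing(objective, x0, t0=1.0, cooling=0.95, iters=500):
    # Phase 2: standard SA with geometric cooling, refining one candidate.
    x, fx, t = x0, objective(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = min(1.0, max(0.0, x + random.gauss(0.0, 0.1) * t))
        fc = objective(cand)
        # Metropolis acceptance: always take improvements, sometimes worsenings.
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

def chaotic_sa(objective):
    # Refine every chaotic candidate with SA and return the overall best.
    runs = [simulated_annealing(objective, c) for c in chaotic_candidates(objective)]
    return min(runs, key=lambda r: r[1])

# Multimodal toy objective on [0, 1]; the sine term creates many local minima.
f = lambda x: (x - 0.1) ** 2 + 0.05 * math.sin(25.0 * x)
x_best, f_best = chaotic_sa(f)
```

The same two-phase structure carries over to the task-allocation setting, where the solution space is discrete assignments of tasks to processors rather than a real interval.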

    Optimal use of computing equipment in an automated industrial inspection context

    This thesis deals with automatic defect detection. The objective was to develop the techniques required by a small manufacturing business to make cost-efficient use of inspection technology. In our work on inspection techniques we discuss image acquisition and the choice between custom and general-purpose processing hardware. We examine the classes of general-purpose computer available and study popular operating systems in detail. We highlight the advantages of a hybrid system interconnected via a local area network and develop a sophisticated suite of image-processing software based on it. We quantitatively study the performance of elements of the TCP/IP networking protocol suite and comment on appropriate protocol selection for parallel distributed applications. We implement our own distributed application based on these findings. In our work on inspection algorithms we investigate the potential uses of iterated function series and Fourier transform operators when preprocessing images of defects in aluminium plate acquired using a linescan camera. We employ a multi-layer perceptron neural network trained by backpropagation as a classifier. We examine the effect of the number of nodes in the hidden layer on the training process and on the ability of the network to identify faults in images of aluminium plate. We investigate techniques for introducing positional independence into the network's behaviour. We analyse the pattern of weights induced in the network after training in order to gain insight into the logic of its internal representation. We conclude that the backpropagation training process is so computationally intensive as to present a real barrier to further development in practical neural network techniques, and we seek ways to achieve a speed-up. We consider the training process as a search problem and arrive at a process involving multiple, parallel search "vectors" and aspects of genetic algorithms. We implement the system as the aforementioned distributed application and comment on its performance.
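The closing idea, replacing gradient descent with multiple parallel search "vectors" plus genetic-algorithm-style selection and mutation over the network weights, can be illustrated on a toy problem. The XOR task, the network size, and all hyperparameters below are stand-ins, not the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a minimal stand-in for the plate-defect classification task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mlp(w, x):
    # A 2-3-1 multi-layer perceptron; w packs both weight matrices and biases.
    W1 = w[:6].reshape(2, 3); b1 = w[6:9]
    W2 = w[9:12];             b2 = w[12]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(w):
    return float(np.mean((mlp(w, X) - y) ** 2))

# A population of parallel "search vectors": each one is a full weight vector.
pop = rng.normal(0.0, 1.0, size=(20, 13))
for gen in range(300):
    scores = np.array([loss(w) for w in pop])
    elite = pop[np.argsort(scores)[:5]]  # selection: keep the 5 fittest
    # Mutation: children are noisy copies of randomly chosen elite parents.
    children = elite[rng.integers(0, 5, 15)] + rng.normal(0.0, 0.3, (15, 13))
    pop = np.vstack([elite, children])

best = pop[np.argmin([loss(w) for w in pop])]
```

Because each weight vector is evaluated independently, the population naturally maps onto the distributed application described in the abstract: each node can score its own subset of search vectors in parallel.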

    Applied (Meta)-Heuristic in Intelligent Systems

    Engineering and business problems are becoming increasingly difficult to solve due to the new economics triggered by big data, artificial intelligence, and the Internet of Things. Exact algorithms and heuristics are insufficient for solving such large and unstructured problems; instead, metaheuristic algorithms have emerged as the prevailing methods. A generic metaheuristic framework guides the course of search trajectories beyond local optimality, thus overcoming the limitations of traditional computation methods. The applications of modern metaheuristics range from unmanned aerial and ground surface vehicles, unmanned factories, resource-constrained production, and humanoids to green logistics, renewable energy, the circular economy, agricultural technology, environmental protection, financial technology, and the entertainment industry. This Special Issue presents high-quality papers proposing modern metaheuristics in intelligent systems.
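As an illustration of how a metaheuristic "guides search trajectories beyond local optimality", here is a minimal tabu-search sketch on a toy bitstring problem. The objective, the tabu tenure, and the one-bit-flip neighborhood are invented for the example and do not come from any paper in the issue.

```python
import random

random.seed(1)

def objective(bits):
    # Toy landscape: reward alternating bits; all-ones is a deceptive
    # local optimum that a plain hill climber could get stuck in.
    alt = sum(1 for i in range(len(bits) - 1) if bits[i] != bits[i + 1])
    return alt + (3 if all(bits) else 0)

def neighbors(bits):
    # One-bit-flip neighborhood.
    for i in range(len(bits)):
        n = list(bits); n[i] = 1 - n[i]
        yield tuple(n)

def tabu_search(n=12, iters=200, tenure=5):
    cur = tuple(random.randint(0, 1) for _ in range(n))
    best, tabu = cur, []
    for _ in range(iters):
        cands = [b for b in neighbors(cur) if b not in tabu]
        cur = max(cands, key=objective)   # move to the best non-tabu neighbor,
        tabu = (tabu + [cur])[-tenure:]   # even when it is worse: this is what
        if objective(cur) > objective(best):  # carries the trajectory past local optima
            best = cur
    return best

best = tabu_search()
```

The tabu list is the "memory" that distinguishes this from plain local search: forbidding recently visited solutions forces the trajectory downhill and out of basins when no improving neighbor exists.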

    COBE's search for structure in the Big Bang

    The launch of the Cosmic Background Explorer (COBE) and the definition of the Earth Observing System (EOS) are two of the major events at NASA-Goddard. The three experiments aboard COBE (the Differential Microwave Radiometer (DMR), the Far Infrared Absolute Spectrophotometer (FIRAS), and the Diffuse Infrared Background Experiment (DIRBE)) are very important for measuring the Big Bang. DMR measures the isotropy of the cosmic background (the direction of the radiation). FIRAS examines the spectrum over the whole sky, searching for deviations, and DIRBE operates in the infrared part of the spectrum, gathering evidence of the earliest galaxy formation. By special techniques, the radiation coming from the solar system will be distinguished from that of extragalactic origin. Unique graphics will be used to represent the temperature of the emitting material. A cosmic event will be modeled of such importance that it will affect cosmological theory for generations to come. EOS will monitor changes in the Earth's geophysics during a whole solar cycle.

    Reconstructing the galactic magnetic field

    This thesis deals with the reconstruction of the magnetic field of the Milky Way (GMF, for Galactic Magnetic Field). A detailed description of the magnetic field is relevant for several problems in astrophysics.
First, it plays an important role in how the structure of the Milky Way develops, since the currents of interstellar gas and cosmic rays are deflected by the GMF. Second, it interferes with the measurement and analysis of radiation from extra-galactic sources. Third, it deflects ultra-high-energy cosmic rays (UHECRs) to such an extent that the assignment of measured UHECRs to potential sources is not possible without correcting calculations. Fourth, the GMF can be used to study a cosmic dynamo process, including its internal structures. In contrast to the GMF, normally only the outer magnetic field of stars and planets is accessible and measurable. For all the influence the GMF has on a variety of effects, it is just as difficult to determine. The reason is that the magnetic field cannot be measured directly, but only through its influence on various physical observables. Measurements of these observables yield their total accumulated value along a given line of sight. Due to the fixed position of the solar system in the Milky Way, it is therefore a challenge to assign the measured effect of the magnetic field to a spatial depth. Measurements of the intensity and polarization of radio and microwaves, both for the entire sky and for individual stars whose position in space is known, serve as the main sources of information. From the underlying physical processes, such as synchrotron emission and Faraday rotation, the GMF can be deduced. However, this requires three-dimensional density maps of other constituents of the Milky Way, such as the thermal electrons or the interstellar dust. Physical processes like dispersion and dust absorption are crucial for the creation of these auxiliary maps. To reconstruct the GMF from the existing measurement data, there are essentially two approaches. On the one hand, one can use the phenomenological approach of parametric magnetic field models. This involves defining the structure of the magnetic field through analytical formulas with a limited number of parameters. These models capture the general morphology of the magnetic field, such as galaxy arms and field reversals, but also local characteristics like nebulae in the solar system's neighbourhood. Given a set of measurement data, one tries to find the model parameter values that match the observables as closely as possible. For this purpose, Imagine, the Interstellar MAGnetic field INference Engine, was developed in the course of this doctoral thesis. Due to the relatively small number of parameters in parametric models, a fit is possible even with robust all-sky maps that contain no depth information. However, the parametric-model approach suffers from arbitrariness: a large number of models of varying complexity is available, and these models frequently contradict each other. In the past, the uncertainty of the reconstructed parameters was also often underestimated. In contrast, a rigorous Bayesian analysis, as developed in this doctoral thesis with Imagine, provides a reliable determination of the model parameters. On the other hand, in addition to parametric models, the GMF can also be reconstructed following a non-parametric approach. In this case, each space voxel has two independent degrees of freedom for the magnetic field. Hence, this type of reconstruction places much higher demands on the amount and quality of the data, the algorithms, and the computing capacity. Due to the high number of degrees of freedom, measurement data are required which contain direct (parallax measurements) or indirect (via the Hertzsprung-Russell diagram) depth information. In addition, strong priors are necessary for those areas of space that are only weakly covered by the data. Simple Bayesian methods are no longer sufficient for this. Rather, information field theory (IFT) is needed to combine the various sources of information correctly and to obtain reliable uncertainties. The Python framework NIFTy (Numerical Information Field Theory) is predestined for this task. In its first release version, however, NIFTy was not yet natively capable of reconstructing a magnetic field or dealing with the problem's data volumes. To be able to process such data, d2o was developed as an independent tool for data parallelization. With d2o, parallelized code can be developed without hindering the actual development work. Since essentially all numerical disciplines with large datasets that cannot be broken down into subsets can benefit from it, d2o has been released as an independent package. In addition, NIFTy has been comprehensively revised in its functional scope and structure, so that, among other things, high-resolution magnetic field reconstructions can now be carried out. With NIFTy it is now also possible to create maps of the thermal electron density and the interstellar dust from new and at the same time very large datasets. This paved the way for a non-parametric reconstruction of the GMF.
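The parametric half of the pipeline can be caricatured in a few lines: assume a field model with two parameters, observables that accumulate along the line of sight (as rotation-measure-like data do), and a grid-evaluated Bayesian posterior. Everything here (the sinusoidal field, the noise level, the grids) is a toy assumption and has no relation to Imagine's real interface.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy parametric "field": b(x) = a*sin(x) + c. Each observable is its
# line-of-sight integral up to the depth d_i of a star at known distance.
true_a, true_c = 1.5, 0.3
depths = np.linspace(0.5, 6.0, 30)

def los_integral(a, c, d, n=200):
    # Trapezoid rule, written out so it works on any NumPy version.
    x = np.linspace(0.0, d, n)
    b = a * np.sin(x) + c
    return float(np.sum((b[:-1] + b[1:]) * 0.5 * np.diff(x)))

data = np.array([los_integral(true_a, true_c, d) for d in depths])
data += rng.normal(0.0, 0.05, size=data.shape)  # measurement noise

# Grid-evaluated posterior over the two parameters
# (flat prior, Gaussian likelihood with known noise level).
a_grid = np.linspace(0.0, 3.0, 61)
c_grid = np.linspace(-1.0, 1.0, 41)
log_post = np.empty((len(a_grid), len(c_grid)))
for i, a in enumerate(a_grid):
    for j, c in enumerate(c_grid):
        pred = np.array([los_integral(a, c, d) for d in depths])
        log_post[i, j] = -0.5 * np.sum((data - pred) ** 2) / 0.05**2

i_best, j_best = np.unravel_index(np.argmax(log_post), log_post.shape)
a_hat, c_hat = a_grid[i_best], c_grid[j_best]
```

The contrast drawn in the abstract is then one of scale: with two parameters a brute-force grid suffices, whereas the non-parametric approach gives every voxel its own degrees of freedom, which is why IFT machinery and parallelized data handling become necessary.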

    Virtual machine scheduling in dedicated computing clusters

    Time-critical applications process a continuous stream of input data and have to meet specific timing constraints. A common approach to ensuring that such an application satisfies its constraints is over-provisioning: the application is deployed in a dedicated cluster environment with enough processing power to achieve the target performance for every specified data input rate. This approach comes with a drawback: at times of decreased data input rates, the cluster resources are not fully utilized. A typical use case is the HLT-Chain application, which processes physics data at runtime of the ALICE experiment at CERN. From a perspective of cost and efficiency, it is desirable to exploit temporarily unused cluster resources. Existing approaches aim for that goal by running additional applications. These approaches, however, a) lack the flexibility to dynamically grant the time-critical application the resources it needs, b) are insufficient for isolating the time-critical application from harmful side effects introduced by additional applications, or c) are not general because application-specific interfaces are used. In this thesis, a software framework is presented that makes it possible to exploit unused resources in a dedicated cluster without harming a time-critical application. Additional applications are hosted in Virtual Machines (VMs), and unused cluster resources are allocated to these VMs at runtime. In order to avoid resource bottlenecks, the resource usage of the VMs is dynamically modified according to the needs of the time-critical application. For this purpose, a number of methods that have not previously been combined are used. On a global level, appropriate VM manipulations such as hot migration, suspend/resume, and start/stop are determined by an informed search heuristic and applied at runtime. Locally, on the cluster nodes, a feedback-controlled adaptation of VM resource usage is carried out in a decentralized manner. Employing this framework increases a cluster's usage by running additional applications while preventing negative impact on the time-critical application. This capability is demonstrated for the HLT-Chain application: in an empirical evaluation, the cluster CPU usage is increased from 49% to 79%, additional results are computed, and no negative effects on the HLT-Chain application are observed.
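The local, feedback-controlled part of the scheme can be sketched as a simple proportional controller that shrinks the VMs' CPU share whenever the time-critical application falls behind its target processing rate, and grows it again when there is headroom. The capacity model, the gain, and the share bounds below are invented for illustration and are not the thesis's actual controller.

```python
def adapt_vm_share(vm_share, achieved_rate, target_rate, gain=0.5,
                   min_share=0.0, max_share=0.8):
    # Proportional control: error > 0 means the time-critical app is
    # behind its target, so CPU is taken back from the VMs.
    error = (target_rate - achieved_rate) / target_rate
    new_share = vm_share - gain * error
    return max(min_share, min(max_share, new_share))

# Simulate one node under a made-up linear capacity model: the time-critical
# app reaches its target rate exactly when the VMs leave it 40% of the CPU.
share, history = 0.8, []
for step in range(20):
    achieved = (1.0 - share) / 0.4   # fraction of the target rate achieved
    share = adapt_vm_share(share, achieved, 1.0)
    history.append(share)
```

Under this model the controller settles at a VM share of 0.6, i.e. exactly the point where the time-critical application just meets its target; each node can run this loop independently, which is what makes the adaptation decentralized.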

    Engineering Physics and Mathematics Division progress report for period ending December 31, 1994
