
    Uncertainty modelling in power spectrum estimation of environmental processes

    For efficient reliability analysis of buildings and structures, robust load models are required in stochastic dynamics; these can be estimated in particular from environmental processes such as earthquakes or wind loads. To determine the response behaviour of a dynamic system under such loads, the power spectral density (PSD) function is a widely used tool for identifying the frequency components and corresponding amplitudes of environmental processes. Since the real data records required for this purpose are often subject to aleatory and epistemic uncertainties, and the PSD estimation process itself can induce further uncertainties, a rigorous quantification of these uncertainties is essential; otherwise a highly inaccurate load model could be generated, which may yield misleading simulation results. A system behaviour that is actually catastrophic can thus be shifted into an acceptable range, classifying the system as safe even though it is exposed to a high risk of damage or collapse. To address these issues, alternative load models are proposed using probabilistic and non-deterministic models that are able to account efficiently for these uncertainties and to model the loadings accordingly. Various methods are used in the generation of these load models, selected in particular according to the characteristics of the data and the number of available records. If multiple data records are available, reliable statistical information can be extracted from a set of similar PSD functions that differ, for instance, only slightly in shape and peak frequency. Based on these statistics, a PSD function model is derived utilising subjective probabilities to capture the epistemic uncertainties and represent this information effectively. The spectral densities are characterised as random variables instead of discrete values, and thus the PSD function itself represents a non-stationary random process comprising a range of possible valid PSD functions for a given data set. If only a limited number of data records is available, such reliable statistical information cannot be derived. Therefore, an interval-based approach is proposed that determines only an upper and a lower bound and does not rely on any distribution within these bounds. A set of discrete-valued PSD functions is transformed into an interval-valued PSD function by optimising the weights of pre-derived basis functions from a Radial Basis Function Network such that they compose an upper and a lower bound that encompass the data set. A range of possible values and system responses is thereby identified rather than discrete values, which allows the epistemic uncertainties to be quantified. When generating such a load model from real data records, the problem can arise that the individual records exhibit a high spectral variance in the frequency domain and therefore differ too much from each other, although they appear similar in the time domain. A load model derived from these data may not cover the entire spectral range and is therefore not representative. The data are therefore grouped according to their similarity using the Bhattacharyya distance and the k-means algorithm, which may generate two or more load models from the entire data set. These can be applied separately to the structure under investigation, leading to more accurate simulation results.
This approach can also be used to estimate the spectral similarity of individual data sets in the frequency domain, which is particularly relevant for the load models mentioned above. If the uncertainties are modelled directly in the time signal, it can be a challenging task to transform them efficiently into the frequency domain. Such a signal may consist only of reliable bounds within which the actual signal lies. A method is presented that can automatically propagate this interval uncertainty through the discrete Fourier transform, obtaining the exact bounds on the Fourier amplitude and an estimate of the PSD function. The method allows such an interval signal to be propagated without making assumptions about the dependence and distribution of the error over the time steps. These novel representations of load models are able to quantify epistemic uncertainties that are inherent in real data records or induced by the PSD estimation process. The strengths and advantages of these approaches in practice are demonstrated by means of several numerical examples from the field of stochastic dynamics.
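
    As a rough illustration of the grouping step mentioned in this abstract, the sketch below clusters an ensemble of PSD estimates by spectral similarity using the Bhattacharyya distance together with a Lloyd-style (k-means-like) assignment. The array name `psd_ensemble`, the use of arithmetic means as cluster centres and the choice of k = 2 are illustrative assumptions, not details taken from the thesis.

```python
# Hedged sketch: grouping PSD estimates by spectral similarity with the
# Bhattacharyya distance and a Lloyd-style (k-means-like) assignment.
# Each PSD is treated as a discrete distribution over frequency; cluster
# centres are plain averages of the member spectra, which is a simplification
# (the true Bhattacharyya centroid is not the arithmetic mean).
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalised spectra."""
    bc = np.sum(np.sqrt(p * q))               # Bhattacharyya coefficient
    return -np.log(np.clip(bc, 1e-12, 1.0))

def cluster_psds(psds, k=2, n_iter=50, seed=0):
    """Group an ensemble of PSD estimates (rows) into k spectral classes."""
    rng = np.random.default_rng(seed)
    p = psds / psds.sum(axis=1, keepdims=True)     # normalise each PSD
    centres = p[rng.choice(len(p), size=k, replace=False)]
    labels = np.zeros(len(p), dtype=int)
    for _ in range(n_iter):
        d = np.array([[bhattacharyya_distance(pi, c) for c in centres] for pi in p])
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):
            if np.any(labels == j):
                centres[j] = p[labels == j].mean(axis=0)
    return labels, centres

# Usage: psd_ensemble is a hypothetical (n_records, n_freq) array of PSD estimates.
# labels, _ = cluster_psds(psd_ensemble, k=2)
```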

    Adaptive Multi-Priority Rule Approach To Control Agile Disassembly Systems In Remanufacturing

    End-of-Life (EOL) products in remanufacturing are prone to a high degree of uncertainty in terms of product quantity and quality. Therefore, the industrial shift towards a circular economy emphasizes the need for agile and hybrid disassembly systems. These systems feature a dynamic material flow and combine the endurance of robots with the dexterity of human operators for an effective and economically reasonable EOL-product treatment. Moreover, being reconfigurable, agile disassembly systems allow an alignment of their functional and quantitative capacity to volatile production programs. However, changes in both the system configuration and the production program to be processed call for adaptive approaches to production control. This paper proposes a multi-priority rule heuristic combined with an optimization tool for adaptive re-parameterization. First, domain-specific priority rules are introduced and incorporated into a weighted priority function for disassembly task allocation. In addition, a novel metaheuristic parameter optimizer is devised to facilitate the adaptation of the weights in response to evolving requirements within a reasonable timeframe. Different metaheuristics, such as simulated annealing or particle swarm optimization, are incorporated as black-box optimizers. Their performance is evaluated across six distinct test cases using discrete event simulation, with a primary focus on measuring both speed and solution quality. To gauge the efficacy of the approach, a robust set of weights is employed as a benchmark. The results show that the metaheuristics can quickly identify high-quality solutions, illustrating their potential to enhance the efficiency of agile disassembly systems.
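
    A minimal sketch of what a weighted multi-priority-rule dispatching function could look like is given below. The three rules (a shortest-processing-time term, a due-date urgency term and a tool-change penalty) and the weight vector are hypothetical placeholders; the paper's domain-specific rules are not reproduced here.

```python
# Hedged sketch: a weighted multi-priority-rule function for disassembly task
# allocation. The three rules shown are illustrative placeholders, not the
# domain-specific rules of the paper; the weight vector w is what a
# metaheuristic would re-parameterize.
from dataclasses import dataclass

@dataclass
class Task:
    proc_time: float      # expected disassembly duration at the station
    slack: float          # time remaining until the job's due date
    tool_change: bool     # whether the station must change its tool/gripper

def priority(task: Task, w=(0.5, 0.3, 0.2)) -> float:
    """Higher score = dispatch earlier; each rule is roughly normalised."""
    shortest_proc = 1.0 / (1.0 + task.proc_time)      # SPT-like rule
    urgency = 1.0 / (1.0 + max(task.slack, 0.0))      # due-date urgency
    no_setup = 0.0 if task.tool_change else 1.0       # avoid tool changes
    return w[0] * shortest_proc + w[1] * urgency + w[2] * no_setup

def dispatch(queue, w=(0.5, 0.3, 0.2)):
    """Pick the next task for an idle station from its queue."""
    return max(queue, key=lambda t: priority(t, w))

# A black-box optimizer (e.g. simulated annealing) would evaluate candidate
# weight vectors w by running the discrete event simulation and comparing
# throughput or lateness, then keep the best-performing weights.
```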

    Uncertainty Propagation of Missing Data Signals with the Interval Discrete Fourier Transform

    The interval discrete Fourier transform (DFT) algorithm can propagate signals carrying interval uncertainty. By addressing the repeated variables problem, the interval DFT algorithm provides exact theoretical bounds on the Fourier amplitude and estimates of the power spectral density (PSD) function while running in polynomial time. Thus, the algorithm can be used to assess the worst-case scenario in terms of maximum or minimum power, and to provide insights into the amplitude spectrum bands of the transformed signal. To propagate signals with missing data, upper and lower values for the missing data present in the signal must be assumed; the resulting uncertainty in the spectrum bands can then also be interpreted as an indicator of the quality of the reconstructed signal. For missing data reconstruction, a number of techniques are available that can be used to obtain reliable bounds in the time domain, such as Kriging regressors and interval predictor models. Alternative heuristic strategies based on variable, as opposed to fixed, bounds can also be explored. This work aims to investigate the sensitivity of the algorithm against interval uncertainty in the time signal. The investigation is conducted in different case studies using signals of different lengths generated from the Kanai-Tajimi PSD function, representing earthquakes, and the Joint North Sea Wave Observation Project (JONSWAP) PSD function, representing sea waves as a narrowband PSD model.
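
    For context, the sketch below writes out the Kanai-Tajimi PSD in its common textbook form; the parameter values (ground frequency, ground damping, intensity) are illustrative and not taken from the case studies.

```python
# Hedged sketch: Kanai-Tajimi ground-motion PSD in its common textbook form.
# The default parameters below are illustrative placeholders only.
import numpy as np

def kanai_tajimi_psd(omega, omega_g=5.0 * np.pi, zeta_g=0.63, s0=0.03):
    """One-sided Kanai-Tajimi power spectral density S(omega)."""
    num = omega_g**4 + 4.0 * zeta_g**2 * omega_g**2 * omega**2
    den = (omega_g**2 - omega**2) ** 2 + 4.0 * zeta_g**2 * omega_g**2 * omega**2
    return s0 * num / den

# A compatible stationary sample can then be generated with the spectral
# representation method: a sum of cosines with amplitudes sqrt(2*S(w_k)*dw)
# and independent uniform random phases.
omega = np.linspace(0.0, 30.0, 1024)
spectrum = kanai_tajimi_psd(omega)
```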

    Extended Production Planning of Reconfigurable Manufacturing Systems by Means of Simulation-based Optimization

    Reconfigurable manufacturing systems (RMS) are capable of adjusting their operating point to the requirements of current customer demand with high degrees of freedom. In light of recent events, such as the COVID-19 crisis or the chip crisis, this reconfigurability proves to be crucial for the efficient manufacturing of goods. Reconfigurability aims thereby not only at adjusting production capacities but also at the fast integration of new product variants or technologies. However, the operation of such systems is linked to high efforts concerning manual work in production planning and control. Simulation-based optimization provides the possibility to automate processes in production planning and control with the advantage of relying on mostly existing models such as material flow simulations. This paper studies the capabilities of the metaheuristics evolutionary algorithm, linear annealing and tabu search to automate the search for optimal production reconfiguration strategies. Two distinct use cases are regarded: an increase in customer demand and the introduction of a previously unknown product variant. A parametrized material flow simulation is used as a function approximator for the optimizers, whereby both the production system's structure and its logic are target variables of the optimizers. The analysis shows that the metaheuristics find good solutions in a short time with only little manual configuration needed, illustrating their potential to automate the production planning of RMS. However, the results indicate that the performance of the three metaheuristics differs strongly with respect to optimization quality and speed.
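
    The sketch below illustrates the general simulation-based optimization loop, with simulated annealing standing in as a representative black-box metaheuristic. The toy `simulate` function, the decision variables (machines per segment, buffer size) and the cooling schedule are placeholder assumptions rather than the study's actual setup.

```python
# Hedged sketch: a minimal simulated-annealing loop treating the material flow
# simulation as a black box. `simulate` is a stand-in for the parametrized
# discrete event simulation; the decision vector and cost are illustrative only.
import math
import random

def simulate(config):
    """Placeholder for the material flow simulation; returns a cost to minimise
    (e.g. negative throughput plus reconfiguration effort)."""
    machines, buffer_size = config
    return abs(machines - 4) + 0.1 * abs(buffer_size - 20)   # toy landscape

def neighbour(config):
    machines, buffer_size = config
    return (max(1, machines + random.choice((-1, 1))),
            max(1, buffer_size + random.choice((-5, 5))))

def simulated_annealing(start, t0=10.0, cooling=0.95, steps=200):
    current, best = start, start
    cur_cost = best_cost = simulate(start)
    temp = t0
    for _ in range(steps):
        cand = neighbour(current)
        cost = simulate(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            current, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling
    return best, best_cost

# best_config, best_cost = simulated_annealing(start=(2, 10))
```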

    Stochastic Processes Identification from Data Ensembles via Power Spectrum Classification

    Modern approaches to solving dynamic problems where random vibration is of significance will in most cases rely upon the fundamental concept of the power spectrum as a core model for excitation and response process representation. This is partly due to the practicality of spectral models for frequency domain analysis, as well as their ease of use for generating compatible time domain samples. Such samples may be utilised for the numerical performance evaluation of structures, including those represented by complex non-linear models. Utilisation of ensemble statistics will be considered first for stationary processes only. For a stationary stochastic process, its power spectrum can be estimated statistically across all time or for a single window in time across an ensemble of records. In this work, it is first shown that ensemble characteristics can be utilised to improve the resulting power spectra by using estimations of the median instead of the mean of multiple data records. The improved power spectrum will be more robust in the presence of spectral outliers. The median spectrum will result in more reliable response statistics, particularly when the source ensemble records contain low power spectra that are significantly below the mean. A weighted median spectrum will also be utilised, based upon the spectral distance of each record from the median, which will shift the estimated spectrum in the direction of the closest samples. In some cases, the data records exhibit spectral variance to such an extent that a single power spectrum estimate is insufficient to adequately model the process statistics. In such cases, a more realistic representation of the spectral range of the process is captured by estimating two or more power spectra. This is done by classifying individual process records based upon the distances between their individual spectral estimates, and therefore the only parameterisation required is the choice of the number of spectrum models to be defined. This work was funded by the Deutsche Forschungsgemeinschaft (German Research Foundation) grants BE 2570/4-1 and CO 1849/1-1 as part of the project "Uncertainty modelling in power spectrum estimation of environmental processes with applications in high-rise building performance evaluation".
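
    A minimal sketch of the median-based ensemble estimate described above, assuming a Welch estimator per record; the sampling rate and window length are placeholder values.

```python
# Hedged sketch: ensemble PSD estimation using the median across records rather
# than the mean; fs and nperseg are placeholder values, not those of the paper.
import numpy as np
from scipy.signal import welch

def ensemble_psd(records, fs=100.0, nperseg=256, statistic="median"):
    """records: (n_records, n_samples) ensemble of stationary time histories."""
    psds = []
    for x in records:
        f, pxx = welch(x, fs=fs, nperseg=nperseg)
        psds.append(pxx)
    psds = np.asarray(psds)
    if statistic == "median":
        return f, np.median(psds, axis=0)   # robust against spectral outliers
    return f, psds.mean(axis=0)

# Usage: f, s_med = ensemble_psd(ensemble)   # ensemble: stacked data records
```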

    Assessing the severity of missing data problems with the interval discrete Fourier transform algorithm

    The interval discrete Fourier transform (DFT) algorithm can propagate, in polynomial time, signals carrying interval uncertainty. By computing the exact theoretical bounds on a signal with missing data, the algorithm can be used to assess the worst-case scenario in terms of maximum or minimum power, and to provide insights into the amplitude spectrum bands of the transformed signal. The uncertainty width of the spectrum bands can also be interpreted as an indicator of the quality of the reconstructed signal. This strategy must, however, assume upper and lower values for the missing data present in the signal. While this may seem arbitrary, there are a number of existing techniques that can be used to obtain reliable bounds in the time domain, for example Kriging regressors or interval predictor models. Alternative heuristic strategies based on variable (as opposed to fixed) bounds can also be explored, thanks to the flexibility and efficiency of the interval DFT algorithm. This is illustrated by means of numerical examples and sensitivity analyses.
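
    To make the underlying repeated-variables issue concrete, the sketch below computes naive outer bounds on the Fourier amplitude of an interval-valued signal by bounding the real and imaginary parts independently. This is not the exact interval DFT algorithm of the paper, which removes exactly this overestimation; the helper name and the tolerance-based usage line are illustrative.

```python
# Hedged sketch: naive outer bounds on the Fourier amplitude of a signal with
# interval-valued samples (e.g. missing points replaced by [lo, hi] bounds).
# Because the real and imaginary parts share the same interval variables, this
# independent treatment overestimates the width; the interval DFT algorithm
# referenced above computes the exact bounds instead.
import numpy as np

def naive_amplitude_bounds(lo, hi):
    """lo, hi: arrays of lower/upper sample bounds; returns (amp_lo, amp_hi)."""
    n = len(lo)
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    cos, sin = np.cos(2 * np.pi * k * t / n), -np.sin(2 * np.pi * k * t / n)

    def linear_bounds(c):  # bounds of sum_n c[n] * x[n] with x[n] in [lo, hi]
        upper = np.where(c >= 0, c * hi, c * lo).sum(axis=1)
        lower = np.where(c >= 0, c * lo, c * hi).sum(axis=1)
        return lower, upper

    re_lo, re_hi = linear_bounds(cos)
    im_lo, im_hi = linear_bounds(sin)

    def square_bounds(a, b):  # bounds of y**2 for y in [a, b]
        lower = np.where((a <= 0) & (b >= 0), 0.0, np.minimum(a**2, b**2))
        return lower, np.maximum(a**2, b**2)

    re2_lo, re2_hi = square_bounds(re_lo, re_hi)
    im2_lo, im2_hi = square_bounds(im_lo, im_hi)
    return np.sqrt(re2_lo + im2_lo), np.sqrt(re2_hi + im2_hi)

# Usage (hypothetical): amp_lo, amp_hi = naive_amplitude_bounds(signal - tol, signal + tol)
```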

    Towards a Service-Oriented Architecture for Production Planning and Control: A Comprehensive Review and Novel Approach

    The trends of shorter product lifecycles, customized products, and volatile market environments require manufacturers to reconfigure their production increasingly frequently to maintain competitiveness and customer satisfaction. More frequent reconfigurations, however, are linked to increased efforts in production planning and control (PPC). This poses a challenge for manufacturers, especially in view of demographic change and the shortage of qualified labour, since many tasks in PPC are performed manually by domain experts. Following the paradigm of software-defined manufacturing, this paper aims to enable a higher degree of automation and interoperability in PPC by applying the concepts of service-oriented architecture. As a result, production planners are empowered to orchestrate tasks in PPC without consideration of the underlying implementation details. First, it is investigated how tasks in PPC can be represented as services with the aim of encapsulation and reusability. Second, a software architecture based on asset administration shells is presented that allows connection to production data sources and enables the integration and usage of such PPC services. In this sense, an approach for mapping asset administration shells to OpenAPI Specifications is proposed for the interoperable and semantic integration of existing services and legacy systems. Lastly, challenges and potential solutions for data integration are discussed, considering the present heterogeneity of data sources in manufacturing.
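
    As a loose illustration of encapsulating PPC tasks behind stable service interfaces, the sketch below uses a plain in-process registry; the service names, payloads and toy logic are invented for illustration, and the paper itself realises the idea with asset administration shells and OpenAPI descriptions rather than Python functions.

```python
# Hedged sketch: PPC tasks wrapped as reusable services behind a uniform call
# interface, so a planner can orchestrate them without implementation details.
# Registry, service names and payloads are purely illustrative.
from typing import Callable, Dict

SERVICES: Dict[str, Callable[[dict], dict]] = {}

def service(name: str):
    """Register a PPC task under a stable service name."""
    def register(fn: Callable[[dict], dict]):
        SERVICES[name] = fn
        return fn
    return register

@service("capacity-planning")
def plan_capacity(payload: dict) -> dict:
    demand = payload.get("demand", 0)
    return {"required_stations": max(1, round(demand / 100))}   # toy logic

@service("order-release")
def release_orders(payload: dict) -> dict:
    return {"released": sorted(payload.get("orders", []))[:payload.get("limit", 5)]}

def orchestrate(plan):
    """Run a sequence of (service name, payload) steps defined by the planner."""
    return [SERVICES[name](payload) for name, payload in plan]

# orchestrate([("capacity-planning", {"demand": 450}),
#              ("order-release", {"orders": [7, 3, 9], "limit": 2})])
```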

    Interoperable Architecture For Logical Reconfigurations Of Modular Production Systems

    Individualisation of products and ever-shorter product lifecycles require manufacturing companies to quickly reconfigure their production and adapt to changing requirements. While most of the existing literature focuses on organisational structures or hardware requirements for reconfigurability, requirements and best practices for logical reconfigurations of automated production systems are only sparsely covered. In practice, logical system reconfigurations require adjustments to the software, which are often made manually by experts. With the ongoing automation and digitisation of manufacturing systems in the context of Industry 4.0, the need for automated software reconfigurations is increasing. However, heterogeneous and proprietary technologies in the field of industrial automation pose a hurdle for generally applicable approaches to logical reconfigurations in the industrial domain. Therefore, this paper reviews available technologies that can be used to solve the problem of automated software reconfigurations. For this purpose, an architecture and a procedure are proposed on how to use these technologies for the automatic adaptation and virtual commissioning of control software in industrial automation. To demonstrate the interoperability of the approach, collective cloud manufacturing is used as a composing platform. The presented approach further includes a domain-specific capability model for the specification of software artefacts to be generated, allowing jobs to be described and matched on the platform. The core element is a code generator for generating and orchestrating the control code for process execution, using the reconfigurable digital twin as a validator on the platform. The approach is evaluated and demonstrated in a real-world use case of a modular disassembly station.
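
    A simplified stand-in for the capability matching mentioned above is sketched below: jobs declare required capabilities and stations offer capability sets, with a plain subset check deciding the match. Capability names and data structures are illustrative assumptions only.

```python
# Hedged sketch: matching a disassembly job's required capabilities against the
# capabilities offered by modular stations; a simplified stand-in for the
# domain-specific capability model, with invented capability names.
from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    capabilities: set = field(default_factory=set)

@dataclass
class Job:
    name: str
    required: set = field(default_factory=set)

def matching_stations(job, stations):
    """Return stations whose capability set covers everything the job needs."""
    return [s for s in stations if job.required <= s.capabilities]

stations = [
    Station("robot-cell-1", {"unscrew", "grip-small"}),
    Station("hybrid-cell-2", {"unscrew", "grip-small", "pry-open", "inspect"}),
]
job = Job("battery-pack-disassembly", {"unscrew", "pry-open"})
print([s.name for s in matching_stations(job, stations)])   # -> ['hybrid-cell-2']
```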
    • 

    corecore