16 research outputs found

    Dynamics of the Size and Orientation Distribution of Microcracks and Evolution of Macroscopic Damage Parameters

    We deal with damage of brittle materials caused by the growth of microcracks. In our model the cracks are penny-shaped; they can only grow, not heal. For a single crack a Rice–Griffith growth law is assumed: a crack grows only if the tension applied normal to its surface exceeds a critical value. Our aim is to investigate the effect of crack growth on macroscopic constitutive quantities. A possible approach that accounts for such an internal structure within continuum mechanics is the mesoscopic theory: a distribution of crack lengths and crack orientations within the continuum element is introduced, and macroscopic quantities are calculated as averages with this distribution function. A macroscopic measure of the progressing damage, i.e., a damage parameter, is the average crack length. For this scalar damage parameter we derive an evolution equation. Due to the unilateral growth law for the single crack, it turns out that the form of this differential equation depends explicitly on the initial crack length distribution. In order to treat biaxial loading, it is necessary to introduce a tensorial damage parameter; we define a second-order damage tensor in terms of the crack length and orientation distribution function.
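    As a minimal sketch of the averaging step (the notation f, l, n is assumed here, not taken from the paper), the scalar and second-order tensor damage parameters could be written as averages of crack length and orientation over the distribution function:

```latex
% Sketch only: f(l, n, x, t) is the assumed distribution of crack lengths l
% and orientations n (unit vectors on the sphere S^2) in the continuum
% element at position x and time t.
\ell(\mathbf{x},t) = \int_{0}^{\infty}\!\int_{S^{2}} l \, f(l,\mathbf{n},\mathbf{x},t)\, \mathrm{d}^{2}n\, \mathrm{d}l ,
\qquad
D_{ij}(\mathbf{x},t) = \int_{0}^{\infty}\!\int_{S^{2}} l \, n_{i} n_{j}\, f(l,\mathbf{n},\mathbf{x},t)\, \mathrm{d}^{2}n\, \mathrm{d}l .
```

    Here the factor n_i n_j retains the orientation information that the scalar average discards, which is what makes a tensorial parameter suitable for biaxial loading.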

    Infrastructure-aided Automated Driving in Highly Dynamic Urban Environments

    Automated driving without driver involvement requires that the automated driving system (ADS) is able to detect relevant environmental factors at any time and, on this basis, to predict how the situation will evolve within the planning horizon. In unclear traffic areas, however, comprehensive detection from the first-person perspective cannot be guaranteed, which may cause safety issues and defensive planning behavior of the automated vehicle, and thus reduce traffic flow. A system is presented that generates collision-free paths based on single-object predictions and a cyclic trajectory planning process, incorporating a traffic infrastructure enhanced with sensors, processing, and communication units. Within this work, the usefulness of infrastructure extensions for supporting connected and automated vehicles (CAVs) is investigated in a complex intersection scenario with the help of the above-mentioned methods.
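    A minimal sketch of the fuse-predict-replan cycle described above, assuming constant-velocity single-object predictions and a toy speed planner; all names and numbers here are illustrative assumptions, not the authors' system:

```python
from dataclasses import dataclass

DT, HORIZON = 0.1, 5.0  # assumed planning step and planning horizon [s]

@dataclass
class Track:
    x: float   # position [m]
    y: float
    vx: float  # velocity [m/s]
    vy: float

def predict(track: Track) -> list[tuple[float, float, float]]:
    """Single-object prediction: constant-velocity extrapolation over the horizon."""
    return [(k * DT, track.x + track.vx * k * DT, track.y + track.vy * k * DT)
            for k in range(int(HORIZON / DT))]

def fuse(onboard: dict[int, Track], infra: dict[int, Track]) -> dict[int, Track]:
    """Infrastructure detections fill occluded areas; on-board tracks win on overlap."""
    return {**infra, **onboard}

def plan_speed(predictions, clearance=2.0) -> float:
    """Toy planner: pick the highest ego speed along the x-axis (lane y = 0)
    that keeps at least `clearance` metres to every predicted object position."""
    for v in (15.0, 10.0, 5.0, 0.0):  # candidate speeds, highest first
        if all((t * v - px) ** 2 + py ** 2 >= clearance ** 2
               for pred in predictions for t, px, py in pred):
            return v
    return 0.0

# One planning cycle at a partially occluded intersection (re-run every DT seconds):
onboard = {1: Track(30.0, 4.0, -5.0, 0.0)}   # oncoming vehicle, adjacent lane
infra   = {2: Track(30.0, -8.0, 0.0, 4.0)}   # crossing object, occluded to the ego vehicle
preds = [predict(t) for t in fuse(onboard, infra).values()]
print("planned ego speed [m/s]:", plan_speed(preds))  # 10.0; would be 15.0 without the V2X track
```

    The point of the sketch is the data flow: the infrastructure-supplied track forces the planner to a lower speed that first-person perception alone would not have chosen.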

    From monolithic to component-based performance evaluation of software architectures. A series of experiments analysing accuracy and effort

    Background: Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically.

    Objective: Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users.

    Methods: We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance.

    Results: For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced overestimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component.

    Limitations: The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process.

    Conclusions: Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
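    To make concrete what such a design-time prediction looks like (a minimal analytic sketch, not the actual machinery of SPE or PCM, which build much richer models): treating a single component as an M/M/1 queue already lets an architect compare design alternatives from service demands alone, before any code exists.

```python
def mm1_response_time(arrival_rate: float, service_time: float) -> float:
    """Mean response time of an M/M/1 queue: R = S / (1 - rho), rho = lambda * S.
    A toy stand-in for the analytic/simulation models solved by SPE- or PCM-style tools."""
    rho = arrival_rate * service_time  # component utilization
    if rho >= 1.0:
        raise ValueError("saturated: utilization >= 1")
    return service_time / (1.0 - rho)

# Hypothetical comparison of two design alternatives at 40 requests/s:
for name, s in [("alternative A, 20 ms service demand", 0.020),
                ("alternative B, 15 ms service demand", 0.015)]:
    print(f"{name}: predicted response time {mm1_response_time(40.0, s) * 1000:.1f} ms")
```

    At this load, a 5 ms difference in service demand turns into a 62.5 ms difference in predicted response time (100.0 ms vs. 37.5 ms), which is the kind of insight these methods deliver before late life-cycle fixes become necessary.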

    Kriegerinnen in den Leges? (Women Warriors in the Leges?)
