70,061 research outputs found

    Testing for knowledge : maximising information obtained from fire tests by using machine learning techniques

    A machine learning (ML) algorithm was applied to predict the onset of flashover in 1:5 scale Room Corner Test experiments with sandwich panels. To this end, a penalized logistic regression model was chosen to identify the relevant variables, thereby providing a tool that can be used to make predictions on unseen samples. The method indicates that a deeper understanding of the contributing factors leading to flashover can be achieved. Furthermore, it allows a more nuanced ranking than that currently offered by the commonly used classification methods for reaction-to-fire tests. The proposed methodology shows substantial value as guidance for future large- and intermediate-scale testing. In particular, it is foreseen that the method will be extremely useful for assessing and understanding the behaviour of innovative materials and design solutions.
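
    A minimal sketch of the approach the abstract describes, assuming a binary flashover label and a feature matrix of test measurements; the file names, feature layout, and hyperparameters below are illustrative assumptions, not details from the paper. The L1 penalty is what performs the variable selection the abstract refers to.

        # Sketch only: data files and hyperparameters are illustrative
        # assumptions, not taken from the paper.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler

        X = np.load("rct_features.npy")    # hypothetical: one row per test
        y = np.load("rct_flashover.npy")   # hypothetical: 1 = flashover, 0 = none

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        scaler = StandardScaler().fit(X_train)

        # The L1 penalty drives irrelevant coefficients to zero, which is what
        # makes the fitted model usable for identifying contributing variables.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
        clf.fit(scaler.transform(X_train), y_train)

        print("selected variables:", np.flatnonzero(clf.coef_))
        print("hold-out accuracy:", clf.score(scaler.transform(X_test), y_test))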

    Analytical Models of the Performance of C-V2X Mode 4 Vehicular Communications

    The C-V2X or LTE-V standard has been designed to support V2X (Vehicle-to-Everything) communications. The standard is an evolution of LTE, published by the 3GPP in Release 14. It introduces C-V2X or LTE-V Mode 4, which is specifically designed for V2V communications using the PC5 sidelink interface without any cellular infrastructure support. In Mode 4, vehicles autonomously select and manage their radio resources. Mode 4 is highly relevant since V2V safety applications cannot depend on the availability of infrastructure-based cellular coverage. This paper presents the first analytical models of the communication performance of C-V2X or LTE-V Mode 4. In particular, it presents analytical models for the average PDR (Packet Delivery Ratio) as a function of the distance between transmitter and receiver, and for the four different types of transmission errors that can be encountered in C-V2X Mode 4. The models are validated for a wide range of transmission parameters and traffic densities by comparing their results to those obtained with a C-V2X Mode 4 simulator implemented over Veins.
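
    The abstract does not reproduce the model itself, but its structure suggests a PDR expressed as the product of the complements of four error probabilities. The sketch below assumes that decomposition; the error names (half-duplex, sensing, propagation, collision) and the placeholder curves are assumptions for illustration only.

        # Sketch of the overall structure only: a PDR built from four assumed
        # independent error types. The placeholder probability curves are
        # illustrative, not the paper's analytical expressions.
        def pdr(d, p_hd, p_sen, p_pro, p_col):
            """Packet Delivery Ratio at Tx-Rx distance d (metres).

            Each p_* argument is a callable mapping distance to the probability
            of one error type (here assumed: half-duplex, sensing, propagation,
            collision), with the four types treated as independent."""
            return (1 - p_hd(d)) * (1 - p_sen(d)) * (1 - p_pro(d)) * (1 - p_col(d))

        # Illustrative placeholder error curves:
        print(pdr(200.0,
                  p_hd=lambda d: 0.01,                  # constant half-duplex loss
                  p_sen=lambda d: min(1.0, d / 2000.0), # grows with distance
                  p_pro=lambda d: min(1.0, d / 1500.0),
                  p_col=lambda d: 0.05))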

    A Learning-Based Framework for Two-Dimensional Vehicle Maneuver Prediction over V2V Networks

    Situational awareness in vehicular networks could be substantially improved by reliable trajectory prediction methods. More precise situational awareness, in turn, yields notably better performance of critical safety applications, such as Forward Collision Warning (FCW), as well as comfort applications like Cooperative Adaptive Cruise Control (CACC). The vehicle trajectory prediction problem therefore needs to be investigated in depth in order to arrive at an end-to-end framework with the precision required by safety application controllers. The problem has been tackled in the literature using different methods; however, machine learning, a promising and emerging field with remarkable potential for time-series prediction, has not been explored enough for this purpose. In this paper, a two-layer neural-network-based system is developed which predicts the future values of vehicle parameters, such as velocity, acceleration, and yaw rate, in the first layer and then predicts the two-dimensional (i.e., longitudinal and lateral) trajectory points based on the first layer's outputs. The performance of the proposed framework has been evaluated on realistic cut-in scenarios from the Safety Pilot Model Deployment (SPMD) dataset, and the results show a noticeable improvement in prediction accuracy compared with the kinematics model that dominates in the automotive industry. Both ideal and non-ideal communication circumstances have been investigated in our evaluation. For the non-ideal case, an estimation step is included in the framework before the parameter prediction block to handle packet drops or sensor failures and to reconstruct the time series of vehicle parameters at the desired frequency.
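
    A minimal sketch of the two-layer structure described above, using scikit-learn MLPs as stand-ins for the paper's networks; the SPMD file names and array layouts are invented for illustration.

        # Sketch: two stacked regressors mirroring the two-layer design above.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical arrays: flattened past windows of [velocity,
        # acceleration, yaw rate], their future values, and future (x, y).
        past_params   = np.load("spmd_past_params.npy")
        future_params = np.load("spmd_future_params.npy")
        future_xy     = np.load("spmd_future_xy.npy")

        # Layer 1: predict future vehicle parameters from their history.
        param_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
        param_net.fit(past_params, future_params)

        # Layer 2: map the predicted parameters to longitudinal/lateral points.
        traj_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
        traj_net.fit(param_net.predict(past_params), future_xy)

        predicted_xy = traj_net.predict(param_net.predict(past_params))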

    Opportunity to Test non-Newtonian Gravity Using Interferometric Sensors with Dynamic Gravity Field Generators

    We present an experimental opportunity to measure possible violations of Newton's 1/r^2 law in the 0.1-10 meter range using Dynamic gravity Field Generators (DFGs), taking advantage of the exceptional sensitivity of modern interferometric techniques. Placing a DFG in proximity to one of the interferometer's suspended test masses generates a change in the local gravitational field that can be measured at a high signal-to-noise ratio. The use of multiple DFGs in a null-experiment configuration makes it possible to test composition-independent non-Newtonian gravity significantly beyond the present limits. Advanced and third-generation gravitational-wave detectors represent the state of the art in interferometric distance measurement today, so we illustrate the method through their sensitivity to emphasize the possible scientific reach. Due to the technical details of gravitational-wave detectors, however, DFGs will likely require dedicated, custom-configured interferometry. The sensitivity measure we derive is nevertheless a solid baseline, indicating that it is feasible to probe orders of magnitude into the pristine parameter space, well beyond the present experimental limits, cutting significantly into the theoretical parameter space. (Comment: 9 pages, 6 figures; Physical Review D, vol. 84, Issue 8, id. 08200)
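
    The abstract does not state which parametrization it constrains; searches for 1/r^2 violations in this range are conventionally expressed as a Yukawa correction of strength alpha and range lambda, and the sketch below assumes that standard form as context rather than the paper's own expression.

        # Standard Yukawa parametrization of non-Newtonian gravity, given here
        # as context; the abstract itself does not spell out the formula.
        import math

        G = 6.674e-11  # m^3 kg^-1 s^-2

        def yukawa_potential(r, m1, m2, alpha, lam):
            """Newtonian potential with a Yukawa correction of strength alpha
            and range lam (metres)."""
            return -G * m1 * m2 / r * (1.0 + alpha * math.exp(-r / lam))

        def yukawa_force(r, m1, m2, alpha, lam):
            """Magnitude of the radial force, -dV/dr of the potential above."""
            newton = G * m1 * m2 / r**2
            return newton * (1.0 + alpha * (1.0 + r / lam) * math.exp(-r / lam))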

    Inflation in asymptotically safe f(R) theory

    We discuss the existence of inflationary solutions in a class of renormalization-group-improved polynomial f(R) theories, which have been studied recently in the context of the asymptotic safety scenario for quantum gravity. These theories seem to possess a nontrivial ultraviolet fixed point, at which the dimensionful couplings scale according to their canonical dimensionality. Assuming that the cutoff is proportional to the Hubble parameter, we obtain modified Friedmann equations that admit both power-law and exponential solutions. We establish that for polynomials of sufficiently high order the solutions are reliable, in the sense that considering still higher-order polynomials is very unlikely to change them. (Comment: Presented at the 14th Conference on Recent Developments in Gravity: NEB 14, Ioannina, Greece, 8-11 Jun 201)
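
    A short sketch of the setup the abstract describes, written in the notation conventional for such studies: a polynomial ansatz whose couplings run according to their canonical dimension, with the cutoff identified with the Hubble rate. The symbols (fixed-point values \tilde g_n^*, proportionality constant \xi) are the customary choices, assumed here.

        % Sketch of the RG-improved polynomial ansatz implied by the abstract;
        % \tilde g_n^* are dimensionless fixed-point values, \xi is assumed O(1).
        f(R) = \sum_{n=0}^{N} g_n(k)\, R^n ,
        \qquad
        g_n(k) = \tilde g_n^{*}\, k^{4-2n} ,
        \qquad
        k = \xi H .

    Substituting this running into the field equations gives the modified Friedmann equations mentioned above, whose solutions include both power-law, a(t) \propto t^p, and exponential, a(t) \propto e^{Ht}, expansion.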

    Vulnerability assessments of pesticide leaching to groundwater

    Pesticides may have adverse environmental effects if they are transported to groundwater and surface waters. The vulnerability of water resources to contamination by pesticides must therefore be evaluated. Different stakeholders, with different objectives and requirements, are interested in such vulnerability assessments, and various assessment methods have been developed. For example, the vulnerability of groundwater to pesticide leaching may be evaluated by indices and overlay-based methods, by statistical analyses of monitoring data, or by using process-based models of pesticide fate. No single tool or methodology is likely to be appropriate for all end-users and stakeholders, since suitability depends on the available data and the specific goals of the assessment. The overall purpose of this thesis was to develop tools, based on different process-based models of pesticide leaching, that may be used in groundwater vulnerability assessments. Four tools were developed for end-users with varying goals and interests: (i) a tool based on the attenuation factor (sketched below), implemented in a GIS, that generates vulnerability maps for the islands of Hawaii (U.S.A.); (ii) a simulation tool based on the MACRO model, developed to help decision-makers at local authorities assess potential risks of pesticide leaching to groundwater following normal usage in drinking-water abstraction districts; (iii) linked models of the soil root zone and groundwater used to investigate leaching of the pesticide mecoprop to shallow and deep groundwater in fractured till; and (iv) a meta-model of the pesticide fate model MACRO developed for 'worst-case' groundwater vulnerability assessments in southern Sweden. The strengths and weaknesses of the different approaches are discussed.
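
    Tool (i) is based on the attenuation factor; the sketch below gives the widely used form of that index, with the conventional symbols and units assumed here rather than taken from the thesis.

        # Sketch of the standard attenuation-factor (AF) index; parameter
        # names and the formulation are the conventional ones, assumed here.
        import math

        def attenuation_factor(d, q, theta_fc, rho_b, f_oc, k_oc, t_half):
            """Fraction of applied pesticide predicted to reach depth d.

            d        : distance to groundwater (m)
            q        : net recharge rate (m/day)
            theta_fc : volumetric water content at field capacity (-)
            rho_b    : soil bulk density (g/cm^3)
            f_oc     : organic-carbon fraction of the soil (-)
            k_oc     : organic-carbon sorption coefficient (cm^3/g)
            t_half   : pesticide half-life in soil (days)
            """
            rf = 1.0 + rho_b * f_oc * k_oc / theta_fc   # retardation factor
            travel_time = d * theta_fc * rf / q         # days to reach depth d
            return math.exp(-math.log(2.0) * travel_time / t_half)

        # AF close to 1 flags a vulnerable combination of soil, climate, and
        # pesticide; AF close to 0 indicates strong attenuation en route.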

    Modeling the Effect of Traffic Calming on Local Animal Population Persistence

    A steady growth in traffic volumes is expected in industrialized countries with dense human populations, especially on minor roads. As a consequence, the fragmentation of wildlife populations will increase dramatically. In human-dominated landscapes, minor roads typically occur at high densities, and animals encounter them frequently. Traffic calming is a new approach to mitigating these negative impacts by reducing traffic volumes and speeds on minor roads at a regional scale. This leads to a distinction between low-volume roads inside the traffic-calmed area and roads with bundled traffic around it. Within the traffic-calmed area, volumes and speeds can be decreased substantially, which is predicted to reduce the disturbance and mortality risk for animals. Thus far, data on the effects of traffic calming on wildlife population persistence remain scarce. Using metapopulation theory, we derived a model to estimate thresholds in the size of traffic-calmed areas and in traffic volumes that may allow persistent populations. Our model suggests that traffic calming greatly increases the persistence of roe deer in a landscape with a dense road network. Our modeling results show trade-offs between the traffic volume on roads within the traffic-calmed area and both the amount of habitat available for this species in the traffic-calmed area and the size of that area. These results suggest ways to mitigate the fragmentation of wildlife habitat by road networks and their expected traffic volumes.
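
    The abstract does not give the model's equations; purely as a hypothetical illustration of how metapopulation theory produces such thresholds, the sketch below uses the classic Levins formulation with traffic mortality added to the patch extinction rate.

        # Hypothetical illustration only, not the paper's model: Levins
        # dynamics dp/dt = c*p*(1-p) - e*p, with equilibrium occupancy
        # p* = 1 - e/c, so the population persists only while c > e.
        def persists(colonisation_rate, base_extinction_rate, traffic_mortality):
            """Assume traffic mortality adds to the patch extinction rate."""
            e = base_extinction_rate + traffic_mortality
            return colonisation_rate > e

        # The threshold in traffic terms: occupancy collapses once
        # traffic_mortality exceeds colonisation_rate - base_extinction_rate.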

    Fully automatic worst-case execution time analysis for MATLAB/Simulink models

    In today's technical world (e.g., in the automotive industry), more and more purely mechanical components are being replaced by electro-mechanical ones, and the size and complexity of embedded systems steadily increases. To cope with this development, convenient software engineering tools are being developed that allow a more functionality-oriented development of applications. The paper demonstrates how worst-case execution time (WCET) analysis is integrated into one such high-level application design and simulation tool, MATLAB/Simulink, thus providing a higher-level interface to WCET analysis. The MATLAB/Simulink extensions compute and display worst-case timing data for all blocks of a MATLAB/Simulink simulation, which gives the developer of an application valuable feedback about the correct timing of the application being developed. The solution facilitates a fully automated WCET analysis; in contrast to existing approaches, the programmer does not have to provide path information.
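
    The abstract does not describe the analysis internals; as a sketch of the core computation behind any static WCET bound, the snippet below takes per-block worst-case times over an acyclic block graph and returns the longest-path total. The block names and times are invented, not output of the tool.

        # Sketch: longest-path WCET over a DAG of blocks; not the paper's tool.
        def wcet(block_time, successors, entry):
            """block_time: block -> worst-case time; successors: block -> list."""
            memo = {}

            def longest(b):
                if b not in memo:
                    nxt = successors.get(b, [])
                    memo[b] = block_time[b] + (max(map(longest, nxt)) if nxt else 0)
                return memo[b]

            return longest(entry)

        # Invented example: entry branches to a fast and a slow block.
        print(wcet({"entry": 2, "fast": 3, "slow": 9, "exit": 1},
                   {"entry": ["fast", "slow"], "fast": ["exit"], "slow": ["exit"]},
                   "entry"))  # -> 12 (entry -> slow -> exit)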