
    High-performance electric vehicle duty cycles and their impact on lithium ion battery performance and degradation

    High performance (HP) battery electric vehicle (BEV) and racing applications represent significantly different use cases from those associated with conventional consumer vehicles and road driving. The differences between HP-BEV use cases and the duty cycles embodied within established lithium ion battery cell (LIB) test standards will lead to unrepresentative estimates of battery life and performance within HP-BEV applications. Furthermore, the behaviour of LIBs in these applications is not well understood due to a lack of suitable testing cycles and experimental data. The research presented within this thesis addresses this knowledge gap through the definition and implementation of a new framework for LIB performance and degradation testing. The new framework encompasses the definition of a methodology through which a suitable duty cycle may be derived, and the subsequent definition of the experimental procedures required to conduct LIB performance and degradation testing. To underpin the development of a suitable duty cycle, a method is presented that simulates race circuits, an HP-BEV and a driver model to generate a database defining a range of HP duty cycles deemed representative of the real-world use of an HP-BEV. Subsequently, two methods to design an HP duty cycle are evaluated and validated. One of the methods studied (HP Random Pulse Cycle) extends an established driving-cycle construction technique based on the derivation of micro-trips. The second method (HP Multisine Cycle) utilises a time-frequency domain-swapping algorithm to develop a duty cycle with a target amplitude spectrum and histogram. The design criteria for both construction techniques are carefully selected based on their potential impact on battery degradation. 
The new HP duty cycles provide a more representative duty cycle compared to a traditional battery test standard and facilitate experimental work, which will more accurately describe the performance and degradation rate of cells within HP-BEV use. Utilising the newly developed HP Multisine Cycle, an experimental procedure for LIB performance and degradation testing is presented. Six lithium ion cells are characterised, followed by a performance and degradation study. The performance study investigates the thermal behaviour of the cells when subjected to HP-BEV scenarios and a standard testing cycle (IECC). Results show an increase in excess of 200% in surface temperature gradients for the HP use case compared to the standard testing cycle. The degradation study compares the degradation progression between the HP-BEV environment and conventional testing standards. Two test groups of cells are subjected to an experimental evaluation using the HP Multisine Cycle and the IECC. After 200 cycles, both test groups display, counter to expectations, an increased energy capacity, increased pure Ohmic resistance, lower charge transfer resistance and an extended OCV operating window. The changes are more pronounced for the cells subjected to the HP Multisine Cycle. It is hypothesised that the ’improved’ changes in cell characteristics are caused by cracking of the electrode material caused by high electrical current pulses. With continued cycling, the cells cycled with the HP Multisine Cycle are expected to show degradation at an increased rate. The results from the experimental studies provide new insights into the thermal management requirements and evolution of cell characteristics during use within HP-BEVs, and highlight the limitations in the understanding of the complex cell degradation in this area. The new framework addresses the lack of suitable testing cycles and experimental investigations for the HP-BEV environment. 
The methodologies presented are not limited to the automotive sector but may be applied in any area where existing testing standards are unrepresentative of the typical usage profile and LIB degradation and performance are a concern.
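The time-frequency domain-swapping construction behind the HP Multisine Cycle is described only at a high level above. A minimal sketch of one such loop, in the spirit of the IAAFT surrogate-data algorithm, is shown below; the function name, iteration count and the use of a sample set to encode the target histogram are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def domain_swap_cycle(target_samples, target_spectrum, n_iter=100, seed=0):
    """Iteratively impose a target amplitude spectrum (frequency domain)
    and a target value histogram (time domain) on a signal, IAAFT-style."""
    rng = np.random.default_rng(seed)
    sorted_vals = np.sort(target_samples)   # encodes the target histogram
    x = rng.permutation(sorted_vals)        # random start, correct histogram
    for _ in range(n_iter):
        # Frequency domain: keep the phases, impose the target amplitudes
        phases = np.angle(np.fft.rfft(x))
        x = np.fft.irfft(target_spectrum * np.exp(1j * phases), n=len(x))
        # Time domain: rank-remap the result onto the target histogram
        ranks = np.argsort(np.argsort(x))
        x = sorted_vals[ranks]
    return x
```

Because the loop ends with the rank remap, the output reproduces the target histogram exactly while approximating the target amplitude spectrum.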

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It underpins the computation of a probabilistic collision risk based on reachable sets, in which different sources of uncertainty are taken into account.
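A reachable-set collision risk of the kind described can be sketched with a simple Monte Carlo estimate. The Gaussian pedestrian uncertainty and the single-disc reachable set below are simplifying assumptions for illustration, not the system's actual model:

```python
import numpy as np

def collision_probability(ego_pred, ped_mean, ped_cov, radius,
                          n_samples=10000, rng=None):
    """Monte Carlo collision risk: sample pedestrian positions from a
    Gaussian uncertainty and count the fraction that fall inside a disc
    of `radius` around the ego vehicle's predicted position."""
    rng = rng or np.random.default_rng(0)
    samples = rng.multivariate_normal(ped_mean, ped_cov, size=n_samples)
    dist = np.linalg.norm(samples - np.asarray(ego_pred), axis=1)
    return float((dist < radius).mean())
```

In a full system the disc would be replaced by the vehicle's reachable set over the prediction horizon, and the sampling would cover trajectory uncertainty as well as position uncertainty.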

    Simulation-based modelling of the unpaved road deterioration and maintenance program in heavy construction and mining sectors

    Saving cost is a significant factor in the successful operation of heavy civil engineering projects, such as highway and dam construction, and mining projects; savings can be achieved through the reduction of operating costs in large projects. In particular, earthmoving operations form a large portion of these projects, and some of the main components of the earthmoving operating cost are fuel, parts, and tire costs, which directly depend on the quality of the unpaved access roads in the field. This study aims to develop a simulation-based model which dynamically estimates the Roughness Defect Score (RDS) of the road (a measure of road condition) as traffic increases and provides an optimal maintenance management program based on the affected cost factors, using the Simphony.Net modelling environment. Simphony.Net is a useful tool for the simulation because it provides an overview of the system's performance in cyclic and long-term operations. This model uses a stochastic approach to calculate the road resistance and the frequency of maintenance by considering the variations in nondeterministic variables, such as speed and hauled loads. Also, a Markov model-based algorithm is incorporated in the system to provide more realistic modelling of road deterioration over time. Markov modelling involves discrete-event transitions, which model road deterioration from one state to another over time. A comparison of these three modelling methods (deterministic, stochastic Monte Carlo, and stochastic Markov chain modelling) is presented in this study. Constant values were used for the deterministic modelling, probability distributions for the Monte Carlo simulation, and transition matrices for the Markov chain modelling. Based on the results, the stochastic modelling was able to provide reliable vehicle operating cost (VOC), optimum frequency of maintenance, and road deterioration estimates for ongoing or future cases.
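The Markov chain treatment of road deterioration can be illustrated with a small state-transition simulation. The four RDS states, the transition probabilities and the maintenance trigger below are hypothetical placeholders, not values from the study:

```python
import numpy as np

# Hypothetical 4-state road condition model (0 = good ... 3 = very rough).
# Each row gives the probabilities of moving to each state per traffic period.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # worst state persists until maintained
])

def simulate_road(P, n_periods, maint_state=3, rng=None):
    """Walk the chain one traffic period at a time; grading resets the
    road to state 0 whenever it reaches `maint_state`.
    Returns (state history, number of maintenance events)."""
    rng = rng or np.random.default_rng(0)
    state, states, n_maint = 0, [], 0
    for _ in range(n_periods):
        state = rng.choice(len(P), p=P[state])
        if state >= maint_state:
            n_maint += 1
            state = 0               # maintenance restores the road
        states.append(int(state))
    return states, n_maint
```

Averaging `n_maint` and the time spent in each state over many runs gives the maintenance frequency and condition profile that feed the vehicle operating cost calculation.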

    Future Transportation

    Greenhouse gas (GHG) emissions associated with transportation activities account for approximately 20 percent of all carbon dioxide (CO2) emissions globally, making the transportation sector a major contributor to current global warming. This book focuses on the latest advances in technologies aimed at the sustainable future transportation of people and goods. A reduction in fossil fuel burning and technological transitions are the main approaches toward sustainable future transportation. Particular attention is given to automobile technological transitions, bike sharing systems, supply chain digitalization, and transport performance monitoring and optimization, among others.

    Increasing the robustness of autonomous systems to hardware degradation using machine learning

    Autonomous systems perform predetermined tasks (missions) with minimum supervision. In most applications, the state of the world changes with time. Sensors are employed to measure part or all of the world’s state. However, sensors often fail during operation, thereby feeding decision-making with wrong information about the world. Moreover, hardware degradation may alter the dynamic behaviour, and subsequently the capabilities, of an autonomous system, rendering the original mission infeasible. This thesis applies machine learning to yield powerful and robust tools that can facilitate autonomy in modern systems. Incremental kernel regression is used for dynamic modelling. Algorithms of this sort are easy to train and are highly adaptive. Adaptivity allows for model adjustments whenever the environment of operation changes. Bayesian reasoning provides a rigorous framework for addressing uncertainty. Moreover, using Bayesian Networks, complex inference questions regarding hardware degradation can be answered. Specifically, adaptive modelling is combined with Bayesian reasoning to yield recursive estimation algorithms that are robust to sensor failures. Two solutions are presented by extending existing recursive estimation algorithms from the robotics literature. The algorithms are deployed on an underwater vehicle and their performance is assessed in real-world experiments. A comparison against standard filters is also provided. Next, the previous algorithms are extended to consider sensor and actuator failures jointly. An algorithm that can detect thruster failures in an Autonomous Underwater Vehicle has been developed. Moreover, the algorithm adapts the dynamic model online to compensate for the detected fault. The performance of this algorithm was also tested in a real-world application. One step further than hardware fault detection, prognostics predict how much longer a particular hardware component can operate normally. 
Ubiquitous sensors in modern systems render data-driven prognostics a viable solution. However, training is based on skewed datasets: datasets in which the samples from the faulty region of operation are much fewer than those from the healthy region of operation. This thesis presents a prognostic algorithm that tackles the problem of imbalanced (skewed) datasets.
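One common way to tackle such skewed training data, shown here purely as an illustration (the thesis's own algorithm is not reproduced), is to oversample the minority faulty class before training a classifier:

```python
import numpy as np

def oversample_minority(X, y, rng=None):
    """Randomly resample minority classes with replacement until every
    class has as many samples as the largest (healthy) class."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        # draw extra indices (with replacement) to top the class up to n_max
        extra = rng.choice(c_idx, size=n_max - len(c_idx), replace=True)
        idx.append(np.concatenate([c_idx, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]
```

Class weighting or synthetic-sample generation are alternative remedies; which one the thesis adopts is not stated in the abstract.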

    Probabilistic modeling for single-photon lidar

    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are two-fold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon counting systems. We first address the problem of high levels of ambient light, which causes traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal and background light detections. Spatial correlations both within and between depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. 
Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1. We next approach the problem of coarse temporal resolution for photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively-dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution and simple order statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that can form precise depth estimates despite relaxed temporal quantization constraints. Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach for dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored. We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain and demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-times speed up of data acquisition. 
The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
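The subtractive-dithering idea can be demonstrated in a few lines. The uniform dither distribution and the mean-based estimator below are simplifying assumptions for illustration; the thesis itself develops order-statistics-based estimators and non-Gaussian pulse models:

```python
import numpy as np

def quantize(t, bin_width):
    """The detector reports only a time bin; return the bin center."""
    return (np.floor(t / bin_width) + 0.5) * bin_width

def dithered_depth_estimate(true_time, bin_width, n_shots, rng=None):
    """Shift the quantizer by a known per-shot synchronization delay d,
    quantize, then subtract d again; averaging the corrected detection
    times recovers sub-bin precision that plain quantization cannot."""
    rng = rng or np.random.default_rng(0)
    d = rng.uniform(0.0, bin_width, size=n_shots)   # known sync delays
    detections = quantize(true_time + d, bin_width) - d
    return float(detections.mean())
```

Without dither, a detection time of 0.3 bins is always reported as the 0.5 bin center; with subtractive dither the residual quantization error becomes zero-mean, so averaging over many shots converges to the true value.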

    Precise Localization in Urban Areas

    Nowadays, stand-alone Global Navigation Satellite System (GNSS) positioning accuracy is not sufficient for a growing number of land users. Sub-meter or even centimeter accuracy is becoming more and more crucial in many applications. Especially when navigating rovers in the urban environment, final positioning accuracy can degrade owing to the dramatic lack, and contamination, of GNSS measurements. To achieve more accurate positioning, GNSS carrier phase measurements appear mandatory. These measurements have a tracking error roughly a hundred times smaller than that of the usual code pseudorange measurements. However, they are also less robust and include a so-called integer ambiguity that prevents them from being used directly for positioning. While carrier phase measurements are widely used in applications located in open environments, this thesis focuses on using them in the much more challenging urban environment. To do so, the Real-Time Kinematic (RTK) methodology is used, which takes advantage of the spatially correlated nature of most code and carrier phase measurement errors. Besides, the thesis also takes advantage of a dual GNSS constellation, GPS and GLONASS, to strengthen the position solution and the reliable use of carrier phase measurements. Finally, to compensate for the disadvantages of GNSS in urban areas, a low-cost MEMS sensor is also integrated into the final solution. Regarding the use of carrier phase measurements, a modified version of Partial Integer Ambiguity Resolution (Partial-IAR) is proposed to convert carrier phase measurements as reliably as possible into absolute pseudoranges. Moreover, carrier phase Cycle Slips (CSs) are quite frequent in urban areas, creating discontinuities in the measured carrier phases; a new detection and repair mechanism for CSs is therefore proposed to continuously benefit from the high precision of carrier phases. 
Finally, real data collected around Toulouse are used to assess the performance of the whole methodology.
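A cycle slip manifests as a discontinuity in an otherwise smooth carrier phase series, so a minimal detector (far simpler than the detection-and-repair mechanism proposed in the thesis) can threshold the second difference of the phase. The threshold value is an illustrative assumption:

```python
import numpy as np

def detect_cycle_slips(phase, threshold=0.5):
    """Flag epochs where the second difference of the carrier phase
    (in cycles) exceeds `threshold`; for smoothly varying geometry the
    second difference stays near zero, so a jump indicates a slip."""
    d2 = np.diff(phase, n=2)
    # +2 maps second-difference indices back to the epochs they flag
    return np.flatnonzero(np.abs(d2) > threshold) + 2
```

Note that a single slip perturbs two consecutive second differences, so two adjacent epochs are flagged per slip; a repair step would then estimate the integer jump and remove it.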

    Optimising outcomes for potentially resectable pancreatic cancer through personalised predictive medicine: the application of complexity theory to probabilistic statistical modeling

    Survival outcomes for pancreatic cancer remain poor. Surgical resection with adjuvant therapy is the only potentially curative treatment, but for many people surgery is of limited benefit. Neoadjuvant therapy has emerged as an alternative treatment pathway; however, the evidence base surrounding the treatment of potentially resectable pancreatic cancer is highly heterogeneous and fraught with uncertainty and controversy. This research seeks to engage with conjunctive theorising by avoiding simplification and abstraction, drawing on different kinds of data from multiple sources to move research towards a theory that can build a rich picture of pancreatic cancer management pathways as a complex system. The overall aim is to move research towards personalised realistic medicine by using personalised predictive modeling to facilitate better decision making and the optimisation of outcomes. This research is theory driven and empirically focused from a complexity perspective. Combining operational and healthcare research methodology, and drawing on influences from the complementary paradigms of critical realism and systems theory, then enhancing their impact by using Cilliers’ complexity theory ‘lean ontology’, an open-world ontology is held and both epistemic reality and judgmental relativity are accepted. The use of imperfect data within statistical simulation models is explored to expand our capabilities for handling emergence and uncertainty and to find other ways of relating to complexity within the field of pancreatic cancer research. Markov and discrete-event simulation modelling uncovered new insights and added a further dimension to the current debate by demonstrating that superior treatment pathway selection depended on individual patient and tumour factors. 
A Bayesian Belief Network was developed that modelled the dynamic nature of this complex system to make personalised prognostic predictions across competing treatment pathways throughout the patient journey, facilitating better shared clinical decision making with an accuracy exceeding existing predictive models.
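A Markov cohort model of the kind used for pathway comparison can be sketched as follows. The three health states and the monthly transition probabilities are invented placeholders for illustration, not results from this research:

```python
import numpy as np

def cohort_survival(P, n_cycles, start=0):
    """Propagate a cohort distribution through a Markov chain and return
    the fraction still alive (i.e. not in the final absorbing death
    state) after each cycle."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    alive = []
    for _ in range(n_cycles):
        dist = dist @ P             # one cycle of transitions
        alive.append(1.0 - dist[-1])
    return np.array(alive)

# Hypothetical 3-state model: alive without recurrence, recurrence, dead.
P_pathway = np.array([
    [0.90, 0.08, 0.02],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])
```

Running `cohort_survival` with a transition matrix per treatment pathway and comparing the resulting survival curves is the basic mechanism by which such models can show that the better pathway depends on patient- and tumour-specific transition probabilities.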

    Situational awareness-based energy management for unmanned electric surveillance platforms

    In the present day, fossil fuel availability, cost and security, and the pollutant emissions resulting from its use, have driven industry to look for alternative ways of powering vehicles. The aim of this research is to synthesize/design and develop a framework of novel control architectures which enable complex powered-vehicle subsystems to perform better with reduced exogenous information. This research looks into the area of energy management by proposing an intelligence-based system which not only follows the beaten path of where energy comes from and how much of it to use, but goes further by taking into consideration the world around it. Operating without GPS, it relies on data such as usage, average consumption and system loads, and even surrounding vehicles are considered when making the difficult decisions of where to direct the energy, how much of it, and even when to cut systems off to the benefit of others. All this is achieved in an integrated way by working within the limitations of non-fossil-fuelled energy sources such as fuel cells, ultracapacitors and battery banks, using driver-provided information or by crafting an artificial usage profile from historically learnt data. By using an organic computing philosophy based on artificial intelligence, this alternative approach to energy supply systems presents a different perspective: it begins by accepting the fact that once hardware is set, energy can be optimized only so much, and takes a step further by answering the question of how best to manage energy when refuelling might not be an option. The result is a situationally aware system concept that is portable to any type of electrically powered platform, be it ground, aerial or marine, since all operate within three-dimensional space. 
The system's capabilities are then verified in a virtual reality environment which can be tailored to meet research needs, including allowing for different altitudes and for environmental temperature and humidity profiles. This VR system is coupled with a chassis dynamometer to allow for testing of real physical prototype unmanned ground vehicles, where the intelligent system will benefit by learning from real platform data. The Thesis contributions and objectives are summarised next: The control system proposed includes an awareness of the surroundings within which the vehicle is operating, without relying on GPS position information. The system proposed is portable and could be used to control other systems. The test platform developed within the Thesis is flexible and could be used for other systems. The control system for the fuel cell system described within the work includes an allowance for altitude and humidity; these factors appear to be significant for such systems. The structure of the control system and its hierarchy is novel. The system can be applied to a UAV, controlling a ‘vehicle’ in three dimensions, and yet also to a ground vehicle, where roll and pitch are largely a function of the ground over which it travels (so the UGV uses only a subset of the control functionality). The mission awareness of the control structure appears to be at the heart of the potential contribution to knowledge; this also includes the ability to create an estimated, artificial mission profile should one not be input by the operators. This learnt/adaptive input could be expanded on to highlight this aspect.
