
    Energy and Emissions Conscious Optimal Following for Automated Vehicles with Diesel Powertrains

    The emerging application of autonomous driving removes the driver from the control loop, which offers opportunities for safety, energy saving, and greenhouse gas emissions reduction by adjusting the speed trajectory. Technological advances in sensing and computing make it realistic for a vehicle to obtain preview information about its surrounding environment and to optimize its speed trajectory accordingly using predictive planning methods. Conventional speed following algorithms usually adopt an energy-centric perspective and improve fuel economy by reducing the power lost to braking and by operating the engine in its high-fuel-efficiency region. This can be a problem for diesel-powered vehicles, which rely on a catalytic aftertreatment system to reduce overall emissions: reduction efficiency drops significantly with a cold catalyst, which is exactly what a smoother speed profile tends to produce. In this work, control and constrained optimization techniques are deployed to understand the potential for, and to achieve, concurrent reduction of fuel consumption and emissions. Trade-offs between fuel consumption and emissions are shown using results from a single-objective optimal planning problem, calculated offline assuming full knowledge of the whole cycle. Results indicate a low aftertreatment temperature when energy-centric objectives are used, which motivates including a temperature performance metric in the optimization problem. An online optimal speed planner is then designed for concurrent treatment of energy and emissions, using limited but accurate preview information.
An objective function comprising an energy-conscious term and an emissions-conscious term is proposed based on 1) its effectiveness in concurrently reducing fuel and emissions, 2) its flexibility in balancing the emphasis on fuel saving versus emissions reduction according to performance requirements, and 3) its low computational complexity and ease of numerical treatment. Simulation results of the online optimal speed planner over multiple drive cycles are presented; for the vehicle and powertrain simulated in this work, concurrent reduction of fuel and emissions is demonstrated when flexible modification of the drive cycle is allowed. A hardware-in-the-loop experiment is also performed over the Federal Test Procedure (FTP) drive cycle and shows up to a 15% reduction in fuel consumption and a 70% reduction in NOx emissions when a flexible following distance is allowed. Finally, the stringent requirement of accurate preview information is relaxed by designing a robust reformulation of the energy and emissions conscious speed planner. Improved fuel economy and emissions are shown, with constraints satisfied even in the presence of perturbations in the preview information. A Gaussian mixture regression-based speed prediction is then applied to test the performance of the speed following strategy without assuming knowledge of the preview information. Performance degrades in simulation when the predicted velocity is used instead of an accurate preview, but the speed planner retains the capability to improve fuel and tailpipe emissions performance compared with a non-optimal controller.
PHD
Mechanical Engineering
University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/170004/1/huangchu_1.pd
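The weighted energy-and-emissions objective described in this abstract can be illustrated with a minimal sketch. The fuel and NOx models, the light-off temperature, and the weights below are hypothetical stand-ins chosen for illustration only, not the dissertation's calibrated diesel powertrain models:

```python
# Minimal sketch of an energy- and emissions-conscious planning objective.
# All models and constants here are hypothetical stand-ins.

def fuel_rate(v, a):
    """Hypothetical fuel rate [g/s]: grows with speed and positive acceleration."""
    return 0.5 + 0.02 * v + 0.8 * max(a, 0.0) ** 2

def nox_rate(v, a, catalyst_temp):
    """Hypothetical tailpipe NOx rate [mg/s]: engine-out NOx scaled by a
    catalyst conversion efficiency that collapses when the catalyst is cold."""
    engine_out = 1.0 + 0.05 * v + 1.5 * max(a, 0.0)
    efficiency = 0.95 if catalyst_temp >= 250.0 else 0.2  # assumed light-off at 250 C
    return engine_out * (1.0 - efficiency)

def planning_cost(profile, catalyst_temp, w_fuel=1.0, w_nox=1.0, dt=1.0):
    """Weighted sum over a speed profile [(v, a), ...]: w_fuel and w_nox shift
    the emphasis between fuel saving and emissions reduction."""
    return sum((w_fuel * fuel_rate(v, a) + w_nox * nox_rate(v, a, catalyst_temp)) * dt
               for v, a in profile)

# A smoother profile is cheaper on fuel, but with a cold catalyst its tailpipe
# NOx term grows, which is the trade-off the planner must balance.
smooth = [(12.0, 0.5), (14.0, 0.5), (16.0, 0.0)]
aggressive = [(10.0, 1.5), (15.0, 1.5), (20.0, 0.0)]
print(planning_cost(smooth, catalyst_temp=300.0) <
      planning_cost(aggressive, catalyst_temp=300.0))  # → True
```

Tuning `w_fuel` and `w_nox` is what gives the flexible balance between fuel saving and emissions reduction that the objective function is designed for.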

    Socially Responsible Machine Learning: On the Preservation of Individual Privacy and Fairness

    Machine learning (ML) techniques have seen significant advances over the last decade and are playing an increasingly critical role in people's lives. While their potential societal benefits are enormous, they can also inflict great harm if not developed or used with care. In this thesis, we focus on two critical ethical issues in ML systems, the violation of privacy and fairness, and explore mitigating approaches in various scenarios. On the privacy front, when ML systems are developed with private data from individuals, it is critical to prevent privacy violation. Differential privacy (DP), a widely used notion of privacy, ensures that no one, by observing the computational outcome, can infer a particular individual's data with high confidence. However, DP is typically achieved by randomizing algorithms (e.g., adding noise), which inevitably leads to a trade-off between individual privacy and outcome accuracy. This trade-off can be difficult to balance, especially in settings where the same or correlated data is repeatedly used or exposed during the computation. In the first part of the thesis, we illustrate two key ideas that can be used to balance an algorithm's privacy-accuracy trade-off: (1) reusing intermediate computational results to reduce information leakage; and (2) improving algorithmic robustness to accommodate more randomness. We introduce a number of randomized, privacy-preserving algorithms that leverage these ideas in contexts such as distributed optimization and sequential computation, and show that they can significantly improve the privacy-accuracy trade-off over existing solutions. On the fairness front, ML systems trained with real-world data can inherit biases and exhibit discrimination against already-disadvantaged or marginalized social groups. Recent works have proposed many fairness notions to measure and remedy such biases.
However, their effectiveness has mostly been studied in a static framework that does not account for the interactions between individuals and ML systems. Since individuals inevitably react to the algorithmic decisions they are subjected to, understanding the downstream impacts of ML decisions is critical to ensuring that these decisions are socially responsible. In the second part of the thesis, we present our research on evaluating the long-term impacts of (fair) ML decisions. Specifically, we establish a number of theoretically rigorous frameworks to model the interactions and feedback between ML systems and individuals, and conduct equilibrium analysis to evaluate the impact each has on the other. We illustrate how ML decisions and individual behavior evolve in such a system, and how imposing common fairness criteria intended to promote fairness may nevertheless lead to pernicious effects. Aided by this understanding, mitigation approaches are also discussed.
PHD
Electrical and Computer Engineering
University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169960/1/xueru_1.pd
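The privacy-accuracy trade-off described in the first part of this abstract can be illustrated with the standard Laplace mechanism, a textbook DP primitive; this is a generic sketch, not the dissertation's algorithms for distributed optimization or sequential computation:

```python
import random

def laplace_scale(sensitivity, epsilon):
    """Noise scale b = sensitivity / epsilon for epsilon-DP: a smaller epsilon
    (stronger privacy guarantee) forces more noise, hence lower accuracy."""
    return sensitivity / epsilon

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace(0, b) noise, sampled as the
    difference of two independent exponential variates."""
    b = laplace_scale(sensitivity, epsilon)
    return true_value + b * (rng.expovariate(1.0) - rng.expovariate(1.0))

# Empirical illustration of the trade-off: over many noisy releases of the
# same query, strong privacy (epsilon = 0.1) gives a much larger average
# absolute error than weak privacy (epsilon = 1.0).
rng = random.Random(0)
err_strong = sum(abs(laplace_mechanism(10.0, 1.0, 0.1, rng) - 10.0)
                 for _ in range(2000)) / 2000
err_weak = sum(abs(laplace_mechanism(10.0, 1.0, 1.0, rng) - 10.0)
               for _ in range(2000)) / 2000
print(err_strong > err_weak)  # → True
```

The thesis's first key idea, reusing intermediate computational results, reduces how often such noise must be injected over repeated computations, which directly improves this trade-off.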