13 research outputs found

    To-Do Map: Location Focused Task Management Framework

    No full text

    Smart EV Charging With Context-Awareness: Enhancing Resource Utilization via Deep Reinforcement Learning

    No full text
    The widespread adoption of electric vehicles (EVs) has introduced new challenges for stakeholders ranging from grid operators to EV owners. A critical challenge is to develop an effective and economical strategy for managing EV charging while considering the diverse objectives of all involved parties. In this study, we propose a context-aware EV smart charging system that leverages deep reinforcement learning (DRL) to accommodate the unique requirements and goals of participants. Our DRL-based approach dynamically adapts to changing contextual factors such as time of day, location, and weather to optimize charging decisions in real time. By striking a balance between charging cost, grid load reduction, fleet operator preferences, and charging station energy efficiency, the system offers EV owners a seamless and cost-efficient charging experience. Through simulations, we evaluate the efficiency of our proposed Deep Q-Network (DQN) system by comparing it with three other DRL methods: Proximal Policy Optimization (PPO), Asynchronous Advantage Actor-Critic (A3C), and Deep Deterministic Policy Gradient (DDPG). Notably, the proposed DQN approach demonstrated superior computational performance compared to the others. Our results reveal that the proposed system achieves a remarkable improvement of approximately 18% in energy efficiency compared with traditional methods. Moreover, it demonstrates about a 12% increase in cost-effectiveness for EV owners, while reducing grid strain by 20% and curbing CO2 emissions by 10% through the use of natural energy sources. The system’s success lies in its ability to facilitate sequential decision-making, decipher intricate data patterns, and adapt to dynamic contexts. Consequently, the proposed system not only meets the efficiency and optimization requirements of fleet operators and charging station maintainers but also represents a promising stride toward sustainable and balanced EV charging management.
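
    The abstract gives no implementation details; the sketch below is only a minimal illustration of how a DQN controller for such a charging system might be structured, assuming a PyTorch Q-network over a hypothetical context vector (time of day, location index, weather code, grid load, battery state of charge) and a small discrete set of charging power levels. All names, dimensions, and values here are assumptions for illustration, not the authors' code.

    import random
    import torch
    import torch.nn as nn

    # Hypothetical context features: [hour/24, location id, weather code, grid load, state of charge]
    N_FEATURES = 5
    # Illustrative discrete actions: 0 = pause, 1 = slow charge, 2 = fast charge
    N_ACTIONS = 3

    class ChargingDQN(nn.Module):
        """Small Q-network mapping a context vector to one Q-value per charging action."""
        def __init__(self, n_features: int = N_FEATURES, n_actions: int = N_ACTIONS):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def select_action(model: ChargingDQN, context: torch.Tensor, epsilon: float) -> int:
        """Epsilon-greedy selection over the charging power levels."""
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(model(context.unsqueeze(0)).argmax(dim=1).item())

    # Example: pick a charging action for one made-up context observation.
    model = ChargingDQN()
    context = torch.tensor([0.75, 3.0, 1.0, 0.6, 0.4])  # illustrative values only
    action = select_action(model, context, epsilon=0.1)

    In a full training loop, the reward would combine the objectives named in the abstract (charging cost, grid load, fleet preferences, station energy efficiency), but that weighting is not specified in the abstract and is therefore omitted here.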

    iMobilAkou: The Role of Machine Listening to Detect Vehicle using Sound Acoustics

    No full text
    Machine learning works very well for image recognition, but it can also be used to recognize audio patterns. Machine listening identifies the audio patterns of different sources such as car engines, human speech, and natural sounds. Environmental sound classification plays an important role in encouraging citizens to travel smartly within a city without creating unbearable noise, and it also helps the city council maintain and predict sustainable sound levels at rush hour within the city. The aim of this early-stage research is to present a methodology that reads labeled audio files, extracts features from them, and feeds those features to a sequential model. The model classifies these vehicle audio files based on their input features and then further categorizes them as light-weight, medium-weight, heavy-weight, rail-bound, or two-wheeled vehicles, using machine listening and deep learning in the field of sound acoustics. It also classifies unlabelled test files with the pre-trained model. This research provides a base model for vehicle classification, discusses its advantages and disadvantages, and outlines possible future extensions.

    Context-Aware Optimal Charging Distribution using Deep Reinforcement Learning

    No full text
    The expansion of charging infrastructure and the optimal utilization of existing infrastructure are key influencing factors for the future growth of electric mobility. The main objective of this paper is to present a novel methodology that identifies the necessary stakeholders, processes their contextual information, and meets their optimality criteria using a constraint satisfaction strategy. A deep reinforcement learning algorithm is used to optimally distribute electric vehicle charging resources in a smart-mobility ecosystem. The algorithm performs context-aware, constrained optimization such that the on-demand requests of each stakeholder, e.g., the vehicle owner as end user, the grid operator, the fleet operator, and the charging-station service operator, are fulfilled. In the proposed methodology, the system learns from the surrounding environment until the optimal charging resource allocation strategy within the limits of the system constraints is reached. We consider the concept of optimality from the perspective of the multiple stakeholders who participate in the smart-mobility ecosystem. A simple use case is presented in detail. Finally, we discuss the potential to develop this concept further to enable more complex digital interactions between the actors of a smart-mobility ecosystem.
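
    The abstract stays at a conceptual level. As a minimal sketch of the constrained, multi-stakeholder allocation idea, the code below scores candidate (vehicle, station) assignments with a weighted compromise between stakeholder objectives and rejects assignments that violate a capacity constraint. Note that this is a simplified greedy stand-in, not the paper's deep reinforcement learning algorithm, and the weights, fields, and example data are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class Station:
        name: str
        free_slots: int          # constraint: remaining charging slots
        price_per_kwh: float     # end-user objective: charging cost
        grid_load: float         # grid-operator objective: 0 (idle) .. 1 (saturated)

    @dataclass
    class Request:
        vehicle_id: str
        distance_km: dict        # distance from this vehicle to each station (hypothetical)

    def score(st: Station, req: Request, w_cost=0.4, w_grid=0.4, w_dist=0.2) -> float:
        """Higher is better: a weighted compromise between stakeholder objectives."""
        return -(w_cost * st.price_per_kwh
                 + w_grid * st.grid_load
                 + w_dist * req.distance_km[st.name])

    def allocate(requests, stations):
        """Greedily assign each request to the best feasible station (capacity constraint)."""
        plan = {}
        for req in requests:
            feasible = [s for s in stations if s.free_slots > 0]
            if not feasible:
                plan[req.vehicle_id] = None      # no station satisfies the constraint
                continue
            best = max(feasible, key=lambda s: score(s, req))
            best.free_slots -= 1
            plan[req.vehicle_id] = best.name
        return plan

    # Illustrative data only.
    stations = [Station("A", 1, 0.30, 0.8), Station("B", 2, 0.35, 0.3)]
    requests = [Request("ev1", {"A": 2.0, "B": 5.0}), Request("ev2", {"A": 1.0, "B": 3.0})]
    print(allocate(requests, stations))

    In the paper's setting, a learned policy would replace the fixed weights and greedy loop, adapting the allocation to contextual information from each stakeholder over time.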