Improving the efficiency of plug-in-hybrid electric vehicles
Dr. Neil Canter, Contributing Editor | TLT Tech Beat May 2016
A new system learns from its experiences and rewards itself for better driving decisions.
KEY CONCEPTS
• A new reinforcement-learning energy management system improves the efficiency of plug-in hybrid electric vehicles by facilitating better driving decisions.
• A reward-by-action factor allows the model to learn, maximizing the reward for optimizing efficiency.
• Fuel consumption was reduced by nearly 12% when a plug-in hybrid electric vehicle was used on a 20-mile commute on a Southern California highway.
IN THE PUSH TO IMPROVE THE FUEL ECONOMY of automobiles, much emphasis is placed on reducing vehicle weight and developing new, lower-viscosity lubricants. Another factor to consider is how to optimize the performance of a specific vehicle as it travels a specific route from Point A to Point B.
A previous TLT article considered a second factor: how to give the driver better information when traveling through highly congested urban areas (1). The approach used is the Green Light Optimal Speed Advisory (GLOSA) system, which helps drivers move through the series of intersections they encounter in an urban environment. That article described the development of a GLOSA system that relies on a mobile phone network established by drivers placing their phones on their windshields. A 20% reduction in fuel usage was achieved with a specific automobile using the GLOSA system while driving through the streets of Cambridge, Mass., USA.
The fuel economy initiative has led automotive OEMs to produce vehicles that can utilize electricity to boost efficiency through the commercialization of plug-in hybrid electric vehicles (PHEVs, see Figure 1). These vehicles combine energy stored in a battery pack with a conventional internal combustion gasoline or diesel engine.
Figure 1. Fuel efficiency of plug-in hybrid electric vehicles is improved via a reinforcement-learning energy management system model. (Figure courtesy of the University of California, Riverside.)
Even though PHEVs are designed to display better fuel economy than conventional vehicles, this does not mean their performance cannot be further improved. Xuewei Qi, doctoral candidate and research assistant in the department of electrical and computer engineering at the University of California, Riverside, says, “The objective of our research has been to improve the fuel efficiency and reduce the emissions generated by PHEVs through the use of sustainable and intelligent vehicle technologies. PHEVs are much more practical to study in this manner than electric vehicles because they are not limited by driving range.”
The use of the battery pack and the engine in a PHEV is controlled by an energy management system (EMS). Qi says, “Two types of EMS can be used to control the operation of a PHEV. One approach, known as binary mode control, relies on using the battery pack until it is exhausted and then switching to the engine. In this manner, the EMS does not take any outside factors, such as the driving distance, traffic conditions and the time of the trip, into consideration. The second approach takes into account driving conditions and the dynamics of the vehicle in order to improve efficiency.”
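As an illustration of the first approach, binary mode control reduces to a simple threshold rule. The following is a minimal Python sketch; the SOC_FLOOR value and the function name are hypothetical, not drawn from the UC Riverside system.

```python
# Minimal sketch of binary mode control (hypothetical threshold).
# The battery is used exclusively until its state of charge (SOC)
# falls to a floor value; after that, the engine supplies all power.

SOC_FLOOR = 0.25  # assumed charge-sustaining threshold


def binary_mode_split(power_demand_kw: float, soc: float) -> tuple[float, float]:
    """Return (battery_kw, engine_kw) for the current time step."""
    if soc > SOC_FLOOR:
        return power_demand_kw, 0.0  # charge-depleting: battery supplies everything
    return 0.0, power_demand_kw      # charge-sustaining: engine supplies everything
```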
The second approach has more potential but is more difficult to achieve. Qi says, “Previous EMSs used in PHEVs that account for driving conditions take an optimization-based approach that seeks to optimize preselected cost functions, assuming that the driving conditions of the entire trip are known before the trip starts. In practice, however, it is very difficult to know all the detailed driving conditions before the trip.”
The first approach is easy to implement but far from optimal. The second approach is theoretically sound and optimal but hard to apply in real-world situations. A new strategy, situated between the rule-based and optimization-based approaches, has now been developed that provides a more realistic way to maximize efficiency.
REINFORCEMENT LEARNING
Qi and his colleagues, including Matthew Barth, professor of electrical and computer engineering and director of the Center for Environmental Research and Technology in the University of California, Riverside's Bourns College of Engineering, developed a real-time EMS based on the concept of reinforcement learning. Qi says, “Reinforcement learning is a type of learning that mimics the human learning process. Our EMS modeled in this manner has an understanding of the individual's historical driving behavior and combines this element with real-time information obtained as the vehicle is being driven.”
The result is that the reinforcement-learning EMS model uses a learning agent that continuously receives input from the environment the PHEV is operating in and then selects an action to feed back into that environment. Based on feedback from the environment, the EMS model learns, receiving rewards for positive actions.
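In outline, that agent-environment loop resembles tabular Q-learning. The sketch below is illustrative only, under stated assumptions: the learning constants, the discrete action set and the Q-table representation are choices made here for clarity, not the published model.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed learning rate, discount, exploration
ACTIONS = range(11)                     # assumed discrete set of control actions

Q = defaultdict(float)  # Q[(state, action)] -> learned value of that action in that state


def choose_action(state):
    """Epsilon-greedy selection: mostly exploit the best-known action, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def learn_step(state, action, reward, next_state):
    """One-step Q-learning update driven by the environment's feedback."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```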
An additional benefit is that the reinforcement-learning model also takes into consideration opportunities the driver may have during the route to recharge the PHEV. The model can suggest to the driver when and where to recharge the vehicle battery in an optimal way so that the best fuel efficiency for the entire trip can be achieved. Qi adds, “We configured control options that can take place second by second for this model. This enables us to implement a reward-by-action factor in this EMS model to enable the system to learn from its actions and to maximize the reward for optimizing efficiency.”
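A per-second reward of this kind can be pictured as a penalty on the energy consumed during that second. The sketch below is a hypothetical reward shape; the weights and the function name are assumptions for illustration, not the authors' published reward function.

```python
FUEL_WEIGHT = 1.0      # assumed weight on fuel burned in the step
ELECTRIC_WEIGHT = 0.3  # assumed weight on battery energy drawn in the step


def step_reward(fuel_grams: float, battery_kj: float) -> float:
    """Per-second reward: the less energy consumed, the larger the reward."""
    return -(FUEL_WEIGHT * fuel_grams + ELECTRIC_WEIGHT * battery_kj)
```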
The power train used in PHEVs operates in series mode, parallel mode or a power-split mode that combines the first two. In applying the reinforcement-learning EMS model, the researchers chose the power-split strategy, which centers on deciding the ratio for splitting power between the battery pack and the internal combustion engine.
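One way to picture the power-split decision is to treat each discrete action as a split ratio. The mapping below is a hypothetical illustration of that idea, not the researchers' actual control law.

```python
def apply_split(action: int, power_demand_kw: float,
                n_levels: int = 11) -> tuple[float, float]:
    """Map a discrete action index to a battery/engine power split.

    action == 0 sends all demanded power to the engine;
    action == n_levels - 1 sends all of it to the battery.
    """
    ratio = action / (n_levels - 1)  # fraction supplied by the battery
    battery_kw = ratio * power_demand_kw
    return battery_kw, power_demand_kw - battery_kw
```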
A case study was developed by the researchers to compare the performance of a PHEV using the reinforcement learning EMS model versus the same PHEV that utilizes binary mode control on a 20-mile commute on a highway in Southern California. The researchers found that the vehicle using the reinforcement learning EMS model achieved greater efficiency by reducing fuel consumption by nearly 12%.
Qi says, “We are now looking to apply our model to multiple-vehicle scenarios. In fact, it is our objective to evaluate a fleet of vehicles using the multi-agent reinforcement-learning model. Performance will be maximized by information sharing between different vehicles through wireless communications. We also seek to apply this model to different types of vehicles other than PHEVs.”
Additional information on this research can be found in a recent article (2) or by contacting Qi at xqi001@ucr.edu.
REFERENCES
1. Canter, N. (2011), “Better driving decisions = improved fuel economy,” Tribology and Lubrication Technology, 67 (11), p. 10-11.
2. Qi, X., Wu, G., Boriboonsomsin, K., Barth, M. and Gonder, J. (2016), “Data-Driven Reinforcement Learning-Based Real-Time Energy Management System for Plug-In Hybrid Electric Vehicles,” Journal of the Transportation Research Board, 2572, p. 1-8.
Neil Canter heads his own consulting company, Chemical Solutions, in Willow Grove, Pa. Ideas for Tech Beat can be submitted to him at neilcanter@comcast.net.