Abstract:
Accurate prediction of the remaining useful life (RUL) of lithium-ion batteries is critical for the efficient and reliable operation of automotive battery management systems, which play a central role in the new energy sector. However, RUL prediction accuracy is affected by factors such as capacity regeneration and model performance. This paper introduces a novel approach that combines advanced signal processing, health feature extraction, and machine learning optimization to improve the RUL prediction accuracy of lithium-ion batteries. The approach improves RUL prediction in the following respects. First, the incremental capacity (IC) curve, derived from the charge–discharge cycles, is extracted from battery performance data; because the IC curve is highly sensitive to battery degradation trends, it is a valuable feature for predicting RUL. Second, to mitigate noise and irregularities in the raw IC data, a Kalman filter is applied to denoise the curves, improving the reliability and clarity of the extracted features. Third, ten capacity-related health factors (HFs) are extracted, and their correlation with battery capacity is analyzed using the Spearman correlation method; eliminating weakly correlated HFs reduces the computational complexity of the prediction model while further refining its performance. Fourth, the extreme learning machine (ELM), known for its fast training speed and good generalization, is optimized to address the instability caused by random initialization of its weights and biases: the subtraction-average-based optimization (SABO) algorithm is used to optimize the ELM's weights and bias thresholds, which effectively reduces the risk of convergence to local optima and improves predictive performance and stability, yielding a novel RUL prediction method. Finally, the proposed model is validated on different training datasets published by NASA. Experimental results show that the approach outperforms alternatives such as long short-term memory (LSTM), a standard ELM, and a beluga whale optimization (BWO)-tuned ELM at different prediction starting points. The method achieves low mean absolute percentage error (MAPE) and root mean square error (RMSE) for RUL prediction on the B05, B06, and B07 datasets and produces the smallest errors among all compared models. Compared with the LSTM deep learning model, the proposed method reduces the MAPE of the RUL prediction error by 51.98%, the mean absolute error (MAE) by 52.03%, and the RMSE by 42.99%, significantly improving overall performance. These results demonstrate the effectiveness of the proposed method in improving the accuracy of RUL prediction.
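As a rough illustration of the first two steps (IC-curve extraction and Kalman denoising), the sketch below approximates dQ/dV from a charge segment and smooths it with a scalar Kalman filter. The function names, voltage grid step, and noise variances are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

def incremental_capacity(voltage, capacity, dv=0.005):
    """Approximate the IC curve dQ/dV on a uniform voltage grid with step dv.

    Assumes voltage is monotonically increasing over the charge segment.
    """
    v_grid = np.arange(voltage.min(), voltage.max(), dv)
    q_on_grid = np.interp(v_grid, voltage, capacity)   # resample Q(V)
    return v_grid, np.gradient(q_on_grid, dv)          # dQ/dV

def kalman_denoise(signal, process_var=1e-4, meas_var=1e-2):
    """Scalar constant-state Kalman filter used here as a simple 1-D smoother."""
    x, p = float(signal[0]), 1.0
    smoothed = np.empty_like(signal, dtype=float)
    for k, z in enumerate(signal):
        p = p + process_var              # predict: state modelled as a slow random walk
        gain = p / (p + meas_var)        # Kalman gain
        x = x + gain * (z - x)           # correct with the noisy IC sample z
        p = (1.0 - gain) * p
        smoothed[k] = x
    return smoothed
```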
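The Spearman-based selection of health factors in the third step can be sketched as follows; the 0.9 threshold and the array layout are assumptions, and the paper's actual selection criterion may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def select_health_factors(hf_matrix, capacity, names, threshold=0.9):
    """Keep HFs whose |Spearman rho| with capacity meets the threshold.

    hf_matrix: (n_cycles, n_hf) array of candidate health factors.
    capacity:  (n_cycles,) array of measured capacities.
    """
    kept = []
    for j, name in enumerate(names):
        rho, _ = spearmanr(hf_matrix[:, j], capacity)
        if abs(rho) >= threshold:
            kept.append((name, float(rho)))
    return sorted(kept, key=lambda t: -abs(t[1]))   # strongest correlations first
```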
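For the fourth step, a minimal sketch of an ELM whose input weights and biases are tuned by a population search is given below. The subtraction-average-style update is a simplified stand-in for SABO rather than the paper's exact algorithm, and the sigmoid activation, population size, and search ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_predict(w, b, x_train, y_train, x_query):
    """ELM: random hidden layer sigmoid(x @ w + b); output weights via pseudo-inverse."""
    h_train = 1.0 / (1.0 + np.exp(-(x_train @ w + b)))
    beta = np.linalg.pinv(h_train) @ y_train
    h_query = 1.0 / (1.0 + np.exp(-(x_query @ w + b)))
    return h_query @ beta

def tune_elm(x_train, y_train, x_val, y_val, n_hidden=20, pop_size=30, iters=100):
    """Population search over ELM input weights/biases, minimising validation RMSE."""
    n_features = x_train.shape[1]
    dim = n_features * n_hidden + n_hidden                 # flattened weights + biases
    population = rng.uniform(-1.0, 1.0, size=(pop_size, dim))

    def fitness(vec):
        w = vec[: n_features * n_hidden].reshape(n_features, n_hidden)
        b = vec[n_features * n_hidden:]
        pred = elm_predict(w, b, x_train, y_train, x_val)
        return float(np.sqrt(np.mean((y_val - pred) ** 2)))   # validation RMSE

    scores = np.array([fitness(p) for p in population])
    for _ in range(iters):
        for i in range(pop_size):
            # Subtraction-average-style move: step along the average signed
            # difference between candidate i and randomly weighted peers.
            v = rng.integers(1, 3, size=(pop_size, dim))       # random factors in {1, 2}
            step = (population[i] - v * population).mean(axis=0)
            candidate = population[i] + rng.random(dim) * step
            score = fitness(candidate)
            if score < scores[i]:                              # greedy acceptance
                population[i], scores[i] = candidate, score
    best = population[np.argmin(scores)]
    return best, float(scores.min())
```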
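For reference, the three error metrics quoted above can be computed as follows; these are the standard definitions, not code from the paper.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes y_true contains no zeros)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```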