Abstract:
Material data are prepared in batches and stages, and the data distribution varies across batches. However, the average accuracy of neural networks declines when they learn material data batch by batch, posing a great challenge to the application of artificial intelligence in the materials field. Therefore, an incremental learning framework based on parameter penalty and experience replay was applied to learn streaming data. The decline in average accuracy has two causes: sudden variations of the model parameters and an overly homogeneous sample feature space. By analyzing how the model parameters vary, a parameter penalty mechanism was established to restrain the parameters from fitting toward new data when the model learns it. The penalty strength is adjusted dynamically according to the speed of parameter change: the faster a parameter changes, the higher the penalty strength, and vice versa. To enhance sample diversity, an experience replay method was proposed that jointly trains on new data and on old data sampled from a cache pool. At the end of each incremental task, the incremental data are sampled and used to update the cache pool. Specifically, random sampling is adopted for joint training, whereas reservoir sampling is used to update the cache pool. The proposed methods (i.e., experience replay and parameter penalty) were then applied to a material absorption coefficient regression task and an image classification task, respectively. The experimental results indicate that experience replay was more effective than parameter penalty, but the best results were obtained when both methods were combined: the average accuracy on the two benchmarks increased by 45.93% and 2.62%, and the average forgetting rate decreased by 86.60% and 67.20%, respectively.
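The two mechanisms can be illustrated with a minimal sketch. The `reservoir_update` and `penalty_loss` functions below are hypothetical names, not from the paper; the exact penalty form is an assumption (here, a quadratic drift penalty whose strength scales with the observed change of each parameter), since the abstract only states that faster parameter change incurs a stronger penalty.

```python
import random

def reservoir_update(pool, stream, capacity, seen=0):
    """Reservoir sampling: maintain a uniform random sample of the
    incremental data stream in a fixed-size cache pool."""
    for item in stream:
        seen += 1
        if len(pool) < capacity:
            pool.append(item)
        else:
            # Each new item replaces a pool slot with probability capacity/seen.
            j = random.randrange(seen)
            if j < capacity:
                pool[j] = item
    return pool, seen

def penalty_loss(params, old_params, base_strength):
    """Hypothetical dynamic parameter penalty: penalize drift from the
    previous task's parameters, scaled by how fast each parameter changed."""
    loss = 0.0
    for p, p_old in zip(params, old_params):
        speed = abs(p - p_old)  # faster change -> stronger penalty
        loss += base_strength * speed * (p - p_old) ** 2
    return loss
```

For joint training, a mini-batch of old samples would then be drawn from the pool with plain random sampling, e.g. `random.sample(pool, k)`, and mixed with the new batch.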
A comparison with existing methods shows that our approach is more competitive. Additionally, the effect of key parameters on the average accuracy was analyzed for both methods. The results indicate that the average accuracy increases with the proportion of experience replay, and first increases and then decreases as the penalty factor grows. In general, our approach is not limited by data modality or learning task: it can perform incremental learning on tabular or image data and on regression or classification tasks. Moreover, owing to its flexible parameter settings, it can be adapted to different environments and tasks.