Abstract:
Hepatocellular carcinoma (HCC) is a primary malignant tumor and an urgent clinical problem, particularly in China, one of the countries with the highest prevalence of HCC. Accurate histological grading of the lesion plays a vital role in choosing treatment and managing HCC patients. However, pathological examination, the current gold standard, has drawbacks such as invasiveness and large sampling error. Providing noninvasive and accurate lesion grading by combining imaging technology with artificial intelligence is therefore an important direction for intelligent medicine. Drawing on radiologists' experience in reading clinical images, this paper proposes a self-attention-guided model for discriminating histological differentiation, which combines multi-modality fusion with an attention-weight calculation scheme for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) sequences of hepatocellular carcinoma. The model exploits the spatiotemporal information contained in the enhancement sequences and learns the importance of each sequence, and of each slice within a sequence, for the classification task, effectively using the feature information in both the temporal and spatial dimensions to improve classification performance. Comprehensive experiments were conducted on a clinical dataset collected from a top-three (Grade-A tertiary) hospital in China, with labels annotated by professional radiologists. The experimental results show that the proposed self-attention-guided model achieves higher classification performance than several benchmark and mainstream models.
The results show that the proposed self-attention model achieves acceptable quantitative measurement of HCC histologic grading from MRI sequences. In the WHO histological grading task, the model reaches a classification accuracy of 80%, a sensitivity of 82%, and a precision of 82%.