Data used to train a model is assumed to follow the same distribution as the data encountered when the model is deployed. In some applications, however, the data distribution changes over time. This condition, known as concept drift, can degrade model performance because the model is trained and evaluated on different distributions. To address this problem in the audio scene classification task, we previously proposed the Combine-Merge Gaussian Mixture Model (CMGMM) algorithm, which uses Mel-frequency cepstral coefficients (MFCCs) as the feature vector. In this paper, we propose to use Pretrained Audio Neural Networks (PANNs) within the CMGMM algorithm to model the audio events that occur in a scene. The motivation is to exploit, instead of low-level acoustic features, the high-level features produced by a model trained on a large amount of audio data. Experimental results show that the proposed method using PANNs improves model accuracy. For active adaptation methods under abrupt and gradual concept drift, we recommend using PANNs, which yield a significant accuracy improvement and optimal adaptation results.