
Transfer Learning

Transfer learning transfers knowledge from one domain to another, aiming to improve model performance when only a small amount of labeled data is available in the target domain [1]. The source and target domains can be, and usually are, different. However, transfer learning can also reduce performance on the target task; this effect is called negative transfer.

Mathematically, it can be defined as follows. There is a source domain $\mathcal{D}^s = \{X^s, P^s(X)\}$, a feature space together with a marginal distribution, and a corresponding task $\mathcal{T}^s = \{Y^s, f^s\}$, a label space together with a predictive function that approximates the labels. Given the source domain $\mathcal{D}^s$ with task $\mathcal{T}^s$ and a target domain $\mathcal{D}^t$ with task $\mathcal{T}^t$, transfer learning aims to learn a better target predictor $\hat{f}^t$ by exploiting the knowledge in the source domain and task. There can be multiple source and target domains, but most work focuses on a single source and a single target.
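As a concrete (hypothetical) illustration of these definitions, the snippet below constructs a source domain and a target domain that share the same feature space but have different marginal distributions, $P^s(X) \neq P^t(X)$; all numbers are made up for illustration:

```python
import numpy as np

# Illustrative sketch: two domains in the same 2-D feature space
# (a homogeneous setting), but with shifted marginal distributions.
rng = np.random.default_rng(0)

# Source domain: plentiful data, features centered at 0.
X_s = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
# Target domain: few samples, same feature space, distribution shifted to 2.
X_t = rng.normal(loc=2.0, scale=1.0, size=(50, 2))

# The feature spaces match (same dimensionality)...
assert X_s.shape[1] == X_t.shape[1]
# ...but the sample means reveal that the marginal distributions differ.
print("source mean:", X_s.mean(axis=0), "target mean:", X_t.mean(axis=0))
```

In the heterogeneous setting, by contrast, even the feature spaces (e.g., the dimensionalities) of the two domains would differ.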

Transfer learning can be categorized along different axes. For example, based on the feature space it splits into two settings:

  1. Homogeneous transfer learning, where domains are in the same feature space.
  2. Heterogeneous transfer learning, where domains are in different feature spaces.

Alternatively, based on what is transferred, it splits into four approaches: instance-based, feature-based, parameter-based, and relation-based.


[1] F. Zhuang et al., “A Comprehensive Survey on Transfer Learning,” Proc. IEEE, vol. 109, no. 1, pp. 43–76, Jan. 2021, doi: 10.1109/JPROC.2020.3004555.