A collection of extra topics as a sequel to the GPR series.
Updating scheme
Consider a scenario where a GP model is trained using a large sample data set ($N$ points). Recall from the first and second articles that the training process determines the hyperparameters and generates a number of coefficient matrices for the efficient computation of the predictive mean and variance. Now suppose a few new samples ($M$ points, $M \ll N$) are to be appended to the data set. The question is: does one simply retrain the GP model?
Not necessarily. The new samples can reduce the prediction error and the overall uncertainty but may have limited effect on the hyperparameters. Therefore, with a few new samples, one can update the GP model by keeping the old hyperparameters and recomputing the coefficient matrices, including $L$, $F$, $Q$ and $R$.
Strictly speaking, the addition of new samples would require recomputing everything, even though the hyperparameters remain the same: the mean and standard deviation of the samples change, and so do the correlation matrix $K$ and the subsequent matrix decompositions. However, since $M \ll N$, one can assume that the mean and standard deviation of the augmented data set are the same as those of the old one. As a result, the portion of $K$ associated with the old data set remains unchanged, and the new coefficient matrices can be computed via partial matrix decompositions, which saves a considerable amount of time: the cost drops from $O(N^3)$ to $O(N^2)$.
The new covariance matrix is
$$K = \begin{bmatrix} K_{ss} & K_{sn} \\ K_{ns} & K_{nn} \end{bmatrix}$$
where $K_{ss}$ is the block associated with the old data set, whose Cholesky factor $L_s$ is known, and $K_{ns} = K_{sn}^T$ and $K_{nn}$ are due to the new samples. The Cholesky factor $L$ of $K$ is obtained via the following partial Cholesky decomposition procedure,
1. Linear solve: $L_{sn} = L_s^{-1} K_{sn}$.
2. Cholesky decomposition of the Schur complement: $K_{nn} - L_{sn}^T L_{sn} = L_n L_n^T$.
3. Assemble the Cholesky factor,
$$L = \begin{bmatrix} L_s & O \\ L_{sn}^T & L_n \end{bmatrix}$$
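The three steps above can be sketched with NumPy/SciPy as follows; the function and argument names (`update_cholesky`, `L_s`, `K_sn`, `K_nn`) simply mirror the symbols used here and are not part of any released package.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def update_cholesky(L_s, K_sn, K_nn):
    """Extend the lower Cholesky factor of K_ss to that of the full K.

    L_s  : (N, N) lower Cholesky factor of the old block K_ss
    K_sn : (N, M) cross-covariance between old and new samples
    K_nn : (M, M) covariance block of the new samples
    """
    # Step 1: linear solve L_sn = L_s^{-1} K_sn (forward substitution)
    L_sn = solve_triangular(L_s, K_sn, lower=True)
    # Step 2: Cholesky factor of the Schur complement K_nn - L_sn^T L_sn
    L_n = cholesky(K_nn - L_sn.T @ L_sn, lower=True)
    # Step 3: assemble the new (N+M, N+M) lower-triangular factor
    N, M = L_sn.shape
    L = np.zeros((N + M, N + M))
    L[:N, :N] = L_s
    L[N:, :N] = L_sn.T
    L[N:, N:] = L_n
    return L, L_sn, L_n
```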
Next, the new $F$ matrix is
$$F = L^{-1} H = \begin{bmatrix} L_s & O \\ L_{sn}^T & L_n \end{bmatrix}^{-1} \begin{bmatrix} H_s \\ H_n \end{bmatrix} = \begin{bmatrix} F_s \\ L_n^{-1}\left(H_n - L_{sn}^T F_s\right) \end{bmatrix}$$
where block matrix inversion is used and $F_s = L_s^{-1} H_s$ is known from the old model.
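A minimal sketch of this step, reusing `L_sn` and `L_n` from the partial Cholesky update above (again, the function and argument names are only illustrative):

```python
import numpy as np
from scipy.linalg import solve_triangular

def update_F(F_s, H_n, L_sn, L_n):
    """Append the rows of F = L^{-1} H contributed by the new samples.

    F_s  : (N, p) old block, F_s = L_s^{-1} H_s
    H_n  : (M, p) mean/regression basis evaluated at the new samples
    L_sn : (N, M) from the partial Cholesky step above
    L_n  : (M, M) lower Cholesky factor of the Schur complement
    """
    F_n = solve_triangular(L_n, H_n - L_sn.T @ F_s, lower=True)
    return np.vstack([F_s, F_n])
```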
Finally, since $F_s = Q_s R_s$ is known, one can utilize a QR update algorithm to obtain the QR decomposition of $F$. Such an algorithm is implemented in standard scientific computing packages, e.g. scipy.linalg.qr_insert.
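As a small self-contained illustration, the snippet below updates the QR factors when rows are appended to $F$; the random matrices merely stand in for the old $F$ block and the rows contributed by the new samples, and the full (rather than economic) QR factors are used here.

```python
import numpy as np
from scipy.linalg import qr, qr_insert

rng = np.random.default_rng(0)
N, M, p = 50, 5, 3

F_s = rng.standard_normal((N, p))  # stand-in for the old F block
F_n = rng.standard_normal((M, p))  # stand-in for the rows from the new samples

# Full QR of the old block (the old model would have stored Q_s, R_s)
Q_s, R_s = qr(F_s)

# Update the factors when the new rows are appended at the bottom (index N)
Q, R = qr_insert(Q_s, R_s, F_n, N, which='row')

# Sanity check against the re-assembled matrix
assert np.allclose(Q @ R, np.vstack([F_s, F_n]))
```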
With the new coefficient matrices $L$, $F$, $Q$ and $R$, the GP model is updated with the new sample data set.
Gradients
In some scenarios where a GP model is involved, the gradients of the mean and the variance w.r.t. the input are required. One example is surrogate-based optimization, where the GP model serves as the surrogate and a gradient-based algorithm is chosen to find the optima. In such cases it is necessary to compute the gradient analytically (or via automatic differentiation) instead of using finite differences, as the latter could destabilize the gradient-descent iterations near “singularity” points at which the GP model is ill-defined.
Note that, in the trivial treatment of the multiple-output case, the process variances $\sigma_f^2$ of the outputs are determined independently. Therefore, it is sufficient to consider the single-output case. From the first article of the GPR series, the mean and variance at a single point are, respectively,
$$\begin{aligned} m(x) &= \bar{g}^T k_u(x) + \bar{b}^T h_u(x) \\ \sigma^2(x) &= \sigma_f^2 - \left\| L^{-1} k_u(x) \right\|^2 + \left\| R^{-T}\left[ h_u(x) - F^T\left[ L^{-1} k_u(x) \right] \right] \right\|^2 \equiv \sigma_f^2 - e_1^T e_1 + e_2^T e_2 \end{aligned}$$
where $k_u = K_{su}$ and $h_u = H_u^T$, and
$$\begin{aligned} \bar{b} &= R^{-1}\left[ Q^T (L^{-1} y_s) \right] \\ \bar{g} &= L^{-T}\left[ L^{-1}(y_s - H\bar{b}) \right] \end{aligned}$$
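As a sketch of how these formulas translate into code, the snippet below evaluates the predictive mean and variance from the stored factors via two triangular solves; the `model` attributes and the callables `k_u`/`h_u` are assumptions about how the trained model is organized, not a fixed interface.

```python
from scipy.linalg import solve_triangular

def predict(x, model):
    """Predictive mean and variance at a single point x.

    `model` is assumed to hold the trained quantities L, F, R, g_bar,
    b_bar, sigma_f2, and the callables k_u(x) and h_u(x).
    """
    k_u = model.k_u(x)   # (N,) correlation between x and the samples
    h_u = model.h_u(x)   # (p,) mean/regression basis evaluated at x

    mean = model.g_bar @ k_u + model.b_bar @ h_u

    e1 = solve_triangular(model.L, k_u, lower=True)           # L^{-1} k_u
    e2 = solve_triangular(model.R, h_u - model.F.T @ e1,
                          trans='T', lower=False)             # R^{-T}(h_u - F^T e1)
    var = model.sigma_f2 - e1 @ e1 + e2 @ e2
    return mean, var
```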
The gradient of the mean is computed in a straightforward manner,
$$\frac{\partial m}{\partial x} = \bar{g}^T \frac{\partial k_u}{\partial x} + \bar{b}^T \frac{\partial h_u}{\partial x}$$
where $\partial k_u/\partial x$ and $\partial h_u/\partial x$ follow from the correlation (kernel) function and the mean (regression) functions, respectively.
The computation of the gradient of the variance is a little more involved,
$$\frac{\partial \sigma^2}{\partial x} = -2\, e_1^T L^{-1} \frac{\partial k_u}{\partial x} + 2\, e_2^T R^{-T}\left( \frac{\partial h_u}{\partial x} - F^T L^{-1} \frac{\partial k_u}{\partial x} \right) = -2\left[ L^{-T}(F r + e_1) \right]^T \frac{\partial k_u}{\partial x} + 2\, r^T \frac{\partial h_u}{\partial x}$$
where $r = R^{-1} e_2$. The computation is organized so as to minimize the number of linear solves with the triangular matrices.
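Putting the two gradients together, here is a hedged sketch in the same spirit as the prediction snippet above; the Jacobian callables `dk_u` and `dh_u` (shapes (N, d) and (p, d) for a d-dimensional input) are assumed to be supplied by the correlation and mean functions.

```python
from scipy.linalg import solve_triangular

def predict_gradients(x, model):
    """Gradients of the predictive mean and variance w.r.t. the input x.

    `model` holds L, F, R, g_bar, b_bar and the callables k_u, h_u plus
    their Jacobians dk_u, dh_u.
    """
    k_u, dk_u = model.k_u(x), model.dk_u(x)
    h_u, dh_u = model.h_u(x), model.dh_u(x)

    # Gradient of the mean
    dm = model.g_bar @ dk_u + model.b_bar @ dh_u

    # Gradient of the variance, organized to reuse a few triangular solves
    e1 = solve_triangular(model.L, k_u, lower=True)              # L^{-1} k_u
    e2 = solve_triangular(model.R, h_u - model.F.T @ e1,
                          trans='T', lower=False)                # R^{-T}(h_u - F^T e1)
    r = solve_triangular(model.R, e2, lower=False)               # R^{-1} e2
    w = solve_triangular(model.L, model.F @ r + e1,
                         trans='T', lower=True)                  # L^{-T}(F r + e1)
    dvar = -2.0 * (w @ dk_u) + 2.0 * (r @ dh_u)
    return dm, dvar
```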