Gaussian Process Regressor - Part II

  1. Introduction
  2. Covariance function
  3. Non-dimensionalization
  4. Learning the hyperparameters
    1. Log-likelihood
    2. Process variances
    3. Finding the hyperparameters

This time let’s focus on the hyperparameters.

Introduction

In the last article, the Gaussian process regression (GPR) model was discussed. Given the training data, it looks as if all the coefficients, the predictive mean, and the variances can be determined from those formulations. But that is not the whole story: there are extra coefficients in the covariance function of the Gaussian process that still need to be determined, namely the hyperparameters.

The determination of the hyperparameters is a little more involved than the formulations in the last article, as this problem is non-convex.

Covariance function

A covariance function in the Gaussian process takes the form

$$k(\mathbf{x}_i,\mathbf{x}_j) = \sigma_f^2\, R(\mathbf{x}_i,\mathbf{x}_j)$$

where $\sigma_f^2$ is the process variance, and $R$ is typically stationary, i.e. $R(\mathbf{x},\mathbf{x}')=R(\mathbf{x}-\mathbf{x}')$, and monotonically decreasing. It is also assumed that $R(\mathbf{0})=1$, meaning that a point correlates with itself the most. When the observations are noisy, the covariance function is modified as

$$k(\mathbf{x}_i,\mathbf{x}_j) = \sigma_f^2 R(\mathbf{x}_i,\mathbf{x}_j) + \sigma_n^2\delta_{ij} \equiv \sigma_f^2\left[R(\mathbf{x}_i,\mathbf{x}_j) + \tilde{\sigma}_n^2\delta_{ij}\right]$$

A typical choice of $R$ is the squared exponential function,

$$R(\mathbf{x}_i,\mathbf{x}_j) = \exp\left[-(\mathbf{x}_i-\mathbf{x}_j)^T\mathbf{M}(\mathbf{x}_i-\mathbf{x}_j)\right]$$

In the isotropic case, $\mathbf{M}=\frac{1}{2l^2}\mathbf{I}$, meaning that the length scales in all dimensions are the same; in the anisotropic case, $\mathbf{M}=\frac{1}{2}\mathrm{diag}(\mathbf{l})^{-2}$ is a diagonal matrix with positive diagonal entries, representing the length scales in different dimensions. In both cases, the length scales have to be figured out from the training data. The latter case is sometimes also called automatic relevance determination (ARD), meaning that the training process automatically determines whether a particular dimension has a significant influence on the output. See the DACE manual for more discussion.
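
To make the anisotropic (ARD) case concrete, here is a minimal NumPy sketch of the squared-exponential correlation above; the function name and the toy inputs are illustrative, not code from the article.

```python
import numpy as np

def sq_exp_correlation(X1, X2, length_scales):
    """R(x_i, x_j) = exp[-(x_i - x_j)^T M (x_i - x_j)] with M = 0.5 * diag(l)^-2."""
    l = np.asarray(length_scales, dtype=float)      # one length scale per input dimension
    diff = (X1[:, None, :] - X2[None, :, :]) / l    # pairwise scaled differences
    return np.exp(-0.5 * np.sum(diff**2, axis=-1))  # R(0) = 1 on the diagonal

X = np.random.rand(5, 2)
R_iso = sq_exp_correlation(X, X, [0.3, 0.3])  # isotropic: same scale in every dimension
R_ard = sq_exp_correlation(X, X, [0.3, 1.5])  # ARD: a large scale downweights that dimension
```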

The length scales $\mathbf{l}$ and the variances $\sigma_f^2$ and $\sigma_n^2$ are the hyperparameters to be determined in the training process. In the general sense, the coefficients for the basis functions $\bar{\mathbf{b}}$ are hyperparameters too.

Non-dimensionalization

Before proceeding to the training process, it is beneficial to non-dimensionalize some quantities in the GPR model. First, the covariance matrices $\mathbf{K}_y$ and $\mathbf{K}_{su}$ can be “non-dimensionalized” by $\sigma_f^2$,

$$\begin{aligned} \mathbf{K}_y &\equiv \sigma_f^2\tilde{\mathbf{K}}_y \equiv \sigma_f^2(\tilde{\mathbf{K}}_{ss} + \tilde{\sigma}_n^2\mathbf{I}) \\ \mathbf{K}_{su} &\equiv \sigma_f^2\tilde{\mathbf{K}}_{su} \end{aligned}$$
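
A quick numeric check of this scaling, with an arbitrary correlation matrix and made-up variances (a throwaway sketch, not the article's code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (5, 2))
diff = X[:, None, :] - X[None, :, :]
R_ss = np.exp(-0.5 * np.sum(diff**2, axis=-1))        # K̃_ss with unit length scales

sigma_f2, sigma_n2 = 2.0, 0.1
sigma_n2_tilde = sigma_n2 / sigma_f2                  # σ̃_n² = σ_n² / σ_f²
K_y_direct = sigma_f2 * R_ss + sigma_n2 * np.eye(5)   # K_y built directly
K_y_scaled = sigma_f2 * (R_ss + sigma_n2_tilde * np.eye(5))  # σ_f² K̃_y
assert np.allclose(K_y_direct, K_y_scaled)
```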

Subsequently, examine the effect of $\sigma_f^2$ on the predictive mean,

$$\mathbf{m}_p^* = \mathbf{K}_{su}^T\bar{\mathbf{g}} + \mathbf{H}_u^T\bar{\mathbf{b}} = \tilde{\mathbf{K}}_{su}^T\tilde{\mathbf{g}} + \mathbf{H}_u^T\tilde{\mathbf{b}}$$

where

$$\begin{aligned} \bar{\mathbf{b}} &= (\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{y}_s = (\mathbf{H}^T\tilde{\mathbf{K}}_y^{-1}\mathbf{H})^{-1}\mathbf{H}^T\tilde{\mathbf{K}}_y^{-1}\mathbf{y}_s \equiv \tilde{\mathbf{b}} \\ \bar{\mathbf{g}} &= \mathbf{K}_y^{-1}(\mathbf{y}_s-\mathbf{H}\bar{\mathbf{b}}) = \sigma_f^{-2}\tilde{\mathbf{K}}_y^{-1}(\mathbf{y}_s-\mathbf{H}\tilde{\mathbf{b}}) \equiv \sigma_f^{-2}\tilde{\mathbf{g}} \end{aligned}$$

And the covariance,

$$\begin{aligned} \mathbf{K}_p^* &= \mathbf{K}_{uu} - \mathbf{K}_{us}\mathbf{K}_y^{-1}\mathbf{K}_{su} + \mathbf{D}^T(\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{H})^{-1}\mathbf{D} \\ &= \sigma_f^2\left[\tilde{\mathbf{K}}_{uu} - \tilde{\mathbf{K}}_{us}\tilde{\mathbf{K}}_y^{-1}\tilde{\mathbf{K}}_{su} + \tilde{\mathbf{D}}^T(\mathbf{H}^T\tilde{\mathbf{K}}_y^{-1}\mathbf{H})^{-1}\tilde{\mathbf{D}}\right] \end{aligned}$$

where

$$\mathbf{D} = \mathbf{H}_u^T - \mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{K}_{su} = \mathbf{H}_u^T - \mathbf{H}^T\tilde{\mathbf{K}}_y^{-1}\tilde{\mathbf{K}}_{su} \equiv \tilde{\mathbf{D}}$$

In sum, once the hyperparameters are given, $\sigma_f^2$ has no direct effect on the mean, but the covariance is proportional to $\sigma_f^2$. The question is: how to obtain the hyperparameters?
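
The scaled formulas translate directly into code. Below is a hedged NumPy sketch of the predictive mean and covariance using only the tilde quantities; the shapes (H with one row of basis-function values per training point, H_u per test point) and all names are assumptions of this sketch, not the article's implementation.

```python
import numpy as np

def gpr_predict(R_ss, R_su, R_uu, H, H_u, y_s, sigma_f2, sigma_n2_tilde):
    """R_ss: (n, n), R_su: (n, n_u), R_uu: (n_u, n_u), H: (n, m), H_u: (n_u, m)."""
    n = R_ss.shape[0]
    K_ty = R_ss + sigma_n2_tilde * np.eye(n)              # K̃_y
    L = np.linalg.cholesky(K_ty)
    solve = lambda B: np.linalg.solve(L.T, np.linalg.solve(L, B))  # K̃_y^{-1} B

    A = H.T @ solve(H)                                    # H^T K̃_y^{-1} H
    b_t = np.linalg.solve(A, H.T @ solve(y_s))            # b̃ (independent of σ_f²)
    g_t = solve(y_s - H @ b_t)                            # g̃ = K̃_y^{-1} (y_s - H b̃)

    mean = R_su.T @ g_t + H_u @ b_t                       # m_p*: no direct σ_f² dependence
    D_t = H_u.T - H.T @ solve(R_su)                       # D̃
    cov = sigma_f2 * (R_uu - R_su.T @ solve(R_su)
                      + D_t.T @ np.linalg.solve(A, D_t))  # K_p* scales with σ_f²
    return mean, cov
```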

Learning the hyperparameters

Log-likelihood

The hyperparameters are determined using maximum likelihood estimation (MLE). With the joint Gaussian distribution, the log marginal likelihood of the training data is

$$\begin{aligned} \mathcal{L} &= \log p(\mathbf{y}_s\,|\,\mathbf{X}_s,\mathbf{b},\mathbf{B}) \\ &= -\tfrac{1}{2}(\mathbf{y}_s-\mathbf{H}\mathbf{b})^T(\mathbf{K}_y+\mathbf{H}^T\mathbf{B}\mathbf{H})^{-1}(\mathbf{y}_s-\mathbf{H}\mathbf{b}) - \tfrac{1}{2}\log\left|\mathbf{K}_y+\mathbf{H}^T\mathbf{B}\mathbf{H}\right| - \tfrac{n}{2}\log 2\pi \end{aligned}$$

where $n$ is the number of training data points. Utilizing the determinant counterpart of the matrix inversion lemma,

$$\left|\mathbf{A}+\mathbf{U}\mathbf{C}\mathbf{V}^T\right| = \left|\mathbf{A}\right|\left|\mathbf{C}\right|\left|\mathbf{C}^{-1} + \mathbf{V}^T\mathbf{A}^{-1}\mathbf{U}\right|$$

and letting $\mathbf{b}=\mathbf{0}$ and $\mathbf{B}^{-1}\rightarrow\mathbf{O}$ as was done last time,

$$-2\mathcal{L} = \mathbf{y}_s^T\left[\mathbf{K}_y^{-1} - \mathbf{K}_y^{-1}\mathbf{H}(\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{K}_y^{-1}\right]\mathbf{y}_s + \log\left|\mathbf{K}_y\right| + \log\left|\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{H}\right| + (n-m)\log 2\pi$$

where $m$ is the number of basis functions, introduced due to the singularity caused by $|\mathbf{B}|$. An interesting simplification can be applied to the first term in the above expression,

$$\begin{aligned} I_1 &= \mathbf{y}_s^T\left[\mathbf{K}_y^{-1} - \mathbf{K}_y^{-1}\mathbf{H}(\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{K}_y^{-1}\right]\mathbf{y}_s = \mathbf{y}_s^T\mathbf{K}_y^{-1}(\mathbf{y}_s - \mathbf{H}\bar{\mathbf{b}}) \equiv I_2 \\ I_1 &= (\mathbf{y}_s - \mathbf{H}\bar{\mathbf{b}})^T\mathbf{K}_y^{-1}\mathbf{y}_s \equiv I_3 \\ I_1 &= \mathbf{y}_s^T\mathbf{K}_y^{-1}\mathbf{y}_s - \bar{\mathbf{b}}^T\mathbf{H}^T\mathbf{K}_y^{-1}\mathbf{H}\bar{\mathbf{b}} \equiv I_4 \\ I_1 &= I_2 + I_3 - I_4 = (\mathbf{y}_s - \mathbf{H}\bar{\mathbf{b}})^T\mathbf{K}_y^{-1}(\mathbf{y}_s - \mathbf{H}\bar{\mathbf{b}}) \end{aligned}$$

indicating that the term $I_1$ behaves as if the data were centered at $\mathbf{H}\bar{\mathbf{b}}$, with a covariance matrix $\mathbf{K}_y$. Furthermore, factoring $\mathcal{L}$ with $\sigma_f^2$,

$$-2\tilde{\mathcal{L}}_1 = \sigma_f^{-2}(\mathbf{y}_s - \mathbf{H}\tilde{\mathbf{b}})^T\tilde{\mathbf{K}}_y^{-1}(\mathbf{y}_s - \mathbf{H}\tilde{\mathbf{b}}) + \log\left|\tilde{\mathbf{K}}_y\right| + \log\left|\mathbf{H}^T\tilde{\mathbf{K}}_y^{-1}\mathbf{H}\right| + (n-m)\log\sigma_f^2$$

where the constant term is neglected.
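
The chain of identities above is easy to verify numerically. The following throwaway check (an arbitrary SPD matrix and random data, purely illustrative) confirms that $I_1$ equals the centered quadratic form:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 2
G = rng.standard_normal((n, n))
K_y = G @ G.T + n * np.eye(n)                         # any symmetric positive-definite K_y
H = rng.standard_normal((n, m))
y = rng.standard_normal(n)

Ki = np.linalg.inv(K_y)                               # explicit inverse is fine for a tiny check
b_bar = np.linalg.solve(H.T @ Ki @ H, H.T @ Ki @ y)   # generalized least-squares coefficients
I1 = y @ (Ki - Ki @ H @ np.linalg.inv(H.T @ Ki @ H) @ H.T @ Ki) @ y
r = y - H @ b_bar
assert np.isclose(I1, r @ Ki @ r)                     # I_1 = (y - H b̄)^T K_y^{-1} (y - H b̄)
```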

If the prior on the coefficients of the basis functions is ignored, the log-likelihood would be simplified to

$$-2\tilde{\mathcal{L}}_2 = \sigma_f^{-2}(\mathbf{y}_s - \mathbf{H}\tilde{\mathbf{b}})^T\tilde{\mathbf{K}}_y^{-1}(\mathbf{y}_s - \mathbf{H}\tilde{\mathbf{b}}) + \log\left|\tilde{\mathbf{K}}_y\right| + n\log\sigma_f^2$$
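
As a reference, here is a hedged sketch that evaluates both $-2\tilde{\mathcal{L}}_1$ and $-2\tilde{\mathcal{L}}_2$ from the scaled quantities; a Cholesky factor replaces explicit inverses and determinants, and all names are of this sketch's choosing.

```python
import numpy as np

def neg2_loglik(K_ty, H, y_s, sigma_f2, with_basis_prior=True):
    """K_ty is K̃_y; returns -2 L̃_1 (with_basis_prior=True) or -2 L̃_2 (False)."""
    n, m = H.shape
    L = np.linalg.cholesky(K_ty)
    solve = lambda B: np.linalg.solve(L.T, np.linalg.solve(L, B))

    A = H.T @ solve(H)                             # H^T K̃_y^{-1} H
    b_t = np.linalg.solve(A, H.T @ solve(y_s))     # b̃
    r = y_s - H @ b_t
    quad = r @ solve(r)                            # (y_s - H b̃)^T K̃_y^{-1} (y_s - H b̃)
    logdet_Kty = 2.0 * np.sum(np.log(np.diag(L)))  # log|K̃_y|

    if with_basis_prior:                           # -2 L̃_1 (vague prior on b)
        return (quad / sigma_f2 + logdet_Kty
                + np.linalg.slogdet(A)[1] + (n - m) * np.log(sigma_f2))
    return quad / sigma_f2 + logdet_Kty + n * np.log(sigma_f2)   # -2 L̃_2
```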

Process variances

A natural next step is to find $\sigma_f^2$ by setting $\partial(-2\tilde{\mathcal{L}})/\partial\sigma_f^2=0$ and solving for $\sigma_f^2$,

$$\sigma_f^2 = \frac{1}{N}(\mathbf{y}_s - \mathbf{H}\tilde{\mathbf{b}})^T\tilde{\mathbf{K}}_y^{-1}(\mathbf{y}_s - \mathbf{H}\tilde{\mathbf{b}})$$

where $N=n-m$ for $\tilde{\mathcal{L}}_1$ and $N=n$ for $\tilde{\mathcal{L}}_2$. The former corresponds to the variance estimate with the degrees of freedom corrected for the $m$ estimated basis coefficients, while the latter is the MLE. When there are multiple outputs, the output variables are usually assumed to be independent, and $\sigma_f^2$ can be computed separately for each output.

Plugging this value back into the likelihood and removing the constant terms [Welch1992], one obtains the reduced log-likelihoods,

$$\begin{aligned} -\mathcal{F}_1 &= \log\left|\tilde{\mathbf{K}}_y\right| + \log\left|\mathbf{H}^T\tilde{\mathbf{K}}_y^{-1}\mathbf{H}\right| + (n-m)\log\sigma_f^2 \\ -\mathcal{F}_2 &= \log\left|\tilde{\mathbf{K}}_y\right| + n\log\sigma_f^2 \end{aligned}$$

where the only remaining unknown hyperparameters are the length scales $\mathbf{l}$. Note that $\mathcal{F}_2$ should be used if a general mean function, without priors on its coefficients, is employed.
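
Given the scalar pieces that appear in $-2\tilde{\mathcal{L}}$, eliminating $\sigma_f^2$ is a one-liner. A small sketch, with names assumed for illustration:

```python
import numpy as np

def reduced_neg_loglik(quad, logdet_Kty, logdet_A, n, m, with_basis_prior=False):
    """quad = (y_s - H b̃)^T K̃_y^{-1} (y_s - H b̃); returns (σ_f², -F)."""
    if with_basis_prior:                               # -F_1, with N = n - m
        sigma_f2 = quad / (n - m)
        return sigma_f2, logdet_Kty + logdet_A + (n - m) * np.log(sigma_f2)
    sigma_f2 = quad / n                                # -F_2, with N = n (the MLE)
    return sigma_f2, logdet_Kty + n * np.log(sigma_f2)
```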

Finding the hyperparameters

There appear to be two strategies for determining the hyperparameters: the two-step and the one-step methods.

The two-step method utilizes the reduced log-likelihood. This is the method used in the DACE toolbox and its Python translation in sklearn. The hyperparameters are divided into two groups: (1) the length scales $\mathbf{l}$; (2) the variances $\sigma_f^2$ and $\sigma_n^2$. The primary unknown is $\mathbf{l}$; $\sigma_f^2$ is a function of $\mathbf{l}$, and $\sigma_n^2$ is set externally, mainly as a device for numerical stability (namely the "nugget"). The objective function is $\mathcal{F}_2$. In the multi-output case, $\mathcal{F}$ is computed for each output using the same set of $\mathbf{l}$, and the results are summed up as the final objective function. A minimal sketch of this strategy is given below.
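
The sketch below illustrates the two-step idea under simplifying assumptions (a constant mean, a fixed nugget, a single output): the length scales are the only optimization variables, and $\sigma_f^2$ is recovered in closed form inside the objective $-\mathcal{F}_2$. It shows the strategy only; it is not the DACE or sklearn implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_F2(log_l, X, y, H, nugget=1e-6):
    """-F_2 as a function of the log length scales; σ_f² is eliminated inside."""
    l = np.exp(log_l)
    diff = (X[:, None, :] - X[None, :, :]) / l
    K_ty = np.exp(-0.5 * np.sum(diff**2, axis=-1)) + nugget * np.eye(len(X))  # K̃_y
    L = np.linalg.cholesky(K_ty)
    solve = lambda B: np.linalg.solve(L.T, np.linalg.solve(L, B))
    A = H.T @ solve(H)
    b_t = np.linalg.solve(A, H.T @ solve(y))
    r = y - H @ b_t
    sigma_f2 = (r @ solve(r)) / len(y)               # closed-form MLE of σ_f²
    return 2.0 * np.sum(np.log(np.diag(L))) + len(y) * np.log(sigma_f2)

# toy usage: 1D input, constant mean (H is a column of ones)
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (30, 1))
y = np.sin(6.0 * X[:, 0]) + 0.05 * rng.standard_normal(30)
H = np.ones((30, 1))
res = minimize(neg_F2, x0=np.zeros(1), args=(X, y, H), method="Nelder-Mead")
length_scale = np.exp(res.x)                         # step 1: l; step 2: σ_f² from l
```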

The one-step method tries to determine all the hyperparameters, even including $\bar{\mathbf{b}}$, using a gradient-based method. The difficulties are the efficiency of gradient-based methods when there are many variables, and the implementation of the gradients. The first item is no longer a major issue given the developments in modern computational science. The second item can indeed be troublesome, but that is where the elegance of GPflow is revealed: it is based on TensorFlow, and TF handles the gradients efficiently through its computation-graph model. In GPflow, one can focus on the objective functions. In its GPR implementation, $\tilde{\mathcal{L}}_2$ is employed. A limitation is that, in the multi-output case, all the outputs have to use the same $\sigma_f^2$. A remedy is to add a wrapper to its GPR class and compute the MLEs of $\sigma_f^2$ for each output.
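
A hedged usage sketch along these lines, assuming a GPflow 2-style API (the exact calls differ across GPflow versions, so treat the details as assumptions): the kernel length scales, the process and noise variances, and the constant-mean coefficient are all trained jointly by a single optimizer call.

```python
import numpy as np
import gpflow

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (30, 2))
Y = np.sin(6.0 * X[:, :1]) + 0.05 * rng.standard_normal((30, 1))

# ARD squared-exponential kernel; its variance plays the role of σ_f²
kernel = gpflow.kernels.SquaredExponential(lengthscales=[1.0, 1.0])
model = gpflow.models.GPR(
    data=(X, Y),
    kernel=kernel,
    mean_function=gpflow.mean_functions.Constant(),  # its coefficient is trained too
)

# one-step: TensorFlow supplies the gradients, SciPy's optimizer does the rest
opt = gpflow.optimizers.Scipy()
opt.minimize(model.training_loss, model.trainable_variables)
```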

In the newer version of GPR in sklearn, the gradient of $\mathcal{F}_2$ w.r.t. $\mathbf{l}$ is provided, and gradient-based optimization methods like L-BFGS-B are employed. However, in the new implementation, the mean function is hardcoded to be zero, meaning that $\mathcal{F}_1=\mathcal{F}_2$. Furthermore, the variance $\sigma_f^2$ is fixed to one, probably for easier computation of the gradients. This version essentially rules out the second group of hyperparameters and determines the first group using a gradient-based method. Therefore, the approach in this version is closer to the one-step method.
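
For comparison, a hedged usage sketch of sklearn's GaussianProcessRegressor with an ARD RBF kernel and its default L-BFGS-B optimizer; the kernel composition and toy data are illustrative choices of this sketch, not the configuration discussed above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (40, 2))
y = np.sin(6.0 * X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(40)

# ARD length scales plus a learned noise term standing in for the nugget
kernel = RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, optimizer="fmin_l_bfgs_b",
                               n_restarts_optimizer=5, normalize_y=True)
gpr.fit(X, y)
print(gpr.kernel_)   # optimized length scales and noise level
```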

The GPy implementation will not be discussed here, as I am not familiar with it.