This time let’s talk about a non-parametric model.
Introduction
A non-parametric model uses all the training data to make predictions. In contrast, a parametric model distills its “parameters” from the training data beforehand, and uses only those parameters for prediction.
Gaussian process regression (GPR), a non-parametric model, has been around for quite a while. It has deep connections to methods such as the radial basis function network (RBFN). In geostatistics, and later in engineering, the method is known as kriging. GPR with an explicit mean function is also called generalized least squares (GLS). GPR has two good properties. First, it can handle noise in the sample data. Second, the model provides not only the interpolated value but also the uncertainty of the prediction. These properties can be particularly useful in engineering applications; if time permits, this will be explained in a later article.
The idea of GPR is simple. It assumes that all data points are sampled from a Gaussian process, and therefore these points follow a joint Gaussian distribution. A new data point then follows a Gaussian distribution as well, conditioned on the training data. In that conditional distribution, the mean becomes the prediction and the variance becomes the error estimate. In this article, the derivation of GPR is presented, along with some discussion of computational aspects. Most of the material is based on the GPML book (for the derivation) and the manual of the DACE toolbox (for the computation). The DACE manual also provides the derivation from the viewpoint of GLS.
Derivation
The basic form
Assume the unknown function follows a GP,

$$f(x) \sim \mathcal{GP}(m(x), k(x, x'))$$

where $m(x)$ is the mean function and $k(x, x')$ is the covariance function. By definition, they satisfy,

$$\begin{aligned} m(x) &= \mathbb{E}[f(x)] \\ k(x, x') &= \mathbb{E}[(f(x) - m(x))(f(x') - m(x'))] \end{aligned}$$

The probability distribution of a set of points $\mathcal{D} = \{X, y\}$ sampled from the GP is Gaussian,

$$y \sim \mathcal{N}(m, K)$$

where $[m]_i = m(X_i)$ and $[K]_{ij} = k(X_i, X_j)$. Here the input variable $X$ can be a matrix with each row being one sample, but only scalar output is considered for now. Now split the points into two sets: the training set $X_s$ with known output $y_s$, and the target set $X_u$ with unknown output $y_u$. The joint distribution is still Gaussian, with the following mean,

$$m^T = [m(X_s)^T, m(X_u)^T] \equiv [m_s^T, m_u^T]$$

and the block covariance matrix,

$$K = \begin{bmatrix} K_{ss} & K_{su} \\ K_{us} & K_{uu} \end{bmatrix}$$

where the $i,j$th element of each block is associated with the corresponding rows of $X$,

$$[K_{ab}]_{ij} = k([X_a]_i, [X_b]_j)$$

In the case that $y_s$ is noisy, a white noise term with variance $\sigma_n^2$ is added to the Gaussian distribution, which adds a diagonal matrix to $K_{ss}$,

$$K_y = K_{ss} + \sigma_n^2 I_{ss}$$
The distribution of $y_u$ given $y_s$ is found by conditioning the joint Gaussian,

$$y_u \mid y_s \sim \mathcal{N}(m_p, K_p)$$

where the predictive mean is,

$$m_p = m_u + K_{us} K_y^{-1} (y_s - m_s)$$

and the covariance is,

$$K_p = K_{uu} - K_{us} K_y^{-1} K_{su}$$
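To make the basic form concrete, below is a minimal sketch in NumPy, assuming a squared-exponential covariance with a fixed length scale and a zero mean function; the helper names `rbf_kernel` and `gp_predict` are illustrative, not from any library.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(Xs, ys, Xu, noise_var=1e-6, length_scale=1.0):
    """Basic-form GPR with zero mean: returns the predictive mean and covariance."""
    Ky  = rbf_kernel(Xs, Xs, length_scale) + noise_var * np.eye(len(Xs))
    Kus = rbf_kernel(Xu, Xs, length_scale)
    Kuu = rbf_kernel(Xu, Xu, length_scale)
    alpha = np.linalg.solve(Ky, ys)                  # K_y^{-1} y_s
    m_p = Kus @ alpha                                # m_u = m_s = 0 assumed
    K_p = Kuu - Kus @ np.linalg.solve(Ky, Kus.T)
    return m_p, K_p

# Toy usage: noisy samples of a sine curve.
Xs = np.linspace(0, 5, 20)[:, None]
ys = np.sin(Xs).ravel() + 0.05 * np.random.randn(20)
Xu = np.linspace(0, 5, 100)[:, None]
mean, cov = gp_predict(Xs, ys, Xu, noise_var=0.05**2)
std = np.sqrt(np.diag(cov))                          # predictive uncertainty
```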
Using basis functions for the mean function
The form of the mean function is usually unknown beforehand, so it is more common to use a set of basis functions,

$$m(x) = h(x)^T \beta$$

The coefficients $\beta$ have to be fitted from the sample data. Under a Bayesian framework, $\beta$ is assumed to follow a Gaussian prior,

$$\beta \sim \mathcal{N}(b, B)$$

Taking the prior on $\beta$ into account, the GP from the previous section is modified as,

$$f(x) \sim \mathcal{GP}\left(h(x)^T b,\; k(x, x') + h(x)^T B h(x)\right)$$

Extra terms due to the basis functions shall be added to the covariance matrices,

$$K_{ab} = K(X_a, X_b) + H_a B H_b^T, \qquad [H_*]_i = h([X_*]_i)^T$$
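As a small illustration of the basis matrices, the sketch below builds $H$ for a constant-plus-linear basis $h(x) = [1, x]^T$; the function name `linear_basis` is illustrative only.

```python
import numpy as np

def linear_basis(X):
    """h(x) = [1, x_1, ..., x_d]; returns H with one row h(x_i)^T per sample."""
    return np.hstack([np.ones((len(X), 1)), X])

X = np.array([[0.0], [0.5], [1.0]])
H = linear_basis(X)   # shape (3, 2); a constant mean would use only the first column
```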
The extra terms make the predictive mean and covariance from the previous section cumbersome to compute. Simplifications are needed, and they are enabled by the matrix inversion lemma,

$$(A + U C V^T)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V^T A^{-1} U)^{-1} V^T A^{-1}$$

In the current case, the lemma is applied as follows,

$$(K_y + H B H^T)^{-1} = K_y^{-1} - K_y^{-1} H C^{-1} H^T K_y^{-1} \equiv K_y^{-1} - A$$
where $C = B^{-1} + H^T K_y^{-1} H$ and the subscript $s$ is dropped in $H_s$; note that $A$ and $C$ here are shorthands specific to this derivation, not the generic symbols in the lemma above. Furthermore, the following identities can be derived,

$$\begin{aligned} B H^T (K_y^{-1} - A) &= C^{-1} H^T K_y^{-1} \\ (K_y^{-1} - A) H B &= K_y^{-1} H C^{-1} \end{aligned}$$

The simplification procedure is basically to cancel out matrices by combining the $H$ and $(K_y^{-1} - A)$ terms.
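As a quick sanity check of the applied lemma and of the shorthands $A$ and $C$, the snippet below verifies the identity numerically on random matrices; the sizes and values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 2                                    # n samples, p basis functions
G = rng.standard_normal((n, n))
Ky = G @ G.T + n * np.eye(n)                   # a symmetric positive-definite K_y
H = rng.standard_normal((n, p))
B = np.diag(rng.uniform(0.5, 2.0, p))          # prior covariance of the coefficients

lhs = np.linalg.inv(Ky + H @ B @ H.T)
C = np.linalg.inv(B) + H.T @ np.linalg.solve(Ky, H)
A = np.linalg.solve(Ky, H) @ np.linalg.solve(C, H.T @ np.linalg.inv(Ky))
assert np.allclose(lhs, np.linalg.inv(Ky) - A)  # (K_y + H B H^T)^{-1} = K_y^{-1} - A
```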
The predictive mean is simplified to,

$$\begin{aligned} m_p^* &= H_u b + (K_{us} + H_u B H^T)(K_y + H B H^T)^{-1}(y_s - H b) \\ &= K_{us} K_y^{-1} y_s + D^T C^{-1} (H^T K_y^{-1} y_s + B^{-1} b) \end{aligned}$$

where $D = H_u^T - H^T K_y^{-1} K_{su}$. The covariance matrix is simplified to,

$$\begin{aligned} K_p^* &= K_{uu} + H_u B H_u^T - (K_{us} + H_u B H^T)(K_y + H B H^T)^{-1}(K_{su} + H B H_u^T) \\ &= K_p + D^T C^{-1} D \end{aligned}$$
Finally, consider the case where the coefficients are distributed uniformly instead of Gaussian, i.e. the vague prior limit $B^{-1} \to O$, in which the prior mean $b$ drops out. With that, we arrive at the commonly used form of GPR,

$$\begin{aligned} m_p^* &= K_{us} K_y^{-1} y_s + D^T (H^T K_y^{-1} H)^{-1} H^T K_y^{-1} y_s = K_{us} \bar{g} + H_u \bar{\beta} \\ K_p^* &= K_p + D^T (H^T K_y^{-1} H)^{-1} D \end{aligned}$$

where $\bar{\beta} = (H^T K_y^{-1} H)^{-1} H^T K_y^{-1} y_s$ and $\bar{g} = K_y^{-1} (y_s - H \bar{\beta})$. The term $\bar{\beta}$ is essentially the coefficients of the mean function fitted from the sample data, which is exactly the generalized least squares estimate. Also, note that some people argue that the last term in $K_p^*$ can be neglected for simplicity [Sasena2002].
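A direct (not yet numerically stabilized) implementation of this commonly used form might look like the sketch below, again assuming a squared-exponential covariance and a constant-plus-linear basis; all helper names are illustrative.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def basis(X):                                  # h(x) = [1, x]: constant plus linear trend
    return np.hstack([np.ones((len(X), 1)), X])

def gpr_predict(Xs, ys, Xu, noise_var=1e-4):
    Ky  = rbf(Xs, Xs) + noise_var * np.eye(len(Xs))
    Kus = rbf(Xu, Xs)
    Kuu = rbf(Xu, Xu)
    H, Hu = basis(Xs), basis(Xu)
    Kinv_H = np.linalg.solve(Ky, H)                                      # K_y^{-1} H
    beta = np.linalg.solve(H.T @ Kinv_H, H.T @ np.linalg.solve(Ky, ys))  # GLS fit, \bar{beta}
    g = np.linalg.solve(Ky, ys - H @ beta)                               # \bar{g}
    m_p = Kus @ g + Hu @ beta
    D = Hu.T - H.T @ np.linalg.solve(Ky, Kus.T)
    K_p = (Kuu - Kus @ np.linalg.solve(Ky, Kus.T)
           + D.T @ np.linalg.solve(H.T @ Kinv_H, D))
    return m_p, K_p
```

The direct solves above can lose accuracy when $K_y$ is ill-conditioned, which motivates the stabilized scheme in the next section.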
Computational aspects
The computation of $m_p^*$ and $K_p^*$ can be tricky due to ill-conditioned matrix inversions. The strategy is to combine the Cholesky decomposition and the QR decomposition to stabilize the two inversions, $K_y^{-1}$ and $(H^T K_y^{-1} H)^{-1}$. For the inversion of $K_y$,

$$K_y^{-1} x = (L L^T)^{-1} x = L^{-T} (L^{-1} x)$$

where $L$ is the lower triangular Cholesky factor, and the matrix inversion is converted into two consecutive triangular solves. To invert $H^T K_y^{-1} H$,

$$H^T K_y^{-1} H = (L^{-1} H)^T (L^{-1} H) \equiv F^T F = R^T R$$

where a QR decomposition $QR = F$ is used, $Q$ has orthonormal columns, and $R$ is an upper triangular matrix. $Q$ is tall and skinny, because its number of rows equals the number of samples while its number of columns equals the number of basis functions. With the stabilized matrix inversions introduced above, the quantities $\bar{\beta}$ and $\bar{g}$ in $m_p^*$ are computed as follows,

$$\begin{aligned} \bar{\beta} &= (H^T K_y^{-1} H)^{-1} H^T K_y^{-1} y_s = R^{-1} [Q^T (L^{-1} y_s)] \\ \bar{g} &= K_y^{-1} (y_s - H \bar{\beta}) = L^{-T} [L^{-1} (y_s - H \bar{\beta})] \end{aligned}$$

The covariance matrix is computed as follows,

$$K_p^* = K_{uu} - (L^{-1} K_{su})^2 + \left(R^{-T} [H_u^T - F^T (L^{-1} K_{su})]\right)^2$$

where $(\square)^2 = \square^T \square$.
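Putting the scheme above into code, the sketch below evaluates $m_p^*$ and $K_p^*$ with SciPy's Cholesky, thin QR, and triangular solves; the function name and argument layout are illustrative, with the matrices as defined in the text.

```python
import numpy as np
from scipy.linalg import cholesky, qr, solve_triangular

def gpr_predict_stable(Ky, Kus, Kuu, H, Hu, ys):
    """Stable evaluation of m_p* and K_p* via Cholesky + thin QR."""
    L = cholesky(Ky, lower=True)                       # K_y = L L^T
    Linv_ys  = solve_triangular(L, ys, lower=True)     # L^{-1} y_s
    F        = solve_triangular(L, H, lower=True)      # F = L^{-1} H
    Linv_Ksu = solve_triangular(L, Kus.T, lower=True)  # L^{-1} K_su
    Q, R = qr(F, mode='economic')                      # thin QR: F = Q R
    beta = solve_triangular(R, Q.T @ Linv_ys)          # \bar{beta} = R^{-1}[Q^T(L^{-1} y_s)]
    g = solve_triangular(L.T, solve_triangular(L, ys - H @ beta, lower=True))  # \bar{g}
    m_p = Kus @ g + Hu @ beta
    D = Hu.T - F.T @ Linv_Ksu                          # H_u^T - F^T (L^{-1} K_su)
    RtD = solve_triangular(R, D, trans='T')            # R^{-T} D
    K_p = Kuu - Linv_Ksu.T @ Linv_Ksu + RtD.T @ RtD
    return m_p, K_p
```

The triangular solves replace the explicit inversions used in the direct sketch of the previous section and behave better when $K_y$ is ill-conditioned.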
Implementations of GPR
There are many libraries that implement GPR. A big list is available here, though it was last updated in 2011. Besides those in the list, there are four noteworthy libraries. The first is the DACE MATLAB toolbox, which has some GLS flavor. It appears to be very popular in engineering fields, especially those that rely heavily on the MATLAB ecosystem. The second is the sklearn Python package. Up to version 0.17, GPR is implemented in the gaussian_process.GaussianProcess class as a translation of the DACE toolbox. Afterwards, GPR is re-implemented in the gaussian_process.GaussianProcessRegressor class using the formulation in GPML. Unfortunately, the new implementation only supports a zero mean function. The third library is GPy, a large, comprehensive repository of GP-related models. Finally, the fourth library is GPflow, an elegant implementation built on the TensorFlow framework with roots in GPy. Modern variants of GPR are also included, some of which will be discussed in follow-up articles.
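For reference, here is a minimal usage sketch of the modern sklearn interface mentioned above; the kernel choice and the toy data are illustrative, and, as noted, the mean function is effectively zero.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Noisy samples of a sine curve (illustrative data).
X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel() + 0.05 * np.random.randn(20)

# RBF covariance plus a white-noise term (the sigma_n^2 I term in the text);
# hyperparameters are tuned by maximizing the marginal likelihood.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05**2)
gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

Xnew = np.linspace(0, 5, 100)[:, None]
mean, std = gpr.predict(Xnew, return_std=True)   # prediction and its uncertainty
```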