Posterior Shape Models

The goal in this tutorial is to use Gaussian processes for regression tasks and experiment with the concept of posterior shape models.

In case you would like to learn more about the syntax of the Scala programming language, have a look at the Scala Cheat Sheet. This document is accessible under:

Documents -> Scala Cheat Sheet

Note: for optimal performance and to keep the memory footprint low, we recommend restarting Scalismo Lab at the beginning of every tutorial.

Fitting observed data

Gaussian processes can not only be used for sampling and marginalization but also for fitting observed data.

Let us start by loading and interpolating a Gaussian process of a face model:

val model = StatismoIO.readStatismoMeshModel(new File("datasets/bfm.h5")).get
show(model, "model")
val gp = model.gp.interpolateNearestNeighbor

Using our shape model, we can study the variability of the nose tip position by building a marginal model at that point and sampling from it:

val tipNoseID = PointId(8156)
val tipMarginal = model.marginal(IndexedSeq(tipNoseID))
val tips = (0 until 200).map(i => tipMarginal.sample.point(PointId(0)))
show(tips, "noseTips")

Here we built a shape model defined over a single point that is the tip of the nose, and then sampled 200 instances from it.
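
As a quick sanity check (a minimal sketch using only the values defined above), the marginal model agrees with the full model at this point: its mean coincides with the nose tip of the full model's mean face:

// the marginal's single point (PointId(0)) corresponds to the nose tip of the full model
println(tipMarginal.mean.point(PointId(0)))
println(model.mean.point(tipNoseID))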

Let us suppose now that, for some strange reason, we are interested in faces with a large nose only :)

An easy way to implement this interest is to feed an observation to our GP regression framework.

Exercise: among the nose tip samples you just generated (noseTips), click a landmark on the point lying furthest away from the face.
Note: in case you have trouble clicking the landmark, execute the code below to add it programmatically.
// run this only if you encountered problems clicking landmarks
// select the sampled tip lying furthest from the nose tip of the mean face
val meanNoseTip = model.mean.point(tipNoseID)
val furthest = tips.maxBy(tip => (tip - meanNoseTip).norm)
addLandmarksTo(Seq(Landmark("A", furthest)), "noseTips")

Let us now retrieve and visualize the deformation we will be feeding to our regression framework:

val furthestPoint : Point[_3D] = getLandmarksOf("noseTips").get.head.point
val refNoseTip : Point[_3D] = model.referenceMesh.point(tipNoseID)
val observation : Vector[_3D] = furthestPoint - refNoseTip

val dom = UnstructuredPointsDomain(IndexedSeq(refNoseTip))
val field = DiscreteVectorField(dom, IndexedSeq(observation))
show(field, "observation")

By deciding to fix the nose tip at the indicated position, we can say that we observed one deformation vector, displayed in the 3D scene, that indicates how the tip of the nose should be deformed.

Question: the observed deformation vector ends at the indicated point position; however, it does not start from the nose tip of the mean mesh. Why is that?
Exercise: display the reference mesh of the model. You can either do it programmatically, or via the GUI by right-clicking the model element and selecting "Add reference mesh as new static object". Have a look again at where the observation vector starts.
Answer: it should become clear from the exercise above that the displayed deformation vector starts from the reference mesh of the model, and not from the displayed mean mesh. This is because the Gaussian process associated with the model is defined over the points of the reference mesh (as are the deformations sampled from this GP).
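
To make this concrete, here is a minimal check using only the values defined above: adding the GP's mean deformation at the reference nose tip to the reference point should land us (approximately) on the nose tip of the mean mesh:

// the mean deformation maps the reference mesh onto the mean mesh
val meanDefAtRefTip : Vector[_3D] = gp.mean(refNoseTip)
val predictedMeanTip : Point[_3D] = refNoseTip + meanDefAtRefTip
// this distance should be (close to) zero
println((predictedMeanTip - model.mean.point(tipNoseID)).norm)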

The whole idea behind using GP regression is now to retrieve the rest of the deformation field, yielding a full face deformation such that:

  1. The retrieved deformation field satisfies our observation and maps the tip of the nose to where we want it.

  2. The retrieved deformation field obeys the properties dictated by the Gaussian process. In this particular case, this means that the obtained deformation should still yield a proper face mesh.

Posterior Gaussian process

Now that we have this observation, we can feed it to the GP regression framework in Scalismo as follows:

val littleNoise = NDimensionalNormalDistribution(Vector(0,0,0), SquareMatrix((1f,0,0), (0,1f,0), (0,0,1f)))
val trainingData = IndexedSeq((refNoseTip, observation, littleNoise))
val posterior : LowRankGaussianProcess[_3D,_3D] = gp.posterior(trainingData)

In the code above, we start by building a noisy training dataset. This dataset is a sequence of triplets, each containing a data point (here the nose tip on the reference mesh), the corresponding deformation vector (given by a user or some other source), and a noise distribution indicating how certain we are about this observation.

Given that our function values are 3-dimensional deformation vectors, such a noise distribution is a zero-mean multivariate normal distribution with a 3-by-3 covariance matrix; here we use the identity matrix, i.e. unit variance along every axis.
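
For reference, this is what the regression computes conceptually. In its textbook (full-rank) form, conditioning a Gaussian process with prior mean \mu and covariance kernel k on noisy observations y at the points X, with noise covariance \Sigma (here \Sigma = I), yields a posterior Gaussian process with

\mu_{\mathrm{post}}(x) = \mu(x) + K(x, X)\,\bigl(K(X, X) + \Sigma\bigr)^{-1}\bigl(y - \mu(X)\bigr)
k_{\mathrm{post}}(x, x') = k(x, x') - K(x, X)\,\bigl(K(X, X) + \Sigma\bigr)^{-1} K(X, x')

Scalismo performs an equivalent computation in the low-rank basis of the GP, but the effect is the same: the posterior mean is pulled towards the observations, and the posterior variance shrinks around the observed points.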

As you can see from the returned type above, the obtained posterior model is a continuous Gaussian Process from which we can now sample deformations at any set of points:

val posteriorSample : DiscreteVectorField[_3D, _3D] = posterior.sampleAtPoints(model.referenceMesh)
show(posteriorSample, "posteriorSample")
Exercise: zoom in on the tip of the nose and compare the sampled deformation vector at the tip of the nose with the indicated observation vector. How similar are they? Draw another sample deformation field and compare again.

As you can see, by using the posterior method we do not retrieve a single deformation field satisfying our observation, but a full Gaussian process, that is, a distribution of deformation fields satisfying the two conditions above.
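
We can also verify the first condition numerically. The following is a minimal sketch, reusing the values defined above: the posterior mean deformation evaluated at the reference nose tip should be close to the observed deformation, up to the assumed noise and the limited flexibility of the low-rank model:

// posterior mean deformation at the observed point vs. the observation itself
val posteriorMeanAtTip : Vector[_3D] = posterior.mean(refNoseTip)
println((posteriorMeanAtTip - observation).norm) // should be small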

Posterior of a StatisticalMeshModel

Given that the StatisticalMeshModel is merely a wrapper around a GP, the same posterior functionality is available for statistical mesh models:

val discreteTrainingData = IndexedSeq((PointId(8156), furthestPoint, littleNoise))
val discretePosterior : StatisticalMeshModel = model.posterior(discreteTrainingData)

Notice how, in this case, the training data triplet contains the point identifier of the nose tip instead of its 3D position on the reference mesh, and the target position of that point (furthestPoint) instead of a deformation vector.

This is because the StatisticalMeshModel relies internally on a discrete Gaussian process, for which observations can only be specified at points of the discrete domain.
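
As a tiny illustration (using only values defined above), the point id simply refers back to a point of this discrete domain, i.e. the reference mesh:

// PointId(8156) identifies the nose tip on the reference mesh
println(model.referenceMesh.point(PointId(8156))) // same point as refNoseTip above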

Let's visualize the obtained posterior model:

show(discretePosterior, "NoseyModel")
Exercise: sample a few random faces from the graphical interface using the random button. Notice how all faces display large noses :) with the tip of the nose remaining close to the selected landmark.

Here again we obtain much more than just a single face instance fitting the input data: we get a full normal distribution of shapes fitting the observation. The most probable shape, and hence our best fit, is the mean of the posterior.
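
Here is a minimal sketch, using only the values defined above, of how to inspect this best fit:

// the posterior mean mesh is our single best explanation of the observation
val bestFit : TriangleMesh = discretePosterior.mean
show(bestFit, "bestFit")
// its nose tip should end up close to the clicked landmark
println((bestFit.point(PointId(8156)) - furthestPoint).norm)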

Hopefully you noticed while sampling from the posterior model that we tend to get faces with rather large noses. This is an interesting result, considering that we indicated only a single deformation vector as an observation.

In fact, this is a perfect demonstration that our face model, built from sample covariances, is a global model in which an observation at a particular point affects the full shape. In our case, since the tip of the nose is strongly correlated with the rest of the nose points, the deformations at those points end up being similar to the large one observed at the tip, and therefore yield faces with big noses.

Moreover, when looking at the mean of the posterior, one can immediately see that this single point observation also influenced other facial features, which become similar to those of the sample meshes in which similar nose tip deformations were observed.
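
To see this global coupling in numbers, here is a minimal sketch based only on the values defined above; the point id below is chosen arbitrarily for illustration (any point away from the nose region will do):

val someOtherId = PointId(1000) // arbitrary point id, picked for illustration only
val displacement = discretePosterior.mean.point(someOtherId) - model.mean.point(someOtherId)
println(displacement.norm) // in general non-zero: the single observation influences the whole face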

Landmark uncertainty

As you saw previously, when specifying the training data for the posterior GP computation, one indicates a noise distribution to model the uncertainty of the input data. This allows us to tune the fitting results.

Below, we perform the posterior computation again, this time with a noise variance that is 5 times larger.

val largeNoise = NDimensionalNormalDistribution(Vector(0,0,0), SquareMatrix((5f,0,0), (0,5f,0), (0,0,5f)))
val discreteTrainingDataN = IndexedSeq((PointId(8156), furthestPoint, largeNoise))
val discretePosteriorN = model.posterior(discreteTrainingDataN)
show(discretePosteriorN, "NoisyNoseyModel")
Exercise: sample a few faces from this noisier posterior model. How flexible is the model compared to the previous posterior? How well do the sampled nose tips fit the indicated landmark compared with the previous posterior?
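
For a rough numerical comparison of the two posteriors, here is a small sketch reusing the values defined above; with the larger noise the posterior trusts the observation less, so its mean nose tip typically ends up further from the clicked landmark:

// distance between each posterior mean's nose tip and the clicked landmark
println((discretePosterior.mean.point(PointId(8156)) - furthestPoint).norm)  // little noise
println((discretePosteriorN.mean.point(PointId(8156)) - furthestPoint).norm) // 5 times larger noise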

In addition to defining the noise programmatically, the Scalismo GUI allows you to define such a distribution graphically by changing the landmark's uncertainty.

Exercise: select the landmark you clicked from the scene tree graph (under "noseTips") and navigate to the "Uncertainty" tab. There, change the values of the standard deviations and rotation axis and notice what happens.

As you can see, this lets you visualize the spatial positions that the clicked landmark is likely to assume according to the indicated noise parameters. Such a spatial configuration can then be mapped to a noise MVN as follows:

val landmark = getLandmarksOf("noseTips").get.head
val manualNoise : NDimensionalNormalDistribution[_3D] = landmark.uncertainty.get
Exercise: build and display a posterior model making use of your manually set landmark uncertainty.
val manualTrainingData = IndexedSeq((PointId(8156), furthestPoint, manualNoise))
val manualPosterior = model.posterior(manualTrainingData)
show(manualPosterior, "ManualNoseyModel")