Incorporating 3D Gaussian Splats into the graphics pipeline

3D Gaussian splatting is an emerging rendering technique that is overtaking NeRFs. Since it is built around point primitives, it is more compatible with traditional graphics pipelines, which already support point rendering.

Gaussian splats essentially enhance the concept of point rendering by converting the point primitive into a 3D ellipsoid, which is then projected into 2D during the rendering process. This concept was initially described in 2002 [3], but the technique of extending Structure from Motion scans in this way was only detailed more recently [1].

In this post, I explore how to integrate Gaussian splats into the traditional graphics pipeline. This allows them to be used alongside triangle-based primitives and interact with them through the depth buffer for occlusion (see header image). This approach also simplifies deployment by eliminating the need for CUDA.

Storage

The original implementation uses .ply files as its checkpoint format, which keeps training-relevant data structures at the expense of storage efficiency and thus leads to large files.

For example, it stores the covariance as a scaling vector and a rotation quaternion, necessitating reconstruction during rendering. A more efficient approach is to leverage the symmetry of the covariance matrix and store only its diagonal and upper triangular entries, thereby eliminating the reconstruction and reducing storage requirements.
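
As a sketch of what this could look like (my own helper functions, not the reference code), the covariance can be rebuilt once from the stored scale and rotation quaternion and then reduced to its six unique entries:

import numpy as np

def quat_to_rotmat(q):
    # q = (w, x, y, z), assumed to be normalized
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance_entries(scale, q):
    # scale: per-axis extent of the ellipsoid (assumed linear, not log-encoded)
    # Sigma = R * diag(scale)^2 * R^T is symmetric, so its diagonal plus
    # upper triangle (6 values) describe it completely.
    R = quat_to_rotmat(q)
    sigma = R @ np.diag(np.asarray(scale, dtype=float)**2) @ R.T
    return sigma[np.triu_indices(3)]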

Further analysis of the storage usage for each attribute shows that the spherical harmonics of orders 1-3 are the main contributors to the file size. However, according to the ablation study in the original publication [1], these harmonics only lead to a modest PSNR improvement of 0.5 dB.

Therefore, the most straightforward way to decrease storage is to discard the higher-order spherical harmonics. Additionally, the order 0 spherical harmonics can be converted into a diffuse color and merged with the opacity to form a single RGBA value. These simple yet effective methods were implemented in one of the early WebGL implementations, resulting in the .splat format. As an added benefit, this format can be interpreted by viewers that are unaware of Gaussian splats as a simple colored point cloud:

Results using a renderer that is not aware of Gaussian splats

By directly storing the covariance as previously mentioned, we can reduce the precision from float32 to float16, thereby halving the storage needed for that data. Furthermore, since most splats have a limited spatial extent, we can also use float16 for the position data, yielding additional storage savings.

With these changes, we achieve a storage requirement of 22 bytes per splat, in contrast to the 44 bytes needed by the .splat format and 236 bytes in the original implementation. Thus, we have attained a 10x reduction in storage compared to the original implementation simply by using more suitable data types.
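
For illustration, one possible layout that adds up to those 22 bytes (the exact field order here is my own choice, not a fixed format) can be described as a numpy structured dtype:

import numpy as np

# hypothetical 22-byte layout: position and covariance as float16, color + opacity as RGBA8
splat_dtype = np.dtype([
    ("position",   np.float16, 3),  #  6 bytes
    ("covariance", np.float16, 6),  # 12 bytes
    ("rgba",       np.uint8,   4),  #  4 bytes
])
assert splat_dtype.itemsize == 22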

Blending

The image formation model presented in the original paper [1] is similar to NeRF rendering, to which it is also compared: a ray is cast and its intersections with the splats are accumulated, which leads to front-to-back blending. This is precisely the approach taken by the provided CUDA implementation.

Blending is still handled by a fixed-function unit of the graphics pipeline, which can be set up for front-to-back blending [2] by using the factors (one_minus_dest_alpha, one) and by pre-multiplying color and alpha in the shader as color.rgb * color.a. This results in the following equations:

\begin{aligned}C_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} C_{src} + C_{dst}\\ \alpha_{dst} &= (1 - \alpha_{dst}) \cdot \alpha_{src} + \alpha_{dst}\end{aligned}

However, this method requires the framebuffer alpha value to be zero before rendering the splats, which is not typically the case as any previous render pass could have written an arbitrary alpha value.

A simple solution is to switch to back-to-front sorting and use the standard alpha blending factors (src_alpha, one_minus_src_alpha) for the following blending equation:

C_{dst} = \alpha_{src} \cdot C_{src} + (1 - \alpha_{src}) \cdot C_{dst}

This allows us to regard Gaussian splats as a special type of particle that can be rendered together with other transparent elements within a scene.
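
As a minimal sketch of what this looks like on the application side (assuming a PyOpenGL-based renderer; the sorting helper below is illustrative and not part of any library):

import numpy as np
from OpenGL.GL import glEnable, glBlendFunc, GL_BLEND, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA

def back_to_front_order(positions, view_matrix):
    # camera-space z of each splat center; with OpenGL conventions the camera
    # looks down -z, so more negative z means farther away
    cam_z = (positions @ view_matrix[:3, :3].T + view_matrix[:3, 3])[:, 2]
    return np.argsort(cam_z)  # farthest first

# standard alpha blending, matching the equation above (requires a current GL context)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)

The sorted order is then used as the index buffer when drawing the splats after the opaque geometry.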

References

  1. Kerbl, Bernhard, et al. “3D Gaussian Splatting for Real-Time Radiance Field Rendering.” ACM Transactions on Graphics 42.4 (2023): 1-14.
  2. Green, Simon. “Volumetric Particle Shadows.” NVIDIA Developer Zone (2008).
  3. Zwicker, Matthias, et al. “EWA Splatting.” IEEE Transactions on Visualization and Computer Graphics 8.3 (2002): 223-238.

stb_image_resize2.h – performance

Recently there was a large rework of the STB single-file image_resize library (STBIR), bumping it to 2.0. While v1 was really slow and only usable if you needed to quickly get some code running, the 2.0 rewrite claims to take performance seriously by using SIMD. So let's put it to the test.

As references, I chose the moderately optimized C-only implementation in Ogre3D and the highly optimized SIMD implementation in OpenCV.

Below you find the time it takes to scale a 1024x1024 px byte image to 512x512 px. All libraries were set to linear interpolation. The time is the accumulated time over 200 runs.

                  RGB      RGBA
Ogre3D 14.1.2     660 ms   668 ms
STBIR 2.01        632 ms   690 ms
OpenCV 4.8        245 ms   254 ms

For the RGBA test, STBIR was set to the STBIR_4CHANNEL pixel layout. All libraries were compiled with -O2 -msse. Additionally, OpenCV could dispatch AVX2 code. Enabling AVX2 with STBIR actually decreased performance.

Note that while STBIR has no performance advantage over a C-only implementation for the simple resizing case, it offers some neat features if you want to handle sRGB data or non-premultiplied alpha.

How to generate random points on a sphere

This question often pops up when you need a random direction vector to place things in 3D or want to do a particle simulation.

Recall that a 3D unit sphere (and hence a direction) is parametrized by only two variables: the elevation \theta \in [0; \pi] and the azimuth \varphi \in [0; 2\,\pi], which can be converted to Cartesian coordinates as

\begin{aligned} x &= \sin\theta \, \cos\varphi \\ y &= \sin\theta \, \sin\varphi \\ z &= \cos\theta \end{aligned}

If one takes the easy way and uniformly samples this parametrization in numpy like

import numpy as np

phi = np.random.rand() * 2 * np.pi
theta = np.random.rand() * np.pi

one (i.e. you, as you are reading this) ends up with something like this:

While the 2D surface of polar coordinates is sampled uniformly (left), we observe a bias of the sampling density towards the poles when projecting to Cartesian coordinates (right).
The issue is that the cos mapping of the elevation has an uneven step size in Cartesian space, as you can easily verify: \cos'(x) = -\sin(x).

The solution is to simply sample the elevation in the Cartesian space instead of the spherical space – i.e. sampling z \in [-1; 1]. From that we can get back to our elevation as \theta = \arccos z:

z = 1 - np.random.rand() * 2 # convert rand() range 0..1 to -1..1
theta = np.arccos(z)

As desired, this compensates the spherical coordinates such that we end up with uniform sampling in the Cartesian space:

Custom opening angle

If you want to restrict the opening angle instead of sampling the full sphere, you can easily extend the above. The cosine of the opening angle, which ranges over [1; -1], is re-mapped to a Cartesian sampling range in [0; 2] as

cart_range = -np.cos(angle) + 1 # maximal range in cartesian coords
z = 1 - np.random.rand() * cart_range
theta = np.arccos(z)

Optimized computation

If you do not actually need the angle \theta itself, you can save some trigonometric functions by using \sin \theta = \sqrt { 1 - z^2} as

\begin{aligned} x &= \sqrt { 1 - z^2} \, \cos\varphi \\ y &= \sqrt { 1 - z^2} \, \sin\varphi \end{aligned}
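
Putting everything together, a vectorized numpy version of the sampler (a sketch combining the steps above) could look like:

import numpy as np

def sample_sphere(n, opening_angle=np.pi):
    # uniform directions within the given opening angle around +z (pi = full sphere)
    phi = np.random.rand(n) * 2 * np.pi
    cart_range = 1 - np.cos(opening_angle)   # maximal range in cartesian coords
    z = 1 - np.random.rand(n) * cart_range   # z in [cos(angle), 1]
    sin_theta = np.sqrt(1 - z**2)            # avoids computing theta at all
    return np.column_stack((sin_theta * np.cos(phi), sin_theta * np.sin(phi), z))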

Fast wire-frame rendering with OpenCV

Let's say you have mesh data in the typical format: triangulated, with a vertex buffer and an index buffer. E.g. something like

>>> vertices

[[[ 46.27500153  19.2329998   48.5       ]]

 [[  7.12050009  15.28199959  59.59049988]]

 [[ 32.70849991  29.56100082  45.72949982]]

 ..., 

>>> indices

[[1068 1646 1577]
 [1057  908  938]
 [ 420 1175  237]
 ..., 

Typically you would need to feed it into OpenGL to get an image out of it. However, there are occasions when setting up OpenGL would be too much hassle or when you deliberately want to render on the CPU.

In this case we can use OpenCV to do the rendering in two function calls:

import cv2
import numpy as np

img = np.full((720, 1280, 3), 64, dtype=np.uint8)

# rot, trans: the camera pose as expected by projectPoints, K: the 3x3 intrinsic matrix
pts2d = cv2.projectPoints(vertices, rot, trans, K, None)[0].astype(int)
cv2.polylines(img, pts2d[indices], True, (255, 255, 255), 1, cv2.LINE_AA)

See the documentation of cv2.projectPoints for the meaning of the parameters.

Note how we only project each vertex once and apply the mesh topology only afterwards. Here, we just use numpy advanced indexing, pts2d[indices], to perform the expansion.

This is pretty fast too. The code above only takes about 9ms on my machine.

In case you want filled polygons, this is pretty easy as well

for face in indices:
    cv2.fillConvexPoly(img, pts2d[face], (64, 64, 192))

However, as we need a Python loop in this case and also have quite some overdraw, it is considerably slower at 20 ms.

Of course you can also combine both to get an image like in the post title.

From here on you can continue to go crazy and compute face normals to do culling and shading.
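
A rough sketch of how that could start, assuming rot is the Rodrigues rotation vector already passed to projectPoints, trans the corresponding translation, and triangles with a consistent winding:

R = cv2.Rodrigues(rot)[0]                      # rotation vector -> 3x3 matrix
verts_cam = vertices.reshape(-1, 3) @ R.T + trans.reshape(1, 3)
tris = verts_cam[indices]                      # (M, 3, 3) triangle corners in camera space
normals = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
# OpenCV's camera looks down +z, so a triangle faces the camera when its
# normal points back towards the origin
front = np.einsum('ij,ij->i', normals, tris[:, 0]) < 0
cv2.polylines(img, pts2d[indices][front], True, (255, 255, 255), 1, cv2.LINE_AA)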

OGRECave 1.10 release

The 1.10.0 release of the OGRECave fork was just created. This means that the code is considered stable enough for general usage and the current interfaces will be supported in subsequent patch releases (i.e. 1.10.1, 1.10.2 …).

SampleBrowser running GLES2 on desktop

This release represents more than 3 years of work from various contributors when compared to the previous 1.9 release. At the time of writing it contains all commits from the Bitbucket version as well as many fork-specific features and fixes.

If you are reading about the fork for the first time and wonder why it was created, see this blog post. For a comparison between the github and bitbucket version see this log.

For a general overview of the 1.10 features when compared to 1.9, see the OGRECave 1.10 release notes.

The highlights probably are:

  • upstream Python bindings as a component
  • improved GL3+/GLES2 renderers
  • a new HLMS Component implementing physically based shading
  • SDL2-based input handling
  • Bites Component for rapid prototyping of applications
  • Emscripten platform support

For further information see the github page of the fork.

On OGRE versions

Currently one can choose between the following OGRE versions: 1.9, 1.10, 2.0 and 2.1.

However, the versioning scheme has become completely arbitrary while still resembling semantic versioning.
As a consequence, somebody even had to put a “What version to choose?” guide on the OGRE homepage.

Unfortunately the guide confuses more than it helps:

Continue reading On OGRE versions

Introducing the OGRE fork on GitHub

In this post I want to introduce the OGRE fork on GitHub. The goal of the fork is to provide a stable and reliable OGRE 1.x series while at the same time modernizing parts under the hood.

The idea behind this is that there are many existing 1.x codebases, actually a whole 1.x ecosystem, that can be modernized that way.
The last release of the 1.x series was over 2 years ago, so using the current 1.10 branch already gives a lot of improvements.

Continue reading Introducing the OGRE fork on GitHub

Learning Modern 3D Graphics Programming

One of the best resources to learn modern OpenGL, and the one which helped me quite a lot, is the Book at www.arcsynthesis.org/gltut/ – or rather, was. Unfortunately the domain expired, so the content is no longer reachable.

Luckily the Book was designed as an open source project and the code to generate the website is still available at Bitbucket. Unfortunately this repository does not seem to be actively maintained any more.

Therefore I set out to make the Book available again using GitHub Pages. You can find the result here:

https://paroj.github.io/gltut/

However, I did not simply mirror the pages, but also improved them in several places. So what has changed so far?

Continue reading Learning Modern 3D Graphics Programming

OpenGL Matrices – the missing bits

While the available documentation on how OpenGL matrices work is generally quite good, there are some missing bits. Although not necessary for your everyday rendering, they give some insight into how rasterization in general and OpenGL in particular work.

W coordinate after perspective divide

After conversion to normalized device coordinates (NDC) by applying a projection matrix like

\begin{bmatrix}A & 0 & B & 0\\ 0 & C & D & 0\\ 0 & 0 & E & F\\ 0& 0 & -1 & 0\end{bmatrix} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}

one might think that the projection applied to each vertex looks like

\vec{v}_{ndc} = \frac{1}{w} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} \tfrac{x}{w} \\ \tfrac{y}{w} \\ \tfrac{z}{w} \\ 1 \end{pmatrix}

however it actually looks like

\vec{v}_{ndc} = \begin{pmatrix} \tfrac{x}{w} \\ \tfrac{y}{w} \\ \tfrac{z}{w} \\ \tfrac{1}{w} \end{pmatrix}

The w coordinate is not divided by itself, but is inverted instead. This is done because the interpolation between vertices still has to take place, and perspective-correct interpolation needs the camera-space depth z_{cam} = -w_{clip}.

\begin{aligned} \vec{v}_{\alpha} &= \frac{(1-\alpha) \tfrac{\vec{v}_0}{-z_0} + \alpha \tfrac{\vec{v}_1}{-z_1}}{(1 - \alpha)\tfrac{1}{-z_0} + \alpha \tfrac{1}{-z_1}} \\[1.5em] &= \frac{(1-\alpha)\vec{v}_0 w_{0_{ndc}} + \alpha\vec{v}_1 w_{1_{ndc}}}{(1-\alpha) w_{0_{ndc}} + \alpha w_{1_{ndc}}} \end{aligned}

Instead of dividing by -z, we can multiply by w_{ndc}, as multiplication is faster than division.

Note that for brevity the given formula assumes a scanline based rasterizer as it interpolates only between two vertices. The general approach is to use barycentric coordinates to interpolate between all three vertices simultaneously.
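
A tiny numpy example of the formula at work, with two vertices at different depths and the attribute interpolated halfway across the primitive in screen space:

import numpy as np

z = np.array([-2.0, -10.0])          # camera-space depths of the two vertices
v = np.array([0.0, 1.0])             # some per-vertex attribute, e.g. a texture coordinate
alpha = 0.5                          # halfway between the vertices in screen space

w_ndc = 1.0 / -z                     # the 1/w values kept after the perspective divide
weights = np.array([1 - alpha, alpha]) * w_ndc
v_alpha = (weights @ v) / weights.sum()
print(v_alpha)                       # ~0.167, i.e. biased towards the closer vertex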

Row major or column major

Even though Wikipedia says OpenGL is column-major, it is actually storage agnostic. However, by default it interprets your 16-element array as:

\begin{bmatrix}m_0 & m_4 & m_8 & m_{12}\\ m_1 & m_5& m_9 & m_{13}\\ m_2 & m_6 & m_{10} & m_{14}\\ m_3 & m_7 & m_{11} & m_{15}\end{bmatrix}

Yet most OpenGL functions dealing with matrices offer a transpose parameter which you can use to specify the used order. For a comparison of storage orders see the Eigen documentation.
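
A quick numpy illustration of the difference, where the reshape order stands in for what the transpose parameter does:

import numpy as np

m = np.arange(16, dtype=np.float32)       # m_0 ... m_15 as laid out above
col_major = m.reshape(4, 4, order='F')    # OpenGL's default interpretation
row_major = m.reshape(4, 4, order='C')    # what passing transpose = GL_TRUE corresponds to
print(col_major[:, 3])                    # [12. 13. 14. 15.] -> m_12..m_15 form the last column
print(row_major[:, 3])                    # [ 3.  7. 11. 15.]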

Notably, however, GLSL matrices follow neither C nor mathematical notation; a mat2x4 M has 2 columns and 4 rows and is thus M \in \mathbb{R}^{4 \times 2} mathematically.
Consequently – albeit breaking with C notation – M[0] returns the first column (a vec4).

Now, if you use the transpose parameter mentioned before, prepare to think hard about the data you are actually getting in the Shader.

How to draw a line interpolating 2 colors with opencv

The built-in OpenCV line() drawing function allows drawing a variety of lines. Unfortunately it does not allow drawing a gradient line that interpolates between the colors at its start and end.

However implementing this on our own is quite easy:

#include <opencv2/imgproc.hpp>

using namespace cv;

void line2(Mat& img, const Point& start, const Point& end, 
                     const Scalar& c1,   const Scalar& c2) {
    LineIterator iter(img, start, end, LINE_8);

    for (int i = 0; i < iter.count; i++, iter++) {
       double alpha = double(i) / iter.count;
       // note: using img.at<T>(iter.pos()) is faster, but 
       // then you have to deal with mat type and channel number yourself
       img(Rect(iter.pos(), Size(1, 1))) = c1 * (1.0 - alpha) + c2 * alpha;
    }
}