Petz Rendering in OpenGL

Why 3D rather than real 2D like the original game?

Because I'm lazy and I want the game engine to handle placing and rotating 3D objects for me!

Rendering the ballz

Rendering a solid-color circle is easy-peasy. Create a billboarded quad and draw a circle on it.

But we want something slightly more complex. We want a pixel-perfect circle. If Petz says the ball size is 11, then our circle must be exactly 11 pixels wide (assuming the default pet/ball scale).

Imagine a quad that fills the entire viewport. It's easy to draw a circle anywhere in screen-space on this quad using FRAGCOORD.xy, and since FRAGCOORD gives you the exact screen pixel, you can make the circle an exact pixel width too. Here's how to draw an 11-pixel circle centered around pixel (300,300).

void fragment() {
  vec2 center_pixel = vec2(300.5);
  ALBEDO = vec3(step(length(FRAGCOORD.xy - center_pixel), 5.5));
}

We have to center the circle on a half-pixel and extend it to a half-pixel (i.e. give it an odd pixel width) for it to actually look like a circle rather than a weird square at certain small sizes. Petz always ensures that the balls are odd widths, so let's stick with that.

So the only thing we need to do to render a given ball is find out where its center is on-screen. This is easily done, as long as we know exactly what the viewport size is.

varying flat vec2 center_fragcoord;
uniform int ball_radius;

void vertex() {
  // billboard

  // Calculate the center fragcoord here to avoid recalculating it per fragment.
  vec4 center_view_space = MODELVIEW_MATRIX * vec4(vec3(0.0), 1.0);
  vec4 center_clip_space = PROJECTION_MATRIX * center_view_space;
  vec2 center_ndc = center_clip_space.xy / center_clip_space.w;
  vec2 center_screen = (center_ndc + 1.0) / 2.0;
  center_fragcoord = floor(center_screen * VIEWPORT_SIZE) + 0.5;
}

void fragment() {
  ALBEDO = vec3(0.0);
  ALPHA = step(length(FRAGCOORD.xy - center_fragcoord), float(ball_radius) + 0.5);
}

And, to stop tons of overdraw, move the vertices of the quad inward so that they match the ball_radius.

The pixel-perfect drawing currently isn't implemented properly.


Fuzz

Ball fuzz (hehe) is easy to draw because the ball is always really a square. Fuzzing is just moving the horizontal lines of the ball left and right a bit on-screen.

I use the random number generation method suggested by The Book of Shaders. You can tell Petz is using something similar because of the specific pattern of fuzzing, and because multiple balls can visibly share the same fuzz pattern, so it's clearly deterministic.

Simply derive a random number per row from FRAGCOORD.y - center_fragcoord.y and offset FRAGCOORD.x by that random number. The reason you involve center_fragcoord.y is so that the fuzz doesn't appear to shift and change as the ball moves around the screen; each ball seems to have a persistent fuzz pattern. Make sure the quad is expanded enough to contain the maximum possible fuzz.
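
A minimal sketch of the whole effect, reusing center_fragcoord and ball_radius from the shader above; fuzz_amount is an assumed uniform for the maximum offset in pixels, and the hash is the well-known one-liner from The Book of Shaders:

```glsl
uniform float fuzz_amount; // assumed: maximum fuzz offset in pixels

// The one-line hash popularised by The Book of Shaders.
float rand(float n) {
  return fract(sin(n * 12.9898) * 43758.5453);
}

void fragment() {
  // Key the random offset on the row relative to the ball's center so the
  // pattern stays put as the ball moves around the screen.
  float row = FRAGCOORD.y - center_fragcoord.y;
  float offset = (rand(row) * 2.0 - 1.0) * fuzz_amount;
  vec2 shifted = vec2(FRAGCOORD.x + offset, FRAGCOORD.y);
  ALBEDO = vec3(0.0);
  ALPHA = step(length(shifted - center_fragcoord), float(ball_radius) + 0.5);
}
```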


Outlines

This is reasonably simple, complicated only by the fact that Petz has so many outline variations.

Outlines always cut into the core ball. That is, if a ball has size 11 with a real outline width (see below) of 2, the ball is still 11 pixels wide, but the base color only takes up 9 of those pixels.

Outline size does not scale with the width of the ball; it's a fixed number of pixels.

The variations:

- Outline -1 is no outline.
- Outlines -2 and 0 give an outline on one side of the ball only (i.e. the outline looks like the ball shifted left/right by 1 pixel).
- Outline -3 draws as a nose, i.e. with a white rectangle of shine.
- Outline -4 gives a weird glitchy effect.
- Outline -5 or below seems to give a normal 1px outline.
- Outline 1 gives a 'dotted' outline, i.e. like outlines -2 and 0 together: the left and right sides of the ball are outlined by 1px, but not the top/bottom.
- Outlines of 2 or more give a full outline (x-1) pixels wide.

To render most of these it's just a matter of shifting around FRAGCOORD to get the desired effect.
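
As a sketch of the simplest case, here's a full outline that cuts into the ball, reusing center_fragcoord and ball_radius from earlier; outline_width, base_color and outline_color are assumed uniforms:

```glsl
uniform int outline_width;  // assumed uniforms
uniform vec3 base_color;
uniform vec3 outline_color;

void fragment() {
  float dist = length(FRAGCOORD.xy - center_fragcoord);
  float outer = float(ball_radius) + 0.5;
  // The outline cuts inward: the ball stays the same overall size.
  float inner = outer - float(outline_width);
  ALPHA = step(dist, outer);
  ALBEDO = dist > inner ? outline_color : base_color;
}
```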

Rendering linez

Lines are a lot harder to deal with. I wrote about them in a previous '3D in 2D' article. Here we must also give the lines a pixel-perfect width, which can be done by forcing the vertices outwards along the line normal.


Normal lines must always render behind their balls. Applying a z-penalty can kinda get you there, but there's always the possibility it will render too far back and get hidden, or too far forward and clip a little bit.

To try and prevent this, I pass through the two world positions of the linked balls. I find their screen-Z coordinates, take the maximum, apply a tiny z-penalty, and use that as the z for every line vertex. The base game won't have animations where lines need to cross z-coordinates so this is fine.
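
A rough sketch of that depth trick in the line's vertex shader, assuming Godot 4 built-ins (VIEW_MATRIX, writable POSITION) and assumed uniform names for the two linked ball positions:

```glsl
uniform vec3 ball_a_world; // assumed uniforms: world positions of
uniform vec3 ball_b_world; // the two linked balls

void vertex() {
  // Project both ball centers and take the maximum NDC depth.
  vec4 a_clip = PROJECTION_MATRIX * VIEW_MATRIX * vec4(ball_a_world, 1.0);
  vec4 b_clip = PROJECTION_MATRIX * VIEW_MATRIX * vec4(ball_b_world, 1.0);
  float line_depth = max(a_clip.z / a_clip.w, b_clip.z / b_clip.w);

  // Write clip space ourselves so every vertex of the line shares one depth,
  // nudged slightly behind the balls (sign depends on the depth convention).
  POSITION = PROJECTION_MATRIX * MODELVIEW_MATRIX * vec4(VERTEX, 1.0);
  POSITION.z = (line_depth - 0.001) * POSITION.w;
}
```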

Fuzz and outlines

Fuzz/outlines are hard to draw on lines for two reasons: you need the line's on-screen direction to know where to push fragments, and the UVs distort when the line has varying width.

To draw the line, you have to know the direction it's pointing on-screen and the normal to push the vertices out along (the direction rotated 90 degrees). The vector normalize(vec2(1.0, 0.0) - screen_normal) therefore gives a direction along which we can push each fragment to fuzz it. The lines are simply padded along the normal to leave enough space for the fuzz (which is not a perfect solution, but good enough). We then know that the actual line lies within 0.25 < UV.x < 0.75, with the base color between 0.25 + 1px and 0.75 - 1px and the outlines in the 1px strips just inside those edges; the rest is reserved for fuzz.

There is then a problem with using the UVs when the line is of varying width. You can see this if you make a simple quad in Blender, set up a texture with a vertical line, and then make the top of the quad thinner than the bottom. The UV distorts.

If you were going to fix this in a UV mapping tool, you would scale down the UV at the smaller end to match its visual width. You can do the same thing in the vertex shader. The proportion at this vertex's end of the line will be (line_width / max(line_width_start, line_width_end)). Assign this to UV.x.
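
A sketch of that correction in the vertex shader, assuming hypothetical width uniforms, that UV.y runs from one end of the line to the other, and that the line is centered at UV.x = 0.5:

```glsl
uniform float line_width_start; // assumed uniforms: the line's
uniform float line_width_end;   // on-screen width at each end

void vertex() {
  // Pick this vertex's end by its V coordinate, then shrink U at the
  // thinner end so the texture isn't stretched across it.
  float line_width = mix(line_width_start, line_width_end, UV.y);
  float proportion = line_width / max(line_width_start, line_width_end);
  UV.x = (UV.x - 0.5) * proportion + 0.5;
}
```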

Rendering paintballs

Paintballs act as if masked by their parent ball. They have a true 3D position (given by the stored normalized vector scaled by the base ball's radius), but obviously their display should not 'leak' outside of the parent ball.

To get this effect, the paintball needs to know the base ball's world position and radius. The world position is projected into a screen-space position, I calculate whether the current FRAGCOORD is within the base ball's radius, and modify the ALPHA accordingly. Everything else about their shader is the same as a regular ball.
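
A sketch of the masking step, with assumed names; base_fragcoord would be computed in the vertex shader exactly like center_fragcoord earlier, but from the base ball's projected world position:

```glsl
uniform int base_ball_radius;     // assumed: the parent ball's radius
varying flat vec2 base_fragcoord; // parent ball's center pixel, set in vertex()

void fragment() {
  // Everything else is the regular ball shader; we just clip the paintball
  // to the parent ball's on-screen circle.
  float dist = length(FRAGCOORD.xy - base_fragcoord);
  ALPHA *= step(dist, float(base_ball_radius) + 0.5);
}
```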

Irises are rendered as paintballs since they're masked by the eye balls.

Rendering textures

This one is kind of fun. Petz textures are indexed, but Godot/OpenGL doesn't know anything about that by default. It sees them as full-color images. But Petz uses those indexes heavily to modify textures according to the ball color. The palette is arranged in tens: ten reds in decreasing brightness, ten oranges, ten blues, etc. You can take a texture which uses the ten red colors, and 'palette shift' them to using the ten blue colors instead if the base ball is blue. This preserves shading but switches the color. Or you can take a red texture with orange stripes, shift the reds to blue, and have a blue ball with orange stripes.

Petz also forces images to this palette by closest-color matching. Some user-made textures do not use the actual Petz palette, but the colors are close enough to be forced to the correct ones on load.

I originally went with a lazy solution to this. I generated a lookup table: a 1x256px image with each pixel set to the corresponding palette color. For each pixel of the texture, the fragment shader scanned through this image to find the closest matching color, giving me the color index. If this index is to be shifted, I calculate the new index and look up the real color in the LUT.

For color shifting, the new index is just same 'shade' but in the new color range. e.g. if the texture color is 65, and the ball's color is 80, then the new color index is 85.
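
Since palette ranges are ten entries wide, the shift is just modular arithmetic (a sketch; texture_index and ball_color are assumed int variables):

```glsl
// Keep the shade (index mod 10) and move it into the ball color's range.
int shade = texture_index % 10;
int shifted = (ball_color / 10) * 10 + shade;
// e.g. texture_index 65, ball_color 80: shade 5, range start 80, result 85
```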

However, this color-matching algorithm caused a lot of work on the GPU, especially when zooming in, since the 256-iteration loop ran for every covered pixel. I had to implement a custom BMP loader in Godot which runs the color-matching algorithm once on load and saves the image as a one-channel R8 image where the R value is the color index. This image is then passed to the shader instead of the full-color image.
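
Sampling the preprocessed image then becomes a cheap two-step lookup. A sketch, assuming Godot 4 shader syntax and assumed uniform names:

```glsl
uniform sampler2D index_texture : filter_nearest; // R8: R * 255 = palette index
uniform sampler2D palette;                        // 256x1 image of real colors

void fragment() {
  int idx = int(round(texture(index_texture, UV).r * 255.0));
  // ...apply any palette shift to idx here...
  ALBEDO = texelFetch(palette, ivec2(idx, 0), 0).rgb;
}
```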

Move, project, addballs and animation


There's already an explanation of animation files in Nick Sherlock's repo, so I won't go over that here except to say that the first 3 bytes of his 'tags' are rotational data (x,y,z). The last byte I'm not sure about. There is also some extra info at the end of some animation frames which I don't know the purpose of.

Addball positions

It's important to note that the x,y,z positional data of the addball is relative to the base ball's position and rotation. That is, if the addball is positioned 'above' the base ball but the base ball is rotated to face downwards, the addball will show below the base ball.
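
In other words, the stored offset lives in the base ball's local space, so it has to be rotated before it's translated. A sketch with assumed names:

```glsl
// base_ball_rotation: the base ball's current orientation (mat3)
// base_ball_position: the base ball's world position
// addball_offset: the addball's stored x,y,z, local to the base ball
vec3 addball_world = base_ball_position + base_ball_rotation * addball_offset;
```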


The positional data here is probably relative to rotation too but I haven't fully implemented that yet. It doesn't seem to screw things up too badly.


Projections

It's important to note that this section is processed in order. Projections force balls closer together or further apart along their original vectors.