I made a 3D SVG Renderer that projects textures without rasterization
How can you make affine transformations (the only ones SVGs are capable of) resemble perspective transformations?
I’ve been building a vanilla 3D-object-to-SVG renderer in TypeScript to help render circuit boards made in React, and discovered an interesting trick for keeping the SVGs small while getting approximately correct-looking perspective transformations with image textures.

SVGs don’t support perspective transforms the way CSS does (or at least they’re not guaranteed to work in image viewers), so we need a way to simulate the perspective transformation without creating a massive SVG. Drawing the box below is easy: you can just project each face of the cube into a polygon. But mapping the texture through that perspective transform isn’t natively possible!
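Projecting the face corners themselves is the easy half. Here’s a rough illustration of that step — a simplified pinhole projection, not the renderer’s actual camera code, with `focalLength` as an assumed parameter:

```ts
// Minimal sketch: project a cube face's corners onto the screen.
type Vec3 = { x: number; y: number; z: number };
type Vec2 = { x: number; y: number };

function projectVertex(v: Vec3, focalLength = 500): Vec2 {
  // Perspective divide: points farther away (larger z) shrink toward the center.
  const s = focalLength / (focalLength + v.z);
  return { x: v.x * s, y: v.y * s };
}

// The four projected corners become the points of an SVG <polygon>.
const face: Vec3[] = [
  { x: -50, y: -50, z: 100 },
  { x: 50, y: -50, z: 100 },
  { x: 50, y: -50, z: 200 },
  { x: -50, y: -50, z: 200 },
];
const points = face
  .map((v) => projectVertex(v))
  .map((p) => `${p.x},${p.y}`)
  .join(" ");
// -> usable as <polygon points="..."/>
```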
So if SVGs don’t support perspective transforms, what do they support? SVGs support a nice little transform called an affine transform. This six-number transform is what you get when you write `transform: matrix(a, b, c, d, e, f)` in CSS. Affine transforms are super useful for 2D operations like panning, scaling, and dragging, but they can’t really project into 3D.
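For reference, here’s roughly what that six-number matrix does to a point. The key limitation is that it’s a linear map plus a translation, so parallel lines stay parallel and there’s no foreshortening:

```ts
// Sketch of what matrix(a, b, c, d, e, f) does to a point (x, y).
function applyAffine(
  [a, b, c, d, e, f]: [number, number, number, number, number, number],
  x: number,
  y: number
): [number, number] {
  // x' = a*x + c*y + e,  y' = b*x + d*y + f
  return [a * x + c * y + e, b * x + d * y + f];
}

// e.g. a 2x scale plus a shift of (10, 20):
applyAffine([2, 0, 0, 2, 10, 20], 1, 1); // -> [12, 22]
```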


How can we approximate the transform? Here are some ideas I mulled over that could achieve a good result:
Redraw the image with the distortion. This is potentially expensive and means we can’t use SVGs as the textures without converting them to bitmaps. It also means things might look “fuzzy”.
Ray trace everything! By projecting a ray to compute each pixel of the image, I’d get a very conventional 3D renderer. That doesn’t achieve my goal of lightweight SVGs, though.
Subdivide the image and project each subdivision with the most locally-correct affine transformation, using projected polygon clip paths to cut off the edges of each region.
I was really curious how that last idea could work, and I couldn’t think of any other approach that didn’t require rasterization. So with OpenAI o3’s help I implemented it in the vanilla TypeScript 3D renderer. To test it, we’re going to project a checkerboard pattern onto a cube.
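The heart of the trick is finding, for each subdivided triangle, the single affine matrix that sends its texture-space corners to its projected screen-space corners. Here’s a minimal sketch of that derivation (illustrative only, not the exact code from the repo):

```ts
type Pt = [number, number];

// Returns [a, b, c, d, e, f] for transform="matrix(a b c d e f)" such that
// each texture corner (u, v) lands exactly on its screen corner (x, y).
function affineFromTriangles(tex: [Pt, Pt, Pt], screen: [Pt, Pt, Pt]) {
  const [[u0, v0], [u1, v1], [u2, v2]] = tex;
  const [[x0, y0], [x1, y1], [x2, y2]] = screen;
  const du1 = u1 - u0, dv1 = v1 - v0;
  const du2 = u2 - u0, dv2 = v2 - v0;
  // Solve the 2x2 system from the two edge vectors of the texture triangle.
  const det = du1 * dv2 - du2 * dv1;
  const a = ((x1 - x0) * dv2 - (x2 - x0) * dv1) / det;
  const c = (du1 * (x2 - x0) - du2 * (x1 - x0)) / det;
  const b = ((y1 - y0) * dv2 - (y2 - y0) * dv1) / det;
  const d = (du1 * (y2 - y0) - du2 * (y1 - y0)) / det;
  const e = x0 - a * u0 - c * v0;
  const f = y0 - b * u0 - d * v0;
  return [a, b, c, d, e, f] as const;
}
```

The three corners land exactly where they should; everything inside the triangle is only approximately right, which is why more subdivisions look flatter.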
OK, here’s our starting point: 2 subdivisions, with the 2 checkerboard images each given their own affine transformation.
Oof, looks like it has some kind of bulge, let’s see it with 4 subdivisions!
Looks a bit rough (literally, it looks like it is not a flat surface). Let’s keep going!
Alright let’s go to the end, we want it to look flat!!
At around 512 images, it’s really hard to tell the difference. Awesome! We did image projection without any rasterization! Here’s an animated version that incorporates a fix from an HN commenter (thanks Masterjun!)
Now for the exciting part: the SVG isn’t that big! Because we can use the SVG’s `defs` to avoid repeating the image, we only need to define each clip path!
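Roughly, the output is structured like this — a hand-wavy sketch, reusing the `affineFromTriangles` idea from above and a hypothetical `triangleFragment` helper, not the renderer’s real output code. The texture lives once inside `defs`, and each subdivision only adds a small `use` plus a tiny polygon clip path:

```ts
// Sketch: the texture is defined once, e.g.
// <defs><image id="texture" href="checkerboard.png" .../></defs>,
// and each subdivided triangle contributes only this small fragment.
function triangleFragment(
  id: number,
  matrix: readonly number[],
  screenTri: [number, number][]
): string {
  const pts = screenTri.map(([x, y]) => `${x},${y}`).join(" ");
  return `
    <clipPath id="clip${id}"><polygon points="${pts}"/></clipPath>
    <use href="#texture" clip-path="url(#clip${id})"
         transform="matrix(${matrix.join(" ")})"/>`;
}
```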

Here’s a table of the file size as you increase subdivisions. I _think_ some differences in the matrix calculation account for the somewhat weird scaling.
The math is fun but, in the age of AI, below our pay grade! You can check out the full source code here!

I’m excited to flesh out this 3D renderer because 3D SVGs make great artifacts on GitHub: we want people to be able to easily review changes to circuit boards made with tscircuit in pull requests.
I hope you enjoyed this neat little 3D trick! Back to coding…
Edit: A fine HN user found a bug in the projection (thank you Masterjun and badmintonbaseba!). The code now correctly computes each subdivided triangle’s affine transformation!