Centering and Scaling
2022-12-27
The function I’m sharing here is probably my most used piece of code when it comes to making generative art. It shows up in practically every project I’ve worked on, and originated from my plotter practice. Most of the time, I use it to center and set margins for a piece after it’s generated, during rendering (though there are many other scenarios where it’s useful, which I’ll show below).
Centering, scaling, and setting margins on 2D generative art is something I see a lot of new generative artists struggle with, so I’m sharing this to (hopefully) make some of your lives a little easier.
Background
Generally speaking (and there are always exceptions), when making generative art it’s a bad idea to use a single, fixed resolution for your output (e.g. only 1000x1000px):
- Your art may be viewed on a range of screen shapes and sizes, and you probably want it to look as good as possible in these different conditions.
- If you ever want to produce prints, you want roughly 300px/inch of resolution, meaning a (relatively small) 10x10" print would need at least 3000x3000px of resolution.
A common approach is to generate all of your shapes/geometry on the unit square (meaning all x, y coordinates fall in the range [0, 1]), and then, when rendering, multiply all values by the screen resolution. This works fine if your output resolution is also square and your source geometry is perfectly bounded by the unit square.
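For example, the naive version of that projection might look something like this (a minimal sketch, assuming p5-style width and height globals):
// Naive projection from the unit square to screen space: multiply each
// normalized coordinate by the canvas resolution. `width` and `height` are
// assumed to be p5-style canvas dimensions.
const toScreen = ([x, y]) => [x * width, y * height];

// [0.5, 0.5] lands in the center of the canvas, but on a non-square canvas
// the x and y axes are scaled by different amounts, stretching the artwork.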
However, if you want to adapt your artwork to a range of screen sizes, simply scaling the unit square by the window size doesn’t work.
I also often see generative artworks with uneven margins, likely because their source geometry doesn’t entirely fill the unit square before it is projected onto the screen:
And sometimes, trying to fit your work into a unit square is impossible, because the underlying generative system is chaotic and unpredictable. The code and approach below solve all of these problems for 2D generative artwork.
High-level idea
The code below takes two pairs of coordinates, one bounding the “source” objects to be drawn, and another bounding the “destination” to be drawn upon. Using these coordinates, it returns a function that maps from the source to the destination coordinates, while preserving the ratio between the x and y dimensions (to avoid stretching the source material like we saw with the smiley face above) by centering along the shorter axis.
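To make that concrete, here’s a small numeric example (illustrative numbers only) using the transformFn defined below, mapping a 2:1 source box onto a square destination:
// A 2x1 source box mapped onto a 1000x1000 destination: the scale factor is
// min(1000 / 2, 1000 / 1) = 500, so the scaled content is 1000x500 and gets
// centered vertically with 250px of space above and below.
const t = transformFn([0, 0], [2, 1], [0, 0], [1000, 1000]);
t([0, 0]);   // => [0, 250]
t([2, 1]);   // => [1000, 750]
t([1, 0.5]); // => [500, 500] (the source center maps to the destination center)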
The way I usually do this is something like:
- Compute all of the shapes I want to draw (e.g. in “how you see me” this is all of the vertices making up the paint blobs).
- Find the source bounding box around these shapes, and calculate the destination bounding box using the window size and desired margin.
- Pass these coordinates to the transformFn below, and use it to compute the new coordinates in screen space when making all the necessary draw calls.
Bounding boxes
For historical reasons, computer screens generally treat the top-left corner as the origin (0, 0), with values growing towards the bottom-right corner. I apply this same terminology to “bounding boxes”, which are the smallest rectangles possible that entirely contain some set of shapes. Bounding boxes are critical to this approach, as they define the area to be scaled & centered.
To illustrate this, here is the bounding box for two triangles, labeled with its top-left and bottom-right corners.
Finding the bounding box when you know some shapes' vertices is simple: the top-left corner is given by the smallest x and y coordinates among the vertices, and the bottom-right corner by the largest. You may need to account for objects' stroke-width, radius, and similar here too.
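For instance, a hypothetical helper like this (not part of the code below) can grow the vertex bounding box by half your stroke-width so thick outlines aren’t cropped at the edges:
// Expands a bounding box outward by `pad` units on every side. `pad` might be
// half the stroke-width, or a circle's radius, depending on what you draw.
function padBounds(tl, br, pad) {
  return [
    [tl[0] - pad, tl[1] - pad],
    [br[0] + pad, br[1] + pad],
  ];
}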
Code
I’m releasing this under the Apache-2.0 License. It’s available for copy-paste below, or here. If you use or modify this, please give proper attribution :)
Note: to avoid external dependencies (and make it easy to copy-paste into your project), this code represents 2D coordinates as 2-element lists. You’re welcome to translate this to use P5’s Vectors if you prefer.
Another note: the transformFn code returns a function. This is done to avoid redundant recomputation. If this is foreign to you, check out the usage examples below.
/**
 * Returns a function that transforms between the source and destination
 * coordinate space while preserving the ratio between the input x & y
 * dimensions.
 *
 * @param {[number, number]} stl Top-left point bounding the source.
 * @param {[number, number]} sbr Bottom-right point bounding the source.
 * @param {[number, number]} dtl Top-left point bounding the destination.
 * @param {[number, number]} dbr Bottom-right point bounding the destination.
 */
function transformFn(stl, sbr, dtl, dbr) {
  const [stlx, stly] = stl;
  const [sbrx, sbry] = sbr;
  const [dtlx, dtly] = dtl;
  const [dbrx, dbry] = dbr;
  // Compute the diagonal vector for both bounding rects.
  const [sdx, sdy] = [sbrx - stlx, sbry - stly];
  const [ddx, ddy] = [dbrx - dtlx, dbry - dtly];
  // Find the minimum amount to scale the user draw-area by to fill the screen.
  const [rx, ry] = [ddx / sdx, ddy / sdy];
  const a = Math.min(rx, ry);
  // Compute the translation to the center of the new coordinates, accounting
  // for the fact that rx may not equal ry by centering the smaller dimension.
  const [ox, oy] = [(ddx - sdx * a) * 0.5 + dtlx, (ddy - sdy * a) * 0.5 + dtly];
  // At this point, we transform from user to screen coordinates using
  //   (pt - tl) * a + o
  // We can skip some arithmetic in our output function by rewriting as
  //   pt * a - tl * a + o
  // ... and folding the constants into the form
  //   pt * a + b
  const [bx, by] = [-stlx * a + ox, -stly * a + oy];
  return (inp) => {
    // Scalar values (such as stroke-width, or radius) are only scaled by a
    // constant, not translated.
    if (typeof inp === 'number') {
      return inp * a;
    }
    const [x, y] = inp;
    return [x * a + bx, y * a + by];
  };
}
Usage examples
On the unit square
Say we want to draw the following scene, with coordinates normalized onto the unit square:
let triangleVertices = [[.5, 0], [1, 1], [0, 1]];
let circleCenter = [0.8, 0.5];
let circleRadius = 0.15;

fill([0, 0, 255]);
beginShape();
for (const [x, y] of triangleVertices) vertex(x, y);
endShape(CLOSE);

fill([255, 0, 0]);
circle(...circleCenter, circleRadius * 2);
Perhaps we want to draw that scene on a 6000x3000px window. The bounding box for our source objects is defined by [0, 0] and [1, 1] (as these objects fill the unit square), and the bounding destination coordinates are [0, 0] and [6000, 3000] (to fill the window).
const transform = transformFn([0, 0], [1, 1], [0, 0], [6000, 3000]);

triangleVertices = triangleVertices.map(transform);
circleCenter = transform(circleCenter);
circleRadius = transform(circleRadius);

// rest of draw code...
Notice that we call transform on coordinates (like the vertices and the circle center), as well as on scalar values (like the circle radius). It’s important to transform all values that you want to draw.
Now say we want to add a 500px margin around our scene. To do this, we shrink our destination draw area by 500px on each side, giving us bounding coordinates of [500, 500] and [5500, 2500].
const transform = transformFn([0, 0], [1, 1], [500, 500], [5500, 2500]);

triangleVertices = triangleVertices.map(transform);
circleCenter = transform(circleCenter);
circleRadius = transform(circleRadius);

// rest of draw code...
I’ve drawn the destination coordinates with a dashed line to better show how the margins are being set.
Non-unit square
Now say our generative program is chaotic and unpredictable: if we confine the source coordinates to [0, 0] and [1, 1], we may not neatly capture what we want to draw. In these cases, we need to compute the bounding box for our source shapes.
Say we create a scene like this:
let triangle1Vertices = Array(3).fill(null).map(_ => [random(), random()]);
let triangle2Vertices = Array(3).fill(null).map(_ => [random(), random()]);

fill([0, 0, 255]);
beginShape();
for (const [x, y] of triangle1Vertices) vertex(x, y);
endShape(CLOSE);

fill([255, 0, 0]);
beginShape();
for (const [x, y] of triangle2Vertices) vertex(x, y);
endShape(CLOSE);
Before we call our transformFn, we need to compute the bounds of our source scene. That could look something like this:
// Returns the "top-left-most" point of a pair of points.
const tl = (a, b) => [Math.min(a[0], b[0]), Math.min(a[1], b[1])];
// Returns the "bottom-right-most" point of a pair of points.
const br = (a, b) => [Math.max(a[0], b[0]), Math.max(a[1], b[1])];

const allVertices = [triangle1Vertices, triangle2Vertices].flat();
const sourceTl = allVertices.reduce(tl);
const sourceBr = allVertices.reduce(br);
Then we use our sourceTl and sourceBr to transform this scene onto a larger canvas (say the 6000x3000px window with 500px margins from before):
const transform = transformFn(sourceTl, sourceBr, [500, 500], [5500, 2500]);

triangle1Vertices = triangle1Vertices.map(transform);
triangle2Vertices = triangle2Vertices.map(transform);

// rest of draw code ...
And voila! A nicely centered render.
Tiled rendering
If you’re making massive generative artworks (e.g. printing a mural) that require a resolution larger than what your browser or library supports, a common workaround is to render your artwork piece by piece in “tiles”, each small enough to render individually. The transformFn can be used here as well, to project your output onto each individual tile.
const FINAL_RES = [10000, 10000];
const TILE_COUNT = [10, 10];
const TILE_RES = [
  Math.floor(FINAL_RES[0] / TILE_COUNT[0]),
  Math.floor(FINAL_RES[1] / TILE_COUNT[1]),
];

function transformToTileFn(sourceTl, sourceBr, tileCoords) {
  const destTl = [-tileCoords[0] * TILE_RES[0], -tileCoords[1] * TILE_RES[1]];
  const destBr = [destTl[0] + FINAL_RES[0], destTl[1] + FINAL_RES[1]];
  return transformFn(sourceTl, sourceBr, destTl, destBr);
}
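A minimal sketch of how a render loop over the tiles might look, assuming a hypothetical renderScene function that makes all of your draw calls through the given transform, and p5’s saveCanvas to write each tile out:
// Render each tile on its own TILE_RES-sized canvas. Geometry that falls
// outside the current tile is simply clipped by the canvas bounds.
for (let ty = 0; ty < TILE_COUNT[1]; ty++) {
  for (let tx = 0; tx < TILE_COUNT[0]; tx++) {
    const transform = transformToTileFn(sourceTl, sourceBr, [tx, ty]);
    renderScene(transform); // hypothetical: draws all shapes via `transform`
    saveCanvas(`tile-${tx}-${ty}`, 'png');
  }
}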
And that’s it! I hope this approach is helpful to you. Personally, I find that the biggest advantage is never having to think in terms of “screen” coordinate space (i.e. in pixels, or centimeters for plotting) when writing code for a generative system, knowing that however the program evaluates, it can neatly be mapped onto any size screen at the end.