When WebGL shipped in browsers around 2011, the pitch was eye-catching. GPU-accelerated graphics, right in the browser, no plugin required. Developers who'd spent years fighting Flash's limitations suddenly had access to something that felt like cheating.
Direct access to the graphics card, shaders written in a C-like language, the ability to render millions of polygons at 60 frames per second inside a <canvas> tag.
The demos were insane, and you can still find many of them over on CodePen. Rotating 3D globes. Fluid simulations. Real-time ray marching.
And then... it kind of didn't happen. Not the way everyone expected. Or at least not in the way that I expected.
The 3D web never arrived. Most users never consciously encountered a WebGL experience when casually web surfing. And yet today, WebGL is quietly running inside some of the most-used applications on the planet. The story of how it got there is more interesting than the story everyone expected.
What went wrong with the dream
The vision was a 3D internet. Immersive websites, navigable spaces, games that rivaled native apps. What arrived instead were a thousand beautiful demos that nobody needed, and a handful of genuinely useful applications that couldn't get anyone to care about the underlying technology.
The problem wasn't WebGL. The problem was that most things people do on the web don't need three dimensions. Reading, watching, shopping, messaging. Flat 2D is just fine. More than fine. The killer apps of the browser were never going to be spatial.
Meanwhile, writing raw WebGL is genuinely brutal for any developer. You're managing buffer objects, compiling shaders, handling matrix math by hand, wrestling with a stateful API that punishes you silently for mistakes.
Three.js made it approachable, but approachable is a long way from casual. Most developers looked at the complexity and correctly decided they didn't need it.
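To make "matrix math by hand" concrete, here's the kind of code raw WebGL pushes onto you before a single triangle appears: building a perspective projection matrix yourself. This is a minimal sketch in plain JavaScript (column-major layout, which is what WebGL's uniformMatrix4fv expects), not any particular library's implementation.

```javascript
// Build a 4x4 perspective projection matrix in column-major order,
// the layout WebGL expects when uploading uniforms. With Three.js
// this is handled for you; raw WebGL makes you own it.
function perspective(fovYRadians, aspect, near, far) {
  const f = 1 / Math.tan(fovYRadians / 2);
  const rangeInv = 1 / (near - far);
  return new Float32Array([
    f / aspect, 0, 0,                          0,
    0,          f, 0,                          0,
    0,          0, (near + far) * rangeInv,   -1,
    0,          0, near * far * rangeInv * 2,  0,
  ]);
}

// 45-degree field of view, 16:9 aspect, near/far clip planes.
const m = perspective(Math.PI / 4, 16 / 9, 0.1, 100);
```

And this is just the projection. Raw WebGL also wants model and view matrices, multiplied in the right order, re-uploaded every frame.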
So the 3D web quietly faded as a mainstream aspiration. And something more interesting happened in its place.
Where WebGL actually won
Maps
To understand how far maps have come, you have to remember what they used to be.
The original Google Maps (launched in 2005) was, at its core, a grid of pre-rendered JPEG tiles stitched together by JavaScript shuffling image tags around a div.
You could watch them load, grey squares filling in from the edges as your connection caught up. Zoom wasn't smooth at all; it was a jump between discrete levels, each requiring a fresh batch of tiles from the server.
Every label, every road, every building outline had already been baked into images on Google's servers. There was no real-time rendering happening in your browser at all. It was an impressive illusion of a continuous map, and it worked, right up until you had a choppy connection.
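The tile grid itself was, and still is, simple arithmetic. In the standard Web Mercator scheme, a longitude/latitude pair maps to a tile address at a given zoom level like this (a sketch of the widely documented "slippy map" formula, not Google's internal code):

```javascript
// Convert lon/lat (degrees) to x/y tile indices at a zoom level,
// using the standard Web Mercator "slippy map" tiling scheme.
// Zoom level z splits the world into a 2^z by 2^z grid of tiles.
function lonLatToTile(lon, lat, zoom) {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}

const tile = lonLatToTile(-122.4194, 37.7749, 12); // San Francisco at zoom 12
```

Each zoom step quadruples the tile count, which is why every zoom jump meant a fresh batch of image requests.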
The transition away from raw images took the better part of a decade. Vector maps arrived on Android in 2010, and WebGL rendering in the browser followed a few years later. What it unlocked was immediate and obvious: smooth zoom at any level, labels that stay upright as you rotate, 3D buildings that extrude in real time, and a map whose entire visual style can be changed on the fly without re-rendering a single tile on a server. No JPEGs.
Today, Mapbox, whose entire business is built on a WebGL rendering engine, renders vector tiles directly on the GPU. Millions of apps use it without their users ever hearing the word shader.
For those who want the open-source alternative, MapLibre GL JS is a community-maintained fork that's become the default for anyone who needs the same capability without the Mapbox license.
Data visualization
deck.gl, originally built by the team at Uber, brought WebGL to geospatial data visualization and changed what "big data on a map" could mean. Rendering a million GPS points at interactive frame rates isn't a data problem or a JavaScript problem; it's a GPU problem. WebGL solved it. Observable, Flourish, and a generation of data tools quietly use GPU acceleration for anything that would choke a canvas 2D context.
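The mechanical difference is how the data travels. A canvas 2D approach draws points one at a time on the CPU; a GPU approach packs everything into one flat typed array and hands it over in a single upload. A sketch of that packing step (the field layout here is illustrative, not deck.gl's actual internal format):

```javascript
// Pack N points (lon, lat, value) into one interleaved Float32Array.
// A flat buffer like this is what ultimately gets handed to the GPU
// in a single gl.bufferData call, instead of a million separate draws.
function packPoints(points) {
  const stride = 3; // lon, lat, value per point
  const buffer = new Float32Array(points.length * stride);
  points.forEach((p, i) => {
    buffer[i * stride + 0] = p.lon;
    buffer[i * stride + 1] = p.lat;
    buffer[i * stride + 2] = p.value;
  });
  return buffer;
}

// A million synthetic points: 1e6 points * 3 floats * 4 bytes = 12 MB.
const points = Array.from({ length: 1_000_000 }, (_, i) => ({
  lon: -180 + (i % 360),
  lat: (i % 170) - 85,
  value: i % 100,
}));
const buffer = packPoints(points);
```

Once the buffer is on the GPU, drawing all million points is one draw call per frame, and the per-point work happens in parallel in a vertex shader.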
Figma runs on WebGL. The entire canvas (every frame, every layer, every multiplayer cursor) is GPU-rendered. This is not a small implementation detail. It's the reason Figma can handle files that would make Sketch or Adobe XD crawl.
When Figma launched in 2015, people talked about the multiplayer. The rendering engine was the actual technical moat. They've since upgraded to WebGPU, but the original WebGL bet is what made the whole thing possible.
Photopea, the browser-based Photoshop alternative, uses WebGL for compositing. Canva's export pipeline. Video editors. Real-time filters. The moment you need to apply a non-destructive effect to an image without blocking the main thread, WebGL is the answer.
Games
Nobody dethroned Steam with a WebGL game. But browser-based gaming never died. It fragmented into casual games, HTML5 game engines like Phaser and PlayCanvas, and increasingly, serious engines like Unity shipping WebGL export targets.
More notably, the infrastructure WebGL proved out became the foundation for WebGPU, its successor, which landed in Chrome 113 in 2023 and finally gives developers the low-level GPU access that modern applications actually demand.
What it actually takes to use WebGL in a real app
Nobody writes raw WebGL in production. The API is a state machine from another era. It's verbose, error-prone, and designed for C programmers who don't mind managing memory. In practice, you're almost always one abstraction layer up.
For 3D scenes and objects, Three.js remains the default. It abstracts the shader plumbing, handles the render loop, and has an ecosystem large enough that most standard problems already have solutions.
When you need more control (custom shaders, post-processing, performance-critical paths), you drop into its lower levels or write a ShaderMaterial directly.
For 2D GPU acceleration (think games, interactive UIs, particle systems), PixiJS remains the go-to. Its API feels like the 2D canvas, but it renders through WebGL under the hood, falling back gracefully when WebGL isn't available.
Writing a shader for the first time
At some point, if you go deep enough into any of these tools, you'll end up looking at a shader. It will look something like this:
precision mediump float;
varying vec2 vTexCoord;      // interpolated texture coordinate
uniform sampler2D uTexture;  // the source image
uniform float uBrightness;   // multiplier set from JavaScript

void main() {
  vec2 uv = vTexCoord;
  vec4 color = texture2D(uTexture, uv);
  gl_FragColor = vec4(color.rgb * uBrightness, color.a);
}
This is a fragment shader. It runs on the GPU, once per pixel, in parallel. The entire rendering model of WebGL flows toward this moment: getting data to the GPU so this function can execute millions of times per frame without the CPU knowing about it.
Understanding this (even abstractly) changes how you think about performance. The CPU is sequential and general-purpose. The GPU is massively parallel and narrow. Any time you have a problem that can be decomposed into independent per-element operations, you have a candidate for GPU acceleration. Color grading an image. Simulating particles. Computing a heatmap over a geographic region. Blending layers.
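The contrast is easiest to see by writing the CPU equivalent of the brightness shader above: the same per-pixel operation, but as a sequential loop. The GPU runs the shader body for every pixel at once; the CPU has to visit each one in turn. (A plain JavaScript sketch over raw RGBA bytes, the layout you'd get from canvas getImageData.)

```javascript
// CPU version of the brightness fragment shader: one sequential pass
// over RGBA pixel data. Each pixel's result is independent of every
// other pixel's, which is exactly what makes this a GPU candidate.
function applyBrightness(pixels, brightness) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    out[i + 0] = pixels[i + 0] * brightness; // R
    out[i + 1] = pixels[i + 1] * brightness; // G
    out[i + 2] = pixels[i + 2] * brightness; // B (clamps at 255)
    out[i + 3] = pixels[i + 3];              // alpha unchanged
  }
  return out;
}

// One pixel: RGB (100, 50, 200), fully opaque, brightened by 1.5x.
const input = new Uint8ClampedArray([100, 50, 200, 255]);
const result = applyBrightness(input, 1.5);
```

On a 4K image this loop visits over eight million pixels, one after another, on the main thread. The shader version does the same arithmetic across thousands of GPU cores simultaneously.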
Most browser performance problems are CPU problems. The ones that WebGL solves are a specific category, but it's a category that keeps expanding as applications get more visually complex.
Last few words
WebGL didn't change how people experience the web in the way its early advocates imagined. There is no 3D internet. Websites are still mostly flat. But WebGL became load-bearing infrastructure for the tools a generation of developers and designers use every day. Figma. Mapbox. Google Maps. Data visualization at scale. The creative web.
It won by becoming invisible, which is, if you think about it, exactly what good infrastructure is supposed to do.
The lesson for developers isn't "learn WebGL so you can build 3D websites." It's that understanding what the GPU is good at, and knowing which layer of abstraction to reach for, is increasingly a real skill, one that separates apps that perform from apps that choke, and products that scale from products that don't.