The 512KB limit isn't just minimalism - it forces architectural discipline.
I built a Trello alternative (I was frustrated with its limitations: I wanted rows and decent performance). It came in at ~55KB gzipped by following a few patterns, some of which I'm open sourcing as genX (genx.software - releasing this month):
- Server renders complete HTML (not JSON that needs client-side parsing)
- JavaScript progressively enhances (doesn't recreate what's in the DOM)
- Shared data structures (one index for all items, not one per item)
- Use native browser features (the DOM is already a data structure - article coming)
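A minimal sketch of the progressive-enhancement point (this is not genX code; the data attributes and drag-drop behaviour are made up for illustration). The script only attaches behaviour to markup the server already rendered, instead of re-creating it:

```ts
// Enhance server-rendered cards in place: the DOM is the data structure,
// so there is no parallel client-side store to build or keep in sync.
document.querySelectorAll<HTMLElement>("[data-card]").forEach((card) => {
  card.draggable = true; // the markup already exists; just make it draggable

  card.addEventListener("dragstart", (ev) => {
    // Read the id off the element itself rather than from a JSON payload.
    ev.dataTransfer?.setData("text/plain", card.dataset.card ?? "");
  });
});

document.querySelectorAll<HTMLElement>("[data-column]").forEach((col) => {
  col.addEventListener("dragover", (ev) => ev.preventDefault()); // allow drops

  col.addEventListener("drop", (ev) => {
    ev.preventDefault();
    const id = ev.dataTransfer?.getData("text/plain");
    const card = id ? document.querySelector(`[data-card="${id}"]`) : null;
    if (card) col.appendChild(card); // move the existing node, no re-render
  });
});
```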
Most sites ship megabytes because modern tooling treats size as a rounding error. The 512KB constraint makes you think about what's expensive and get creative. Got rewarded with a perfect Lighthouse score in dev - striving to maintain it through release.
Would love feedback from this community when it's out.
I'd like to see a 512 KB club but for apps; most of these are blogs, and that's easier to do than an app. Link your app when you finish it, pls
Will do!
Building a sub 512KB website is trivial.
Just don't use external trackers, ads, fonts, videos.
Building a sub 512KB website that satisfies all departments of a company of non-trivial size; that is hard.
> Building a sub 512KB website is trivial.
Even for larger sites, it can be trivial, but I prefer to look at it from a non-SPA/state-management point of view.
Not every site needs to be an SPA. Or even a 'react app'. I visit a page, record your metrics on the backend for all I care, you have the request headers etc.; just send me the data I need, nothing else.
It doesn't have to be ugly or even lack some flair; 500KB is a lot of text. Per page request, with out-of-the-box browser caching, there's no excuse. People have forgotten that's all you need.
> People have forgotten that's all you need.
Edit: No they haven't, they just can't monetize optimizations.
I thought that too and assumed my blog fit the criteria. I was wrong; it weighed in at just over 100KB, too heavy to get in the club.
My guess is the photos.
Exactly, so it's not so much a demonstration of how nice a website fits in 512K as it's about _just not using any media_. Not very interesting imho.
maybe lower the resolution?
at least to test your guess
are you using webp photos?
webp is not much of an upgrade in my experience - jpegli pretty much matches it in quality/size while having better compatibility, and if you don't have the original photo and are working with old crusty jpegs it's often best to just leave them alone rather than re-encoding. jpeg-xl does make a noticeable difference, but it's not widely supported.
Literally yesterday there were people defending 15MB websites:
https://news.ycombinator.com/item?id=45798681
> Building a sub 512KB website that satisfies all departments of a company of non-trivial size; that is hard.
And yet tons of personal blogs likely weigh in well over that mark, despite having no requirements beyond personally imposed ideas about how to share information with the world.
> Just don't use external trackers, ads, fonts, videos.
The Internet is likely full of "hero" images that weigh more than 512KB by themselves. For that matter, `bootstrap.min.css` + `bootstrap.min.js` is over half of that budget already.
Not that people need those things, either. But many have forgotten how to do without. (Or maybe bilekas is right; but I like the idea of making things small because of my aesthetic sense. I don't need a financial incentive for that. One of these days I should really figure out what I actually need for my own blog....)
Assuming most of these sites use Javascript, perhaps memory usage should also be considered
I use a text-only HTML viewer, no Javascript interpreter. This is either a 2M or 1.3M static binary
The size of the web page does not slow it down much, and I have never managed to crash it in over 15 years of use, unlike a popular browser
I routinely load concatenated HTML files much larger than those found on the web. For example, on a severely underpowered computer, loading a 16M stored HTML file into the text-only client's cache takes about 8 seconds
I can lazily write custom command-line HTML filters that are much faster than Python or Javascript to extract and transform any web page into SQL or CSV. These filters are each a ~40K static binary
As an experiment I sloppily crammed 63 different web page styles into a single filter. The result was a 1.6M static binary
I use this filter every day for command line search
I'm a hobbyist, an "end user", not a developer
I get the appeal of the 512KB Club. Most modern websites are bloated, slow, and a privacy nightmare. I even get the nerdy thrill of fitting an entire website into a single IP packet, but honestly, this obsession with raw file size is kinda boring. It just encourages people to micromanage whitespace, skip images, or cut features like accessibility or responsive layouts.
A truly "suckless" website isn't about size. It's one that uses non-intrusive JS, embraces progressive enhancement, prioritizes accessibility, respects visitors' privacy and looks clean and functional on any device or output medium. If it ends up small: great! But that shouldn't be the point.
Or to be even more suckless, it would require the user to patch in whatever feature they need.
A rather perfect example of "correlation is not causation". But being "suckless" is a lot harder to measure than just running `length(string)`.
Is there a way of enforcing memory limits on websites from browser (user) side?
These clubs have little effect if there are no incentives on the demand side
easy way: a Chrome extension that pops up a banner when heap > limit
hard way: a custom Chrome build that blocks websites from allocating heap > limit
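A rough sketch of the easy way, assuming a content script and Chrome's non-standard `performance.memory` API (the budget number is made up, and the figures are per renderer process, so treat it as a coarse signal only):

```ts
// Content-script sketch: warn when the page's JS heap exceeds a budget.
// `performance.memory` is Chrome-only and non-standard, hence the cast.
const HEAP_BUDGET_BYTES = 50 * 1024 * 1024; // hypothetical 50 MB budget

function checkHeap(): void {
  const mem = (performance as any).memory;
  if (!mem) return; // API not available (non-Chrome browser)
  if (mem.usedJSHeapSize <= HEAP_BUDGET_BYTES) return;
  if (document.getElementById("heap-warning")) return; // banner already shown

  const banner = document.createElement("div");
  banner.id = "heap-warning";
  banner.textContent =
    `JS heap ${(mem.usedJSHeapSize / 1048576).toFixed(1)} MB exceeds the budget`;
  banner.style.cssText =
    "position:fixed;top:0;left:0;right:0;background:#c00;color:#fff;" +
    "padding:4px;text-align:center;z-index:2147483647";
  document.body.appendChild(banner);
}

setInterval(checkHeap, 5000); // poll every 5 seconds
```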
> The 512KB limit isn't just minimalism - it forces architectural discipline.
True. I skimmed the biggest sites in that list, and they are still extremely fast. It's not the size limit itself that makes the difference, but knowing that there is one, which forces you to reason and use the right tools without cramming in unneeded features.
It would be worth adding some information on the page about the best tools for creating small yet functionally complete and pleasant-looking static sites. A few years ago I'd have said Hugo (https://gohugo.io/), but I haven't checked in a while and there could be better ones. Also ultra-cheap hosting options comparable to Neocities (.org) but located in the EU.
Seems like we can join the club! https://www.firefly-lang.org/ is 218 kB uncompressed.
It calls out the NYT at the beginning, but am I supposed to be impressed that a bunch of mostly obscure minimalist blogs are a few megabytes smaller than the biggest online news site (by subscribers) in the world?
What are we doing here? And to brag about this while including image media in the size is just onanistic.
> Your total UNCOMPRESSED web resources must not exceed 512KB.
I would be interested to know how they define web resources. HN would only fit this description if we don't count every possible REST resource you could request, but instead just the images (3 svgs), CSS (news.css), and JS (hn.js).
The second you count any possible response to `https://news.ycombinator.com/item?...` in the total, we've blown the 512KB cap... and that's where the actual useful content lies.
Feels like regular ol' REST-and-forms webapps aren't quite the target of this list though, so who knows.
> I would be interested to know how they define web resources.
They explain things in the FAQ. You're supposed to do a "Cloudflare URL Scan" and read the "Total bytes". For HN this is 47kB [1], which, yes, is just the 6 requests needed for / and nothing more.
[1] https://radar.cloudflare.com/scan/4c2b759c-b690-44f0-b108-b9...
> Cloudflare URL Scan
> Cloudflare
any chance for a non-monopoly version?
While you might not get the exact same numbers (^1), you can get a very similar result in your browser's devtools, in the network tab, by doing a clean reload of the page. It will give you a total (both compressed/transferred bytes, and uncompressed).
---
(^1) If the page, as HN does, has some headers or additional content for logged in visitors, the numbers will generally be a bit different. But the difference will usually be small.
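If you'd rather script it than read the Network tab, the Resource Timing API gives roughly the same totals. A sketch to paste into the devtools console after a clean reload (note: cross-origin resources without a Timing-Allow-Origin header report their sizes as 0, so it can undercount):

```ts
// Sum what the current page loaded, per the Navigation/Resource Timing APIs.
const nav = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
const res = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
const entries = [...nav, ...res];

const transferred = entries.reduce((sum, e) => sum + e.transferSize, 0);    // compressed, over the wire
const uncompressed = entries.reduce((sum, e) => sum + e.decodedBodySize, 0); // what the 512KB rule counts

console.log(
  `${entries.length} requests, ` +
  `${(transferred / 1024).toFixed(1)} KB transferred, ` +
  `${(uncompressed / 1024).toFixed(1)} KB uncompressed`
);
```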
I use an Intel Atom netbook from 2010 as my test system. It has 1 GB of RAM and an in-order x86 processor. CPU Benchmark gives it 120 Mop/s integer and 42 MiB/s for AES. (For comparison, my usual laptop, which is also nearly obsolete with its i5-8350U, gives 22,000 Mop/s and 2000 MiB/s respectively.)
The netbook can load Firefox in just a few seconds. And Hacker News loads almost instantly, just as on a modern machine. (Hit enter and the page is rendered before you can blink.)
The same machine can also play back 720p H.264 video smoothly.
And yet, if I go to Youtube or just about any other modern site, it takes literally a minute to load and render, none of the UI elements are responsive, and the site is unusable for playing videos. Why? I'm not asking for anything the hardware isn't capable of doing.
If my own work isn't snappy on the Atom I consider it a bug. There are a lot of people using smartphones and tablets with processors in the same class.
YouTube serves AV1 video these days, which has to be decoded in software on machines that old. It might become usable if you switch it to 144p resolution. (For reference, such low-resolution video was very common in the mid-1990s, so it's not wildly out of place on a machine from 2010.)
If it goes down you can always upgrade to a Raspberry Pi 3B+.
> And yet, if I go to Youtube or just about any other modern site, it takes literally a minute to load and render, none of the UI elements are responsive, and the site is unusable for playing videos. Why? I'm not asking for anything the hardware isn't capable of doing.
But the website and web renderer are definitely not optimized for a netbook from 2010 - even modern smartphones are better at rendering pages and video than your Atom (or even 8350U) machines.
> even modern smartphones are better
That's an understatement if I've ever seen one! For web rendering, single-threaded performance is what mostly matters, and smartphones have crazy good single-core performance these days. The latest iPhone has faster single-core performance than most laptops.
Yes, but the parent comment definitely implied they weren't talking about people running the latest and best out there. Even middle-grade smartphones today are leaps and bounds better than the Atom from 2010.
As an engineering challenge, I love it.
Other than that, I would've understood this notion better in the '90s when we were all on dial-up. Maybe my perception is skewed by growing up and watching a picture load in real time on a website?
Now, even with outdated hardware on an OK connection, even larger sites like WaPo (3MB) load what feels like instantly (within 5-10 seconds). If it loaded in 2 seconds or 1 second, I really don't know how that would impact my life in any way.
As long as a site isn't sluggish while you browse around.
My mobile phone's data connection isn't free. I'd prefer it not be wasted on sloppily-made websites.
I like this website. It's very entertaining to me, and a bit nostalgic too. And those minimalist websites also help us remember the importance of building things that withstand the effects of time. Most of them are good candidates to stay online for the next 15 or 20 Internet years to come (almost an eternity in human terms).
It's a fun way to push for a lighter web, but without a way to distinguish the complexity of the sites on the list it's really not all that useful. It's kind of addressed in the FAQ: "The whole point of the 512KB Club is to showcase what can be done with 512KB of space. Anyone can come along and make a <10KB site containing 3 lines of CSS and a handful of links to other pages." But without a way for the user to judge a site's complexity at a glance, I'm not sure I understand the point. Regardless of the FAQ, the first few sites I clicked on were indeed quite light in size, but they also had nothing more than some text and background colors. Also, any search site is going to be near the top of the list, e.g. https://steamosaic.com/
Complexity would be a subjective metric, but without it I'm not sure what you take from this other than a fun little experiment, which is maybe all it's meant to be.
So they should invert it like the demoscene.
Set the limit first, and then request folks to join the contest:
What crazy website can _you_ build in 512KB?
Tag-based organization system with drag-drop, real-time filtering, row/column layouts. ~55KB gzipped.
Built it frustrated with Trello's limitations. The 512KB constraint forced good architecture: server-side rendering, progressive enhancement, shared indexes instead of per-item duplication. Perfect Lighthouse score so far - the real test is keeping it through release.
Extracting patterns into genX framework (genx.software) for release later this month.
"What crazy website can _you_ build in 512KB?" Exactly! That would be super fun and interesting.
While a fun idea, arbitrary limits like this just aren't necessary. Yes, it's all well and good in the name of reducing trackers, etc., but what if I want to have an image-heavy site? Why does that get perceived as a bad thing?
Would help if there was a short description of what the websites are about, instead of just a list of random URLs.
> Your total UNCOMPRESSED web resources must not exceed 512KB.
I only see domains listed. Does this refer to the main page only, or the entire site?
Most thorough discussion here from a few years ago: <https://news.ycombinator.com/item?id=30125633>
Seems like lichess dropped off
The very first website has either 404 links or pages with over a megabyte of total payload. The idea is good, but I don't buy a "fast" website that only serves text with CSS from the '70s.
> Why does any site need to be that huge? It’s crazy.
It's advertising and data tracking.. Every. Single. Time.
PiHole/Adblocker have become essential for traversing the cesspool that is the modern internet.
While a lot of sites break when you disable JavaScript, browsing is very fluid when you do. I'm also using the HTML version of DDG. There's only a handful of websites (and the majority are apps) that I've enabled JS for.
And one of these days, I will write a viewer for GitHub links that will clone the repo and let me quickly browse it. For something that is aimed at devs, the platform is horrendous.
> It's advertising and data tracking.. Every. Single. Time.
Use bootstrap and one image larger than 16x16 and you're near 500KB already.
It's easy to blame the boogeyman but sometimes it's worth looking in the mirror too...
So, “Let’s build carbon-titanium-foldable bicycles instead of bloated modern cars, and still get from A to B?”
How many mainstream online platform users care about the difference in KB in their experience, anyway?
The sites in the list are hobbyist clubs with a technical point of view, which wouldn’t make sense for a mass media outlet with millions of daily traffic, and real interdepartmental complexity and compliance issues to deal with.
> So, “Let’s build carbon-titanium-foldable bicycles instead of bloated modern cars, and still get from A to B?”
That's when you fit the core of your website into 14KB so it can be sent in a single round trip (14KB is roughly TCP's default initial congestion window: about ten segments of ~1460 bytes).
512KB is a lot. You can fit a reasonable "car" into it.
> Let’s build carbon-titanium-foldable bicycles instead of bloated modern cars, and still get from A to B?
Yeah, that sounds awesome.
Nobody _needs_ to run a 4-minute mile, or win a chess tournament, or climb a mountain, or make the most efficient computer program. They're still worthwhile human endeavors.
14KB Club > 512KB Club
There’s basically nothing on that list lol. Is there a list of all these N-KB Club sites?
I don't know of any list, but I know of at least the following “club” pages:
size-related
- <https://14kbclub.com/> - only learned about it today, but I am not sure if my site would qualify (it is only ~12 KiB, but it makes multiple requests...)
- <https://250kb.club/>
- <https://512kb.club> - my site got removed as “ultra minimal” :(
- <https://1mb.club/>
not specifically size-related
- <https://no-js.club/members/>
- <https://xhtml.club/>
- <https://textonly.website/> - my site got removed (I guess because it has a logo and this makes it not text-only...)
There also used to be a 10 KB club, and per its rules my site would have qualified except for the requirement to be featured on HN or otherwise be a “noteworthy site”, if I recall correctly. However, the 10 KB club seems to have been offline for some time already...
In general the issue with these kinds of pages is mostly that they only check _one_ page (typically the homepage, but sometimes I see people submit a special “reduced version” of their homepage, too...). Of course, if _all_ pages were relevant, I think even my (pretty minimalist) page wouldn't qualify, because some pages have high-resolution images I guess...
> Your total UNCOMPRESSED web resources must not exceed 512KB
JavaScript gets all the hate for size, but images easily surpass even your most bloated frameworks.
Which is why the websites on this list largely don't use media.
---
The problem with JavaScript isn't so much the network size; it's the execution time.
Not sure how a site can fit in that club?
For example, if someone uses Google Analytics (which most people do), that alone comes to 430KB.
Then don't use google analytics.
One option is to use access logs on the server and process stats
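A tiny sketch of the access-log approach, assuming Node.js and the usual nginx/Apache "combined" log format (the log path and field layout are assumptions; adjust to your own setup):

```ts
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

// Count page views per path straight from the web server's access log,
// no client-side analytics script needed.
const hitsByPath = new Map<string, number>();

const rl = createInterface({
  input: createReadStream("/var/log/nginx/access.log"),
});

rl.on("line", (line) => {
  // combined format: ip - user [time] "METHOD /path HTTP/x" status bytes ...
  const m = line.match(/"(?:GET|POST|HEAD) (\S+) HTTP/);
  if (!m) return;
  const path = m[1].split("?")[0]; // ignore query strings
  hitsByPath.set(path, (hitsByPath.get(path) ?? 0) + 1);
});

rl.on("close", () => {
  const top = [...hitsByPath.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
  for (const [path, hits] of top) console.log(`${hits}\t${path}`);
});
```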
Back in the early internet, nobody had enough bandwidth to transmit 512 kB in a reasonable time, so clearly it has to be possible.
Well yes, we had low resolution images, no consideration for different viewports, no non-default fonts and little interactivity beyond links or other queries to the server.
> no non-default fonts
That's a win!!!
> For example, if someone uses Google Analytics (which most people do), that alone comes to 430KB.
Perhaps someone might not use Google Analytics. Perhaps someone might apply 430kb to actual content instead.
It’s very easy. 512kb can fit a whole novel in epub format. And HTML is a very verbose language.
Just sticking with html, it's easy peasy
> Not sure how a site can fit in that club?
That's the challenge
true
I've never used it. Some browsers even honor that stupid beacon header now too