I love this! I'm pretty biased, but I think everyone should be doing VRT. I used to work at Percy and now I'm building Vizzly. :p
Playwright uses pixelmatch, which is decent but limited. It returns the number of mismatched pixels and a diff image, but nothing about where changes cluster or what kind of change you're looking at. I built Honeydiff (https://github.com/vizzly-testing/honeydiff) for Vizzly to go further: spatial clustering to group changes into regions, SSIM for perceptual scoring, and intensity stats to characterize what actually changed. My comparison vs odiff & pixelmatch here: https://vizzly.dev/blog/honeydiff-vs-odiff-pixelmatch-benchm...
I love OSS (https://vizzly.dev/open-source/), so if anyone wants to drop Vizzly into their project to get baseline management, review workflows, dynamic content handling, local VRT TDD, etc, it's there.
Anyway, I've been thinking about and building around this problem for a while now. Love to see others doing it too. I feel like a lot of folks have a bad experience that turns them off and I'm hopeful I can make a dent in that problem.
This is great.
I frequently break my site in ways that aren't obvious. Right now, I use a combination of visualping and a homebrew tests.sh that hits various endpoints and runs checks, but I have been meaning to integrate screenshotting into my test script (via selenium or cutycapt) rather than relying on a hosted service.
Have you found a good way of diffing the screenshots? DiffPDF works pretty well, but I haven't found a good solution for automatically checking whether the changes are relevant, rather than just whether anything has changed at all, in a way that could be integrated into a script.
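The most script-friendly sketch I have so far uses pixelmatch and pngjs, roughly like the following; the threshold and the "relevant change" cutoff are placeholder values I haven't tuned, and it still only answers "how much changed", not "does it matter":

    // diff-screens.mjs -- usage: node diff-screens.mjs before.png after.png
    // Assumes both PNGs have identical dimensions; resize beforehand if not.
    import fs from 'node:fs';
    import { PNG } from 'pngjs';
    import pixelmatch from 'pixelmatch';

    const [beforePath, afterPath] = process.argv.slice(2);
    const before = PNG.sync.read(fs.readFileSync(beforePath));
    const after = PNG.sync.read(fs.readFileSync(afterPath));
    const { width, height } = before;
    const diff = new PNG({ width, height });

    // threshold is per-pixel color sensitivity (0..1);
    // the call returns the number of mismatched pixels
    const mismatched = pixelmatch(before.data, after.data, diff.data, width, height, { threshold: 0.1 });
    fs.writeFileSync('diff.png', PNG.sync.write(diff));

    // Placeholder notion of "relevant": ignore anything under 0.1% of pixels
    const cutoff = 0.001 * width * height;
    process.exit(mismatched > cutoff ? 1 : 0);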
I only use Playwright's built-in diffing. It comes with a nice overview page [0] that shows all the failed tests, including traces and screenshots, and there you get a pixel diff. If you have some notion of irrelevant changes that shouldn't warrant a test failure, I don't know of a way to pull that off.
[0] https://playwright.dev/docs/trace-viewer-intro#opening-the-h...
I'm familiar with TurnTrout's The Pond using visual regression testing as well: https://turntrout.com/design#visual-regression-testing
> you have to first generate a screenshot by running your suite with --update-snapshots.
How is it executed? Is it something built into Playwright, or is there a missing part of the code in the post that's responsible for executing it?
Ah, I forgot to mention it in the post. This comes built into Playwright. Normally, you invoke the test suite by running `npx playwright test`. This fails a test if its screenshot is missing or if it differs from the stored one. By running `npx playwright test --update-snapshots` you tell Playwright to overwrite the snapshots instead of failing the tests.
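For completeness, the test itself is just an ordinary Playwright test with a screenshot assertion, roughly like this (the file path and snapshot name are placeholders, and `page.goto('/')` assumes a `baseURL` is set in the config):

    // tests/visual.spec.ts
    import { test, expect } from '@playwright/test';

    test('homepage looks the same', async ({ page }) => {
      await page.goto('/');
      // The first run with --update-snapshots writes homepage.png next to the test;
      // subsequent plain runs fail if the fresh screenshot differs from that baseline.
      await expect(page).toHaveScreenshot('homepage.png');
    });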
Love it. As a blogger myself, I hate to think of the amount of time I've wasted checking every page of my blog during Hugo upgrades. :)
If anyone is wondering what the test results look like, here is an example from my site: https://pub-1fbd8591bf7a40cea36fa130fb2ba6bc.r2.dev/playwrig...
I have these running in a CI/CD process, comparing against the previous commit, with results uploaded to R2. A few problems:
- Playwright regularly fails on timeouts. It's flaky, and good luck figuring out what went wrong.
- You can do a matrix test (chrome/firefox/etc. × mobile/tablet/etc.), but then you need to run these tests in parallel (rough config sketch at the end of this comment). The bare functional minimum is a 16 GB VPS with 4 vCPUs, and my test suite already takes 20 minutes on it. If you want a larger matrix and have more pages, you'll be looking at 64 GB with a dozen or so vCPUs. That's hundreds of dollars a month...
- If you have an animation, it's a struggle to filter it out.
- As far as I know, there is no "version slider" where you can go commit by commit and see how things changed.
- Playwright captures images and videos, and these consume a lot of storage: gigabytes of data for just a few commits.
- Any of the managed solutions (like BrowserStack) costs hundreds of dollars.
Overall, I think it's great, though it's a bit cumbersome to set everything up so it works flawlessly and to keep it from breaking every now and then. You can also do full flows (sign up -> sign in -> do action -> etc. -> success/failure), which test more than just the UI.
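For reference, the matrix and the animation filtering both live in the config in my setup. Take this as a sketch: the project names, worker count, and diff tolerance are illustrative, not a recommendation:

    // playwright.config.ts
    import { defineConfig, devices } from '@playwright/test';

    export default defineConfig({
      // fewer workers = less RAM, but longer wall-clock time
      workers: 2,
      expect: {
        toHaveScreenshot: {
          // freeze CSS animations/transitions before capturing
          animations: 'disabled',
          // tolerate tiny rendering differences instead of failing outright
          maxDiffPixelRatio: 0.01,
        },
      },
      projects: [
        { name: 'chromium-desktop', use: { ...devices['Desktop Chrome'] } },
        { name: 'firefox-desktop', use: { ...devices['Desktop Firefox'] } },
        { name: 'mobile-safari', use: { ...devices['iPhone 13'] } },
      ],
    });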
Thanks for the example of a Playwright report page. I agree that getting browser tests (not even just visual tests) to work reliably is considerable work. I built out a suite at work for a rather complex web application and it certainly looks easier than it is. A couple of notes:
- I disagree that you need a powerful VPS to run these tests; we run our suite once a day at midnight instead of on every commit. You still get most of the benefit for much cheaper this way.
- We used BrowserStack initially but stopped due to flakiness. The key to getting a stable suite was to run tests against a local nginx image serving the web app and wiremock serving the API. This way you have short, predictable latency and can really isolate what you're trying to test.
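Concretely, that setup boils down to pointing Playwright at the local stack, something like this (the port and the compose command are whatever your environment uses; nginx serves the built app, wiremock stubs the API):

    // playwright.config.ts
    import { defineConfig } from '@playwright/test';

    export default defineConfig({
      use: {
        // all page.goto('/...') calls resolve against the local nginx container
        baseURL: 'http://localhost:8080',
      },
      webServer: {
        // assumes a compose file that starts nginx and wiremock; adjust to your setup
        command: 'docker compose up',
        url: 'http://localhost:8080',
        reuseExistingServer: !process.env.CI,
      },
    });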
> - I disagree that you need a powerful VPS to run these tests; we run our suite once a day at midnight instead of on every commit. You still get most of the benefit for much cheaper this way.
Then how do you know which commit is responsible for the regression? I can see that working for a very small team where the amount of change is limited, but even then it can be hard to pin down, especially with CSS, where a change in one place can affect the styles somewhere else.
We probably have at most 50 commits a day in our team, spread across many areas of the application, so when a breakage occurs it's typically easy to tell which commit caused it.
But I agree, if you have a large team or a large monorepo you probably want to know about breaking changes already at the PR stage.