Mastering cross-browser testing
Learn how to ensure all your users get the app experience you intended by testing across all major browsers
Your web app might look great in Chrome but break in Safari. Why does that happen? And how can you prevent your users from encountering browser incompatibility errors in production?
Today, we’ll explore the challenges of cross-browser testing and share how you can optimize your web app’s experience to support all your users across their browser choices. We’ll highlight what causes browser incompatibility errors, the most popular browsers used today, how to decide which browsers you need to be testing, and answer some common FAQs.
What causes browser compatibility errors?
Apps look different in different browsers because each browser uses a different rendering engine to interpret CSS, HTML, and JavaScript. Your code may be the same, but how it gets rendered is specific to each browser.
One example of browser inconsistency is the history of support for the `gap` property in Flexbox layouts. Chrome introduced `gap` support for Flexbox in version 84 (July 16, 2020), but it took almost another year for the same support to arrive in Safari. This incompatibility meant that there was a lengthy period in which many layouts rendered poorly in Safari, even though they looked perfect in Chrome.
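To make the issue concrete, here's a minimal sketch of the kind of layout that broke (the class names are illustrative, not from any real codebase):

```css
/* Flex items spaced with `gap`. In Chrome 84+ this renders with 16px
   between items; in Safari versions without Flexbox `gap` support,
   the `gap` declaration is ignored and the items sit flush together. */
.toolbar {
  display: flex;
  gap: 16px;
}

/* A common workaround from that era: margins on every item after the
   first. Note that `@supports (gap: 16px)` could not reliably detect
   Flexbox `gap`, because some browsers supported `gap` in Grid layouts
   before supporting it in Flexbox. */
.toolbar--fallback > * + * {
  margin-left: 16px;
}
```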
What is cross-browser testing?
Cross-browser testing is a testing method that combines functional tests and visual tests to validate your UI across different browsers and detect instances of browser incompatibility. This coverage ensures that you provide the user experience you intended across all the necessary browsers.
Can you perform cross-browser testing manually?
Yes! In fact, in the web’s early days, that’s what testers had to do.
However, it’s much faster and more precise to QA your application’s functionality and appearance in different browsers with cross-browser testing tools that automate cross-browser tests via CI/CD jobs. Validating browser compatibility in CI means you’ll never ship a commit that accidentally breaks your app in a browser you forgot to check by hand.
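As one way to set this up, Playwright lets you declare one "project" per browser engine so the same test suite runs against each of them in CI. Here's a minimal sketch of such a config; the device names come from Playwright's built-in device registry, and your project may target different ones:

```javascript
// playwright.config.js — a minimal sketch; assumes @playwright/test is installed.
const { defineConfig, devices } = require("@playwright/test");

module.exports = defineConfig({
  // Each project runs the same suite against a different engine, covering
  // Chrome (Chromium), Safari (WebKit), and Firefox (Gecko).
  projects: [
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    // Emulated mobile coverage for the two dominant mobile browsers:
    { name: "mobile-chrome", use: { ...devices["Pixel 5"] } },
    { name: "mobile-safari", use: { ...devices["iPhone 13"] } },
  ],
});
```

With this in place, `npx playwright test` runs every test once per project, so a single CI job covers all five targets.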
Which browsers should you test?
When you don’t have much time or resources to allocate towards testing, it’s crucial to understand the scope of the cross-browser tests that you need to perform. There are four primary factors to consider when making that decision. Let’s explore each of them and their significance.
Current browser market share
Browser market share analysis enables you to understand the volume of users on any specific browser. When you’re deciding how to maximize your tests’ efficiency, it makes sense in virtually all cases to prioritize the browsers with the highest usage.
Among desktop usage, Chrome dominates with 66% of total users. Edge is the second most popular desktop browser with 14% of total users. Safari and Firefox together account for roughly another 15% of total users. All other browsers combined have much less usage.
Mobile browsers have a clearer usage distribution. In almost every market, Chrome and Safari capture 90%+ of all users. Worldwide, Chrome has about a 65% share in total mobile users in comparison to Safari’s 25%. For North America as a whole, Safari’s share of total mobile usage is 50% while Chrome has 44%.
From these numbers, we can see that for desktop devices, the four key browsers to support are Chrome, Edge, Safari, and Firefox. Among those, you should focus most of your time on Chrome, because it’s over twice as popular as the other browsers combined. Meanwhile, for mobile browsers, focusing on Safari and Chrome gives you the most coverage.
Number of users
For small apps, you can predict that most visitors will use one of two browsers: Chrome on desktop and mobile, or Safari on mobile. By focusing your testing resources on those two browsers, you’ll cover around 75% of desktop and almost all mobile users. If you’re not planning to support mobile users, you can focus on Chrome.
The picture is slightly different if you’re an enterprise app with hundreds of thousands of monthly visitors. It makes sense to broaden your desktop tests to include Edge, Safari, and Firefox. Ensuring that most customers have the experience you intended is well worth your team’s time.
Regional variation
Browser usage changes by region. This variation means that knowing your users’ location is as important as knowing their choice of browser.
Are most of your visitors in the US? Adding Safari and Edge to your desktop tests is a good idea: Chrome, Safari, and Edge represent over 90% of users. Alternatively, if they are mostly in India, almost 90% of India’s desktop market share is Chrome, so focusing on that one browser seems reasonable.
You can look up desktop usage figures for whichever regions matter to your users. For instance:
- 🇦🇺 Australia: Chrome + Safari + Edge = 95%
- 🇨🇦 Canada: Chrome + Safari + Edge = 94%
- 🇮🇳 India: Chrome = 87%
- 🇯🇵 Japan: Chrome + Edge + Safari = 96%
- 🇳🇱 Netherlands: Chrome + Safari + Edge = 94%
- 🇸🇬 Singapore: Chrome + Safari + Edge = 94%
- 🇰🇷 South Korea: Chrome + Edge = 93%
- 🇺🇸 US: Chrome + Edge + Safari = 94%
- 🇬🇧 UK: Chrome + Safari + Edge = 94%
The size of your team
The resources at your disposal determine how much time you can dedicate to browser testing. Large teams may have enough people to test in all browsers, while smaller teams must balance which subset of browsers gives the highest test coverage for their limited time.
Summary
Ultimately, your decisions break down to something like this:
| Screen type | Chrome | Safari | Edge | Firefox |
|---|---|---|---|---|
| Desktop users | ✅ | | | |
| Mobile users | ✅ | ✅ | | |

*Minimum browsers to test, based on platform*
| How many users | Chrome | Safari | Edge | Firefox |
|---|---|---|---|---|
| 1-15K users per month | ✅ | | | |
| 15K-250K users per month | ✅ | ✅ | ✅ | |
| 250K+ users per month | ✅ | ✅ | ✅ | ✅ |

*Minimum browsers to test, based on monthly visitor numbers*
| Location of users | Chrome | Safari | Edge | Firefox |
|---|---|---|---|---|
| North America | ✅ | ✅ | ✅ | |
| South America | ✅ | | | |
| Europe | ✅ | ✅ | ✅ | |
| Africa | ✅ | ✅ | | |
| Oceania | ✅ | ✅ | ✅ | |
| Asia | ✅ | ✅ | ✅ | ✅ |

*Minimum browsers to test, based on visitor location*
Tools for cross-browser tests
To understand browser compatibility differences, sites like Can I Use are invaluable. Can I Use provides a virtually complete overview of how different browsers support different JS, CSS, and HTML features and properties.
However, you can also run browser compatibility checks without relying on external documentation, by automating them as a CI job using cross-browser testing tools like Chromatic!
Chromatic tests how your app renders in different browsers on each commit and runs all tests in parallel, so you don’t have to finish testing in one browser before moving onto the next. You can integrate Chromatic with Storybook for story-based visual testing, or integrate with Playwright and Cypress to run visual tests within your end-to-end test suite.
Chromatic provides a free tier with 5,000 snapshots per month. Try it out today. Alternatively, learn more about Chromatic’s UI Test workflows or discover more about Chromatic’s visual testing.
FAQs
What’s the difference between cross-browser testing and parallel testing?
Rather than being a specific type of test, parallel testing refers to how tests are performed by testing tools. When you test in parallel, you run your tests simultaneously, rather than waiting for one test to complete before proceeding to the next. Naturally, this speeds up your testing workflow. Some testing tools make parallel testing more expensive. However, Chromatic runs all tests in parallel at no extra cost.
So, cross-browser testing can be performed in parallel – for example, testing Chrome and Safari at the same time – but it isn’t necessarily parallel by default.
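The sequential-versus-parallel difference can be sketched in a few lines of JavaScript. The browser names and the 50 ms "test" below are stand-ins for real per-browser test jobs:

```javascript
// Simulates one test job for a given browser (50 ms of "work").
const testInBrowser = (browser) =>
  new Promise((resolve) => setTimeout(() => resolve(`${browser}: passed`), 50));

// Sequential: each browser waits for the previous one (~50 ms per browser).
async function runSequentially(browsers) {
  const results = [];
  for (const browser of browsers) {
    results.push(await testInBrowser(browser));
  }
  return results;
}

// Parallel: all browsers start at once (~50 ms total, regardless of count).
async function runInParallel(browsers) {
  return Promise.all(browsers.map(testInBrowser));
}

runInParallel(["chrome", "safari", "firefox"]).then(console.log);
```

Both functions return the same results; the parallel version just finishes in the time of the slowest single job instead of the sum of all of them.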