Question:
How do I test my emails?
If you want to ask me a question and have me answer it here, please go to the a11y.email questions Google Form. This is a no-judgment way to get your questions answered!
Answer:
The answer is… it kinda depends.
When I’m testing an email from start to finish (like for an accessibility audit), I will review the file in a browser and also dig into the code for best practices. I have a specific checklist that I use to make sure I get everything. If there are any truly unique/different coding solutions, I will take those and test them in different email clients with assistive technologies, but most of the emails I audit are pretty similar. I will usually review an email a few different times to make sure that I’m not missing anything. It’s not a super fast process and not something I’m going to do in one day.
Generally, I end up testing specific things based on client requests, or if I hear something weird in our industry that I want to check into, or if I just get curious and wonder what works or doesn’t.
When I’m testing, I am usually testing two main things: keyboard-based navigation and screen readers. Some screen readers change how the keyboard functions or how the tap interface of mobile devices works, and there’s a lot of variation in how all of these behave.
We know that email clients process the code they receive. Sometimes that’s for security reasons, but a lot of the time it’s just a lack of email standards. We live with that reality every time we have to fix a rendering issue, so I’m not going to go too deep into that. We know that email clients also process our accessibility-specific coding – with varying levels of support. I’ve found that whatever processing the email service provider does + the processing the email client does + the way the different screen readers interpret that result = some really weird outputs.
Because of this I’ve spent some time (and money) investing in some different ways to test my emails.
My testing devices and email clients:
On my Mac:
- Apple Mail (VoiceOver)
- Outlook 2016 for Mac (VoiceOver)
- Gmail.com (VoiceOver)
- Yahoo.com (VoiceOver)
On my PC:
- Outlook 2024 (NVDA, Narrator, JAWS)
- Outlook 365 (NVDA, Narrator, JAWS)
- Gmail.com (NVDA, Narrator, JAWS)
- Yahoo.com (NVDA, Narrator, JAWS)
On my iPhone:
- iOS Mail (VoiceOver)
- Outlook App (VoiceOver)
- Gmail App (VoiceOver)
- Yahoo App (VoiceOver)
On my Android phone:
- Outlook App (TalkBack)
- Gmail App (TalkBack)
- Yahoo App (TalkBack)
I use PutsMail to send to my various testing email addresses, and when I review the emails, I have these screen reader cheat sheets up on another screen: https://dequeuniversity.com/screenreaders/
It’s important to note that there are two distinct types of accessibility testing I do:
- Testing actual emails for clients, where I review the whole email top to bottom and make sure that the email will meet all needed WCAG criteria to serve as many different recipients as possible.
- Testing individual components to see how they work in specific situations. This includes my tests about alt text and aria-label, VML code components, different kinds of CTA options, etc.
In this post, I’m largely talking about testing for specific components because that is most of what I test and post about on this blog. An accessibility audit takes information gleaned from these component tests to best direct the client towards best practices, but it focuses on much more than specific components.
When I’m testing components, I always make sure I have a control – i.e. the basic version of whatever I’m testing. So when I’m testing Hawaiian diacritics, I make sure I have one version that’s just text without diacritics, one with the diacritics converted to character entities, and then whatever I’m actually testing for. If I’m testing CTA link solutions, I’ll make sure I have a plain text link in there as well. I also include some sort of content break, because I’ve found that if I test things stacked on top of each other, issues from one solution can “bleed” into the other. (This mostly happened with VML – sometimes the readout would read multiple different things in a weird way – so I just make sure to include some other content, usually just a paragraph with a single sentence in it.)
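As a rough sketch of that structure (the table markup, sample text, and character entities here are my own illustration, not the actual test file), a component test email stacks the control, the entity version, and the test case, with a separator between each:

```html
<table role="presentation" width="100%" cellpadding="0" cellspacing="0" border="0">
  <!-- Control: plain text, no diacritics -->
  <tr><td>Hawaii</td></tr>
  <!-- Separator content so one test's readout doesn't bleed into the next -->
  <tr><td><p>This is a separator sentence between test cases.</p></td></tr>
  <!-- Diacritics as character entities (&#699; is the okina, U+02BB) -->
  <tr><td>Hawai&#699;i</td></tr>
  <tr><td><p>This is another separator sentence.</p></td></tr>
  <!-- Test case: whatever solution is actually being evaluated -->
  <tr><td>Hawaiʻi</td></tr>
</table>
```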
It’s also important to think about what you’re testing for and what would make it easier to test with. For instance, when I was testing aria-label vs alt text, I didn’t have the “correct” text alternative for my kitten image placeholder in there at all – that would not have been helpful because I wouldn’t have known which was which. So when I’m testing specific attributes to see what is working or not, I will just label the thing. So I would have the aria-label value be “This is the aria label” and the alt value would be “This is the alt text” which makes it super clear what’s being read, if it’s being read multiple times, or if it’s not being read at all.
Note: These are not placeholders, these are specific testing strategies that help me know what I’m reading on screen readers or other devices. It makes it easier to match up what is working correctly and what is not.
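A minimal sketch of that labeling strategy, using the attribute values described above (the image source and dimensions are placeholders of my own):

```html
<!-- Each value names its own attribute, so whatever the screen reader
     announces tells you exactly which attribute it picked up -->
<img src="kitten.jpg" width="300" height="200"
     alt="This is the alt text"
     aria-label="This is the aria label">
```

If a screen reader announces “This is the aria label” twice, or nothing at all, the output itself is the data point.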
Usually, I try to keep a test to a single email, which makes it so much easier to manage. However, in some situations you have to send separate test emails – I found this to be true when I was testing different language declarations to try to figure out what was needed in each email client. (My results there were weird and not clear-cut, which is why you don’t have a recommendation from me on that yet – more to come.)
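Since there’s no recommendation yet, treat this purely as the shape of what was being compared – the places a language declaration can live in an email (the snippet below is my own illustration, not a tested best practice):

```html
<!-- Declaration on the root element -->
<html lang="en" xml:lang="en">
  <body>
    <!-- Some email clients strip or rewrite the <html> element,
         so a lang attribute on a wrapper inside the body is a
         commonly suggested fallback -->
    <div lang="en">
      ...email content...
    </div>
  </body>
</html>
```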
For each device that I’m testing on, I find it infinitely easier to record a video of my tests and then translate the video into data. Mobile devices make it easy to do a screen recording, so that part is simple. On my computers, I use Zoom to share screen and sound and record the session. This allows me to break up the serious time commitment of these tests – I make the HTML file, send the HTML file, record reading the email, review the recording and make notes, and then put my data together. If I had to take notes while reading, I would miss many details that the recordings pick up and allow me to go back to over and over again.
Recording the sessions also allows me to focus on the experience of reading the email with screen readers. Screen readers will change the readout depending on a few factors, like if you are in browse mode or focus mode, etc., so having a way to read clean through the email and then go to specific testing criteria is much better than going in and out to make notes and accidentally missing something. If you have no idea what I’m talking about with browse and focus modes, this article is incredibly insightful for better understanding the differences between the two.
I’m also careful to test the specific behaviors that are relevant to whatever I’m testing. When I was testing VML background images, I needed to know if users could still jump from heading to heading, link to link, etc. And I needed to make sure that users could get into the content just like any other table cell. (Pro tip: they could not, which is one of many reasons why I don’t recommend using VML in emails.) All of this gets recorded so I can easily go back and reference whatever happened. Sometimes when I’m reviewing the video recording, I notice that I did something stupid and have to go back and confirm that it wasn’t an email client/screen reader error. So we’re not just testing the readout, we’re also testing the functionality of whatever we’re testing. It’s not always easy to think of all the different things to test for any specific component. I usually keep a list of things I want to test and add notes whenever I think of another aspect that needs to be covered when I actually get around to testing it. Sometimes I’ll get halfway through a description of my best practices and then realize I missed something big and have to go back and retest.
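For reference, the widely shared Outlook VML background-image pattern being tested looks roughly like this (simplified, with placeholder image URL and dimensions of my own) – the content ends up wrapped in a v:textbox, which is where the navigation problems come in:

```html
<td background="bg.jpg" width="600" height="300" valign="top">
  <!--[if gte mso 9]>
  <v:rect xmlns:v="urn:schemas-microsoft-com:vml" fill="true" stroke="false"
          style="width:600px;height:300px;">
    <v:fill type="frame" src="bg.jpg" />
    <v:textbox inset="0,0,0,0">
  <![endif]-->
  <h1>Heading on top of the background image</h1>
  <p>Content a screen reader user should be able to reach.</p>
  <!--[if gte mso 9]>
    </v:textbox>
  </v:rect>
  <![endif]-->
</td>
```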
Each email client and screen reader has its own key commands to navigate the email. If you’re using pretty standard coding elements, you don’t have to worry too much about how the emails are navigated within the email client, so we don’t need to test that too heavily, but I think it’s worth learning how to navigate them regardless. It helps you better understand the experience of reading emails with a screen reader, and that is really helpful information.
As you might be picking up on, this is a pretty complicated process. On the one hand, if I have something I want to test on a specific device, I can quickly code it, send it, test it, and tally it, but generally speaking it’s pretty time consuming. I used to test things immediately, as soon as I thought of them, but lately I’ve been keeping a list of things to test, and when my schedule allows I code the email and deploy it with a specific subject line. Later on I’ll usually spend an evening recording the emails on various devices and screen readers – sometimes that takes a few evenings. Then I’ll sit down even later, review all the recordings, and start tallying the results on my base testing spreadsheet so I can start picking up on patterns and concerns. It takes weeks if not months to test as thoroughly as I do. I could do it faster, sure, but the slow pace helps me make sure that I’m really thinking of all the possibilities.
At this point in my testing experience, I’m usually pretty sure how things will work when I test them, but that isn’t universal and I am often surprised by the output in my tests. I had gotten some feedback on VML coding issues early in my accessible email journey, so I was pretty sure where that test was going to go, but I was incredibly surprised to discover that alt text was not read on linked images in Apple Mail.
There are other assistive technologies to test on, but I’ve found that screen readers tend to be a fairly good baseline for how assistive technologies work with our emails. I’m hopeful that at some point I can better understand how to test on other ATs, as I think it’s important work, but I think it’s also important to realize that some assistive technologies are incredibly hard to work with unless you use them day in and day out. Screen readers fall into that category as well. There’s a reason why I don’t post my videos of tests – it’s because it sounds SUPER clumsy when I’m working through it. I’m better than I was, but… definitely not a super user.
It’s taken me a few years to get my combination of devices and screen readers together, and another year or two to be able to work with them with any kind of confidence. I highly recommend getting started with what you have and experiencing those tools – but I will give the caveat that a LOT of what you first encounter as a screen reader user is either totally normal and expected (every screen reader will read out an image slightly differently or in a different order – that’s fine!) or a result of not quite knowing how to use the tool yet.
That’s okay!! Keep practicing!