Methodologies for Accessibility Evaluation
Recently I was on a panel discussion organized by The Global Initiative for Inclusive ICTs (G3ICT), the World Wide Web Consortium and The Centre for Internet and Society at the 20th World Wide Web Conference, where we discussed various methodologies for accessibility evaluation. It was a pleasure to share the panel with Shadi Abou-Zahra from the W3C, Glenda Sims of Deque Systems Inc and Dr. Neeta Varma of the National Informatics Centre (NIC); the panel was moderated by Nirmita Narasimhan of the Centre for Internet and Society. Here is the transcript of my thoughts.
Good evening once again, it’s a great pleasure to be part of this panel!
I will be sharing the approach I take both at Yahoo! and for the Government projects I take up. The ground rule is that accessibility must be considered right from the beginning. There are two scenarios: making existing websites accessible, and ensuring accessibility for all new websites. At Yahoo!, for existing websites, we identify the issues across the site, document them, and then schedule fixes in phases (we call them sprints). For new websites, we partner with the product team right from the initial stage: we evaluate the mockups, then support the engineering and QA stages to ensure complete accessibility. Of late, we have created an accessibility automation tool to simplify the evaluation process.
For Government websites, what I prefer to do is first run an automated tool such as the WAVE toolbar. If it reports many issues, I prepare a “Global findings report” that enables their teams to address all the major issues; once those are addressed, I take up a detailed evaluation including manual testing. If the automated tool shows fewer errors, I do a quick manual test: if the page is still in bad shape, I again submit a global findings report, and if the page looks good, I do a complete manual review.
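The triage described above can be sketched as a small decision function. This is only an illustration of the workflow, not a real tool: the error threshold and function names are assumptions I have chosen for the example.

```python
# Hypothetical sketch of the triage workflow for government websites:
# run an automated scan first, and let the error count plus a quick
# manual check decide the next step. The threshold of 20 errors is an
# arbitrary illustrative value, not from any real tool.
def next_evaluation_step(automated_error_count, quick_manual_pass, threshold=20):
    """Return the recommended next step in the evaluation workflow."""
    if automated_error_count >= threshold:
        # Too many issues: report the major, site-wide problems first.
        return "global findings report"
    if not quick_manual_pass:
        # Few automated errors, but the page still fails a quick manual test.
        return "global findings report"
    # Page is in reasonable shape: worth a full manual review.
    return "complete manual review"

print(next_evaluation_step(50, quick_manual_pass=False))
print(next_evaluation_step(5, quick_manual_pass=True))
```

The point of the split is pragmatic: a detailed manual review of a page riddled with obvious, automatable errors wastes effort that a global findings report would catch more cheaply.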
Also, what I have observed is that it is important for the Design, SEO, Content and Accessibility teams to partner and work together, so as to provide accurate support to the engineering team.
Lastly, I would suggest that you do not rely solely on automated tools. They can mislead you: for example, they can check whether text alternatives are provided, but not whether those alternatives are accurate.
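To make that limitation concrete, here is a minimal sketch (using Python's standard `html.parser`) of the kind of check an automated tool performs. It can detect that an `img` element lacks an `alt` attribute, but it has no way to judge whether an alt text like "photo.jpg" actually describes the image; that still requires human review. The class name and sample markup are my own illustrative assumptions.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Toy automated check: flags <img> tags with no alt attribute.

    Note what it cannot do: judge whether an existing alt text is an
    accurate description of the image.
    """
    def __init__(self):
        super().__init__()
        self.missing = []   # images with no alt attribute at all
        self.present = []   # images with an alt attribute (accuracy unknown)

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            src = attrs.get("src", "(no src)")
            if "alt" not in attrs:
                self.missing.append(src)
            else:
                self.present.append((src, attrs["alt"]))

checker = AltTextChecker()
# "photo.jpg" passes the automated check but is a useless description.
checker.feed('<img src="logo.png" alt="photo.jpg"><img src="chart.png">')
print("Missing alt:", checker.missing)
print("Has alt (accuracy unchecked):", checker.present)
```

The second image is correctly flagged as missing its alternative, but the first passes even though its alt text tells a screen-reader user nothing, which is exactly the gap manual review has to cover.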
Posted in Web Accessibility