Website Accessibility Evaluation Methodologies: Conference Report
The panel began with an introduction and background by Nirmita Narasimhan on the provisions of the UNCRPD relating to digital access, the obligations of States Parties, and the need for clearly defined and credible evaluation methodologies to support effective policy formulation and implementation.
Shadi Abou Zahra gave a brief overview of the WCAG 2.0 guidelines and discussed some important points to bear in mind when evaluating websites at large scale: the selection of tools, the limitations of automated tools, the importance of page selection for manual testing, sampling techniques, qualitative versus quantitative analysis, different types of testing such as expert and user testing, evaluation goals, and scalability issues.
Neeta Verma discussed the guidelines for Indian websites brought out by the NIC in February 2010 and noted that only a small percentage of the checkpoints in those guidelines could be tested using automated tools, while the rest required expert and user testing. She presented one approach to evaluation adopted by the NIC: certify the content management system (CMS) rather than individual pages, since certifying every page would be extremely difficult for websites with thousands of pages, as is the case with several government websites. She stressed the need for positive thinking and user involvement, and for an organized community of trained accessibility experts in India to whom the government could outsource testing work.
Srinivasu distinguished between Yahoo’s approach to existing websites and its approach to upcoming ones. For existing websites, the approach was to carry out an evaluation, prepare a report and prioritize the issues to be addressed; for new websites, the aim should be to keep accessibility in the loop right from the development stage. In terms of evaluation, he said his method was to first run an automated tool quickly to check for errors and then, depending on the number and kinds of errors, decide whether or not to follow up with a manual test. If the automated pass threw up few or no errors, he would manually test a sample of pages; but if there were many errors, and many of them were very basic, such as missing alt attributes or missing headings, he might decide not to proceed with the manual test at all.
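As a rough illustration of this kind of automated pre-check, the Python sketch below counts two of the basic errors mentioned, images without alt attributes and pages with no headings, and uses the total to suggest whether manual review of sample pages is the sensible next step. It is not the tool Srinivasu described; the standard-library parser, the two checks and the threshold are assumptions chosen for illustration only.

    from html.parser import HTMLParser

    class BasicChecks(HTMLParser):
        """Counts two basic, machine-detectable issues: images without
        alt attributes and the absence of any heading elements."""
        def __init__(self):
            super().__init__()
            self.images_missing_alt = 0
            self.heading_count = 0

        def handle_starttag(self, tag, attrs):
            if tag == "img" and "alt" not in dict(attrs):
                self.images_missing_alt += 1
            elif tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
                self.heading_count += 1

    def triage(pages, error_threshold=20):
        """Quick automated pass over fetched page sources; the threshold
        that decides whether manual testing follows is purely illustrative."""
        errors = 0
        for html in pages:
            checker = BasicChecks()
            checker.feed(html)
            errors += checker.images_missing_alt
            if checker.heading_count == 0:
                errors += 1  # count "no headings at all" as one error per page
        if errors > error_threshold:
            return f"{errors} basic errors found: fix these before investing in manual tests"
        return f"{errors} basic errors found: proceed to manual testing of sample pages"

    print(triage(['<html><body><img src="logo.png"><p>No headings here.</p></body></html>']))

A real automated pass would of course cover far more WCAG 2.0 checkpoints, but the triage logic is the same: clear the obvious basics first and spend manual effort only where it adds value.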
Shadi also pointed out that a website could be free of errors except for a single one, but if that one error was an inaccessible pay button on a shopping site, the site would still have to be evaluated as inaccessible, since that button determines whether the site is usable at all. He noted that automated testing was critical for large-scale evaluations, but that it only supports quantitative analysis when aggregating results; for a qualitative analysis, one would still have to test manually with users and experts and pay special attention to the kinds of pages selected for that type of test.
While highlighting the importance of manual testing, Srinivasu pointed out that although an automated tool could tell you whether or not an alt attribute was present, it could not determine whether that attribute was appropriate. Asked about the common accessibility problems on the government websites he had been testing in large numbers over the preceding weeks, he said he found many very basic errors: no headings, no alt attributes, table-based layouts, missing keyboard functionality for drop-down menus, and dynamic websites that used JavaScript and Ajax without ARIA.
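To make that limitation concrete, the sketch below (an illustration of the general point, not a tool discussed on the panel) separates what automation can decide from what it cannot: images with no alt attribute are flagged outright, while alt text that is present can at best be marked as suspicious when it looks like a filename or a placeholder; whether the text actually describes the image is left to a human reviewer.

    import re
    from html.parser import HTMLParser

    class AltAttributeAudit(HTMLParser):
        """Separates what automation can decide (alt is missing) from what
        it cannot (whether the alt text is appropriate for the image)."""
        def __init__(self):
            super().__init__()
            self.missing = []       # definite failures: no alt attribute at all
            self.needs_review = []  # alt present; a human must judge its suitability

        def handle_starttag(self, tag, attrs):
            if tag != "img":
                return
            attrs = dict(attrs)
            src = attrs.get("src", "")
            if "alt" not in attrs:
                self.missing.append(src)
                return
            alt = attrs["alt"] or ""
            # Heuristics only: empty, placeholder or filename-like alt text is
            # suspicious, but even "clean" alt text may still describe the image poorly.
            suspicious = alt.lower() in {"", "image", "photo"} or bool(
                re.search(r"\.(png|jpe?g|gif)$", alt, re.I))
            self.needs_review.append((src, alt, "suspicious" if suspicious else "verify manually"))

    audit = AltAttributeAudit()
    audit.feed('<img src="chart.png" alt="chart.png">'
               '<img src="ceo.jpg" alt="Portrait of the CEO">'
               '<img src="pay.gif">')
    print("Missing alt:", audit.missing)
    print("Needs human review:", audit.needs_review)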
Glenda walked the audience through the methodology she used to evaluate a single client’s website: she used manual testing to establish a baseline accessibility survey of the Texas University website and then applied different tools for different purposes, for example desktop tools like Fire Eyes and page-by-page accessibility checkers. She also talked about the importance of testing authoring tools, producing enterprise accessibility reports, code validation, and testing with assistive technology alongside accessibility validators. Glenda concurred with the other speakers that accessibility evaluation and monitoring should run through every stage of a website’s development life cycle: building accessibility in at the design stage, testing and remediating during development so the site remains accessible, and monitoring after launch, because many websites start out accessible but lose that accessibility somewhere along the way.
Other issues discussed included the importance of users’ expertise levels and the choice of users in determining accessibility, the need for evaluation methodologies to report even minor changes so that progress can be monitored however small, the need for testers to think from the perspective of every person and every device, component and template testing as a good way to check the accessibility of new websites, and the importance of aggregation and report writing. Overall there was consensus amongst the speakers that any effective and credible evaluation methodology, especially for large-scale evaluation, would involve a mix of automated testing and manual testing with users and experts, and would have to be applied at every stage of a website’s development and maintenance.
See the event on the W3C website here.