One of my ideas is to check markup and style code for potential rendering bugs in various browsers. But creating all the rules needed for that is a huge task.
The approach I’ve chosen relies on humans to detect the errors, mainly because it’s likely cheaper, and an automated system that analyzes browser screenshots feels like a pipe dream. By generating permutations of different HTML tags and CSS styles, a tester is shown the HTML/CSS on the left side, the rendered result on the right side, and a simple question: does it look correct, yes or no? Some knowledge of web design is needed, of course. The answer is stored together with the permutation and browser information as a rule.
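To make the idea concrete, here is a minimal sketch of the test-generation and rule-storage loop. The tag and style pools, the snippet format, and the browser string are all hypothetical placeholders; a real system would draw from much larger pools and record the answer from an actual human tester.

```python
import itertools

# Hypothetical, tiny pools of tags and styles; the real pools would be far larger.
TAGS = ["div", "span", "p"]
STYLES = ["float: left", "display: inline-block", "position: absolute"]

def generate_test_cases():
    """Yield one HTML snippet per (tag, style) permutation."""
    for tag, style in itertools.product(TAGS, STYLES):
        yield f'<{tag} style="{style}">sample</{tag}>'

def store_rule(snippet, browser, looks_correct, rules):
    """Record the tester's yes/no answer with the permutation and browser info."""
    rules.append({
        "snippet": snippet,
        "browser": browser,
        "correct": looks_correct,
    })

rules = []
for snippet in generate_test_cases():
    # In the real system a human answers the yes/no question; here we fake a "yes".
    store_rule(snippet, "Firefox 3.5", True, rules)

print(len(rules))  # 3 tags x 3 styles = 9 rules
```

The point is that each stored rule ties a concrete markup permutation to a verdict for one specific browser, which is exactly the data the later steps would build on.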
Rules can be weighted, and various machine learning techniques could be used to “train” the system, but I won’t go into that today. Instead, I’m proposing two ways of amassing the huge number of inputs that’s required:
1) Crowdsourcing. Make the test site public and generate the needed tests based on the browser the visitor is using.
2) Outsourcing. Uses the same system, but in a controlled environment that guarantees all browsers are tested. Needs funding, though.
The biggest hurdle is getting up to speed with current browsers, followed by continuous work to keep the rules updated as new browsers and versions appear.
Another way would be to have trained professionals, or just an active online community, write the rules directly from memory. But I don’t think that’s practical, or even feasible, in reality.