The dangers of backwards thinking on software security

I noticed the following story today:

Offensive security research community helping bad guys

Starting with this quote from Adobe Security Chief Brad Arkin:
"We are involved in a cat-and-mouse game on [the software] engineering side. Every time we come up with something new and build new defenses, it creates incentive for the bad guy to look beyond that."
My immediate thought was, 'What about the customers of Adobe products?' For that matter, what about everyone in the security community who has to use Adobe Reader or Flash? It would seem that, yes, research has led to fewer vulnerabilities in Adobe products, which drives down the cost for customers to patch their systems while requiring more complex attacks to exploit the vulnerabilities that remain.

Shouldn't users have a say in the security of tools that are, in some cases, mandatory for their work? Then there is the following statement:
"We may fix one vulnerability that has a security characteristic but when we change that code, we are creating a path to other vulnerabilities that may cause bigger problems in the future"

I am pretty sure it is the responsibility of the software developer to actually test their code and validate that a fix does not create new code paths to vulnerabilities. There are numerous accounts of lapses in quality assurance leading to security vulnerabilities being exploited in the wild, but how is the failure of software security the fault of anyone but the developers?

It is a stretch to blame the open discourse of security information, which in fact makes software more secure, rather than the silent exploitation of previously unknown vulnerabilities by criminals. The quicker the information reaches the vendor, the quicker a remediation can be created. Adobe needs to strive to engage the security community the same way Microsoft has placed a focus on providing secure products to its customers (see the Trustworthy Computing SDL).