Copyright 2008-2009, Paul Jackson, all rights reserved
I knew it. You knew it. We just didn’t want to admit it. There had to be software at the heart of the world’s financial woes and some way to blame a programmer.
Sure enough, the press has tracked down one of the programmers responsible for writing the software that helped financial companies with “securitisation” – turning regular mortgage notes into derivative securities that no one actually understands.
Michael Osinski, 55, retired from programming and now farming premium oysters off Long Island, was one of the programmers at a company that supplied this software to many financial firms, and he was involved in the new “feature” that added the subprime mortgage market to the process.
Osinski decided to go public after being called a “devil” and “facilitator” by people who found out what he used to do.
Now, if I’d written that software I don’t think I’d go public … there’d probably be a gap on my resume, even, but Osinski decided to talk to the press, and he makes a really valid point:
"Securitisation is a good thing when it allows firms correctly to price risk into their calculations," he said. "If people are re-paying their mortgages, then the process works fine. But if you put garbage in, you'll get junk out."
The good or bad of the software depends on how it’s used; the program itself is just a tool. Like eBay, which is a great auction site but has had prevalent fraud – is that the fault of the software or the users?
But do we, as developers, have any responsibility to think about how the software we’re writing could be misused and what the consequences might be?
We do have a responsibility to think about how it might be attacked by someone malicious, right? The security of a system is our responsibility and we’re supposed to analyze the possible attack surfaces and methods to ensure that the system can’t be misused by an attacker – but what about misuse by a legitimate user?
When dealing with security, I always tell my team to assume the client (whether the end-user or another development team) is either stupid or malicious. That means they will, at some point, send the most damaging input possible – either because they’re deliberately trying to break the system or because they don’t know what they’re doing – and so the system must be protected from that.
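That “stupid or malicious” rule boils down to validating every input at the boundary and failing loudly rather than guessing. A minimal sketch of the idea, in Python – the function name, limits, and payment scenario are my own hypothetical choices, not anything from Osinski’s software:

```python
from decimal import Decimal, InvalidOperation

def parse_payment(raw):
    """Treat every input as potentially malformed or malicious.

    Hypothetical boundary check: reject garbage before it reaches
    the rest of the system, instead of silently coercing it.
    """
    try:
        amount = Decimal(str(raw))
    except InvalidOperation:
        raise ValueError(f"not a number: {raw!r}")
    # Reject sub-cent precision rather than rounding silently.
    if amount != amount.quantize(Decimal("0.01")):
        raise ValueError(f"too many decimal places: {raw!r}")
    # Arbitrary illustrative business limit.
    if not Decimal("0") < amount <= Decimal("1000000"):
        raise ValueError(f"amount out of range: {raw!r}")
    return amount
```

Whether the caller is an attacker probing the system or an honest user with a typo, the effect is the same: bad data is stopped at the door instead of propagating – the defensive half of “garbage in, junk out.”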
So does that extend to analyzing the possible negative impact of a new “feature” or system outside of the software itself? Do we have a responsibility to ask: What will using this software as designed do to the user, company or world?
Or, even if we don’t have the responsibility, should we do it anyway because, sure-as-shootin’, somebody’ll say it’s our fault?