(Editor’s Note: The following is a guest blog post from Tomás Touceda, Privacy Officer at SpiderOak)
I’ve had this subject in my head for quite some time now, especially these past few months as privacy products appear left and right.
When is an approach lean and when does it stop being that way?
The lean approach to entrepreneurship basically means doing the minimum amount of work needed to see whether an idea will work. The goal is to not waste time on things that won’t give any returns.
Test frequently, and quit bad paths as early as possible. I’m a newbie in this area, however, so one of the questions I have is: when does an approach stop being lean?
This may have really easy answers for a lot of different products, but things are quite different in the privacy field.
Does going at something the lean way mean not protecting things as much as you need to? Building secure software takes quite a bit of time. It’s not an ‘Oh, by the way, would you throw in some privacy?’ kind of thing.
So, what do you do? Do you prototype something with a nice UI, add a padlock icon somewhere, and say ‘it’s completely secure and private’? Then, once you see whether you get users, add the real security?
In some situations you could argue that’s an okay approach. But what kind of users did you have during that insecure lean phase? Activists? Journalists in complex political situations? You might have gotten some people killed.
What, then? Do you test with users who aren’t really the target of your system? That might be a reasonable ethical compromise, because you don’t want to put people in dangerous situations just for using your new privacy product. But if you do that, how on earth are you going to polish your system to accommodate your real users?
What’s the right balance?
When is something privacy-preserving?
Then we move to this problem: when can something be called ‘privacy-preserving’?
A lot… a lot of new systems call themselves privacy-preserving. But it looks like it’s a tag that is added quite lightly.
What are the minimum requirements for a system to claim that it protects your privacy? Nobody has defined this, and in a way that makes sense, because it depends on each system’s threat model — but most systems don’t even have a threat model!
Privacy can be compared to a characteristic such as stability. Any system can claim ‘We provide stability’, and that claim holds until the system goes down and it turns out the company behind it wasn’t telling the truth — the system wasn’t that stable after all.
Privacy works the same way: anybody can say ‘We protect your privacy’, and that claim holds until a user discovers their information has somehow ended up outside the system when it shouldn’t have. So how do we make sure something is truly private?
Stability can be regained by improving the system. Privacy, once breached, can’t be regained that easily or at all.
Growth hacking for privacy products
How does this story sound:
- Company A launches privacy-preserving product but it turns out its security is questionable at best. The user interface looks really nice though.
- Security experts call it out for these problems.
- Company A says ‘If you can break it under these constraints, then we’ll accept that our system is not secure’.
- Security experts explain why those constraints aren’t reasonable. Nobody wins the challenge, even though the system uses known-insecure techniques.
- Journalists pick up the challenge, call the system NSA-proof, and make a big fuss about it.
- Company A gets millions of users.
Sounds neat from the startup perspective, right? It actually happened. The product’s security is still questionable at best, but nobody cares.
Growth hacking everybody!
And this “technique” has been repeated with incredible success. Everybody wants to make millions in revenue, but at what cost?
We need to figure all this out
I’ve written more questions than answers here, and that’s not by chance: we really do have more question marks than big, definitive answers. We need that to change, though, and we need to figure this whole privacy-startup thing out before it comes back to bite us.
Real security is boring, I won’t lie, so it’s no surprise it hasn’t gone viral. How can we change that? How can we make strong security and privacy the baseline requirement for everything?
Privacy by default, real privacy. That’s the only way. Now we need to figure out how to get there in a way that is financially viable.