This is a bit of a rambling post covering some thoughts on data privacy, mobile phones and social networking.
A general and continued decrease in personal privacy seems inevitable in our age of data (NSA Files at The Guardian). We generate a lot of data, we rarely know how or where it is stored and we don’t understand how easy it is to make certain inferences based on aggregated forms of our data. Cory Doctorow has some points on why we should care about this topic.
Will we now see the introduction of active countermeasures in a data stream by way of protest or camouflage by regular folk?
Update – hat tip to Kyran for prism-break.org, which lists open-source alternatives to operating systems and communication clients/systems. I had a play earlier today with the Tor-powered Orweb on Android – it Just Worked and whatsmyip.org didn’t know where my device was coming from (running traceroute went from whatsmyip to the Tor entry node and [of course] no further). It seems that installing Tor on a raspberrypi or Tor on EC2 is pretty easy too (Tor runs faster when more people run Tor relays [which carry the internal encrypted traffic, so there’s none of the fear of running an exit node that sends traffic onto the unencrypted Internet]). Here are some Tor network statistic graphs.
I’ve long been unhappy with the fact that my email is known to be transmitted and stored in the clear (accepting that I turn on HTTPS-only in Gmail). I’d really like for it to be readable only by the recipient, not by anyone (sysadmin or Government agency) along the chain. Maybe someone can tell me if adding PGP into Gmail via the browser and an Android phone is an easy thing to do?
I’m curious to see how long it’ll be before we have a cypherpunk mobile OS, preconfigured with sensible defaults. CyanogenMod is an open build of Android (so you could double-check for Government backdoors [if you took the time]); there’s no good reason why a distro couldn’t be set up that uses Tor, HTTPSEverywhere (eff.org post on this combo, this Tor blog post comments on Tor vs PRISM) and Incognito Mode by default as a start for private web usage. Add a secure, open-source VoIP client (not Skype) and an IM tool and you’re most of the way there for better-than-normal-folk privacy.
Compared to an iOS device it’ll be a bit clunky (so maybe my mum won’t use it), but I’d like the option, even if I have to jump through a few hoops. You might also choose not to trust your handset provider; we’re just starting to see designs for build-it-yourself cellphones (albeit very basic non-data phones at present).
Maybe we’ll start to consider the dangers of entrusting our data to near-monopolies in the hope that they do no evil (and aren’t subject to US Government secret & uninvestigable disclosures to people who we personally may or may not trust, and who may or may not be decent, upright, solid, incorruptible citizens). Perhaps far-sighted governments in other countries will start to educate their citizens about the dangers of trusting US Data BigCorps (“Loose Lips Sink Ships”)?
So what about active countermeasures? For the social networking example above we’d look at communications traffic (‘friends’ are cheap to acquire but communication takes effort). What if we started to lie about who we talk to? What if my email client builds a commonly-communicated-with list and picks someone from outside of that list, then starts to send them reasonably sensible-looking emails automatically? Perhaps it contains a pre-agreed codeword, then their client responds at a sensible frequency with more made-up but intelligible text. Suddenly they appear to be someone I closely communicate with, but that’s a lie.
My email client knows this so I’m not bothered by it, but an eavesdropper has to process this text. It might not pass human inspection but it ought to tie up more resources, forcing more humans to get involved, driving up the cost and slowing down response times. Maybe our email clients then seed these emails with provocative keywords in innocuous phrases (“I’m going to get the bomb now! The bomb is of course the name for my football”) which tie up simple keyword scanners.
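A toy sketch of this decoy-traffic idea might look like the following – all the addresses, thresholds and the codeword are invented for illustration; this is not how any real mail client works:

```python
import random
from collections import Counter

def frequent_contacts(sent_log, top_n=3):
    """The addresses I genuinely email most often."""
    return {addr for addr, _ in Counter(sent_log).most_common(top_n)}

def pick_decoy(address_book, sent_log, rng=random):
    """Pick a decoy correspondent from outside the frequently-mailed set,
    so the fake traffic changes the apparent social graph."""
    regulars = frequent_contacts(sent_log)
    candidates = [a for a in address_book if a not in regulars]
    return rng.choice(candidates) if candidates else None

def decoy_message(codeword="football"):
    """Innocuous text seeded with a provocative keyword; the pre-agreed
    codeword tells the recipient's client this is chaff, not signal."""
    return ("I'm going to get the bomb now! "
            "The bomb is of course the name for my %s." % codeword)

book = ["alice@x", "bob@x", "carol@x", "dave@x", "eve@x"]
log = ["alice@x"] * 5 + ["bob@x"] * 4 + ["carol@x"] * 3
decoy = pick_decoy(book, log, rng=random.Random(42))
print(decoy, "->", decoy_message())
```

The point of picking from *outside* the frequently-mailed set is that the fabricated edge looks like a genuine close contact to a traffic analyst, while both clients know it is noise.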
The above will be a little like the arms race over fake website sign-ups for spam being defeated by CAPTCHAs (and the CAPTCHAs in turn being defeated), perhaps driving improvements in NLP technologies. I seem to recall that Hari Seldon in Asimov’s Foundation novels used auto-generated plausible speech to mask private in-person conversations from external eavesdropping (I can’t find a reference – am I making this up?). This stuff doesn’t feel like science fiction any more.
Maybe with FourSquare people will practice fake check-ins. Maybe during a protest you comfortably sit at home and take part in remote virtual check-ins to spots that’ll upset the police (“quick! join the mass check-in in the underground coffee shop! the police will have to spend resources visiting it to see if we’re actually there!”). Maybe you’ll physically be in the protest but will send spoofed GPS co-ords with your check-ins pretending to be elsewhere.
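Spoofing your reported position could be as simple as jittering the real coordinates by a few kilometres. A minimal sketch – the distance range and the flat-earth degrees-per-km conversion are illustrative approximations, not what any real check-in app does:

```python
import math
import random

def spoofed_checkin(lat, lon, min_km=2.0, max_km=10.0, rng=random):
    """Return a fake (lat, lon) a few kilometres from the real position,
    placing a check-in plausibly-but-wrongly elsewhere."""
    distance_km = rng.uniform(min_km, max_km)
    bearing = rng.uniform(0, 2 * math.pi)
    # ~111 km per degree of latitude; a degree of longitude shrinks
    # by cos(latitude) as you move away from the equator
    dlat = (distance_km * math.cos(bearing)) / 111.0
    dlon = (distance_km * math.sin(bearing)) / (111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

real = (51.5074, -0.1278)  # central London (hypothetical protester)
fake = spoofed_checkin(*real, rng=random.Random(1))
print(fake)
```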
Maybe people will start to record and replay another person’s check-ins, a form of ‘identity theft’ where they copy the behaviour of another to mask their own movements?
Maybe we can extend this idea to photo sharing. Some level of face detection and recognition already exists and it is pretty good, especially if you bound the face recognition problem to a known social group. What if we use a graphical smart-paste to blend a person-of-interest’s face into some of our group photos? Maybe Julian Assange appears in background shots around London or a member of Barack Obama’s Government in photos from Iranian photobloggers?
The photos could be small and perhaps reasonably well disguised so they’re not obvious to humans, but obvious enough to good face detection & recognition algorithms. Again this ties up resources (and computer vision algorithms are terribly CPU-expensive). It would no doubt upset the intelligence services if it impacted their automated analysis, maybe this becomes a form of citizen protest?
Hidden Mickeys appear in lots of places (did you spot the one in Tron?), yet we don’t notice them. I’m pretty sure a smart paste could hide a small or distorted or rotated or blended image of a face in some photos, without too much degradation.
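A minimal illustration of such a smart paste, blending a small patch into an image at low opacity – here an image is just a nested list of grayscale values; real photos, colour channels and face detectors would obviously be far more involved:

```python
def alpha_paste(image, patch, top, left, alpha=0.35):
    """Blend a small grayscale patch into a copy of image at (top, left).
    A low alpha keeps the paste subtle to a human eye while leaving
    structure that a detection algorithm could still pick up on."""
    out = [row[:] for row in image]
    for i, patch_row in enumerate(patch):
        for j, p in enumerate(patch_row):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = round((1 - alpha) * out[y][x] + alpha * p)
    return out

# 4x4 'photo' of mid-grey pixels; 2x2 bright 'face' patch
photo = [[128] * 4 for _ in range(4)]
face = [[255, 255], [255, 255]]
blended = alpha_paste(photo, face, top=1, left=1)
print(blended)  # slightly brighter where the patch was blended in
```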
Figuring out who is doing what, given the absence of information, is another interesting area. With SocialTies (built by Emily and me) I could track who was at a conference via their Lanyrd sign-up, and also track people via nearby FourSquare check-ins and geo-tagged tweets (there are plenty of geo-tagged tweets in London…). Inferring where you were was quite possible, even if you only tweeted (and had geo-location enabled). Double-checking your social group and seeing that friends are marked as attending the event you are near only strengthens the assertion that you’re also present.
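The kind of inference described above is easy to caricature: combine a geo-tagged tweet near a venue with a count of friends marked as attending. The weights, radius and coordinates below are invented for illustration – SocialTies did not use these exact numbers:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def presence_score(tweet_pos, venue_pos, friends_attending, radius_km=1.0):
    """Crude 0-1 score that someone is at a venue: a geo-tagged tweet
    inside the radius is strong evidence; each attending friend adds a
    little more, capped so friends alone never confirm presence."""
    score = 0.0
    if haversine_km(*tweet_pos, *venue_pos) <= radius_km:
        score += 0.6
    score += min(0.4, 0.1 * friends_attending)
    return score

venue = (51.5208, -0.0972)  # hypothetical conference venue
tweet = (51.5203, -0.0975)  # geo-tagged tweet a few hundred metres away
print(presence_score(tweet, venue, friends_attending=3))
```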
Facebook typically knows the address book of your friends, so even if you haven’t joined the service it’ll still have your email. If 5 members of Facebook have your email address then that’s 5 directed edges in a social network graph pointing at a not-yet-active profile with your name on it. You might never join Facebook but they still have your email, name and some of your social connections. You can’t make those edges disappear. You just leaked your social connectivity without ever going near the service.
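Those directed edges are trivial to accumulate from uploaded address books; a sketch with invented names and addresses (this is not Facebook's actual data model, just the graph idea):

```python
from collections import defaultdict

def shadow_edges(member_address_books):
    """Map each email address to the set of members whose address book
    contains it - i.e. the inbound 'knows' edges. Addresses belonging
    to non-members accumulate edges all the same."""
    inbound = defaultdict(set)
    for member, book in member_address_books.items():
        for contact in book:
            inbound[contact].add(member)
    return inbound

# three members upload their address books; you never joined
books = {
    "ann": ["you@example.com", "bob"],
    "bob": ["you@example.com", "ann"],
    "cat": ["you@example.com"],
}
edges = shadow_edges(books)
print(edges["you@example.com"])  # members pointing at a non-member
```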
Anyhow, enough with the prognostications. Privacy is dead. C’est la vie. As long as we trust the good guys to only be good, nothing bad can happen.
Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight and Mor Consulting, founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.