About


This is Ian Ozsvald's blog. I'm an entrepreneurial geek, a Data Science/ML/NLP/AI consultant, founder of the Annotate.io social media mining API, author of O'Reilly's High Performance Python book, co-organiser of PyDataLondon, co-founder of the SocialTies App, author of the A.I.Cookbook, author of The Screencasting Handbook, a Pythonista, co-founder of ShowMeDo and FivePoundApps and also a Londoner. Here's a little more about me.


23 January 2013 - 0:09 Layers of “data science”?

The field of “data science” covers a lot of areas. It feels like there’s a continuum of layers to consider, and lumping them all together as “data science” is perhaps less helpful than it could be. Maybe by sharing my list you can help me with further insight. In terms of unlocking value in the underlying data, I see the layers, from least to most valuable, as:

  • Storing data
  • Making it searchable/accessible
  • Augmenting it to fashion new data and insights
  • Understanding what drives the trends in the data
  • Predicting the future

Storing a “large” amount of data has always been feasible (the data warehouses of the 90s don’t sound all that different from our current Big Data processing needs). If you’re dealing with daily Terabyte dumps from telecoms, astronomy arrays or the LHC then storing it all might not be economical, but it feels that more companies can easily store more data this decade than in previous decades.

Making the data instantly accessible is harder. This used to be the domain of commercial software; now we have the likes of PostgreSQL, MongoDB and Solr, which scale rather well (though there will always be room for higher-spec solutions that reliably handle things like fsync down to the platter level regardless of power supply, or that model less usual data structures like graphs efficiently). Since CPUs are cheap, building a cluster of commodity high-spec machines is no longer a heavy task.
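
For a feel of how far the commodity tools already go, here is a minimal sketch of full-text search in PostgreSQL driven from Python; the connection details and the “tweets”/“body” names are hypothetical, and a GIN index over the tsvector would be a one-off setup step in practice.

```python
# Minimal sketch: full-text search over tweets stored in PostgreSQL.
# The connection string and the "tweets"/"body" names are hypothetical.
# A GIN index over to_tsvector('english', body) keeps this fast at scale.
import psycopg2

conn = psycopg2.connect("dbname=demo user=demo")
cur = conn.cursor()

query = """
    SELECT body
    FROM tweets
    WHERE to_tsvector('english', body) @@ plainto_tsquery('english', %s)
    ORDER BY ts_rank(to_tsvector('english', body),
                     plainto_tsquery('english', %s)) DESC
    LIMIT 10;
"""
cur.execute(query, ("apple", "apple"))
for (body,) in cur.fetchall():
    print(body)
```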

Augmenting our data can make it more valuable. For example, applying sentiment analysis to a public tweet stream and adding private demographic information gives YouGov’s SoMA (disclosure – I’m working on this via AdaptiveLab) an edge in the brand-analysis game. Once you start joining datasets you have to deal with the thorny problems: how do we handle missing data? If the tools only work with some languages (e.g. English), how do we deal with other languages (e.g. the variants of Spanish) to offer a similarly good product? How do we accurately disambiguate a mention of “apple” between a fruit and a company?
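
To make the augmentation idea concrete, here is a deliberately naive keyword-based sentiment tagger (nothing like a production system such as SoMA); the word lists and tweets are made up purely for illustration.

```python
# A deliberately naive sentiment tagger, just to illustrate "augmenting"
# raw tweets with a derived attribute. Real systems use far richer models;
# the word lists and example tweets here are illustrative only.
POSITIVE = {"love", "loving", "great", "nice", "thanks", ":)"}
NEGATIVE = {"hate", "hating", "awful", "stupid", ":("}

def sentiment(tweet_text):
    """Return 'pos', 'neg' or 'neutral' from a crude word-count score."""
    words = tweet_text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "pos"
    if score < 0:
        return "neg"
    return "neutral"

tweets = [
    "loving the new coffee place in London :)",
    "hating the rush hour commute",
]
# The augmentation step: attach the derived label to each raw record.
augmented = [{"text": t, "sentiment": sentiment(t)} for t in tweets]
for record in augmented:
    print(record)
```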

Modeling textual data is somewhat mainstream (witness the availability of Sentiment, NER and categorisation tools). Doing the same for photographs (e.g. Instagram photos) is in the quite-hard domain (have you ever seen a food-identifier classifier for photos that actually works?). We rarely see any augmentations for video. For audio we have song identification and speech recognition, but I don’t recall coming across dog-bark/aeroplane/giggling classifiers (sounds you might find in YouTube videos). Graph network analysis tools are at an interesting stage: we’re only just witnessing them scale to large amounts of data on commodity PCs, and tying this data to social networks or geographic networks still feels like the domain of commercial tools.
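
For a flavour of the “somewhat mainstream” text tools, here is a sketch of off-the-shelf NER using NLTK (assuming the standard NLTK data packages have been downloaded); the labels it assigns to a capitalised “Apple” will vary with context, which is exactly the disambiguation problem mentioned above.

```python
# Sketch of off-the-shelf Named Entity Recognition with NLTK.
# Assumes the standard NLTK data packages (tokenizer, POS tagger and
# NE chunker models) have been downloaded via nltk.download().
import nltk

sentence = "Apple opened a new store in London last week."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(tagged)

# Named entities come back as labelled subtrees (ORGANIZATION, GPE, ...).
for subtree in tree.subtrees():
    if subtree.label() != "S":
        entity = " ".join(token for token, tag in subtree.leaves())
        print(subtree.label(), entity)
```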

Understanding the trends and communicating them – combining different views on the data to understand what’s really occurring – is hard; it still seems to involve a fair bit of art and experience. Visualisations seem to take us a long way towards intuitively understanding what’s happening. I’ve started to play with a few for tweets, social graphs and email (unpublished as yet). Visualising many dimensions in 2D or 3D plots is rather tricky, doubly so when your data set contains millions of points or more.
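
One common (if lossy) way to squeeze many dimensions into a 2D plot is a projection such as PCA; a minimal sketch with scikit-learn and matplotlib, using random stand-in data:

```python
# Sketch: project high-dimensional points into 2D for a scatter plot.
# The data here is random stand-in data; real features would replace it.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(1000, 20)  # 1,000 points with 20 features each

projected = PCA(n_components=2).fit_transform(X)

plt.scatter(projected[:, 0], projected[:, 1], s=5, alpha=0.3)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("20-dimensional data squeezed into 2D")
plt.show()
```

With millions of points, plotting a random sample or a hexbin density rather than every point keeps the figure readable.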

Predicting the future – in ecommerce this would be the pinnacle: understanding the underlying trends well enough to predict future outcomes from hypothesised actions. Here we need mathematical models that are strong enough to stand up to rigorous testing (financial prediction is an obvious example; another would be inventory planning). This requires serious model building and thought, and is solidly the realm of the statistician.
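
As a toy illustration of the “rigorous testing” point, even the simplest predictive model should be judged on held-out data; a sketch with scikit-learn on synthetic data (the feature interpretations are invented):

```python
# Sketch: fit a simple predictive model and score it on held-out data.
# The synthetic target stands in for a real inventory/finance series.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(0)
X = rng.rand(500, 3)  # e.g. price, promotion, seasonality (invented)
y = 10 + X @ np.array([3.0, -2.0, 5.0]) + rng.randn(500) * 0.5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("MAE on held-out data:",
      mean_absolute_error(y_test, model.predict(X_test)))
```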

Currently we just talk about “data science”, when often we should specify more clearly which sub-domain we’re involved with. Personally I sit somewhere in the middle of this stack, with a goal of moving towards the statistical end. I’m not sure how to define names for these layers; I’d welcome insight.

This is probably too simple a way of thinking about the field – if you have thoughts I’d be most happy to receive them.


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight; sign up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

2 Comments | Tags: ArtificialIntelligence, Data science, Life

15 January 2013 - 21:35 Do self-driving cars make the courier redundant?

I’ll start with a quote via “Why workers are losing the war against the machines” taken from A Farewell to Alms by economist Gregory Clark:

“There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. … There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.”

Now that I’m back in London I’m watching the prevalence of couriers and delivery people bringing a constant stream of packages through the busy streets. I’m betting this will be automated in the near future. Couple self-driving cars with a physical-packet-delivery platform that works a bit like the Internet Protocol and you’ve got (I think) a bit of a game changer.

Update – future posts discuss other outcomes for self-driving cars and a hackday looked at making parking-bay utilisation more efficient in London.

Self-driving cars have the potential to be legal in cities (they’re legal in a few US states at present, accepting that longer legal battles are to come). They’ll drive safely and predictably and are unlikely to react erratically (e.g. no pulling out into a busy street for a foolish maneuver and hitting a cyclist), they don’t need a lunch break, and they could pick up and drop off from depots a long way from traditional storage facilities (as nobody has to commute to the facility).

Consider one of these vehicles arriving outside your office and phoning you with a secret ID number. You come out to the street, key in the number, a panel pops open and there’s your package. Internally the packages are retrieved much as in an automated warehouse. Since the system is always calling home to report its status, it could notify all upcoming delivery recipients of its ETA. You could probably buy an upgrade to reserve your delivery slot (giving delivery companies a new revenue stream?).

If they’re controlled via a derivative of the Internet Protocol then we have a decentralised physical-packet-routing system. If the cars can ‘mate’, perhaps by backing on to each other, they can trade packages so the packages travel further without human intervention. Maybe you end up with an open market for atoms-distribution, assuming compatible protocols exist amongst the courier companies.

I’ve followed John Robb’s recent discussion of DroneNet (more) – it is the same idea (props – I’m tagging on his/others’ thinking) applied to low cost drones. I think drones will follow later as they’re constrained by weight and flight restrictions and so they are far less useful in the city at present.

At the end of the day I think that humans will be pushed out of the physical package delivery game (be it via drones or via delivery cars). Trying to understand the speed at which humans will be removed from traditional working disciplines in specialist areas continues to baffle me.

Update – economist Philippe Bracke notes that government legislation might slow the adoption of self-driving vehicles, giving drivers time to cross-train into other areas of work. He also notes that the adoption of driverless cars, perhaps operating at night (and maybe filled by petrol stations that offer discounted fuel at night as an attractor?), would reduce daytime congestion. This in turn might make it more likely that human-driver cars are more abundant by day, increasing urbanisation and raising house prices. Personally I’m not sure how to think about the second-order effects of changes like these.



4 Comments | Tags: ArtificialIntelligence, Life

13 January 2013 - 20:10 Map/Reduce (Disco) on millions of tweets

Whilst working on data sciencey problems for AdaptiveLab I’m becoming more involved in simple visualisations for proof-of-concepts for clients. This ties in nicely with my PyCon Parallel Computing tutorial with Minesh. I’ve been prototyping a Disco map/reduce tutorial (part 2 for PyCon) using tweets collected during the life of SocialTies during 2011-2012.

Using 11,645,331 tweets on one machine, running through Disco with a modified word_count example, it is easy to filter to keep tweets containing a certain word (“loving” in this case) and to plot a word cloud (thanks Andreas!) of the remaining tweets:
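
The job itself is essentially Disco’s stock word_count example with a filter bolted onto the map step; a sketch along those lines is below (the input tag is a placeholder, and the real code, to be posted with the tutorial, may well differ).

```python
# Sketch of a Disco job close to the stock word_count example, with the
# map step only counting words from tweets that contain a chosen word.
# The input tag is a placeholder; the actual job's details may differ.
from disco.core import Job, result_iterator

def map(line, params):
    # Each input line is one tweet; params carries the filter word.
    words = line.lower().split()
    if params in words:
        for word in words:
            yield word, 1

def reduce(iter, params):
    from disco.util import kvgroup
    for word, counts in kvgroup(sorted(iter)):
        yield word, sum(counts)

if __name__ == "__main__":
    job = Job().run(input=["tag:data:tweets"],
                    map=map,
                    reduce=reduce,
                    params="loving")
    for word, count in result_iterator(job.wait(show=True)):
        print(word, count)
```

Passing the filter word in via params (rather than a module-level constant) keeps the map function self-contained when Disco ships it out to the worker nodes.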

Words in “loving” tweets

Tweet analysis often shows a self-referential nature – here we see “i’m” as one of the most popular words. It is nice to see “:)” making an appearance. Brands mentioned include “Google”, “iPhone” and “iPad”. We also see “thanks”, “love”, “nice” and “watching”, along with “London” and “music”. Annoyingly I’m not cleaning the words, so we see “it!”, “it.”, “(via” (with erroneous brackets) and the like, which clutter the results a bit.

Next I’ve applied “hating” as the filter to the same set:

Words in “hating” tweets

One of the most mentioned words is “people” which is a bit of a shame, along with “i’m”. Thankfully we see some “love” and “loving” there. “apple” appears more frequently than “twitter” or “google”. Lots of related negative words also appear e.g. “stupid”, “hate”, “shit”, “fuck”, “bitch”.

Interestingly few of the terms shown include Twitter users or hashtags.

Finally I tried the same using “apple” on an earlier smaller set (859,157 tweets):

Words in “apple” tweets

Unsurprisingly we see “store”, “iphone”, “ipad” and “steve”. Hashtags include “#wwdc”, “#apple” and “#ipad”. The Twitter accounts shown are errors due to string-matching on “apple”, except for @techcrunch.

I find it interesting to see competitor brands being mentioned in the same tweets (e.g. “google”, “microsoft”, “android”, “samsung”, “amazon”, “nokia”), although the firms are obviously related to “apple”.

An improvement would be to remove words from the chart that match the original pattern (hence removing words like “apple” and “#apple” but keeping everything else). Removing near-duplicate terms (e.g. “apple”, “apples”, “apple’”) and performing common string clean-ups (removing punctuation) would also help.
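
That clean-up could be as simple as lowercasing, stripping punctuation and dropping tokens that still contain the filter pattern; a rough sketch:

```python
# Rough sketch of the clean-up step: lowercase, strip surrounding
# punctuation, and drop tokens that still match the original filter pattern.
import string

def clean_tokens(tokens, filter_word):
    cleaned = []
    for token in tokens:
        token = token.lower().strip(string.punctuation)
        if not token:
            continue
        if filter_word in token:  # drops "apple", "#apple", "apples", ...
            continue
        cleaned.append(token)
    return cleaned

print(clean_tokens(["Loving", "the", "#apple", "store!", "(via", "@friend)"],
                   "apple"))
# -> ['loving', 'the', 'store', 'via', 'friend']
```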

It would also be good to change the colour channels – perhaps using red for commonly-negative words and green for commonly-positive words, with the rest in a neutral colour. Maybe we could also colour the neutral words differently if they’re commonly associated with the key word (e.g. brands of the key word).
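
Assuming the visualisation is close to the current word_cloud library (Andreas Mueller’s), the recolouring could hang off its color_func hook; the word lists below are placeholders rather than a real sentiment lexicon, and the script actually used for these charts may differ.

```python
# Sketch: colour known-negative words red, known-positive words green and
# everything else grey, via the word_cloud library's color_func hook.
# The word lists are placeholders, not a real sentiment lexicon.
from wordcloud import WordCloud
import matplotlib.pyplot as plt

POSITIVE = {"love", "loving", "thanks", "nice"}
NEGATIVE = {"hate", "hating", "stupid", "shit"}

def sentiment_color(word, font_size, position, orientation,
                    random_state=None, **kwargs):
    if word.lower() in NEGATIVE:
        return "red"
    if word.lower() in POSITIVE:
        return "green"
    return "grey"

text = "loving the music thanks people hating stupid people london"
cloud = WordCloud(color_func=sentiment_color).generate(text)

plt.imshow(cloud)
plt.axis("off")
plt.show()
```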

Getting started with Disco was easy enough. The installation takes a few hours (the Disco project instructions assume a certain familiarity with networked systems); after that, editing the examples is straightforward. Visualising using Andreas’ code was very straightforward. The source will be posted around the time of my PyCon tutorial in March.



4 Comments | Tags: ArtificialIntelligence, Data science, Python