
This is Ian Ozsvald's blog, I'm an entrepreneurial geek, a Data Science/ML/NLP/AI consultant, founder of the Annotate.io social media mining API, author of O'Reilly's High Performance Python book, co-organiser of PyDataLondon, co-founder of the SocialTies App, author of the A.I.Cookbook, author of The Screencasting Handbook, a Pythonista, co-founder of ShowMeDo and FivePoundApps and also a Londoner. Here's a little more about me.


22 March 2013 - 1:16 Analysing #pydata, London and Brighton tweets for concept mapping

Below I’ve visualised tweets for #PyData conference and the cities of London and Brighton – this builds on my ‘concept cloud‘ from a few days ago at the #PyCon conference. Props to Maksim for his Social Media Analysis tutorial for inspiration.

Update – Maksim’s Analysing Social Networks tutorial video is online.

For the earlier #PyCon 2013 analysis I visualised #hashtags and @usernames from #pycon tagged tweets during the conference. I’ve built upon this to add some natural language processing for ‘noun phrase extraction’ which I detail below – this helps me to pull out phrases that are descriptive but haven’t been tagged. It also helps us to see which people are connected with which subjects. For the PyCon analysis I collected 22k tweets; after removing retweets I was left with 7,853 for analysis.

#PyData (PyData Santa Clara 2013)


PyData 2013 is a much smaller conference than PyCon (PyCon had 2,500 people and 20% female attendance; PyData had around 400 with 10% female attendance). Being smaller it had far fewer tweets – after removing retweets I had just 225 tweets to analyse. Cripes! This is clearly not big data. The other problem was that people weren’t using many #hashtags; they were referring to topics using natural language. For example:

“Peter Norvig was giving a talk at PyData in Santa Clara, CA on the topic of innovation in education.” (source)

Clearly some natural language processing was required. I took two approaches:

  • Extract capitalised sub-phrases (e.g. “Peter Norvig”, “Santa Clara”) of one or more words
  • Use NLTK’s bigram collocation analyser (to find lowercased phrases such as “ipython notebook”, “machine learning”)
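As a rough sketch of those two approaches – this is a stdlib-only stand-in, since my actual code uses NLTK’s collocation tools, and the function names here are just illustrative:

```python
import re
from collections import Counter

def extract_capitalised(text):
    # Runs of one or more capitalised words, e.g. "Peter Norvig", "Santa Clara".
    # The \b anchors exclude mixed-case tokens like "PyData".
    return re.findall(r"\b(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)\b", text)

def bigram_counts(texts):
    # Crude stand-in for NLTK's BigramCollocationFinder: count adjacent
    # lowercased word pairs across all tweets, so frequent pairs such as
    # ("machine", "learning") float to the top.
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(zip(words, words[1:]))
    return counts

tweet = ("Peter Norvig was giving a talk at PyData in Santa Clara, CA "
         "on the topic of innovation in education.")
print(extract_capitalised(tweet))  # ['Peter Norvig', 'Santa Clara']
```

In the real pipeline NLTK does the heavy lifting (tokenisation and collocation scoring); the sketch just shows the shape of the idea.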

Starting at the bottom of the plot we see three types of colour:

  • white is for #hashtags
  • light blue is for @usernames
  • dark green is for phrases (extracted using natural language processing)

We see a cluster of references around @fperez_org (Fernando Perez of IPython), one cluster is around @swcarpentry (the scientist-friendly software carpentry movement), the other is around IPython and the IPython Notebook (@minrk of IPython/parallel is linked too). I like the connection to Julia – Fernando discussed during his keynote that Julia now interoperates with Python.

The day before we had Peter Norvig (Director of Research at Google) giving a keynote on the use of Python in education at Udacity, including a discussion of how machine learning could be used to identify the mistakes that new coders make, so that friendlier error messages could help students correct their code. See the clustering around this at the top of the graph.

Later the same day Henrik (@brinkar) spoke on Wise.io‘s Random Forest classifier. Their approach was efficient enough to demo live on a RaspberryPi. The connection from Peter to Henrik goes via #venturebeat, which covered wise.io’s new software release during the conference.

Connecting IPython and Wise.io is @ogrisel (Olivier Grisel) of scikit-learn. He gave an impressive (and given the variability of conference wifi – slightly ballsy) live demo of scaling a machine learning system via IPython Parallel on EC2.

In the middle we see @teoliphant (Travis Oliphant) joined to Continuum (his company). Off to the right I get to blow my own trumpet – the phrases “awesome python” and “network analysis” connect to “russel brand” which is how one wag described my lightning talk. I got a chance to demo the earlier version of this at the end of @katychuang‘s talk on networkx.

London (geo-tagged tweets)


For the last month I’ve been grabbing tweets in the London geo area for another project. I had to raise my filtering levels to bring the network down to a sane (and easily visualised) number of nodes. After removing ReTweets I have 497,771 tweets from just a subset of my data. Some obvious clusters can be seen:

  • #weather and #rain and (presumably a rather wet) “St Albans” (a very British discussion)
  • The “O2 Arena” near the centre with “Justin Bieber” and #believetour, linked with #amazing, #excited, #nowplaying
  • @onedirection must have been playing (connected with band members @louis_tomlinson and @real_liam_payne amongst others)
  • To the top-right we have a football cluster with “Manchester United”, “Champions League”, #cpfc, #realmadrid and “Old Trafford”
  • The usual tourist spots like “Tower Bridge”, “Covent Garden”, “Hyde Park”, “Big Ben”, “Trafalgar Square” are discussed with #happy #sun #loveit, linked just off of here is “London Heathrow Airport” and “New York”

Brighton (geo-tagged tweets)


This is my favourite, analysed using 40,379 tweets after removing ReTweets. The nature of the two cities (Brighton is 50 miles south of London on the coast, it is a university town with a young & party-friendly population) is quite apparent:

  • Top left there is discussion around “One Direction”, #justinbeiber and #seo (a particular Brighton tech thing)
  • Just south of @justinbieber is a single chain of not-safe-for-work ranting (another particular Brighton thing)
  • If you jump to the bottom right you’ll see #underwear, #lingerie, #teenagers – not as dodgy as you might expect: Sweetling were doing a social media bra campaign
  • #hove is joined with #sunny #morning and nearby places #lewes #shoreham
  • #brightonbeach and “Brighton Pier” connect with #birds (Seagulls – a bane!) and #sun
  • #friends, #memories, #happy, #goodtimes, #marina, #fun, #girls cluster around the centre (Brighton does like a party)
  • Off down to the bottom left is some sort of political discussion (what were they doing in Brighton?)

Reproducing this

All the code is on GitHub as twitter_networkx_concept_map, including the one-line cURL command to capture the data. An example .gephi file is included for visualisation in Gephi. The built-in networkx viewer (optionally using GraphViz) works reasonably well but isn’t interactive. Maksim’s tutorial and utils class were jolly useful (utils is in my repo); I’m also using twitter-text-python for parsing @usernames, #hashtags and URLs from the tweets.

If you want some custom work around this, give me a shout via Mor Consulting.

Ian applies Data Science as an AI/Data Scientist for companies in Mor Consulting, founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

19 Comments | Tags: ArtificialIntelligence, Life, Python

18 March 2013 - 7:36 Semantic map of PyCon2013 Twitter Topics

Maksim taught a lovely Social Graph Analytics course at PyCon the day before I taught Applied Parallel Computing. I took his demo for a “poor man’s LDA/LSI analysis” of a Twitter topic (rather than using full LDA it just uses co-occurring hashtags) and added usernames to produce the plot below.

Update – Analysing #pydata conference (and the cities London and Brighton) tweets using NLTK and NetworkX has been added as a second post.


White nodes are hashtags (e.g. #raspberrypi centres the left white cluster), purple is for usernames (e.g. organiser @jessenoller is in the centre, Python’s creator @gvanrossum is between #raspberrypi and Jesse, @dabeaz and @raymondh are near the centre). We see a strongly connected cluster of people and hashtags along with several disconnected sets.

Over the course of PyCon I’ve collected all the #pycon tagged Tweets using the 1% Twitter Firehose (via a 1 line curl command). I have some Tweet parsing code which transforms this data into useful subsets (originally I was working on 2D geo-tagged plots of London and Brighton – to be posted later); in this case I extract the hashtags and usernames from each tweet using twitter-text-python and then build edges in a graph for each pair of mentions that occur in a tweet. E.g.:

“really cool stenography talk by @stenoknight at #PyCon – she still uses #vim with #plover”

will cause a link to form between #pycon and #vim, #pycon and #plover, #vim and #plover. The width of edges in the diagram corresponds to the number of times the same hashtags (and users) are linked in each tweet. To understand which people are related to each concept I added usernames so in the above example edges are also formed between @stenoknight and the three hashtags.
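In case it helps to see the edge-building step concretely, here’s a networkx-free sketch using only the stdlib (in the real code the pairs go straight into a networkx graph; the names here are illustrative):

```python
from collections import Counter
from itertools import combinations

def build_edges(tweets_entities):
    """Count co-occurrence edges between every pair of entities in a tweet.

    tweets_entities: list of lists, each inner list holding the #hashtags
    and @usernames extracted from one tweet.
    """
    edges = Counter()
    for entities in tweets_entities:
        # sorted() so ('#pycon', '#vim') and ('#vim', '#pycon') count
        # as the same undirected edge
        for pair in combinations(sorted(set(entities)), 2):
            edges[pair] += 1
    return edges

# The example tweet from above, already parsed into entities:
tweet = ['#pycon', '@stenoknight', '#vim', '#plover']
edges = build_edges([tweet])
print(edges[('#pycon', '#vim')])  # edge width scales with this count
```

Each tweet with n entities contributes n*(n-1)/2 edges, which is why a handful of heavily-tagged tweets can dominate the graph.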

If you open up a larger version of the image (click the main image) you can follow some of the detail. The #raspberrypi tag is interesting – lots of prominent projects are mentioned alongside (e.g. #pandas, #django). Just below the main cluster is a subcluster on #robots #vision #hackers – these are joined to the main #raspberrypi cluster by the adjective #awesome (rather lovely!). All 2,500 attendees of PyCon were given a full Raspberry Pi Model B during the Friday morning keynote by Eben Upton and during the weekend a RaspberryPi hacklab taught many people how to add hardware and use Python on the device.

In the centre we see a lot of people – many people mention each other or are linked by others (e.g. prominent speakers) in their tweets. I filter out ReTweets, so people are only linked when someone has mentioned them in a freshly written tweet. The legendary Testing in Python Birds of a Feather session (#tipbof) on the right is linked to a few prominent folk.

#openscience and #openaccess are well linked to the south of the main cluster, connected to the main group via clusters of people.

I’m quite intrigued by the @styleseat link out to #nailjerks #pixiedust #nailart to the north – they ran a manicure/pedicure session in connection with #pyladies.


Guido gave a keynote this morning and discussed async programming – a new cluster formed in yesterday’s data (see the zoom from the earlier analysis, shown above) with #tulip #sunday #pep3156 whilst he talked about PEP 3156. It is interesting to note the time-based nature of the clusters (which we can’t see in this single 2D image – maybe I ought to animate it?).

Update – I’ve added the plot below using the Community Detection feature of Gephi; it shows Guido’s async tag set as a separate cluster. #raspberrypi has a nicely large cluster; web servers have their own too.
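Gephi’s Community Detection is modularity-based; for a feel for how community detection works in general, here’s a toy label-propagation pass over an adjacency dict – purely illustrative, not what Gephi does internally:

```python
from collections import Counter

def label_propagation(adj, iterations=10):
    """Toy label propagation: every node starts in its own community and
    repeatedly adopts the most common label among its neighbours.
    Ties are broken lexicographically so the result is deterministic."""
    labels = {node: node for node in adj}
    for _ in range(iterations):
        for node in adj:
            if not adj[node]:
                continue
            counts = Counter(labels[n] for n in adj[node])
            best = max(counts.values())
            labels[node] = min(l for l, c in counts.items() if c == best)
    return labels

# Two obvious clusters with no link between them:
adj = {
    '#tulip': {'#pep3156', '#sunday'},
    '#pep3156': {'#tulip', '#sunday'},
    '#sunday': {'#tulip', '#pep3156'},
    '#raspberrypi': {'#pandas', '#django'},
    '#pandas': {'#raspberrypi', '#django'},
    '#django': {'#raspberrypi', '#pandas'},
}
labels = label_propagation(adj)
print(labels['#tulip'] == labels['#pep3156'])      # True  – same community
print(labels['#tulip'] == labels['#raspberrypi'])  # False – different ones
```

Real community detection (Louvain, as in Gephi) optimises modularity rather than propagating labels, but the intuition – densely connected nodes end up grouped together – is the same.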


Due to yesterday’s PyCon 5k Fun Run there’s a disconnected cluster for #10minutemile #shootforthestars #ugh to the north – 150 of us (of 2,500 attendees) ran at 7am and raised $7k towards cancer research.

It is worth mentioning that I removed some of the more prominent nodes, as many of the other topics connect to these and they add little information:

  • #pycon
  • #python
  • #pycon2013
  • @pycon
  • @top_webtech @inowgb (spammy)
  • any username node with fewer than 50 occurrences
  • any hashtag node with only 1 occurrence
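The filtering itself is simple; a sketch with made-up occurrence counts (the node names are from the plots above, but the numbers here are invented):

```python
from collections import Counter

# Hypothetical occurrence counts, as if tallied from the parsed tweets
counts = Counter({'#pycon': 5000, '#python': 900, '#raspberrypi': 120,
                  '@jessenoller': 80, '@gvanrossum': 60, '@top_webtech': 30,
                  '#tipbof': 40, '#vim': 1})

REMOVE_ALWAYS = {'#pycon', '#python', '#pycon2013', '@pycon',
                 '@top_webtech', '@inowgb'}  # over-connected or spammy

def keep(node, count):
    if node in REMOVE_ALWAYS:
        return False
    if node.startswith('@'):
        return count >= 50   # drop rarely-mentioned usernames
    return count > 1         # drop one-off hashtags

kept = {n for n, c in counts.items() if keep(n, c)}
print(sorted(kept))
```

Everything that survives the thresholds goes on to become a node in the graph.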

I’ll add the code to GitHub tomorrow. Tools used include twitter-text-python, networkx and Gephi. Update – the code is on GitHub as twitter_networkx_concept_map.

If I get time whilst here I’ll do some more analysis on the data. I’d love to use a named entity tool or some parsing to extract obvious nouns (e.g. packages and topics) that aren’t #hashtagged.


19 Comments | Tags: Life, Python

15 March 2013 - 23:22 Use of VirtualBox to prepare students (PyCon tutorials)

Minesh and I ran a tutorial (Applied Parallel Computing) at PyCon 2013 yesterday; we had been working on building and distributing a VirtualBox image (7GB) for students, to simplify the teaching with a unified, preconfigured environment. This process took a while; my notes are below. Others (e.g. Kat teaching SimpleCV) also used a VirtualBox image.

The upside of a VirtualBox image is that everyone has a unified environment, so students see on their screen exactly what you have on your screen. The downside is that this doesn’t help them install the tools onto their laptop for normal use. If you’re teaching a medley of tools (as we were), and especially if some require non-trivial installation (e.g. Disco map/reduce for us), then VirtualBox images are a clear win.

  • We zipped the directory containing the VDI file, Kat used a single OVF file (both for VirtualBox), I think the single OVF file might be easier to distribute and might work in other (non-VirtualBox) environments. Our zip took 7GB down to 2.2GB
  • Your VirtualBox will be configured for you…but students might have foreign keyboards (e.g. Minesh made our VBox image with a US keyboard, I have a UK keyboard, some students have German etc. keyboards) – provide notes on how to reconfigure the Guest OS so the student can set up their keyboard
  • git clone a read-only repo into the VBox, students can then just git pull to get updates
  • We added a run_this_to_confirm_you_have_the_correct_libraries.py script which checks that everything is installed; students can run it to double-check that their install is good
  • Use a standard user and password – we used “pycon:pycon”
  • I made a YouTube screencast using RecordMyDesktop (with desktop compositing disabled to reduce flicker)
  • Bundle everything into a blog post that you can easily update – here are our install notes and video
  • A large zip is harder to distribute – I linked to the zip on my blog (I have lots of bandwidth) and created a torrent using the super-easy burnbit site (here’s my download page) – you can see the torrent link on the install notes page linked above
  • You probably want to use a 32 bit OS for the Guest OS (we used Linux Mint 14 32 bit), a 64 bit Guest OS won’t run on a 32 bit system (but a 32 bit Guest OS will run on a 64 bit host)
  • Despite linking our tutorial notes to the tutorial page on the PyCon website (and mailing students), many didn’t have a preinstalled environment – we had a set of USB Thumb Drives which simplified the setup. Our first 30 minutes was talking so students had time to install the VBox
  • Github is a great place to store code, data (if not huge) and slides
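For what it’s worth, the confirmation script can be tiny – something like this sketch (the module list here is illustrative, not our tutorial’s actual list):

```python
import importlib
import sys

# Illustrative module list; a real tutorial script would name its own
REQUIRED = ['json', 'sqlite3', 'multiprocessing']

def check_environment(modules):
    """Try to import each required module, report OK/MISSING,
    and return the list of missing module names."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
            print(f'OK       {name}')
        except ImportError:
            print(f'MISSING  {name}')
            missing.append(name)
    return missing

if __name__ == '__main__':
    # Non-zero exit code if anything is missing, so it can be scripted
    sys.exit(1 if check_environment(REQUIRED) else 0)
```

A version-number check per module (e.g. comparing `module.__version__` against a minimum) is a worthwhile addition if your material depends on recent APIs.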


6 Comments | Tags: Life, Python

7 March 2013 - 17:10 PowerPoint: Brief Introduction to NLProc. for Social Media

For my client (AdaptiveLab) I recently gave an internal talk on the state of the art of Natural Language Processing around Social Media (specifically Twitter and Facebook), having spent a few days digesting recent research papers. The area is fascinating (I want to do some work here via my Annotate.io) as the text is so much dirtier than in long-form articles such as those from Reuters and BBC News.

The PowerPoint below is just the outline; I also gave some brief demos using NLTK (a great Python NLP library).



2 Comments | Tags: ArtificialIntelligence, Data science, Life

7 March 2013 - 16:54 ANN: twitter-text-python release (Python Tweet parsing library)

A few weeks back I took over as maintainer of the twitter-text-python library (source on github). This library lets you take a tweet like:

"@ianozsvald, you now support #IvoWertzel's tweet parser! https://github.com/ianozsvald/"

and extract the Twitter entities as defined in the Twitter conformance tests. The entities in the above tweet would be:

  • reply: 'ianozsvald'
  • users: ['ianozsvald']
  • tags: ['IvoWertzel']
  • urls: ['https://github.com/ianozsvald/']
  • lists: []  # no lists in this tweet
  • output html: u'<a href="http://twitter.com/ianozsvald">@ianozsvald</a>, you now support <a href="http://search.twitter.com/search?q=%23IvoWertzel">#IvoWertzel</a>\'s tweet parser! <a href="https://github.com/ianozsvald/">https://github.com/ianozsvald/</a>'

If you’re parsing Tweets or status-update-like entities (from e.g. App.net) in Python then this library makes it easy to extract @people, URLs and #hashtags. You can also request the spans (character locations) for each entity – very useful if you have repeated phrases and you’re doing a search/replace.
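To show the shape of the problem (and definitely not the library’s real implementation – twitter-text-python follows the full conformance suite and handles unicode, punctuation and t.co edge cases), a naive regex version might look like:

```python
import re

def extract_entities(tweet):
    """Very simplified stand-in for twitter-text-python: pull out the
    reply target, @users, #tags and URLs with plain regexes."""
    users = re.findall(r'@(\w+)', tweet)
    tags = re.findall(r'#(\w+)', tweet)
    urls = re.findall(r'https?://\S+', tweet)
    # A tweet counts as a reply if it starts with an @username
    reply = users[0] if users and tweet.lstrip().startswith('@') else None
    return {'reply': reply, 'users': users, 'tags': tags, 'urls': urls}

tweet = ("@ianozsvald, you now support #IvoWertzel's tweet parser! "
         "https://github.com/ianozsvald/")
print(extract_entities(tweet))
```

The many cases this sketch gets wrong (hashtags inside URLs, trailing punctuation on links, non-ASCII usernames) are exactly why the conformance-tested library exists.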

The library is easily installed using “$ pip install twitter-text-python” (MIT license) via the Python Package Index, currently at version

Credit – the library was developed by Ivo Wertzel (BonsiaDan on GitHub); I merged a few pull requests after forking to fix some bugs and have now taken over official maintenance.


7 Comments | Tags: Life, Python