I’m hugely looking forward to EuroPython in Birmingham from Monday. I’m driving up Monday very early (I wish I’d booked the hotel room for Sunday night too…). Browsing through the abstracts I’d say all the following look darned interesting!
concurrent sequential processes
PyPy and Unladen Swallow
Twisted and gevent
science and maths
SHOGUN machine learning
I’ll bring Headroid along and I hope to organise a Birds of a Feather session on Artificial Intelligence and robotics. If you’re interested in these topics, I’d love to say hi!
I’ve just uploaded a new Mandelbrot.py demo for pyCUDA; it adds a calculation routine that sits between the numpy (C-based math) and the pure-CUDA implementations. In total there are four variants to choose from, and the speed differences are huge!
Update – this Reddit thread has more details including real-world timings for two client problems (showing 10-3,677* speed-ups over a C task).
You’ll need CUDA 3.1 and pyCUDA installed, along with a compatible NVIDIA graphics card. This version of the Mandelbrot code forces single precision math, which means it’ll work on all CUDA cards (even the older ones – full list). It runs on my MacBook (Leopard) and on Windows; the Windows machines use a 9800 GT and a GTX 480. Here’s what it generates:
The big-beast graphics card for my physics client is a GTX 480 – NVIDIA’s top-of-the-line consumer card (costing £420 in the UK a few weeks back). It is huge – it covers two slots, uses one PCIe 2.0 x16 slot and draws 300-400W of power (I’m using a 750W PSU to be safe, on a Gigabyte GA-H55M-S2H motherboard):
The mandelbrot.py demo has four options (e.g. ‘python mandelbrot.py gpu’):
‘gpu’ is a pure CUDA solution on the GPU
‘gpuarray’ uses a numpy-like CUDA wrapper in Python on the GPU
‘numpy’ is a pure Numpy (C-based) solution on the CPU
‘python’ is a pure Python solution on the CPU with numpy arrays
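For reference, the escape-time calculation that all four variants implement looks roughly like this in pure Python – a minimal sketch to show the algorithm, not the actual code from mandelbrot.py:

```python
def mandelbrot_escape_times(width, height, max_iter=1000,
                            x_min=-2.0, x_max=1.0, y_min=-1.5, y_max=1.5):
    """Return a row-major list of escape iteration counts, one per pixel."""
    counts = []
    for j in range(height):
        y = y_min + (y_max - y_min) * j / (height - 1)
        for i in range(width):
            x = x_min + (x_max - x_min) * i / (width - 1)
            c = complex(x, y)
            z = 0j
            count = max_iter  # assume the point stays bounded
            for n in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:  # |z| > 2 means it definitely escapes
                    count = n
                    break
            counts.append(count)
    return counts
```

The inner `z = z * z + c` loop is the hot spot; it is exactly this per-pixel independence that makes the problem such a good fit for the GPU.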
The default problem is a 1000*1000 Mandelbrot plot with 1000 max iterations. I’m running this on a 2.9GHz dual-core Windows XP SP3 machine with Python 2.6 (only one thread is used for all CPU tests). The timings:
‘gpu’ – 0.07 seconds
‘gpuarray’ – 3.45 seconds – 49* slower than GPU version
‘numpy’ – 43.4 seconds – 620* slower than GPU version
‘python’ – 1605.6 seconds – 22,937* slower than GPU version
‘python’ with psyco.full() – 1428.3 seconds – 20,404* slower than GPU version
By default mandelbrot.py forces single precision for all the math. Interestingly, on my box, if I let numpy default to numpy.complex128 (two double precision floating point numbers rather than numpy.complex64 with two single precision floats) then the CPU results are faster:
‘numpy’ – 34.0 seconds (double precision)
‘python’ – 627 seconds (double precision) – 2.5* faster than the single precision version
The ’22,937*’ figure is a little unfair in light of the 627 second result (which is 8,957* slower) but I wanted to use only single precision math for consistency and compatibility across all CUDA cards (the older cards can only do single precision math).
On my older dual core 2.66GHz machine with a 9800 GT I get:
‘gpu’ – 1.5 seconds
‘gpuarray’ – 7.1 seconds – 4.7* slower than GPU version
‘numpy’ – 51 seconds – 34* slower than GPU version
‘python’ – 1994.3 seconds – 1,329* slower than GPU version
If we compare the 0.07 seconds for the GTX 480 against the 1.5 seconds for the 9800 GT (albeit on different machines but the runtime is just measuring the GPU work) then the GTX 480 is 21* faster than the 9800 GT. That’s not a bad speed-up for a couple of years difference in architectures.
If you take a look at the source code you’ll see that the ‘gpu’ option uses a lump of C-like CUDA code; behind the scenes all pyCUDA code is converted into this C-like code and then down to PTX by the compiler. This is the way to go if you understand the memory model and you want to write very fast code.
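For the curious, here’s a minimal sketch of what that C-like layer looks like through pyCUDA. The naming and structure are my own, not the mandelbrot.py source, and it needs pyCUDA plus a CUDA-capable card to actually run:

```python
# A minimal pyCUDA escape-time kernel sketch (my own naming, not the
# mandelbrot.py source).  The CUDA string below is what pyCUDA feeds
# to the compiler on its way down to PTX.
kernel_source = """
__global__ void escape_time(float *zr, float *zi,
                            float *cr, float *ci,
                            int *iters, int max_iter, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float r = zr[idx], i = zi[idx];
    iters[idx] = max_iter;
    for (int it = 0; it < max_iter; it++) {
        float r2 = r * r - i * i + cr[idx];
        i = 2.0f * r * i + ci[idx];
        r = r2;
        if (r * r + i * i > 4.0f) {  /* |z| > 2: the point escapes */
            iters[idx] = it;
            break;
        }
    }
}
"""

if __name__ == "__main__":
    # Requires pyCUDA and an NVIDIA card; guarded so the module can
    # still be imported without them.
    import numpy as np
    import pycuda.autoinit          # initialise the first CUDA device
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    n = 16
    cr = np.linspace(-2.0, 1.0, n).astype(np.float32)
    ci = np.zeros(n, dtype=np.float32)
    zr = np.zeros(n, dtype=np.float32)
    zi = np.zeros(n, dtype=np.float32)
    iters = np.empty(n, dtype=np.int32)

    escape_time = SourceModule(kernel_source).get_function("escape_time")
    escape_time(drv.In(zr), drv.In(zi), drv.In(cr), drv.In(ci),
                drv.Out(iters), np.int32(1000), np.int32(n),
                block=(16, 1, 1), grid=(1, 1))
    print(iters)
```

Note the `2.0f` and `4.0f` literals – everything stays in single precision, which is what keeps it compatible with the older cards.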
The gpuarray option uses a numpy-like interface to pyCUDA which, behind the scenes, is converted into CUDA code. Because it is compiled from Python code the resulting CUDA code isn’t as efficient – the compiler can’t make the same assumptions about memory usage as I can make when hand-crafting CUDA code (at least – that’s my best understanding at present!).
The numpy version uses C-based math running on the CPU – generally it is regarded as being ‘pretty darned fast’. The python version uses numpy arrays with straight Python arithmetic, which makes it awfully slow. Psyco 2.0.0 makes it a bit faster.
Feedback and extensions are welcomed via the wiki!
It’d be nice to see some comparisons with PyPy, ShedSkin and other Python implementations. You’ll find links in my older ShedSkin post. It’ll also be interesting to tie this in to some of the A.I. projects in the A.I. Cookbook, I’ll have to ponder some of the problems that might be tackled.
The following two books will be useful if you’re new to CUDA. The first is very friendly; I’m still finding it helpful.
Recently I’ve been playing with OCR to read photos with text, a particular example I care about is extracting the text from English Heritage Plaques for the OpenPlaques project:
I gave an overview of the tesseract open source OCR tool (originally created by HP). Some of what I covered came from this tesseract OSCON paper. Some notes:
tesseract ranked highly in international competitions for scanned-image text extraction
it works better if you remove non-text regions (e.g. you isolate just the blue plaque in the above image) and threshold the image to a grey scale
it runs very quickly – it’ll extract text in a fraction of a second so it will run on a mobile phone (iPhone ports exist)
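That preprocessing step – grayscale then threshold – is simple enough to sketch in pure Python on a nested list of RGB tuples. Real code would use PIL or openCV, and a fixed threshold is the crudest option (adaptive thresholding usually works better on photos):

```python
def to_grayscale_threshold(pixels, threshold=128):
    """Convert rows of (R, G, B) tuples to a binarised image:
    255 for light pixels, 0 for dark ones."""
    result = []
    for row in pixels:
        out_row = []
        for (r, g, b) in row:
            # standard luminance weighting for grayscale conversion
            gray = 0.299 * r + 0.587 * g + 0.114 * b
            out_row.append(255 if gray >= threshold else 0)
        result.append(out_row)
    return result
```

Feeding tesseract a clean black-on-white image like this, with the non-text regions cropped away, noticeably improves the extraction.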
To get people thinking about the task from the computer’s point of view I had everyone read out the text from this blurry photo. Reading the image as a computer would see it shows that you need several passes to work out which country is involved and to guess at some of the terms:
You can guess that the domain is music/theatre (which helps you to specialise the dictionary you’re using), based in the US (so you know that 1.25 is $1.25USD) and even though the time is hard to read it is bound to be 7.30PM (rather than 7.32 or 7.37) because events normally start on the hour or half hour. General knowledge about the domain greatly increases the chance that OCR can extract the correct text.
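That start-time reasoning is easy to encode. Given a noisy OCR reading of a time, snap the minutes to the nearest plausible event start – a sketch of the idea only, not part of any real OCR pipeline:

```python
def snap_event_time(hour, minute, plausible_minutes=(0, 30)):
    """Snap an OCR-misread minute value to the nearest plausible
    event start time (events normally begin on the hour or half hour)."""
    best = min(plausible_minutes, key=lambda m: abs(m - minute))
    return hour, best
```

So a misread ‘7.32’ or ‘7.37’ both collapse to 7.30 – the domain prior does the disambiguation that the raw pixels can’t.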
I talked about the forthcoming competition to write a Plaque-transcriber system, that project is close to starting and you can see demo Python source code in the AI Cookbook.
Optical Character Recognition Web Service and Translator iPhone Demo
To help make OCR a bit easier to use I’ve setup a simple website: http://ocr.aicookbook.com/. You call a URL with an image that’s on the web (I use flickr for my examples) and it returns a JSON string with the translated text. The website is a few lines of Python code created using the fabulous bottle.py.
The JSON also contains a French translation and mp3 links for text-to-speech; this shows how easy it would be to make a visual-assist device for the partially sighted.
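The response is just a JSON document along these lines – the field names here are illustrative, not the service’s actual schema:

```python
import json

def build_ocr_response(image_url, text, translated_text, mp3_urls):
    """Assemble the kind of JSON payload the OCR web service returns:
    the recognised text, a French translation and text-to-speech mp3
    links.  Field names are illustrative only."""
    payload = {
        "image_url": image_url,
        "text": text,
        "translation_fr": translated_text,
        "mp3_urls": mp3_urls,
    }
    return json.dumps(payload)
```

A client just fetches the URL, calls `json.loads` on the body and has everything it needs to display, speak or translate the text.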
Emily built an iPhone demo based on this web service – you take a photograph of some text, it uploads the photo to flickr, retrieves the JSON and then plays the mp3s and shows you the translated text.
OCR on videos
The final OCR demo shows a proof of concept that extracts keywords from ShowMeDo’s screencast videos. The screencasts show programming in action – it is easy to extract frames, perform OCR and build up strong lists of keywords. These keywords can then be added back to the ShowMeDo video page to give Google more indexable content.
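The keyword-building step can be sketched in a few lines: OCR a sample of frames, then keep the terms that recur across several frames (counting per-frame presence rather than raw occurrences filters out one-off OCR noise). This is my sketch of the idea, not ShowMeDo’s actual pipeline:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is"}

def keywords_from_frames(frame_texts, min_frames=2):
    """Given OCR text for a sample of video frames, return terms that
    appear in at least min_frames distinct frames."""
    presence = Counter()
    for text in frame_texts:
        # one set per frame, so a word counts at most once per frame
        words = {w.strip(".,:()").lower() for w in text.split()}
        for w in words:
            if w and w not in STOPWORDS:
                presence[w] += 1
    return sorted(w for w, n in presence.items() if n >= min_frames)
```

The surviving terms are exactly the kind of stable vocabulary (library names, commands) worth feeding back to the video page.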
Text is all around us and mobile phones are everywhere. It strikes me that sooner or later we’ll be pointing our mobile phone at a poster like this and we’ll get extra information in return:
From the photo we can extract names of places, we also know the phone’s location so a WikiPedia geo-lookup will return relevant pages. Probably we can also extract dates and costs from posters and these can go into our calendar. I used tesseract on this image and extracted enough information to link to several WikiPedia pages with history and a map.
Face Detection for Privacy Invasion
John and I built a system for correlating gowalla check-ins with faces seen in images from the SkiffCam – the webcam that’s hosted in the Skiff co-working space. The goal was to show that we lose quite a lot of privacy without realising it – the SkiffCam has 29,000 images (1GB of data) dating back over several years.
Using openCV’s face detection system I extracted thousands of faces. John retrieved all the gowalla check-ins based at the Skiff and built a web service that lets us correlate the faces with check-ins. We showed faces for many well-known Brightoners including Seb, Niqui, Paulo, Jon & Anna and Nat.
Given a person’s face we could then train a face recogniser to spot other occurrences of that person at the Skiff even if they’re not checking in with gowalla. We can also mine their twitter accounts for other identifying data, like blogs, and build a profile of where they go, who they know and what they talk about. This feels pretty invasive – all with open source tools and public data.
Building on the face detector I next demonstrated the FaceL face labeling project from Colorado State Uni, built on pyVision. The tool works out of the box on a Mac – it can learn several faces or poses during a live demo. Most face recognisers only label the name of the person – the difference with FaceL is that it can recognise basic emotional states such as ‘happy’, ‘neutral’ and ‘sad’. This makes it really easy to work towards an emotion-detecting user interface.
During my demo I showed FaceL correctly recognising ‘happy’ and ‘sad’ on my face, then ‘left’ and ‘right’ head poses, then ‘up’ and ‘down’ poses. I suspect that with the up/down poses it would be really easy to build a nod-detecting interface!
Headroid2 – a Face Tracking Robot
Finally I demo’d Headroid2 – my face tracking robot (using the same openCV module as above) that uses an Arduino, a servo board, pySerial and a few lines of code to give the robot the ability to track faces, smile and frown:
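The tracking logic itself is tiny: map the detected face’s horizontal offset from the frame centre to a small proportional servo correction, then send that down the serial line. A sketch of the proportional step with my own constants, not Headroid’s actual code:

```python
def servo_correction(face_x, frame_width, current_angle,
                     gain=0.1, min_angle=0, max_angle=180):
    """Nudge the pan servo towards a detected face.

    face_x is the horizontal centre of the face box in pixels; the
    correction is proportional to its offset from the frame centre,
    clamped to the servo's travel.
    """
    offset = face_x - frame_width / 2.0
    new_angle = current_angle + gain * offset
    return max(min_angle, min(max_angle, new_angle))

# With pySerial the resulting angle would then be written to the
# Arduino, e.g. (port name is illustrative):
#   ser = serial.Serial('/dev/ttyUSB0', 9600)
#   ser.write(('%d\n' % angle).encode())
```

A small gain keeps the head from overshooting and oscillating around the face – effectively a one-term P controller.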
Here’s a video of the earlier version (without the smiling face feedback):
Here’s a random moment – on Blackman Street just down from Brighton Station is this abandoned petrol pump. I’m curious to know what kind of business it supported – anyone know?
This is the cheapside area of Brighton (meaning ‘market area‘ in olde English) known now as the New England Quarter – a few streets from the new sustainable housing developments, green corridor and New England House.