Entrepreneurial Geekiness
22,937× faster Python math using pyCUDA
I’ve just uploaded a new mandelbrot.py demo for pyCUDA; it adds a calculation routine that sits between the numpy (C-based math) and pure-CUDA implementations. In total there are four variants to choose from. The speed differences are huge!
Update – this Reddit thread has more details including real-world timings for two client problems (showing 10-3,677× speed-ups over a C task).
Update – I’ve written a High Performance Python tutorial (July 2011, 55 pages) which covers pyCUDA and other technologies; you might find it useful.
This post builds upon my earlier pyCUDA on Windows and Mac for super-fast Python math using CUDA.
You’ll need CUDA 3.1 and pyCUDA installed, plus a compatible NVIDIA graphics card. This version of the Mandelbrot code forces single-precision math, which means it’ll work on all CUDA cards (even the older ones – full list). It runs on my MacBook (Leopard) and on Windows; the Windows machines use a 9800 GT and a GTX 480. Here’s what it generates:
The big-beast graphics card for my physics client is a GTX 480 – NVIDIA’s top-of-the-line consumer card (£420 in the UK a few weeks back). It is huge – it covers two slots, uses one PCIe 2.0 x16 slot and draws 300-400W of power (I’m using a 750W PSU to be safe, on a Gigabyte GA-H55M-S2H motherboard):
The mandelbrot.py demo has four options (e.g. ‘python mandelbrot.py gpu’):
- ‘gpu’ is a pure CUDA solution on the GPU
- ‘gpuarray’ uses a numpy-like CUDA wrapper in Python on the GPU
- ‘numpy’ is a pure Numpy (C-based) solution on the CPU
- ‘python’ is a pure Python solution on the CPU with numpy arrays
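To give a feel for what each variant actually computes, here’s a minimal sketch of the escape-time loop behind the ‘python’ variant (illustrative only – the function name and plot bounds are mine, not the exact code in mandelbrot.py):

```python
# Minimal sketch of the escape-time loop behind the 'python' variant
# (illustrative only -- mandelbrot.py differs in its details).
import numpy as np

def mandelbrot_python(w=1000, h=1000, maxiter=1000):
    xs = np.linspace(-2.13, 0.77, w)
    ys = np.linspace(-1.3, 1.3, h)
    output = np.zeros((h, w), dtype=np.int32)
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            c = complex(x, y)
            z = 0j
            for n in range(maxiter):
                if abs(z) > 2.0:      # this point has escaped
                    break
                z = z * z + c
            output[j, i] = n          # iterations before escaping
    return output
```

Every pixel runs up to 1000 iterations of that inner loop in the interpreter, which is why the pure-Python timing below is so painful.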
The default problem is a 1000×1000 Mandelbrot plot with 1000 max iterations. I’m running this on a 2.9GHz dual-core Windows XP SP3 machine with Python 2.6 (only one thread is used for all the CPU tests). The timings:
- ‘gpu’ – 0.07 seconds
- ‘gpuarray’ – 3.45 seconds – 49× slower than GPU version
- ‘numpy’ – 43.4 seconds – 620× slower than GPU version
- ‘python’ – 1605.6 seconds – 22,937× slower than GPU version
- ‘python’ with psyco.full() – 1428.3 seconds – 20,404× slower than GPU version
By default mandelbrot.py forces single precision for all the math. Interestingly, on my box, if I let numpy default to numpy.complex128 (two double-precision floats rather than numpy.complex64’s two single-precision floats) then the CPU results are faster:
- ‘numpy’ – 34.0 seconds (double precision)
- ‘python’ – 627 seconds (double precision) – 2.5× faster than the single-precision version
The ‘22,937×’ figure is a little unfair in light of the 627-second result (which is 8,957× slower), but I wanted to use only single-precision math for consistency and compatibility across all CUDA cards (the older cards can only do single-precision math).
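If you want to experiment with that precision trade-off yourself, it comes down to which complex dtype the arrays are built with; a tiny sketch (the variable names are mine, not those in mandelbrot.py):

```python
import numpy as np

# single precision: two float32s per element -- works on all CUDA cards
q_single = np.zeros(1000 * 1000, dtype=np.complex64)

# double precision: two float64s per element -- faster here on the CPU,
# but only the newer CUDA cards support it
q_double = np.zeros(1000 * 1000, dtype=np.complex128)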
On my older dual-core 2.66GHz machine with a 9800 GT I get:
- ‘gpu’ – 1.5 seconds
- ‘gpuarray’ – 7.1 seconds – 4.7× slower than GPU version
- ‘numpy’ – 51 seconds – 34× slower than GPU version
- ‘python’ – 1994.3 seconds – 1,329× slower than GPU version
If we compare the 0.07 seconds for the GTX 480 against the 1.5 seconds for the 9800 GT (albeit on different machines, but the runtime is just measuring the GPU work) then the GTX 480 is 21× faster than the 9800 GT. That’s not a bad speed-up for a couple of years’ difference in architectures.
If you take a look at the source code you’ll see that the ‘gpu’ option uses a lump of C-like CUDA code; behind the scenes pyCUDA feeds this to NVIDIA’s compiler, which turns it into PTX for the card. This is the way to go if you understand the memory model and you want to write very fast code.
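To give a flavour of that hand-written route, here’s a trimmed sketch of a Mandelbrot kernel wrapped with pyCUDA’s SourceModule – the kernel body, names and launch configuration are illustrative, not the exact code in mandelbrot.py:

```python
# Sketch only: a trimmed pyCUDA SourceModule Mandelbrot kernel.
import numpy as np
import pycuda.autoinit               # creates a context on the default device
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void mandel(float2 *q, int *iters, int maxiter, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float cr = q[idx].x, ci = q[idx].y;
    float zr = 0.0f, zi = 0.0f;
    int it;
    for (it = 0; it < maxiter; it++) {
        float zr_new = zr * zr - zi * zi + cr;
        zi = 2.0f * zr * zi + ci;
        zr = zr_new;
        if (zr * zr + zi * zi > 4.0f)
            break;
    }
    iters[idx] = it;
}
""")
mandel = mod.get_function("mandel")

# build the grid of c values as complex64 (maps onto float2 on the device)
xs = np.linspace(-2.13, 0.77, 1000)
ys = np.linspace(-1.3, 1.3, 1000)
q = (xs[np.newaxis, :] + 1j * ys[:, np.newaxis]).astype(np.complex64).ravel()
iters = np.zeros(q.shape, dtype=np.int32)

threads = 256
blocks = (q.size + threads - 1) // threads
mandel(drv.In(q), drv.Out(iters), np.int32(1000), np.int32(q.size),
       block=(threads, 1, 1), grid=(blocks, 1))
```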
The gpuarray option uses a numpy-like interface to pyCUDA which, behind the scenes, generates CUDA code for each operation. Because the code is generated from Python expressions it isn’t as efficient – the generator can’t make the same assumptions about memory usage as I can when hand-crafting a kernel (at least, that’s my best understanding at present!).
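A rough sketch of the gpuarray style is below. To stick to element-wise operations I know gpuarray supports, it keeps the real and imaginary parts in separate float32 arrays and only returns an in-set/escaped mask, rather than the coloured iteration counts the real demo produces:

```python
# Sketch only: the gpuarray (numpy-like) style on the GPU.
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray

def mandel_gpuarray(cr, ci, maxiter=1000):
    """cr, ci: float32 arrays holding the real/imaginary parts of c."""
    cr_g = gpuarray.to_gpu(cr)
    ci_g = gpuarray.to_gpu(ci)
    zr = gpuarray.zeros_like(cr_g)
    zi = gpuarray.zeros_like(ci_g)
    for _ in range(maxiter):
        # element-wise z = z*z + c, done on the GPU
        zr, zi = zr * zr - zi * zi + cr_g, 2.0 * zr * zi + ci_g
    # escaped points blow up to inf/nan, so the comparison is False for
    # them and True for points that stayed bounded (i.e. in the set)
    mag2 = (zr * zr + zi * zi).get()
    return mag2 <= 4.0
```

You’d call it with the real/imaginary parts of the c grid as float32 arrays (e.g. from np.meshgrid followed by .astype(np.float32)).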
The numpy version uses C-based math running on the CPU – generally it is regarded as being ‘pretty darned fast’. The python version uses numpy arrays with straight Python arithmetic, which makes it awfully slow. Psyco 2.0.0 makes it a bit faster.
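For comparison, the vectorised numpy approach looks roughly like this (again a sketch with my own names, not the mandelbrot.py code):

```python
# Sketch of a vectorised numpy escape-time calculation (CPU only).
import numpy as np

def mandel_numpy(q, maxiter=1000):
    """q: complex64 array of c values; returns per-point iteration counts."""
    z = np.zeros_like(q)
    counts = np.zeros(q.shape, dtype=np.int32)
    for n in range(maxiter):
        still_bounded = np.abs(z) <= 2.0
        # only update the points that haven't escaped yet
        z[still_bounded] = z[still_bounded] * z[still_bounded] + q[still_bounded]
        counts[still_bounded] = n
    return counts

xs = np.linspace(-2.13, 0.77, 1000)
ys = np.linspace(-1.3, 1.3, 1000)
q = (xs[np.newaxis, :] + 1j * ys[:, np.newaxis]).astype(np.complex64)
counts = mandel_numpy(q)
```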
Feedback and extensions are welcomed via the wiki!
If you want to get started, make sure you have a compatible CUDA card, get pyCUDA (installation instructions), compile it (about 30 minutes from scratch on a well-supported system), try the examples and run mandelbrot.py. The mailing list is helpful.
It’d be nice to see some comparisons with PyPy, ShedSkin and other Python implementations – you’ll find links in my older ShedSkin post. It’ll also be interesting to tie this into some of the A.I. projects in the A.I. Cookbook; I’ll have to ponder which problems might be tackled.
Books:
The following two books will be useful if you’re new to CUDA. The first is very approachable and I’m still finding it useful.
Ian is a Chief Interim Data Scientist via his Mor Consulting. Sign-up for Data Science tutorials in London and to hear about his data science thoughts and jobs. He lives in London, is walked by his high energy Springer Spaniel and is a consumer of fine coffees.
Presenting A.I. at FlashBrighton (using Python!)
A couple of weeks back I presented an Artificial Intelligence evening at FlashBrighton with John Montgomery and Emily Toop. The night covered optical character recognition, face detection, robots and some futurology. A video link should follow.
Optical Character Recognition to Read Plaques
Recently I’ve been playing with OCR to read photos that contain text; a particular example I care about is extracting the text from English Heritage plaques for the OpenPlaques project:
I gave an overview of the tesseract open-source OCR tool (originally created by HP); some of the points I covered came from this tesseract OSCON paper. In brief:
- tesseract ranked highly in international competitions for scanned-image text extraction
- it works better if you remove non-text regions (e.g. you isolate just the blue plaque in the above image) and convert the image to greyscale and threshold it
- it runs very quickly – it’ll extract text in a fraction of a second so it will run on a mobile phone (iPhone ports exist)
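To show how little code it takes to drive from Python, here’s a minimal sketch that shells out to the tesseract command line tool. It assumes tesseract is on your PATH; note that older releases (pre-3.0) only accept uncompressed TIFF input, so you may need to convert the image first:

```python
# Minimal sketch: run the tesseract CLI on an image and read the result back.
# Assumes the 'tesseract' binary is installed and on the PATH.
import subprocess

def ocr_image(image_path, output_base="ocr_result"):
    # 'tesseract <image> <outputbase>' writes its text to <outputbase>.txt
    subprocess.check_call(["tesseract", image_path, output_base])
    return open(output_base + ".txt").read()

print(ocr_image("blue_plaque.tif"))
```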
To get people thinking about the task from the computer’s point of view I had everyone read out the text from this blurry photo. Treating the image the way a computer sees it shows that you need several passes to work out which country is involved and to guess at some of the terms:
You can guess that the domain is music/theatre (which helps you to specialise the dictionary you’re using) and that it is based in the US (so you know that 1.25 means $1.25), and even though the time is hard to read it is bound to be 7.30PM (rather than 7.32 or 7.37) because events normally start on the hour or half hour. General knowledge about the domain greatly increases the chance that OCR can extract the correct text.
I talked about the forthcoming competition to write a plaque-transcriber system; that project is close to starting and you can see demo Python source code in the A.I. Cookbook.
Optical Character Recognition Web Service and Translator iPhone Demo
To help make OCR a bit easier to use I’ve set up a simple website: http://ocr.aicookbook.com/. You call a URL with an image that’s on the web (I use flickr for my examples) and it returns a JSON string with the extracted text. The website is a few lines of Python code created using the fabulous bottle.py.
The JSON also contains a French translation and mp3 links for text-to-speech, which shows how easy it is to make a visual-assist device for the partially sighted.
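The real service does more (translation, text-to-speech links), but the bottle.py core is roughly this shape – a Python 2 sketch where the ‘/ocr’ route, the ‘url’ parameter and the ocr_image() helper are illustrative stand-ins, not the actual ocr.aicookbook.com API:

```python
# Sketch only: a tiny bottle.py OCR endpoint in the spirit of ocr.aicookbook.com.
import json
import subprocess
import tempfile
import urllib

from bottle import request, route, run

def ocr_image(image_path, output_base="ocr_result"):
    # shell out to tesseract as in the earlier sketch
    subprocess.check_call(["tesseract", image_path, output_base])
    return open(output_base + ".txt").read()

@route('/ocr')
def ocr():
    # e.g. GET /ocr?url=http://farm5.static.flickr.com/.../some_plaque.jpg
    image_url = request.GET.get('url', '')
    local_path = tempfile.mktemp(suffix='.jpg')
    urllib.urlretrieve(image_url, local_path)      # fetch the image locally
    return json.dumps({'url': image_url, 'text': ocr_image(local_path)})

run(host='0.0.0.0', port=8080)
```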
Emily built an iPhone demo based on this web service – you take a photograph of some text, it uploads the photo to flickr, retrieves the JSON and then plays the mp3s and shows you the translated text.
OCR on videos
The final OCR demo shows a proof of concept that extracts keywords from ShowMeDo’s screencast videos. The screencasts show programming in action – it is easy to extract frames, perform OCR and build up strong lists of keywords. These keywords can then be added back to the ShowMeDo video page to give Google more indexable content.
There’s a write-up of the early system here.
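The pipeline is essentially ‘dump frames, OCR each one, tally the words’. A sketch using ffmpeg plus the tesseract wrapper idea from above (the frame rate, paths and helper name are all illustrative; ffmpeg and tesseract are assumed to be installed):

```python
# Sketch: dump one frame every ten seconds, OCR each frame, tally the words.
import collections
import glob
import os
import subprocess

def ocr_image(image_path, output_base="ocr_result"):
    # same tesseract wrapper idea as sketched earlier in this post
    subprocess.check_call(["tesseract", image_path, output_base])
    return open(output_base + ".txt").read()

def keywords_from_video(video_path, frames_dir="frames", top_n=20):
    if not os.path.isdir(frames_dir):
        os.makedirs(frames_dir)
    # '-r 0.1' asks ffmpeg for one output frame every ten seconds
    subprocess.check_call(["ffmpeg", "-i", video_path, "-r", "0.1",
                           os.path.join(frames_dir, "frame_%04d.png")])
    counts = collections.defaultdict(int)
    for frame in sorted(glob.glob(os.path.join(frames_dir, "frame_*.png"))):
        for word in ocr_image(frame).lower().split():
            counts[word] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```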
OCR futurology
Text is all around us and mobile phones are everywhere. It strikes me that sooner or later we’ll be pointing our mobile phone at a poster like this and we’ll get extra information in return:
From the photo we can extract names of places, and since we also know the phone’s location a Wikipedia geo-lookup will return relevant pages. Probably we can also extract dates and costs from posters and these can go into our calendar. I used tesseract on this image and extracted enough information to link to several Wikipedia pages with history and a map.
Face Detection for Privacy Invasion
John and I built a system for correlating Gowalla check-ins with faces seen in images from the SkiffCam – the webcam hosted in the Skiff co-working space. The goal was to show that we lose quite a lot of privacy without realising it – the SkiffCam has 29,000 images (1GB of data) dating back over several years.
Using OpenCV’s face detection system I extracted thousands of faces. John retrieved all the Gowalla check-ins made at the Skiff and built a web service that lets us correlate the faces with check-ins. We showed faces for many well-known Brightoners including Seb, Niqui, Paulo, Jon & Anna and Nat.
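With today’s cv2 bindings the core of that face extraction looks roughly like the sketch below (the original code used the older ‘cv’ interface; the cascade filename and path depend on your OpenCV install):

```python
# Sketch: extract face regions from one SkiffCam frame using a Haar cascade.
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def faces_in(image_path):
    img = cv2.imread(image_path)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    rects = cascade.detectMultiScale(grey, scaleFactor=1.2, minNeighbors=4)
    # return the cropped face regions as separate images
    return [img[y:y + h, x:x + w] for (x, y, w, h) in rects]
```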
Given a person’s face we could then train a face recogniser to spot other occurrences of that person at the Skiff even if they’re not checking in with Gowalla. We can also mine their Twitter accounts for other identifying data such as blogs and build a profile of where they go, who they know and what they talk about. This feels pretty invasive – all with open-source tools and public data.
Emotion detection
Building on the face detector I next demonstrated the FaceL face labeling project from Colorado State Uni, built on pyVision. The tool works out of the box on a Mac – it can learn several faces or poses during a live demo. Most face recognisers only label the name of the person – the difference with FaceL is that it can recognise basic emotional states such as ‘happy’, ‘neutral’ and ‘sad’. This makes it really easy to work towards an emotion-detecting user interface.
During my demo I showed FaceL correctly recognising ‘happy’ and ‘sad’ on my face, then ‘left’ and ‘right’ head poses, then ‘up’ and ‘down’ poses. I suspect that with the up/down poses it would be really easy to build a nod-detecting interface!
Headroid2 – a Face Tracking Robot
Finally I demo’d Headroid2 – my face-tracking robot (using the same OpenCV face detection as above). It uses an Arduino, a servo board, pySerial and a few lines of code to give the robot the ability to track faces, smile and frown:
Here’s a video of the earlier version (without the smiling face feedback):
For full details including build instructions see building a face tracking robot.
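The control loop behind it is simpler than it sounds: find the largest face, measure how far its centre is from the centre of the frame, and nudge the pan servo over pySerial. Here’s a sketch under assumptions – the serial device name and the one-byte ‘target angle’ protocol are stand-ins for whatever your Arduino sketch expects, not the real Headroid code:

```python
# Sketch: nudge a pan servo towards the largest detected face.
# Assumptions: /dev/ttyUSB0 is the Arduino, and the Arduino sketch expects a
# single byte giving the target servo angle -- substitute your own protocol.
import time
import cv2
import serial

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
arduino = serial.Serial("/dev/ttyUSB0", 9600)
camera = cv2.VideoCapture(0)
angle = 90                                             # start centred

while True:
    ok, frame = camera.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, 1.2, 4)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
        error = (x + w // 2) - frame.shape[1] // 2           # pixels off-centre
        # the sign of the correction depends on which way the servo is mounted
        angle = max(0, min(180, angle - error // 20))
        arduino.write(bytearray([int(angle)]))               # one byte: target angle
    time.sleep(0.05)
```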
EuroPython
I’ll bring Headroid3 (this adds face-seeking behaviour) to EuroPython in a few weeks, hopefully I can find a few other A.I. folk and we can run some demos.
Reading material:
If you’re curious about A.I. then the following books will interest you:
Abandoned petrol pump
Here’s a random moment – on Blackman Street just down from Brighton Station is this abandoned petrol pump. I’m curious to know what kind of business it supported – anyone know?
This is the Cheapside area of Brighton (meaning ‘market area’ in olde English), known now as the New England Quarter – a few streets from the new sustainable housing developments, green corridor and New England House.
Emily’s new blog
Emily (@fluffyemily) has started a new blog – EmilyToop.com – to note her progress with iPhone app development, robotics and general geekery.
Her first post is Objective Flickr on the iPhone, inspired by some of the difficulties she had building her demo app for my Optical Character Recognition web service on the A.I. Cookbook.
Talking on Artificial Intelligence next Tuesday at FlashBrighton
I’ve been invited to speak with John Montgomery next Tuesday at FlashBrighton – 7pm at The Werks for 1.5-2 hours or so of demos. We’ll be covering:
- Head tracking robot (build your own in a few hours!)
- Skiff Privacy Invasion – what we can learn from data mining the SkiffCam (the Gov’t can do it – now you can too)
- Optical Character Recognition web service with an iPhone visual-assistant demo
- Automatic transcription of OpenPlaques images (because Google can’t read images!)
- Extracting text from videos to feed Google (because Google can’t read videos!)
- Face detection proof of concept web service
That is, frankly, quite a lot to cover in 1.5 hours, and a couple of the demos still need some development… but that’s part of the fun, right? The demos are mostly in Python and will be written up on the A.I. Cookbook. The goal is to show non-A.I. programmers that a lot of A.I. is pretty accessible now via good open-source libraries.
Richard has given me a lovely Victorian-researcher-inspired write-up; it is worth a proper read:
I have spoken this night with Sir Seb Lee-Delisle, the gentleman who runs the FlashBrighton club, an institution of long standing repute. He expressed great delight with my research into Artificial Intelligence, which he assuryes me he has been following with the greatest assiduity, and kindly invited me to present my findings at his club. I did of course accept, and have spent the remaynder of the day deliberating over how I might present these goode labours. I have settled on involving my £5 app collaborator Mr. John Montgomery, with whom I have been engaged on a number of projects for some little time now. …
We’ll hope to see you along!