On Wednesday night I jumped on a train up to London to visit the London Financial Python User Group to give a short demo of pyCUDA. I’m using CUDA heavily for my physics consultancy and I figured the finance guys would be interested in 10-1000* speed-ups for their calculations.
The raw figures and the Mandelbrot demo that I gave are already covered in my earlier blog post: 22,937* faster Python math using pyCUDA.
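For anyone who wants to play with the idea, here's a minimal CPU-side sketch (using NumPy, not the demo's actual code) of the Mandelbrot iteration that pyCUDA offloads to the GPU. The point to notice is that every pixel's escape-time calculation is independent of every other pixel, which is exactly why the problem maps so well onto hundreds of CUDA cores:

```python
import numpy as np

def mandelbrot(width=200, height=200, max_iter=50):
    """CPU reference for the Mandelbrot escape-time iteration.

    A hypothetical sketch for illustration - the grid bounds and
    iteration cap are arbitrary choices, not the demo's parameters.
    """
    # Build a grid of complex starting points c over the usual region
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.5, 1.5, height)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]

    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0          # pixels that haven't escaped yet
        z[mask] = z[mask] ** 2 + c[mask] # iterate z -> z^2 + c on those only
        counts[mask] += 1                # record how long each pixel survives
    return counts

counts = mandelbrot()
```

On the GPU version each pixel simply becomes one CUDA thread running the same loop, with no masking needed.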
To introduce pyCUDA I used P. Narayanan’s GPUs: For Graphics and Beyond PDF presentation (the first 13 pages); his explanation and diagrams are very clear.
To put CUDA in context against regular CPUs I used the recent Peak MHz graph and the main power/speed/transistor-count graph in The Free Lunch is Over: A Fundamental Turn to Concurrency in Software. The main point here is that we’ve topped out at 2-3GHz CPUs and now we have to parallelise our code. Doing so on CPUs means we get 4, 8, 16 (and soon 24, then 32) cores to play with…but with CUDA, if the problem is mathematics-based, we have 480 cores to use!
If you’re interested in the general use of CUDA and GPUs then check out the excellent gpgpu.org.
You may wonder about real-world performance with CUDA. Without naming names I can say that I’m now delivering a 115* speed-up on a particularly gnarly problem (I mentioned during the talk that I’d reached 80* – I’ve managed to improve that in the last 2 days). On an earlier problem when I knew far less about CUDA I delivered a 100* speed-up for the same company.
It was grand to meet a lot of new faces at the group, along with a few people I’ve met before at PyCons (hi Ben! Giles!). Making contact with Didrik of Enthought was a real pleasure too. I hope to visit again.
Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight and in his Mor Consulting; sign up for Data Science tutorials in London. He also founded the image and text annotation API Annotate.io, lives in London and is a consumer of fine coffees.