About


This is Ian Ozsvald's blog (@IanOzsvald). I'm an entrepreneurial geek, a Data Science/ML/NLP/AI consultant, founder of the Annotate.io social media mining API, author of O'Reilly's High Performance Python book, co-organiser of PyDataLondon, co-founder of the SocialTies App, author of the A.I.Cookbook and The Screencasting Handbook, a Pythonista, co-founder of ShowMeDo and FivePoundApps, and also a Londoner. Here's a little more about me.


21 February 2015 - 21:05 Data-Science stuff I’m doing this year

2014 was an interesting year; 2015 looks to be even richer. Last year I got to publish my High Performance Python book, help co-organise the rather successful PyDataLondon2014 conference, teach High Performance Python in public (slides online) and in private, keynote on The Real Unsolved Problems in Data Science and start my ModelInsight AI agency. That was a busy year (!) but deeply rewarding.

My High Performance Python book, published with O’Reilly in 2014


This year our consulting is branching out – we’ve already helped a new medical start-up define their data offering, I’m mentoring another data scientist (to avoid 10 years of my mistakes!) and we’re deploying new text mining IP for existing clients. We’ve got new private training this April for Machine Learning (scikit-learn) and High Performance Python (announce list), and Spark is on my radar.

Apache Spark maxing out 8 cores on my laptop

Python’s role in Data Science has grown massively (I think we have 5 euro-area Python-Data-Science conferences this year) and I’m keen to continue building the London and European scenes.

I’m particularly interested in dirty data and ways we can efficiently clean it up (hence my Annotate.io lightning talk a week back). If you have problems with dirty data I’d love to chat and maybe I can share some solutions.

For PyDataLondon-the-conference we’re getting closer to fixing our date (late May/early June); join this announce list to hear when we have our key dates. In a few weeks we have our 10th monthly PyDataLondon meetup – you should join the group, as I write up each event for those who can’t attend, so you’ll always know what’s going on. To keep the meetup from degenerating into a shiny-suit-fest I’ve set up a separate data science jobs list; I curate it and only send relevant contract/permie job announces.

This year I hope to be at PyDataParis, PyConSweden, PyDataLondon, EuroSciPy and PyConUK – do come say hello if you’re around!


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight; sign up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

5 Comments | Tags: ArtificialIntelligence, Data science, High Performance Python Book, Life, pydata, Python

19 February 2015 - 11:35 Starting Spark 1.2 and PySpark (and ElasticSearch and PyPy)

The latest PySpark (1.2) is feeling genuinely useful; late last year I had a crack at running Apache Spark 1.0 with PySpark and it felt a bit underwhelming (too much fanfare, too many bugs). The media around Spark continues to grow – today’s Hacker News thread on the new DataFrame API, for example, has a lot of positive discussion – and lazily evaluated pandas-like DataFrames built from a wide variety of data sources feel very powerful. Continuum have also just announced PySpark+GlusterFS.

One surprising fact is that Spark is Python 2.7 only at present; feature request 4897 covers Python 3 support (go vote!) and requires some cloud-pickling fixes first. Using the end-of-the-line Python 2 release feels a bit daft. I’m using Linux Mint 17.1, which is based on Ubuntu 14.04 64-bit, with the pre-built spark-1.2.0-bin-hadoop2.4.tgz from their downloads page, and ‘it just works’. Using my global Python 2.7.6 and an additional IPython install (via apt-get):

spark-1.2.0-bin-hadoop2.4 $ IPYTHON=1 bin/pyspark
...
IPython 1.2.1 -- An enhanced Interactive Python.
...
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.2.0
      /_/

Using Python version 2.7.6 (default, Mar 22 2014 22:59:56)
SparkContext available as sc.
>>>

Note the IPYTHON=1 – without it you get a vanilla Python shell; with it, pyspark will use IPython if it is on the search path. IPython lets you interactively explore the “sc” Spark context using tab completion, which really helps at the start. To run one of the included demos (e.g. wordcount) you can use the spark-submit script:

spark-1.2.0-bin-hadoop2.4/examples/src/main/python 
$ ../../../../bin/spark-submit wordcount.py kmeans.py  # count words in kmeans.py

For my use case we were initially after sparse matrix support; sadly sparse matrices are only available for Scala/Java at present. By stepping back from my sklearn/scipy sparse solution for a minute and thinking a little more map/reduce, I realised I could just as easily split the problem into a set of counts, and that parallelises very well in Spark (though I’d love to see sparse matrices in PySpark!).
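As a rough illustration of that counts-based reshaping, here’s a minimal sketch you could run in the pyspark shell (sc is already defined there); the file name and whitespace tokenisation are my own stand-ins, not the client problem:

lines = sc.textFile("documents.txt")  # assumed input: one document per line
pairs = lines.flatMap(lambda line: line.lower().split()) \
             .map(lambda token: (token, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)  # a single shuffle, scales well across workers
print(counts.takeOrdered(10, key=lambda kv: -kv[1]))  # the ten most common tokens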

I’m doing this with my contract-recruitment client via my ModelInsight as we automate recruitment; there’s a press release out today outlining a bit of what we do. One of the goals is to move to a more unified research+deployment approach: rather than lots of tooling in R&D which we then streamline for production, we hope to share similar tooling between R&D and production so that deployment and different scales of data are ‘easier’.

I tried the latest PyPy 2.5 (running Python 2.7) and it ran PySpark just fine. Under PyPy 2.5 a prime-search example takes 6s vs 39s with vanilla Python 2.7, so in-memory processing using RDDs rather than numpy objects might be quick and convenient (has anyone trialled this?). To run using PyPy set PYSPARK_PYTHON:

$ PYSPARK_PYTHON=~/pypy-2.5.0-linux64/bin/pypy ./pyspark
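For flavour, here’s a minimal sketch of the sort of interpreter-bound prime search that benefits from PyPy’s JIT (my own illustration, not the exact benchmark behind the 6s/39s timings above), again run inside the pyspark shell:

def is_prime(n):
    # trial division - a pure-Python loop, exactly what PyPy accelerates
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# xrange as this is Python 2.7; spread the search over 8 partitions
primes = sc.parallelize(xrange(2, 1000000), 8).filter(is_prime)
print(primes.count())  # 78498 primes below 1,000,000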

I’m used to working with Anaconda environments, and for Spark I’ve set up a Python 2.7.8 environment ("conda create -n spark27 anaconda python=2.7") with IPython 2.2.0. Whichever Python is on the search path, or is specified at the command line, is used by the pyspark script.

The next challenge to solve was integration with ElasticSearch for storing outputs. The official docs are a little tough to read as a non-Java/non-Hadoop programmer and they don’t mention PySpark integration, thankfully there’s a lovely 4-part blog sequence which “just works”:

  1. ElasticSearch and Python (no Spark but it sets the groundwork)
  2. Reading & Writing ElasticSearch using PySpark
  3. Sparse Matrix Multiplication using PySpark
  4. Dense Matrix Multiplication using PySpark

To summarise the above, here’s a trivial example that outputs to ElasticSearch from a local dictionary with no other data dependencies:

$ wget http://central.maven.org/maven2/org/elasticsearch/elasticsearch-hadoop/2.1.0.Beta2/elasticsearch-hadoop-2.1.0.Beta2.jar
$ ~/spark-1.2.0-bin-hadoop2.4/bin/pyspark --jars elasticsearch-hadoop-2.1.0.Beta2.jar
>>> res = sc.parallelize([1, 2, 3, 4])
>>> res2 = res.map(lambda x: ('key', {'name': str(x), 'sim': 0.22}))
>>> res2.collect()
[('key', {'name': '1', 'sim': 0.22}),
 ('key', {'name': '2', 'sim': 0.22}),
 ('key', {'name': '3', 'sim': 0.22}),
 ('key', {'name': '4', 'sim': 0.22})]

>>> res2.saveAsNewAPIHadoopFile(path='-',
...     outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
...     keyClass="org.apache.hadoop.io.NullWritable",
...     valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
...     conf={"es.resource": "myindex/mytype"})

The above creates a list of 4 dictionaries and then sends them to a local ES store, using “myindex” and “mytype” for each new document. Before I found the above I used this older solution, which also worked just fine.
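For completeness, reading the documents back out should look roughly like this, using the matching EsInputFormat from the same elasticsearch-hadoop jar (a sketch based on the blog series above, assuming the same local ES index):

es_rdd = sc.newAPIHadoopRDD(
    inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf={"es.resource": "myindex/mytype"})  # same index/type as the write
print(es_rdd.first())  # (document_id, {u'name': ..., u'sim': ...})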

Running the local interactive session using a mock cluster was pretty easy. The docs for spark-standalone are a good start:

sbin $ ./start-master.sh
# the log (the full path is reported by the script so you could `tail -f` it) shows
# 15/02/17 14:11:46 INFO Master:
# Starting Spark master at spark://ian-Latitude-E6420:7077
# which gives the link to the browser view of the master machine, which is
# probably on :8080 (as shown here http://www.mccarroll.net/blog/pyspark/).

# Next start a single worker:
sbin $ ./start-slave.sh 0 spark://ian-Latitude-E6420:7077
# and the logs will show a link to another web page for each worker
# (probably starting at :4040).

# Next you can start a PySpark IPython shell for local experimentation:
$ IPYTHON=1 ~/data/libraries/spark-1.2.0-bin-hadoop2.4/bin/pyspark \
    --master spark://ian-Latitude-E6420:7077
# (and similarly you could run a spark-shell to do the same with Scala)

# Or we can run their demo code using the master node you've configured:
$ ~/spark-1.2.0-bin-hadoop2.4/bin/spark-submit \
    --master spark://ian-Latitude-E6420:7077 \
    ~/spark-1.2.0-bin-hadoop2.4/examples/src/main/python/wordcount.py README.txt

Note if you tried to run the above spark-submit (which specifies the --master to connect to) and you didn’t have a master node, you’d see log messages like:

15/02/17 14:14:25 INFO AppClient$ClientActor:
  Connecting to master spark://ian-Latitude-E6420:7077...
15/02/17 14:14:25 WARN AppClient$ClientActor:
  Could not connect to akka.tcp://sparkMaster@ian-Latitude-E6420:7077:
  akka.remote.InvalidAssociation:
  Invalid address: akka.tcp://sparkMaster@ian-Latitude-E6420:7077
15/02/17 14:14:25 WARN Remoting: Tried to associate with
  unreachable remote address
  [akka.tcp://sparkMaster@ian-Latitude-E6420:7077].
  Address is now gated for 5000 ms, all messages to this address will
  be delivered to dead letters.
  Reason: Connection refused: ian-Latitude-E6420/127.0.1.1:7077

If you had a master node running but you hadn’t set up a worker node, then after doing the spark-submit it’ll hang for 5+ seconds and then start to report:

15/02/17 14:16:16 WARN TaskSchedulerImpl: 
 Initial job has not accepted any resources; 
 check your cluster UI to ensure that workers are registered and 
 have sufficient memory

and if you google that without thinking about the worker node then you’d come to this diagnostic page, which leads down a small rabbit hole…

Stuff I’d like to know:

  • How do I read easily from MongoDB using an RDD (in Hadoop format) in PySpark (do you have a link to an example?)
  • Who else in London is using (Py)Spark? Maybe catch up over a coffee?


10 Comments | Tags: ArtificialIntelligence, Data science, Life, pydata, Python

8 February 2015 - 23:54 Lightning talk at PyDataLondon for Annotate

At this week’s PyDataLondon I gave a 5-minute lightning talk on the Annotate text-cleaning service for data scientists that I recently made live. It was good to have a couple of chats afterwards with others who are similarly bored of cleaning their text data.

The goal is to make it quick and easy to clean data so you don’t have to figure out a method yourself. Behind the scenes it uses ftfy to fix broken unicode, unidecode to remove foreign characters if needed, and a mix of regular expressions written on the fly depending on the data submitted.
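As a rough sketch of those first two stages (ftfy and unidecode are the real libraries; the sample string is my own):

from ftfy import fix_text
from unidecode import unidecode

raw = u"The clientâ€™s rÃ©sumÃ©"  # typical mojibake from a bad decode
fixed = fix_text(raw)             # -> u"The client’s résumé"
print(unidecode(fixed))           # -> "The client's resume" (ASCII-safe)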

I suspect that adding some datetime fixers will be a next step (dealing with UK data is a pain when tools often assume US format, reading 1/3/13 as 3rd January rather than 1st March); maybe a fact extractor will follow.
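A minimal illustration of that ambiguity using dateutil (an obvious candidate fixer, though I’m not saying it’s what Annotate will end up using):

from dateutil import parser

print(parser.parse("1/3/13"))                 # US default: 2013-01-03 (3rd January)
print(parser.parse("1/3/13", dayfirst=True))  # UK reading: 2013-03-01 (1st March)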




5 Comments | Tags: Data science, pydata, Python

18 January 2015 - 19:40 Data Science Jobs UK (ModelInsight) – Python Jobs Email List

I’ve had people asking me how they can find data scientists in London, and through our PyDataLondon meetup we’ve had members announcing jobs. There’s no central location for data science jobs, so I’ve put together a new list (administered through my ModelInsight agency).

Sign-up to the list here: Data Science Jobs UK (ModelInsight)

  • Aimed at Data Science jobs in the UK
  • Mostly Python (maybe R, Matlab, Julia if relevant)
  • It’ll include Permie and Contract jobs

The list will only work if you can trust it so:

  • Your email is private (it is never shared)
  • The list is on MailChimp so you can unsubscribe at any time
  • We vet the job posts and only forward them if they’re in the interests of the list
  • Nobody else can post into the list (all jobs are forwarded just by us)
  • It’ll be low volume and all posts will be very relevant

Sign-up to the list here: Data Science Jobs UK (ModelInsight)

Obviously if you’re interested in joining the London Python data science community then come along to our PyDataLondon meetups.



6 Comments | Tags: Data science, pydata, Python

10 January 2015 - 14:04 A first approach to automatic text data cleaning

In October I gave the opening keynote at PyConIreland on The Real Unsolved Problems in Data Science. One of the topics I covered was poor-quality data; by some estimates data cleaning occupies 50-80% of a data scientist’s time.

Personally I’ve just spent the better part of the last year figuring out ways to convert poorly-represented company names on 100,000s of CVs/resumes into a cleaned subset for my contract-recruitment client (via my ModelInsight). This enables us to build ranking engines for contract job applicants (and I’ll note happily that it works rather well!), but it only works because we put so much effort into cleaning the raw data. Huge investments like this are expensive in time and money, which carries risk for a client. Tools used include NLTK, ftfy, Pandas, scikit-learn and the re module, all in Python 3.4.

During the keynote I asked if anyone had tooling they could open up to make this sort of task easier. I didn’t get a lot of feedback on that, so I’ve had a crack at one of the problems I’d discussed, over at my annotate.io.

The mapping of raw input data to a lower-dimensional output isn’t trivial, but it felt like something that might be automated. Let’s say you scraped job adverts (e.g. using import.io on adzuna, both based in London). The salary field for the jobs will be messy: it’ll include strings like “To 53K w/benefits”, “30000 OTE plus bonus” and maybe even non-numeric descriptions like “Forty two thousand GBP”. These strings are collated from a diverse set of job adverts, each typed by hand by a human, and there’s no standard format.

Let’s say we’re after “53000”, “30000”, “42000” as an output. We can expand contractions (“<nbr>K” -> “<nbr>000”), convert written numbers into an integer and then extract the number. If you’re used to this sort of process then you might expect to spend 30-60 minutes writing unit tests and support code; when you come to the next challenge, you’ll repeat that hour or so of work. If you’re not sure how you want your output data to look you might spend considerably longer trying transformation ideas. What if we could short-circuit this development process and just focus on “what we have” and “what we want”?
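To make that concrete, here’s a hedged sketch of the hand-written approach (the tiny number-word table and the helper are mine, purely for illustration):

import re

WORDS = {"forty": 40, "two": 2, "thousand": 1000}  # toy table, far from complete

def normalise_salary(text):
    t = text.lower()
    # expand "<nbr>K" contractions: "53k" -> "53000"
    t = re.sub(r"(\d+)\s*k\b", lambda m: m.group(1) + "000", t)
    # crude written-number handling: "forty two thousand" -> 42000
    tokens = [WORDS[w] for w in t.split() if w in WORDS]
    if tokens and not re.search(r"\d", t):
        total, acc = 0, 0
        for n in tokens:
            if n == 1000:
                total += (acc or 1) * 1000
                acc = 0
            else:
                acc += n
        return str(total + acc)
    m = re.search(r"\d{4,6}", t)  # finally, extract the number
    return m.group(0) if m else None

for s in ["To 53K w/benefits", "30000 OTE plus bonus", "Forty two thousand GBP"]:
    print(s, "->", normalise_salary(s))  # 53000, 30000, 42000

Each new data source means revisiting a helper like this, which is exactly the hour of work I’d like to short-circuit.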

More complex tasks include transforming messy company name strings, fixing broken unicode and converting unicode to ASCII (which can ease indexing for search) and identifying tokens that need to be stripped or transformed. There’s a second example over at Annotate and more will follow. I’m about to start work on ‘fact extraction’ – given a block of text (e.g. a description field) can we reliably extract a single fact that’s written in a variety of ways?

Over at Annotate.io I’ll soon be uploading the first version of a learning text transformer. It takes a set of example input->output mappings, learns a transformation sequence that minimises the transformation distance (hopefully to a distance of 0, meaning it has solved the problem), and can then apply that transformation sequence to future text you pass into the system.
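I can’t show the real internals here, but a toy version of the idea (greedy search over a tiny, made-up transform library, scored with a difflib-based distance) might look like this; it’s purely illustrative, not Annotate’s algorithm:

import re
import difflib

TRANSFORMS = {
    "strip": lambda s: s.strip(),
    "lower": lambda s: s.lower(),
    "expand_k": lambda s: re.sub(r"(\d+)\s*[kK]\b", lambda m: m.group(1) + "000", s),
    "digits_only": lambda s: "".join(re.findall(r"\d", s)),
}

def distance(a, b):
    # 0.0 means the strings match exactly
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def learn_sequence(pairs, max_steps=4):
    seq = []
    current = [inp for inp, _ in pairs]
    targets = [out for _, out in pairs]
    best = sum(distance(c, t) for c, t in zip(current, targets))
    for _ in range(max_steps):
        # pick the single transform that most reduces the total distance
        name, cand = min(
            ((n, [f(c) for c in current]) for n, f in TRANSFORMS.items()),
            key=lambda nc: sum(distance(c, t) for c, t in zip(nc[1], targets)))
        score = sum(distance(c, t) for c, t in zip(cand, targets))
        if score >= best:
            break  # no transform helps any more
        seq, current, best = seq + [name], cand, score
    return seq, best

examples = [("To 53K", "53000"), ("30K", "30000")]
print(learn_sequence(examples))  # (['expand_k', 'digits_only'], 0.0)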

The API is JSON-based and will come with Python examples; there’s a mailing list you can join on the site for announcements. I’m specifically interested in the kinds of problems you might want to put into this system, so please get in contact if you’re curious.

I’m also hoping to work on another data cleaning tool later. If you want to talk about this at a future PyDataLondon meetup, I’d love to chat.



6 Comments | Tags: ArtificialIntelligence, Data science, Python

25 November 2014 - 19:11 We’re running more Data Science Training in 2015 Q1 in London

A couple of weeks ago Bart and I ran two very successful training courses in London through my ModelInsight: the first introduced data science, using pandas and numpy to build a recommender engine; the second was a two-day course on High Performance Python (and yes, that was somewhat based on my book, with a lot of hands-on exercises). Based on feedback from those courses we’re looking to introduce up to 5 courses at the start of next year.

If you’d like to hear about our London data science training then sign up to our (very low volume) announce list. I posted an anonymous survey to the mailing list; if you’d like to vote on the courses we should run then jump over here (no sign-up, only 1 question, no commitment).

If you’d like to talk about these in person then you can find me (probably on-stage) co-running the PyDataLondon meetups.

Here are the synopses for each of the proposed courses:

“Playing with data – pandas and matplotlib” (1 day)

Aimed at beginner Pythonista data scientists who want to load, manipulate and visualise data
We’ll use pandas, with many practical exercises on different sorts of data (including messy data that needs fixing), to manipulate, visualise and join data. You’ll be able to work with your own data sets after this course; we’ll also look at other visualisation tools like Seaborn and Bokeh. This will suit people who haven’t used pandas and want a practical introduction, such as data journalists, engineers and semi-technical managers.

“Building a recommender system with Python” (1 day)

Aimed at intermediate Pythonistas who want to use pandas and numpy to build a working recommender engine, this covers everything from handling the data through to delivering a working data science product. You already know a little linear algebra and you’ve used numpy lightly; you want to see how to deploy a working data science product as a microservice (Flask) that could reliably be put into production.
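As a flavour of the deployment end-point this builds towards (a minimal sketch; the lookup table stands in for the real trained model):

from flask import Flask, jsonify

app = Flask(__name__)

# stand-in for a recommender trained with numpy/pandas earlier in the day
RECOMMENDATIONS = {"alice": ["book_a", "book_b"], "bob": ["book_c"]}

@app.route("/recommend/<user_id>")
def recommend(user_id):
    return jsonify(user=user_id, items=RECOMMENDATIONS.get(user_id, []))

if __name__ == "__main__":
    app.run(port=5000)  # then: curl http://localhost:5000/recommend/alice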

“Statistics and Big Data using scikit-learn” (2 days)

Aimed at beginner/intermediate Pythonistas with some mathematical background and a desire to learn everyday statistics and to start with machine learning
Day 1 – Probability, distributions, Frequentist and Bayesian approaches, Inference and Regression, Experiment Design – part discussion and part practical
Day 2 – Applying these approaches with scikit-learn to everyday problems; examples may include (note: *examples may change*, this just gives a flavour) Bayesian spam detection, predicting political campaigns, quality testing, clustering and weather forecasting; tools will include Statsmodels and matplotlib.

“Hands on with Scikit-Learn” (5 days)

Aimed at intermediate Pythonistas who need a practical and comprehensive introduction to machine learning in Python, you’ve already got a basic statistical and linear algebra background
This course will cover all the terminology and stages that make up the machine learning pipeline, and the fundamental skills needed to perform machine learning successfully. Aided by many hands-on labs with Python scikit-learn, the course will enable you to understand the basic concepts, become confident in applying the tools and techniques, and gain a firm foundation from which to dig deeper and explore more advanced methods.

“High Performance Python” (2 days)

Aimed at intermediate Pythonistas whose code is too slow
Day 1 – Profiling (CPU and RAM), compiling with Cython, using Numba, PyPy and Pythran (all the way through to using OpenMP)
Day 2 – Going multicore (multiprocessing) and multi-machine (IPython parallel), fitting more into RAM, probabilistic counting, storage engines, Test Driven Development and several debugging exercises
A mix of theory and practical exercises; you’ll leave able to use the main Python tools to confidently and reliably make your code run faster



1 Comment | Tags: Data science, Python

26 August 2014 - 21:35 Why are technical companies not using data science?

Here’s a quick question. How come more technical companies aren’t making use of data science? By “technical” I mean any company with data and the smarts to spot that it has value, by “data science” I mean any technical means to exploit this data for financial gain (e.g. visualisation to guide decisions, machine learning, prediction).

I’m guessing that it comes down to an economic question – either it isn’t as valuable as some other activity (making mobile apps? improving UX on the website? paid marketing? expanding sales to new territories?) or it is perceived as being valuable but cannot be exploited (maybe due to lack of skills and training or data problems).

I’m thinking about this for my upcoming keynote at PyConIreland, would you please give me some feedback in the survey below (no sign-up required)?

To be clear – this is an anonymous survey, I’ll have no idea who gives the answers.


If the above is interesting then note that we’ve got a data science training list where we make occasional announcements about our upcoming training, and we have two upcoming training courses. We also discuss these topics at our PyDataLondon meetups. I also have a slightly longer survey (it’ll take you 2 minutes, no sign-up required); I’ll be discussing those results at the next PyDataLondon, so please share your thoughts.



3 Comments | Tags: ArtificialIntelligence, Data science, pydata, Python

20 August 2014 - 21:24 Data Science Training Survey

I’ve put together a short survey to figure out what’s needed for Python-based Data Science training in the UK. If you want to be trained in strong data science, analysis and engineering skills please complete the survey; it doesn’t need any sign-up and will take just a couple of minutes. I’ll share the results at the next PyDataLondon meetup.

If you want training you probably want to be on our training announce list, this is a low volume list (run by MailChimp) where we announce upcoming dates and suggest topics that you might want training around. You can unsubscribe at any time.

I’ve written about the two courses that run in October through ModelInsight: one focuses on improving skills around data science using Python (including numpy, scipy and TDD), the second on high performance Python (I’ve now finished writing O’Reilly’s High Performance Python book). Both courses focus on practical skills; you’ll walk away with working systems and a stronger understanding of key Python skills. Your developer and debugging skills will be stronger, and in the longer run you’ll develop stronger software with fewer defects.

If you want to talk about this, come have a chat at the next PyData London meetup or in the pub after.



3 Comments | Tags: Data science, pydata, Python

1 August 2014 - 13:13 Python Training courses: Data Science and High Performance Python coming in October

I’m pleased to say that via our ModelInsight we’ll be running two Python-focused training courses in October. The goal is to give you strong new research & development skills; they’re aimed at folks in companies but would suit folks in academia too. UPDATE: the training courses are ready to buy (1 Day Data Science, 2 Day High Performance).

UPDATE: we have a <5 min anonymous survey which helps us learn your needs for Data Science training in London – please click through and answer the few questions so we know what training you need.

“Highly recommended – I attended in Aalborg in May … upcoming Python DataSci/HighPerf training courses” – @ThomasArildsen

These and future courses will be announced on our London Python Data Science Training mailing list, sign-up for occasional announces about our upcoming courses (no spam, just occasional updates, you can unsubscribe at any time).

Intro to Data science with Python (1 day) on Friday 24th October

Students: Basic to Intermediate Pythonistas (you can already write scripts and you have some basic matrix experience)

Goal: Solve a complete data science problem (building a working and deployable recommendation engine) by working through the entire process – using numpy and pandas, applying test driven development, visualising the problem, deploying a tiny web application that serves the results (great for when you’re back with your team!)

  • Learn basic numpy, pandas and data cleaning
  • Be confident with Test Driven Development and debugging strategies
  • Create a recommender system and understand its strengths and limitations
  • Use a Flask API to serve results
  • Learn Anaconda and conda environments
  • Take home a working recommender system that you can confidently customise to your data
  • £300 including lunch, central London (24th October)
  • Additional announces will come via our London Python Data Science Training mailing list
  • Buy your ticket here

High Performance Python (2 day) on Thursday+Friday 30th+31st October

Students: Intermediate Pythonistas (you need higher performance for your Python code)

Goal: learn high performance techniques for performant computing, a mix of background theory and lots of hands-on pragmatic exercises

  • Profiling (CPU, RAM) to understand bottlenecks
  • Compilers and JITs (Cython, Numba, Pythran, PyPy) to pragmatically run code faster
  • Learn R&D and engineering approaches to efficient development
  • Multicore and clusters (multiprocessing, IPython parallel) for scaling
  • Debugging strategies, numpy techniques, lowering memory usage, storage engines
  • Learn Anaconda and conda environments
  • Take home years of hard-won experience so you can develop performant Python code
  • Cost: £600 including lunch, central London (30th & 31st October)
  • Additional announces will come via our London Python Data Science Training mailing list
  • Buy your ticket here

The High Performance course is built off of many years teaching and talking at conferences (including PyDataLondon 2013, PyCon 2013, EuroSciPy 2012) and in companies along with my High Performance Python book (O’Reilly). The data science course is built off of techniques we’ve used over the last few years to help clients solve data science problems. Both courses are very pragmatic, hands-on and will leave you with new skills that have been battle-tested by us (we use these approaches to quickly deliver correct and valuable data science solutions for our clients via ModelInsight). At PyCon 2012 my students rated me 4.64/5.0 for overall happiness with my High Performance teaching.

“@ianozsvald [..] Best tutorial of the 4 I attended was yours. Thanks for your time and preparation!” – @cgoering

We’d also like to know which other courses you’d like to learn, we can partner with trainers as needed to deliver new courses in London. We’re focused around Python, data science, high performance and pragmatic engineering. Drop me an email (via ModelInsight) and let me know if we can help.

Do please join our London Python Data Science Training mailing list to be kept informed about upcoming training courses.



5 Comments | Tags: Data science, High Performance Python Book, Python

26 June 2014 - 14:08 PyDataLondon second meetup (July 1st)

Our second PyDataLondon meetup will be running on Tuesday July 1st at Pivotal in Shoreditch. The announce went out to the meetup group and the event was at capacity within 7 hours – if you’d like to attend future meetups please join the group (and the wait-list is open for our next event). Our speakers:

  1. Kyran Dale on “Getting your Python data onto a Browser” – Python+javascript from ex-academic turned Brighton-based freelance Javascript Pythonic whiz
  2. Laurie Clark-Michalek – “Defence of the Ancients Analysis: Using Python to provide insight into professional DOTA2 matches” – game analysis using the full range of Python tools from data munging, high performance with Cython and visualisation

We’ll also have several lightning talks, these are described on the meetup page.

We’re open to submissions for future talks and lightning talks, please send us an email via the meetup group (and we might have room for 1 more lightning talk for the upcoming pydata – get in contact if you’ve something interesting to present in 5 minutes).

Some other events might interest you – Brighton has a Data Visualisation event, and recently Yves Hilpisch ran a QuantFinance training session (the slides are available). Also remember PyDataBerlin in July and EuroSciPy in Cambridge in August.




No Comments | Tags: Data science, Life, pydata, Python