About


This is Ian Ozsvald's blog. I'm an entrepreneurial geek, a Data Science/ML/NLP/AI consultant, founder of the Annotate.io social media mining API, author of O'Reilly's High Performance Python book, co-organiser of PyDataLondon, co-founder of the SocialTies App, author of the A.I.Cookbook, author of The Screencasting Handbook, a Pythonista, co-founder of ShowMeDo and FivePoundApps and also a Londoner. Here's a little more about me.


21 May 2010 - 14:44 Headroid1 – a face tracking robot head

The video below introduces Headroid1. This face-tracking robot will grow into a larger system that can follow people's faces, detect emotions and react to engage with the visitor.

The above system uses OpenCV's face detection (via the Python bindings and facedetect.py) to work out whether the face is in the centre of the screen; if the camera needs to move, it talks via pySerial to BotBuilder's ServoBoard to pan or tilt the camera until the face is back in the centre of the screen.
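
To make that loop concrete, here's a minimal sketch of the pan logic, not the actual Headroid1 code: it uses the modern cv2 API rather than the old bindings and facedetect.py, and the cascade path, serial port, dead zone and step sizes are all assumptions.

import cv2      # modern OpenCV bindings; the post used the older bindings plus facedetect.py
import serial   # pySerial, for talking to the ServoBoard

CASCADE_PATH = "haarcascade_frontalface_default.xml"   # assumption: path to OpenCV's frontal-face cascade
cascade = cv2.CascadeClassifier(CASCADE_PATH)
camera = cv2.VideoCapture(0)                           # the Philips SPC900NC, or any webcam
board = serial.Serial("/dev/ttyUSB0", 9600)            # port name and baud rate are assumptions

pan_angle = 90                                         # start with servo A centred

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # track the largest face
        offset = (x + w / 2.0) - frame.shape[1] / 2.0         # horizontal distance from the screen centre
        if abs(offset) > 20:                                  # dead zone: stable picture when centred
            step = 3 if abs(offset) > 100 else 1              # move faster when further off-centre
            pan_angle += step if offset < 0 else -step        # sign may need flipping for your servo
            pan_angle = max(0, min(180, pan_angle))
            board.write(("%da" % pan_angle).encode())         # e.g. '95a' turns servo A to 95 degrees

A tilt servo would follow the same pattern using the vertical offset.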

Update – see Building A Face Tracking Robot In An Afternoon for full details to build your own Headroid1.

Headroid is pretty good at tracking faces as long as there's no glare; he can see people from 1 foot up to about 8 feet from the camera. He moves at different speeds depending on your distance from the centre of the screen and stops with a stable picture when you're back at the centre of his attention. The smile/frown detector which will follow will add another layer of behaviour.

Heather (founder of Silicon Beach Training) used Headroid1 (called Robocam in her video) at Likemind coffee this morning; she's written up the event.

Andy White (@doctorpod) also did a quick 2 minute MP3 interview with me via audioboo.

Later over coffee Danny Hope and I discussed (with Headroid looking on) some ideas for tracking people, watching for attention, monitoring for frustration and concentration, and generally playing with ways people might interact with this little chap.

The above was built in collaboration with BuildBrighton; there's some discussion about it in this thread. The camera is a Philips SPC900NC, which works using macam on my Mac (and runs on Linux and Win too). The ServoBoard has a super-simple interface – you send it commands like '90a' (turn servo A to 90 degrees) as text and 'it just works' – it makes interactive testing a doddle.
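
That interactive testing can be as simple as a few lines at a Python prompt (the port name and baud rate here are guesses for a typical USB serial adapter):

>>> import serial
>>> board = serial.Serial("/dev/ttyUSB0", 9600)
>>> board.write(b"90a")    # turn servo A to 90 degrees
>>> board.write(b"120a")   # pan servo A a little further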

Update – the blog for the A.I. Cookbook is now active, more A.I. and robot updates will occur there.

Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

9 Comments | Tags: ArtificialIntelligence, Python

17 May 2010 - 21:06 Extracting keyword text from screencasts with OCR

Last week I played with the Optical Character Recognition system tesseract applied to video data. The goal – extract keywords from the video frames so Google has useful text to index.

I chose to work with ShowMeDo's screencasts as many show programming in action – there's great keyword information in these videos that can be exposed for Google to crawl. This builds on my recent OCR for plaques project.

I'll blog about the full system in the future; this is a quick how-to if you want to try it yourself.

First – get a video. I downloaded video 10370000.flv from Introducing numpy arrays (part 1 of 11).

Next – extract a frame. Using ffmpeg I extracted a frame at 240 seconds as a JPG:

ffmpeg -i 10370000.flv -y -f image2 -ss 240 -sameq -t 0.001  10370000_240.jpg

Tesseract needs TIF input files (not JPGs) so I used GIMP to convert to TIF.
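
If you'd rather script that step than open GIMP, the Python Imaging Library can do the same conversion; a two-line sketch using the frame extracted above:

from PIL import Image

Image.open("10370000_240.jpg").save("10370000_240.tif")   # PIL picks the TIFF format from the extension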

Finally I applied tesseract to extract text:

tesseract 10370000_30.tif 10370000_30 -l eng

This yields:

than rstupr .
See Also
linspate : Evenly spaced numbers with  careful handling of endpoints.
grid: Arrays of evenly spared numbers  in Nrdxmensmns
grid: Grid—shaped arrays of evenly spaced numbers in  Nwiunensxnns
Examples
>>> np.arange(3)
¤rr¤y([¤. 1.  2])
>>> np4arange(3.B)
array([ B., 1., 2.])
>>>  np.arange(3,7)
array([3, A, S, 6])
>>> np.arange(3,7,?)
·=rr··¤y<[3.  5])
III
Ill

Obviously there’s some garbage in the above but there are also a lot of useful keywords!

To clean up the extraction I’ll be experimenting with:

  • Using the original AVI video rather than the FLV (which contains compression artefacts that reduce the visual quality); the FLV is also watermarked with ShowMeDo's logo, which hurts some images
  • Cleaning the image – perhaps applying some thresholding or highlighting to make the text stand out (a quick sketch follows this list); possibly the green text is causing a problem in this image
  • Training tesseract to read the terminal fonts commonly found in ShowMeDo videos
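
As a rough idea of the thresholding experiment mentioned above, here's a PIL sketch that converts a frame to greyscale and binarises it so the text stands out; the threshold value is an assumption that would need tuning per video (and inverting for light text on a dark terminal background):

from PIL import Image

img = Image.open("10370000_240.jpg").convert("L")            # greyscale
THRESHOLD = 128                                              # assumption: tune per video
cleaned = img.point(lambda p: 255 if p > THRESHOLD else 0)   # binarise so text stands out
cleaned.save("10370000_240_clean.tif")                       # TIF ready for tesseract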

I tried four images for this test; in all cases useful text was extracted. I suspect that by rejecting short words (fewer than four characters) and keeping only words that appear at least twice in the video, I'll have a clean set of useful keywords.
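
That word filter is only a few lines; a sketch, assuming the tesseract output for each sampled frame has been read into a list of strings:

import re
from collections import Counter

def extract_keywords(frame_texts, min_length=4, min_count=2):
    """Keep words of at least min_length characters that appear at least min_count times."""
    words = []
    for text in frame_texts:
        words.extend(w.lower() for w in re.findall(r"[A-Za-z]+", text))
    counts = Counter(words)
    return sorted(w for w, c in counts.items() if len(w) >= min_length and c >= min_count)

# e.g. keywords = extract_keywords(open(name).read() for name in frame_text_files)
# where frame_text_files lists the per-frame tesseract outputs such as 10370000_30.txt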

Update – the blog for the A.I. Cookbook is now active, more A.I. and robot updates will occur there.



1 Comment | Tags: ArtificialIntelligence, Life, Programming, Screencasting, ShowMeDo

10 May 2010 - 18:59 “Artificial Intelligence in the Real World” lecture at Sussex University 2010

I’m chuffed to have delivered the second version of my “A.I. in the real world” lecture (I gave it last May too) to 2nd year undergraduates at Sussex University this afternoon.

The slides are below; I cover:

  • A.I. that I’ve seen and have been involved with in the last 10 years
  • Some project ideas for undergraduates
  • How to start a new tech business/project in A.I.

In the talk I also showed or talked about:

Slides: Artificial Intelligence in the Real World – May 2010 Sussex University Guest Lecture

Here’s the YouTube video showing the Grand Challenge entries:

Update – the blog for the A.I. Cookbook is now active, more A.I. and robot updates will occur there.



No Comments | Tags: ArtificialIntelligence, projectbrightonblogs, Python, ShowMeDo, sussexdigital, SussexUniversity, £5 App Meet