eye of eyes 5

Using AndreaMosaic and Zoomify, I was able to create a zoomable interface that at least allows you to look at the individual eyes.

Since Zoomify (and other such interfaces) uses image pyramids, the zoomable interface ends up as a bunch of small image files, so I could not host it on this WordPress site (or didn’t have the time/inclination to figure out how).
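(To give a sense of why there end up being so many files: here’s a rough sketch of how a deep-zoom style pyramid grows, assuming square 256-pixel tiles and halving the image at each level. The exact tile size and layout vary by tool, so treat the numbers as illustrative.)

```python
import math

def pyramid_tile_count(width, height, tile=256):
    """Count the tiles in a pyramid that halves the image at each
    level until the whole thing fits in a single tile."""
    total, levels, w, h = 0, 0, width, height
    while True:
        cols, rows = math.ceil(w / tile), math.ceil(h / tile)
        total += cols * rows
        levels += 1
        if cols == 1 and rows == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return levels, total

# e.g. a hypothetical 12000 x 9000 pixel mosaic
print(pyramid_tile_count(12000, 9000))  # -> (7, 2276)
```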

I’ve hosted it here via Coral CDN.

I would like to use OpenZoom for this, maybe at a later point.

For now, this project is closed. Thanks, Aishwarya, for lending your eyes.

See previous post.


eye of eyes 4

Got OpenCV to detect eyes from the collection of portraits that I have: over 1,700 eyes, way more than the 100 or so I had before. Eye detection with OpenCV works like a charm. There are false positives, but the ratio is very small (10 out of 1,500 or so).
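For the record, the extraction boils down to something like the sketch below. It uses OpenCV’s current cv2 Python interface rather than the bindings I compiled earlier, and the directory and file names are placeholders.

```python
import os
import cv2

# haarcascade_eye.xml ships with OpenCV; the path here is a placeholder.
cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

src_dir, out_dir = "portraits", "eyes"
os.makedirs(out_dir, exist_ok=True)

count = 0
for name in os.listdir(src_dir):
    img = cv2.imread(os.path.join(src_dir, name))
    if img is None:
        continue  # skip anything that isn't a readable image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) rectangle per detected eye.
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in eyes:
        crop = img[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(out_dir, "eye_%05d.png" % count), crop)
        count += 1

print("saved %d eyes" % count)
```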

Using those and metapixel, I created a composite which you can see below:

Next step is to use something like Zoomify to let you see the actual component eyes in detail.

(see previous).

eye of eyes 3

Got OpenCV from Ubuntu along with the Python bindings.
Turns out the one available on Ubuntu Karmic is OpenCV 1.x.
OpenCV 2.x is out, so I want to make sure I take advantage of any improvements in the library (see previous post).

Following a bunch of sources (including an easier Python binding framework), I finally settled on compiling OpenCV and using the new Python bindings on Ubuntu x64. Too bad I couldn’t use the pyopencv bindings; they looked cool, but there’s no x64 support yet.

After mucking around, I got it to capture frames from a webcam, detect my face and my eyes, and track them. I’m using the default haarcascade_frontalface_alt.xml and haarcascade_eye.xml cascade files that come with OpenCV.
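The loop is roughly the following (again a sketch against the newer cv2 interface; the cascade paths are placeholders for wherever OpenCV put them on your system).

```python
import cv2

# Both cascade files ship with OpenCV; paths are placeholders.
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # Look for eyes only inside the detected face region.
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```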

Here’s the sample image

In a nutshell, face detection, eye detection, or any object detection with OpenCV works pretty well as long as you have a properly defined Haar cascade file for it.

Next up: turning this into a runnable script for extracting and saving eyes from images.

Getting there… 🙂

eye of eyes 2

After two or three years, I still don’t have enough eye photographs. This would require thousands (see previous post).

One good idea would be to extract eyes from existing photographs of people that I already have. That would mean detecting, cutting, pasting and saving eye images from thousands of pics.

Sounds like a programming job. You’d think!

At first glance, there didn’t seem to be any free libraries out there, and coming up with a pattern-matching algorithm on your own is not trivial.

Enter OpenCV, an open source computer vision package started by Intel. Besides giving you lots of image manipulation stuff, it gives you object detection using Haar cascades, based on the Viola and Jones paper “Rapid Object Detection using a Boosted Cascade of Simple Features” (Computer Vision and Pattern Recognition, 2001).

The object detection feature basically uses classifiers trained on known object images to detect similar objects in a given image. The classifiers are stored in XML files that the library can load up. The OpenCV framework even gives you tools to create and train your own classifiers, AND a set of sample cascade files. There are other cascade files available on the net.

Awesome.

zen, the right brain and existence

Jill Bolte Taylor’s TED talk made me think about meditation and the functioning of the brain.

Here’s an excerpt about the talk posted on TED.com:

Jill Bolte Taylor got a research opportunity few brain scientists would wish for: She had a massive stroke, and watched as her brain functions — motion, speech, self-awareness — shut down one by one. An astonishing story.

What I got out of the talk is that once we have the ability to think selectively from the right side of the brain, we enter a sort of zen mode of existence. Which makes me think about the mode that we (or people who can) get into when they’re meditating. Even more fascinating is the possibility of switching from the right to the left at will, and being able to correlate the individuality aspects (left brain) with the connectedness and universality aspects (right brain) in day to day life.

Fascinating stuff. This probably adds more to brain research and our place in existence than all the mind/brain research done in the last hundred years.

The content on TED is amazing. Chris Anderson (of Wired and long tail fame) has done a tremendous job of ensuring that the content is available for everyone to view. Contrary to what I’ve heard from some folks, I don’t believe TED is elitist at all.

livecoding

There’s a whole sub-culture out there around “livecoding”: using programming (or code) interactively to create performances, typically musical and/or visual. Some folks write their own applications, some use applications created specifically for allowing dynamic creation of visuals or art. Those that do write their own typically use dynamic languages (Perl, Python, Ruby, Scheme, etc.), from what I’ve found on the net.

There’s a whole bunch of stuff on livecoding at toplap.

Of the applications/environments that specifically allow or promote livecoding, I’ve tried ChucK, SuperCollider, and fluxus. The first two let you do livecoding of music (or sounds). Fluxus lets you create visuals.

Fluxus uses OpenGL to create and display graphics and lets you manipulate them using Scheme (mzscheme).

One of my todos is to create something like fluxus: just a minimal set of bindings using Clojure, Java, and JOGL, while learning Clojure.

mathematics as a basis for music

or… mathematics as a basis for art (part 2)
I’m a little late to the party.

I was researching natural number sequences to create number generators when I came across the OEIS (the Online Encyclopedia of Integer Sequences). It has a whole bunch of sequences, and I had only created a few (Fibonacci, Padovan, Perrin, Lucas, and Feigenbaum). Not only that, it lets you listen to the sequences by deriving pitch and duration from them via MIDI files.
It uses another site to generate the MIDI files: the Musical Algorithms site.

Man, that site is loaded. Besides number sequences, it lets you input all kinds of algorithms and sequences, including DNA sequences (ATGC), constants, powers, etc., and listen to them by tweaking pitch and duration (derived by scaling or mapping).

Oh well…
So, I’m gonna have to take a slightly different tack: probably filtering sequences based on criteria (such as some described in the book “This Is Your Brain on Music”), transforming them (like adding syncopations), and combining them.
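For illustration, here’s a rough sketch of what I mean by a generator plus a pitch mapping. The pentatonic scale and the modulo mapping are arbitrary choices of mine, not what the Musical Algorithms site actually does.

```python
def fibonacci(n):
    """First n Fibonacci numbers."""
    seq, a, b = [], 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def to_pitches(seq, scale=(0, 2, 4, 7, 9), base=60):
    """Map each term onto a pentatonic scale above middle C (MIDI 60)
    by taking the term modulo the scale length."""
    return [base + scale[n % len(scale)] for n in seq]

print(to_pitches(fibonacci(16)))  # MIDI note numbers, ready to turn into a midi file
```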

Stay tuned…