  1. I’m now working on mapping Flickr photographs by time as well as geo location.  Here, the fill is adjusted depending on the difference between the photograph’s time stamp and the current time.  The whiter the dot, the closer to the current time the photograph was taken.  The darker the dot, the further from the current time the photograph was taken.
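
    The shading logic itself is simple - something along these lines (a minimal Processing sketch rather than the project’s actual code; the function names and the 24-hour fade window are placeholders):

    ```
    // A minimal sketch of the shading idea, not the actual project code.
    // Assumes each photo carries a Unix timestamp in seconds; the
    // 24-hour fade window is an arbitrary choice.
    float shadeFor(long photoTime) {
      long now = System.currentTimeMillis() / 1000;
      float ageHours = abs(now - photoTime) / 3600.0f;
      // Newer photos map towards white (255), older towards black (0);
      // anything older than a day is clamped to black.
      return map(constrain(ageHours, 0, 24), 0, 24, 255, 0);
    }

    void drawPhotoDot(float x, float y, long photoTime) {
      noStroke();
      fill(shadeFor(photoTime));
      ellipse(x, y, 6, 6);
    }
    ```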

  2. This afternoon I’ve been combining Flickr with Twitter (the yellow dots represent Tweets).  For some reason the Twitter search isn’t returning nearly as many geo tags as I would have expected; I’m not sure whether it’s my code or the API that’s at fault.
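
    For reference, a geo-constrained search with Twitter4J looks roughly like the snippet below (Twitter4J is an assumption - the post doesn’t say which client library is in use).  One likely culprit for the sparse results: location sharing on Twitter is opt-in, so getGeoLocation() is null for the vast majority of tweets.

    ```
    // A sketch of a geo-constrained Twitter search using Twitter4J 3.x;
    // the library choice and query are assumptions, not the actual code.
    import twitter4j.*;

    void searchNearbyTweets() {
      Twitter twitter = new TwitterFactory().getInstance();
      try {
        Query query = new Query("london");
        // Only return tweets within 10 miles of central London
        query.setGeoCode(new GeoLocation(51.507, -0.128), 10, Query.MILES);
        QueryResult result = twitter.search(query);
        for (Status status : result.getTweets()) {
          // Null for most tweets: users must opt in to location sharing
          GeoLocation loc = status.getGeoLocation();
          if (loc != null) {
            println(loc.getLatitude() + ", " + loc.getLongitude());
          }
        }
      } catch (TwitterException e) {
        e.printStackTrace();
      }
    }
    ```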

  3. The Major Project that this blog was made to document may be over, but the university course continues into another year.  I think that for the purposes of assessment it might be best if I start documenting this year’s work on a separate blog, simply so the examiners can see what work has been done since the start of this year.  Having said that, the work will follow quite closely from what I’ve been documenting on this blog, so it seems to make sense to keep it going.

    Anyways, I’m still getting back into the swing of things here at uni.  I realise I haven’t put up any pictures of the final exhibition here - my apologies.  At the moment I’m still… assessing the project myself: trying to see what worked, what didn’t work, what perhaps might have been done differently.  I’m also looking forward to what I’m going to do next; in this respect there are a few vague ideas beginning to take shape, but I’ll save them for another post.

    In the meantime, this is what I did this morning - not much, I grant you, but it felt good to get out of the theory books and do a little bit of practical coding.  It occurred to me that although geo-location was a fairly major part of the last project, I never actually plotted any of the photographs on a map.  So here’s a map.

    The search I used is not as detailed as in the previous project, as this is only a test.  Basically, I have used the FlickrJ library to search for geo-tagged photographs within London, and then a Processing library called Unfolding Maps to plot these locations onto a map of London.  I know this kind of visualisation has been done a thousand times before, not to mention done better, but I have no desire to make work that is mere visualisation.  If I decide to take this further, it will be as a small part of a larger project.
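
    Stripped right down, the test looks something like the sketch below - reconstructed from memory of the Unfolding Maps API (method signatures may differ between versions), with the FlickrJ search itself left as a comment:

    ```
    // A stripped-down reconstruction, not the actual code; Unfolding Maps
    // signatures are from memory and may vary between library versions.
    import de.fhpotsdam.unfolding.*;
    import de.fhpotsdam.unfolding.geo.*;
    import de.fhpotsdam.unfolding.utils.*;

    UnfoldingMap map;
    ArrayList<Location> photoLocations = new ArrayList<Location>();

    void setup() {
      size(800, 600, P2D);
      map = new UnfoldingMap(this);
      map.zoomAndPanTo(new Location(51.507f, -0.128f), 10); // central London
      MapUtils.createDefaultEventDispatcher(this, map);
      // The FlickrJ step goes here: search for geo-tagged photographs
      // within London, then copy each photo's latitude/longitude
      // into photoLocations.
    }

    void draw() {
      map.draw();
      noStroke();
      fill(200, 0, 0);
      for (Location loc : photoLocations) {
        ScreenPosition pos = map.getScreenPosition(loc);
        ellipse(pos.x, pos.y, 5, 5);
      }
    }
    ```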

  4. Time to quit?

    I’ve spent the past two… nay, several… days attempting to move my project from Processing to Xcode.  There are a number of reasons behind my attempt to abandon Processing.

    1) C++ is generally faster than Java.

    2) I wanted to try running the app on the iPhone.

    2a) It makes sense that people finding the markers ‘in the wild’, so to speak, should be able to scan them there and then.

    2b) A mobile device offers new possibilities for GPS location.

    2c) The iPhone is an everyday device for the taking and sharing of photographs.

    Having got an app working reasonably well in Processing, albeit a bit slowly, I had hoped it would be relatively simple to convert it to C++.  How I underestimated the task.

    My first problem is finding a library that will allow me to run the AR detection on the iPhone.  I found an OpenFrameworks add-on for the ARToolkit that I had hoped I would be able to use.  However, on closer inspection I found that there are problems getting it to run on the iPhone.  I have it running on OSX, and to be honest there’s no discernible difference in speed between it and Processing.  More annoying, though, is that it doesn’t support the creation of custom markers.  This is a problem.

    The ARToolkit comes supplied with 400 different markers, so there’s no lack of choice.  However, if I can’t create my own custom markers then whatever markers I use will bear no relation to the photographs they’re linked to.  For me the custom markers are an important part of the project and I’m reluctant to lose them.  I did find another version of the Toolkit, using Cinder as opposed to OpenFrameworks, which claimed to support custom markers, but so far I’ve been unable to make the example run properly.  I’m not sure that it would run on the iPhone either.

    Following some further Googling I have found a third library, which is actually a C++ version of the library I’ve been using in Processing (NyARToolkit).  I’ve not had a chance to try this yet, but it appears to include a lot of the features found in the Processing version, so I have my fingers crossed.

    Another, perhaps even larger, problem is Flickr.  So far I’ve not been able to find an OpenFrameworks add-on for easy access to the Flickr API, which surprises me.  I’ve found libraries for C and Objective-C, the latter being the language iPhone apps should really be written in.  Now, theoretically (according to the Internet), it is possible to combine Objective-C and C++ in the same iPhone program, assuming everything is referenced properly.  I’ve looked at a few Objective-C tutorials and I think I’d be able to make it work.

    However, I’m wondering now - is it worth it?  How badly do I want this to work on a mobile device?  I’ve already spent several days trying to make it work, to no avail.  How many more days am I willing to spend?  There are possible ways to make the Processing application run on a mobile device - run it on Android instead of iOS, or use something like PhoneGap to run it across multiple platforms.

    I don’t want to end up in a situation where I’m bashing my head against a wall trying to solve technical problems if I don’t have to.  I’m far more comfortable with Processing and Java than I am with any of the C languages, and therefore more confident in my ability to solve potential problems.  In fact, I’ve been back to look at my existing app and have already managed to noticeably improve its speed by re-ordering portions of the code.

    If it comes right down to it, I’d rather the piece worked well on a computer than poorly on a mobile device.  There are problems to solve other than technical ones, and frankly I suspect they are more worthy of my time.

  5. I’ve been struggling a little bit to figure out where to go next.  The presentation certainly helped me to think a little more objectively about the project, and last week I started trying some things out to see where they took me.  As I said in my last post, perhaps the project should be about the images that we don’t print - the everyday images of the family pets and days at the beach.  I also said that perhaps the photographs the markers link to shouldn’t be the ones that I have found out in the world; it seems to make more sense that they’re digital, shared digitally.

    So… I started playing around with grabbing images from Flickr.  I’d made an earlier sketch in Processing using the Romefeeder library, which basically lets me get an RSS feed from Flickr with a few different parameters and use it to incorporate images into my own sketch.  For example, I could look for recent images uploaded with the tag “puppies” or a feed from a specific user.  This was a perfectly workable method, but I really wanted a more controllable way to search for images on Flickr, which meant a more complex piece of code.

    While I was working on the Brownie Digital project for Physical Computing last semester, I came across the FlickrJ library.  Basically, it’s the Flickr API in Java, which means it’s easy to integrate into Processing.  I used it in the Brownie Digital project to enable users to upload their photos to Flickr, using this code.  Since I’m rather clueless on exactly how to go about using the Flickr API directly, I thought I would give the FlickrJ library a go and see what I could do.

    Admittedly, it took a fair bit of swearing, Googling, and staring blankly at the screen before I could get anything useful out of it.  But I did.  I started by simply learning how to ask the Flickr Pandas for photographs (yes, Flickr has Pandas).  Basically, the Pandas are just programs which each provide a different stream of photographs, but there’s not really much way of controlling what you get from them.  However, once I had some photographs I was able to start extracting data from them - for example, whether they had been geo-tagged.

    From there, I was able to build a simple application in Processing which would search Flickr for photographs with a certain tag.  I’ve been using ‘puppies’ a lot because I have the image of the three dogs as one of my markers.  I then moved on to looking for photographs which not only were tagged with ‘puppies’ but had also been geo-tagged, so I could tell exactly where they’d been taken.  Finally, I tried to find images that matched both my search tag and a location - in this case, roughly the area enclosed by the M25.  This can be made more specific - for example, if I wanted to search for photographs taken within a mile radius of Goldsmiths - but I’ve kept it simple for now.
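
    The core of that final search, reconstructed from memory of the FlickrJ API (the method names are my best recollection and the bounding box is only a crude stand-in for the M25), looks something like this:

    ```
    // A rough reconstruction of the tag-plus-location search; FlickrJ
    // method names are from memory and may differ between versions.
    import java.util.Collections;
    import com.aetrion.flickr.*;
    import com.aetrion.flickr.photos.*;

    PhotoList searchTaggedInLondon(String tag) throws Exception {
      Flickr flickr = new Flickr("MY_API_KEY", "MY_SECRET", new REST());
      SearchParameters params = new SearchParameters();
      params.setTags(new String[] { tag });
      params.setHasGeo(true);
      // Crude bounding box around the M25:
      // min longitude, min latitude, max longitude, max latitude
      params.setBBox("-0.53", "51.26", "0.33", "51.70");
      // Ask for lat/lon in the results so each photo's geo data is filled in
      params.setExtras(Collections.singleton("geo"));
      return flickr.getPhotosInterface().search(params, 50, 1);
    }
    ```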

    Having achieved this, I thought I might as well jump straight ahead and see if I could make it work with my markers.  I took the two markers - the girl on the swing and the three dogs - and linked these to Flickr search tags related to the photographs: ‘swings’ and ‘puppies’.  It works!  Although there are issues - the program runs slowly, for one - I’m quite pleased with the results.
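
    The link between markers and searches is nothing fancy - just a lookup from marker ID to tag, along these lines (illustrative only, not my actual code):

    ```
    // Illustrative only: the IDs reflect the order the marker patterns
    // were registered in, and the tags mirror the source photographs.
    import java.util.HashMap;

    HashMap<Integer, String> markerTags = new HashMap<Integer, String>();

    void setupTags() {
      markerTags.put(0, "swings");   // the girl on the swing
      markerTags.put(1, "puppies");  // the three dogs
    }

    // When the AR library reports a visible marker, fire off the
    // matching Flickr search (as sketched above).
    void onMarkerDetected(int markerId) {
      String tag = markerTags.get(markerId);
      if (tag != null) {
        println("Searching Flickr for: " + tag);
      }
    }
    ```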

    The searches mostly return the kind of images you would expect - cute photographs of puppies and snaps of children’s play-parks.  They also return some odd results, like the image above.  For those of you unfamiliar with the TV comedy Black Books (one of the funniest shows ever made, in my humble opinion), this is a photograph of the real-life shop Collinge and Clark, which was used as the setting for the show.  I didn’t have the sense to check which tag returned this result, but it was either ‘swings’ or ‘puppies’.  Personally, I can’t see anything in the image to suggest why it would have been given either of these tags.

    It raises potentially interesting questions, much like the Google image dictionary, about how we catalogue things online, how we go about finding them, and how easily things can be taken out of context through the meta-data attached to them.  At present I’ve been using my own search tags based on the images I’ve found, but I wonder if it might be interesting to ask other people to ‘tag’ the images.  I was interested in how the Descriptive Camera made its returned image descriptions highly subjective, so a similar principle could be applied to this project.

  6. Today I managed to make some markers that the computer could differentiate from each other without too much trouble.  I made them by loading a low-res version of each image into Processing, applying a gray filter, and then using a threshold filter to create black and white squares.
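
    In Processing terms that’s just a resize and two filter calls - something like this, with placeholder file names and sizes:

    ```
    // A minimal reconstruction of the marker-making step; the file
    // name and resolution are placeholders, not the actual values.
    PImage img = loadImage("found-photo.jpg");
    img.resize(32, 32);          // low-res version of the image
    img.filter(GRAY);            // reduce to grayscale
    img.filter(THRESHOLD, 0.5);  // black and white squares; tweak 0.0-1.0
    img.save("marker.png");
    ```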

    It’s still not the aesthetic I’m looking for but it’s a step in the right direction.

  7. It’s unlikely that I’ll want to use the AR Toolkit to add OpenGL shapes to a scene; more likely I’ll want to use it to show photographs, video and text, and maybe trigger sound if that’s possible.  This was just a quick sketch to see how easy it was to link a certain image with a marker.  Answer: very.
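
    The sketch amounted to little more than the following, reconstructed from memory of the NyAR4psg examples (the camera file and marker pattern are the ones the library’s samples ship with):

    ```
    // Reconstructed from memory of the NyAR4psg (NyARToolkit for
    // Processing) examples; file names and sizes follow the samples.
    import processing.video.*;
    import jp.nyatla.nyar4psg.*;

    Capture cam;
    MultiMarker nya;
    PImage photo;

    void setup() {
      size(640, 480, P3D);
      photo = loadImage("found-photo.jpg");
      cam = new Capture(this, 640, 480);
      nya = new MultiMarker(this, width, height, "camera_para.dat",
                            NyAR4PsgConfig.CONFIG_PSG);
      nya.addARMarker("patt.hiro", 80); // marker pattern, 80mm wide
      cam.start();
    }

    void draw() {
      if (!cam.available()) return;
      cam.read();
      image(cam, 0, 0);
      nya.detect(cam);
      if (nya.isExistMarker(0)) {
        // Draw the linked photograph in the marker's coordinate space
        nya.beginTransform(0);
        image(photo, -40, -40, 80, 80);
        nya.endTransform();
      }
    }
    ```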

  8. I just discovered something quite exciting - or exciting to my mind at any rate; others may be of a different opinion.

    I’ve been playing around with the NyARToolkit again this afternoon, just trying to get a better feel for how it works.  Just out of interest, I tried scribbling over one of my markers to see how much I could disrupt it before it stopped being detected by the webcam - kind of inspired by Adam Harvey’s CV Dazzle project.

    The first photo shows my scribbled marker, which the software is still picking up.  I have to say I was surprised at how much I was able to scribble over it and still have it detected.  This made me wonder how far from the original marker I could get and still have the software detect it.  The second and third images show one of my original markers and a hand-drawn copy of it.

    When I tried it with the webcam I found that the software detected the hand-drawn marker pretty well - although I’d say it probably wasn’t quite as accurate as the printed one.  Still, I was impressed by how well it was able to track it.  It made me wonder what possibilities there are for making AR markers - does it have to be a printed piece of paper?  Clearly not - a hand-drawn one will work.  Could it be three-dimensional?  Photographed?

    This discovery excites me because one of the biggest arguments against using this kind of Augmented Reality marker is the aesthetics.  I don’t like their technical, QR-code appearance.  But if there were a way to make the markers more… analogue… there might be something interesting there.

    I know there are also programs available online that can be used to train the software on custom markers.  I might have a go with one of them and see just how far from a QR code the markers can get.  I know that Aurasma, for example, doesn’t need these markers - it works with image recognition and geo-tagging instead.

    I’m still not really sure exactly what I want to do for this project - other than what I mentioned before about experimenting in the space between digital and analogue, physical and ephemeral.  As I’ve said before, I don’t want to repeat old mistakes by getting too caught up in trying to make the technology work, so I’m very aware at this point that I can’t get too carried away with it.  Having said that, after spending the past few weeks focusing so much on my essay, I’m enjoying just playing with Processing for now - it’s helping to get the creative juices flowing once more!

  9. Last but certainly not least - Object Orientated Augmented Reality.

  10. Tutorial Two completed - a tad more exciting than the first.