1. I’m now working on mapping Flickr photographs by time as well as geo location.  Here, the fill is adjusted depending on the difference between the photograph’s time stamp and the current time.  The whiter the dot, the closer to the current time the photograph was taken.  The darker the dot, the further from the current time the photograph was taken.
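
    In case it’s useful to anyone, the fill logic is only a couple of lines.  Here’s a minimal Processing sketch of the idea, with random points and timestamps standing in for the real Flickr data and an assumed 24-hour fade window:

    ```
    long[] photoTimes;                 // Unix timestamps (seconds) for each photo
    float[] xs, ys;                    // screen positions of the dots
    int windowSecs = 24 * 60 * 60;     // fade to black over 24 hours (assumed window)

    void setup() {
      size(600, 400);
      int n = 200;
      photoTimes = new long[n];
      xs = new float[n];
      ys = new float[n];
      long now = System.currentTimeMillis() / 1000L;
      for (int i = 0; i < n; i++) {
        photoTimes[i] = now - (long) random(windowSecs);  // fake data for the sketch
        xs[i] = random(width);
        ys[i] = random(height);
      }
      noStroke();
    }

    void draw() {
      background(40);
      long now = System.currentTimeMillis() / 1000L;
      for (int i = 0; i < photoTimes.length; i++) {
        float diff = constrain(now - photoTimes[i], 0, windowSecs);
        float b = map(diff, 0, windowSecs, 255, 0);  // recent = white, old = black
        fill(b);
        ellipse(xs[i], ys[i], 5, 5);
      }
    }
    ```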

  2. This afternoon I’ve been combining Flickr with Twitter (the yellow dots represent Tweets).  For some reason the Twitter search isn’t returning nearly as many geo-tagged results as I would have expected; I’m not sure whether it’s my code or the API that’s at fault.

  3. The Major project that this blog was made to document may be over, but the university course continues into another year.  I think that for the purposes of assessment it might be best if I start documenting this year’s work on a separate blog, simply so the examiners can see what work has been done since the start of this year.  Having said that, the work will follow quite closely from what I’ve been documenting on this blog so it seems to make sense to keep it going.

    Anyways, I’m still getting back into the swing of things here at uni.  I realise I haven’t put up any pictures of the final exhibition here; my apologies.  At the moment I’m still… assessing the project myself.  Trying to see what worked, what didn’t work, what perhaps might have been done differently.  I’m also looking forward to what I’m going to do next; in this respect there are a few vague ideas beginning to take shape, but I’ll save them for another post.

    In the meantime, this is what I did this morning.  Not much, I grant you, but it felt good to get out of the theory books and do a little bit of practical coding.  It occurred to me that although geo-location was a fairly major part of the last project, I never actually mapped any of the photographs to a map.  So here’s a map.

    The search I used is not as detailed as in the previous project, as this is only a test.  Basically, I have used the FlickrJ library to search for geo-tagged photographs within London and then a Processing library called Unfolding Maps to plot these locations onto a map of London.  I know this kind of visualisation has been done a thousand times before, not to mention done better, but I have no desire to make work that is mere visualisation.  If I decide to take this further, it will be as a small part of a larger project.
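
    For the record, the Unfolding Maps side of the test is roughly the sketch below.  The locations are a few hand-typed London points standing in for the FlickrJ search results, so treat it as an illustration of the approach rather than the actual code:

    ```
    import de.fhpotsdam.unfolding.*;
    import de.fhpotsdam.unfolding.geo.*;
    import de.fhpotsdam.unfolding.marker.*;
    import de.fhpotsdam.unfolding.utils.*;

    UnfoldingMap map;

    void setup() {
      size(800, 600, P2D);
      map = new UnfoldingMap(this);
      map.zoomAndPanTo(11, new Location(51.507f, -0.128f));  // central London
      MapUtils.createDefaultEventDispatcher(this, map);       // pan/zoom interaction

      // stand-in points; in the real sketch these come from the FlickrJ geo search
      Location[] photoLocations = {
        new Location(51.503f, -0.119f),
        new Location(51.510f, -0.134f),
        new Location(51.532f, -0.106f)
      };
      for (Location loc : photoLocations) {
        SimplePointMarker m = new SimplePointMarker(loc);
        m.setColor(color(255, 0, 0, 180));
        map.addMarker(m);
      }
    }

    void draw() {
      map.draw();
    }
    ```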

  4. Half-past midnight brainwave.

    I had one of those half-past midnight brainwaves.  I love it when that happens.

    So on a grand scale my problem was this: how do I show my work in the gallery if I don’t want to show an iPhone app?

    This big problem can be broken up into slightly smaller problems.  One of these problems, and the subject of the half-past midnight brainwave, was the question of computers.  

    I’d been playing about with making a desktop application very similar to the iPhone one, in that the retrieved pictures are displayed fullscreen.  I think this gives them the prominence they deserve.  I also felt that actually using the ARToolkit for what it’s intended, drawing images on top of markers in a video feed, didn’t really fit with the work somehow.  The work is about photographs, not fancy computer graphics.  Anyway, this was working well but in itself presented a few problems.

    Firstly, the problem of display: one display works perfectly for either portrait or landscape images, but is not ideal for both.  My immediate thought process went something like this…

    I need two displays.  No problem.

    I can only connect one external display to a Mac.  Problem.  

    I need two computers.  No problem.  

    Computers available: 1 Intel-based iMac running Snow Leopard, 1 PowerPC-based iMac running Leopard.  Problem.

    Scouring the Internet, I tried to find a way to compile my existing program (built on Snow Leopard) for the 32-bit PowerPC iMac.  I’m sure it should be possible, and I tried several times, but nothing I compiled would run on the old machine.  So I had to find another way of linking the two machines.  This is where the half-past midnight brainwave comes in.  Solution: Open Sound Control, better known as OSC.

    I realised that I’d actually been horrifically over-complicating what I was trying to do.  Really, I only need one of my computers to run the ARToolkit software, which has to be the Intel-based Mac.  The second computer’s only purpose is to display images and tag information.  All of this information can easily be sent to the other computer via OSC; it’s just strings, after all.  Best of all, OSC can communicate between different programs, so I can use it to make my openFrameworks (C++) application talk to another application made in Processing (Java).  Finally, Processing (mostly) runs with no problems on the older Mac.
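
    To give an idea of how little the receiving end involves, here’s a bare-bones version of the Processing side using the oscP5 library.  The address pattern and message layout are invented for the example rather than taken from the finished piece:

    ```
    import oscP5.*;
    import netP5.*;

    OscP5 oscP5;
    String photoUrl = "";
    String photoTags = "";

    void setup() {
      size(800, 600);
      oscP5 = new OscP5(this, 12000);   // listen on port 12000 (arbitrary choice)
      textSize(16);
    }

    // called by oscP5 whenever a message arrives from the openFrameworks app
    void oscEvent(OscMessage msg) {
      if (msg.checkAddrPattern("/photo")) {
        photoUrl = msg.get(0).stringValue();
        photoTags = msg.get(1).stringValue();
      }
    }

    void draw() {
      background(0);
      fill(255);
      text("image: " + photoUrl, 20, 40);
      text("tags:  " + photoTags, 20, 70);
    }
    ```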

    So, after a little bit of tinkering with my code the next morning… ta-da: two computers talking to each other.

  5. I’ve not been so good with my blog posts of late.  It’s a sign of being busy.

    Looking over my last few posts, I think I left off at the point where I was only just coming back from the verge of giving up on making an iPhone application for this project thanks to various technical difficulties.  Since then I’d say I’ve had something of a turnaround in my technical fortunes, with much progress made.

    I’m fairly confident that I now have (or at least very nearly have) a working iPhone application.  There are still a few bugs to be ironed out, but on the whole it does what I want it to do.

    What I’m working with now is actually the second version of the app.  The first version worked, but I felt there were too many usability issues.  With the first version I made the mistake of basing it too closely on the code I wrote for Processing; namely carrying out all the Flickr searches when the app is launched and then storing all the information in an array until it is needed later.  Building the app, I realised that by doing this the loading time was far too long; I shouldn’t be able to check Twitter or giggle at funny gifs on Tumblr while I’m waiting.  I suspect that anyone trying to use the app would just get annoyed and quit before it was even loaded.

    I found it was much better to just start the app and only perform a search when a marker is actually detected.  I don’t even bother to store the results in a vector; I simply parse the XML for the address, tags, etc. as it comes in.  This means that there’s a few seconds’ wait between scanning a marker and an image displaying, but I feel that it’s a much more acceptable wait - more like what people are used to with an iPhone app.

    The main reason for having all the images stored in an array ready to display was so that as soon as a marker was detected, the photograph would be drawn on top of it.  At first I was reluctant to include a requirement for the user to do something in order to scan the markers (in the case of the iPhone, a double tap on the screen).  It just feels a little too close to how you take a photo with the phone; I’m still not sure it makes sense.  However, technical necessity has in a way forced my hand in this case.

    The iPhone, while having a lot of processing power packed into a small device, quite simply does not have the same power behind it as a desktop.  Constantly running the AR detection and drawing the video feed was frankly a bit too much for it to handle.  I was annoyed at first, but in a way it forced me to think about alternatives.

    The way it works now, the user scans a marker by double-tapping the screen, a search is carried out on Flickr, and a photograph is returned and displayed full-screen (as opposed to drawn on top of a video image).  Working in this way, I thought it might be interesting to relate how the iPhone is handled to a physical photograph, utilising the accelerometer as well as the touch events.  I was thinking particularly about the backs of the photographs, where names, dates and locations are often written.  All this kind of data is available from Flickr and I wanted to incorporate a way to display it.  Originally I was just going to use a touch swipe, but I’ve now written it so the user actually has to rotate the phone; not quite like flipping over the back of the photograph, but a close second.
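
    Stripped of everything iPhone-specific, the flip comes down to a threshold on the accelerometer.  A rough sketch of just that logic is below; accelZ is a hypothetical stand-in for the device’s z-axis reading (roughly +1 when the screen faces up, -1 when it faces down), faked here with the mouse so it runs on a desktop:

    ```
    boolean showingBack = false;   // false = the photograph, true = the "back" with names/dates/tags

    void setup() {
      size(300, 300);
    }

    void draw() {
      // mouseY stands in for the accelerometer: top of the window ~ face-up (+1),
      // bottom ~ face-down (-1)
      float accelZ = map(mouseY, 0, height, 1, -1);
      updateOrientation(accelZ);
      background(showingBack ? 255 : 0);
    }

    // the thresholds leave a dead zone in the middle, which stops the view
    // flickering back and forth around the halfway point
    void updateOrientation(float accelZ) {
      if (accelZ < -0.5) {
        showingBack = true;        // phone turned over: show the metadata
      } else if (accelZ > 0.5) {
        showingBack = false;       // phone facing up: show the photograph
      }
    }
    ```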

    With the bones of the application now in place, I’m beginning to flesh it out a bit.  I’ve written in a case where occasionally, rather than an image from Flickr, the original photograph will be displayed.  Flipping the iPhone over, the user will see a modified quote (like the second image above) from a writer such as Roland Barthes or Susan Sontag.  My plan is for every modified quote to be from a recognisable writer on photography, but one who wrote before digital cameras and social media became commonplace.  By modifying them to relate to digital images I hope to highlight the shifting nature of the medium.

    Words are important to this project.  It is through words, in the form of search tags, rather than sheer chance that every image displayed is found.  I think it’s important for me to think very carefully about what I want these modified quotes to say; if I’m using more words, I need to ensure they are the right ones.

  6. Last week I swapped the digital studios for the darkroom.  I must admit I felt a little out of practice at first but once I got back into it I really started enjoying myself.  If nothing else it was a nice change after spending so long sitting in front of my laptop!

    I’m really pleased with the look of the markers I printed.  I had been a bit concerned that the process of printing them digitally and then re-photographing them might mean a loss of quality but I don’t think it’s noticeable.  I shot them on 400 ISO film so they have a lovely grainy quality, some of which you can see in this test strip.

  7. Coming back from the verge of giving up.

    A few posts ago I was on the verge of giving up trying to make any kind of iPhone application to work with this project.  Thousands of errors, broken links and mis-matched libraries were doing my head in.  Quite honestly, I was really beginning to ask if it was worth the hassle.

    I take it back.

    My first issue was that I was having trouble getting the ofxARToolkit addon to run on the iPhone.  That particular problem was fixed last Wednesday.  There’s still the issue of custom marker support that I feel has to be addressed but it’s a step in the right direction.  I’m hoping that I might be able to re-write portions of the code based on the other example I found supporting custom markers, although I may need to find myself another marker generator.

    The second problem was Flickr, as there didn’t appear to be an API kit written in C++ that could easily be used in the same program as the ARToolkit.  I had found one written in Objective-C, which in theory could be combined with C++, so I thought this was the route I would have to go down.  Turns out this approach would have massively over-complicated the program.  Turns out you don’t need an API kit at all.  This revelation I owe to the Flickr API documentation and this tutorial.

    After doing some reading, I realised that the Flickr API (and this applies to Twitter as well) can be accessed using a simple HTTP request, and that it returns the search results as a block of XML.  Once you have the XML block, all that’s needed is to parse it to get the required information, such as the URL, license, photographer, etc.  The beauty of this is that all these functions are either built into openFrameworks or come as simple add-ons.  No additional Flickr library required, and it will work for other websites that use the same response formats.
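
    To show just how little is involved, here’s the same flow written as a Processing sketch (the idea is identical in openFrameworks): one HTTP GET against flickr.photos.search, then read the attributes straight off the XML.  The API key is a placeholder and the parameters are simply ones I’d expect to use, so it’s a sketch of the approach rather than my actual code:

    ```
    void setup() {
      String apiKey = "YOUR_API_KEY";   // placeholder, not a real key
      String query = "https://api.flickr.com/services/rest/"
                   + "?method=flickr.photos.search"
                   + "&api_key=" + apiKey
                   + "&tags=london&has_geo=1&per_page=10";

      XML rsp = loadXML(query);         // the whole response comes back as one XML block
      XML[] photos = rsp.getChild("photos").getChildren("photo");
      for (XML p : photos) {
        // a direct image URL can be assembled from the farm/server/id/secret attributes
        String url = "https://farm" + p.getString("farm") + ".staticflickr.com/"
                   + p.getString("server") + "/"
                   + p.getString("id") + "_" + p.getString("secret") + ".jpg";
        println(p.getString("owner") + " -> " + url);
      }
      exit();
    }
    ```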

    My openFrameworks application isn’t quite up to the standards of the one I made in Processing, but it’s getting there.  Last night I was able to search for photographs on Flickr and parse the response to construct a URL to the photograph, which I could display on screen when a particular marker was detected.  It worked both in the simulator and on the iPhone itself once I’d set up a wi-fi connection.

    I realise this is all a bit technical, but that’s just where I’ve been the past week or so.  I have some more ideas about the conceptual aspects of the project but I will save them for another post.

  8. iPhone screengrabs from today.  Detecting a specific marker and drawing an image over it.

  9. The other day I was quite honestly on the verge of giving up making any kind of iPhone application for this project.  I take it back.

    This not particularly beautiful photograph is the product of days of frustration and seemingly endless Xcode errors.  There are still problems to solve, which will no doubt result in many more days of frustration and error messages, but it’s a step in the right direction.  I have never, ever been so pleased to see a yellow square appear around a box.  It honestly has made my day.

  10. So I’ve been thinking.

    One of the things that’s been bothering me about this project is what its final form will be.  In other words - what will the audience see when they arrive at the final show in September?  For me, it’s important that some of the markers make their way out into the world in some way.  I think I mentioned before perhaps leaving some in the library, or maybe in a museum, or a cafe… wherever; the point is they are out in the world.  That has its own particular problems of distribution, which I’m not going to go into right now.

    But I have to have something in the show in September, in the physical space of the gallery.  What do I show?  Do I show documentation of the markers in the wild as it were?  Do I show some of the markers scattered in the space and provide a means for people to scan them - a modified camera perhaps?  What best communicates the idea?

    I don’t have a conclusive answer but I wonder if perhaps rather than simply displaying either the markers or the documentation the two can perhaps be combined in some way.  One of the ideas I touched on a while back was that the markers themselves could be printed as a kind of limited edition print, which would be displayed in the gallery.  I can imagine though that if you were to walk into the gallery and, for the sake of argument, be confronted by three black and white prints that appear almost as pixelated images, it might be difficult to make out what’s going on.  Are people going to see markers they can scan? Or some kind of new aesthetic glitch?  Personally I fear the glitch is the more likely.

    It’s perhaps a little on the obvious side, but what if I were to photograph the markers in the kind of environments they’re being left in?  So then you have the markers in their own right but they also appear as part of another image - so rather than being confronted by three pixelated markers, you see three photographs, and perhaps it’s not immediately obvious the marker is there.

    This one here is just a test that I literally threw together in five minutes - a marker found within the pages of Roland Barthes’ Camera Lucida.  It’s not a particularly interesting image, but it was useful to check that the computer can still read the markers when they’re printed in this way.  It can, as you can see from the screenshot.  One thing that I can definitely confirm is that glossy paper and AR markers do not mix; matte is the way forward.

    Next comes the problem of the AR itself.  I realised the other day that there’s actually something quite absurd about inviting people to scan these markers.  The project is about finding images, not creating them.  The act of scanning the markers is in many ways closer to taking a photograph than discovering one.  I feel that this is not helping the work, that the experience of scanning the markers has to be more like an experience of viewing photography.  Perhaps I can create some kind of album, or some kind of modified negative viewer; something that we relate to viewing a photograph, not taking it.