  1. The Major Project that this blog was made to document may be over, but the university course continues into another year.  I think that for the purposes of assessment it might be best if I start documenting this year’s work on a separate blog, simply so the examiners can see what work has been done since the start of this year.  Having said that, the work will follow quite closely from what I’ve been documenting here, so it seems to make sense to keep this blog going.

    Anyways, I’m still getting back into the swing of things here at uni.  I realise I haven’t put up any pictures of the final exhibition here; my apologies.  At the moment I’m still… assessing the project myself, trying to see what worked, what didn’t work, and what perhaps might have been done differently.  I’m also looking ahead to what I’m going to do next; in this respect there are a few vague ideas beginning to take shape, but I’ll save them for another post.

    In the meantime, this is what I did this morning.  Not much, I grant you, but it felt good to get out of the theory books and do a little bit of practical coding.  It occurred to me that although geo-location was a fairly major part of the last project, I never actually plotted any of the photographs on a map.  So here’s a map.

    The search I used is not as detailed as in the previous project, as this is only a test.  Basically, I have used the FlickrJ library to search for geo-tagged photographs within London and then a Processing library called Unfolding Maps to plot these locations onto a map of London.  I know this kind of visualisation has been done a thousand times before, not to mention done better, but I have no desire to make work that is mere visualisation.  If I decide to take this further, it will be as a small part of a larger project.
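
    In outline, the test boils down to something like the sketch below.  This is a reconstruction from memory rather than the exact code: the API key and secret are placeholders, the bounding box is a rough guess at London, and package or method names may differ between versions of the two libraries.

    ```java
    import de.fhpotsdam.unfolding.*;
    import de.fhpotsdam.unfolding.geo.*;
    import de.fhpotsdam.unfolding.marker.*;
    import de.fhpotsdam.unfolding.utils.*;
    import com.aetrion.flickr.*;
    import com.aetrion.flickr.photos.*;

    UnfoldingMap map;

    void setup() {
      size(800, 600);
      map = new UnfoldingMap(this);
      map.zoomAndPanTo(new Location(51.507f, -0.128f), 11); // central London
      MapUtils.createDefaultEventDispatcher(this, map);
      try {
        Flickr flickr = new Flickr("YOUR_API_KEY", "YOUR_SECRET", new REST());
        SearchParameters params = new SearchParameters();
        params.setHasGeo(true); // geo-tagged photographs only
        // rough London bounding box: min lon, min lat, max lon, max lat
        params.setBBox("-0.5", "51.3", "0.3", "51.7");
        PhotoList photos = flickr.getPhotosInterface().search(params, 100, 1);
        for (Object o : photos) {
          GeoData geo = ((Photo) o).getGeoData();
          if (geo != null) { // some versions only return geo if requested as an extra
            map.addMarker(new SimplePointMarker(
                new Location(geo.getLatitude(), geo.getLongitude())));
          }
        }
      } catch (Exception e) {
        println("Flickr search failed: " + e);
      }
    }

    void draw() {
      map.draw();
    }
    ```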

  2. Pure brilliant white.

    I think I might need to re-think my plinth.  My setup for the show basically consists of two iMacs and two wall-mounted external monitors.  Also on the wall will be a selection of empty photo frames of various sizes and colours.  I have a webcam connected to one of the iMacs, and I was planning to put this, along with the marker photographs, on top of a plinth built to a size that covers the iMacs while leaving enough height to make a kind of shelf.  Looking at it in the studio today though… I’m not sure.

    Really, before I can say anything for sure, I need to see it in the space where it’s going to be installed.  At the moment though, I feel that the plinth looks like something of an afterthought, not really complementing the rest of the installation.  Perhaps it would be better if I were to paint the whole wall an off-white colour, but then I fear that might also look out of place when the rest of the room is white.  Or perhaps I could put a length of wood across the top of the plinth to make it look more like a shelf or table top.

  3. Last week I had a go at photographing some of the markers as if they were found photographs: found on the ground, behind the TV, down the back of a cabinet.  The fridge is a slightly odd one, not so much a found photograph as a snapshot on display.

    These images are scans from a contact sheet, so the quality is not brilliant; however, looking at the negatives, I think the photographs are fairly clear.  I was concerned that a lot of them would be badly exposed and out of focus after my camera’s lens came apart in my hand halfway through the roll of film.

    I am not certain what to do with these images.  I had taken them as a kind of companion piece to the main gallery installation, an attempt to give it a kind of background.  Last week I was leaning towards not using them, thinking that perhaps it was… overkill.  Like forcing a kind of narrative on the markers that they don’t really need.

    Seeing the images though, I’m not sure.  My thinking was to show them separately from the main piece, away in the corner at the back of the space, where there’s little other work and a real chance that people might miss them.  It’s a big space though, and I only have maybe six or eight decent photographs here, not really enough to fill it.  The only option might be some creative arrangement - some near the ground, some at eye level, some in the corners - to give the viewer a sense of looking around the space to discover these images.

  4. Future heirlooms like family photos, home movies, and personal letters now exist only in digital form, and in many cases they are stored using popular services like Flickr, YouTube, and Gmail.

    — from the Amazon product description of Your Digital Afterlife by Evan Carroll and John Romano

    I’ve not read this particular book, but it strikes me that more and more questions are being asked about what happens to our digital possessions after we’re gone.  As the article I previously re-blogged pointed out, in the case of digital music and books we don’t actually own what’s on our iPods; we merely have a license to use it.

    I’ve not focused too much on this question, as I don’t think it’s quite so relevant to photography: the photographs remain the property of the person who took them, not the website they’re stored on.  I know there are exceptions, but I am speaking quite generally here.  Still, there’s a very real possibility of all the content we do own being lost as well; really, all it would take is for us not to leave our account details behind.

    See also: http://www.thedigitalbeyond.com/

  5. I’m having some display issues.

    I’ve mainly been working on my iPhone application for the past few weeks.  Aside from the issues I’ve mentioned in some of my earlier posts, I have no complaints about how this has gone.  In general I’m really happy with the application, but… well, it’s not really gallery-display material now, is it?

    For me the iPhone application works because it is both portable and a natural device for viewing photographs.  If someone with the application on their phone were to find one of my markers, they would be able to scan it and begin finding photographs straight away.  Initially I had thought that it would be enough to have an iPhone or an iPod available in the gallery for people to try out the application with some markers.  Now, especially having seen the space, I’m not so sure.

    I’ve seen plenty of iPhone applications in Degree Shows, but always in the Design section.  I’m not criticising any design showcases, but I just feel that a setup such as this simply doesn’t fit in an art exhibition.  Maybe there’s still a way that it can work, but I’m worried.  I always knew that for assessment I would have to produce something that could be displayed in a gallery, and I feel now that I’ve lost sight of this.  I could show a simple desktop version of the application, but at the moment I’m about as keen on having a solitary iMac sitting on a plinth as I am on a solitary iPhone.

    So what do I do?

    I’ve been racking my brains trying to think of alternatives.  With the extra processing power available on the iMac it’s possible to recognise multiple markers and to display multiple photographs.  I was considering a hand-held camera that the viewer could pass over the markers, with the results displayed on a monitor.  This would be more like the augmented reality apps we’re used to encountering, where the image/photograph is displayed over a live video feed - as opposed to the iPhone application, where the retrieved photographs are displayed full screen.

    What I like about this is that it closely relates the retrieved images to the physical markers, while at the same time making it very difficult to handle the markers without losing the tracking information.  I think the sensitivity of the tracker, the somewhat tenuous link between the physical and digital images, highlights perhaps the starkest difference between a physical and a digital image: you cannot handle the digital one.

    Another option I’m considering is a full-screen image display similar to the iPhone application, but in a slightly more creative setup.  For example, disguising a wall-hung LCD display as a picture frame and displaying the retrieved photograph there.  Perhaps to help disguise the fact that it’s an LCD display I could cover it with something, such as thin tracing paper.  I tried out a very rough test in the studio and this seemed to work reasonably well, although I think it would be better with a brighter display.  This second option would certainly give the photographs greater prominence than the first - and that prominence is actually another reason why I like the iPhone application.

    Both of these options have their own issues, the biggest perhaps being the question of how to integrate the tags and other information about the photographs retrieved from Flickr.  Again, the iPhone application lends itself to this more than a desktop version would, with the option of different displays depending on the orientation of the phone.

    At the moment I’m sitting somewhere a little beyond stuck.  I think I need to take a few days to think about how we experience photography and how this might be translated into the gallery display.  In that respect, I suppose it could be argued that a desktop computer is in some ways the ideal display device: like the iPhone, it is now a standard way to view photography.

    Perhaps it is only my own reluctance to have computers openly on display in the gallery that is stopping me from displaying the work in this way.  If this is the case, it may be that I need to re-examine why I am so resistant to this method of display.

  6. I’ve not been so good with my blog posts of late.  It’s a sign of being busy.

    Looking over my last few posts, I think I left off at the point where I was only just coming back from the verge of giving up on making an iPhone application for this project thanks to various technical difficulties.  Since then I’d say I’ve had something of a turnaround in my technical fortunes, with much progress made.

    I’m fairly confident that I now have (or at least very nearly have) a working iPhone application.  There are still a few bugs to be ironed out, but on the whole it does what I want it to do.

    What I’m working with now is actually the second version of the app.  The first version worked, but I felt there were too many usability issues.  With the first version I made the mistake of basing it too closely on the code I wrote for Processing; namely, carrying out all the Flickr searches when the app is launched and then storing all the information in an array until it is needed later.  Building the app, I realised that by doing this the loading time was far too long; I shouldn’t be able to check Twitter or giggle at funny gifs on Tumblr while I’m waiting.  I suspect that anyone trying to use the app would just get annoyed and quit before it had even loaded.

    I found it was much better to just start the app and only perform a search when a marker is actually detected.  I don’t even bother to store the results in a vector; I simply parse the XML for the address, tags, etc. as it comes in.  This means there’s a few seconds’ wait between scanning a marker and an image displaying, but I feel it’s a much more acceptable wait - more what people are used to with an iPhone app.
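
    Stripped right down, the pattern looks something like the Processing sketch below, using Flickr’s plain REST interface.  The real app is written in Objective-C; markerScanned() here is a stand-in for the actual detection callback, and the API key is a placeholder.

    ```java
    String apiKey = "YOUR_API_KEY"; // placeholder

    void setup() {
      size(640, 480);
      // nothing is fetched here - no searches, no arrays of stored results
    }

    void draw() { }

    // Stand-in for the detection callback: search only when a marker is scanned.
    void markerScanned(String tag) {
      XML rsp = loadXML("https://api.flickr.com/services/rest/"
          + "?method=flickr.photos.search&api_key=" + apiKey
          + "&tags=" + tag + "&per_page=1");
      XML photo = rsp.getChild("photos").getChild("photo");
      if (photo != null) {
        // build the static image address straight from the XML attributes
        String imgUrl = "https://farm" + photo.getString("farm")
            + ".staticflickr.com/" + photo.getString("server") + "/"
            + photo.getString("id") + "_" + photo.getString("secret") + ".jpg";
        PImage img = loadImage(imgUrl);
        if (img != null) {
          image(img, 0, 0, width, height); // display full screen
        }
      }
    }
    ```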

    The main reason for having all the images stored in an array, ready to display, was so that as soon as a marker was detected the photograph would be drawn on top of it.  At first I was reluctant to require the user to do something in order to scan the markers (in the case of the iPhone, a double tap on the screen).  It just feels a little too close to how you take a photo with the phone; I’m still not sure it makes sense.  However, technical necessity has in a way forced my hand in this case.

    The iPhone, while having a lot of processing power packed into a small device, quite simply does not have the same power behind it as a desktop.  Constantly running the AR detection and drawing the video feed was frankly a bit too much for it to handle.  I was annoyed at first, but in a way it forced me to think about alternatives.

    The way it works now, the user scans a marker by double-tapping the screen; a search is carried out on Flickr and a photograph is returned and displayed full screen (as opposed to drawn on top of a video image).  Working in this way, I thought it might be interesting to relate how the iPhone is handled to how a physical photograph is handled, utilising the accelerometer as well as the touch events.  I was thinking particularly about the backs of photographs, where names, dates and locations are often written.  All this kind of data is available from Flickr, and I wanted to incorporate a way to display it.  Originally I was just going to use a touch swipe, but I’ve now written it so the user actually has to rotate the phone; not quite like flipping over a photograph to read its back, but a close second.
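
    The actual implementation reads the iPhone’s accelerometer in Objective-C, but the flip logic itself is simple enough to sketch in Java.  The threshold value here is an assumption, not a measured figure.

    ```java
    // Toggle between the photograph (front) and its Flickr metadata (back)
    // when the phone is rotated past a threshold - a rough stand-in for
    // turning a print over in your hand.
    class PhotoFlip {
      boolean showingBack = false;
      final float FLIP_AT = 0.8f; // assumed reading on the relevant axis

      void update(float accel) {
        if (!showingBack && accel > FLIP_AT) {
          showingBack = true;   // draw the names, dates and tags
        } else if (showingBack && accel < -FLIP_AT) {
          showingBack = false;  // back to the photograph itself
        }
      }
    }
    ```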

    With the bones of the application now in place, I’m beginning to flesh it out a bit.  I’ve written in a case where occasionally, rather than an image from Flickr, the original photograph will be displayed.  Flipping the iPhone over, the user will see a modified quote (like the second image above) from a writer such as Roland Barthes or Susan Sontag.  My plan is for every modified quote to be from a recognisable writer on photography, but one who wrote before digital cameras and social media became commonplace.  By modifying them to relate to digital images I hope to highlight the shifting nature of the medium.

    Words are important to this project.  It is through words, in the form of search tags, rather than sheer chance that every displayed image is found.  I think it’s important for me to think very carefully about what I want these modified quotes to say; if I’m using more words, I need to ensure they are the right ones.

  7. […] I have the slightly clammy feeling of biography, the sense of living on the edges of other people’s lives without their permission.

    — Edmund de Waal, The Hare with Amber Eyes

    I recently finished reading Edmund de Waal’s The Hare with Amber Eyes, which, if you’ve never read it, I must highly recommend.  It has the - perhaps somewhat dubious - honour of being the first book to make me miss my stop on the Tube.  This quote occurs in the book’s final chapter, and though many of the book’s pages are folded down, marking memorable quotes or an interesting point, this one, this week, resonated particularly strongly.

    The reason for this is a pile of photographs I recently bought on eBay.  I didn’t know what I was getting when I bought them; they were simply described as a “Job Lot of Old Photographs”.  When they arrived I was surprised by the amount of detail written on the backs of the photographs.  I have been able to discern locations, names, and dates, all from the details written, almost always in the same hand, on the back of the images.  I have even, with the help of Google StreetView, been able to locate some of the buildings.

    It is strange that it should have been this last point which made me the most uncomfortable.  It was almost as though the photographs became too real.  With the other photographs I’ve collected there perhaps came a location or a date, sometimes a name, but never such a detailed narrative.  I feel that this knowledge was not meant for me, that I am an intruder peeking through the window into other people’s lives.

    I say peeking as I get the impression this is only a part of the story.  These do not strike me as images to commemorate events, but to communicate the details of a life lived far away.  The fact that they span several years seems to support this - I imagine the photographs were accompanied by a letter, probably several pages long.

    I don’t know what to do with these images now.  The details seem too rich to simply ignore, but on the other hand I feel guilty at the thought of using them for my own ends.  The story is not mine to use.

    The notes on the back of these photographs have encouraged me to look beyond the images themselves, to discover where they came from.  The fact that I worry about using this story is surely proof of a greater emotional resonance.  Can this be achieved even if the exact details are held back?  Is a hint enough?

    I wonder if, in this project, rather than simply returning an image, certain photographs could return more.  A comment, a link, a map reference… something that goes beyond the image itself, linking it to the outside world in a more tangible way.

  8. Some thoughts on markers.

    In the past few days I’ve put up a couple of examples of inverted AR markers - that is, markers with a white border as opposed to a black one.  There are a few reasons for this.

    Firstly, it is almost purely for aesthetic reasons.  Virtually every AR marker I’ve seen online has followed the template of a big black border with white geometric shapes.  There are some exceptions, but they are few and far between.  Why not make it a thick white border with black geometric shapes?  It may not be radically different, but at least it begins to differentiate my markers from the thousands of others out there.  Of course, they are all modelled on the existing ARToolkit markers, so they are not mine in the strictest sense but my interpretation of them.

    Secondly, there is a practicality issue - it’s my understanding that the markers work on a principle a little like object detection in OpenCV.  Essentially, they work best with clear differences: a stark change from black to white will work better than a gradual fade from black to grey to white.  The more intricate and complex the markers, the more likely they are to be confused with each other.  It helps that ARToolkit actually thresholds the images coming from the camera (converting every tone to either black or white).  This means the printed markers do not have to be starkly coloured black-and-white prints; provided there is enough of a distinction, the computer can do the rest.

    Unfortunately, this does mean more image processing for the application, particularly the iPhone version, as I still haven’t found a way to use my own custom markers there.  For that app I need to invert the incoming camera image so that the computer still sees a black border with white shapes, and this noticeably reduces the speed of the app.  The Processing app, on the other hand, doesn’t have this problem; it readily accepts custom markers, so I can simply train it with a white marker - no filter required.
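
    As a rough illustration, the extra step amounts to inverting each camera frame before it reaches the tracker; the thresholding line mirrors what the toolkit does internally and shows why the prints don’t need to be pure black and white.  A Processing sketch, standing in for the actual tracking code:

    ```java
    import processing.video.*;

    Capture cam;

    void setup() {
      size(640, 480);
      cam = new Capture(this, 640, 480);
      cam.start();
    }

    void draw() {
      if (cam.available()) cam.read();
      PImage frame = cam.get();
      frame.filter(INVERT);         // white borders now read as black ones
      frame.filter(THRESHOLD, 0.5); // every tone forced to pure black or white
      image(frame, 0, 0);
      // frame would then go to the AR tracker in place of the raw feed
    }
    ```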

    More importantly though, I was trying to think of ways in which the markers could refer to absence.  Being so generic, it’s difficult to see the markers as anything other than a computer-readable symbol, or some kind of glitch.  I thought that white could be an interesting colour to use, as it has a kind of double meaning in photography: a white area on a photograph can be read simultaneously as evidence of the brightest light or of its total absence.

    To clarify: in a print of a black-and-white photograph, the areas that are white are those where the brightest light hit the negative.  The sun on a bright day, for example, will appear white in a photograph.  However, when we think about the printing process, an area of white on a print is where no light has touched the paper, because none has passed through the negative to reach it.

    I suppose you could almost make the inverse argument for black.  However, in this project I’m interested more in prints than negatives.  To me, black on a print is very much about presence - the bright light from the enlarger obliterating the white of the paper.  It may communicate the absence of light in a scene, but on paper it is very much present.  When we see white we see no photograph; in the case of the markers this seems to communicate a missing piece.  Black, to me anyway, suggests the image exists but has been obliterated or covered.

    I don’t think it’s too great a leap to suggest that this kind of duality could be read in relation to the strange duality of digital images where they can be seen as both fixed and ephemeral.  If preserved properly, digital images can at least in theory survive for centuries as pristine as the day they were taken.  Yet they can also be deleted forever at the touch of a button.

    The same cannot be said of the printed image.

  9. So I’ve been thinking.

    One of the things that’s been bothering me about this project is what its final form will be.  In other words - what will the audience see when they arrive at the final show in September?  For me, it’s important that some of the markers make their way out into the world in some way.  I think I mentioned before perhaps leaving some in the library, or maybe in a museum, or a cafe… wherever; the point is that they are out in the world.  That has its own particular problems of distribution, which I’m not going to go into right now.

    But I have to have something in the show in September, in the physical space of the gallery.  What do I show?  Do I show documentation of the markers in the wild, as it were?  Do I show some of the markers scattered in the space and provide a means for people to scan them - a modified camera, perhaps?  What best communicates the idea?

    I don’t have a conclusive answer, but I wonder if, rather than simply displaying either the markers or the documentation, the two can perhaps be combined in some way.  One of the ideas I touched on a while back was that the markers themselves could be printed as a kind of limited edition print, which would be displayed in the gallery.  I can imagine though that if you were to walk into the gallery and, for the sake of argument, be confronted by three black-and-white prints that appear almost as pixelated images, it might be difficult to make out what’s going on.  Are people going to see markers they can scan?  Or some kind of new aesthetic glitch?  Personally, I fear the glitch is the more likely.

    It’s perhaps a little on the obvious side, but what if I were to photograph the markers in the kinds of environments they’re being left in?  Then you have the markers in their own right, but they also appear as part of another image - so rather than being confronted by three pixelated markers, you see three photographs, and perhaps it’s not immediately obvious the marker is there.

    This one here is just a test that I literally threw together in five minutes - a marker found within the pages of Roland Barthes’ Camera Lucida.  It’s not a particularly interesting image, but it was useful to check that the computer can still read the markers when they’re printed in this way.  It can, as you can see from the screenshot.  One thing that I can definitely confirm is that glossy paper and AR markers do not mix; matte is the way forward.

    Next comes the problem of the AR itself.  I realised the other day that there’s actually something quite absurd about inviting people to scan these markers.  The project is about finding images, not creating them.  The act of scanning the markers is in many ways closer to taking a photograph than discovering one.  I feel that this is not helping the work - that the experience of scanning the markers has to be more like the experience of viewing photography.  Perhaps I can create some kind of album, or some kind of modified negative viewer; something that we relate to viewing a photograph, not taking it.

  10. I’ve been struggling a little bit to figure out where to go next.  The presentation certainly helped me to think a little more objectively about the project, and last week I started trying some things out to see where they took me.  As I said in my last post, perhaps the project should be about the images that we don’t print: the everyday images of the family pets and days at the beach.  I also said that perhaps the photographs the markers link to shouldn’t be the ones that I have found out in the world; it seems to make more sense that they’re digital, shared digitally.

    So… I started playing around with grabbing images from Flickr.  I’d made an earlier sketch in Processing using the Romefeeder library, which basically allows me to get an RSS feed from Flickr with a few different parameters and incorporate the images into my own sketch.  For example, I could look for recent images uploaded with the tag “puppies”, or a feed from a specific user.  This was a perfectly workable method, but I really wanted a more controllable way to search for images on Flickr, which meant a more complex piece of code.

    While I was working on the Brownie Digital project for Physical Computing last semester, I came across the FlickrJ library.  Basically, it’s the Flickr API in Java, which means it’s easy to integrate into Processing.  I used it in the Brownie Digital project to enable users to upload their photos to Flickr, using this code.  Since I’m rather clueless on exactly how to go about using the Flickr API directly, I thought I would give the FlickrJ library a go and see what I could do.

    Admittedly, it took a fair bit of swearing, Googling, and staring blankly at the screen before I could get anything useful out of it.  But I did.  I started by simply learning how to ask the Flickr Pandas for photographs (yes, Flickr has Pandas).  Basically, the Pandas are just programs which each provide a different stream of photographs, though there’s not really much way of controlling what you get from them.  However, once I had some photographs I was able to start extracting data from them - whether they had been geo-tagged, for example.

    From there, I was able to build a simple application in Processing which would search Flickr for photographs with a certain tag.  I’ve been using ‘puppies’ a lot because I have the image of the three dogs as one of my markers.  I then moved on to looking for photographs which were not only tagged with ‘puppies’ but had also been geo-tagged, so I could tell exactly where they’d been taken.  Finally, I tried to find images that matched both my search tag and a location, in this case roughly the area enclosed by the M25.  This can be made more specific - for example, if I wanted to search for photographs taken within a mile radius of Goldsmiths - but I’ve kept it simple for now.
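
    The three stages amount to little more than adding parameters to the same FlickrJ search.  A condensed sketch - the key and secret are placeholders, the bounding box is a rough guess at the M25, and the package names are from the older com.aetrion builds of the library:

    ```java
    import com.aetrion.flickr.*;
    import com.aetrion.flickr.photos.*;

    void setup() {
      try {
        Flickr flickr = new Flickr("YOUR_API_KEY", "YOUR_SECRET", new REST());
        SearchParameters params = new SearchParameters();
        params.setTags(new String[] { "puppies" }); // stage 1: tag only
        params.setHasGeo(true);                     // stage 2: tag + geo-tagged
        // stage 3: tag + geo + bounding box, roughly the area inside the M25
        // (min lon, min lat, max lon, max lat)
        params.setBBox("-0.5", "51.25", "0.3", "51.7");
        PhotoList results = flickr.getPhotosInterface().search(params, 50, 1);
        println(results.size() + " photographs found");
      } catch (Exception e) {
        println("Search failed: " + e);
      }
    }
    ```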

    Having achieved this, I thought I might as well jump straight ahead and see if I could make it work with my markers.  I took the two markers - the girl on the swing and the three dogs - and linked these to Flickr search tags related to the photographs: ‘swings’ and ‘puppies’.  It works!  Although there are issues - the program runs slowly, for one - I’m quite pleased with the results.
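
    The link from marker to search is just a lookup: each marker ID maps to the tag for its photograph.  A minimal sketch, with markerDetected() standing in for whatever callback the AR library actually fires:

    ```java
    HashMap<Integer, String> markerTags = new HashMap<Integer, String>();

    void setup() {
      markerTags.put(0, "swings");  // the girl on the swing
      markerTags.put(1, "puppies"); // the three dogs
    }

    // Stand-in for the AR library's detection callback.
    void markerDetected(int markerId) {
      String tag = markerTags.get(markerId);
      if (tag != null) {
        println("Searching Flickr for: " + tag);
        // from here, run the tag search shown above and display the result
      }
    }
    ```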

    The searches mostly return the kind of images you would expect: cute photographs of puppies and snaps of children’s play-parks.  But they also return some odd results, like the image above.  For those of you unfamiliar with the TV comedy Black Books (one of the funniest shows ever made, in my humble opinion), this is a photograph of the real-life shop Collinge and Clark, which was used as the setting for the show.  I didn’t have the sense to check which tag returned this result, but it was either ‘swings’ or ‘puppies’.  Personally, I can’t see anything in the image to suggest why it would have been given either of these tags.

    It raises potentially interesting questions, much like the Google image dictionary, about how we catalogue things online, how we go about finding them, and how easily things can be taken out of context through the metadata attached to them.  At present I’ve been using my own search tags based on the images I’ve found, but I wonder if it might be interesting to ask other people to ‘tag’ the images.  I was interested in how the Descriptive Camera made its returned image descriptions highly subjective, so perhaps a similar principle could be applied to this project.