Algorithmic Visibility at the Selfie Citizenship Workshop
The Selfie Citizenship Workshop was held on the 16th of April at the Digital Innovation Centre at Manchester Metropolitan University, and brought together researchers from various disciplines, fields, and backgrounds to explore the notion of 'selfie citizenship' and how the selfie has been used for acts of citizenship. The event was very well tweeted, using the hashtag #selfiecitizenship, and generated over 400 tweets during the day; a network analysis of tweets at the event can be seen here. The event was sponsored by the Visual Social Media Lab, Manchester School of Art, Digital Innovation, and the Institute of Humanities and Social Science Research.
The talk that stood out to me most was by Dr Farida Vis, titled 'Algorithmic Visibility: EdgeRank, Selfies and the Networked Photograph'. One reason for this is that I once wrote a blog post briefly outlining Farida's talk on algorithmic culture at the Digital Culture Conference: Improving Reality.
The talk at this workshop centred on an image that Farida saw pop up in her Facebook news feed. The image was shown to her because one of her friends had commented on it: because of their perceived close tie, that is to say, because they were Facebook friends, the image was surfaced in her feed too. The image was of an Egyptian protester displaying solidarity with Occupy Oakland by holding a homemade cardboard sign with the caption 'from Egypt to wall street don't afraid Go ahead #occupyoakland, #ows'.
Occupy Wall Street (OWS) refers to the protest movement which began on September 17th, 2011 in Zuccotti Park, in New York City's Wall Street financial district. The movement received global attention and grew into an international Occupy movement against social and economic inequality across the world; hence an Egyptian protester holding a sign carrying both the #occupyoakland and #ows hashtags.
The image left an impression on her, especially its composition: the sign and the man's face, presumably inviting us to look at his face. Months later she attempted to find the image again, and was surprised to discover that she could not locate it anywhere on her friend's wall. It was as if she had never seen the image in the first place. This raised the question: how do people locate images on social media? That is, if you see an image, do not retrieve it at the time, and cannot find it again later, how would you go about locating it? In this case, she knew that the image was about the Occupy movement and was related to Egypt, so she combined these as search queries and, with some detective work, was able to locate the image.
She found that the photographer had uploaded a range of images to a Facebook album, including one very similar to the image she was searching for, except that in it the protester had his eyes closed. Surprisingly, this image had exactly the same number of likes as, and more shares than, the image she had originally seen. Yet this series of similar images from the same protest had never been made visible to her. She argued that we should think critically and carefully about the different structures for organising images, which vary across platforms, and about how images are made visible to us.
How, for example, does EdgeRank decide which image to show us? EdgeRank is the name given to the algorithm that Facebook once used to decide which stories should be displayed in a user's news feed. Facebook no longer refers to the algorithm as EdgeRank internally, and now employs a more complex news feed algorithm. EdgeRank ranked three elements: affinity, weight, and time decay; the current algorithm, which does not have a catchy name, reportedly takes into account over 100,000 factors in addition to EdgeRank's original three. I would argue that even understanding what an algorithm is, in this instance, is difficult. When you then attempt to understand the workings behind the algorithm, you find that this is not possible: the methods that Facebook, for example, uses to adjust the parameters of the algorithm are proprietary and not available to the public. Moreover, even if we do understand how images are made visible, we are still taking the images themselves as a given.
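To make the three elements concrete, the widely cited description of EdgeRank can be sketched as a sum, over the interactions ('edges') attached to a story, of affinity × weight × time decay. The sketch below is illustrative only: the actual weights and decay function were never made public, so the exponential decay and the example values here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    """One interaction on a story (a like, comment, share, etc.)."""
    affinity: float   # assumed 0-1 tie strength between viewer and actor
    weight: float     # assumed per-interaction-type weight, e.g. comment > like
    age_hours: float  # hours since the interaction occurred

def edge_rank(edges, half_life_hours=24.0):
    """Score a story as the sum of affinity * weight * decay over its edges.

    The exponential half-life decay is an assumption for illustration;
    Facebook never published the real decay function.
    """
    score = 0.0
    for e in edges:
        decay = 0.5 ** (e.age_hours / half_life_hours)
        score += e.affinity * e.weight * decay
    return score

# A fresh comment from a close friend outranks an old like
# from a distant acquaintance under this sketch.
fresh = edge_rank([Edge(affinity=0.9, weight=3.0, age_hours=0.0)])
stale = edge_rank([Edge(affinity=0.1, weight=1.0, age_hours=48.0)])
```

Under this model, the image Farida was shown scored highly for her precisely because a friend's recent comment contributed a high-affinity, high-weight, barely decayed edge.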
Algorithms can also get it wrong. Take the example of the Facebook Year in Review feature, which received much press coverage: it displayed to one user a photograph of his recently deceased daughter, to another a picture of his father's ashes, and in one case showed a user a picture of their deceased dog.
This was raised in one of the Q&As: changes to features on social media need to be better documented. This matters in this context, as the image was in a Facebook album, a feature that is not as widely used today. In my own work, for example, I have found that Twitter has implemented several new features, which are difficult to document and to connect back to data sets in which those features were not present. A further point raised in the Q&As that I found interesting was Twitter users 'hacking' the platform in order to share Instagram images on Twitter after Instagram images stopped appearing there; IFTTT, for example, allows users to connect Instagram to Twitter.
Overall, I thought the talk highlighted very well the importance of thinking about the conditions under which an image is shown to us, and also about what is not shown to us. As a social media user, and a Facebook user, I see images, videos, and links pop up in my news feed. I had not given much thought to the conditions of their visibility, or to the fact that an algorithm taking into account over 100,000 factors was deciding what would appear there.