From darkness to light: how I overcame terrible anxiety and made it to BBC Radio Sheffield

Being an undergraduate was very difficult for me. I was plagued by serious mental health issues, low self-esteem, loneliness, and isolation. Talking about how I felt was not something I could do. The only way I made it through my undergraduate degree was that I was assigned a Disability Adviser, who drew up a plan that made me feel comfortable attending seminars and lectures. They were very supportive. The University of Sheffield has a great set of support services.

I went on to study for a Masters, and this was a great year in which I made many friends, though after it I would experience a lot of lows. I struggled to find a job, I had bad anxiety, and my self-esteem was low. On one occasion, I did not leave my house for a week. Most people I knew had moved away from Sheffield, so I felt very lonely, and I am really glad I made it out of this period alive. Still, I felt empty inside. I often listened to Al Pacino’s speech from Any Given Sunday:

We are in hell right now, gentlemen, believe me. And we can stay here and get the shit kicked out of us, or we can fight our way back into the light. We can climb out of hell. One inch at a time.

Feeling the lows I felt, starting a PhD in September 2014, I had nothing to lose. Slowly, I started to fight off my anxiety. I took small steps: saying ‘Hello’ to strangers, recording myself speaking and listening back to it, delivering very short presentations and increasing their length bit by bit, and volunteering for temp work where I would have to speak to people. By taking very small steps I managed to fight my fears, to climb out of hell.

The last two years or so have been good. I have met many people from across the world, I have made new friends, I have worked on a number of interesting and important projects, and I have delivered a number of talks.

[Image: collage]

I made it to BBC Radio Sheffield last week to talk about my journey. I would like to dedicate the achievement to all of the brilliant teachers and support workers who helped me over the years.

You can listen to the interview here; it starts at the 1 hour 35 minute mark.

How a blog post I wrote took me to Split, Croatia (Part 2)

This blog post continues from part 1 of this two-part series and looks at my short time in Split, Croatia.

I arrived a day before the conference, so I had some time to explore the absolutely wonderful city of Split. As we walked around we could see some beautiful views:

[Image: amazing views]

I was able to work on my workshop by the coast:

[Image]

I noticed that many Olympic medallists were honoured on a display by the coast of Split:

[Image: gold medallists]

We headed to De Belly, a beautiful restaurant in the heart of Split and were able to get our hands on some fantastic dessert. I’d highly recommend this restaurant to anyone visiting central Split.

[Image]

After that, it was back to the hotel to work on my workshop, so that was my first day in Split!

Overall I really enjoyed my time in Croatia, and I hope to visit again soon. The people are very friendly, and unlike some holiday destinations, you are not hassled by locals to purchase anything.


How a blog post I wrote took me to Split, Croatia (Part 1)

On July 7th, 2015 the Information School ran the iFutures conference (I served on the committee and ran the social media strategy). It was at this conference that I met Sergej Lugovic, from Zagreb University of Applied Sciences, Croatia.

I had submitted a blog post to the LSE Impact Blog and was unsure whether it would be published; Sergej assured me that they would like the post. Three days after the conference, on July 10th 2015, the article was published. I kept in touch with Sergej, and he saw how well my blog post had done, receiving thousands of hits and shares.

On June 14-18, 2016, as part of the Contemporary Issues in Economy and Technology (CIET) conference, Sergej was able to organise a workshop on Twitter analytics for me to deliver. Below is an image of me and Sergej shortly before the workshop:

[Image]

The workshop marked the first ever collaboration between Zagreb University of Applied Sciences, the University of Split, the Information School at the University of Sheffield, and one of the largest food companies in the world by revenue, ranked within the top 100 of the Fortune Global 500 in 2014.

All thanks to Sergej’s hard work.

I would also like to thank Dr Boze Plazibat for his hard work in organising the conference, and for providing a tour of the Department of Professional Studies, which hosts state-of-the-art facilities. I was truly impressed by the department. Below is an image of me with Dr Boze Plazibat, the CIET 2016 conference organiser:

[Image]

Split is a beautiful city, and as I arrived a day early and left a day later, I had the pleasure of doing some sightseeing and speaking to locals. This will be covered in part 2 of the blog post.

Algorithmic Visibility at the Selfie Citizenship Workshop

The Selfie Citizenship Workshop was held on the 16th of April at the Digital Innovation Centre at Manchester Metropolitan University, and brought together researchers from various disciplines, fields, and backgrounds to explore the notion of ‘selfie citizenship’ and how the selfie has been used for acts of citizenship. The event was very well tweeted using the hashtag #selfiecitizenship, generating over 400 tweets during the day; a network analysis of tweets at the event can be seen here. The event was sponsored by the Visual Social Media Lab, Manchester School of Art, Digital Innovation, and the Institute of Humanities and Social Science Research.

[Image]

The talk that stood out to me the most was by Dr Farida Vis, titled ‘Algorithmic Visibility: EdgeRank, Selfies and the Networked Photograph’. The reason for this is that I once wrote a blog post in which I briefly outlined Farida’s talk on algorithmic culture at the Digital Culture Conference: Improving Reality.

The talk at this workshop centred on an image that Farida saw pop up in her Facebook news feed. The image was shown to her because one of her friends had commented on the picture; due to their perceived close tie, that is to say, because they were Facebook friends, the image was also shown to her. The image was of an Egyptian protester displaying solidarity with Occupy Oakland by holding a homemade cardboard sign with the caption ‘from Egypt to wall street don’t afraid Go ahead #occupyoakland, #ows’.

Occupy Wall Street (OWS) refers to the protest movement which began on September 17th, 2011 in Zuccotti Park, in New York City’s Wall Street financial district. The movement received global attention, which led to an international Occupy movement against social and economic inequality across the world. Hence the Egyptian protester holding a sign with both the #occupyoakland and #ows hashtags.

The image left an impression on her, especially its composition; the sign and the man’s face, presumably inviting us to look at his face. Months later she attempted to locate the image, and was surprised to find she could not locate it anywhere on her friend’s wall. It was as if she had never seen the image in the first place. She asked, then, how do people locate images on social media? That is to say, if you see an image, do not save it at the time, and are later unable to find it, how would you locate it? In this case, she knew that the image was about the Occupy movement and was related to Egypt, so she combined these as search queries and, with some detective work, was able to locate the image.

She found that the photographer had uploaded a range of images to a Facebook album, and that there was an image similar to the one she was searching for, but in which the protester had his eyes closed. Surprisingly, this image had exactly the same number of likes as, and more shares than, the original image. However, this series of similar images from the same protest was not made visible to her. She argued here that we should think critically and carefully about the different structures for organising images, which can vary across platforms, and about how images are made visible to us.

For example, how does EdgeRank decide which image to show us? EdgeRank is the name that was given to the algorithm Facebook once used to decide which stories should be displayed in a user’s news feed. Facebook no longer refers to its algorithm as EdgeRank internally, and now employs a more complex news feed algorithm. EdgeRank ranked content using three elements: Affinity, Weight, and Time Decay. The current algorithm, which does not have a catchy name, takes into account over 100,000 factors in addition to EdgeRank’s three. I would argue that just understanding what an algorithm is, in this instance, is difficult. Then, when you attempt to understand the workings behind the algorithm, you find that this is not possible, as the methods that Facebook, for example, use to adjust the parameters of the algorithm are considered proprietary and are not available to the public. Moreover, even if we do understand how images are made visible, we are still taking the images themselves as a given.
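
To make the idea of those three elements concrete, here is a toy sketch of an EdgeRank-style score. It is purely illustrative: the multiplicative form, the half-life, and the example weights are my own assumptions, not Facebook’s actual (proprietary) formula.

```python
import math
import time

def edge_score(affinity, weight, created_at, half_life_hours=24.0):
    """Toy EdgeRank-style score: affinity * weight * time decay.

    affinity   - how closely the viewer interacts with the content's creator
    weight     - how the platform values the edge type (comment > like, etc.)
    created_at - Unix timestamp of the interaction
    The exponential decay and the 24-hour half-life are illustrative guesses.
    """
    age_hours = (time.time() - created_at) / 3600.0
    time_decay = math.exp(-age_hours / half_life_hours)
    return affinity * weight * time_decay

def rank_story(edges):
    """A story's visibility score is the sum of its edge scores."""
    return sum(edge_score(a, w, t) for a, w, t in edges)

# Hypothetical example: a close friend's comment from 6 hours ago versus
# a distant acquaintance's like from 2 days ago.
recent_comment = [(0.9, 3.0, time.time() - 6 * 3600)]
old_like = [(0.1, 1.0, time.time() - 48 * 3600)]
print(rank_story(recent_comment), rank_story(old_like))
```

Even in this toy version it is easy to see how an image a friend commented on yesterday outranks a very similar image nobody in your network touched, which is roughly the situation Farida described.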

Algorithms can also get it wrong; take the example of Facebook’s Year in Review feature, which received much press coverage after displaying to one user a photograph of his recently deceased daughter, to another a picture of his father’s ashes, and, in one case, showing a user a picture of their deceased dog.

A point raised in one of the Q&As was that changes to features on social media platforms need to be better documented. This is important in this context, as the image was in a Facebook album, a feature that is not used as widely today. In my own work, for example, I have found that Twitter has introduced several new features, which are difficult to document and to connect back to datasets collected before those features existed. A further point raised in the Q&As that I found interesting was that of Twitter users ‘hacking’ the platform in order to share Instagram images on Twitter after Instagram images stopped appearing there; IFTTT, for example, will allow users to connect Instagram to Twitter.

Overall, I thought the talk highlighted very well that it is important to think about the conditions under which an image may be shown to us, and also about what is not shown to us. As a social media user and a Facebook user, I see images, videos, and links pop up on my news feed. I had not given much thought to the conditions for their visibility, or to the fact that an algorithm taking into account over 100,000 factors was deciding what would appear on my news feed.

Almost 6 months of PhD!

My six month progress report is due in soon, so I decided to do a blog post about some of the topics and issues I have encountered and am currently battling with. I am looking at pandemics and epidemics on Web 2.0. More recently, I have been investigating the Ebola epidemic, and I have been collecting Ebola-related tweets.
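
For readers curious what keyword-based tweet collection can look like in practice, below is a minimal sketch using the tweepy library (the older 3.x StreamListener interface). The credentials and output file are placeholders, and this is not necessarily the exact pipeline I use.

```python
import json
import tweepy

# Placeholder credentials - real ones come from a registered Twitter app.
CONSUMER_KEY, CONSUMER_SECRET = "xxx", "xxx"
ACCESS_TOKEN, ACCESS_SECRET = "xxx", "xxx"

class EbolaListener(tweepy.StreamListener):
    """Append every matching tweet to a newline-delimited JSON file."""
    def on_status(self, status):
        with open("ebola_tweets.jsonl", "a") as f:
            f.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        # Returning False on a 420 disconnects the stream (rate limiting).
        return status_code != 420

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
stream = tweepy.Stream(auth=auth, listener=EbolaListener())
stream.filter(track=["ebola"])  # collect tweets mentioning the keyword
```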

Big data

Big data is a current buzzword within academia and is considered by some to be the new oil. However, keeping with the oil analogy, is it real oil or snake oil? This issue was chronicled by Simon Moss in the Wired article ‘Big Data: New Oil or Snake Oil?’, where he discusses the issue of normalising big data in an organisational sense. My issue is that of information quality: the data is big, but at times it is of poor quality. When the data is filtered it is not as big as it once was, and so it becomes little data. However, this small or little data is much more valuable than the larger set of data.
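
As a rough illustration of how ‘big’ data shrinks once quality filters are applied, here is a sketch using pandas. The file name, column names, and filtering rules are hypothetical; the point is simply that each filter trades volume for quality.

```python
import pandas as pd

# Hypothetical file with 'text' and 'lang' columns, purely for illustration.
tweets = pd.read_csv("ebola_tweets.csv").dropna(subset=["text", "lang"])
print("raw tweets:", len(tweets))

filtered = tweets[
    (tweets["lang"] == "en")                              # English tweets only
    & ~tweets["text"].str.startswith("RT @", na=False)    # drop simple retweets
    & (tweets["text"].str.len() > 20)                     # drop very short tweets
]
print("after filtering:", len(filtered))
```

What is left is ‘little data’, but it is the subset I can actually say something meaningful about.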

Ethics

Ethical issues are ever present in social media research. The argument in favour of using Web 2.0 data for research centres on whether the data is in the public domain, which raises questions about informed consent. Do Twitter users know that I am gathering this data? If I asked for consent for a tweet on Ebola that I captured in August, would I even get a reply? There is a sense, as a Twitter user, that after a while a tweet you send simply goes away. Thus, it is imperative that Twitter users are involved in the decision-making process when ethical issues are discussed. This was discussed at a conference I attended in November, Picturing the Social: Analysing Social Media Images.

Algorithms

I recently viewed a talk by Farida Vis which formed part of the Digital Culture Conference: Improving Reality. Farida provided a very well-articulated example of the human influence on an algorithm: an advert on Facebook promoting an assisted reproduction programme, accompanied by a picture of a baby. Farida argues that this reflects how those who programmed the algorithm understand gender normative issues; that is, those who wrote the code held a schema whereby they believed a woman of a certain age should have children. More recently, on Twitter I saw an advert for a laptop with the caption ‘Costs less than what you spend on Pizza last year’, which resulted in livid responses, e.g. ‘Twitter what are you trying to say?’. This advert could have been targeted at all users, so it may not be the best example of a targeting algorithm. A further example is the adverts for educational courses that Facebook showed me before I started university, which raises the question of how much influence social media has on young adults. There is scope here, also, to examine how websites such as Amazon create suggestions. How does their algorithm work? And where do human schemas fit into this?
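
To give a flavour of how product suggestions can be generated at all, here is a toy item-to-item co-occurrence recommender. This is emphatically not Amazon’s actual algorithm, which is proprietary; the purchase data and the scoring are invented for illustration, and the human schemas enter through choices such as what counts as a ‘basket’ and which signals are counted.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: user -> set of items bought.
baskets = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob":   {"laptop", "mouse"},
    "carol": {"keyboard", "monitor"},
}

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(int)
for items in baskets.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, top_n=3):
    """Suggest the items most often bought alongside `item`."""
    scores = {b: c for (a, b), c in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("laptop"))  # e.g. ['mouse', 'keyboard']
```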

Methods

When talking about methods there is a tendency to select either a quantitative or a qualitative research philosophy. However, in social media research a mixed-methods approach will yield richer results; that is, a method of analysis such as network analysis should be complemented with content analysis. If we limit ourselves to a particular research philosophy we will learn less from the data, so I hope to employ a range of methods in analysing my own data. A related issue around methods is the cost of big data. Big data is certainly out of reach for most academics, and this is further exacerbated by stringent terms and conditions which restrict data sharing. Whether the data is available for free, or whether there is a tool to obtain it, is also shaping the platforms I look at.
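
As a small illustration of what the quantitative half of such a mixed-methods design might look like, the sketch below builds a mention network from collected tweets using networkx; the qualitative half would then involve reading and coding the tweets around the central accounts. The file name is a placeholder, and the field names assume the standard Twitter JSON format.

```python
import json
import re
import networkx as nx

G = nx.DiGraph()

# Hypothetical newline-delimited JSON file of collected tweets.
with open("ebola_tweets.jsonl") as f:
    for line in f:
        tweet = json.loads(line)
        author = tweet["user"]["screen_name"]
        # Add a directed edge from the author to every user they mention.
        for mention in re.findall(r"@(\w+)", tweet["text"]):
            G.add_edge(author, mention)

# Quantitative step: who sits at the centre of the conversation?
central = sorted(nx.in_degree_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)[:10]
print(central)
# Qualitative step (not shown): read and code the tweets sent by and to
# these central accounts, complementing the network view with content analysis.
```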

Images

In my dataset of tweets, images occur with great frequency and are often represented as a block of web links when scrolling down a spreadsheet. When I start to filter the dataset, should I remove these links? One observation about big data is that it is associated with words rather than images; however, I would argue that images on Twitter form part of a larger network of big data. According to one estimate, 250 million images are shared on Twitter daily, yet they are overlooked in the majority of Twitter research. During the 2009/2010 H1N1 epidemic, and the various subsequent outbreaks, images must have been shared on Twitter, and those images would have formed an integral part of how a person subsequently thinks about outbreaks. However, there was no evidence-based research examining these images. Comparing images from different time points allows us to see whether the narratives told via images remain the same or change.
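
Rather than deleting the image links outright when filtering, one option is to pull them into their own column so they can be analysed later. A minimal sketch is below; the file name, column name, and the assumption that pic.twitter.com links mark native images are all illustrative.

```python
import pandas as pd

# Hypothetical file with a 'text' column, purely for illustration.
tweets = pd.read_csv("ebola_tweets.csv").dropna(subset=["text"])

# pic.twitter.com links are the usual giveaway that a tweet carries a native image.
IMAGE_PATTERN = r"pic\.twitter\.com/\w+"

tweets["image_links"] = tweets["text"].str.findall(IMAGE_PATTERN)
with_images = tweets[tweets["image_links"].apply(len) > 0]

print(len(with_images), "of", len(tweets), "tweets contain image links")
```

Keeping the links in a separate column means the text can be cleaned for content analysis while the images remain available for a visual analysis of how outbreak narratives are told.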

In-text references:

The Wired News Article I mentioned can be found here: http://www.wired.com/2014/10/big-data-new-oil-or-snake-oil/

The talk by Farida Vis on algorithmic culture I mentioned can be found here: https://www.youtube.com/watch?v=WBXddqzIZTA

[Edited on 26/01/15]