Behind and in front of algorithms: A conversation through the screen

by Elinor Carmi & Martina Mahnke


[Image found here.]

Being on the Internet means searching for information. It means digging into what is simultaneously new and old. It means searching for the known while finding the unknown. Being on the Internet means a constant negotiation with algorithms.

How do we get algorithms to do what we want them to do? Does using algorithms mean tricking “the man” behind them? Is “he” trying to help us dig through the graveyard of information, or does “he” have other intentions? Why are we using these tools? Is it because we have no other options, or because we simply can’t create our own algorithms? If Google weren’t free, would it be this big? What would happen if regulators said that nothing on the Internet can be free anymore? How do we negotiate control in relation to the way information is presented to us on the Internet?

Two media scholars started a conversation about algorithms and how to make sense of them. This is not an academic article; we do not reach the catharsis of a final conclusion. If anything, this chat digs into the hotly debated subject of the mathematical equations that organize the way we receive and interact with information on the Internet. It is an attempt to poke at the black box that is our screen, but it is merely the beginning, a chat – no more, no less.

ON FACEBOOK (slightly edited and shortened version)

[22-11-2013 4:04 pm Elinor]

I think there is some confusion about programmers and the ownership of software or applications. Programmers employed by Google or Facebook do not own anything. They are working for a corporation and usually don’t have much power or control over what they produce. In addition, these algorithms are a product of many other actors, such as different standardization organisations: the FCC, W3C, OSI, etc. Therefore, we have to understand that this issue is actually an amalgamation of many actors who shape algorithms.

[22-11-2013 4:35 pm Martina]

I actually like to put the emphasis on the human–algorithm interaction and understand algorithmic output as the result of an interaction between the user and the algorithm. Simply put: no user, no algorithm, no output. Further, I think we focus too much on the institutional and structural components. I think it’s important to talk about individual interaction on the micro level. Yes, we can just ‘blame’ the algorithm or, on an institutional level, ‘Google’, but this will always end up in a power fight. Who’s right? Who’s wrong? This will never change anything. Therefore, we have to ask what we as individual users can do. In most research, users are understood as passive and non-influential. Hence, we need to start reflecting on our behavior as users.

[22-11-2013 5:26 pm Elinor]

Putting the responsibility on the user is exactly what neo-liberalism is all about. The ‘power fight’ you are talking about is exactly what is needed, IMO, and it is usually silenced and reframed as the end user’s ‘fault’. I do agree with you that as users AND citizens we have something to do, but it does not end merely with knowing what algorithms do. Instead of thinking of some users as stupid, perhaps we should ask: why are programmers so obsessed with user experience?

[22-11-2013 5:30 pm Elinor]

In other words, changing users’ knowledge of their ‘online’ behavior is only one step users can take. But corporations and governments also have to be accountable for the products (in this case algorithms) provided to citizens. Not all people have the privilege of sufficient literacy in programming languages, especially since these are intentionally designed to be ‘black-boxed’. So I disagree with you that it is only the users’ responsibility, since this assumes all users and citizens have the same economic and cognitive abilities.

[22-11-2013 5:54 pm Martina]

I would even go further and say we teach our children the wrong subjects … I don’t think we disagree, as I haven’t said it’s ‘just’ the user but BOTH – I just focus on the micro level. What we might disagree on is what kind of regulation we want. It seems like I’m more on the programmers’ side, wanting liberation. I don’t want some government to decide; I’d rather engage in literacy and figure out the ‘black box’. Why should governmental governance be any better?

CONTINUED ON GOOGLE DOCS (slightly edited and shortened version)

[22-11-2013 Elinor]

Programmers are usually a white male elite that invented this language, and I hardly think we should automatically adopt what they think is the right and only language with which to talk about and build the Internet. If we are to take the power to decide, then I decide not to use this language. Oh, but wait, can I really? I do agree with you that governmental regulation can be problematic, but I’m not sure the current situation is much better.

[25-11-2013 Martina]

I do see two lines of argumentation here. One is a very normative one: What is the ‘right’ thing to do? Who takes on power? How do we regulate? Shouldn’t we just let the people decide? If people think it’s useful and use it, why not … The other one, number two, is what we are really talking about: cultural interrelations. Yes, mankind is not as free as we might hope; we are bound to our heritage. Therefore, I suggest more literacy skills. Back to: we need to teach our children different subjects.

[28-11-2013 Elinor]

I agree these questions are hard, but I also believe we have to confront them; otherwise they are answered for us rather than by us. I am not even sure that the question is ‘what is the “right” thing to do’. Perhaps there is no right answer to this, but are we even given options? Do we have access to the decision-making procedures of such algorithms, and if not, can we at least have some kind of transparency in terms of what they actually do? These direct questions have to be answered, mainly because algorithms have a direct influence on our digital (and material) lives: structuring what we see on the Internet, deciding what kind of prices we get on various websites, deciding how we interact with each other, deciding how we interact with other commercial and non-human agents, etc. If such an architectural design of the online environment has such far-reaching consequences for our lives (decisions, actions, thoughts, feelings), shouldn’t we at least have some idea of what these algorithms actually do, how they organize the information we engage with, and when their equations are changed? THEN literacy will make sense, because we will be confronted with these algorithms and have to understand how they work – not as passive agents, but as active ones who can take control over the way their online environment is designed and shaped.

[30-11-2013 Martina]

This I think is interesting: “Do we have access to the decision-making procedures of such algorithms?”. What I read out of this question is the understanding that algorithms are ‘decision-making procedures’. What does that actually mean? Starting very naively: let’s assume you drive a car and you reach a junction; you have three possible ways to go and you need to choose one. The decision that follows is bound to certain conditions. Maybe you’d like to take the fastest way, or maybe the most beautiful way. This may or may not influence your choice; however, you need to select. Making a decision therefore means selecting something. You need to go this way and not the other, meaning you leave one road behind. After a while you reach another crossroad and the same scenario occurs. You need to select again, and once again you leave something behind. What is problematic about this scenario is that you can only do either-or. This seems to apply to algorithms as the underlying structure of algorithmic media as well. The news items we finally see seem to have been selected over others, leaving the impression that we will never get to see those others. From a user’s perspective it seems kind of random. Why so? Because we look at the content. We just relate to the content and ask ourselves why we see this and not that. The underlying processes of algorithmic media, however, track users’ behavior. Users’ behavior is quantified. Here’s a quote from my interview material:

“And it worked in a way that we know that we gave you a list, ordered from 1 to 10. But you read, actually you clicked on item number 3 first. We inferred that you prefer the content of number 3 to number 1 and 2. And that gives us this next time, if we get any content that is very similar to one and two and content that is similar to 3, then we can assume that because you preferred it last time, you might prefer it this time and we’ll put it first. But if now again you choose the third item, then it switches back. That’s why it keeps sort of (u) what’s going on. If I give you old news at the top and it’s not interesting to you anymore, you gonna read the Johnny Depp item, then we know ok, she prefers that always and it’s always before other stuff. That’s how the only idea works behind it …”

So one could argue that the decision is actually made by the user and by where she clicks. Seen this way, it makes sense when programmers say:

“We don’t filter out. We only sort it.”

“Filtering is not very clever. I mean it’s like if you have information overload and you just drop sources so you didn’t really solve the problem (…) yeah, you solve the problem but not very wise. You just give up on information.”

“It doesn’t make decisions. It gives rank for things. So it’s like mathematical formula. You give the number inside and the formula gives you a rank.”
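
To make the logic of these quotes a bit more tangible, here is a minimal sketch of what such sorting-by-clicks could look like – purely hypothetical Python with invented names, not code from any of the interviewed programmers’ systems. It only re-orders a given list according to which topics the user clicked on before; nothing is dropped:

```python
# A minimal, purely hypothetical sketch of click-driven re-sorting.
# Not code from any real system; names are invented for illustration.
from collections import defaultdict

def resort(items, clicked_topics):
    """Re-order items by how often the user clicked items of the same topic.

    items: list of (item_id, topic) tuples in the originally given order.
    clicked_topics: topics of items the user clicked on in earlier sessions.
    Nothing is filtered out; the list is only re-sorted.
    """
    clicks = defaultdict(int)
    for topic in clicked_topics:
        clicks[topic] += 1
    # More past clicks on a topic -> higher position; ties keep the original order.
    return sorted(items, key=lambda item: -clicks[item[1]])

# Example: the user clicked the Johnny Depp item twice and a politics item once.
news = [("item1", "politics"), ("item2", "economy"), ("item3", "johnny depp")]
print(resort(news, ["johnny depp", "johnny depp", "politics"]))
# -> [('item3', 'johnny depp'), ('item1', 'politics'), ('item2', 'economy')]
```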

Maybe it’s important to think about what a decision means, for whom, and in which situation. I think it’s just too simple to say ‘algorithms decide’.

[11-12-2013 Elinor]

I will divide my answer according to yours🙂

First, your analogy is good but somewhat misleading, because while it is true that there must be some kind of decision-making, algorithms do not cater to one person (who drives the car); we are talking about huge numbers of people, different backgrounds, different literacy capacities, etc. This means that each decision taken here is so much more crucial, because it touches so many people and their digital lives. Therefore, I see this as a design that should be transparent and clear, rather than opaque and vague as it is today. These decisions, categories, standards and options are not neutral and ‘obvious’ paths to be taken; rather, they are influenced by corporate decisions that are guided by profit, and they should thus be under much higher scrutiny and supervision. As I’ve mentioned before, programmers do not operate by their own free will; they get job tasks from people who aim for a specific user experience that will bring profit in the most frictionless way possible.

– I am not sure that as users we only think about the content. I think – but again I have no data or thorough research to back this – that beyond the content we see, we also care about user experience. And this extends to the kinds of expression tools given to us: How can I present myself on Facebook? Which privacy settings can I adjust to filter different circles in my life, and so on and so forth (I think the Pew Research Center just published something about youth being super conscious about their settings, because they want to hide information from their parents). These are very important and I think people notice that as well. The fact that I have to identify myself with my real name, my offline identity if you like, already signals a very well-planned strategy that suits third-party companies rather than users, who, of course, were not asked about any of Facebook’s design changes.

– Filtering and organising information in a specific way is extremely important, and it shows that these programmers want to do this job FOR the users rather than letting them do it themselves. It sort of resembles those special stands in supermarkets, and even in book stores, for specific products, which are given a more central space because the corporations that make them paid more money for better visibility. Are these the products the users think are more important? Would they choose them if they weren’t so ‘highlighted’? These are important questions, especially since we are talking about different forms of information, some of them very crucial. Therefore, it is not so much like a mathematical formula, since there is an internal bias within these systems from the beginning, so the process is extremely tilted towards corporations that have the resources to make certain forms of information more visible than others. And this has far-reaching consequences for the way we think and understand the world.

***

Concluding through the screen

Google, Facebook, Netflix, Amazon, Spotify and other tech companies are a big part of our (digital) lives. They are here to stay, at least for now, and they rely on algorithms. They shape us and we shape them. They are complex, interrelating the social and the technical. Many actors are involved – corporations, regulators and, most importantly, the users – influencing the mass as well as the individual. We are very much at the beginning of understanding what algorithms do and how they influence us. And we are even more at the beginning of understanding how to deal with them. Is the call for transparency sufficient? Does it even lead us to where we want to go? Is technology really empowering us, or is it time to step back? What does empowerment even mean? And how can we find a way for multiple voices/needs/literacies to have equal access to the main channels of knowledge production?

What science can do, and what we need to do, is develop a language that enables us to understand these complex interrelations – beyond pure mathematics on the one side and the modern hyperbole of mystical algorithms on the other. Stay tuned!

[First published Jan 30th, 2014 at the SummerSchool Blog “Digitization”.]

Day 50 – Writing challenge revisited

50 days ago (in real time more like 60 days ago) I started my PhD writing challenge. My goal was to have a first draft of my PhD by Christmas, and you may wonder how it all turned out. Well, I am still wondering how it turned out myself 😉

I’ll be honest with you. I do not have the intended draft. Hence, measured against the goal I set, I failed big time. However, it has still been a great learning experience, which I’m very thankful for. Now I’ve set my next goal for Easter, hoping to be more successful this time by applying what I learned from this challenge:

  1. Setting the goal of writing a PhD draft in 50 days is simply a bad idea. I don’t think it’s impossible; the goal itself just became too big and too overwhelming. I started to be stressed from the very first day. Therefore, I set smaller goals now. One step at a time. One paragraph at a time. One page at a time.
  2. Creating a writing habit is not as easy as it sounds. While it’s really easy to develop an addiction to chocolate or cigarettes or alcohol, being addicted to writing might simply not be possible – maybe it’s just a great myth. Therefore I (you) need to use discipline to write, whether I (you) like it or not.
  3. Writing is thinking. Thinking is writing. For a long time I underestimated how writing can help clear up the mess in your head. For a long time I had the idea that I first need to know what to write in order to be able to write. This makes the process somewhat painful. Try to use writing as a method to clear up the mess in your head. Thoughts come and go, and they are usually not logically connected. This is just the way it is. Therefore, when I write now, I have three documents open: my PhD document on my computer, a personal journal and a calendar. Once I start wondering about whether I should do laundry or not, I put it down in my calendar – as a ‘reward’ to do after writing (I found out it’s not tempting anymore once I’ve finished writing). In my personal journal I put down all the feelings connected to the writing process: anger, frustration or sometimes even joy when I have had a great idea. This helps me keep my PhD document clean while still being able to deal with all the side-feelings and side-thoughts at the same time.
  4. Have a social life. The longer I set aside for writing, the less I get done. Have you noticed the same? Therefore it doesn’t make sense to set your social life aside just to get more writing done. However, be selectively antisocial. And write when you had planned to write.
  5. Make the PhD your priority. I think unconsciously I’ve been very split about finishing. Do I really want a university career? What is science all about, anyway? Try to stop thinking about all these questions for some time (put them down in your journal) and come back to them once your PhD writing is done for the day. Having doubts is normal, but try not to let them interfere with your writing. And once the writing is done, doubting for the rest of the day doesn’t seem very tempting anymore either …

With this – back to writing 🙂 And if you have any other great writing strategies, please share them in the comments section. Thanks in advance!

Day 31 – Time to disagree with Ashton Kutcher

Some time ago I watched a video of Ashton Kutcher. Actually it wasn’t Ashton Kutcher who grabbed my attention (maybe unconsciously – who knows) but a comment that read something like this: “Ashton Kutcher got the Teen Choice Awards (…) When Ellen brings it up in this interview, what does he do? He answers with another great speech. I refuse to be jealous.” So I wondered: what is this great speech all about? It’s about life. His advice:

1) Get a job.
2) Be sexy.
3) Build your life.

I’ll start with number 3 – build your life. Number three is all about ‘building a life’ rather than ‘fitting in’. I think this is a great observation he’s made and I fully agree. Having an East German background, my upbringing revolved around how to fit in. It was more important to ‘fit in’ than ‘to create’. I think nowadays it’s all about creating. If you don’t want to be overrun by modernity, you need to be a creator and start to actively create your life. Stop being a passive consumer and stop waiting for the ‘right’ thing to come along. Start to take care of yourself; nobody else will. And if you want a different world, you need to engage, otherwise nothing will change.

Number 2 – be sexy. Alright, this is a nice catchphrase. It’s actually about being smart. He says nothing is sexier than being smart. And yes, once again, I agree. Educate yourself and start educating others. Stop making claims and be analytical. There’s so much semi-information out there that it’s important to be reflective. Don’t just take statements you read for granted; question them.

And number 1 – get a job. Kutcher draws on his own experience: he started working at 13. He says he has never quit a job before he had another one, and that this is what young adults should do, arguing that ‘having a job is better than no job’. And no, I disagree. Let’s assume you choose this option and you work at some fast food restaurant. You work long shifts. You work day and night and it’s still hard to make a living. Why? Because you get minimum wage. A lot of these fast food restaurants can only survive because minimum wage is part of their business model. So while you can feel like a hero because you have a job, you also support capitalist markets. And while you still feel like a hero because you have a job, you’re also crazy tired because you work alternating shifts. So I don’t think having just any job is always the best alternative. I think, if you have the opportunity (and I know not everybody has it), get a family loan. I know it’s not easy because we want to be independent, but I think it’s a great opportunity to learn how to find your own way while being dependent. I don’t suggest living off your family, but ask them to give you a low-interest loan and take time to think. And again, this is not about living the big life. (I live with my boyfriend in a 35 sqm apartment, packing a lunch every day.)

The greatest lesson I’ve learned in my PhD so far is that great thoughts don’t come overnight. They need a lot (and I mean a lot!) of time and digestion and trial and error. That great careers come simply from working your ass off washing dishes is a great American myth – it’s not reality. Great inventions need a strong, well-rested individual who doesn’t give up, who doesn’t have to worry about rent or the next meal, who has the opportunity to direct his or her energy into a life project. What I hear in Kutcher’s speech is that young adults are just a lot of lazy kids. I don’t think so. I think they’ve just never been given the opportunity to take time to think – and if they don’t do something, they get labelled as lazy.

Therefore, my advice in addition to the other two is: take your time and think.

Who’s in charge when algorithms are in charge? A question of accountability

Today, reading Anna’s interesting and thought-provoking article, I started thinking about ‘algorithmic accountability’. In her blog post, Anna investigates the ‘aftermath’ of the UN Women awareness campaign. Based on her own observations, she starts by declaring that the campaign raised awareness of Google’s autocomplete and its “veracity” rather than drawing attention to sexism in the world. This discovery brings her to the question “Who is in charge when algorithms are in charge?”. Understanding algorithms as social constructions, she first addresses the programmers who created the software. In this case Google’s staff should take accountability for their product. However, they refuse to. She cites Spiegel: “The company maintains that the search engine only shows what exists. It’s not its fault, argues Google, if someone doesn’t like the computed results.” If the programmers refuse to take accountability on the one hand, and algorithms so clearly make ‘decisions’ on the other, who then, Anna asks, is in charge? The algorithm? The company behind it? The user? She concludes that “accountability goes beyond a binary option of intentionality or complete innocence” and is therefore an “extremely complex issue” involving lots of different stakeholders.

I agree and I’d like to elaborate on the issue of accountability.

I think what her blog post shows very clearly is the different understandings of technology. Humanists understand technology as quintessentially social, focusing on the humans behind it and the process of social construction. The goal of humanistic research is to investigate regulating mechanisms in order to be able to change them. This has been a very successful line of research. The introduction of software, however, pushes it to its limits. It is very hard to get access to companies like Google and Facebook beyond publicly available PR statements.

Technologists, however, understand technology as an object – something that exists outside of the social. In the special case of Google’s autocomplete algorithms it is easy to identify the company behind them. But what about the social implications of computers in general? Whom would we hold accountable for those? Apple, Microsoft, or even Linux? Isn’t the computer, then, rather an object? They have a point. The Spiegel citation also echoes an argument very well known to media scholars and usually brought forth by journalists: they ‘just’ report what is out there.

Where does this leave us? Who’s right? Who’s accountable?

I am personally a big fan of socio-technical systems theory, which emphasizes the quality of the interaction between the social and the technical rather than either part alone. The theory assumes that if the socio-technical relationship is balanced, the system works properly. It therefore aims for joint optimization. Understandable, I think. If we don’t want either side to have all the ‘power’, it needs to be equally split, or at least balanced. So, how can we design the interaction between the social and the technical with balance in mind?

First, I think we need to identify the different parts involved. In the case of Google’s autocomplete algorithms, I’d say the programmers and the users are equally involved. They both contribute with their behavior to the results. The programmers invent the formula, and the users deliver the data that fills the formula and thereby make output possible. Therefore I think both parties need to acknowledge that they are part of this relationship. There is no possibility of ‘opting out’. The user (the humanist) cannot just shift all the responsibility to the technologist. He needs to actively engage by trying to understand the software, reading FAQs (I know, it sounds boring, but it really helps) and trying out different keywords. And programmers, finally, need to understand that their products have social consequences and that they are part of creating those!
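
To illustrate that mutual dependence – again a purely hypothetical sketch in Python, not Google’s actual code – here is a toy autocomplete whose ranking formula is written by a programmer but which produces no output at all until users’ aggregated queries fill it:

```python
# Hypothetical toy autocomplete (not Google's actual code): the programmer
# supplies the ranking formula, the users supply the data that makes it work.
from collections import Counter

def suggest(prefix, query_log, top_n=3):
    """Suggest completions for `prefix`, ranked by how often users typed them."""
    counts = Counter(q for q in query_log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(top_n)]

# The formula alone yields nothing ...
print(suggest("we", []))   # -> []

# ... only the users' aggregated behaviour fills it with results.
log = ["weather today", "weather forecast", "weather forecast", "web design"]
print(suggest("we", log))  # -> ['weather forecast', 'weather today', 'web design']
```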

After establishing this common ground we can ask: What can both sides do to regain balance?

Users – please start to be reflective about your use of digital technology. Think twice about the results you are given and try different queries. Programmers – please start explaining your software to us. Don’t make it ‘all mystical’. Get a humanist to translate your software into understandable words. And then, with this understanding, we might be able to rethink Bill Gates’ quote

“I think it’s fair to say that personal computers have become the most empowering tools we’ve ever created. They’re tools of communication, they’re tools of creativity, and they can be shaped by their user.”

into

I think it’s fair to say that personal computers have become the most empowering and intimidating machines ever created. They’re tools of communication, they’re tools of creativity, and they can be shaped by both – the programmer and the user.

Day 17 – Getting a writing coach

“In this age, confused by too much knowledge.” Søren Kierkegaard

Today I had a great start to the day – I met writing coach Thomas. Yesterday I had one of my “circling” moments, AGAIN: unable to proceed, stuck on one sentence for the whole day. This made me search for a writing coach. I always thought of myself as a good writer, but yesterday I realized I don’t have a lot of experience. So far I have written my master’s thesis, one journal article, and this blog – no more. Hmmm, maybe that is actually not a lot to draw on. So here goes, let’s get help! I googled around a little and found his great blog “Research as a second language” (if you’re a PhD student in the social sciences, I recommend reading it!).

His idea of making writing a “manageable task” appealed to me right away. And today he said something really smart: what you write will never reflect what you know, because you’re always further along in your head than you’re able to put on paper. You can only put on paper what you’re confident saying, and you can only say something with confidence if it’s been in your mind for a while. Therefore, new ideas will never be on paper – they are in our heads, and that’s where they belong. So let’s start putting on paper what we know!

Further, Thomas thinks of the PhD as a combination of paragraphs. Therefore I’m now on a 9-hour writing challenge: I’ll write an overview of my PhD within 9 hours over the next few days, using 18 paragraphs in total, with each paragraph containing a minimum of 6 sentences and a maximum of 200 words. I’ll start tomorrow and let you know how it goes. So far, it has given me lots and lots of motivation, that much I can say already. Thanks, Thomas!

Day 15 – “Make it a writing process”

Today we had one of the usual lunch conversations about what our PhDs are all about and whether they are understandable. While the argumentation seems so clear when I explain it orally, in the text it somehow doesn’t. It’s very fuzzy, it’s ungraspable. And with this, the usual piercing question comes up again: Is this good enough? Especially when getting critique from scientific readers with a very logical mindset, it’s hard to defend a somewhat more literary style of writing. However, as I believe both are valid, I got this great advice: “Make it a (re-)writing process.” This means that rather than going back into the whole analysis process, you might as well look at it as a lack of writing skills – a simpler problem – and tackle it from that angle. This prevents you from starting all over again. It might sound simple; however, this advice really made my day.

Day 13 – Consolidation

Today I found this really great picture (originally posted here) and thought to myself that it is a great illustration of the PhD process. Even though I’m hesitant about the order: I think in my particular case the step “I want to do it” came first, and unfortunately it’s not a linear but a recursive process.

[Image found here.]

However, where do I stand? I have finally worked my way through the problem statement and, with this, brought lots of light into the dark (word count 2810 – woohoooo). I think I am jumping back and forth between “I’ll try to do it” and “I can do it”. All in all, very good news 🙂 And what about you? Where do you stand in your PhD process?