ANALOGUE ALGORITHMIC PROFILING
IN COLLABORATION WITH MICHAEL STREBEL & HUGO FASSEN
This group-initiated project on algorithmic behaviour is a first attempt at raising and rekindling public consciousness and interest, encouraging proactive engagement and responsible decision-making, and thereby fostering a more trustworthy relationship between people, their surroundings and `the algorithm`.
My relationship with algorithms is complicated; I love them and hate them. On one hand, I use them every day and supply them (with a certain awareness) with my data and search history to benefit from. On the other hand, I want them gone every time they create one of those weird moments of serendipity between my personal thoughts, my online persona and my physical location. They seem harmless, but are they?
Algorithms are technologies, so they're neither good nor bad, nor are they neutral; they always follow a certain agenda. In our Western capitalist society of accumulation and greed, it’s difficult to imagine an internet corporation giving up data (power) it can simply take more of. And beyond corporations taking data, it's by now well known that we, the users, actually give it away freely. Who collects it? Where does it go? How is it processed? Why is it so secretive? Legitimate questions for us users to ask, no?
The frustration of being in the dark about how we're profiled by the algorithm, and how our data actually gets processed, drove us to try to create a more trustworthy relationship with it. We wanted to increase the public's grasp of the algorithm, so we looked for where the human element was already present and tried to interfere with it.
Introduction to an Iterative Process
To make sense of the complexity, we drew up a low-tech analogue prototype to test our ideas and assumptions about the inner workings of 'the algorithm' on the public.
We translated what we thought was the digital process of the algorithm into an analogue experience. Then we went out in public to expose and test the mechanism. We scanned for participants, asked them a series of questions to profile them in a very simplistic way, and then suggested an activity for them to do. Our aim was to give precise suggestions that the participants would receive as a pleasant surprise, without using the internet as a tool. At the end of the process, we added a final step that fundamentally challenges the way the algorithm functions in today's society: a simple, speculative tool that starts a conversation between the algorithm and the user, a question, “are you willing to share your data with us?”. The users could say no; if so, we assumed that a digital algorithm recording and analyzing this data could automatically assign it more value, since data that is shared less often is logically worth more than data that is shared freely.
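That last assumption, that scarcity of shared data translates into value, can be sketched as a simple inverse-frequency model. This is only an illustration of the idea, not anything the project actually implemented; all names and numbers below are hypothetical:

```python
from collections import Counter

def data_value(share_log, total_participants):
    """Assign each data type a value inversely proportional to
    how often participants agreed to share it: the rarer the
    consent, the more valuable that data type becomes."""
    counts = Counter(share_log)  # data type -> times it was shared
    return {
        dtype: total_participants / shared
        for dtype, shared in counts.items()
    }

# Hypothetical trial: 10 participants; 8 agreed to share their
# location, but only 2 agreed to share their browsing history.
log = ["location"] * 8 + ["history"] * 2
values = data_value(log, 10)
# Rarely shared history ends up valued higher than location.
```

In this sketch the rarely shared "history" data is valued four times higher than the freely shared "location" data, mirroring the speculative rule that refusal itself makes data more precious.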
The results of the first trials weren't insightful, because we could hardly attract any people to participate. We were on a busy street, in winter, without any visual "pizzazz". Yet as we kept improving the process by adapting our personal roles in the algorithm, the algorithm itself, our visuals and our location, we saw people getting more interested in 'the algorithm' through lengthier conversations. Despite that newfound interest, though, most participants still didn't mind handing the requested information over to us...