Algorithmic (Un)Fairness

Computer Science, 2018

Deliberate corruption of search results in order to maintain their wholesomeness and integrity. And to fight the Nazis searching for the wrong things in the first place.

In 2016, Microsoft put an AI chatbot (“Tay”) out into the wild, built to act like a 19-year-old girl and “learn” from talking to people on Twitter. After 16 hours and some 96,000 tweets of interaction with pranksters, Tay had proclaimed feminism “cancer” and declared that “Hitler was right”. The year before, Google Photos’ image recognition had labeled black people as “gorillas”; soon YouTube stood accused of “radicalising” young men away from social justice, and Word2Vec apparently contained “sexist bias” (a probe of which is sketched below).

Amid relentless accusations of staggering leftist big-tech bias (and of Trump’s “Fake News”), “problematic” algorithms needed to become “fair”, i.e. pushed further down the left side of the scale, meaning an epic level of nonsense yet again concealed under academic jargon. The result: a) bias in the opposite direction, and b) thousands of university activists/entryists at Google attempting to psychologically engineer users by showing them what the algorithm thinks they should look at rather than what they asked for. That outraged activists obsessed with power should fix their attention on the extraordinary, planet-breaking power and influence of Internet search is, of course, entirely coincidental.
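For what the Word2Vec complaint actually refers to: the much-cited finding (Bolukbasi et al., 2016, “Man is to Computer Programmer as Woman is to Homemaker?”) was that analogy arithmetic over pretrained word embeddings completes occupation analogies along gender-stereotyped lines. A minimal sketch, assuming gensim is installed and can fetch its pretrained Google News vectors; the probe words here are illustrative choices, not anything specified in this entry:

```python
# Analogy-arithmetic probe of pretrained Word2Vec embeddings.
# Assumes gensim can download the ~1.6 GB Google News model on first run.
import gensim.downloader as api

# 300-dimensional Word2Vec vectors trained on Google News.
model = api.load("word2vec-google-news-300")

# Sanity check of the arithmetic: king - man + woman ~= queen.
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The contested probe: the same arithmetic over occupation words tends
# to return gender-stereotyped neighbours (e.g. doctor -> nurse).
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```

Whether such completions demonstrate “bias” or merely reflect the statistics of the training corpus is, of course, exactly the point in dispute.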