Algorithmic (Un)Fairness
Computer Science, 2018
Deliberate corruption of search results in order to maintain their wholesomeness and integrity. And to fight the Nazis searching for the wrong things in the first place.
In 2016, Microsoft released an AI chatbot (“Tay”) into the wild, designed to act like a 19-year-old girl and “learn” from talking to people on Twitter. After 16 hours and some 96,000 tweets spent interacting with pranksters, Tay had proclaimed feminism a “cancer” and that “Hitler was right”. Around the same time, Google Photos’ image recognition was mislabelling black people as “gorillas”, YouTube was “radicalising” young men away from social justice, and Word2Vec apparently contained “sexist bias”. Amid relentless accusations of staggering leftist bias in big tech (and Trump’s cries of “Fake News”), “problematic” algorithms needed to become “fair” (i.e. nudged further down the left side of the scale), which meant yet another epic level of nonsense concealed under academic jargon. The result: a) bias in the opposite direction, and b) thousands of university activists/entryists at Google attempting to psychologically engineer people by showing them what it thinks they should look at rather than what they asked for. That outraged activists obsessed with power should fix their attention on the extraordinary, planet-breaking power and influence of Internet search is, of course, entirely coincidental.
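For the curious, the arXiv paper in the links below quantifies the alleged “sexist bias” by projecting word vectors onto a gender direction, then “debiases” by subtracting that projection. Below is a minimal sketch of the measurement, assuming gensim and its downloadable Google News word2vec model; the single he/she pair and the word list are illustrative simplifications, not the paper’s actual procedure (which builds the direction from several word pairs via PCA).

```python
# Minimal sketch of the "bias direction" measurement, assuming gensim is installed
# and its pretrained Google News word2vec model is available for download.
import numpy as np
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # large download on first use

# Unit vector pointing roughly from "she" towards "he" in embedding space.
gender_direction = model["he"] - model["she"]
gender_direction /= np.linalg.norm(gender_direction)

def gender_lean(word: str) -> float:
    """Cosine of the word's vector with the he-she direction: >0 leans 'he', <0 leans 'she'."""
    v = model[word]
    return float(np.dot(v / np.linalg.norm(v), gender_direction))

for w in ["programmer", "engineer", "nurse", "homemaker"]:
    print(f"{w:12s} {gender_lean(w):+.3f}")

# "Debiasing" then amounts to subtracting this projection from supposedly
# gender-neutral words, i.e. deliberately editing the geometry of the space.
```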
- https://books.google.com/ngrams/graph?&year_start=1950&year_end=2019&content=algorithmic%20fairness
- https://scholar.google.com/scholar?q=algorithmic%20fairness
- https://en.wikipedia.org/wiki/Tay_(bot)
- https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
- https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people
- https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate
- https://arxiv.org/pdf/1607.06520.pdf
- https://developers.google.com/machine-learning/fairness-overview
- https://google.com/search?q=straight+couples&tbm=isch