Facebook, Twitter and Google Did Not Create the Nazis

“Facebook and Google have become unvarnished reflections of how humans behave on the internet.”
So says Nick Statt in an insightful article for The Verge, which I recommend.
The article deals with the latest PR hit to the social platforms…the ease with which anti-Semites and haters of all races, colors and creeds can be targeted using the same ad tools that bring you soap and toothpaste advertisements:
ProPublica discovered last Thursday that Facebook’s ad tools could target racists and anti-Semites using the very information those users self-report. That initial report kicked off a series of experiments conducted by news organizations that found that Google’s search engine would not only let you place ads next to search results for hateful rhetoric, but its automated processes would even suggest similar, equally hateful search terms to sell ads against. Twitter was also caught up in the controversy, when its filtering mechanisms failed to prevent ads from targeting “Nazi” and the n-word, an issue the company inexplicably attributed to “a bug we have now fixed.” This week, Instagram converted a journalist’s post about a violent threat she received into an ad that it then served to the journalist’s contacts.
Now let’s be clear…unless you’re a believer in way-out conspiracy theories, you’ll probably agree that neither Facebook, Twitter, nor Google is really interested in promoting anti-Semitism to Nazi lookalikes, or the N-word to KKK fans.
The Verge article cited Facebook COO Sheryl Sandberg’s response to the backlash:
“We never intended or anticipated this functionality being used this way — and that is on us,” Sandberg wrote. “And we did not find it ourselves — and that is also on us.” Sandberg said that, as someone who is Jewish, the ability to target ads based on an affinity for Hitler made her “disgusted and disappointed.” In an attempt to rectify that oversight, Facebook is now increasing human moderation for its automated processes; improving enforcement of its ad guidelines to prevent targeting that uses attacks on race, ethnicity, gender, and religious affiliation; and creating a more robust user reporting mechanism to cut down on abuses.
So Facebook is doing something I have written about many times before. Simply put, they are turning to humans.
This is significant for many reasons.
At the very least, it makes one wonder about the real effectiveness of AI and about an algorithm’s inability to accurately define and stop hate. But can humans, at scale, actually help?
Harvard Law School Cyberlaw Clinic fellow and lawyer Kendra Albert is quoted in The Verge:
“These kinds of controversies will keep happening because the scale and expectations around how many employees are needed to oversee the content or ad programs is teeny compared to the number of ads being served. I think it’s true that often these companies could not have reached the scale that they reached without automating things that traditionally had a human in the loop.”
Now back to the opening thought of the article:
It’s not only that these ad systems are governed by algorithms, the software that is increasingly guided by artificial intelligence tools that automate systems in ways even their creators do not fully comprehend. It’s also that, because of their breadth and poor oversight, Facebook and Google have become unvarnished reflections of how humans behave on the internet. Containing and serving that entire spectrum of our interests, no matter how vile, is a feature, not a bug.
I take a different view.
In my opinion, the behavior manifested on Facebook, Google and elsewhere is not “a reflection of how humans behave on the Internet” but rather a reflection of how humans behave in the real world.
Digital channels did not create sharing, targeting, searching, or advertising for that matter. What they did was amplify our own behavior, tap into our needs and enable us to do the same things—good, bad and evil—that we have been doing for millennia…but on steroids.
Are we really in new territory here?
On one hand, as Statt writes:
Albert says that when new technology arrives on the scene, society is often forced to rethink previously unregulated behavior. This change often occurs after the fact, when we discover something is amiss. “The speed at which this tech is rolled out to the public can make it hard for society to keep up,” Albert adds. “When you’re trying to build as big as possible or as fast as possible, it’s easy for folks who are more skeptical or concerned [to] have issues they’re raising left by the wayside, not out of maliciousness but because, ‘Oh, we have to meet this ship date.’”
Clearly, regulation needs to follow here, as it does for all channels of message distribution, starting with postal mail. Accountability is critical, and frankly, sloughing it off by suggesting that no one ever thought this might happen seems a bit misplaced.
Statt is right; it will happen again:
Revelations such as those last week are bound to come up again, and there are likely few, if any, concrete solutions available to weed them out in a way that makes everyone happy. But the onus is on tech companies like Facebook and Google to improve. Both companies grew at astronomical pace through the novel combination of unprecedented reach and data collection, cemented through market dominance, with the low overhead of a largely automated system. The failure to anticipate these edge cases is a symptom of their insatiable quest for growth mixed with a lack of meaningful human oversight.
As I wrote last week, part of the problem is our continued viewing of these companies as tech giants. Imagine if The New York Times said, ‘We are not a media company, we are actually a tech company, because we use all forms of tech to get the news out. Ergo, we have no accountability for what we publish in any form.’ Hmmmm.
Yet the tech label is only part of the problem.
People create hate. People create terror. It is people who hijack our channels to spread their vitriol—not algorithms. The internet is only a reflection of our own behavior.
So while Facebook’s team needs to step up, so do all of us. We need better regulation, better education, and the personal accountability to watch what we say, what we share and what we post. It’s a tough call today, I know, but imperative nevertheless.
The article concludes with a quote from Eli Pariser, Upworthy founder and author of The Filter Bubble:
“You wake up one morning and you’re mayor of a city, and maybe you never wanted to be a mayor and people are asking, ‘Why does the water run here and not there,’ and ‘What are we going to do about trash pickup’…I don’t know that Facebook set out to have that role, but by virtue of being the place where the city was built, it’s now got some responsibility to sort those things out.”
Even if you wake up and you’re not the mayor (maybe you just live in the city), this topic demands that you be held accountable, too.
Listen to someone who knows:
“I call on people to be obsessed citizens, forever questioning and asking for accountability. That's the only chance we have today of a healthy and happy life.”
–Ai Weiwei
I clearly believe in “obsessed citizens.”
So, what do you think? Have Facebook and Google “become unvarnished reflections of how humans behave on the internet,” or is the Internet an unvarnished reflection of our sadder selves?
I leave it to you…just as technology does.
