This documentary consists mostly of thoughtful comments from early social media innovators and designers. To a person, they seem to be looking back, scratching their heads and thinking, "My God, what have I done?"
The story is this: Digital technology -- artificial intelligence, online search and social media -- seemed initially like a huge net benefit for humanity. Over time, the warts began to show, chiefly as the little machines became more sophisticated and their human owners remained, well, human.
The lead speaker is Tristan Harris, an ex-Googler who raised some of these issues while at the company; they drew a flutter of interest and then disappeared. He has since founded the Center for Humane Technology, which he hopes will have greater effect.
The film is mostly talk but includes intermittent vignettes of a teenager who cannot extricate himself from his attachment to his cellphone, while algorithms, personified as human characters, demand his attention and feed him attention-seeking narratives. The device is a bit awkward, but it does serve to break up the talking-head segments.
Here are some of the questions raised:
Addiction
"We have moved away from having a tool-based environment to having an addiction-based environment," says Harris, who means what he says.
A former Twitter executive says, "I had to write my own software to break my addiction to Reddit."
Another commenter: "Do you check your smartphone before you pee in the morning or while you're peeing in the morning? Those are the two choices."
Facebook/Instagram comes in for much of the criticism here.
It's worst for the young, says a professor who studies addiction: Facebook interactions "dig down deeper into the brain stem and take over kids' sense of self-worth and identity."
And for what?: "Hearts, likes, thumbs -- they're fake, brittle popularity," says Chamath Palihapitiya, a Facebook alum.
Jonathan Haidt, a social psychologist at NYU's Stern School of Business, notes that babies born in 1996 were the first children to have cellphones in middle school. Between 2010 and 2020, he says, suicides by pre-teen girls more than doubled.
(I believe this one, by the way. Two years ago in our state, a 12-year-old was harassed constantly by mean girls in her first year of middle school -- a sample text was "Why don't you kill yourself?" -- and was so affected by it that, at the end of the school year, she did kill herself. The state had a detailed law requiring reports to be filed and actions taken starting with the first incident, but that school's officials did nothing. The girl's mother arrived home from her latest plea for help to find her daughter dead.)
Haidt offers advice for parents on when and how much social media are appropriate for children.
What They Know and How They Use It
As a search engine gathers and sorts your internet traffic, it apparently forms conclusions about your personality, your favorite color, the foods you like to eat and the types of books you read (if you read), among other things. Other algorithms gather and slot your work history, your medical data, etc.
There are problems here, says Cathy O'Neil, a mathematician on a mission.
"Algorithms are opinions embedded in code," she says -- code whose validity is seldom challenged. (Her 2017 Ted talk, "The era of blind faith in big data must end," goes into some detail)
As the teaser notes at the top, you can get different results from identical internet searches, depending on where you live, your political affiliation or your previous search history. Effectively, these algorithms try to find "which rabbit hole is closest to your particular interest and then feed you more of the same." (A toy sketch of that loop follows the example below.)
This is one thing if your rabbit hole is golf courses in California or memoirs written by American ex-pats living in Japan.
It's another if you have an interest in news of the day, or if you live in a dictatorship or if an election is coming up soon.
An example: If you mistrust Antifa, you might have been convinced over the weekend that most of the fires in the American West were set deliberately by progressive extremists. But if you support Antifa, you might have been convinced that the 100 nights of protests in Portland, Ore., this summer were peaceful except on occasions when right-wing extremists came into town with flags and weapons. Sorta like CNN v. Fox, with the source of your news flying under the radar.
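A side note for the technically curious: the "more of the same" loop is simple enough to sketch in a few lines of code. This is my own toy illustration, not anything shown in the film -- the item names, topics and scoring rule are all invented, and real recommendation systems are vastly more elaborate:

from collections import Counter

# Toy catalog of topic-tagged items. Everything here is invented for
# illustration; none of it comes from the film or any real platform.
CATALOG = [
    ("golf-courses-california", "golf"),
    ("expat-memoirs-japan", "books"),
    ("were-the-fires-set-deliberately", "outrage"),
    ("were-the-protests-peaceful", "outrage"),
    ("local-weather-update", "news"),
]

def recommend(click_history, n=3):
    # Score each catalog item by how often the user has already clicked
    # its topic, then serve the highest scorers. Whatever you engaged
    # with before is what you are shown more of next.
    topic_counts = Counter(topic for _, topic in click_history)
    return sorted(CATALOG,
                  key=lambda item: topic_counts[item[1]],
                  reverse=True)[:n]

# A single early click tilts every later recommendation toward outrage.
history = [("were-the-fires-set-deliberately", "outrage")]
print(recommend(history))

That is the rabbit hole in miniature: one click narrows the feed, the narrower feed invites more of the same clicks, and the loop compounds.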
Another problem is that a steady diet of news-you-want-to-believe tends to convince you that all the reasonable people agree with you.
"If everyone is entitled to their own facts, there is no need (in anyone's mind) for compromise." says Harris.
The agreed-upon fix among the speakers in the film is that government regulation is needed. They call themselves optimists and maybe they're right, but I'm skeptical.
On the plus side, Facebook announced last week that it will not accept new political ads in the week before the November election.
Anyway, The Social Dilemma is a relatively brief 90 minutes and is worth a look. It raises questions we need to be asking ourselves. Why not watch it with a friend or relative, and discuss it in person or over the phone, instead of on social media?
Couldn't hurt.
Notes
Not mentioned here is the Silicon Valley-based Stanford Persuasive Technology Lab, formed in 1998 and still active. (No, the founder and head of the place is not named Darth Vader.) I would have liked to learn from the regretful early innovators, who mostly live and work nearby, what ethical considerations have been introduced into the lab's work over the last two decades.
-----
Neither does the film mention medical data. In late 2018, Google bought a huge bank of medical information from Ascension, a big healthcare company. My primary care doctor, whom I like, is employed by Ascension. This raised some hackles, and not just from me.
Early on, one federal investigator posited that perhaps the doctors owned their patients' blood reports and tissue samples and that these were not Ascension's to sell. My reaction was this: NO.
My medical history and test results were paid for by ME (through my health insurance). When I signed up with my doctor, I signed the usual HIPAA privacy statement. Why should Ascension or the doctor be able to share my personal data, with or without my name attached, without my permission? (And, again, I like the doctor.)
Why should Google -- effectively a large advertising company attached to a search engine -- be able to aggregate my data with those of thousands of other people to provide a data bank to be monetized, in any way, without my permission?