Authors: Xia Yi, Li Shan
The drama of the congressional hearings has barely ended, and now the exposure of a confidential document has pushed Facebook back into the spotlight:
This company uses AI to make money in a very particular way. Once the American public heard about it, the uproar started all over again.
A money-making tool
US media outlet The Intercept recently obtained a confidential Facebook document revealing an intimate service Facebook offers advertisers: loyalty prediction.
It is a service built on artificial intelligence.
So-called loyalty prediction is, in one sentence, predicting a user's future behavior and using that prediction as the basis for ad targeting.
On other platforms, advertisers target ads based on a user's current state. On Facebook, advertisers can target based on decisions the user has not yet made.
By predicting what choices its 2 billion users will face next, Facebook delivers "understanding you better than you understand yourself," helping advertisers "improve marketing efficiency."
For example, with the loyalty prediction service, Facebook could well say: dear Apple, dear Samsung, LG, and Huawei, our AI predicts that millions of iOS users have made up their minds to switch to Android. Would you like to target them precisely?
What you decide to do next is settled between the platform and the advertiser.
By comparison, Taobao pushing a wave of ads based on what sits in your shopping cart, or Google and Baidu serving ads based on your search terms, looks downright crude.
That is because the data Facebook holds is far more comprehensive than any other company's.
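To make the idea concrete, here is a toy sketch of what a "loyalty prediction" style scorer could look like. Every feature name, weight, and threshold below is a hypothetical illustration invented for this example; the document does not describe Facebook's actual model.

```python
# Toy "loyalty prediction" scorer. All feature names and weights are
# hypothetical illustrations, NOT Facebook's real model.

def churn_score(user: dict) -> float:
    """Estimate (0.0-1.0) how likely a user is to switch platforms soon."""
    weights = {
        "searched_competitor": 0.5,       # e.g. browsed Android phones
        "complained_about_device": 0.3,   # e.g. posted about battery issues
        "friend_recently_switched": 0.2,  # social-influence signal
    }
    score = sum(w for feat, w in weights.items() if user.get(feat))
    return min(score, 1.0)

users = [
    {"id": 1, "searched_competitor": True, "friend_recently_switched": True},
    {"id": 2, "complained_about_device": True},
    {"id": 3},
]
# The advertiser would buy access to the high-score segment, not the raw data.
segment = [u["id"] for u in users if churn_score(u) >= 0.5]
print(segment)  # → [1]
```

The point of the sketch is the business model described above: what is sold is membership in a predicted segment ("likely to switch"), not the underlying behavioral data itself.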
How Facebook uses AI to precisely target ads. Source: The Intercept
The document shows that Facebook's "loyalty prediction" draws on data such as: the user's location, device information, the Wi-Fi networks they connect to, the videos they watch, details about their friends and relatives, and more.
Even the content of your chats with friends can steer advertising. Senator Bill Nelson described his own experience: "When I chatted with a friend on Facebook yesterday, I mentioned that I liked a certain kind of chocolate. Then suddenly I began receiving all kinds of chocolate advertisements."
What the senator encountered is just the tip of the iceberg of Facebook's advertising services.
It is as if everything about you has already been seen by Facebook's AI.
This is the "AI knows you better than you know yourself" mentioned above.
If this capability were "generalized" to other uses, or fell into other hands, it would be genuinely frightening. No wonder the American public is in an uproar.
Facebook says that to protect user privacy, all data is aggregated and anonymized. And of course Facebook does not sell the data itself; what it sells, or rather rents out, is the "prediction" capability it builds by analyzing its own data.
With only third-hand data, Cambridge Analytica managed to micro-target political ads at individual voters. Facebook, which holds the complete information of 2 billion users, will find it all the easier to determine what your next phone will be.
Facing these doubts, Facebook tried to drag its peers in with it. In a statement, the company said:
Like many other advertising platforms, Facebook uses machine learning technology to show the right ads to the right people. We do not claim to know what people think, nor do we share personal information with advertisers.
Shield and scapegoat
Think AI can only be used to make money? Naive.
Facebook has recently opened up another big new use for AI: taking the blame.
Over nearly ten hours on Capitol Hill, Zuckerberg invoked artificial intelligence again and again, seeming to depend on it heavily, even to pin high hopes on it. In the end, though, it mostly served as his shield and his scapegoat.
Reduce hate speech? AI can handle it.
Terrorist recruitment information? AI can handle it.
Fake account? Or AI.
Russia interferes with elections? AI.
Racial advertising? AI.
Security issues? AI.
How can this problem be solved? Rely on AI.
Why hasn't the problem been solved yet? AI is not good enough.
On content moderation, Facebook's AI has kept trying, but it has never quite measured up.
Zuckerberg repeatedly mentioned at the hearing that Facebook's detection systems automatically removed 99% of "terrorist content" before anyone flagged it. Back in 2017, Facebook announced it was "experimenting" with using artificial intelligence to detect content that "may support terrorism."
That 99% of "terrorist content" is likely dominated by images and video. With trained machine-learning algorithms, image-recognition accuracy now even surpasses humans on specific datasets, and companies have deployed it at scale.
Far more content, however, comes in textual form.
And AI's ability to understand natural language has not even reached kindergarten level. On the nuances of human language, AI's recognition performance is worrying; telling real news from fake would take nothing short of strong AI.
Zuckerberg himself admitted that the error rate of the AI Facebook uses to identify hate speech is still far too high. Relying on AI to review content "may take five to ten years."
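A minimal sketch can show why naive text moderation has such a high error rate: a filter that matches phrases cannot tell producing a banned phrase apart from quoting, reporting, or paraphrasing it. The banned phrase below is a placeholder invented for illustration, not any real policy rule.

```python
# Minimal sketch of why keyword-based moderation is brittle.
# The banned phrase is a placeholder, not a real policy rule.

BANNED_PHRASES = {"buy illegal weapons"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any banned phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

print(is_flagged("Here is where to buy illegal weapons"))                     # flagged: intended hit
print(is_flagged("I reported a post telling people to buy illegal weapons"))  # flagged: false positive
print(is_flagged("Weapons are illegal to buy around here"))                   # missed: paraphrase slips through
```

Both failure modes, false positives on quotation and misses on paraphrase, are exactly the nuance problems the text above describes, and they are why human reviewers are still needed.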
By the end of the year, Facebook plans to hire 20,000 people for security and content review.
This repeated use of "artificial intelligence" as a line of defense has also drawn criticism from the media.
The term "artificial intelligence" is simply too broad. Defining it precisely would fill an entire textbook chapter; it covers everything from various kinds of automation to machine learning. An article in The Verge argued that the term gave Zuckerberg a convenient out in front of a room of laymen, who, after all, were unable to press him with in-depth questions.
Think the lawmakers don't understand technology? One of them pushed back.
Senator Gary Peters raised the issue of AI transparency: "You know, artificial intelligence is not without risk, and you must maintain a very high degree of transparency in how these algorithms are constructed."
Yes, your shield, artificial intelligence, is itself deeply problematic.
Zuckerberg conceded that this issue is very important, adding that Facebook has an entire AI ethics team working on it.
But even setting artificial intelligence aside, the understanding of "transparency" at Facebook and its peers basically amounts to "written ambiguously into the user agreement." That hardly seems to solve the problem.
As for the core of what Facebook is apologizing for this time around, namely user data being leaked to and abused by Cambridge Analytica, artificial intelligence can't help much.
And where artificial intelligence genuinely is needed, for all its potential it can still be skewed by problematic training data. Tendencies such as labeling Black people as likely criminals, or assuming women don't belong in science, have already surfaced in real AI applications.
Zuckerberg said at the hearing:
"Across the board, we have a responsibility to not just build tools, but to make sure those tools are used for good."
Build tools, and make sure those tools are used for good. I hope Zuckerberg and his colleagues truly believe that sentence, and remember it.
Finally, an Easter egg: after watching Zuckerberg's congressional hearing, many Americans remarked that he seems like a robot... and then people came out with a pile of spoof videos.
This is one of them.
"Speech module short-circuited..."
"Repairing, repairing..."