This year Facebook filed two very interesting patents in the US. One was a patent for emotion-recognition technology, which recognises human emotions through facial expressions and can therefore assess what mood we are in at any given time: happy or anxious, for example. This can be done either through a webcam or a phone camera. The technology is relatively straightforward. AI-driven algorithms analyse and decipher facial expressions, then match the duration and intensity of each expression with a corresponding emotion. Take contempt, for example. Measured on a scale from 0 to 100, an expression of contempt might register as a smirking smile, a furrowed brow and a wrinkled nose. An emotion can then be extrapolated from the data and linked to your dominant personality traits: openness, introversion or neuroticism, say.
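To make the idea concrete, here is a minimal sketch of how feature intensities on a 0–100 scale could be blended into emotion scores. The feature names, emotions and weights are invented for illustration; this is not Facebook's patented model.

```python
# Hypothetical expression-to-emotion matching, as a weighted blend of
# facial-feature intensities. All weights here are illustrative assumptions.

def emotion_scores(features):
    """Map facial-feature intensities (0-100) to rough emotion scores (0-100)."""
    weights = {
        "contempt": {"smirk": 0.5, "furrowed_brow": 0.3, "wrinkled_nose": 0.2},
        "happiness": {"smile": 0.8, "raised_cheeks": 0.2},
    }
    scores = {}
    for emotion, w in weights.items():
        # Missing features default to 0 intensity.
        scores[emotion] = sum(w[f] * features.get(f, 0) for f in w)
    return scores

frame = {"smirk": 80, "furrowed_brow": 60, "wrinkled_nose": 40, "smile": 10}
print(emotion_scores(frame))  # contempt scores far higher than happiness
```

A real system would derive these weights from training data rather than hand-coding them, but the principle is the same: expressions in, emotion scores out.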
The accuracy of the match may not be perfect, and it's always good to be sceptical about what is being claimed, but AI (artificial intelligence) learns exponentially and the technology keeps getting better; it is already much, much quicker than human intelligence.
Recently at Columbia University a competition was set up between human lawyers and their AI counterparts. Both read a series of non-disclosure agreements containing loopholes. The AI found 95% of them, compared with 88% for the humans. The human lawyers took 90 minutes to read the agreements; the AI took 22 seconds. More remarkably still, last year Google's AlphaZero beat Stockfish 8 at chess. Stockfish 8 is an open-source chess engine with access to centuries of human chess experience. Yet AlphaZero taught itself using machine-learning principles, free of human instruction, beating Stockfish 8 28 times and drawing the other 72 of their 100 games. It took AlphaZero four hours to teach itself chess independently. Four hours from blank slate to genius.
A common misconception about algorithms is that they can be easily controlled; in fact they can learn, change and run themselves, a process known as deep "neural" learning. In other words, they run on self-improving feedback loops. Much of this is positive, of course: solutions to collective problems like climate change that no human has yet thought of become more possible in the future. The social payoffs could be huge too. But what about the use of AI for more nefarious ends? What if, as Yuval Noah Harari suggests, AI becomes just another tool used by elites to consolidate their power even further in the 21st century? History teaches us that it isn't Luddite to ask this question, nor is it merely indulging in catastrophic thinking about the future. Rapidly evolving technology ending up in the hands of just a few mega-companies, unregulated and uncontrolled, should seriously concern us all.
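The "self-improving feedback loop" idea can be sketched in a few lines: the system measures its own error and uses that error to adjust itself, with no human tuning each step. The toy task below (learning the rule y = 3x by gradient descent) and the learning rate are illustrative assumptions, not a description of any production system.

```python
# Minimal self-improving feedback loop: measure error, adjust, repeat.
# The target rule (y = 3x) and learning rate are toy assumptions.

def train(steps=200, lr=0.01):
    w = 0.0                       # the system's current "knowledge"
    data = [(x, 3 * x) for x in range(1, 6)]
    for _ in range(steps):
        for x, y in data:
            error = w * x - y     # feedback: how wrong is the model right now?
            w -= lr * error * x   # self-adjustment driven by that feedback
    return w

print(round(train(), 3))  # converges toward 3.0 with no human intervention
```

Deep neural networks run this same loop over millions of parameters instead of one, which is why their behaviour can drift beyond what their designers explicitly specified.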