
I love this reflection even more than the title of the paper. https://t.co/0EgIQWDFOm
From @emilymbender, @timnitGebru, @mcmillan_majora, and Shmargaret Shmitchell's paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” https://t.co/avVti8u8zS
A must-read to understand the invisible way people can be influenced and AI manipulated. https://t.co/hyZ6xZrq3o
“The amount of compute used to train the largest deep learning models…has increased 300,000x in 6y, increasing at a far higher pace than Moore’s Law.”
From “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”
https://t.co/G1Wg57uAz1 https://t.co/9ZAjk1bigO
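A quick back-of-the-envelope check on that figure, as a minimal Python sketch (the 24-month Moore's Law doubling period is the conventional rough value and my assumption, not a number from the paper):

```python
import math

# The quoted figure: a 300,000x increase in training compute over 6 years.
growth_factor = 300_000
years = 6

# growth_factor = 2 ** number_of_doublings, so:
doublings = math.log2(growth_factor)           # ~18.2 doublings
doubling_time_months = years * 12 / doublings  # ~4.0 months per doubling

# Moore's Law: a doubling roughly every 24 months (conventional value, assumed).
moore_doublings = years * 12 / 24              # 3 doublings in 6 years

print(f"~{doublings:.1f} doublings, one every ~{doubling_time_months:.1f} months")
print(f"Moore's Law pace: {moore_doublings:.0f} doublings in {years} years, i.e. ~{2**moore_doublings:.0f}x")
```

That works out to a compute doubling roughly every four months, against a Moore's Law baseline of only about 8x over the same six years.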
Meet the people trying to replicate and open-source OpenAI’s GPT-3 https://t.co/AUEGkx77yF
“When Peter put his contact information online, it had an intended context of use. Unfortunately, applications built on top of GPT-2 are unaware of this context…”
Does GPT-2 Know Your Phone Number? https://t.co/4S1y9Xk9yh https://t.co/IAOl2EpcLl
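To make the risk concrete, here is a minimal sketch of the kind of memorization probe behind that work, assuming the Hugging Face transformers library. The probe prefix is hypothetical, and real extraction attacks rely on large-scale sampling plus membership heuristics rather than a handful of completions:

```python
# Sketch: probing the public GPT-2 checkpoint for memorized training text.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Peter W's contact details:"  # hypothetical probe prefix
inputs = tokenizer(prompt, return_tensors="pt")

# Sample several independent continuations; memorized strings tend to
# reappear verbatim across samples.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_k=40,
    max_new_tokens=50,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```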
There are multiple reasons to try this. One is assessing the impact of design choices on user productivity.
I’m glad to see that research on #AI is expanding beyond what the top cloud providers offer. There’s a universe of solutions that IT orgs should know about.