Episode 27 - Big Algorithm, Fat Tails, and Converging Priors
Today we dive into the current Bayesian flame wars on Twitter. Do Bayesian priors converge? As Nassim Taleb (@nntaleb) points out, not necessarily under a fat-tailed or power-law distribution. We'll talk about what that means, and the wonders worked by Bayes' rule even under some seemingly preposterous priors.
Also: the military wants to do machine learning with less data. Is the era of big data over, giving way to the era of the big algorithm? Plus the results of the Twitter shadow ban poll, QA bias, the Streisand effect, and the Alex Jones banning.
Military looking for algorithms that require less data
Where’s Waldo Finding Robot
Twitter is the only place that hasn’t removed Alex Jones
Oops! This one went out of date in about a day. We’ll follow up next week!
Tweet on Bayesian Priors that don’t have convergent posteriors
The idea behind “Bayesian” approaches is that if 2 pple have different priors, they will eventually converge to the same estimation, via updating. A “wrong” prior is therefore OK.
Under fat tails, if you have the wrong prior, you never get there. (Taleb & Pilpel, 2004)
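The convergence claim in the tweet is easy to see in a quick simulation. Below is a minimal sketch (our own illustration, not from the episode or the Taleb & Pilpel paper): two Bayesian agents start with very different Beta priors about a coin's bias, and after enough Bernoulli observations their posterior means land in essentially the same place. For contrast, the last few lines draw from a fat-tailed Pareto distribution (tail exponent just above 1), where a handful of extreme draws dominate the sample, which is the regime where updating from a wrong prior can fail to wash out.

```python
import random

random.seed(42)

# Two Bayesian agents with very different Beta priors for a coin's bias.
# Beta is conjugate to Bernoulli, so updating is just adding to the counts.
agent1 = [1.0, 1.0]    # Beta(1, 1): uniform, "I have no idea"
agent2 = [50.0, 2.0]   # Beta(50, 2): strongly believes the coin favors heads

true_p = 0.3
for _ in range(10_000):
    heads = random.random() < true_p
    for agent in (agent1, agent2):
        agent[0 if heads else 1] += 1

# Posterior mean of Beta(a, b) is a / (a + b).
mean1 = agent1[0] / (agent1[0] + agent1[1])
mean2 = agent2[0] / (agent2[0] + agent2[1])
# Both posterior means end up near the true 0.3 despite the "wrong" prior.
print(mean1, mean2)

# Contrast: draws from a fat-tailed Pareto distribution (alpha = 1.1,
# via inverse transform sampling). A few extreme values dominate the
# sample, so estimates built on it are far less stable.
alpha = 1.1
samples = [random.random() ** (-1.0 / alpha) for _ in range(10_000)]
print(max(samples), sum(samples) / len(samples))
```

Here the thin-tailed case converges because each new observation carries bounded information relative to what came before; under the Pareto draws, the next observation can still swamp everything seen so far.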
Video that mentions all the stats about the world getting better: