Multiple comparisons bias is a pervasive problem in quantitative analysis. It is simply the observation that the more tests you run, the more likely you are to get false positives: results that look like they confirm your hypothesis but are really just random chance. If you don't correct for this at some point, you are very likely to accept hypotheses that aren't based on any real relationship. p-Hacking is the abuse of this phenomenon, in which someone runs test after test until they find one specific configuration in which their tests pass. It can be intentional or inadvertent, but either way it happens. This lecture will introduce you to the concept, show some experimental examples, and discuss how to correct for it.
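As a minimal sketch of the idea (not the lecture's own code), the snippet below runs many two-sample tests on pure noise, where the null hypothesis is true every time, and counts how many come out "significant". The sample sizes, the number of tests, and the large-sample z-approximation for the p-value are all illustrative assumptions; the Bonferroni correction shown at the end is one standard way to adjust for multiple comparisons.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def two_sample_p(x, y):
    """Two-sided p-value from a large-sample z-test (normal approximation)."""
    z = (x.mean() - y.mean()) / math.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y)
    )
    # Normal CDF via erf; p = 2 * P(Z > |z|)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 100 independent tests, each comparing two samples drawn from the SAME
# distribution -- so every "significant" result is a false positive.
n_tests = 100
p_values = [
    two_sample_p(rng.normal(size=50), rng.normal(size=50))
    for _ in range(n_tests)
]

alpha = 0.05
uncorrected = sum(p < alpha for p in p_values)
# Bonferroni: divide alpha by the number of tests to control the
# family-wise error rate.
corrected = sum(p < alpha / n_tests for p in p_values)

print("Uncorrected 'significant' results:", uncorrected)
print("Bonferroni 'significant' results:", corrected)
```

At alpha = 0.05 you should expect roughly five false positives from 100 tests on noise alone, while the Bonferroni-corrected count is typically zero; that gap is exactly what makes uncorrected test batteries so easy to p-hack.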
All of our lectures can be found here: