I worked my way through Chapter 3 of Box, Hunter, and Hunter. Another good chapter, this one covers the assumptions that classical statistics absolutely requires, and what to do when you can't meet them. There is a nice table comparing the number of "significant" results you get from completely random sets of data with certain characteristics. A particular bugbear is autocorrelation, which occurs when a measurement is affected by another one nearby in time, a very common occurrence. Depending on whether the autocorrelation is positive or negative, you can get far more or far fewer significant results than you should. I'm going to take a stab at recreating this data analysis in R.
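Before digging into the book's exact setup, here's a minimal sketch of the kind of simulation I have in mind (the sample size, the phi values, and the hand-rolled AR(1) generator are my own choices, not taken from the book's table): simulate many random series with a given lag-1 autocorrelation, t-test each one against its true mean of zero, and count how often the result comes out "significant."

```r
set.seed(42)

# Simulate a length-n AR(1) series with lag-1 autocorrelation phi
# and a true mean of zero.
sim_ar1 <- function(n, phi) {
  x <- numeric(n)
  x[1] <- rnorm(1, sd = 1 / sqrt(1 - phi^2))  # draw from the stationary distribution
  for (i in 2:n) x[i] <- phi * x[i - 1] + rnorm(1)
  x
}

# Fraction of t-tests of H0: mean = 0 (which is true by construction)
# that come out "significant" at the 5% level.
pct_significant <- function(phi, n = 20, reps = 5000) {
  mean(replicate(reps, t.test(sim_ar1(n, phi), mu = 0)$p.value < 0.05))
}

# Independent data (phi = 0) should land near the nominal 5%;
# positive autocorrelation inflates the rate, negative deflates it.
sapply(c(-0.4, 0, 0.4), pct_significant)
```

If this matches the chapter's story, the phi = 0.4 column should come in well above 5% and the phi = -0.4 column well below it.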
I've mentioned this subject a couple of times before. Bruce Charlton is the first person I know of to discuss the implications of the change in simple reaction time data in the paper by Woodley et al. Charlton claims the data shows
Geoff Canyon has a post about Google's tricky interview questions. Microsoft is also known for asking these kinds of questions during interviews, and you can run into them anywhere in the technical world. Also known as Fermi problems or back-of-the-envelope calculations, they came up a lot for me during college because
A fun post exploring the differences between a number of statistical computation packages. As one of the commenters said, this is an awesome flame war! The comments are very informative, with all sorts of historical information explaining how and why certain packages turned out the way they did. I use