I worked my way through Chapter 3 of Box, Hunter, and Hunter. Another good chapter, this one covers the assumptions that classical statistics absolutely requires, and what to do when you can't meet them. There is a nice table comparing the number of "significant" results you get from completely random data sets with certain characteristics. A particular bugbear is autocorrelation, which occurs when a measurement is affected by another measurement nearby in time, a very common occurrence. Depending on whether the autocorrelation is positive or negative, you can get far more or far fewer significant results than you should. I'm going to take a stab at recreating this data analysis in R.
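Before I get to the R version, here is a minimal sketch of the effect in Python. It is my own toy setup, not the book's: I generate AR(1) series with true mean zero, run a naive one-sample t-test that wrongly assumes independent observations, and count how often it declares "significance" at the 5% level. The autocorrelation parameter `phi`, the sample size, and the simulation count are all assumptions I picked for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def false_positive_rate(phi, n=20, n_sims=4000, t_crit=2.093):
    """Fraction of simulated AR(1) series (true mean 0) that a naive
    one-sample t-test flags as 'significant' at the 5% level.
    t_crit is the two-sided 5% critical value for df = n - 1 = 19."""
    hits = 0
    for _ in range(n_sims):
        e = rng.standard_normal(n)          # independent noise
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]    # AR(1): each value leans on its neighbor
        # Naive t-statistic, pretending the observations are independent
        t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        if abs(t_stat) > t_crit:
            hits += 1
    return hits / n_sims

for phi in (-0.4, 0.0, 0.4):
    print(f"phi = {phi:+.1f}: false positive rate = {false_positive_rate(phi):.3f}")
```

With independent data (`phi = 0`) the rate lands near the nominal 5%; positive autocorrelation pushes it well above that, and negative autocorrelation pushes it well below, which matches the pattern in the book's table.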