Today I uploaded my Elementary Stats Guide. I made it from lecture notes taken during my first-semester graduate stats course at Northern Arizona University. As noted on the downloads page, this guide is frequentist, because the professor was a strict frequentist and the guide is geared towards helping the unwary survive a typical university stats course. Dr. Turek insisted that you interpret everything strictly, and graded your verbal expression of statistical conclusions. For example, a confidence interval does not tell you that the mean lies between the two specified values with 1 − α confidence. A confidence interval is a mathematical construction such that, if you constructed a large (infinite) number of such intervals, 100(1 − α)% of them would in fact contain the parameter of interest. Your particular confidence interval, however, is only one such interval, and so cannot be assumed to contain the parameter. How's that for clear?
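The coverage story above is easy to check by simulation. Here is a minimal sketch, with a made-up population mean and standard deviation chosen purely for illustration: build many 95% intervals (known-sigma case, to keep it simple) and count how often they happen to contain the true mean. The long-run fraction comes out near 0.95, but any single interval either contains the parameter or it doesn't.

```python
import random
import statistics
from statistics import NormalDist

random.seed(42)

TRUE_MEAN = 10.0   # hypothetical population mean (illustrative only)
TRUE_SD = 2.0      # hypothetical population standard deviation, assumed known
N = 30             # sample size per experiment
TRIALS = 10_000    # number of intervals to construct
ALPHA = 0.05       # for 95% confidence

z = NormalDist().inv_cdf(1 - ALPHA / 2)  # about 1.96

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    xbar = statistics.fmean(sample)
    se = TRUE_SD / N ** 0.5
    lo, hi = xbar - z * se, xbar + z * se
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"Coverage over {TRIALS} intervals: {covered / TRIALS:.3f}")
```

Running this prints a coverage fraction close to 0.95, which is exactly the claim the strict interpretation allows: the procedure has 95% coverage, not any one interval.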
Everyone in the world uses confidence intervals as if they actually were of some use in locating parameters, so it can be entertaining to watch the verbal gymnastics this generates. It also points towards a more sensible interpretation of probability. I read yesterday about a Bayesian alternative called a credible interval, which means exactly what you think it means: the parameter lies in the interval with the stated probability. I have not yet worked my way through the math, but I am very interested.
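To give a flavor of how a credible interval falls out of the math, here is a sketch under one convenient set of assumptions: a normal likelihood with known standard deviation and a normal prior on the mean. In that conjugate case the posterior is also normal, and the central 95% of the posterior is a 95% credible interval. All the numbers below are invented for illustration.

```python
from statistics import NormalDist

# Hypothetical setup, for illustration only.
prior_mean, prior_sd = 0.0, 10.0   # vague normal prior on the mean
sigma = 2.0                        # known data standard deviation
data = [9.2, 10.5, 9.8, 10.1, 10.9, 9.6]
n = len(data)
xbar = sum(data) / n

# Conjugate update: normal prior + normal likelihood -> normal posterior.
post_prec = 1 / prior_sd**2 + n / sigma**2          # posterior precision
post_var = 1 / post_prec
post_mean = post_var * (prior_mean / prior_sd**2 + n * xbar / sigma**2)

z = NormalDist().inv_cdf(0.975)
lo = post_mean - z * post_var**0.5
hi = post_mean + z * post_var**0.5
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

Here you really are allowed to say "the mean lies between lo and hi with 95% probability," because the interval is a statement about the posterior distribution, not about a long run of repeated experiments.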