The standard normal and the t-distribution form the basis of single sample tests. In this video, learn to apply these distributions in hypothesis tests about a population mean.

- [Instructor] Let's look at two important statistical tests, the z-test and the t-test. Locating a sample statistic in a sampling distribution and deciding about the null hypothesis is what constitutes a statistical test. The usual procedure is to transform the statistic into a score in a distribution that has a mean of zero and a standard deviation of one. The transformed value is the test statistic. When the Central Limit Theorem applies, use the standard normal distribution as the sampling distribution of the mean, and transform the sample statistic x-bar into a standard score z. To do that, you take the sample mean, subtract the hypothesized mean, and divide by the standard error of the mean. Locate the z-score in the standard normal distribution: if the z-score is in the rejection region, reject the null hypothesis; if it's not, do not reject the null hypothesis. If you can reject the null hypothesis, the results are said to be statistically significant. From an example we looked at before, a social scientist believes that people in the 86705 ZIP code have higher IQs than average. The null hypothesis is that Mu 86705 is less than or equal to 100. The alternative hypothesis is that Mu 86705 is greater than 100, and Alpha equals .05. In carrying out the study, the social scientist measures IQ in a sample of 256 people from the 86705 ZIP code. She finds that the average IQ in the sample is 102.5. Reject the null hypothesis? Well, the Central Limit Theorem applies: the sample is large, 256; the population is normally distributed, since it's IQ; and the population standard deviation is known to be 16. The sampling distribution of the mean is a normal distribution, so z is equal to x-bar minus Mu under the null hypothesis, divided by the standard error of the mean. That's 102.5 minus 100, with a denominator of 16 divided by the square root of 256, or 102.5 minus 100 divided by 16 over 16, which comes out to 2.5.
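The arithmetic just described can be sketched in a few lines of Python. This is not part of the video; it simply reproduces the calculation above, using the 1.65 critical value quoted for a right-tailed test at Alpha = .05:

```python
import math

# One-tailed z-test for the IQ example: H0: mu <= 100, H1: mu > 100.
x_bar = 102.5   # sample mean IQ
mu_0 = 100      # population mean under the null hypothesis
sigma = 16      # known population standard deviation
n = 256         # sample size

standard_error = sigma / math.sqrt(n)   # 16 / 16 = 1
z = (x_bar - mu_0) / standard_error     # 2.5 / 1 = 2.5

z_critical = 1.65   # cuts off .05 in the right tail of the standard normal
reject = z > z_critical
print(z, reject)    # 2.5 True
```

Because 2.5 falls beyond 1.65, the sample mean sits in the rejection region, matching the decision that follows.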
1.65 cuts off .05 of the area under the standard normal distribution. The rejection region is in the right-side tail of the distribution because the alternative hypothesis is that the mean is greater than 100. We locate 2.5, the z-score, in the sampling distribution. It's in the rejection region, so we reject the null hypothesis. This procedure is a z-test. But what about that other example that we looked at earlier? A social scientist believes that people in the 86705 ZIP code differ from the average IQ, but she doesn't know if their IQ will be higher or lower. So, the null hypothesis is that Mu in that ZIP code is equal to 100. The alternative hypothesis is that Mu in that ZIP code is not equal to 100. She measures IQ in a sample of 256 people from that ZIP code. She finds the average IQ in the sample is 102.5. Reject the null hypothesis? Once again, z equals 2.5. 1.96 cuts off .025 of the area under the distribution in the right tail; -1.96 cuts off .025 of the area under the distribution in the left tail. The rejection region is in both tails because the alternative hypothesis is that the population mean is not equal to 100. We locate 2.5, the z-score, in the sampling distribution. It's in the rejection region, so the decision is to reject the null hypothesis. Most of the time, though, the Central Limit Theorem doesn't apply. That's usually because we're dealing with a small sample and an unknown population standard deviation. Here's an example. A software designer has created a new user interface for a company's computers. He claims it's more user-friendly than the existing interface, and that users will make significantly fewer than 10 errors in a test of usability. A sample of 16 users makes an average of 8.8 errors with a standard deviation of three. With Alpha equals .05, is the designer correct? The null hypothesis is that Mu is greater than or equal to 10; the alternative hypothesis is that Mu is less than 10.
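Before working through the t-test, the two-tailed z-test above can be sketched the same way. The only change from the one-tailed sketch is that the decision compares the absolute value of z against the 1.96 cutoff quoted in the video (again, a sketch, not from the video itself):

```python
import math

# Two-tailed z-test: H0: mu = 100, H1: mu != 100.
x_bar, mu_0, sigma, n = 102.5, 100, 16, 256
z = (x_bar - mu_0) / (sigma / math.sqrt(n))   # 2.5, same as before

z_critical = 1.96   # cuts off .025 in each tail of the standard normal
reject = abs(z) > z_critical   # rejection region is in both tails
print(z, reject)    # 2.5 True
```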
Here we use a t-distribution with degrees of freedom equal to N minus one as the sampling distribution of the mean. We find that the value that cuts off Alpha in that t-distribution is -1.75 for degrees of freedom equal to 15. The rejection region is on the left, because the alternative hypothesis is that Mu is less than 10. The idea is to convert x-bar into a value you can locate in the t-distribution. If the value is in the rejection region, reject the null hypothesis. So, converting the sample mean to t, here's how you do it. t with N minus one degrees of freedom is the sample mean minus Mu under the null hypothesis, divided by the sample standard deviation divided by the square root of the sample size. In this case, that's t with 15 degrees of freedom equals 8.8 minus 10, with a denominator of three divided by the square root of 16, which is -1.2 divided by three-fourths, and that's equal to -1.6. The t-value converted from x-bar is -1.6. The decision is to not reject the null hypothesis, because the t is not in the rejection region. This procedure is a t-test. So, another example. A machine is calibrated to produce widgets with a width of seven inches. A sample of nine widgets has an average width of 7.8 inches with a standard deviation of one inch. Should the machine be recalibrated? The null hypothesis is that Mu equals seven. The alternative hypothesis is that Mu is not equal to seven, and Alpha is .05. Here, the t-distribution with eight degrees of freedom is the sampling distribution of the mean. We find the values that cut off Alpha in that t-distribution. With degrees of freedom equal to eight, the values are plus or minus 2.31. The rejection region is on both sides, because the alternative hypothesis is that Mu is not equal to seven. So, the idea is to convert x-bar into a value you can locate in the t-distribution. If the value is in the rejection region, reject the null hypothesis.
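The first t-test above, the interface example, can be sketched the same way as the z-test. This is a Python sketch, not from the video, using the -1.75 critical value quoted for 15 degrees of freedom:

```python
import math

# One-tailed (left-tail) t-test for the interface example:
# H0: mu >= 10, H1: mu < 10, with df = n - 1 = 15.
x_bar = 8.8   # sample mean number of errors
mu_0 = 10     # mean under the null hypothesis
s = 3         # sample standard deviation
n = 16        # sample size

t = (x_bar - mu_0) / (s / math.sqrt(n))   # -1.2 / 0.75 = -1.6
t_critical = -1.75   # cuts off .05 in the left tail for df = 15

reject = t < t_critical
print(round(t, 4), reject)   # -1.6 False
```

Since -1.6 does not fall below -1.75, the t-value is outside the rejection region and the null hypothesis stands, matching the decision above.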
And here, to convert that x-bar, once again, it's x-bar minus Mu divided by the standard deviation, which is in turn divided by the square root of the sample size. In this example, that's t with eight degrees of freedom equal to 7.8 minus seven, with a denominator of one divided by the square root of nine, or 0.8 divided by one-third, which is 2.4. Our t-value is 2.4, so we reject the null hypothesis, and we must recalibrate the machine. So, summing up: when the Central Limit Theorem applies, convert x-bar to a z-score and use the standard normal distribution as the sampling distribution. That's the z-test. Otherwise, convert x-bar to t and use the t-distribution with N-1 degrees of freedom as the sampling distribution. That's the t-test. If you can reject the null hypothesis, the results are said to be statistically significant.
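The widget example can be sketched with a small helper that computes the one-sample t statistic described above. This is a sketch, not from the video; the caller supplies the critical value quoted for the chosen Alpha and degrees of freedom:

```python
import math

def t_statistic(x_bar, mu_0, s, n):
    """One-sample t statistic with n - 1 degrees of freedom."""
    return (x_bar - mu_0) / (s / math.sqrt(n))

# Widget example: H0: mu = 7, H1: mu != 7, df = 8.
t = t_statistic(x_bar=7.8, mu_0=7, s=1, n=9)   # 0.8 / (1/3) = 2.4
t_critical = 2.31   # cuts off .025 in each tail for df = 8

reject = abs(t) > t_critical   # two-tailed rejection region
print(round(t, 4), reject)     # 2.4 True
```

Since 2.4 falls beyond 2.31, the decision is to reject the null hypothesis and recalibrate the machine, as above.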

###### Released

6/11/2019

- Explain how to calculate simple probability.
- Review the Excel statistical formulas for finding mean, median, and mode.
- Differentiate statistical nomenclature when calculating variance.
- Identify components when graphing frequency polygons.
- Explain how t-distributions operate.
- Describe the process of determining a chi-square.

Video: The z-test and the t-test