##### Frequently Asked Questions

##### Q. What is considered an adequate student participation rate to ensure valid results? What is the minimum response rate for the SETE scores to be valid?

A. Because the SETE scoring algorithm uses methods based on "small area estimation", scores can be computed from samples as small as n=2. However, score bias grows as sample sizes shrink. Initial Monte Carlo simulations that mimic the data characteristics of SETE responses suggest that sample sizes below n=10 can inflate the SETE scaled score bias to about 10% of a known population course mean (e.g., a population course mean initialized in the simulation).
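
The small-sample bias described above can be illustrated with a minimal Monte Carlo sketch. The shrinkage (empirical Bayes) estimator, the variance components, and the population mean of 800 below are illustrative assumptions, not SETE's actual parameters; the point is only that heavier shrinkage at small n pulls estimates for atypical courses toward the population mean, producing larger bias.

```python
import numpy as np

rng = np.random.default_rng(42)

POP_MEAN = 800.0   # known population course mean (assumed scale)
TAU2 = 900.0       # between-course variance (illustrative assumption)
SIGMA2 = 3600.0    # within-course (student) variance (illustrative assumption)
N_COURSES = 2000

def mean_bias_high_courses(n_per_course):
    """Average bias of a shrinkage estimate of the course mean,
    for courses more than one SD above the population mean."""
    # True course means drawn around the population mean.
    true_means = rng.normal(POP_MEAN, np.sqrt(TAU2), N_COURSES)
    # Observed course mean from n student responses per course.
    obs = rng.normal(true_means[:, None], np.sqrt(SIGMA2),
                     (N_COURSES, n_per_course)).mean(axis=1)
    # Empirical-Bayes shrinkage weight: tau^2 / (tau^2 + sigma^2 / n).
    # Small n -> small weight -> estimate pulled toward POP_MEAN.
    w = TAU2 / (TAU2 + SIGMA2 / n_per_course)
    est = w * obs + (1 - w) * POP_MEAN
    high = true_means > POP_MEAN + np.sqrt(TAU2)
    return float(np.mean(est[high] - true_means[high]))

bias_n2 = mean_bias_high_courses(2)    # heavy shrinkage, large (negative) bias
bias_n50 = mean_bias_high_courses(50)  # light shrinkage, small bias
```

With these assumed variance components, the n=2 estimates for above-average courses are pulled much further toward 800 than the n=50 estimates, mirroring the bias pattern the simulations describe.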

##### Q. Will SETE be just as accurate for high and low response rates? Has the margin of error been calculated for different sample sizes/response rates?

A. It is important not to confuse group confidence intervals with prediction interval estimates; they are similar, but not exactly the same. For the SETE course means, which are based on a one-way random effects ANOVA model (a so-called "multilevel" model), prediction interval estimates are essentially the equivalent of a standard error of measurement (SEM), with the course as the unit of observation. This SEM has been estimated to be between 10 and 12 for the current scaled scores, which are based on a population mean of 800 and the particular adjusted sampling weight scheme that is used. Although the SEM estimate itself does not depend on the course sample size, the bounds of the prediction intervals will be biased for courses with fewer than 10 responses. For example, initial simulation results suggest that 80% prediction intervals capture the true population course mean only about 70% of the time for such courses - coverage roughly 10 percentage points below the nominal 80%.
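
The under-coverage effect can be sketched in a few lines. The sketch below assumes hypothetical variance components and a fixed SEM of 11 (inside the 10-12 range quoted above); because the interval half-width does not vary with course size while the true estimation error does, fixed-width 80% intervals cover the true course mean less often for small courses. It is an illustration of the mechanism, not a reproduction of SETE's actual calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

POP_MEAN = 800.0   # scaled-score population mean
TAU2 = 900.0       # between-course variance (illustrative assumption)
SIGMA2 = 3600.0    # within-course variance (illustrative assumption)
Z80 = 1.2816       # two-sided z value for a nominal 80% interval
N_COURSES = 20000

def coverage(n_per_course, sem=11.0):
    """Empirical coverage of fixed-width 80% intervals, est +/- Z80*sem."""
    true = rng.normal(POP_MEAN, np.sqrt(TAU2), N_COURSES)
    obs = rng.normal(true, np.sqrt(SIGMA2 / n_per_course))
    # Shrunken course-mean estimate from the random-effects model.
    w = TAU2 / (TAU2 + SIGMA2 / n_per_course)
    est = w * obs + (1 - w) * POP_MEAN
    half_width = Z80 * sem  # does NOT depend on course sample size
    return float(np.mean(np.abs(est - true) < half_width))

cov_small = coverage(2)    # small course: actual error exceeds fixed SEM
cov_large = coverage(50)   # large course: fixed SEM is adequate or generous
```

Under these assumptions, the n=2 intervals cover well below the nominal 80%, while the n=50 intervals do not, which is the qualitative pattern the simulation results describe.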