The most important products of applied statistical work are statistical inferences, e.g., hypothesis tests and confidence regions. Hence, the statistician is primarily concerned with ensuring that statistical inferences are correct with respect to their stated degree of uncertainty, e.g., type I error rate or coverage probability. Conventionally, asymptotic arguments, most notably the central limit theorem, are used to bolster the assumed correctness of statistical inferences. This presentation examines several examples where this paradigm falters and argues that it may be wholly unnecessary. Potential empirical alternatives are presented that directly verify the correctness of statistical inferences, under assumptions that are usually no stronger than those associated with asymptotic results.
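The abstract does not specify the empirical verification method, but the general idea of checking coverage probability directly can be illustrated with a simple Monte Carlo simulation. The sketch below (an assumed example, not the presenter's procedure) estimates the actual coverage of a nominal 95% normal-approximation confidence interval for the mean of a skewed population at a small sample size, where asymptotic justification can falter:

```python
import math
import random

def empirical_coverage(n=10, reps=20000, seed=0):
    """Estimate, by simulation, the coverage of the nominal 95%
    normal-approximation CI for the mean of an Exponential(1)
    population (true mean = 1). This is an illustrative check of
    how well the asymptotic interval performs at small n."""
    rng = random.Random(seed)
    z = 1.959963984540054  # 97.5% standard normal quantile
    true_mean = 1.0
    hits = 0
    for _ in range(reps):
        x = [rng.expovariate(1.0) for _ in range(n)]
        m = sum(x) / n
        # sample standard deviation (n - 1 denominator)
        s = math.sqrt(sum((xi - m) ** 2 for xi in x) / (n - 1))
        half = z * s / math.sqrt(n)
        if m - half <= true_mean <= m + half:
            hits += 1
    return hits / reps

print(empirical_coverage())
```

For a heavily skewed population and n = 10, the estimated coverage typically falls noticeably below the nominal 95%, which is exactly the kind of discrepancy a direct empirical check can reveal without relying on asymptotic theory.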