F-test vs t-test: what is the real difference?


























As I've learnt, a t-test is used to compare two populations' means, whereas an F-test (ANOVA) is used to compare the variances of two or more populations.



In the end, are these doing the same thing?



My background is in biology, without strong training in math or statistics. I ask because whenever I use ANOVA (comparing more than two groups) followed by a post hoc Tukey test and do not observe significant differences, my supervisor asks me to run multiple t-tests instead. Is this an acceptable way of doing statistics?



I see that there are many publications in biology that do not follow the statistics taught in textbooks.


































    Not necessarily. To use a (pooled) t-test you need an assumption of equal variances, so you may use an F-test to determine whether the equal-variance assumption holds. In this case, an F-test is a precursor to a t-test.
    – gd1035
    Nov 25 at 1:21
















statistics normal-distribution statistical-inference variance






edited Nov 25 at 1:27

























asked Nov 25 at 0:58 by Oncidium


















2 Answers
































The appropriateness of the statistical test depends on the research hypothesis. If, as you suggest in your question, the research hypothesis is that there is a difference in means between at least two groups when there are strictly more than two groups to be compared, then the $F$-test arising from ANOVA is an appropriate test under additional assumptions, because the null hypothesis would be $$H_0 : \mu_1 = \mu_2 = \cdots = \mu_k$$ where $k > 2$ represents the number of groups, and $\mu_i$ is the true mean of group $i$. A level $\alpha$ test would control Type I error for the alternative (research) hypothesis. But the result of such a test would not formally tell you which groups differ from each other in a pairwise sense; thus the need for the Tukey post hoc test, or you could use pairwise $t$ tests with multiplicity correction.
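To make the omnibus test concrete, here is a minimal pure-Python sketch of the one-way ANOVA $F$ statistic (the function name `anova_f` is my own; this is an illustration added for readers, not part of the original answer):

```python
import statistics

def anova_f(*groups):
    """One-way ANOVA F statistic for H0: mu_1 = mu_2 = ... = mu_k."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean
    ss_within = 0.0
    for g in groups:
        mg = statistics.fmean(g)
        ss_within += sum((x - mg) ** 2 for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5])  # F on (k-1, n-k) = (2, 6) degrees of freedom
```

A large $F$ says the group means vary more than the within-group noise would suggest, but, as noted above, it does not say which pairs differ.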



As an illustration of the importance of the research hypothesis, if you have a control group against which different treatments are compared, you could use Dunnett's test instead of ANOVA, as the only comparisons of interest are the treatments against control, not treatments against each other.



The central issue underlying the subsequent identification of statistically significant pairwise differences after an omnibus test is that of multiple comparisons: e.g., even with as few as $4$ groups, you would have $\binom{4}{2} = 6$ pairwise comparisons, and the Type I error would be inflated without a multiplicity correction such as the Bonferroni adjustment.
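The inflation is easy to compute. A quick sketch (my own illustration, under the optimistic assumption that the $6$ tests are independent):

```python
import math

alpha = 0.05
m = math.comb(4, 2)           # 6 pairwise comparisons among 4 groups
# Familywise Type I error if each test is run uncorrected at level alpha
# (independence is assumed here purely for illustration)
fwer = 1 - (1 - alpha) ** m   # about 0.265, far above the nominal 0.05
alpha_bonferroni = alpha / m  # Bonferroni-adjusted per-test level, about 0.0083
```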



To simply do pairwise tests before ANOVA would, in my opinion, be ill-advised from the perspective of statistical rigor, although as I have implied, it is not the more serious methodological flaw. It may be useful for exploratory purposes, but adjustment for multiple comparisons is absolutely necessary in order to make inferential claims that could withstand scrutiny.



One final note: a "$t$ test" does not require an assumption of equal variances; the Welch $t$ test (using the Satterthwaite estimate of the degrees of freedom) is one way to address the issue of unequal group variances, and the test statistic is compared to a Student $t$ distribution, so I would consider that a $t$ test.
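A minimal sketch of the Welch statistic with the Satterthwaite degrees of freedom (my own illustration; `welch_t` is a hypothetical helper name):

```python
import math
import statistics

def welch_t(x, y):
    """Welch t statistic and Satterthwaite degrees of freedom.

    Unlike the pooled two-sample t-test, this does not assume equal variances.
    """
    nx, ny = len(x), len(y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances
    se2 = vx / nx + vy / ny               # squared standard error of the mean difference
    t = (mx - my) / math.sqrt(se2)
    # Welch–Satterthwaite approximation to the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4], [2, 4, 6, 8])
```

The statistic is then compared to a Student $t$ distribution with (generally non-integer) `df` degrees of freedom.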






answered Nov 25 at 3:08 by heropup











































    A t-test is a univariate hypothesis test applied when the population standard deviation is unknown and the sample size is small. The t-statistic follows a Student t-distribution under the null hypothesis. You use this test to compare the means of two populations. As @gd1035 mentioned, the pooled t-test assumes equal variances, which you could first check using an F-test.



    The F-test, on the other hand, is a statistical test of the equality of the variances of two normal populations. The F-statistic follows an F-distribution under the null hypothesis. You use this test to compare two population variances.
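As an illustration of that variance-ratio statistic (a sketch of my own; `variance_ratio_f` is a hypothetical name, and the common convention of putting the larger sample variance in the numerator is assumed):

```python
import statistics

def variance_ratio_f(x, y):
    """F statistic for H0: the two normal populations have equal variances."""
    vx, vy = statistics.variance(x), statistics.variance(y)  # sample variances
    # Put the larger variance in the numerator so F >= 1
    if vx >= vy:
        return vx / vy, (len(x) - 1, len(y) - 1)
    return vy / vx, (len(y) - 1, len(x) - 1)

f_stat, (df1, df2) = variance_ratio_f([2, 4, 6, 8], [1, 2, 3, 4])
# sample variances are 20/3 and 5/3, so f_stat is 4.0 with (3, 3) degrees of freedom
```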






    answered Nov 25 at 2:34 by bob




















