Interpretation of R output from Cohen's Kappa


























I have the following result from carrying out Cohen's kappa in R



library(irr)

n <- 100
o <- c(rep(0, n), rep(1, n))                     # rater "o": 100 zeros followed by 100 ones
p <- c(rbinom(n, 1, 0.5), rbinom(n, 1, 0.51))    # rater "p": independent Bernoulli draws
k <- kappa2(data.frame(p, o), "unweighted")      # unweighted Cohen's kappa for the 2 raters
k


Which outputs



 Cohen's Kappa for 2 Raters (Weights: unweighted)

Subjects = 200
Raters = 2
Kappa = -0.08

z = -1.13
p-value = 0.258


My interpretation of this:

The test shows that there seems to be disagreement between the two vectors, since kappa is negative. However, given the p-value of 0.258, we can't say that this disagreement is significant; it may just be down to chance.




If someone could point out anything I'm missing from this interpretation, that would be appreciated.










hypothesis-testing model-comparison agreement-statistics association-measure cohens-kappa

asked Apr 19 at 14:08, edited Apr 19 at 17:32 – baxx








  • Please use seeded random data (set.seed()) so we get a reproducible example. Also, try other package implementations such as DescTools::CohenKappa(), which gives lower and upper confidence intervals that may be more meaningful for deciding whether you can conclude there was no agreement/disagreement. – smci, 21 hours ago
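A minimal sketch of what this comment suggests (assuming the DescTools package is installed; the seed value is arbitrary and only there for reproducibility):

library(irr)
library(DescTools)

set.seed(42)                                     # arbitrary seed, purely for reproducibility
n <- 100
o <- c(rep(0, n), rep(1, n))
p <- c(rbinom(n, 1, 0.5), rbinom(n, 1, 0.51))

kappa2(data.frame(p, o), "unweighted")           # irr: kappa, z and p-value
CohenKappa(p, o, conf.level = 0.95)              # DescTools: kappa with a 95% confidence interval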




















1 Answer

From the perspective of an applied analyst:



First, note that disagreement means that when rater A says 1, rater B says 0; it is like how a Pearson correlation of -1 denotes a strong, albeit negative, relationship. The actual null hypothesis here is that what rater A says has no relation to what rater B says.
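To make that concrete, here is a rough simulation sketch (category counts and seed chosen arbitrarily): if the two raters are independent, kappa simply scatters around zero, so an estimate like -0.08 is unremarkable.

set.seed(123)
kappas <- replicate(2000, {
  a <- rbinom(200, 1, 0.5)                                  # rater A, independent of B
  b <- rbinom(200, 1, 0.5)                                  # rater B
  po <- mean(a == b)                                        # observed agreement
  pe <- mean(a) * mean(b) + (1 - mean(a)) * (1 - mean(b))   # chance agreement from the marginals
  (po - pe) / (1 - pe)                                      # Cohen's kappa
})
summary(kappas)                                             # centred near 0 under the null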



I wouldn't make vague yet absolute declarations such as "there seems to be disagreement" (or rather, "there seems to be no agreement"). That is not an appropriate summary of the data without significant background and context. If we had that background and context (such as in a discussion section), we could contribute a nuanced synthesis of the result, pointing to improvements or reasons for disagreement, etc.



To interpret the results:




  • report the percentage agreement, and note whether any one category was more prevalent (a case where % agreement may be high but $\kappa$ may be low); see the sketch after this list

  • state the kappa statistic and its confidence interval

  • I often question the worth of a p-value where the null hypothesis is a stupid case of "no agreement", but you can quote the p-value and say that the data did not provide evidence that the raters agree.
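A rough sketch of the first two bullets, reusing the p and o vectors from the question's code (and assuming the DescTools package for the interval):

mean(p == o)                                     # percentage agreement between the two raters
table(o); table(p)                               # prevalence of each category for each rater
DescTools::CohenKappa(p, o, conf.level = 0.95)   # kappa and its 95% confidence interval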






answered Apr 19 at 14:30 by AdamO; edited 19 hours ago by smci













