The coding comparison query compares coding by two users to measure 'inter-rater reliability', the degree of coding agreement between them. Agreement is measured by two statistical methods:

- Percentage agreement: the number of content units on which the coders agree (to code or not to code), divided by the total number of units, reported as a percentage.
- Cohen's kappa coefficient: a statistical measure that takes into account the amount of agreement expected by chance, expressed as a decimal in the range –1 to 1 (values at or below 0 indicate no agreement; 1 indicates perfect agreement).
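As a rough illustration of the two measures (a minimal sketch, not NVivo's own implementation), the following computes percentage agreement and Cohen's kappa from two coders' per-unit decisions; the coder lists are hypothetical data:

```python
def percentage_agreement(coder_a, coder_b):
    """Share of content units on which both coders made the same
    decision (to code or not to code), as a percentage."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: probability both say 'coded' plus probability
    # both say 'not coded', treating the coders' decisions as independent.
    pa_yes = sum(coder_a) / n
    pb_yes = sum(coder_b) / n
    p_e = pa_yes * pb_yes + (1 - pa_yes) * (1 - pb_yes)
    if p_e == 1.0:  # degenerate case: chance agreement is total
        return 1.0 if p_o == 1.0 else 0.0
    return (p_o - p_e) / (1 - p_e)

# 1 = unit coded, 0 = unit not coded, one entry per content unit.
coder_a = [1, 1, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 0, 1, 1, 0]
print(f"Percentage agreement: {percentage_agreement(coder_a, coder_b):.1f}%")  # 75.0%
print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # 0.50
```

Here the coders agree on 6 of 8 units (75%), but because half of that agreement would be expected by chance alone, kappa reports the lower value of 0.50.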
You can select specific codes (or relationships or sentiments) or cases for comparison, in selected data files, datasets, externals and/or memos. You can also select groups of codes or files by identifying the folders, static sets or classifications they belong to.

Comparisons are standardly made between individual coders, but it is also possible to compare coding between groups of users. If comparing groups, content is taken to be coded if at least one member of the group coded it. Coders are identified by their NVivo user profiles.

Both text coding and region coding can be compared; these are treated separately, producing separate results. All coding in documents, datasets, externals, memos, codes and cases is text coding.
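The group rule above amounts to a logical OR across members before the comparison runs. As a sketch under that assumption (the function name and data are hypothetical, not an NVivo API):

```python
def group_coding(member_codings):
    """Collapse several members' per-unit decisions (lists of 0/1) into one
    group-level decision per unit: coded if ANY member coded the unit."""
    return [int(any(unit)) for unit in zip(*member_codings)]

group_1 = group_coding([[1, 0, 0, 1],
                        [0, 0, 1, 1]])  # -> [1, 0, 1, 1]
```

The resulting group-level lists can then be compared exactly as two individual coders would be, for example with the agreement functions sketched earlier.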