Krippendorff's Alpha Calculation
Explain how Datasaur implements the Krippendorff's Alpha algorithm.
Krippendorff's Alpha is one of the algorithms supported by Datasaur to calculate agreement while taking into account the possibility of chance agreement. In this page, we will deep dive into how Datasaur collects all labels from labelers and reviewers in a project and processes them into an Inter-annotator Agreement matrix.
Suppose there are 2 labelers and 1 reviewer (Labeler A, Labeler B, and Reviewer) who labeled the same spans. Labeler A's work is visualized in Image 1, Labeler B's work in Image 2, and the Reviewer's work in Image 3.
In this section, we will walk through the detailed calculation between Labeler A and the Reviewer.
First, we need to arrange the sample data into Table 1 for better visualization.
Table 1. Sample Data
| Span | Labeler A | Reviewer |
|---|---|---|
| The Tragedy of Hamlet | EVE | TITLE |
| Prince of Denmark | PER | |
| Hamlet | PER | PER |
| William Shakespeare | PER | PER |
| 1599 | YEAR | YEAR |
| 1601 | YEAR | YEAR |
| Shakespeare | ORG | PER |
| 30557 | QTY | |
Second, we need to remove spans that only have one label, i.e., Prince of Denmark and 30557. They must be removed because a span labeled by only one annotator would make the per-span denominator in Formula (7) zero and introduce a calculation error. The calculation result will still show the agreement level between the two annotators. The cleaned data is shown in Table 2.
Table 2. Cleaned Data
| Span | Labeler A | Reviewer |
|---|---|---|
| The Tragedy of Hamlet | EVE | TITLE |
| Hamlet | PER | PER |
| William Shakespeare | PER | PER |
| 1599 | YEAR | YEAR |
| 1601 | YEAR | YEAR |
| Shakespeare | ORG | PER |
Third, we need to create an agreement table based on the cleaned data. The table is visualized in Table 3.
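Reconstructed from the cleaned data in Table 2, the agreement table counts how many annotators assigned each label to each span:

| Span | EVE | TITLE | PER | YEAR | ORG |
|---|---|---|---|---|---|
| The Tragedy of Hamlet | 1 | 1 | 0 | 0 | 0 |
| Hamlet | 0 | 0 | 2 | 0 | 0 |
| William Shakespeare | 0 | 0 | 2 | 0 | 0 |
| 1599 | 0 | 0 | 0 | 2 | 0 |
| 1601 | 0 | 0 | 0 | 2 | 0 |
| Shakespeare | 0 | 0 | 1 | 0 | 1 |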
Based on the table, several values are calculated: $n$, $r_i$, $q$, $r_{ik}$, $r_k$, $r$, and $\bar{r}$.
$n$ is the total spans in the data. Here, $n = 6$ because there are 6 spans.
$r_i$ is the total labels that span $i$ has. Here, $r_i = 2$ for every span because each span is labeled by two annotators.
$q$ is the total number of labels. Here, $q = 5$ because there are 5 labels: EVE, TITLE, PER, YEAR, and ORG.
$r_{ik}$ is the number of label $k$ in span $i$, i.e. the value of a single cell in the agreement table. Here is the calculation result: for example, $r_{1,\text{EVE}} = 1$ because only one annotator labeled span 1 as EVE.
$$r_k = \sum_{i=1}^{n} r_{ik}$$
where:
- $r_k$ is the total of label $k$ in the data.
- $n$ is the total spans in the data.
- $r_{ik}$ is the number of label $k$ in span $i$.

Here is the calculation result: $r_{\text{EVE}} = 1$, $r_{\text{TITLE}} = 1$, $r_{\text{PER}} = 5$, $r_{\text{YEAR}} = 4$, and $r_{\text{ORG}} = 1$.
$$r = \sum_{i=1}^{n} r_i$$
where:
- $r$ is the total labels in the data.
- $n$ is the total spans in the data.
- $r_i$ is the total labels that span $i$ has.

Here is the calculation result: $r = 2 + 2 + 2 + 2 + 2 + 2 = 12$.
$$\bar{r} = \frac{r}{n}$$
where:
- $\bar{r}$ is the average number of labels per span.
- $r$ is the total labels in the data.
- $n$ is the total spans in the data.

Here is the calculation result: $\bar{r} = \frac{12}{6} = 2$.
Fourth, we need a weight function to weight the labels. Every label is treated equally because one label is no different from another. Hence, the weight function that will be used is stated in Formula 5.
$$w_{kl} = \begin{cases} 1 & \text{if } k = l \\ 0 & \text{if } k \neq l \end{cases} \tag{5}$$
where:
- $w_{kl}$ is the weight between label $k$ and label $l$.
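With the 5 labels in the cleaned data (ordered EVE, TITLE, PER, YEAR, ORG, an ordering chosen here only for illustration), Formula (5) amounts to the identity weight matrix:
$$W = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$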
Fifth, the observed weighted percent agreement is calculated.
We will start by calculating the weighted number of label $k$ in span $i$ using Formula (6).
$$r^\star_{ik} = \sum_{l=1}^{q} w_{kl}\, r_{il} \tag{6}$$
where:
- $r^\star_{ik}$ is the weighted number of label $k$ in span $i$.
- $q$ is the total number of labels.
- $w_{kl}$ is the weight between label $k$ and label $l$.
- $r_{il}$ is the number of label $l$ in span $i$.
For example, we can apply Formula (6) to calculate the weighted EVE label in span 1:
$$r^\star_{1,\text{EVE}} = 1 \cdot 1 + 0 \cdot 1 + 0 \cdot 0 + 0 \cdot 0 + 0 \cdot 0 = 1$$
We need to calculate all of the span and label combinations. The complete calculation result is visualized in Table 4. Because identity weights are used, every weighted count $r^\star_{ik}$ equals the raw count $r_{ik}$.
After we get the weighted number of labels, we need to calculate the agreement percentage for a single span and label using Formula (7).
$$p_{ik} = \frac{r_{ik}\,(r^\star_{ik} - 1)}{\bar{r}\,(r_i - 1)} \tag{7}$$
where:
- $p_{ik}$ is the agreement percentage of label $k$ in span $i$.
- $r_{ik}$ is the number of label $k$ in span $i$.
- $r^\star_{ik}$ is the weighted number of label $k$ in span $i$.
- $\bar{r}$ is the average number of labels per span.
- $r_i$ is the total labels that span $i$ has.
For example, we can apply Formula (7) to calculate the agreement percentage of the EVE label in span 1:
$$p_{1,\text{EVE}} = \frac{1 \cdot (1 - 1)}{2 \cdot (2 - 1)} = 0$$
We need to calculate all of the span and label combinations. The complete calculation result is visualized in Table 5.
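Recomputing Formula (7) for every span and label gives:

| Span | EVE | TITLE | PER | YEAR | ORG |
|---|---|---|---|---|---|
| The Tragedy of Hamlet | 0 | 0 | 0 | 0 | 0 |
| Hamlet | 0 | 0 | 1 | 0 | 0 |
| William Shakespeare | 0 | 0 | 1 | 0 | 0 |
| 1599 | 0 | 0 | 0 | 1 | 0 |
| 1601 | 0 | 0 | 0 | 1 | 0 |
| Shakespeare | 0 | 0 | 0 | 0 | 0 |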
We can simplify the result by getting the agreement percentage of a single span using Formula (8).
$$p_i = \sum_{k=1}^{q} p_{ik} \tag{8}$$
where:
- $p_i$ is the agreement percentage of span $i$.
- $q$ is the total number of labels.
- $p_{ik}$ is the agreement percentage of label $k$ in span $i$.
For example, we can apply Formula (8) to calculate the agreement percentage of span 1:
$$p_1 = 0 + 0 + 0 + 0 + 0 = 0$$
We need to calculate the agreement percentage of all spans. The complete calculation result is visualized in Table 6.
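Recomputing Formula (8) for every span gives:

| Span | $p_i$ |
|---|---|
| The Tragedy of Hamlet | 0 |
| Hamlet | 1 |
| William Shakespeare | 1 |
| 1599 | 1 |
| 1601 | 1 |
| Shakespeare | 0 |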
From the previous calculation, we can calculate the average agreement percentage using Formula (9).
$$p_a' = \frac{1}{n} \sum_{i=1}^{n} p_i \tag{9}$$
where:
- $p_a'$ is the average agreement percentage.
- $n$ is the total spans in the data.
- $p_i$ is the agreement percentage of span $i$.
We can apply Formula (9) to calculate the average agreement percentage:
$$p_a' = \frac{0 + 1 + 1 + 1 + 1 + 0}{6} = \frac{4}{6} \approx 0.6667$$
Finally, the observed weighted percent agreement is calculated using Formula (10).
$$p_a = \left(1 - \frac{1}{n\bar{r}}\right) p_a' + \frac{1}{n\bar{r}} \tag{10}$$
where:
- $p_a$ is the observed weighted percent agreement.
- $p_a'$ is the average agreement percentage.
- $n$ is the total spans in the data.
- $\bar{r}$ is the average number of labels per span.
We can apply Formula (10) to calculate the observed weighted percent agreement:
$$p_a = \left(1 - \frac{1}{6 \cdot 2}\right) \cdot \frac{4}{6} + \frac{1}{6 \cdot 2} = \frac{25}{36} \approx 0.6944$$
Sixth, the chance weighted percent agreement is calculated.
We start by calculating the classification probability for each label using Formula (11).
$$\pi_k = \frac{r_k}{r} \tag{11}$$
where:
- $\pi_k$ is the classification probability for label $k$.
- $r_k$ is the total of label $k$ in the data.
- $r$ is the total labels in the data.

Here is the calculation result: $\pi_{\text{EVE}} = \frac{1}{12}$, $\pi_{\text{TITLE}} = \frac{1}{12}$, $\pi_{\text{PER}} = \frac{5}{12}$, $\pi_{\text{YEAR}} = \frac{4}{12}$, and $\pi_{\text{ORG}} = \frac{1}{12}$.
To calculate the chance weighted percent agreement, Formula (11) can be applied to Formula (12). Because identity weights are used, Formula (12) reduces to a sum of squared classification probabilities.
$$p_e = \sum_{k=1}^{q} \pi_k^2 \tag{12}$$
where:
- $p_e$ is the chance weighted percent agreement.
- $q$ is the total number of labels.
- $\pi_k$ is the classification probability for label $k$.

Here is the chance weighted percent agreement calculation:
$$p_e = \left(\frac{1}{12}\right)^2 + \left(\frac{1}{12}\right)^2 + \left(\frac{5}{12}\right)^2 + \left(\frac{4}{12}\right)^2 + \left(\frac{1}{12}\right)^2 = \frac{44}{144} = \frac{11}{36} \approx 0.3056$$
Finally, Krippendorff's alpha is calculated using Formula (13).
$$\alpha = \frac{p_a - p_e}{1 - p_e} \tag{13}$$
where:
- $\alpha$ is the Krippendorff's alpha between Labeler A and Reviewer.
- $p_a$ is the observed weighted percent agreement.
- $p_e$ is the chance weighted percent agreement.
We can get the $\alpha$ by applying $p_a$ and $p_e$ to Formula (13):
$$\alpha = \frac{\frac{25}{36} - \frac{11}{36}}{1 - \frac{11}{36}} = \frac{14}{25} = 0.56$$
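To tie the steps together, here is a minimal Python sketch of the calculation above. It is not Datasaur's actual implementation; the function name and data layout are illustrative, and it assumes nominal labels with identity weights:

```python
from collections import Counter

def krippendorff_alpha(annotations, labels):
    """Krippendorff's alpha for nominal labels with identity weights,
    following Formulas (5)-(13) above.

    annotations: one list per span, containing the labels assigned by
                 the annotators who labeled that span.
    labels:      all label categories in the project.
    """
    # Remove spans labeled by only one annotator (the Table 2 step);
    # such spans would make the denominator r_i - 1 zero in Formula (7).
    spans = [a for a in annotations if len(a) >= 2]

    n = len(spans)                          # total spans
    counts = [Counter(a) for a in spans]    # r_ik per span
    r_i = [len(a) for a in spans]           # labels per span
    r = sum(r_i)                            # total labels in the data
    r_bar = r / n                           # average labels per span

    # Formulas (5)-(9): with identity weights, r*_ik == r_ik, so the
    # per-span agreement percentage p_i sums r_ik * (r_ik - 1) terms.
    p_i = [
        sum(c[k] * (c[k] - 1) for k in labels) / (r_bar * (ri - 1))
        for c, ri in zip(counts, r_i)
    ]
    p_a_prime = sum(p_i) / n

    # Formula (10): observed weighted percent agreement.
    eps = 1 / (n * r_bar)
    p_a = (1 - eps) * p_a_prime + eps

    # Formulas (11)-(12): chance weighted percent agreement.
    pi_k = {k: sum(c[k] for c in counts) / r for k in labels}
    p_e = sum(p * p for p in pi_k.values())

    # Formula (13): Krippendorff's alpha.
    return (p_a - p_e) / (1 - p_e)

# Sample data from Table 1 (Labeler A vs. Reviewer).
data = [
    ["EVE", "TITLE"],   # The Tragedy of Hamlet
    ["PER"],            # Prince of Denmark (single label, removed)
    ["PER", "PER"],     # Hamlet
    ["PER", "PER"],     # William Shakespeare
    ["YEAR", "YEAR"],   # 1599
    ["YEAR", "YEAR"],   # 1601
    ["ORG", "PER"],     # Shakespeare
    ["QTY"],            # 30557 (single label, removed)
]
# QTY is listed but contributes nothing after its span is removed.
labels = ["EVE", "TITLE", "PER", "YEAR", "ORG", "QTY"]

print(krippendorff_alpha(data, labels))  # ≈ 0.56
```

Running the sketch on the sample data reproduces $\alpha = 0.56$ for Labeler A and the Reviewer.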
We apply the same calculation for the agreement between the labelers, and between the reviewer and each labeler.
Missing labels from a single labeler will be removed.
The percentage of chance agreement will vary depending on:
- The number of labels in a project.
- The number of label options.
When both labelers agree but the reviewer rejects the labels:
- The agreement between the two labelers increases.
- The agreement between the labelers and the reviewer decreases.