Krippendorff's Alpha is one of the algorithms supported by Datasaur to calculate agreement while taking the possibility of chance agreement into account. We will take a deep dive into how Datasaur collects all labels from labelers and reviewers in a project and processes them into an Inter-Annotator Agreement matrix.
Sample Data
Suppose there are two labelers and one reviewer (Labeler A, Labeler B, and Reviewer) who labeled the same spans. Labeler A's work is visualized in Image 1, Labeler B's work in Image 2, and the Reviewer's work in Image 3.
Calculating the Agreement
In this section, we will walk through the calculation between Labeler A and the Reviewer in detail.
1. Arranging the data
First, we need to arrange the sample data into Table 1 for better visualization.
Table 1. Sample Data
2. Cleaning the data
Second, we need to remove spans that only have one label, i.e. Prince of Denmark and 30557. They must be removed because a span with a single label introduces a calculation error (the denominator of Formula (7) below becomes zero when a span has only one label). The calculation result will still show the agreement level between the two annotators. The cleaned data is shown in Table 2.
Table 2. Cleaned Data
3. Creating the agreement table
Third, we need to create an agreement table based on the cleaned data. The table is visualized in Table 3.
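To make the rest of the walkthrough easier to follow, the agreement table can also be expressed in code. Below is a minimal, hypothetical NumPy sketch of Table 3, not Datasaur's actual implementation; the variable names (`labels`, `counts`, `n`, `m`) are ours, and the cell values are the per-span label counts worked out in the steps below.

```python
import numpy as np

# Rows are the 6 cleaned spans; columns are the label options EVE, ORG, PER, TITLE, YEAR.
# counts[i, k] is how many annotators (Labeler A or the Reviewer) gave label k to span i.
labels = ["EVE", "ORG", "PER", "TITLE", "YEAR"]
counts = np.array([
    [1, 0, 0, 1, 0],  # span 1: one annotator chose EVE, the other chose TITLE
    [0, 0, 2, 0, 0],  # span 2: both chose PER
    [0, 0, 2, 0, 0],  # span 3: both chose PER
    [0, 0, 0, 0, 2],  # span 4: both chose YEAR
    [0, 0, 0, 0, 2],  # span 5: both chose YEAR
    [0, 1, 1, 0, 0],  # span 6: one annotator chose ORG, the other chose PER
])
n, m = counts.shape  # n = 6 spans, m = 5 label options
```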
Based on the table, five values are calculated: n, r_i, r_k, r, and r'.
Total spans in the data
n is the total number of spans in the data.
Here, n=6 because there are 6 spans.
Total labels in each span
r_i = \sum_{k=1}^{m} r_{ik} \quad (1)
r_i is the total number of labels that span i has.
m is the total number of labels.
Here, m = 5 because there are 5 labels.
r_{ik} is the number of label k in span i.
Here is the calculation result.
r_1 = r_{1,EVE} + r_{1,ORG} + r_{1,PER} + r_{1,TITLE} + r_{1,YEAR} = 1 + 0 + 0 + 1 + 0 = 2
r_2 = r_{2,EVE} + r_{2,ORG} + r_{2,PER} + r_{2,TITLE} + r_{2,YEAR} = 0 + 0 + 2 + 0 + 0 = 2
r_3 = r_{3,EVE} + r_{3,ORG} + r_{3,PER} + r_{3,TITLE} + r_{3,YEAR} = 0 + 0 + 2 + 0 + 0 = 2
r_4 = r_{4,EVE} + r_{4,ORG} + r_{4,PER} + r_{4,TITLE} + r_{4,YEAR} = 0 + 0 + 0 + 0 + 2 = 2
r_5 = r_{5,EVE} + r_{5,ORG} + r_{5,PER} + r_{5,TITLE} + r_{5,YEAR} = 0 + 0 + 0 + 0 + 2 = 2
r_6 = r_{6,EVE} + r_{6,ORG} + r_{6,PER} + r_{6,TITLE} + r_{6,YEAR} = 0 + 1 + 1 + 0 + 0 = 2
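Continuing the hypothetical NumPy sketch above, Formula (1) is simply a row sum of the count matrix:

```python
# Formula (1): r_i = sum over labels k of counts[i, k] -> one total per span
r_i = counts.sum(axis=1)
print(r_i)  # [2 2 2 2 2 2]
```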
Total of each label
r_k = \sum_{i=1}^{n} r_{ik} \quad (2)
r_k is the total count of label k in the data.
n is the total number of spans in the data.
r_{ik} is the number of label k in span i.
Here is the calculation result.
r_{EVE} = r_{1,EVE} + r_{2,EVE} + r_{3,EVE} + r_{4,EVE} + r_{5,EVE} + r_{6,EVE} = 1 + 0 + 0 + 0 + 0 + 0 = 1
r_{ORG} = r_{1,ORG} + r_{2,ORG} + r_{3,ORG} + r_{4,ORG} + r_{5,ORG} + r_{6,ORG} = 0 + 0 + 0 + 0 + 0 + 1 = 1
r_{PER} = r_{1,PER} + r_{2,PER} + r_{3,PER} + r_{4,PER} + r_{5,PER} + r_{6,PER} = 0 + 2 + 2 + 0 + 0 + 1 = 5
r_{TITLE} = r_{1,TITLE} + r_{2,TITLE} + r_{3,TITLE} + r_{4,TITLE} + r_{5,TITLE} + r_{6,TITLE} = 1 + 0 + 0 + 0 + 0 + 0 = 1
r_{YEAR} = r_{1,YEAR} + r_{2,YEAR} + r_{3,YEAR} + r_{4,YEAR} + r_{5,YEAR} + r_{6,YEAR} = 0 + 0 + 0 + 2 + 2 + 0 = 4
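In the same sketch, Formula (2) is a column sum:

```python
# Formula (2): r_k = sum over spans i of counts[i, k] -> one total per label
r_k = counts.sum(axis=0)
print(dict(zip(labels, r_k.tolist())))  # {'EVE': 1, 'ORG': 1, 'PER': 5, 'TITLE': 1, 'YEAR': 4}
```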
Total labels in the data
r = \sum_{i=1}^{n} r_i \quad (3)
r is the total number of labels in the data.
n is the total number of spans in the data.
r_i is the total number of labels that span i has.
Here is the calculation result.
r = r_1 + r_2 + r_3 + r_4 + r_5 + r_6 = 12
Average number of labels per span
r' = \frac{r}{n} \quad (4)
r' is the average number of labels per span.
n is the total number of spans in the data.
Here is the calculation result.
r' = \frac{r}{n} = \frac{12}{6} = 2
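Formulas (3) and (4) follow directly from the row sums in the sketch:

```python
# Formula (3): r = total number of labels in the data
r = int(r_i.sum())   # 12
# Formula (4): r' = average number of labels per span
r_prime = r / n      # 2.0
```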
4. Choosing weight function
Fourth, we need a weight function to weight the labels. Every label is treated equally because no label matters more than another, so identical labels get weight 1 and different labels get weight 0. This weight function is stated in Formula (5).
w_{kl} = \begin{cases} 1 & \text{if } k = l \\ 0 & \text{if } k \neq l \end{cases} \quad (5)
w_{kl} is the weight between label k and label l.
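In the sketch, this identity weighting is just an m × m identity matrix:

```python
# Formula (5): identity weights -- a label only agrees with itself
weights = np.eye(m)  # weights[k, l] = 1 if k == l else 0
```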
5. Calculating Pa
Fifth, the observed weighted percent agreement is calculated.
Weighted number of labels
We will start by calculating the weighted number of labels using Formula (6).
r_{ik}^{+} = \sum_{l=1}^{m} w_{kl} r_{il} \quad (6)
r_{ik}^+ is the weighted number of label k in span i.
m is the total number of labels.
w_{kl} is the weight between label k and label l.
r_{il} is the number of label l in span i.
For example, we can apply Formula (6) to calculate the weighted number of the EVE label in span 1.
r_{1,EVE}^{+} = \sum_{l=1}^{5} w_{EVE,l} r_{1,l} = 1 \times 1 + 0 \times 0 + 0 \times 0 + 0 \times 1 + 0 \times 0 = 1
We need to calculate all span and label combinations. The complete calculation result is visualized in Table 4.
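In the sketch, Formula (6) is a matrix product between the counts and the weights; with identity weights the weighted counts simply equal the raw counts:

```python
# Formula (6): r_ik_plus[i, k] = sum over l of weights[k, l] * counts[i, l]
r_ik_plus = counts @ weights.T
print(r_ik_plus[0])  # weighted counts for span 1: [1. 0. 0. 1. 0.]
```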
Agreement percentage
After obtaining the weighted number of labels, we need to calculate the agreement percentage for a single span and label using Formula (7).
p_{a|ik} = \frac{r_{ik}(r_{ik}^{+} - 1)}{r'(r_i - 1)} \quad (7)
p_{a|ik} is the agreement percentage of label k in span i.
r_{ik} is the number of label k in span i.
r_{ik}^+ is the weighted number of label k in span i.
r' is the average number of labels per span.
r_i is the total number of labels that span i has.
For example, we can apply Formula (7) to calculate the agreement percentage of the EVE label in span 1.
p_{a|1,EVE} = \frac{r_{1,EVE}(r_{1,EVE}^{+} - 1)}{r'(r_1 - 1)} = \frac{1(1 - 1)}{2(2 - 1)} = 0
We need to calculate all span and label combinations. The complete calculation result is visualized in Table 5.
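Continuing the sketch, Formula (7) can be computed for every span and label at once:

```python
# Formula (7): p_a_ik[i, k] = r_ik * (r_ik_plus - 1) / (r' * (r_i - 1))
p_a_ik = counts * (r_ik_plus - 1) / (r_prime * (r_i - 1))[:, None]
# Every entry of p_a_ik[0] equals 0 because the two annotators disagreed on span 1.
```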
Agreement percentage of a single span
We can summarize the result by calculating the agreement percentage of a single span using Formula (8).
p_{a|i} = \sum_{k=1}^{m} p_{a|ik} \quad (8)
p_{a|i} is the agreement percentage of span i.
m is the total number of labels.
p_{a|ik} is the agreement percentage of label k in span i.
For example, we can apply Formula (8) to calculate the agreement percentage of span 1.
p_{a|1} = \sum_{k=1}^{5} p_{a|1,k} = 0 + 0 + 0 + 0 + 0 = 0
We need to calculate the agreement percentage of all spans. The complete calculation result is visualized in Table 6.
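In the sketch, Formula (8) is again a row sum:

```python
# Formula (8): agreement percentage of each span = sum over its labels
p_a_i = p_a_ik.sum(axis=1)
print(p_a_i)  # [0. 1. 1. 1. 1. 0.]
```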
Average agreement percentage
From the previous calculation, we can calculate the average agreement percentage using Formula (9).
p_a' = \frac{1}{n} \sum_{i=1}^{n} p_{a|i} \quad (9)
p_a' is the average agreement percentage.
n is the total number of spans in the data.
p_{a|i} is the agreement percentage of span i.
We can apply Formula (9) to calculate the average agreement percentage.
p_a' = \frac{1}{6} \sum_{i=1}^{6} p_{a|i} = \frac{1}{6}(0 + 1 + 1 + 1 + 1 + 0) = 0.6666
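The same value falls out of the sketch by averaging the per-span agreement percentages:

```python
# Formula (9): average agreement percentage across spans
p_a_prime = p_a_i.mean()
print(p_a_prime)  # 0.6666666666666666 (shown as 0.6666 in the text above)
```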
Calculating Pa
Finally, the observed weighted percent agreement is calculated using Formula (10).
p_a = p_a' \left(1 - \frac{1}{n r'}\right) + \frac{1}{n r'} \quad (10)
p_a is the observed weighted percent agreement.
p_a' is the average agreement percentage.
n is the total number of spans in the data.
r' is the average number of labels per span.
We can apply Formula (10) to calculate the observed weighted percent agreement.
p_a = p_a' \left(1 - \frac{1}{n r'}\right) + \frac{1}{n r'} = 0.6666 \left(1 - \frac{1}{6 \times 2}\right) + \frac{1}{6 \times 2} = 0.6944
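In the sketch, Formula (10) adjusts the average agreement with the 1/(n r') correction term:

```python
# Formula (10): p_a = p_a' * (1 - 1/(n * r')) + 1/(n * r')
correction = 1 / (n * r_prime)
p_a = p_a_prime * (1 - correction) + correction
print(round(p_a, 4))  # 0.6944
```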
6. Calculating Pe
Sixth, the chance weighted percent agreement is calculated.
Classification probability
We start by calculating the classification probability for each label using Formula (11).
\pi_k = \frac{r_k}{r} \quad (11)
π_k is the classification probability for label k.
r_k is the total count of label k in the data.
r is the total number of labels in the data.
Here is the calculation result.
\pi_{EVE} = \frac{r_{EVE}}{r} = \frac{1}{12} = 0.0833
\pi_{ORG} = \frac{r_{ORG}}{r} = \frac{1}{12} = 0.0833
\pi_{PER} = \frac{r_{PER}}{r} = \frac{5}{12} = 0.4166
\pi_{TITLE} = \frac{r_{TITLE}}{r} = \frac{1}{12} = 0.0833
\pi_{YEAR} = \frac{r_{YEAR}}{r} = \frac{4}{12} = 0.3333
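In the sketch, Formula (11) divides the per-label totals by the overall total:

```python
# Formula (11): classification probability of each label in the pooled data
pi_k = r_k / r
print(dict(zip(labels, pi_k.round(4).tolist())))
# EVE = 1/12, ORG = 1/12, PER = 5/12, TITLE = 1/12, YEAR = 4/12
```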
Calculating Pe
To calculate the chance weighted percent agreement, the results of Formula (11) are applied to Formula (12).
p_e = \sum_{k=1}^{m} \pi_k^2 \quad (12)
p_e is the chance weighted percent agreement.
m is the total number of labels.
π_k is the classification probability for label k.
Here is the chance weighted percent agreement calculation.
p_e = \sum_{k=1}^{m} \pi_k^2
p_e = \pi_{EVE}^2 + \pi_{ORG}^2 + \pi_{PER}^2 + \pi_{TITLE}^2 + \pi_{YEAR}^2
p_e = 0.0833^2 + 0.0833^2 + 0.4166^2 + 0.0833^2 + 0.3333^2
p_e = 0.3055
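The sketch computes Formula (12) as a sum of squared probabilities:

```python
# Formula (12): chance agreement = sum of squared label probabilities
p_e = float((pi_k ** 2).sum())
print(round(p_e, 4))  # about 0.3056 (the text above truncates intermediate digits to 0.3055)
```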
7. Calculating the Alpha
Finally, Krippendorff's alpha is calculated using Formula (13).
\alpha = \frac{p_a - p_e}{1 - p_e} \quad (13)
α is the Krippendorff's alpha between Labeler A and the Reviewer.
p_a is the observed weighted percent agreement.
p_e is the chance weighted percent agreement.
We can get α by applying p_a and p_e to Formula (13).
\alpha = \frac{p_a - p_e}{1 - p_e} = \frac{0.6944 - 0.3055}{1 - 0.3055} = 0.56
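Finally, Formula (13) in the sketch reproduces the same value:

```python
# Formula (13): Krippendorff's alpha between Labeler A and the Reviewer
alpha = (p_a - p_e) / (1 - p_e)
print(round(alpha, 2))  # 0.56
```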
Summary
We apply the same calculation for the agreement between labelers, and between the reviewer and the labelers.
Missing labels from a single labeler will be removed.
The percentage of chance agreement will vary depending on:
The number of labels in a project.
The number of label options.
When both labelers agree but the reviewer rejects the labels:
The agreement between the two labelers increases.
The agreement between the labelers and the reviewer decreases.