REMINDER: This article is not a tutorial on applying and using statistics; it is a functional guide to using the CESP documents themselves.


The Attribute MSA is a standard Attribute Agreement Analysis that covers misclassification and the accuracy of attribute agreement.


The document contains instructions as needed.


Figure 1 - Attribute MSA Steps 1-3

Step 1

  • Number of appraisers is the number of people who will be appraising items in the study (recommended: 3 people).
  • Number of items is the number of unique items being used in the study (recommended: minimum of 10 parts, maximum of 50).
  • Number of replicates is the number of times each item will be examined by each person (minimum of 2 replicates).
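Together, these three Step 1 settings determine how many individual appraisals the study will record. A minimal Python sketch (not part of the CESP document itself) using the recommended values:

```python
# Step 1 settings (recommended/minimum values from the instructions above)
appraisers = 3   # recommended number of appraisers
items = 10       # recommended minimum number of unique items
replicates = 2   # minimum number of replicates per appraiser per item

# Every appraiser sees every item on every replicate
total_appraisals = appraisers * items * replicates
print(total_appraisals)  # 60 individual results to record
```

Keeping this total in mind helps when checking that no measurements have been missed at the validation stage.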

Step 2

  • Value of Good item is the label the user wishes to use for an OK/Good part, such as Pass, Good, or OK.
  • Value of Bad item is the label the user wishes to use for a Not OK/Bad part, such as Fail, Bad, NOK, or Defect.

Step 3

  • Random Order Selection allows you to select between five pre-determined randomised run orders for your trials. Select any of them for the first trial, then select a different random order on each repeat trial. The document is built this way to avoid the need for macros.
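Conceptually, each pre-determined run order is just a reproducible shuffle of the part IDs. A hypothetical Python sketch of the idea (the actual orders are fixed inside the spreadsheet, not generated; the seeds here are illustrative):

```python
import random

# Part IDs 1..10, matching a 10-item study
items = list(range(1, 11))

def run_order(seed):
    """Return one reproducible randomised order of the part IDs."""
    rng = random.Random(seed)        # seed stands in for a pre-determined order
    order = items.copy()
    rng.shuffle(order)
    return order

trial_1 = run_order(seed=1)
trial_2 = run_order(seed=2)          # choose a different order for a repeat trial
```

Whatever the order, it always contains each part exactly once, so every item is appraised on every trial.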


Figure 2 - Attribute MSA Steps 4 to 5+


This step requires the known Standard to be entered against each of the items in the study. This is the predetermined status of each item, as decided by the assessment team - not the individual appraisers.


For example, if the team decides that Part 1 is an example of a Good product, simply select “Good” from the drop-down list in the red Standard column against Part 1. Repeat this for each of the parts in the study.


Important! Remember that each item you appraise should be identified with its 'part' number. So if you were inspecting 10 documents, ensure that each document is numbered 1 through 10 as a unique ID. You can then pick out the relevant item by following the run order column. In the above example, this means that for your first appraisal you would show the appraisers part 4 and gather their result, then part 10, then part 3, and so on. It is important to remember this when filling in the Excel document, otherwise you may enter the results in the incorrect order. It may feel intuitive to enter results simply from top to bottom, but this is incorrect: you must ensure the part you are appraising corresponds to the line on the document.



Once this is complete, the MSA activity can be carried out as per the instruction given in your training, using steps 5+ above. With the data entry complete, verify that the validation message above the trial numbers is green and states that you may continue. If measurements have been missed, or the Step 1 settings are incorrect, an error message will appear.


Once validated, select the Summary Report tab at the bottom of the sheet. This will present the MSA Study Report pages as follows:

Figure 3 - Summary Report


The Summary Report gives the overall results of the study. In the top left is the overall accuracy slide. As a general rule, accuracy below 80% is considered unacceptable in Attribute MSA studies. 
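The overall accuracy is simply the fraction of all appraisals that match the known Standard. A toy Python illustration (the part standards and ratings below are hypothetical, not from the document):

```python
# Assessment team's known standards per part (hypothetical data)
standards = {1: "Good", 2: "Bad", 3: "Good"}

# Individual appraisals as (part, appraiser's rating) pairs
appraisals = [
    (1, "Good"), (2, "Bad"), (3, "Bad"),
    (1, "Good"), (2, "Good"), (3, "Good"),
]

# Count appraisals that agree with the standard for that part
matches = sum(1 for part, rating in appraisals if rating == standards[part])
accuracy = matches / len(appraisals)
print(f"{accuracy:.0%}")  # 67% - below the 80% guideline, so unacceptable
```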


To the right of this are the simple ratings: the overall error, Good Rated Bad, Bad Rated Good, and Mixed Ratings figures. The “Good/Bad” titles will change based on what you defined as your Good and Bad values.


Bottom left is the appraiser comparison, showing the overall accuracy of each individual appraiser. The red line is the overall study accuracy.


To the bottom right is standard text explaining how to interpret the different readings on the summary report. This section does not change regardless of the results of the study.


Figure 11 - MSA Misclassification Report

The Misclassification Report details the results for the individual parts and appraisers. From this it is possible to identify which parts may be “borderline”, or poorly set as a standard. It is also possible to see which appraisers may need further training, and whether an appraiser has a particular bias (for example, erring on the side of caution and failing parts or products that should actually be passed).
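The kind of bias the report surfaces amounts to counting disagreements with the Standard, split by direction. A hypothetical Python sketch (data and labels are illustrative; in the document the labels follow your defined Good/Bad values):

```python
from collections import Counter

# Known standards per part and one appraiser's ratings (hypothetical data)
standards = {1: "Good", 2: "Bad", 3: "Good", 4: "Good"}
ratings = [(1, "Good"), (2, "Bad"), (3, "Bad"), (4, "Bad")]

# Tally each disagreement by its direction
errors = Counter()
for part, rating in ratings:
    if rating != standards[part]:
        errors[f"{standards[part]} rated {rating}"] += 1

print(dict(errors))  # {'Good rated Bad': 2} - a bias toward failing good parts
```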


These “Pass/Fail” titles will change based on what you defined as your OK and NOK ratings.


Figure 12 - MSA Accuracy Report


The Accuracy Report provides graphical comparisons between the appraisers, the parts against the standard, the trials, and the Good/Bad classifications.


You can also input a decimal alpha level to modify confidence intervals accordingly.
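To illustrate what the alpha level does, here is a normal-approximation confidence interval on an accuracy proportion in Python. Treat it as a sketch: the exact interval method used by the spreadsheet is not specified here, and the counts are hypothetical.

```python
from statistics import NormalDist

alpha = 0.05                 # decimal alpha level -> 95% confidence
matches, total = 54, 60      # hypothetical study results
p = matches / total          # observed accuracy proportion

# Two-sided normal-approximation (Wald) interval
z = NormalDist().inv_cdf(1 - alpha / 2)
half_width = z * (p * (1 - p) / total) ** 0.5
lower, upper = p - half_width, p + half_width
print(f"{p:.1%} accuracy, CI [{lower:.1%}, {upper:.1%}]")
```

A smaller alpha (e.g. 0.01) widens the interval; a larger alpha narrows it.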


These comparisons show accuracy of judgements across all areas.


Your Capella trainer will provide full details of how to interpret these results in your training.