Reports for EQA participants

 

These general rules apply to the language in which the individual documents are issued:

Most documents contain a key that explains the symbols used.

 

Evaluation of one EQA round
Final report

Each participant receives a final report on the evaluation of the EQA round. This report is accompanied by supplements, the list of which is given in the table at the end of the final report. The supervisor's comment is also part of the final report.
Participants who did not submit their results also receive the final report, but without supplements.

The final report is public (available on the website).

Supplements to the final report

In the EQA Plan for the relevant year, you can find the list of documents that will be attached to the final report for each EQA programme.
The supplements to the final report are confidential and are intended only for the individual participants.

 

Documents related to participation in the round

Confirmation of attendance

The rules for issuing confirmations of attendance are defined in the EQA Plan for the relevant year.
The confirmation of attendance lists all tests for which the participant reported results, regardless of the accuracy of those results. This document serves as proof of participation in the relevant EQA round.

Certificate

The rules for issuing certificates are defined in the EQA Plan for the relevant year.
The certificate is issued only for selected tests of selected programmes, and only for tests in which the participant succeeded.
The certification criteria can be found in the Infoservice section, in the Certification xxxx documents (where xxxx is the year of validity).

 

Documents related to qualitative results

Result sheet

The result sheet for qualitative results summarizes, in a single document, the participant's own results and the summary statistics of all results (a frequency table with the assigned values marked).

Result sheet (scoring)


In some EQA programmes, individual tests are not only evaluated but also scored. The purpose of scoring is to provide clear overall information about a certain group of tests, which is evaluated as a whole on the basis of the participant's score. We choose this approach especially where a round includes many partial tests and a comprehensive evaluation of the participant's results for certain groups of tests is appropriate.
Currently, this type of assessment is used in the following EQA programmes:

  • Peripheral Blood Morphology Evaluation (DIF) - individual samples (A and B) are evaluated separately on the basis of point gain. At the end of the result sheet, 2 tests are evaluated (see example in the pictures).
  • Peripheral Blood Smears - Photos (NF) - individual photographs (1, 2, 3 and 4) are evaluated separately on the basis of point gain. At the end of the result sheet, 4 tests are evaluated.
  • Bone Marrow Aspirate Film (NKDF) - individual photographs (1 and 2) and the overall description of the smear are evaluated separately on the basis of the point gain for each patient. Because there are two patients, 6 tests are evaluated at the end of the result sheet.

The scoring rules for the above-mentioned programmes can be found in the Infoservice section.

 

Documents related to quantitative results

Result sheet (comparability)

We use this type of result sheet in programmes where only the comparability (not the traceability) of the results is evaluated.

Result sheet (comparability and traceability)


We use this type of result sheet in programmes where the comparability and/or traceability of the results are evaluated.
Comparability can be assessed whenever a sufficient number of results is available.
Traceability can only be assessed for tests for which there is a reference value traceable to a higher metrological standard.
Different criteria can be set for evaluating comparability and traceability.

Complex statistics


We use this document in programmes where 2 samples are measured.
The first line shows the EQA round identification and the participant code. The name of the test is printed on the second line. In the upper left part, the participant's results are printed, together with the identification of the group in which the participant's results were evaluated (e.g. "all results", or a group defined by the use of the same measurement principle and reagents from one manufacturer, etc.).
The rest of the sheet can be divided into 3 relatively independent parts, described in the following lines:

  1. Youden plot (top left)
    It displays the results of all participants in the round: the x-axis represents the results for sample A, the y-axis the results for sample B. If a rectangle is drawn inside the graph, it defines the area of correct results. Dots showing the results of participants who belong to the same evaluated group as the participant are printed in black; other dots are printed in grey. The participant's own result is marked with dashed lines.
  2. P-score history for the last 2 years (top right)
    A description of the calculation and interpretation of the P-score can be found here.
    The P-score is also given if the participant's result belongs to a group that is not evaluated (i.e. a reliable assigned value is not available, e.g. because the group is very small). The meaningfulness of the P-score in these cases can of course be debated; however, we prefer to provide the participant with some information (albeit of limited value) rather than no information at all (no P-score, blank chart).
  3. Summary statistics (bottom half of the page)
    Calculated statistical parameters are given for all results and also for the individual independently evaluated groups. Statistical parameters (SD, CV, etc.) are given only for groups with a frequency of at least n = 5.
    In the column headed AV, you will find the type of assigned value that was used for the particular evaluated group (e.g. RV, CVP, etc.).
    For programmes E1, E2 and TM, which we provide in cooperation with RfB and for which all statistical calculations are performed over the common set of SEKK+RfB results, the number of groups (lines in the statistics) is sometimes relatively large, which reduces clarity. Therefore, for these programmes, we print in the statistics only those groups that contain at least one of our participants. Complete statistics are available on the web for anyone interested.
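The "n = 5" rule above can be illustrated with a short sketch. This is not SEKK's actual implementation; the function name, data layout and example values are invented for the illustration:

```python
from statistics import mean, stdev

# Illustrative sketch: compute summary statistics per evaluated group,
# adding SD and CV only for groups with a frequency of at least n = 5
# (group names and result values below are invented).
def summarize(groups):
    summary = {}
    for name, results in groups.items():
        n = len(results)
        row = {"n": n, "mean": round(mean(results), 3)}
        if n >= 5:  # SD/CV are printed only for sufficiently large groups
            sd = stdev(results)
            row["SD"] = round(sd, 3)
            row["CV_percent"] = round(100 * sd / mean(results), 1)
        summary[name] = row
    return summary

groups = {
    "all results": [5.1, 5.3, 4.9, 5.0, 5.2, 5.4, 5.1],
    "method X":    [5.0, 5.2, 4.9],  # too few results: no SD/CV
}
print(summarize(groups))
```

The small group still gets its count and mean printed; only the dispersion statistics are suppressed.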

Histograms


Histograms show the distribution of quantitative results in cases where Youden plots cannot be used, especially when the number of samples in the round differs from 2 or when the results for one of the samples are close to zero.
Each column represents the number of results that lie in the interval described on the x-axis.
The white bars show the set of all results.
The red bars show the results of the participant's group (the group to which the participant's result belongs). The participant's group is described at the top right.
If any results lie outside the range of the x-axis (to the left or right), their number is given in the lower left or lower right corner, respectively.
The position of the participant's result is indicated by a red circle at the top of the graph. If the participant's result is outside the x-axis range, the circle is replaced by an arrow indicating the direction in which the participant's result lies.
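The counting behind such a histogram, including the separate counts of results outside the x-axis range, might be sketched as follows (the bin edges and result values are invented for the example, not taken from any real report):

```python
# Illustrative sketch: tally results into histogram bins over a fixed
# x-axis range; results outside the range are counted separately and
# would be printed in the lower left / lower right corners.
def tally(results, low, high, n_bins):
    width = (high - low) / n_bins
    bins = [0] * n_bins
    below = above = 0
    for r in results:
        if r < low:
            below += 1
        elif r >= high:
            above += 1
        else:
            bins[int((r - low) / width)] += 1
    return bins, below, above

bins, below, above = tally([0.5, 1.2, 1.4, 2.8, 3.9, 4.6],
                           low=1.0, high=4.0, n_bins=3)
print(bins, below, above)  # → [2, 1, 1] 1 1
```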

Results with uncertainties


This document is only available to participants who reported results including uncertainties.
One graph is given for each test and sample.
The title of the graph shows the name of the test and the label of the sample, together with the number of results displayed in the graph and the number of results that exceed the range of the x-axis (and are therefore outside the graph).
The y-axis of the graph is always calibrated in %. Reference values, uncertainties and participants' results are converted to %: the assigned value is set equal to 100 % and all other data are then expressed as a percentage of this value. The data converted in this way are displayed in the graph as shown in the figure on the left. Thanks to this procedure, it is not necessary to construct a large number of graphs (one for each evaluated group).
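The conversion is simple proportional scaling; a minimal sketch (with invented numbers, not values from any real round):

```python
# Illustrative sketch: rescale a value so that the assigned value (AV)
# corresponds to 100 %, as used in the "Results with uncertainties"
# graphs. Uncertainties are rescaled by the same factor.
def to_percent(value, assigned_value):
    return 100.0 * value / assigned_value

av = 2.5        # assigned value in original units (invented)
result = 2.6    # a participant's result (invented)
uc = 0.2        # expanded combined uncertainty of the result (invented)

print(to_percent(result, av))  # → 104.0
print(to_percent(uc, av))      # → 8.0
```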
The middle horizontal line shows the position of the assigned value (AV = 100 %). The dashed horizontal lines on both sides of this line show the expanded combined uncertainty of the assigned value (Uc,AV). The upper and lower solid lines define the range of acceptable results (Dmax).
The results of the individual participants are displayed as dots in ascending order, each with whiskers on both sides representing the expanded combined uncertainty of the measurement result (Uc, with coverage factor k = 2). The participant's result is printed in the lower right corner of the graph, and the participant's own result is marked in bold in the graph.
 

A basic interpretation of the position of an individual result and its uncertainty is shown in the model example in the picture below:

  • Laboratories 1, 2, 10 and 11 reported results outside the range of acceptable results (Dmax).
  • Laboratories 3 and 9 reported results inside the range of acceptable results, but the uncertainties of their determinations (showing the range where the true value probably lies) exceed the acceptable range on one side. Their success in this round was therefore due not only to good work but also to a piece of luck (the probability that their true value lies outside the acceptable range is not negligible).
  • Laboratory 5 reported an extreme uncertainty value: its measurement result agrees almost ideally with the assigned value, but its uncertainty is so high that the true value may lie outside the acceptable range (the uncertainty is unacceptably large compared to Dmax).
  • The result of laboratory 6 can also be considered extreme: it reported an individual uncertainty much lower than the other participants and lower than the uncertainty of the assigned value (dashed lines). Such a seemingly ideal result should be treated with suspicion, because the reported uncertainty is not realistic.
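The four situations above can be sketched as a simple classifier working in the % scale of the graph (AV = 100). The category labels, thresholds and example values are our own illustrative assumptions; SEKK's reports do not print such labels:

```python
# Illustrative sketch of the interpretation above, with everything
# expressed in % of the assigned value (AV = 100): dmax is the
# acceptable difference, uc the result's expanded uncertainty, and
# uc_av the expanded uncertainty of the assigned value.
def classify(result, uc, dmax, uc_av):
    lo, hi = 100 - dmax, 100 + dmax
    if result < lo or result > hi:
        return "outside acceptable range"        # like labs 1, 2, 10, 11
    if result - uc < lo and result + uc > hi:
        return "uncertainty unacceptably large"  # like lab 5
    if result - uc < lo or result + uc > hi:
        return "acceptable, but partly by luck"  # like labs 3, 9
    if uc < uc_av:
        return "suspiciously small uncertainty"  # like lab 6
    return "acceptable"

print(classify(result=103.0, uc=4.0, dmax=10.0, uc_av=3.0))
# → acceptable
print(classify(result=108.0, uc=5.0, dmax=10.0, uc_av=3.0))
# → acceptable, but partly by luck
```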

 

Annual overviews

We send the documents mentioned below to the participants once a year, after the evaluation of all rounds of the year is finished (usually in January of the following year).
We do not send them to participants who attended only a small number of rounds.
Annual overviews are confidential and are intended only for the individual participants.

Overview of the certificates

This document gives participants an overview of the certificates they acquired and their validity for the previous year.
The report lists all tests for which certificates are issued in the EQA system (regardless of whether the test is carried out at the laboratory named at the top of the document). For the tests carried out by the participant, the date of issue and the expiry date of the most recently issued certificate are printed.

Overview of the tests

This document gives participants an overview of their success in individual tests in the previous year. It also compares the success of the participant (column "individual results") with that of the set of all participants ("summary results").
For each EQA programme and each test, the table shows the number of results the participant reported in the given year, together with the success rate. For comparison, an overview of all results (including success rate) is also presented for each EQA programme and test.
 
In the right half of the table, showing the overall results, it is quite common that the totals in the header row (EQA programme) do not match the sum of the numbers for the individual tests of the programme. This is because the header row shows totals for the entire programme, while only the lines for tests that the laboratory actually performed are printed. If the laboratory does not perform all tests of the programme, some lines are missing from the report, and the totals in the programme row are therefore greater than the sum of the printed individual tests.