Comparative Assessment (CA) Job Execution

The goal of the Comparative Assessment job type is to compare the MT output of two different engines and select the more suitable one. To perform this evaluation, the evaluator is presented with a job view displaying Translation 1 and Translation 2 (the MT outputs from the two engines). Each MT output should be scored using the scoring system selected by the project manager.

In the example screenshot below, a 5-point scoring system has been selected.
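As a rough mental model, each segment in a CA job pairs one source with two MT outputs, and each output receives its own score. The sketch below is a minimal TypeScript illustration of that data, assuming the 5-point scale from the example; all type and field names are illustrative and not part of the product.

```typescript
// Illustrative only: the shape of a single CA segment evaluation.
// A 5-point scoring system, as selected by the project manager in this example.
type Score = 1 | 2 | 3 | 4 | 5;

interface CaSegmentEvaluation {
  source: string;        // the source segment shown to the evaluator
  translation1: string;  // MT output from the first engine
  translation2: string;  // MT output from the second engine
  score1?: Score;        // score assigned to Translation 1
  score2?: Score;        // score assigned to Translation 2
}
```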

CA Job Execution

The definitions of the scoring system can be displayed by hovering over the scores, or by selecting the guidelines list icon in the bottom left of the screen.

CA Job Execution - hovering

If both MT outputs are scored the same, the evaluator will be prompted to select the better translation (if possible) before proceeding.

CA Job Execution - equal score
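The rule behind this prompt can be sketched as a simple validation step. The sketch below is an assumption-laden TypeScript illustration, including a hypothetical "no-preference" option to reflect the "(if possible)" wording above; none of these names come from the product.

```typescript
// Illustrative tie-break rule: when both translations receive the same score,
// a preference must be recorded before the evaluator can proceed.
type Preference = "translation1" | "translation2" | "no-preference";

function canProceed(
  score1: number,
  score2: number,
  preference?: Preference
): boolean {
  if (score1 !== score2) {
    return true;                   // different scores already decide the comparison
  }
  return preference !== undefined; // equal scores: an explicit choice is required
}
```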

Reporting issues for a CA Job

If there is an issue with either of the MT outputs or the Source, the evaluator can provide feedback.

  • For issues with the MT outputs, the evaluator can tick the UNTRANSLATED CONTENT box if the segment has not been translated, or the UNEXPECTED CHARACTERS/LANGUAGE in target box if the text contains unexpected characters or is in the wrong language. For any other issue, the evaluator can leave a comment describing it.

  • For issues with the Source, the evaluator can tick the ERRORS IN SOURCE box to indicate that the source contains spelling or grammar errors, or that it is in the wrong language. For any other issue, the evaluator can leave a comment describing it.

When feedback has been added to any of these elements, the Feedback button will be highlighted in yellow.

The feedback sections can be shown or hidden, and their state is remembered until the evaluator changes it.
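Taken together, the feedback attached to a segment amounts to a few checkboxes and optional comments. The sketch below is a rough TypeScript illustration of that structure and of when the Feedback button would light up; the flag names mirror the checkboxes described above, while the field names, types, and helper function are assumptions made for this sketch.

```typescript
// Illustrative only: possible shape of the feedback attached to one segment.
interface TargetFeedback {
  untranslatedContent: boolean;            // UNTRANSLATED CONTENT checkbox
  unexpectedCharactersOrLanguage: boolean; // UNEXPECTED CHARACTERS/LANGUAGE in target
  comment?: string;                        // free-text note for any other issue
}

interface SourceFeedback {
  errorsInSource: boolean;                 // ERRORS IN SOURCE checkbox
  comment?: string;                        // free-text note for any other issue
}

interface SegmentFeedback {
  translation1?: TargetFeedback;
  translation2?: TargetFeedback;
  source?: SourceFeedback;
}

// The Feedback button is highlighted as soon as any flag is ticked
// or any comment is entered.
function hasFeedback(f: SegmentFeedback): boolean {
  const parts = [f.translation1, f.translation2, f.source];
  return parts.some(
    (part) =>
      part !== undefined &&
      Object.values(part).some(
        (value) => value === true || typeof value === "string"
      )
  );
}
```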