"The Robust Reading Competition has moved to its new permanent space at http://rrc.cvc.uab.es. This site will remain available but will not accept any further submissions from January 2015 onwards. Please use the new site at http://rrc.cvc.uab.es for up to date information and to submit new results. You can continue to use your existing user accounts while all associated data have been transferred to the new site. If you encounter any problem, please contact us. Apologies for any inconvenience caused."

Results

You can see the latest results below, updated in real time as new submissions are made to the system. Methods published at ICDAR 2013 are shown in grey, other public methods in white, and your own methods in yellow.

Ranking for Task 1 - Text Localization

The ICDAR 2013 evaluation protocol for the text localization task is described in the competition report [1] and is based on [2]. The evaluation protocol is our own implementation, tightly integrated into the Web services offered through the competition Web site; it does not make use of the DetEval tool offered by the authors of [2]. It has come to our attention that slight differences exist between the results of the ICDAR 2013 evaluation protocol and those obtained with DetEval. These are due to a number of heuristics that are not documented in the DetEval paper [2]; they are explained in more detail in the FAQ. For compatibility, and to assist authors who prefer the DetEval framework, we have implemented an alternative evaluation protocol that is consistent with the DetEval tool and takes all undocumented heuristics into account. Any submitted method is automatically evaluated under both frameworks, and you can switch between the two using the menu below. Note that the ranking of the methods rarely changes; occasionally the DetEval alternative produces more intuitive results due to the extra matching rules.

1. D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. Gomez, S. Robles, J. Mas, D. Fernandez, J. Almazan and L.P. de las Heras, "ICDAR 2013 Robust Reading Competition", in Proc. 12th International Conference on Document Analysis and Recognition, IEEE CPS, 2013, pp. 1115-1124.

2. C. Wolf and J.M. Jolion, "Object Count/Area Graphs for the Evaluation of Object Detection and Segmentation Algorithms", International Journal on Document Analysis and Recognition, vol. 8, no. 4, pp. 280-296, 2006.
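
As a rough illustration of the area-overlap matching idea behind [2], the sketch below scores one-to-one box matches only. The thresholds (area recall >= 0.8, area precision >= 0.4) are commonly cited defaults and are assumptions here; the real protocol also handles one-to-many and many-to-one matches, plus the undocumented heuristics discussed above.

```python
# Simplified one-to-one text localization scoring in the spirit of [2].
# Boxes are (x1, y1, x2, y2) axis-aligned rectangles; thresholds tr/tp
# are assumed defaults, not the competition's exact configuration.

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def intersection(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    if x2 <= x1 or y2 <= y1:
        return 0.0
    return (x2 - x1) * (y2 - y1)

def evaluate(gt_boxes, det_boxes, tr=0.8, tp=0.4):
    """Greedy one-to-one matching by area recall/precision thresholds."""
    matched_gt, matched_det = set(), set()
    for i, g in enumerate(gt_boxes):
        for j, d in enumerate(det_boxes):
            if j in matched_det:
                continue
            inter = intersection(g, d)
            # A pair matches when it covers enough of the ground truth
            # box AND enough of the detected box.
            if inter / area(g) >= tr and inter / area(d) >= tp:
                matched_gt.add(i)
                matched_det.add(j)
                break
    recall = len(matched_gt) / len(gt_boxes) if gt_boxes else 0.0
    precision = len(matched_det) / len(det_boxes) if det_boxes else 0.0
    hmean = (2 * recall * precision / (recall + precision)
             if recall + precision else 0.0)
    return recall, precision, hmean
```

For example, a detection slightly tighter than its ground-truth box still matches: `evaluate([(0, 0, 100, 20)], [(2, 1, 98, 19)])` yields recall, precision and hmean of 1.0.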

Method           Recall    Precision  Hmean
Sams             89.40 %   88.83 %    89.11 %
BUCT_YST         84.19 %   91.66 %    87.77 %
USTB_TexStar     82.38 %   93.83 %    87.74 %
Blindsight2012   73.81 %   90.11 %    81.15 %
TH-TextLoc       75.85 %   86.82 %    80.96 %
I2R_NUS_FAR      71.42 %   84.17 %    77.27 %
Baseline         69.21 %   84.94 %    76.27 %
Text Detection   73.18 %   78.62 %    75.81 %
I2R_NUS          67.52 %   85.19 %    75.34 %
BlockAnalysis    72.69 %   77.78 %    75.15 %
BDTD_CASIA       67.05 %   78.98 %    72.53 %
OTCYMIST         74.85 %   67.69 %    71.09 %
Inkam            52.21 %   58.12 %    55.00 %
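
Hmean in the table is the harmonic mean of recall and precision; for example, the top row can be reproduced as:

```python
# Harmonic mean of recall and precision (both here in percent).
def hmean(recall, precision):
    return 2 * recall * precision / (recall + precision)

print(round(hmean(89.40, 88.83), 2))  # -> 89.11
```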

Ranking for Task 2 - Text Segmentation

                 Pixel Results                 Atom-based Results
Method           Recall   Precision  F-score   Well s.  Merged  Broken  Br.-Mer.  Lost  False p.  Detected  Recall   Precision  F-score
BlockAnalysis2   82.21 %  81.50 %    81.85 %   6462     530     100     0         730   622       7685      82.61 %  84.09 %    83.34 %
USTB_FuStar      87.21 %  79.98 %    83.44 %   6258     920     56      1         587   370       7260      80.01 %  86.20 %    82.99 %
BUCT_YST         87.29 %  77.87 %    82.31 %   5821     1065    54      0         882   151       6914      74.42 %  84.19 %    79.00 %
I2R_NUS          87.95 %  74.40 %    80.61 %   5051     1584    30      6         1151  685       6878      64.57 %  73.44 %    68.72 %
OTCYMIST         81.82 %  71.00 %    76.03 %   5143     1420    34      2         1223  1083      7178      65.75 %  71.65 %    68.57 %
I2R_NUS_FAR      82.56 %  74.31 %    78.22 %   4619     1474    12      1         1716  156       5771      59.05 %  80.04 %    67.96 %
Text Detection   78.68 %  68.63 %    73.32 %   3883     2716    36      0         1187  210       5590      49.64 %  69.46 %    57.90 %

(Well s. = well segmented atoms; Br.-Mer. = broken and merged; False p. = false positives.)
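
The pixel-level scores on the left of the table come from comparing each segmentation result against a ground-truth mask pixel by pixel. A minimal sketch, assuming masks are given as flat 0/1 sequences (the competition's actual file format is not reproduced here):

```python
# Pixel-level segmentation scoring from binary masks.
# gt_mask and det_mask are equal-length flat sequences of 0/1 labels;
# this representation is an assumption for illustration only.

def pixel_scores(gt_mask, det_mask):
    tp = sum(1 for g, d in zip(gt_mask, det_mask) if g and d)
    gt_pos = sum(gt_mask)    # text pixels in the ground truth
    det_pos = sum(det_mask)  # text pixels in the detection
    recall = tp / gt_pos if gt_pos else 0.0
    precision = tp / det_pos if det_pos else 0.0
    f = (2 * recall * precision / (recall + precision)
         if recall + precision else 0.0)
    return recall, precision, f
```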

Ranking for Task 3 - Word Recognition

Method     Total Edit Distance   Correctly Recognised Words   T.E.D. (upper)   C.R.W. (upper)
PhotoOCR   105.5                 82.21 %                      88.8             85.41 %
MAPS       196.2                 80.4 %                       186.4            81.51 %
PLT        200.4                 80.26 %                      190.9            81.38 %
NESP       214.5                 79.29 %                      198.2            80.75 %
Baseline   409.4                 60.95 %                      400.1            61.57 %

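
The edit-distance columns rely on the Levenshtein distance between each recognized word and its ground-truth transcription, and a word counts as correctly recognized only on an exact match ("(upper)" presumably denotes the variant computed on upper-cased strings; the exact normalization behind the totals is not reproduced here). A minimal sketch of the distance itself:

```python
# Standard Levenshtein (edit) distance via dynamic programming,
# keeping only the previous row of the DP table.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

For example, `edit_distance("kitten", "sitting")` is 3, and a perfectly recognized word has distance 0.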
Important Dates
  • 15 December: Web site online
  • 15 January: Registration of interest
  • 28 February: Training datasets available
  • 30 March: Test datasets available
  • 8 April: Submission of results
  • 19 April: Method descriptions due