"The Robust Reading Competition has moved to its new permanent space at http://rrc.cvc.uab.es. This site will remain available but will not accept any further submissions from January 2015 onwards. Please use the new site at http://rrc.cvc.uab.es for up to date information and to submit new results. You can continue to use your existing user accounts while all associated data have been transferred to the new site. If you encounter any problem, please contact us. Apologies for any inconvenience caused."

Overview


Images are frequently used in electronic documents (Web pages and email) to embed textual information. The use of images as text carriers stems from a number of needs: for example, images are used to beautify a page (e.g. titles and headings), to attract attention (e.g. advertisements), to hide information (e.g. images in spam emails designed to evade text-based filtering), or even to tell humans apart from computers (CAPTCHA tests).

Automatically extracting text from born-digital images is therefore an attractive prospect, as it would provide the enabling technology for a number of applications, such as improved indexing and retrieval of Web content, enhanced content accessibility, and content filtering (e.g. of advertisements or spam emails).

While born-digital text images are superficially very similar to real-scene text images (both feature text in complex colour settings), they are at the same time distinctly different. Born-digital images are inherently low-resolution (they are made to be transmitted online and displayed on a screen), and their text is digitally created on the image; real-scene text images, on the other hand, are high-resolution images captured with a camera. While born-digital images might suffer from compression artefacts and severe anti-aliasing, they do not share the illumination and geometric distortion problems of real-scene images. It is therefore not necessarily true that methods developed for one domain will work in the other.


In 2011 we set out to assess the state of the art in text extraction in both domains (born-digital and real-scene images). We received 24 submissions over the three tasks of the born-digital Challenge: 10 during the competition run, and 14 more over the past year, after the competition was opened in continuous mode in October 2011.

Given the strong interest displayed by the community, and the fact that there is still a large margin for improvement, in the ICDAR 2013 edition we will revisit these tasks and invite further submissions on an updated and even more challenging dataset.

The challenge is set up around three tasks: Text Localisation, Text Segmentation and Word Recognition. Participation in any or all of the tasks is welcome; see the Tasks page for details.

The results of the past ICDAR competition can be found in the ICDAR proceedings [1]. You can also have a look at the final report here, and at the presentation we gave during the conference here.

  1. D. Karatzas, S. Robles Mestre, J. Mas, F. Nourbakhsh and P. Pratim Roy, "ICDAR 2011 Robust Reading Competition - Challenge 1: Reading Text in Born-Digital Images (Web and Email)", in Proc. 11th International Conference on Document Analysis and Recognition (ICDAR 2011), IEEE CPS, pp. 1485-1490. [pdf] [presentation]


Important Dates
  • 15 December: Web site online
  • 15 January: Registration of interest
  • 28 February: Training datasets available
  • 30 March: Test datasets available
  • 8 April: Submission of results
  • 19 April: Method descriptions due