Welcome to ICDAR RRC-ICText 2021!



The highlights of the ICDAR Robust Reading Challenge - ICText 2021.

Novel Dataset

A large-scale, industry-standard integrated circuit OCR dataset that includes character-level annotations of aesthetic classes.

New Challenges

Three new challenges focusing on practicality and performance, bridging the gap between the research community and industry.

New Metrics

New metrics that evaluate a model's overall performance, focusing on both its efficiency and its effectiveness.


Three new tasks are introduced in this competition. We host two challenges on eval.ai: one for Tasks 1 and 2, and another for Task 3.

Task 1

End-to-end Text Spotting on Integrated Circuits.

Task 2

End-to-end Text Spotting and Aesthetic Assessment.

Task 3

Inference Speed, Model Size and Score Assessment.
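Task 3 scores submissions on efficiency as well as accuracy. As a rough illustration only (this is not the official evaluation protocol; the function names and the 4-bytes-per-parameter float32 assumption are ours), inference speed and model size could be measured along these lines:

```python
import time

# A hedged sketch of how Task 3's efficiency metrics might be measured.
# NOT the official protocol: function names and the float32 size
# assumption are illustrative only.

def measure_inference_speed(model_fn, images, warmup=2):
    """Return the mean wall-clock seconds per image for model_fn."""
    for img in images[:warmup]:       # warm-up runs, excluded from timing
        model_fn(img)
    start = time.perf_counter()
    for img in images:
        model_fn(img)
    return (time.perf_counter() - start) / len(images)

def model_size_mb(num_params, bytes_per_param=4):
    """Approximate model size in MB, assuming float32 weights."""
    return num_params * bytes_per_param / (1024 ** 2)

# Demo with a stand-in "model" (identity function) and dummy images.
def dummy_model(img):
    return img

images = list(range(100))
secs = measure_inference_speed(dummy_model, images)
print(f"~{secs * 1000:.4f} ms/image, size ~{model_size_mb(27_000_000):.1f} MB")
```

Warm-up iterations are excluded from the timed loop because first calls often pay one-off costs (caching, JIT compilation) that would distort the average.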


Schedule of the ICDAR 2021 Robust Reading Competition - ICText.

  • Dec 2020 - Feb 2021

    Dataset Preparation

Collecting and annotating images. We want to make sure that the annotations are of the highest quality.

  • 1st Feb 2021

    Registration Open and Dataset Release

Registration opens for participants on eval.ai; please sign up for an account to participate. A non-disclosure agreement has to be signed and will be reviewed by the organisers. The dataset will be released for download to approved participants only.

  • 31st Mar 2021

    Competition Deadline

Result submission will close on 31st March 2021, 11.59 PM UTC. Late submissions will not be eligible for prize money.

  • 7th April 2021

    Release Final Ranking

The winners of each challenge will be confirmed and announced.

Frequently Asked Questions

You are encouraged to read them carefully.

Question: How do I participate and obtain the dataset?

Answer: First of all, you have to register an account and join our challenge on eval.ai. Then obtain and sign the NDA according to the specified standard before you email it to us. Once you have given us your registered email address and your NDA has been verified, we will send you the training dataset and grant you submission access. You can refer to the following figure for more information.

Question: Can we use external or synthetic data for training?

Answer: Participants are allowed to use any publicly available scene text dataset or any aesthetic dataset, provided that the extra data is open-sourced and adheres to our Terms and Conditions. If a synthetic data generation pipeline is already open-sourced (SynthText, MJSynth, or any text in-painting method), you can generate your data based on it with whatever changes you want. You must then open-source the generation code, the corpus/vocabulary, and the data in general if you used them for training. As long as the generation process is replicable and the data is not privatized, it is fine with us. You are free to use our training data as a starting point for a synthetic dataset. By private data, we refer to data that is accessible to you only. For instance, if you intend to use [Your Affiliation]’s in-house synthetic data generation module (which is accessible to [Your Affiliation] only) to generate synthetic data, this is prohibited, because it would be unfair to the other participants as you are the only one with access to such resources. For more details, please refer to the OPEN-SOURCE CODE and WINNERS’ OBLIGATIONS sections in the Terms and Conditions.

Question: How do you differentiate characters affected by aesthetic defects?

Answer: We differentiate the characters by referring to perfect images without aesthetic defects and by relying on the image context.

Question: I found incorrect annotations in the dataset. What should I do?

Answer: We have taken extensive measures to make sure that the annotations are correct, but we might still have missed some errors. The dataset is annotated by a few human annotators, so it may contain wrong or inconsistent labels, which can introduce additional noise. We are rechecking the annotations and will release an updated version before 12/02/2021.

Question: Why does my submission take so long to be evaluated?

Answer: Eval.ai has allocated more resources for our challenge, but it might still take more than 60 seconds for your submission to be evaluated. We ask for your patience when submitting.


Partner universities and corporations.

Contact Us

You can fill in the form below or reach us at ictext [at] vitrox.com