OCR/Roadmap

Currently, all products are edited manually. This project is about automatically or semi-automatically detecting a number of elements on product pictures using OCR and computer vision.

Current state

  • OCR extraction of Ingredients using Tesseract 2 (production) and 3 (.net)
  • Uses the French dictionary ('fra') for all languages (see the language-selection sketch below)
-- /home/off-fr/cgi# grep get_ocr *
Ingredients.pm:use Image::OCR::Tesseract 'get_ocr';
Ingredients.pm: $text =  decode utf8=>get_ocr($image,undef,'fra');
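
The snippet above hard-codes the French dictionary ('fra'). Below is a minimal sketch of per-language OCR, assuming the product's language code is available from the product record; the %tesseract_lang map and extract_ingredients_text() are illustrative names, not names from the existing code.

 # Sketch: pick the Tesseract dictionary from the product's language
 # instead of always using 'fra'. Illustrative only.
 use Encode qw(decode);
 use Image::OCR::Tesseract 'get_ocr';
 
 # Map Open Food Facts language codes to Tesseract dictionary names.
 my %tesseract_lang = (
     fr => 'fra',
     en => 'eng',
     es => 'spa',
     de => 'deu',
 );
 
 sub extract_ingredients_text {
     my ($image, $product_lc) = @_;
     # Fall back to French when no dictionary is listed for the language.
     my $lang = $tesseract_lang{$product_lc} // 'fra';
     return decode(utf8 => get_ocr($image, undef, $lang));
 }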

Short-term goals

Long-term goals

  • Get dictionary translations from Wikidata (sketched below)
  • Investigate OCRopus for complex layout extraction
  • Investigate OpenCV for detection of patterns, logos…
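
As a sketch of the Wikidata idea: the wbgetentities API of wikidata.org can return an item's labels in many languages in one call. Mapping ingredients to Wikidata item ids is a separate problem not covered here, and wikidata_labels() is an illustrative helper, not an existing function.

 # Sketch: fetch the multilingual labels of a Wikidata item, which could
 # seed per-language ingredient dictionaries. Illustrative helper only.
 use LWP::UserAgent;
 use JSON qw(decode_json);
 
 sub wikidata_labels {
     my ($qid) = @_;    # Wikidata item id, e.g. the item mapped to an ingredient
     my $url = 'https://www.wikidata.org/w/api.php'
         . "?action=wbgetentities&ids=$qid&props=labels"
         . '&languages=fr|en|es|de&format=json';
     my $ua       = LWP::UserAgent->new(agent => 'off-ocr-roadmap-sketch/0.1');
     my $response = $ua->get($url);
     die $response->status_line unless $response->is_success;
     my $labels = decode_json($response->decoded_content)->{entities}{$qid}{labels} // {};
     # Return a simple hash of language code => label.
     return { map { $_ => $labels->{$_}{value} } keys %$labels };
 }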

Targets

  • Logos of brands (getting them from POD?)
  • Logos of Labels (standardized)
  • Text in difficult conditions (distorted on bottles, photographed diagonally, in low or bright light)
  • Standardized layouts (US Nutrition labels)
  • Standardized text (quantities, EU Packaging codes)
  • Barcodes (extraction from uploaded images)
    • Store the extracted barcode as a separate image for further reference
  • Image orientation: detect the orientation of the text to guess whether the photo itself is correctly rotated (see the sketch after this list)
  • Deep Learning
    • Product photo on packaging - guess category based on product picture
    • Container: guess whether it's a bottle, cardboard…
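
For the image-orientation target, Tesseract's orientation and script detection (OSD, page segmentation mode 0) could serve as a first check. This is a rough sketch, assuming a tesseract binary with the osd language data installed; the exact CLI flag spelling and output keys vary between Tesseract versions, and detect_text_orientation() is an illustrative name.

 # Sketch: ask Tesseract for orientation & script detection (OSD) to guess
 # whether an uploaded photo needs to be rotated. Assumes osd.traineddata
 # is installed; output keys differ slightly between Tesseract versions.
 sub detect_text_orientation {
     my ($image) = @_;
     # -psm 0 = OSD only, no text recognition (--psm 0 on newer Tesseract).
     my $osd = `tesseract \Q$image\E stdout -psm 0 2>/dev/null`;
     # "Orientation in degrees: 0" means the text already reads upright;
     # any other value suggests the photo should be rotated before OCR.
     if ($osd =~ /Orientation in degrees:\s*(\d+)/) {
         return $1;
     }
     return undef;    # no text detected or OSD data unavailable
 }
 
 my $orientation = detect_text_orientation('front_photo.jpg');
 print "Text orientation: $orientation degrees\n" if defined $orientation;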

Even just extracting areas would already be valuable: if we can extract logos or patterns, it will be faster for humans to double-check them and turn them into text.