Mistral OCR 3

(mistral.ai)

308 points | by pember 1 day ago

11 comments

  • Tiberium 2 hours ago
    From a tweet: https://x.com/i/status/2001821298109120856

    > can someone help folks at Mistral find more weak baselines to add here? since they can't stomach comparing with SoTA....

    > (in case y'all wanna fix it: Chandra, dots.ocr, olmOCR, MinerU, Monkey OCR, and PaddleOCR are a good start)

    • belval 2 hours ago
      I've worked on document extraction a lot, and while the tweet is too flippant for my taste, it's not wrong. Mistral is comparing itself to non-VLM computer vision services. While not necessarily what everyone needs, those are a very different beast compared to VLM-based extraction, because they give you precise bounding boxes, usually at the cost of broader "document understanding".

      Their failure modes are also vastly different. VLM-based extraction can misread entire sentences or miss entire paragraphs; Sonnet 3 had that issue. Computer vision models will instead make in-word typos.

    • logicprog 28 minutes ago
      I'd want to see a comparison with Qwen 3 VL 235B-A22B, which is IME significantly better than MinerU.
  • pzo 2 hours ago
    There have been so many open source OCR models in the last 3 months that it would be good to compare against those, especially since some are not even 1B params and can be run on edge devices.

    - paddleOCR-VL

    - olmOCR-2

    - chandra

    - dots.ocr

    I kind of miss that there aren't many leaderboards or arenas for OCR and CV, or providers hosting those models. Neglected on both Artificial Analysis and OpenRouter.

    • culi 1 hour ago
      Someone posted a project here about a month ago where they compare models in head-to-head matchups, similar to LMArena:

      https://www.ocrarena.ai/leaderboard

      Hasn't been updated for this Mistral release, but so far Gemini seems to top the leaderboard.

      • andai 1 hour ago
        How can something have a very high ELO but a very low win rate?
        • BlackLotus89 30 minutes ago
          You don't lose much Elo if your opponent is much stronger than you. Draws could in theory play a part as well.
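
          Roughly, with the standard Elo expected-score formula (the K-factor and ratings below are made-up numbers, not whatever ocrarena actually uses):

            # Elo expected score and post-game update; numbers are illustrative only.
            def expected(r_a, r_b):
                return 1 / (1 + 10 ** ((r_b - r_a) / 400))

            def update(r_a, r_b, score, k=32):
                # score: 1 = win, 0.5 = draw, 0 = loss
                return r_a + k * (score - expected(r_a, r_b))

            # Losing to a much stronger opponent costs almost nothing...
            print(update(1000, 1400, 0) - 1000)   # about -2.9
            # ...while the occasional upset win pays out a lot.
            print(update(1000, 1400, 1) - 1000)   # about +29.1

          So a model that keeps getting matched against higher-rated opponents can hold a decent rating even while losing most of its matchups.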
      • jeffbee 1 hour ago
        OCR developers from decades past must be slapping their foreheads now that it seems users will wait a whole minute per page and be happy.
    • pzo 2 hours ago
      What I like about Mistral OCR is that they have simple pricing ($1/1k pages) and an API hosted on their servers. With other OCR models it's hard to compare pricing because they are token-based and you don't know how many tokens an image costs unless you run your own test.

      E.g. with Gemini 3.0 Flash it might seem that pricing increased only slightly compared to Gemini 2.5 Flash, until you test it and see that what used to be 258 input tokens per 384x384 image is now around 3x more.
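
      Back-of-the-envelope, the comparison ends up looking something like this (every number below is an illustrative assumption, not a quoted price):

        # Rough cost per 1k pages: flat per-page OCR pricing vs token-based VLM pricing.
        # All figures are placeholders for illustration; check the actual price sheets.
        PAGES = 1_000

        flat_price_per_page = 0.001              # e.g. $1 per 1k pages
        flat_cost = PAGES * flat_price_per_page

        tiles_per_page = 4                       # hypothetical: 384x384 tiles per page image
        tokens_per_tile = 258                    # the old per-tile token count mentioned above
        usd_per_million_input_tokens = 0.30      # hypothetical input-token price
        token_cost = PAGES * tiles_per_page * tokens_per_tile * usd_per_million_input_tokens / 1_000_000

        print(f"flat:  ${flat_cost:.2f} per 1k pages")
        print(f"token: ${token_cost:.2f} per 1k pages (roughly triple it if tokens-per-tile went up ~3x)")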

      • gunalx 51 minutes ago
        But they doubled the price for this new Mistral OCR 3 model, to $2/1k pages.
    • andai 1 hour ago
      I spent like three hours trying to get one of these running and then gave up. I think the paddleOCR one.

      It took an hour and a half to install 12 gigabytes of pytorch dependencies that can't even run on my device, and then it told me it had some sort of versioning conflict. (I think I was supposed to use UV, but I had run out of steam by that point.)

      Maybe I should have asked Claude to install it for me. I gave Claude root on a $3 VPS, and it seems to enjoy the sysadmin stuff a lot more than I do...

      Incidentally I had a similar experience installing Open WebUI... it installed 12 GB of PyTorch crap. I rage quit, deleted the whole thing, and replicated the functionality I actually needed in 100 lines of HTML... Too bad I can't do that with OCR ;)

  • tecoholic 1 hour ago
    > Mistral OCR 3 is ideal for both high-volume enterprise pipelines and interactive document workflows.

    I don’t know how they can make this statement with a 79% accuracy rate. For any serious use case, this is an unacceptable number.

    I work with scientific journals, and issues like 2.9+0.5 being read as 29+0.5 are something we regularly run into; it means we can never fully trust automated processes and need human verification at every step.

    • MallocVoidstar 1 hour ago
      Where are you seeing 79% accuracy? 79% only appears on the page as a win rate, not an accuracy figure.
      • g947o 1 hour ago
        And I believe the number is 74%, compared to OCR 2.

        What matters is whether this is better than competition/alternatives. Of course nobody is just going to take the output as is. If you do that, that's your problem.

  • hereme888 1 hour ago
    I'm reading worse performance than many OSS offerings like Paddle, MinerU, MonkeyOCR, etc:

    https://www.codesota.com/ocr

  • GZGavinZhao 42 minutes ago
    Does it handle math expressions (those rendered from LaTeX) well? I've been looking for a good OCR model to transcribe my math textbooks into markdown (obviously ignoring the images and figures) with LaTeX as math expressions, and none of the current OCR models work reliably enough.

    EDIT: you can try it yourself for free at https://console.mistral.ai/build/document-ai/ocr-playground once you create a developer account! Fingers crossed to see how well it works for my use case.
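
    If the playground pans out, scripting it via the Python SDK should look roughly like the sketch below (based on the previous Mistral OCR API; the model name and response fields here are my assumptions and may have changed for OCR 3):

      # Rough sketch with the mistralai Python SDK; model name and fields may differ for OCR 3.
      import os
      from mistralai import Mistral

      client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

      resp = client.ocr.process(
          model="mistral-ocr-latest",
          document={
              "type": "document_url",
              "document_url": "https://example.com/calculus-textbook.pdf",  # hypothetical scan
          },
      )

      # Each page comes back as markdown; math should show up as LaTeX inside it.
      for page in resp.pages:
          print(page.markdown)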

    • RagnarD 21 minutes ago
      Please post an update on how well it works for you.
  • singularity2001 23 minutes ago
    No one mentioning possibly the most beautiful CSS effect on the Internet??
  • petcat 2 hours ago
    It seems like Mistral is just chasing around the fringes of what could be useful AI features. Are they just getting outclassed by OAI, Google, and Anthropic?

    It seems like the EU in general should be heavily invested in Mistral's development, but it doesn't seem like they are.

    • IMTDb 9 minutes ago
      > It seems like the EU in general should be heavily invested in Mistral's development, but it doesn't seem like they are

      The EU is extremely invested in Mistral's development: half of the effort is finding ways to tax them (hello Zucman tax), the other half is wondering how to regulate them (hello AI Act).

    • tensor 2 hours ago
      Form processing is vastly more useful than meme generation. When people need to do real work this is the sort of tool they are going to reach for.
      • sbuttgereit 2 hours ago
        Yep. I saw the title and got excited... this is a particular problem area where I think these things can be very effective. There are so many data-entry-class tasks that don't require huge knowledge or judgement, just clear parsing and putting the result into a more machine-digestible form.

        I don't know... this sort of area isn't nearly as sexy as video production or coding (etc.), but reaching a better-than-human performance level seems like it should be easier for these kinds of workloads.

    • bee_rider 1 hour ago
      Following the leaders too closely seems like a bad move, at least until a profitable business model for an AI model training company is discovered. Mistral’s models are pretty good, right? I mean they don’t have all the scaffolding around them that something like chatGPT does, but building all that scaffolding could be wasted effort until a profitable business model is shown.

      Until then, they seem to be able to keep enough talent in the EU to train reasonably good models. The kernel is there, which seems like the attainable goal.

      • qwytw 1 hour ago
        >Mistral’s models are pretty good, right

        Are they? IIRC their best model is still worse than the gpt-oss-120B?

    • BoredPositron 2 hours ago
      I guess it's better to do the same stuff everyone else is doing?
    • VWWHFSfQ 2 hours ago
      I think there is a lot of broad support, but they're just kind of hamstrung by EU regulation on AI development at this stage. I think the end game will ultimately be getting acquired by an American company, and then relocating.
      • tensor 2 hours ago
        I hope the EU blocks any acquisitions by American companies. The west needs to start protecting its strategic assets.
    • lawlessone 2 hours ago
      >It seems like the EU in general should be heavily invested

      Maybe. I think it will be to our benefit when the bubble pops that we are not heavily invested; no harm in investing a little.

  • breadislove 1 hour ago
    I just gave it a quick spin on my fav documents. Quick check:

    - table entries hallucinated

    - tables messed up (tables merged, forgot rows)

    - forgot to parse some text passages

    If you are doing something serious, I would not use it.

    • ipsum2 1 hour ago
      You might want to mention that you are a competitor.
      • ghjv 1 hour ago
        I thought you were being flip or just assuming that, but I checked their profile and you are right. I agree that this should be disclosed in their comment.
  • singularity2001 21 minutes ago
    Not open source / free weights, right?
  • film42 2 hours ago
    Is OpenRouter still sending all OCR jobs to Mistral? I wonder if they're trying to keep that spot. Seems like Mistral and Google are the best at OCR right now, with Google leading Mistral by a fair bit.
    • numlocked 1 hour ago
      (I work at OpenRouter) If you send a PDF to our API we will:

      1. Use native PDF parsing if the model supports it

      2. Use this Mistral OCR model (we updated to this version yesterday)

      3. UNLESS you override the "engine" param to use an alternate. We support a JS-based (non-LLM) parser as well [0]

      So yes, in practice a lot of OCR jobs go to Mistral, but not all of them.

      Would love to hear requests for other parsers if folks have them!

      [0] https://openrouter.ai/docs/guides/overview/multimodal/pdfs#p...
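
      For anyone curious what the override looks like, a rough sketch is below (field names and engine values are from memory of the docs in [0], so double-check them before relying on this):

        # Rough sketch of overriding the PDF parsing engine on OpenRouter's chat completions API.
        # "plugins" / "file-parser" / engine names are recalled from the docs in [0]; verify them.
        import base64, os, requests

        pdf_b64 = base64.b64encode(open("report.pdf", "rb").read()).decode()

        resp = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={
                "model": "google/gemini-2.5-flash",   # hypothetical model choice
                "messages": [{
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "Extract this PDF to markdown."},
                        {"type": "file", "file": {
                            "filename": "report.pdf",
                            "file_data": f"data:application/pdf;base64,{pdf_b64}",
                        }},
                    ],
                }],
                # e.g. "pdf-text" (the JS-based parser), "mistral-ocr", or "native"
                "plugins": [{"id": "file-parser", "pdf": {"engine": "pdf-text"}}],
            },
        )
        print(resp.json()["choices"][0]["message"]["content"])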

  • vasco 1 hour ago
    Gave it a birth registry from a Portuguese locality from 1755, which my dad and I often decipher to figure out genealogy, and it did a terrible job.

    Regular Gemini Thinking can actually get 70-80% of the documents correct, except for lots of mistakes on given names. ChatGPT maybe understands like 50-60%.

    This Mistral model butchered the whole text, literally not a word was usable. To the point I think I'm doing something wrong.

    The test document: https://files.fm/u/3hduyg65a5

    • observationist 25 minutes ago
      Just gave it a shot with Grok 4.1 thinking - do you have the ground truth translation to compare? I've tried 4 different times, with slight tweaks adding information from your description, and it's given me a range of interpretations. It'd be nice to see if any of them got close - a couple were more like pulpy telenovela plots, lol.

      The model might need tuning in order to be effective - this is normal for releases of image-mode models, and after a couple of days there will be properly set up endpoints to test from, so it might be much better than you think. Or it could just be really bad with mid-18th-century Portuguese cursive.

    • zzleeper 1 hour ago
      Oh god, I'm sure I wouldn't come close to 50%; that's so hard to read
      • vasco 1 hour ago
        It's tough but my dad is quite good at it. He has books of common abbreviations and agglutinations from different centuries. After you get used to it it's faster and very fun.

        We were mind-blown by how good Gemini was at it.

        • ilamont 54 minutes ago
          I am too. Gemini 3.0 fast got old scrawled diary entries in English from 100+ years ago about 95% right. It also added historical context when I prefaced the images with the identity of the writer, such as summaries of an old military unit's post-WW1 history in Europe that it got from a very obscure U.S. Army archive.

          Huge timesaver.