OCR any PDB

Drop a photo, scan, or PDF (up to 2.5GB). We extract the text right in your browser — free, unlimited, and your files never leave your device.

Private and secure

Everything happens in your browser. Your files never touch our servers.

Blazing fast

No uploading, no waiting. Convert the moment you drop a file.

Actually free

No account required. No hidden costs. No file size tricks.

Optical Character Recognition (OCR) turns images of text—scans, smartphone photos, PDFs—into machine-readable strings and, increasingly, structured data. Modern OCR is a pipeline that cleans an image, finds text, reads it, and exports rich metadata so downstream systems can search, index, or extract fields. Two widely used output standards are hOCR, an HTML microformat for text and layout, and ALTO XML, a library/archives-oriented schema; both preserve positions, reading order, and other layout cues and are supported by popular engines like Tesseract.

A quick tour of the pipeline

Preprocessing. OCR quality starts with image cleanup: grayscale conversion, denoising, thresholding (binarization), and deskewing. Canonical OpenCV tutorials cover global, adaptive, and Otsu thresholding—staples for documents with nonuniform lighting or bimodal histograms. When illumination varies within a page (think phone snaps), adaptive methods often outperform a single global threshold; Otsu automatically picks a threshold by analyzing the histogram. Tilt correction is equally important: Hough-based deskewing (Hough Line Transform) paired with Otsu binarization is a common and effective recipe in production preprocessing pipelines.
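
To make the recipe concrete, here is a minimal OpenCV sketch combining Otsu and adaptive thresholding with a Hough-based skew estimate. The file path and the numeric parameters (block size, Canny thresholds, line-length limits) are illustrative placeholders to tune per corpus.

```python
import cv2
import numpy as np

# Load and convert to grayscale (the path is a placeholder).
img = cv2.imread("page.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu picks one global threshold from the histogram; adaptive
# thresholding recomputes it per neighborhood, which helps when
# illumination varies across the page (e.g., phone snaps).
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
adaptive = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,
    blockSize=31, C=10)

# Estimate skew from the dominant near-horizontal line angles.
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=gray.shape[1] // 3, maxLineGap=20)
angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if abs(angle) < 45:  # ignore vertical rules and noise
            angles.append(angle)
skew = float(np.median(angles)) if angles else 0.0

# Rotate to correct the tilt (the sign convention may need flipping
# depending on how your pages are skewed).
h, w = gray.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
deskewed = cv2.warpAffine(otsu, M, (w, h), borderValue=255)
```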

Detection vs. recognition. OCR is typically split into text detection (where is the text?) and text recognition (what does it say?). In natural scenes and many scans, fully convolutional detectors like EAST efficiently predict word- or line-level quadrilaterals without heavy proposal stages and are implemented in common toolkits (e.g., OpenCV’s text detection tutorial). On complex pages (newspapers, forms, books), segmentation of lines/regions and reading order inference matter: Kraken implements traditional zone/line segmentation and neural baseline segmentation, with explicit support for different scripts and directions (LTR/RTL/vertical).
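
For scene-style detection, a short sketch using OpenCV’s high-level EAST wrapper (TextDetectionModel_EAST, available in recent OpenCV 4.x releases). It assumes the pretrained frozen_east_text_detection.pb graph referenced by the OpenCV tutorial has been downloaded; the thresholds are illustrative.

```python
import cv2

# Assumes the pretrained EAST graph from the OpenCV tutorial is on disk.
model = cv2.dnn_TextDetectionModel_EAST("frozen_east_text_detection.pb")
model.setConfidenceThreshold(0.5)
model.setNMSThreshold(0.4)
# EAST expects a fixed input size with the ImageNet mean subtracted.
model.setInputParams(1.0, (320, 320), (123.68, 116.78, 103.94), True)

img = cv2.imread("scene.jpg")            # placeholder path
quads, confidences = model.detect(img)   # one quadrilateral per word/line
for quad in quads:
    cv2.polylines(img, [quad], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite("detected.jpg", img)
```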

Recognition models. The classic open-source workhorse Tesseract (developed at HP, later open-sourced and maintained with Google’s support) evolved from a character classifier into an LSTM-based sequence recognizer and can emit searchable PDFs, hOCR/ALTO-friendly outputs, and more from the CLI. Modern recognizers rely on sequence modeling without pre-segmented characters. Connectionist Temporal Classification (CTC) remains foundational, learning alignments between input feature sequences and output label strings; it’s widely used in handwriting and scene-text pipelines.
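
Because CTC needs only sequence lengths, not per-character segmentation, training data is cheap to label. A toy PyTorch sketch of the tensor shapes involved (all dimensions are made up, and random values stand in for model output):

```python
import torch
import torch.nn as nn

# Toy dimensions: T feature frames from the image, N batch items,
# C character classes, with index 0 reserved for the CTC blank.
T, N, C = 50, 4, 80
logits = torch.randn(T, N, C, requires_grad=True)  # stand-in for a model
log_probs = logits.log_softmax(2)

# Padded target strings (N, S) plus per-item lengths.
targets = torch.randint(1, C, (N, 12), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

# CTC marginalizes over every alignment between the T frames and each
# target string, so no character-level segmentation is ever required.
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```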

In the last few years, Transformers reshaped OCR. TrOCR uses a vision Transformer encoder plus a text Transformer decoder, trained on large synthetic corpora and then fine-tuned on real data, with strong performance across printed, handwritten, and scene-text benchmarks (see also Hugging Face docs). In parallel, some systems sidestep OCR for downstream understanding: Donut (Document Understanding Transformer) is an OCR-free encoder-decoder that directly outputs structured answers (like key-value JSON) from document images (repo, model card), avoiding error accumulation when a separate OCR step feeds an IE system.
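
Inference with a public TrOCR checkpoint is a few lines with the Hugging Face transformers API; note that TrOCR reads one cropped text line at a time, so a detector or line segmenter runs first. A minimal sketch, using the printed-text model card:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

# A single cropped text-line image (placeholder path).
image = Image.open("line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```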

Engines and libraries

If you want batteries-included text reading across many scripts, EasyOCR offers a simple API with 80+ language models, returning boxes, text, and confidences—handy for prototypes and non-Latin scripts. For historical documents, Kraken shines with baseline segmentation and script-aware reading order; for flexible line-level training, Calamari builds on the Ocropy lineage (Ocropy) with (multi-)LSTM+CTC recognizers and a CLI for fine-tuning custom models.
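
A minimal EasyOCR sketch (the image path is a placeholder; language models are downloaded on first use):

```python
import easyocr

reader = easyocr.Reader(["en"])           # pick language codes per document
results = reader.readtext("receipt.jpg")  # placeholder path

# Each result is (corner points of the box, recognized text, confidence).
for box, text, conf in results:
    print(f"{conf:.2f}  {text}")
```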

Datasets and benchmarks

Generalization hinges on data. For handwriting, the IAM Handwriting Database provides writer-diverse English sentences for training and evaluation; it’s a long-standing reference set for line and word recognition. For scene text, COCO-Text layered extensive annotations over MS-COCO, with labels for printed/handwritten, legible/illegible, script, and full transcriptions (see also the original project page). The field also relies heavily on synthetic pretraining: SynthText in the Wild renders text into photographs with realistic geometry and lighting, providing huge volumes of data to pretrain detectors and recognizers (reference code & data).

Competitions under ICDAR’s Robust Reading umbrella keep evaluation grounded. Recent tasks emphasize end-to-end detection/reading and include linking words into phrases, with official code reporting precision/recall/F-score, intersection-over-union (IoU), and character-level edit-distance metrics—mirroring what practitioners should track.

Output formats and downstream use

OCR rarely ends at plain text. Archives and digital libraries prefer ALTO XML because it encodes the physical layout (blocks/lines/words with coordinates) alongside content, and it pairs well with METS packaging. The hOCR microformat, by contrast, embeds the same idea into HTML/CSS using classes like ocr_line and ocrx_word, making it easy to display, edit, and transform with web tooling. Tesseract exposes both—e.g., generating hOCR or searchable PDFs directly from the CLI (PDF output guide); Python wrappers like pytesseract add convenience. Converters exist to translate between hOCR and ALTO when repositories have fixed ingestion standards—see this curated list of OCR file-format tools.
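
As a sketch of that convenience, the snippet below exports hOCR, ALTO XML, and a searchable PDF through pytesseract; the image path is a placeholder, and image_to_alto_xml requires a reasonably recent Tesseract underneath.

```python
import pytesseract
from PIL import Image

img = Image.open("page.png")  # placeholder path

# hOCR: HTML with ocr_line / ocrx_word classes carrying bounding boxes.
hocr = pytesseract.image_to_pdf_or_hocr(img, extension="hocr")

# ALTO XML: layout-preserving schema favored by libraries and archives.
alto = pytesseract.image_to_alto_xml(img)

# Searchable PDF: invisible text layer placed over the original scan.
pdf = pytesseract.image_to_pdf_or_hocr(img, extension="pdf")
with open("page.pdf", "wb") as f:
    f.write(pdf)
```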

Practical guidance

  • Start with data & cleanliness. If your images are phone photos or mixed-quality scans, invest in thresholding (adaptive & Otsu) and deskew (Hough) before any model tuning. You’ll often gain more from a robust preprocessing recipe than from swapping recognizers.
  • Choose the right detector. For scanned pages with regular columns, a page segmenter (zones → lines) may suffice; for natural images, single-shot detectors like EAST are strong baselines and plug into many toolkits (OpenCV example).
  • Pick a recognizer that matches your text. For printed Latin, Tesseract (LSTM/OEM) is sturdy and fast; for multi-script or quick prototypes, EasyOCR is productive; for handwriting or historical typefaces, consider Kraken or Calamari and plan to fine-tune. If you need tight coupling to document understanding (key-value extraction, VQA), evaluate TrOCR (OCR) versus Donut (OCR-free) on your schema—Donut may remove a whole integration step.
  • Measure what matters. For end-to-end systems, report detection F-score and recognition CER/WER (both based on Levenshtein edit distance; see CTC); for layout-heavy tasks, track IoU/tightness and character-level normalized edit distance as in ICDAR RRC evaluation kits. A minimal CER/WER sketch follows this list.
  • Export rich outputs. Prefer hOCR or ALTO (or both) so you keep coordinates and reading order—vital for search hit highlighting, table/field extraction, and provenance. Tesseract’s CLI and pytesseract make this a one-liner.
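
A minimal, dependency-free sketch of those edit-distance metrics (CER over characters, WER over whitespace-split tokens):

```python
def levenshtein(a, b):
    """Edit distance between two sequences via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    return levenshtein(ref, hyp) / max(len(ref), 1)

print(cer("kitten", "sitting"))  # 3 edits over 6 characters = 0.5
```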

Looking ahead

The strongest trend is convergence: detection, recognition, language modeling, and even task-specific decoding are merging into unified Transformer stacks. Pretraining on large synthetic corpora remains a force multiplier. OCR-free models will compete aggressively wherever the target is structured outputs rather than verbatim transcripts. Expect hybrid deployments too: a lightweight detector plus a TrOCR-style recognizer for long-form text, and a Donut-style model for forms and receipts.

Further reading & tools

Tesseract (GitHub) · Tesseract docs · hOCR spec · ALTO background · EAST detector · OpenCV text detection · TrOCR · Donut · COCO-Text · SynthText · Kraken · Calamari OCR · ICDAR RRC · pytesseract · IAM handwriting · OCR file-format tools · EasyOCR

Frequently Asked Questions

What is OCR?

Optical Character Recognition (OCR) is a technology used to convert different types of documents, such as scanned paper documents, PDF files or images captured by a digital camera, into editable and searchable data.

How does OCR work?

OCR works by analyzing an input image or document. Classic engines segment the image into individual characters and compare each one against a database of character shapes using pattern or feature recognition; modern engines detect text regions and read whole words or lines with neural sequence models.

What are some practical applications of OCR?

OCR is used in a variety of sectors and applications, including digitizing printed documents, enabling text-to-speech services, automating data entry processes, and assisting visually impaired users to better interact with text.

Is OCR always 100% accurate?

While OCR technology has advanced greatly, it isn't infallible. Accuracy varies depending on the quality of the original document and the specific OCR software being used.

Can OCR recognize handwriting?

Although OCR is primarily designed for printed text, some advanced OCR systems are also able to recognize clear, consistent handwriting. However, typically handwriting recognition is less accurate because of the wide variation in individual writing styles.

Can OCR handle multiple languages?

Yes, many OCR software systems can recognize multiple languages. However, it's important to ensure that the specific language is supported by the software you're using.

What's the difference between OCR and ICR?

OCR stands for Optical Character Recognition and is used for recognizing printed text, while ICR, or Intelligent Character Recognition, is more advanced and is used for recognizing hand-written text.

Does OCR work with any font and text size?

OCR works best with clear, easy-to-read fonts and standard text sizes. While it can work with various fonts and sizes, accuracy tends to decrease when dealing with unusual fonts or very small text sizes.

What are the limitations of OCR technology?

OCR can struggle with low-resolution documents, complex fonts, poorly printed texts, handwriting, and documents with backgrounds that interfere with the text. Also, while it can work with many languages, it may not cover every language perfectly.

Can OCR scan colored text or colored backgrounds?

Yes, OCR can scan colored text and backgrounds, although it's generally more effective with high-contrast color combinations, such as black text on a white background. The accuracy might decrease when text and background colors lack sufficient contrast.

What is the PDB format?

Protein Data Bank Format

The PDB (Protein Data Bank) format is not a traditional image format like JPEG or PNG, but rather a text-based data format that stores three-dimensional structural information about proteins, nucleic acids, and complex assemblies. (The .pdb extension is also used by the unrelated Palm Database ImageViewer image format listed under supported formats below.) The PDB format is a cornerstone of bioinformatics and structural biology, as it allows scientists to visualize, share, and analyze the molecular structures of biological macromolecules. The PDB archive is managed by the Worldwide Protein Data Bank (wwPDB), which ensures that the PDB data are freely and publicly available to the global community.

The PDB format was first developed in the early 1970s to serve the growing need for a standardized method of representing molecular structures. Since then, it has evolved to accommodate a wide range of molecular data. The format is text-based and can be read by humans as well as processed by computers. It consists of a series of records, each of which starts with a six-character line identifier that specifies the type of information contained in that record. The records provide a detailed description of the structure, including atomic coordinates, connectivity, and experimental data.

A typical PDB file begins with a header section, which includes metadata about the protein or nucleic acid structure. This section contains records such as TITLE, which gives a brief description of the structure; COMPND, which lists the chemical components; and SOURCE, which describes the origin of the biological molecule. The header also includes the AUTHOR record, which lists the names of the people who determined the structure, and the JOURNAL record, which provides a citation to the literature where the structure was first described.

Following the header, the PDB file contains the primary sequence information of the macromolecule in the SEQRES records. These records list the sequence of residues (amino acids for proteins, nucleotides for nucleic acids) as they appear in the chain. This information is crucial for understanding the relationship between the sequence of a molecule and its three-dimensional structure.

The ATOM records are arguably the most important part of a PDB file, as they contain the coordinates for each atom in the molecule. Each ATOM record includes the atom serial number, atom name, residue name, chain identifier, residue sequence number, and the x, y, and z Cartesian coordinates of the atom in angstroms. The ATOM records allow for the reconstruction of the three-dimensional structure of the molecule, which can be visualized using specialized software such as PyMOL, Chimera, or VMD.
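
As an illustration of that fixed-column layout, here is a minimal Python sketch that pulls names and coordinates out of ATOM and HETATM records; the file name is a placeholder, and production code should prefer a dedicated structural-biology parser.

```python
def parse_atoms(path):
    """Yield (atom, residue, chain, x, y, z) from ATOM/HETATM records.

    PDB is a fixed-column format; the slices below follow the published
    column layout (1-indexed in the spec, 0-indexed here).
    """
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                yield (line[12:16].strip(),   # atom name    (cols 13-16)
                       line[17:20].strip(),   # residue      (cols 18-20)
                       line[21],              # chain ID     (col 22)
                       float(line[30:38]),    # x, angstroms (cols 31-38)
                       float(line[38:46]),    # y            (cols 39-46)
                       float(line[46:54]))    # z            (cols 47-54)

# Example: geometric center of all atoms (placeholder file name).
atoms = list(parse_atoms("1abc.pdb"))
n = len(atoms)
center = tuple(sum(a[i] for a in atoms) / n for i in (3, 4, 5))
print(f"{n} atoms, center = {center}")
```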

In addition to the ATOM records, there are HETATM records for atoms that are part of non-standard residues or ligands, such as metal ions, water molecules, or other small molecules bound to the protein or nucleic acid. These records are formatted similarly to ATOM records but are distinguished to facilitate the identification of non-macromolecular components within the structure.

Connectivity information is provided in the CONECT records, which list the bonds between atoms. These records are not mandatory, as most molecular visualization and analysis software can infer connectivity based on the distances between atoms. However, they are crucial for defining unusual bonds or for structures with metal coordination complexes, where the bonding may not be obvious from the atomic coordinates alone.

The PDB format also includes records for specifying secondary structure elements, such as alpha helices and beta sheets. The HELIX and SHEET records identify these structures and provide information about their location within the sequence. This information helps in understanding the folding patterns of the macromolecule and is essential for comparative studies and modeling.

Experimental data and methods used to determine the structure are documented in the PDB file as well. Records such as EXPDTA describe the experimental technique (e.g., X-ray crystallography, NMR spectroscopy), while the REMARK records can contain a wide variety of comments and annotations about the structure, including details about data collection, resolution, and refinement statistics.

The END record signifies the end of the PDB file. It is important to note that while the PDB format is widely used, it has some limitations due to its age and the fixed column width format, which can lead to issues with modern structures that have a large number of atoms or require greater precision. To address these limitations, an updated format called mmCIF (macromolecular Crystallographic Information File) has been developed, which offers a more flexible and extensible framework for representing macromolecular structures.

Despite the development of the mmCIF format, the PDB format remains popular due to its simplicity and the vast number of software tools that support it. Researchers often convert between PDB and mmCIF formats depending on their needs and the tools they are using. The PDB format's longevity is a testament to its fundamental role in the field of structural biology and its effectiveness in conveying complex structural information in a relatively straightforward manner.

To work with PDB files, scientists use a variety of computational tools. Molecular visualization software allows users to load PDB files and view the structures in three dimensions, rotate them, zoom in and out, and apply different rendering styles to better understand the spatial arrangement of atoms. These tools often provide additional functionalities, such as measuring distances, angles, and dihedrals, simulating molecular dynamics, and analyzing interactions within the structure or with potential ligands.

The PDB format also plays a crucial role in computational biology and drug discovery. Structural information from PDB files is used in homology modeling, where the known structure of a related protein is used to predict the structure of a protein of interest. In structure-based drug design, PDB files of target proteins are used to screen and optimize potential drug compounds, which can then be synthesized and tested in the lab.

The PDB format's impact extends beyond individual research projects. The Protein Data Bank itself is a repository that currently contains over 150,000 structures, and it continues to grow as new structures are determined and deposited. This database is an invaluable resource for education, allowing students to explore and learn about the structures of biological macromolecules. It also serves as a historical record of the progress in structural biology over the past decades.

In conclusion, the PDB format is a critical tool in the field of structural biology, providing a means to store, share, and analyze the three-dimensional structures of biological macromolecules. While it has some limitations, its widespread adoption and the development of a rich ecosystem of tools for its use ensure that it will remain a key format in the foreseeable future. As the field of structural biology continues to evolve, the PDB format will likely be supplemented by more advanced formats like mmCIF, but its legacy will endure as the foundation upon which modern structural biology is built.

Supported formats

AAI.aai

AAI Dune image

AI.ai

Adobe Illustrator CS2

AVIF.avif

AV1 Image File Format

BAYER.bayer

Raw Bayer Image

BMP.bmp

Microsoft Windows bitmap image

CIN.cin

Cineon Image File

CLIP.clip

Image Clip Mask

CMYK.cmyk

Raw cyan, magenta, yellow, and black samples

CUR.cur

Microsoft icon

DCX.dcx

ZSoft IBM PC multi-page Paintbrush

DDS.dds

Microsoft DirectDraw Surface

DPX.dpx

SMPTE 268M-2003 (DPX 2.0) image

DXT1.dxt1

Microsoft DirectDraw Surface

EPDF.epdf

Encapsulated Portable Document Format

EPI.epi

Adobe Encapsulated PostScript Interchange format

EPS.eps

Adobe Encapsulated PostScript

EPSF.epsf

Adobe Encapsulated PostScript

EPSI.epsi

Adobe Encapsulated PostScript Interchange format

EPT.ept

Encapsulated PostScript with TIFF preview

EPT2.ept2

Encapsulated PostScript Level II with TIFF preview

EXR.exr

High dynamic-range (HDR) image

FF.ff

Farbfeld

FITS.fits

Flexible Image Transport System

GIF.gif

CompuServe graphics interchange format

HDR.hdr

High Dynamic Range image

HEIC.heic

High Efficiency Image Container

HRZ.hrz

Slow Scan TeleVision

ICO.ico

Microsoft icon

ICON.icon

Microsoft icon

J2C.j2c

JPEG-2000 codestream

J2K.j2k

JPEG-2000 codestream

JNG.jng

JPEG Network Graphics

JP2.jp2

JPEG-2000 File Format Syntax

JPE.jpe

Joint Photographic Experts Group JFIF format

JPEG.jpeg

Joint Photographic Experts Group JFIF format

JPG.jpg

Joint Photographic Experts Group JFIF format

JPM.jpm

JPEG-2000 File Format Syntax

JPS.jps

Joint Photographic Experts Group JPS format

JPT.jpt

JPEG-2000 File Format Syntax

JXL.jxl

JPEG XL image

MAP.map

Multi-resolution Seamless Image Database (MrSID)

MAT.mat

MATLAB level 5 image format

PAL.pal

Palm pixmap

PALM.palm

Palm pixmap

PAM.pam

Common 2-dimensional bitmap format

PBM.pbm

Portable bitmap format (black and white)

PCD.pcd

Photo CD

PCT.pct

Apple Macintosh QuickDraw/PICT

PCX.pcx

ZSoft IBM PC Paintbrush

PDB.pdb

Palm Database ImageViewer Format

PDF.pdf

Portable Document Format

PDFA.pdfa

Portable Document Archive Format

PFM.pfm

Portable float format

PGM.pgm

Portable graymap format (gray scale)

PGX.pgx

JPEG 2000 uncompressed format

PICT.pict

Apple Macintosh QuickDraw/PICT

PJPEG.pjpeg

Joint Photographic Experts Group JFIF format

PNG.png

Portable Network Graphics

PNG00.png00

PNG inheriting bit-depth, color-type from original image

PNG24.png24

Opaque or binary transparent 24-bit RGB (zlib 1.2.11)

PNG32.png32

Opaque or binary transparent 32-bit RGBA

PNG48.png48

Opaque or binary transparent 48-bit RGB

PNG64.png64

Opaque or binary transparent 64-bit RGBA

PNG8.png8

Opaque or binary transparent 8-bit indexed

PNM.pnm

Portable anymap

PPM.ppm

Portable pixmap format (color)

PS.ps

Adobe PostScript file

PSB.psb

Adobe Large Document Format

PSD.psd

Adobe Photoshop bitmap

RGB.rgb

Raw red, green, and blue samples

RGBA.rgba

Raw red, green, blue, and alpha samples

RGBO.rgbo

Raw red, green, blue, and opacity samples

SIX.six

DEC SIXEL Graphics Format

SUN.sun

Sun Rasterfile

SVG.svg

Scalable Vector Graphics

TIFF.tiff

Tagged Image File Format

VDA.vda

Truevision Targa image

VIPS.vips

VIPS image

WBMP.wbmp

Wireless Bitmap (level 0) image

WEBP.webp

WebP Image Format

YUV.yuv

CCIR 601 4:1:1 or 4:2:2

Frequently asked questions

How does this work?

This converter runs entirely in your browser. When you select a file, it is read into memory and converted to the selected format. You can then download the converted file.

How long does it take to convert a file?

Conversions start instantly, and most files are converted in under a second. Larger files may take longer.

What happens to my files?

Your files are never uploaded to our servers. They are converted in your browser, and the converted file is then downloaded. We never see your files.

What file types can I convert?

We support converting between all image formats, including JPEG, PNG, GIF, WebP, SVG, BMP, TIFF, and more.

How much does this cost?

This converter is completely free, and will always be free. Because it runs in your browser, we don't have to pay for servers, so we don't need to charge you.

Can I convert multiple files at once?

Yes! You can convert as many files as you want at once. Just select multiple files when you add them.