OCR any JPS

Drop a photo, scan, or PDF (up to 2.5GB). We extract the text right in your browser — free, unlimited, and your files never leave your device.

Private and secure

Everything happens in your browser. Your files never touch our servers.

Blazing fast

No uploading, no waiting. Convert the moment you drop a file.

Actually free

No account required. No hidden costs. No file size tricks.

Optical Character Recognition (OCR) turns images of text—scans, smartphone photos, PDFs—into machine-readable strings and, increasingly, structured data. Modern OCR is a pipeline that cleans an image, finds text, reads it, and exports rich metadata so downstream systems can search, index, or extract fields. Two widely used output standards are hOCR, an HTML microformat for text and layout, and ALTO XML, a library/archives-oriented schema; both preserve positions, reading order, and other layout cues and are supported by popular engines like Tesseract.

A quick tour of the pipeline

Preprocessing. OCR quality starts with image cleanup: grayscale conversion, denoising, thresholding (binarization), and deskewing. Canonical OpenCV tutorials cover global, adaptive and Otsu thresholding—staples for documents with nonuniform lighting or bimodal histograms. When illumination varies within a page (think phone snaps), adaptive methods often outperform a single global threshold; Otsu automatically picks a threshold by analyzing the histogram. Tilt correction is equally important: Hough-based deskewing (Hough Line Transform) paired with Otsu binarization is a common and effective recipe in production preprocessing pipelines.
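For concreteness, here is a minimal preprocessing sketch using OpenCV: grayscale conversion, Otsu and adaptive thresholding, and a Hough-based deskew. The file name and parameter values (denoising strength, block size, Canny thresholds) are illustrative rather than prescriptive.

    import cv2
    import numpy as np

    img = cv2.imread("page.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.fastNlMeansDenoising(gray, None, 10)

    # Otsu picks one global threshold from the histogram; the adaptive variant
    # computes a local threshold per neighborhood and copes with uneven lighting.
    _, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    adaptive = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY, 31, 10)

    # Estimate skew from dominant line angles (probabilistic Hough) and rotate.
    edges = cv2.Canny(otsu, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                            minLineLength=gray.shape[1] // 4, maxLineGap=20)
    angles = ([np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in lines[:, 0]]
              if lines is not None else [0.0])
    angle = float(np.median(angles))
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    deskewed = cv2.warpAffine(otsu, M, (w, h), flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)
    cv2.imwrite("page_clean.png", deskewed)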

Detection vs. recognition. OCR is typically split into text detection (where is the text?) and text recognition (what does it say?). In natural scenes and many scans, fully convolutional detectors like EAST efficiently predict word- or line-level quadrilaterals without heavy proposal stages and are implemented in common toolkits (e.g., OpenCV’s text detection tutorial). On complex pages (newspapers, forms, books), segmentation of lines/regions and reading order inference matter: Kraken implements traditional zone/line segmentation and neural baseline segmentation, with explicit support for different scripts and directions (LTR/RTL/vertical).
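As a sketch of the detection step, OpenCV (4.5+) wraps EAST behind a small model API in its dnn module; the path to the public frozen_east_text_detection.pb weights, the input image, and the thresholds below are assumptions you would adjust for your data.

    import cv2

    detector = cv2.dnn_TextDetectionModel_EAST("frozen_east_text_detection.pb")
    detector.setConfidenceThreshold(0.5)
    detector.setNMSThreshold(0.4)
    # EAST expects a fixed input size (a multiple of 32) and mean subtraction.
    detector.setInputParams(scale=1.0, size=(320, 320),
                            mean=(123.68, 116.78, 103.94), swapRB=True)

    image = cv2.imread("storefront.jpg")
    quads, confidences = detector.detect(image)   # one 4-point quadrilateral per word
    for quad, conf in zip(quads, confidences):
        print(conf, quad.tolist())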

Recognition models. The classic open-source workhorse Tesseract (originally developed at HP, later maintained by Google) evolved from a character classifier into an LSTM-based sequence recognizer and can emit searchable PDFs, hOCR/ALTO-friendly outputs, and more from the CLI. Modern recognizers rely on sequence modeling without pre-segmented characters. Connectionist Temporal Classification (CTC) remains foundational, learning alignments between input feature sequences and output label strings; it’s widely used in handwriting and scene-text pipelines.
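To make the CTC idea concrete, here is a toy loss computation in PyTorch; the random tensors stand in for the per-frame features a CNN/LSTM recognizer would produce, so the shapes and the blank index are the only things this sketch really claims.

    import torch
    import torch.nn as nn

    T, N, C = 50, 4, 28   # time steps, batch size, classes (27 symbols + blank at index 0)

    # Per-frame class log-probabilities; a real model would emit these from image features.
    log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
    targets = torch.randint(1, C, (N, 12), dtype=torch.long)        # label strings, no blanks
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), 12, dtype=torch.long)

    ctc = nn.CTCLoss(blank=0)
    loss = ctc(log_probs, targets, input_lengths, target_lengths)   # marginalizes over alignments
    loss.backward()                                                 # gradients reach the recognizer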

In the last few years, Transformers reshaped OCR. TrOCR uses a vision Transformer encoder plus a text Transformer decoder, trained on large synthetic corpora then fine-tuned on real data, with strong performance across printed, handwritten and scene-text benchmarks (see also Hugging Face docs). In parallel, some systems sidestep OCR for downstream understanding: Donut (Document Understanding Transformer) is an OCR-free encoder-decoder that directly outputs structured answers (like key-value JSON) from document images (repo, model card), avoiding error accumulation when a separate OCR step feeds an IE system.
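A minimal TrOCR inference sketch with the Hugging Face transformers library, using the published microsoft/trocr-base-printed checkpoint on a single cropped text line (the file name is a placeholder; swap in the handwritten checkpoint for handwriting):

    from transformers import TrOCRProcessor, VisionEncoderDecoderModel
    from PIL import Image

    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
    model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")

    image = Image.open("line.png").convert("RGB")             # one cropped text line
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)              # autoregressive decoding
    text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
    print(text)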

Engines and libraries

If you want batteries-included text reading across many scripts, EasyOCR offers a simple API with 80+ language models, returning boxes, text, and confidences—handy for prototypes and non-Latin scripts. For historical documents, Kraken shines with baseline segmentation and script-aware reading order; for flexible line-level training, Calamari builds on the Ocropy lineage (Ocropy) with (multi-)LSTM+CTC recognizers and a CLI for fine-tuning custom models.
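A quick EasyOCR sketch (the language codes and file name are placeholders; the first call downloads the detection and recognition models):

    import easyocr

    reader = easyocr.Reader(["en"])            # add other language codes as needed
    results = reader.readtext("receipt.jpg")   # list of (box, text, confidence) tuples
    for box, text, conf in results:
        print(f"{conf:.2f}  {text}")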

Datasets and benchmarks

Generalization hinges on data. For handwriting, the IAM Handwriting Database provides writer-diverse English sentences for training and evaluation; it’s a long-standing reference set for line and word recognition. For scene text, COCO-Text layered extensive annotations over MS-COCO, with labels for printed/handwritten, legible/illegible, script, and full transcriptions (see also the original project page). The field also relies heavily on synthetic pretraining: SynthText in the Wild renders text into photographs with realistic geometry and lighting, providing huge volumes of data to pretrain detectors and recognizers (reference code & data).

Competitions under ICDAR’s Robust Reading umbrella keep evaluation grounded. Recent tasks emphasize end-to-end detection/reading and include linking words into phrases, with official code reporting precision/recall/F-score, intersection-over-union (IoU), and character-level edit-distance metrics—mirroring what practitioners should track.
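A minimal reference implementation of the character error rate (edit distance divided by reference length), useful for sanity-checking whatever evaluation kit you adopt:

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                   # deletion
                               cur[j - 1] + 1,                # insertion
                               prev[j - 1] + (ca != cb)))     # substitution
            prev = cur
        return prev[-1]

    def cer(hypothesis: str, reference: str) -> float:
        return edit_distance(hypothesis, reference) / max(len(reference), 1)

    print(cer("0CR output", "OCR output"))   # 0.1: one substituted character out of ten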

Output formats and downstream use

OCR rarely ends at plain text. Archives and digital libraries prefer ALTO XML because it encodes the physical layout (blocks/lines/words with coordinates) alongside content, and it pairs well with METS packaging. The hOCR microformat, by contrast, embeds the same idea into HTML/CSS using classes like ocr_line and ocrx_word, making it easy to display, edit, and transform with web tooling. Tesseract exposes both—e.g., generating hOCR or searchable PDFs directly from the CLI (PDF output guide); Python wrappers like pytesseract add convenience. Converters exist to translate between hOCR and ALTO when repositories have fixed ingestion standards—see this curated list of OCR file-format tools.
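A sketch of the export step with pytesseract (this assumes a local Tesseract install on PATH; ALTO output needs Tesseract 4.1 or newer, and the file names are placeholders):

    import pytesseract
    from PIL import Image

    img = Image.open("scan.png")

    hocr = pytesseract.image_to_pdf_or_hocr(img, extension="hocr")   # hOCR (HTML) as bytes
    alto = pytesseract.image_to_alto_xml(img)                        # ALTO XML as bytes
    pdf = pytesseract.image_to_pdf_or_hocr(img, extension="pdf")     # searchable PDF as bytes

    for name, blob in [("scan.hocr", hocr), ("scan.xml", alto), ("scan.pdf", pdf)]:
        with open(name, "wb") as f:
            f.write(blob)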

Practical guidance

  • Start with data & cleanliness. If your images are phone photos or mixed-quality scans, invest in thresholding (adaptive & Otsu) and deskew (Hough) before any model tuning. You’ll often gain more from a robust preprocessing recipe than from swapping recognizers.
  • Choose the right detector. For scanned pages with regular columns, a page segmenter (zones → lines) may suffice; for natural images, single-shot detectors like EAST are strong baselines and plug into many toolkits (OpenCV example).
  • Pick a recognizer that matches your text. For printed Latin, Tesseract’s LSTM engine is sturdy and fast; for multi-script or quick prototypes, EasyOCR is productive; for handwriting or historical typefaces, consider Kraken or Calamari and plan to fine-tune. If you need tight coupling to document understanding (key-value extraction, VQA), evaluate TrOCR (OCR) versus Donut (OCR-free) on your schema—Donut may remove a whole integration step.
  • Measure what matters. For end-to-end systems, report detection F-score and recognition CER/WER (both based on Levenshtein edit distance; see CTC); for layout-heavy tasks, track IoU/tightness and character-level normalized edit distance as in ICDAR RRC evaluation kits.
  • Export rich outputs. Prefer hOCR or ALTO (or both) so you keep coordinates and reading order—vital for search hit highlighting, table/field extraction, and provenance. Tesseract’s CLI and pytesseract make this a one-liner.

Looking ahead

The strongest trend is convergence: detection, recognition, language modeling, and even task-specific decoding are merging into unified Transformer stacks. Pretraining on large synthetic corpora remains a force multiplier. OCR-free models will compete aggressively wherever the target is structured outputs rather than verbatim transcripts. Expect hybrid deployments too: a lightweight detector plus a TrOCR-style recognizer for long-form text, and a Donut-style model for forms and receipts.

Further reading & tools

Tesseract (GitHub) · Tesseract docs · hOCR spec · ALTO background · EAST detector · OpenCV text detection · TrOCR · Donut · COCO-Text · SynthText · Kraken · Calamari OCR · ICDAR RRC · pytesseract · IAM handwriting · OCR file-format tools · EasyOCR

Frequently Asked Questions

What is OCR?

Optical Character Recognition (OCR) is a technology used to convert different types of documents, such as scanned paper documents, PDF files or images captured by a digital camera, into editable and searchable data.

How does OCR work?

Traditional OCR works by scanning an input image or document, segmenting the image into individual characters, and comparing each character with a database of character shapes using pattern or feature recognition; modern engines instead use machine-learning models that read whole lines of text without pre-segmenting characters.

What are some practical applications of OCR?

OCR is used in a variety of sectors and applications, including digitizing printed documents, enabling text-to-speech services, automating data entry processes, and assisting visually impaired users to better interact with text.

Is OCR always 100% accurate?

While great advancements have been made in OCR technology, it isn't infallible. Accuracy can vary depending upon the quality of the original document and the specifics of the OCR software being used.

Can OCR recognize handwriting?

Although OCR is primarily designed for printed text, some advanced OCR systems are also able to recognize clear, consistent handwriting. However, typically handwriting recognition is less accurate because of the wide variation in individual writing styles.

Can OCR handle multiple languages?

Yes, many OCR software systems can recognize multiple languages. However, it's important to ensure that the specific language is supported by the software you're using.

What's the difference between OCR and ICR?

OCR stands for Optical Character Recognition and is used for recognizing printed text, while ICR, or Intelligent Character Recognition, is more advanced and is used for recognizing hand-written text.

Does OCR work with any font and text size?

OCR works best with clear, easy-to-read fonts and standard text sizes. While it can work with various fonts and sizes, accuracy tends to decrease when dealing with unusual fonts or very small text sizes.

What are the limitations of OCR technology?

OCR can struggle with low-resolution documents, complex fonts, poorly printed texts, handwriting, and documents with backgrounds that interfere with the text. Also, while it can work with many languages, it may not cover every language perfectly.

Can OCR scan colored text or colored backgrounds?

Yes, OCR can scan colored text and backgrounds, although it's generally more effective with high-contrast color combinations, such as black text on a white background. The accuracy might decrease when text and background colors lack sufficient contrast.

What is the JPS format?

Joint Photographic Experts Group JPS format

The JPS image format, short for JPEG Stereo, is a file format used to store stereoscopic photographs taken by digital cameras or created by 3D rendering software. It is essentially a side-by-side arrangement of two JPEG images within a single file that, when viewed through appropriate software or hardware, provides a 3D effect. This format is particularly useful for creating an illusion of depth in images, which enhances the viewing experience for users with compatible display systems or 3D glasses.

The JPS format leverages the well-established JPEG (Joint Photographic Experts Group) compression technique to store the two images. JPEG is a lossy compression method, which means that it reduces file size by selectively discarding less important information, often without a noticeable decrease in image quality to the human eye. This makes JPS files relatively small and manageable, despite containing two images instead of one.

A JPS file is essentially a JPEG file with a specific structure. It contains two JPEG-compressed images side by side within a single frame. These images are called the left-eye and right-eye images, and they represent slightly different perspectives of the same scene, mimicking the slight difference between what each of our eyes sees. This difference is what allows for the perception of depth when the images are viewed correctly.

The standard resolution for a JPS image is typically twice the width of a standard JPEG image to accommodate both the left and right images. For example, if a standard JPEG image has a resolution of 1920x1080 pixels, a JPS image would have a resolution of 3840x1080 pixels, with each side-by-side image occupying half of the total width. However, the resolution can vary depending on the source of the image and the intended use.

To view a JPS image in 3D, a viewer must use a compatible display device or software that can interpret the side-by-side images and present them to each eye separately. This can be achieved through various methods, such as anaglyph 3D, where the images are filtered by color and viewed with colored glasses; polarized 3D, where the images are projected through polarized filters and viewed with polarized glasses; or active shutter 3D, where the images are displayed alternately and synchronized with shutter glasses that open and close rapidly to show each eye the correct image.

The file structure of a JPS image is similar to that of a standard JPEG file. It contains a header, which includes the SOI (Start of Image) marker, followed by a series of segments that contain various pieces of metadata and the image data itself. The segments include the APP (Application) markers, which can contain information such as the Exif metadata, and the DQT (Define Quantization Table) segment, which defines the quantization tables used for compressing the image data.
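Because a JPS file shares the standard JPEG segment layout, the markers can be walked with a few lines of Python; this sketch only follows the length-prefixed segments up to the start of the scan data (the file name is a placeholder).

    import struct

    def list_markers(path):
        with open(path, "rb") as f:
            data = f.read()
        assert data[:2] == b"\xff\xd8", "missing SOI marker"
        markers, i = ["SOI"], 2
        while i + 4 <= len(data) and data[i] == 0xFF:
            marker = data[i + 1]
            markers.append(f"FF{marker:02X}")
            if marker == 0xDA:                       # SOS: entropy-coded image data follows
                break
            length = struct.unpack(">H", data[i + 2:i + 4])[0]
            i += 2 + length                          # segment length includes the length bytes
        return markers

    print(list_markers("stereo.jps"))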

One of the key segments in a JPS file is the JFIF (JPEG File Interchange Format) segment, which specifies that the file conforms to the JFIF standard. This segment is important for ensuring compatibility with a wide range of software and hardware. It also carries the pixel density (aspect ratio) of the image and, optionally, an embedded thumbnail that can be used for quick previews.

The actual image data in a JPS file is stored in the SOS (Start of Scan) segment, which follows the header and metadata segments. This segment contains the compressed image data for both the left and right images. The data is encoded using the JPEG compression algorithm, which involves a series of steps including color space conversion, subsampling, discrete cosine transform (DCT), quantization, and entropy coding.

Color space conversion is the process of converting the image data from the RGB color space, which is commonly used in digital cameras and computer displays, to the YCbCr color space, which is used in JPEG compression. This conversion separates the image into a luminance component (Y), which represents the brightness levels, and two chrominance components (Cb and Cr), which represent the color information. This is beneficial for compression because the human eye is more sensitive to changes in brightness than color, allowing for more aggressive compression of the chrominance components without significantly affecting perceived image quality.
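The conversion itself is a fixed linear transform; here is a sketch with NumPy using the BT.601 coefficients specified by JFIF:

    import numpy as np

    def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
        """Full-range BT.601 conversion as used by JFIF/JPEG (rgb is HxWx3, values 0-255)."""
        r, g, b = (rgb[..., c].astype(float) for c in range(3))
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return np.stack([y, cb, cr], axis=-1)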

Subsampling is a process that takes advantage of the human eye's lower sensitivity to color detail by reducing the resolution of the chrominance components relative to the luminance component. Common subsampling ratios include 4:4:4 (no subsampling), 4:2:2 (reducing the horizontal resolution of the chrominance by half), and 4:2:0 (reducing both the horizontal and vertical resolution of the chrominance by half). The choice of subsampling ratio can affect the balance between image quality and file size.

The discrete cosine transform (DCT) is applied to small blocks of the image (typically 8x8 pixels) to convert the spatial domain data into the frequency domain. This step is crucial for JPEG compression because it allows for the separation of image details into components of varying importance, with higher frequency components often being less perceptible to the human eye. These components can then be quantized, or reduced in precision, to achieve compression.

Quantization is the process of mapping a range of values to a single quantum value, effectively reducing the precision of the DCT coefficients. This is where the lossy nature of JPEG compression comes into play, as some image information is discarded. The degree of quantization is determined by the quantization tables specified in the DQT segment, and it can be adjusted to balance image quality against file size.
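A toy walk-through of the DCT and quantization steps on one 8x8 block (SciPy supplies the transform; the flat quantization step of 16 is purely illustrative, whereas a real encoder uses the per-frequency tables from the DQT segment):

    import numpy as np
    from scipy.fftpack import dct, idct

    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128   # level-shifted pixels

    coeffs = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")  # 2-D DCT-II
    quantized = np.round(coeffs / 16)                                     # the lossy step
    restored = idct(idct(quantized * 16, axis=1, norm="ortho"), axis=0, norm="ortho")

    print(np.abs(block - restored).mean())   # average error introduced by quantization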

The final step in the JPEG compression process is entropy coding, which is a form of lossless compression. The most common method used in JPEG is Huffman coding, which assigns shorter codes to more frequent values and longer codes to less frequent values. This reduces the overall size of the image data without any further loss of information.

In addition to the standard JPEG compression techniques, the JPS format may also include specific metadata that relates to the stereoscopic nature of the images. This metadata can include information about the parallax settings, convergence points, and any other data that may be necessary for correctly displaying the 3D effect. This metadata is typically stored in the APP segments of the file.

The JPS format is supported by a variety of software applications and devices, including 3D televisions, VR headsets, and specialized photo viewers. However, it is not as widely supported as the standard JPEG format, so users may need to use specific software or convert the JPS files to another format for broader compatibility.

One of the challenges with the JPS format is ensuring that the left and right images are properly aligned and have the correct parallax. Misalignment or incorrect parallax can lead to an uncomfortable viewing experience and may cause eye strain or headaches. Therefore, it is important for photographers and 3D artists to carefully capture or create the images with the correct stereoscopic parameters.

In conclusion, the JPS image format is a specialized file format designed for storing and displaying stereoscopic images. It builds upon the established JPEG compression techniques to create a compact and efficient way to store 3D photographs. While it offers a unique viewing experience, the format requires compatible hardware or software to view the images in 3D, and it may present challenges in terms of alignment and parallax. Despite these challenges, the JPS format remains a valuable tool for photographers, 3D artists, and enthusiasts who wish to capture and share the depth and realism of the world in a digital format.

Supported formats

AAI.aai

AAI Dune image

AI.ai

Adobe Illustrator CS2

AVIF.avif

AV1 Image File Format

BAYER.bayer

Raw Bayer Image

BMP.bmp

Microsoft Windows bitmap image

CIN.cin

Cineon Image File

CLIP.clip

Image Clip Mask

CMYK.cmyk

Raw cyan, magenta, yellow, and black samples

CUR.cur

Microsoft icon

DCX.dcx

ZSoft IBM PC multi-page Paintbrush

DDS.dds

Microsoft DirectDraw Surface

DPX.dpx

SMPTE 268M-2003 (DPX 2.0) image

DXT1.dxt1

Microsoft DirectDraw Surface

EPDF.epdf

Encapsulated Portable Document Format

EPI.epi

Adobe Encapsulated PostScript Interchange format

EPS.eps

Adobe Encapsulated PostScript

EPSF.epsf

Adobe Encapsulated PostScript

EPSI.epsi

Adobe Encapsulated PostScript Interchange format

EPT.ept

Encapsulated PostScript with TIFF preview

EPT2.ept2

Encapsulated PostScript Level II with TIFF preview

EXR.exr

High dynamic-range (HDR) image

FF.ff

Farbfeld

FITS.fits

Flexible Image Transport System

GIF.gif

CompuServe graphics interchange format

HDR.hdr

High Dynamic Range image

HEIC.heic

High Efficiency Image Container

HRZ.hrz

Slow Scan TeleVision

ICO.ico

Microsoft icon

ICON.icon

Microsoft icon

J2C.j2c

JPEG-2000 codestream

J2K.j2k

JPEG-2000 codestream

JNG.jng

JPEG Network Graphics

JP2.jp2

JPEG-2000 File Format Syntax

JPE.jpe

Joint Photographic Experts Group JFIF format

JPEG.jpeg

Joint Photographic Experts Group JFIF format

JPG.jpg

Joint Photographic Experts Group JFIF format

JPM.jpm

JPEG-2000 File Format Syntax

JPS.jps

Joint Photographic Experts Group JPS format

JPT.jpt

JPEG-2000 File Format Syntax

JXL.jxl

JPEG XL image

MAP.map

Multi-resolution Seamless Image Database (MrSID)

MAT.mat

MATLAB level 5 image format

PAL.pal

Palm pixmap

PALM.palm

Palm pixmap

PAM.pam

Common 2-dimensional bitmap format

PBM.pbm

Portable bitmap format (black and white)

PCD.pcd

Photo CD

PCT.pct

Apple Macintosh QuickDraw/PICT

PCX.pcx

ZSoft IBM PC Paintbrush

PDB.pdb

Palm Database ImageViewer Format

PDF.pdf

Portable Document Format

PDFA.pdfa

Portable Document Archive Format

PFM.pfm

Portable float format

PGM.pgm

Portable graymap format (gray scale)

PGX.pgx

JPEG 2000 uncompressed format

PICT.pict

Apple Macintosh QuickDraw/PICT

PJPEG.pjpeg

Joint Photographic Experts Group JFIF format

PNG.png

Portable Network Graphics

PNG00.png00

PNG inheriting bit-depth, color-type from original image

PNG24.png24

Opaque or binary transparent 24-bit RGB (zlib 1.2.11)

PNG32.png32

Opaque or binary transparent 32-bit RGBA

PNG48.png48

Opaque or binary transparent 48-bit RGB

PNG64.png64

Opaque or binary transparent 64-bit RGBA

PNG8.png8

Opaque or binary transparent 8-bit indexed

PNM.pnm

Portable anymap

PPM.ppm

Portable pixmap format (color)

PS.ps

Adobe PostScript file

PSB.psb

Adobe Large Document Format

PSD.psd

Adobe Photoshop bitmap

RGB.rgb

Raw red, green, and blue samples

RGBA.rgba

Raw red, green, blue, and alpha samples

RGBO.rgbo

Raw red, green, blue, and opacity samples

SIX.six

DEC SIXEL Graphics Format

SUN.sun

Sun Rasterfile

SVG.svg

Scalable Vector Graphics

TIFF.tiff

Tagged Image File Format

VDA.vda

Truevision Targa image

VIPS.vips

VIPS image

WBMP.wbmp

Wireless Bitmap (level 0) image

WEBP.webp

WebP Image Format

YUV.yuv

CCIR 601 4:1:1 or 4:2:2

Frequently asked questions

How does this work?

This converter runs entirely in your browser. When you select a file, it is read into memory and converted to the selected format. You can then download the converted file.

How long does it take to convert a file?

Conversions start instantly, and most files are converted in under a second. Larger files may take longer.

What happens to my files?

Your files are never uploaded to our servers. They are converted in your browser, and the converted file is then downloaded. We never see your files.

What file types can I convert?

We support converting between all image formats, including JPEG, PNG, GIF, WebP, SVG, BMP, TIFF, and more.

How much does this cost?

This converter is completely free, and will always be free. Because it runs in your browser, we don't have to pay for servers, so we don't need to charge you.

Can I convert multiple files at once?

Yes! You can convert as many files as you want at once. Just select multiple files when you add them.