Glossary ATR

A

Accuracy

Accuracy is a score used to measure the performance of an automatic text recognition model.
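
One common convention (a sketch, not the only definition in use) is to report accuracy as the complement of the character error rate (see Character Error Rate):

```latex
\mathrm{Accuracy} = 1 - \mathrm{CER}
```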

Artificial Intelligence (AI)

AI is a term used for computer systems that can perform tasks mimicking human intelligence. This also encompasses learning and reasoning procedures, as well as complex problem-solving and decision-making.

Automatic Text Recognition (ATR)

ATR, short for Automatic Text Recognition, refers to the process of automatically acquiring digital textual data from a digitised analogue document, usually with machine learning technologies.

B

Baseline

A baseline defines a virtual line, passing through at least two points, on which the text is written. It serves as the foundation for text recognition (also see topline).

Bounding box

A bounding box is usually a rectangle drawn around a text region or a text line within an image, and it defines its spatial extent.
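
As an illustration (not tied to any particular ATR tool), a bounding box is often stored as four numbers, for example the top-left corner plus a width and a height:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned rectangle around a text region or line, in pixel coordinates."""
    x: int       # horizontal position of the top-left corner
    y: int       # vertical position of the top-left corner
    width: int
    height: int

# A hypothetical text line starting at pixel (120, 340), 900 px wide and 55 px high.
line_box = BoundingBox(x=120, y=340, width=900, height=55)
```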

C

Character Error Rate (CER)

CER is a score that evaluates the accuracy of an automatic transcription at the character level. For example, a 5% CER means that the automatic transcription correctly transcribed 95 characters out of 100.
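
A common way of computing this score (a sketch of the usual Levenshtein-based definition) is:

```latex
\mathrm{CER} = \frac{S + D + I}{N}
```

where S, D and I are the character substitutions, deletions and insertions needed to turn the automatic transcription into the ground truth, and N is the number of characters in the ground truth.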

Citability (as defined by the Cultural Heritage Data Reuse Charter by DARIAH)

Cultural Heritage data and any resulting research need to be fully citable to increase their visibility and impact. Relevant data citation standards should be applied.

Colour Space

A colour space is a specific organisation of colours that provides a way to quantify and represent colours, such as the RGB (red, green, blue) or grayscale systems.
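
As a small illustrative sketch (the weights below are commonly used luminance coefficients, given only as an example), converting an RGB pixel to grayscale collapses three colour values into one:

```python
def rgb_to_gray(r: int, g: int, b: int) -> int:
    """Convert one RGB pixel (0-255 per channel) into a single grayscale value."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# A fully saturated red pixel becomes a fairly dark gray value.
print(rgb_to_gray(255, 0, 0))  # 76
```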

Computer Vision

Computer vision is the part of the AI domain that enables computers to interpret and comprehend visual information in images or videos.

Controlled Vocabulary

Controlled vocabularies define information in a specific domain with a set collection of terms and phrases. Using controlled vocabularies helps to maintain consistency and precision in indexed information and also facilitates machine-based retrieval and organisation.

Convolutional Neural Network (CNN)

CNNs are deep learning models used for processing and analysing visual data. They leverage filters and layers to recognise patterns and features within images.
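
As a minimal sketch (assuming the PyTorch library, which this glossary does not prescribe), a CNN stacks convolution and pooling layers before a final classification layer:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy CNN: two convolution + pooling stages followed by a linear classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learnable filters over the image
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample by a factor of 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)   # extract visual features
        x = x.flatten(1)       # flatten for the linear layer
        return self.classifier(x)

# One grayscale 32x32 image -> scores for 10 hypothetical classes.
logits = TinyCNN()(torch.randn(1, 1, 32, 32))
```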

Copyright

Copyright defines the legal protection given to authors of original works. It grants exclusive rights to reproduce, distribute, and display their creations, also preventing usage by non-authorised parties.

Corpus

A corpus is a (structured) collection of texts or data. Corpora (the plural of corpus) are mostly used for linguistic analysis because they gather various documents or sources for research purposes.

Cropping

Cropping is used in editing and describes the removal of unwanted parts of an image.

D

Dewarping

Dewarping corrects distortions and/or warping in digital images and is mostly applied to documents or photographs. It helps to restore the original shapes or perspectives of the images.

Deskewing

Deskewing straightens and/or aligns skewed digital images. This is especially helpful for scanned documents because it brings the text back into proper alignment.

Digitisation

Digitisation describes the process of converting analogue information, such as text, images, or audio, into a digital format. This digital format can then be stored and processed by machines.

Download

Download is the process of copying data from one computer system to another, typically from a remote server to one's own device, mostly over the web.

Dots Per Inch (DPI)

DPI (dots per inch) is a unit of measurement. DPI values quantify the resolution of images or printed documents by indicating the number of dots contained within a one-inch line, horizontally and vertically. They thereby indicate the level of detail in a digital image or the quality of a printed document.
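
As a worked example (the page size below is only an assumption for illustration), the pixel dimensions of a scan follow directly from the physical size and the DPI setting:

```python
def scan_size(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
    """Pixel dimensions of a scan, given a physical size in inches and a DPI value."""
    return round(width_in * dpi), round(height_in * dpi)

# A roughly A4-sized page of 8.3 x 11.7 inches scanned at 300 DPI.
print(scan_size(8.3, 11.7, 300))  # (2490, 3510)
```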

E

End Formats/Output Formats

End formats, also called output formats, are the final file types used for presenting/sharing data after processing it for specific purposes like, for example, ATR. See also: File Format.

Extensible Markup Language (XML)

XML is a standard markup language whose goal is to store and share structured data.

F

FAIR-Principles

The FAIR principles are quality criteria developed in the context of data management, emphasising the importance of making data Findable, Accessible, Interoperable and Reusable.

File Format

File formats standardise the structure of information encoded and stored in computer files. Within such a file, the format allows for data organisation, also enabling compatibility, accessibility, and interpretation by specific software.

G

Ground Truth (GT)

The Ground Truth is information that we know to be true. In the context of ATR, it refers to the manual and/or verified transcription of a text. GT serves as training data.

H

Handwritten Text Recognition (HTR)

HTR is the process of recognising and extracting handwritten text from scanned images of writing using computer systems.

hOCR

hOCR is an open standard for representing ATR output embedded in HTML.

Hypertext Markup Language (HTML)

HTML is a markup language for displaying the content and structure of documents on web browsers.

I

International Image Interoperability Framework (IIIF)

The IIIF is a standard for sharing images and their metadata in an interoperable way, making them available platform-independently.
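
As an illustration of this interoperability, an image request to a IIIF Image API server follows a fixed URL pattern; the server address and identifier below are hypothetical, and the size keyword ("max") follows version 3 of the Image API:

```python
def iiif_image_url(server: str, identifier: str,
                   region: str = "full", size: str = "max",
                   rotation: str = "0", quality: str = "default",
                   fmt: str = "jpg") -> str:
    """Build an image request following the IIIF Image API URL pattern."""
    return f"{server}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Hypothetical server and image identifier: the full image at maximum size.
print(iiif_image_url("https://example.org/iiif", "manuscript-001"))
# https://example.org/iiif/manuscript-001/full/max/0/default.jpg
```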

Input

Input refers to the files or parameters that are fed into a computer.

Interoperability (as defined by the Cultural Heritage Data Reuse Charter by DARIAH)

Cultural Heritage Data should be made accessible in a form that facilitates reuse of the data for research. Formats should work and be interoperable for both scholars and Cultural Heritage Institutions.

Image Optimisation

Image optimisation is the process of reducing the file size of an image while maintaining its visual quality.

J

JPEG

JPEG (Joint Photographic Experts Group) is a commonly used lossy compression method for digital images, particularly those produced by digital photography. It is well suited for automatic text recognition.

JPEG2000

JPEG 2000 (JP2) is an image compression standard and encoding system. It is a complex format and, when used losslessly, produces comparatively large files; it is well suited for storing images because no information is lost in that mode.

L

Layout Analysis

In the context of digital documents, layout analysis is the process that involves identifying and segmenting different components within a document. These components can be text, images and structural elements.

M

Machine Learning

Machine learning is part of artificial intelligence and enables systems to learn from experience and improve without being explicitly programmed. Machine learning uses algorithms capable of analysing data and making predictions or decisions based on patterns and trends within it.

Manual Corrections

Manual correction is the process during which data or, in the case of ATR, documents and/or transcribed texts are corrected by hand to eliminate errors.

Markup Language

Markup languages are standards used for making annotations in documents. They add machine-understandable structure and formatting to the documents, including metadata, thereby enabling interpretation by software or browsers.

Model

A model in the context of ATR is a file created by following a training process. It contains the parameters used to produce a transcription from images.

N

Neural Networks

A neural network is a type of machine learning model that is compositionally built from small units and that is typically designed to transform a set of numerical input values into a set of numerical output values. Each unit has one or more parameters that can be changed during model training. Combined, the parameters of the model form a kind of memory that represents features of the training data, with the goal of producing the desired output on novel data after training.

O

Object Detection Models

Object detection models, as part of computer vision algorithms, identify and locate one or more objects in digital images or video material. Object detection models can localise the objects’ boundaries and group them into predefined classes or categories.

Optical Character Recognition (OCR)

OCR is the conversion of images of printed text into machine-encoded text. Nowadays, most systems work with neural networks, similar to the technique underlying HTR.

Open Science

The open science movement promotes accessibility, transparency and collaboration in all areas of scientific research. The aim is to reach openness for scientific knowledge, data, methods and publications so that they can be used at large, especially by other researchers.

Openness (as defined by the Cultural Heritage Data Reuse Charter by DARIAH)

Cultural Heritage data should be shared under an open license whenever possible, taking into account existing copyright and any restrictions due to national legislation and privacy issues.

Ontology

An ontology refers to a structured representation of domain knowledge. In the domain of ATR, an ontology also refers to a controlled vocabulary that describes the layout of a textual document.

Output

Output refers to the files or parameters obtained at the end of a computer process, such as a TEI file extracted from an image document.

P

Pixel

A pixel is the basic unit in digital graphics, meaning the smallest unit that can be displayed and represented on a digital display device.

Portable Document Format (PDF)

PDF is a file format for capturing electronic documents and presenting them exactly in the intended layout.

Post-ATR Correction

Post-ATR correction (post-automatic text recognition correction) is the process of automatically or manually correcting output from ATR.

Prediction

Prediction is the act of applying a trained segmentation or transcription model to an image in order to generate (or predict) a layout analysis or a text recognition.

Pre-Processing

Pre-processing of digital documents involves various, often automated, procedural steps that depend on the type of data to be processed. Usually, the following steps are included: cleaning, normalisation, layout correction and formatting adjustments. Overall, pre-processing aims at improving the quality of the data one is working on.

Q

Quality Assurance

Quality assurance in ATR involves the systematic evaluation and validation of recognised text output. It ensures the accuracy and reliability of the output text and guarantees its quality for future use.

R

Reciprocity (in the context of the Heritage Data Reuse Charter by DARIAH)

Reciprocity is the agreement of both Cultural Heritage Institutions and researchers to share content and knowledge equally with each other, making use of data centres and research infrastructures. See: https://www.dariah.eu/activities/open-science/data-re-use/.

Resolution

Resolution is the measurement of the number of pixels that can be contained on a display screen or in a camera sensor.

Rule-Based Methods

Rule-based methods are used in artificial intelligence to make decisions or perform tasks based on pre-defined logical rules.

S

Scanning

Scanning is the procedure that turns analogue, physical documents like books or newspapers into a digital file.

Segmentation

Segmentation is the process of dividing an image into distinct regions and/or segments. It is used to facilitate analysis and identify and delineate objects or areas of interest within the image. In the case of ATR, this concerns zones and lines in a textual document.

Segmentation Model

A segmentation model is a trained model for layout analysis [see entry model and segmentation].

Self-Attention Mechanism

Self-attention is a mechanism that relates different positions of a single sequence to one another in order to compute a representation of that sequence. It is used by transformer models to process sequential data.
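
The widely used scaled dot-product formulation from the transformer literature (given here as a reference sketch) computes attention over queries Q, keys K and values V as:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```

where d_k is the dimensionality of the keys.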

Semi-automated correction

Semi-automated correction combines automatic correction by the computer with correction made by a human.

Stewardship (as defined by the Heritage Data Reuse Charter by DARIAH)

Long-term preservation, persistence, accessibility and legibility of cultural heritage data should be a priority. See: https://www.dariah.eu/activities/open-science/data-re-use/.

T

Taxonomy

Taxonomies categorise data by using a hierarchical classification system. Data or information can be structured into groups based on certain characteristics.

TIFF

The abbreviation stands for Tagged Image File Format. It is an image file format for storing raster graphics images, popular with graphic artists, publishers and photographers. TIFF files are typically large because the images are stored uncompressed or with lossless compression, which makes the format well suited for storing images without loss of information.

Topline

A topline defines a virtual line, passing through at least two points, that runs along the top of the written text. Some ATR systems use it instead of the baseline as the reference line for text recognition (also see baseline); baselines are more common than toplines in current ATR systems.

Training

Training is the learning process by which an ATR engine produces a model.

Transformers

Transformers are a type of deep learning model in AI. Transformer models use a mechanism called ‘self-attention’ to process sequential data. They show outstanding performance in tasks such as natural language processing and machine translation thanks to their ability to capture complex relationships in the input sequences.

Trustworthiness (as defined by the Cultural Heritage Data Reuse Charter by DARIAH)

The provenance of Cultural Heritage data and any consequent research should be clear, up to date, openly available and therefore trustworthy.

U

Upload

Upload refers to the process of copying data from one's own computer to another system, for example transferring a file to a remote server.

W

Word Error Rate (WER)

The Word Error Rate is a score to evaluate the accuracy of an automatic transcription at the word level. For example, a 5% WER means that the automatic transcription correctly transcribed 95 words out of 100.
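
WER is computed in the same way as CER, only over words instead of characters (the same Levenshtein-based sketch as above):

```latex
\mathrm{WER} = \frac{S_w + D_w + I_w}{N_w}
```

where S_w, D_w and I_w are the substituted, deleted and inserted words, and N_w is the number of words in the ground truth.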

X

XML Analysed Layout and Text Object (ALTO)

ALTO is an XML standard for reporting the physical layout and logical structure of text transcribed by OCR or HTR. It retains all the geometric coordinates of the content (text, illustrations, graphics) in the image and allows the image and text to be superimposed.
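
As a small sketch of what reading an ALTO file can look like (the namespace below corresponds to ALTO version 4 and may differ for other versions; the file name is hypothetical):

```python
import xml.etree.ElementTree as ET

# Namespace URI for ALTO version 4; earlier versions use a different URI.
NS = {"alto": "http://www.loc.gov/standards/alto/ns-v4#"}

def lines_from_alto(path: str) -> list[str]:
    """Return the transcribed text of each TextLine in an ALTO file."""
    root = ET.parse(path).getroot()
    lines = []
    for line in root.iterfind(".//alto:TextLine", NS):
        # Each word is stored in a String element whose text sits in the CONTENT
        # attribute; its position is given by HPOS/VPOS/WIDTH/HEIGHT attributes.
        words = [s.get("CONTENT", "") for s in line.iterfind("alto:String", NS)]
        lines.append(" ".join(words))
    return lines

# print(lines_from_alto("page_0001.xml"))
```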

XML Page Analysis and Ground Truth Elements (PAGE)

PAGE is an XML standard for encoding digitised documents. Comparable to the ALTO format, it can be used to display the structure of a page and its contents.

XML Text Encoding Initiative (TEI)

XML TEI is based on the guidelines of the Text Encoding Initiative. It is a standard for encoding and structuring textual information using XML markup. This enables the representation of detailed texts with rich metadata, facilitating scholarly analysis and the sharing of digital texts.
