Supported Formats
Sources: Wikipedia and IANA
This page details all the supported Datasaur formats, provides examples for each format, and clarifies the expected file structure where appropriate. Note: the output format can be customized through file transformers.
TXT file is a simple file format that contains unformatted text and can be easily opened and edited using a basic text editor. It is commonly used for storing and exchanging data, code, and other textual information.
A TSV (tab-separated values) file is a simple text format for storing data in a tabular structure. A TSV file encodes a number of records that may contain multiple fields.
Each record is represented as a single line.
Each field value is represented as text.
Fields in a record are separated from one another by the tab character.
Note that because the tab is a special character for this format, fields that contain tabs are not allowed in this encoding.
The header (first) line of this encoding contains the name of each field, separated by tabs.
Example
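A minimal illustrative TSV file, with a header line followed by two records (the column names and values are hypothetical; columns are separated by tab characters):

```
name	age	city
Alice	30	Jakarta
Budi	25	Bandung
```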
IOB (inside, outside, beginning) is a common labeling format for labeling tokens in computational linguistics (ex: named-entity recognition). IOB is also a .tsv, but conforms to the following rules:
The B- prefix before a tag indicates that the tag is the beginning of a chunk.
The I- prefix before a tag indicates that the tag is inside a chunk.
The B- tag is used only when a tag is followed by a tag of the same type without O tokens between them.
The O tag indicates that a token does not belong to a chunk.
Example
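A short illustrative IOB-tagged snippet, one token and tag per line separated by a tab (the tokens and tag names are hypothetical):

```
Alex	B-PER
flew	O
to	O
Los	B-LOC
Angeles	I-LOC
```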
A CSV (comma-separated values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. The use of the comma as a field separator is the source of the name for this file format.
A CSV file typically stores tabular data (numbers and text) in plain text, in which case each line will have the same number of fields.
Example
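A minimal illustrative CSV file with a header line and two records (the column names and values are hypothetical):

```
id,text,sentiment
1,"I love this product",positive
2,"Delivery was late",negative
```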
💡 For now, a row-based project using the CSV format does not support answers containing ;. We treat it as multiple answers. For example, the answer She brings some flowers: rose; sunflower; and daisy will be interpreted as three answers: She brings some flowers: rose, sunflower, and daisy.
XLS and XLSX are well-known formats for Microsoft Excel documents, introduced by Microsoft. XLS is an older format used in earlier versions of Excel, while XLSX is a newer format that is the default in more recent versions of Excel. Both formats allow users to input, organize, and analyze data in rows and columns. They also support features such as formulas, charts, and graphs. XLSX is a more efficient format that offers better data recovery and larger file size limits.
Example
JSON (JavaScript Object Notation) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and array data types (or any other serializable value).
A JSON file may contain the following data structures:
An object is an unordered set of name/value pairs.
An object begins with a left brace ({) and ends with a right brace (}). Each name is followed by a colon (:) and the name/value pairs are separated by commas (,).
An array is an ordered collection of values.
An array begins with a left bracket ([) and ends with a right bracket (]). Values are separated by commas (,).
A value can be a string in double quotes, or a number, or true or false or null, or an object or an array. These structures can be nested.
A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes. A character is represented as a single character string. A string is very much like a C or Java string.
A number is like a C or Java number, except that the octal and hexadecimal formats are not used.
Whitespace can be inserted between any pair of tokens. Excepting a few encoding details, that completely describes the language.
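A small illustrative JSON document combining the structures above: an object whose values include a string, a number, a boolean, null, an array, and a nested object (all field names are hypothetical):

```json
{
  "id": 1,
  "text": "Datasaur supports many formats",
  "reviewed": true,
  "score": 0.87,
  "parent": null,
  "tags": ["format", "documentation"],
  "metadata": { "source": "example" }
}
```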
Datasaur Schema is a customized JSON format that is designed to fit all available project types in the Datasaur app. This format can be used for mixed project types, e.g. Token + Document labeling. You will receive all labels and answers combined in one exported file.
A Datasaur Schema contains the following data structures.
version: version number of Datasaur schema.
Rows Field
content: the text of the sentence.
tokens: the tokens form of the sentence.
metadata: contains additional information for a line. You can find the structure and configuration options for metadata here.
labelerInfo: the information about the labeler.
id: the unique identifier of a labeler (each labeler has a different id).
email: email that labeler used when signing in.
displayName: the display name of the email.
labelSets: contains all the label items that you used for the project.
index: the position of the label set in UI
labelItems: an array of labelItems for a label set
id: id of the labelSet
labelName: the displayed name of the label set item
parentId: id of the parent label set item
color: the color of the label set item
labels: an array of labels for the document. Labels consist of spanLabels, arrowLabels, boundingBoxLabels, timeLabels.
spanLabels are all labels that are applied directly to the token/sentence.
arrowLabels are all labels that are applied on top of an arrow.
boundingBoxLabels are all labels that are applied on top of OCR documents.
timeLabels are all labels that are applied on top of an audio waveform.
Below are all attributes under labels.
id: identifier from the applied label.
labeledBy:
CONFLICT: if it has not been resolved
REVIEWER: if it has been resolved
AUTO: if it has been resolved by meeting the consensus
LABELER: if it comes from a labeler
labeledByUserId: the user id of a reviewer
acceptedByUserId: the user id of a reviewer who accepts the label. It will be null if no user accepts it manually.
rejectedByUserId: the user id of a reviewer who rejects the label. It will be null if no user rejects it manually.
status: label status. It can be REJECTED if it is rejected by the Reviewer, or ACCEPTED if it is accepted by the Reviewer.
hashCode: Datasaur's code to represent label information.
For example, SPAN:gpe:0:0:0:4:0:0:0:4:3:0:undefined:undefined.
Below is the explanation: <type>:<label set item id>:<layer or label set index>:<start cell line>:<start cell index>:<start token index>:<start char index>:<end cell line>:<end cell index>:<end token index>:<end char index>:<counter>.
textPosition: information about the exact location of the labeled text.
start: starting text position
row: line number
column: column number. For token-based projects, it is always 0.
tokenIndex: token index, relative to the row
charIndex: character index, relative to the token
end: ending text position
row: line number
column: column number. For token-based projects, it is always 0.
tokenIndex: token index, relative to the row
charIndex: character index, relative to the token
Arrow label type specific fields
originId: origin id of an arrow label
destinationId: destination id of an arrow label
Bounding Box label type specific fields
coordinates: consists of 4 points, each with paired x and y values.
Timestamp label type specific fields
startTimeMillis: starting timestamp in milliseconds.
endTimeMillis: ending timestamp in milliseconds.
comments: contains all comments that you inserted for the document.
id: the id of the comment
parentId: the id of the parent comment - this will be filled if the comment thread has replies.
hashCode: Datasaur's code to represent comment's information, including the value being commented.
message: the content of the comment
type: the type of comment. It can be SPAN_LABEL, SPAN_TEXT, ARROW_LABEL, or COMMENT.
userId: the id of the user who created the comment
createdAt: the time when the user created the comment
documentQuestions: contains the question set that is used for a document-based project.
id: the id of the question
name: default name given per question
description: question text from the question set.
type: type of the question. It can be TEXT, DROPDOWN, HIERARCHICAL_DROPDOWN, NESTED, SLIDER, DATE, TIME, CHECKBOX, or URL.
displayed: states whether it is shown or not. True if it is shown in the extension.
parentId: the id of parent questions.
documentAnswerSet: contains the answers from the question set used.
The answers consist of paired documentQuestion IDs and answers. For example, "1": "Good" shows 1 as the question id and Good as the answer.
rowQuestions: contains the question set that is used for a row-based project.
id: the id of the question
name: default name given per question
description: question text from the question set.
type: type of the question. It can be TEXT, DROPDOWN, HIERARCHICAL_DROPDOWN, NESTED, SLIDER, DATE, TIME, CHECKBOX, or URL.
displayed: states whether it is shown or not. True if it is shown in the extension.
parentId: the id of parent questions.
rowAnswerSets: contains all answers from a row-based project. It consists of the row number, rowQuestions IDs, and answers.
For example, 5 is the row number, 1 is the parentId question of 2 and 3, 2 and 3 are the question IDs, and A and B are the answers to questions 2 and 3.
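A sketch of what such a rowAnswerSets entry might look like; the exact key nesting is an assumption based on the description above, so treat it only as an orientation aid:

```json
{
  "rowAnswerSets": {
    "5": {
      "1": {
        "2": "A",
        "3": "B"
      }
    }
  }
}
```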
Examples
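As a rough orientation, here is a heavily trimmed sketch of the top-level fields described above; the values are placeholders, and a real export may nest or order these fields differently:

```json
{
  "version": "1.0",
  "rows": [
    {
      "content": "Alex lives in Jakarta",
      "tokens": ["Alex", "lives", "in", "Jakarta"],
      "metadata": []
    }
  ],
  "labelerInfo": { "id": 1, "email": "labeler@example.com", "displayName": "labeler" },
  "labelSets": [
    {
      "index": 0,
      "labelItems": [
        { "id": "per", "labelName": "Person", "parentId": null, "color": "#df3920" }
      ]
    }
  ],
  "labels": [],
  "comments": [],
  "documentQuestions": [],
  "documentAnswerSet": {},
  "rowQuestions": [],
  "rowAnswerSets": {}
}
```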
Conversational JSON is a format Datasaur created to support conversational or chat-like data. Each file should represent a conversation or chat log containing multiple messages.
Each message object must have the following properties:
content: String content of the message. Required.
speaker: Identifies who said / sent the message. Required.
color: Optional. If provided, Datasaur will use the color to render the speaker's avatar.
alignment: Required. One of LEFT or RIGHT. Currently, there is no visual difference between them. In the future, we plan to use this information to place the message boxes accordingly.
indent: Required. Integer between 0-4. Currently, there is no visual difference between them. In the future, we plan to use this information to place messages in a thread-like view.
Example:
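An illustrative conversation with two messages; the speaker names and colors are hypothetical, and the top-level messages wrapper is an assumption that may differ from an actual file:

```json
{
  "messages": [
    {
      "content": "Hi, I need help with my order.",
      "speaker": "Customer",
      "color": "#1E90FF",
      "alignment": "LEFT",
      "indent": 0
    },
    {
      "content": "Sure, could you share your order number?",
      "speaker": "Agent",
      "color": "#2ECC71",
      "alignment": "RIGHT",
      "indent": 0
    }
  ]
}
```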
JSON Simplified is an export format for Span labeling project. This format contains both the text as well as the labeled spans, along with a character indexing. It’s suitable for simpler workflows where we expect each sentence to be contained and isolated from one another.
In the example below, here are the objects recognized at Datasaur.
text: the sentence.
entities: array of labels applied
text: the token
type: the label applied.
start_idx: the character position where the labeled token starts. The character position uses a zero-based index.
end_idx: the last character position + 1 (because end_idx does not include the last character). The character position uses a zero-based index.
JSON (Simplified) export format limitations:
Cannot export arrow labels
Cannot export labels that span multiple sentences
Example
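An illustrative record in this format; the sentence, label names, and indices are hypothetical but follow the fields described above:

```json
{
  "text": "Alex lives in Jakarta",
  "entities": [
    { "text": "Alex", "type": "PERSON", "start_idx": 0, "end_idx": 4 },
    { "text": "Jakarta", "type": "LOCATION", "start_idx": 14, "end_idx": 21 }
  ]
}
```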
JSON Advanced is a proprietary Datasaur format designed in collaboration with our users to capture all possible data. This format is commonly used for partial token labeling projects. You can also use it when exporting token-based with arrow projects, such as coreference and dependency.
A JSON_ADVANCED file may contain the following data structures:
Sentences field
id: the sentence position.
content: the text of the sentence.
tokens: the tokens form of the sentence.
labels
l: the label applied.
layer: the layer position of the labels. This field is reserved for projects that label with multiple tag sets at once. For now, you can disregard this field; it is always set to 0.
id: the unique identifier of a label.
If the id has 9 segments, it indicates a span label. For example, INNM0ViFwo8LluMTaTIK9:0:0:14:0:0:18:6:0, which breaks down as <label set item id>:<layer>:<sidS>:<s>:<charS>:<sidE>:<e>:<charE>:<index>.
If the id has 21 segments, it indicates an arrow label. For example, tfc1FkbbEk9fOLx6haR1s:0:INNM0ViFwo8LluMTaTIK9:0:0:14:0:0:18:6:0:Oq_VuB0s_N7D8ZY0rgYsg:0:0:0:0:0:2:5:0:0, which breaks down as <label set item id>:<arrow layer>:<… origin id>:<… destination id>:<arrow index>.
hashCode: Datasaur's code to represent label information.
Span label. For example, SPAN:gpe:0:0:0:4:0:0:0:4:3:0:undefined:undefined. Below is the explanation: <type>:<label set item id>:<layer or label set index>:<start cell line>:<start cell index>:<start token index>:<start char index>:<end cell line>:<end cell index>:<end token index>:<end char index>:<counter>.
Arrow label. For example, ARROW:dyC-o1HBnn49dcqDSphmJ:1:0:0:0:0:0:0:10:6:0:SPAN:geo:0:0:0:0:0:0:0:0:4:0:undefined:undefined:SPAN:geo:0:0:0:10:0:0:0:10:6:0:undefined:undefined. Below is the explanation: <type>:<label set item id>:<layer or label set index>:<start cell line>:<start cell index>:<start token index>:<start char index>:<end cell line>:<end cell index>:<end token index>:<end char index>:<counter>:<span label: origin>:<span label: destination>.
documentId: the id of document.
sidS, sidE: the sentence starting and ending position of a label in 0-based index. In Datasaur, it is possible that a label spans across sentences.
s: the token starting position of a label in the starting sentence in 0-based index.
e: the token ending position of a label in the ending sentence in 0-based index.
charS: the character starting position of a label in the starting token in 0-based index.
charE: the character ending position of a label in the ending token in 0-based index.
metadata: additional information for a cell. You can find the structure and configuration options for metadata here.
labelerInfo: the information about the labeler.
id: the unique identifier of a labeler (each labeler has a different id).
email: email that labeler used when signing in.
displayName: the display name of the email.
labelSets: contains all the label items that you used for the project.
index: the position of the label set in UI
labelItems: an array of labelItems for a label set
id: id of the labelSetItem
labelName: the displayed name of the label set item
parentId: id of the parent label set item
color: the color of the label set item
labels: an array of labels for the document
labelText: label content for row-based projects. It will be null for projects other than row-based projects.
id: identifier from the applied label.
documentId : identifier for document where the label is applied.
startCellLine: starting line sentence position
startCellIndex: starting line column position
startTokenIndex: starting token index position
startCharIndex: starting character index position (relative to tokenIndex; starts from 0 again when tokenIndex is incremented)
endCellLine: ending line sentence position
endCellIndex: ending line column position
endTokenIndex: ending token index position
endCharIndex: ending character index position
layer: the layer where the token is positioned
counter: allows labels with the same name to be placed multiple times in the same position; starts from 0
type: the type of labels -> SPAN, ARROW, BOUNDING_BOX
createdAt:
Labeler: the time the label was applied
Reviewer: the time the label was accepted
updatedAt: last update timestamp on the label
Review related fields
acceptedByUserId: the user id of a reviewer who accepts the label. It will be null if no user accepts it manually.
rejectedByUserId: the user id of a reviewer who rejects the label. It will be null if no user rejects it manually.
labeledByUserId: the user id of a reviewer
labeledBy:
CONFLICT if it has not been resolved
REVIEWER if it has been resolved
AUTO if it has been resolved by meeting the consensus
Arrow label type specific fields
originId: origin id of an arrow label
originNumber: auto increment ID for origin
destinationId: destination id of an arrow label
destinationNumber: auto increment ID for destination
Bounding box label type specific fields
pageIndex: index of the page if the document contains multiple pages
nodeCount: total number of the bounding box points
x0: x coordinate of top left position of the bounding box
y0: y coordinate of top left position of the bounding box
x1: x coordinate of top right position of the bounding box
y1: y coordinate of top right position of the bounding box
x2: x coordinate of bottom right position of the bounding box
y2: y coordinate of bottom right position of the bounding box
x3: x coordinate of bottom left position of the bounding box
y3: y coordinate of bottom left position of the bounding box
pages: an array of page information for OCR project type
pageIndex: index of the page if the document contains multiple pages
pageHeight: original page height in pixel
pageWidth: original page width in pixel
comments
id: the id of the comment
parentId: the id of the parent comment - this will be filled if the comment thread has replies.
hashCode: Datasaur's code to represent comment's information, including the value being commented
message: the content of the comment
type: the type of comment. It can be SPAN_LABEL, SPAN_TEXT, ARROW_LABEL, or CELL_LABEL.
userId: the id of the user who created the comment
createdAt: the time when the user created the comment
Example (token-based with arrow)
Example (token-based with character-based labeling)
Example (token-based with bounding-box labeling)
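As a rough orientation, here is a heavily trimmed sketch of the sentence-related fields described above; identifiers and nesting are illustrative only, and an actual JSON Advanced export contains more fields:

```json
{
  "sentences": [
    {
      "id": 1,
      "content": "Alex lives in Jakarta",
      "tokens": ["Alex", "lives", "in", "Jakarta"],
      "labels": [
        {
          "l": "per",
          "layer": 0,
          "id": "INNM0ViFwo8LluMTaTIK9:0:0:0:0:0:0:3:0",
          "sidS": 0,
          "s": 0,
          "charS": 0,
          "sidE": 0,
          "e": 0,
          "charE": 3
        }
      ],
      "metadata": []
    }
  ]
}
```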
JSON Tabular is a derivative of the JSON format that is used to represent table data format (in the form of an array of objects). You can choose this format if you are working on row-based labeling.
Example
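An illustrative JSON Tabular file: an array of objects, one per row, with hypothetical column names:

```json
[
  { "id": 1, "text": "I love this product", "sentiment": "positive" },
  { "id": 2, "text": "Delivery was late", "sentiment": "negative" }
]
```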
As of version 6.43.0, Datasaur supports JSONL natively 🎉
JSONL (JSON Lines) - https://jsonlines.org/ - is a text file format suitable for storing data that can be processed one record at a time. Datasaur supports a subset of valid JSONL files, namely:
the file must end in the .jsonl extension
each record in the file must have the same structure / format. If the first record / line is an array, all the following lines must also be arrays. If the first record is an object, all the following lines must also be JSON objects.
The JSONL file format is supported for row-based projects.
Here are some sample JSONL structures that Datasaur supports:
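For instance, every line may be a JSON array, or every line may be a JSON object (the field names and values below are hypothetical):

```
["alice", 10, "2024-01-15"]
["budi", 7, "2024-01-20"]
```

```
{"name": "alice", "score": 10, "completed": "2024-01-15"}
{"name": "budi", "score": 7, "completed": "2024-01-20"}
```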
For JSONL with objects, you can have nested values, for example:
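A hypothetical file with nested values:

```
{"name": "alice", "session": {"score": 10, "completed": "2024-01-15"}}
{"name": "budi", "session": {"score": 7, "completed": "2024-01-20"}}
```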
Datasaur will render all values after stringify-ing them.
Note that Datasaur relies on the first record / line to check the header length. Any items not in the first line will not be parsed.
Here is an example of how it may affect your workflow:
Let's take the sample data above and alter it a bit, such that if someone has not completed a session, there is no completed data stored:
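For instance, reusing the hypothetical flat-object sample from above:

```
{"name": "alice", "score": 10}
{"name": "budi", "score": 7, "completed": "2024-01-20"}
```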
This file will be parsed just fine, but you will be missing the completed column, because there is no completed key in the first line.
As such, we highly recommend making your data consistent between each line, to ensure the best compatibility with our parser.
TSV_NON_IOB is a derivative of the TSV format that represents data that does not follow the IOB format - for example, B-GEO is just GEO. If your project is token-based (with or without arrows), you can choose this format for export.
A TSV_NON_IOB file contains the following data structure (this explanation is based on our example below):
#FORMAT: the file header.
#Text: the sentence representation.
1-1: the sentence-token index. The first 1 indicates the sentence number; the second 1 indicates the token number.
0-3: the character index.
TITLE[1]: the label applied. [1] uniquely identifies the annotation across lines.
Column 5: indicates layer 2.
author[2-1]: the label on the arrow. 2 indicates the arrow's token origin; 1 indicates the arrow's token destination.
Column 7: indicates layer 4.
Column 8: indicates layer 5.
Note: column 5, 7, and 8 will be filled if you label the token in the mentioned layers.
We built this format to be compatible with WebAnno.
Example (token-based)
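A rough sketch of what a token-based export might look like, based on the columns described above; the exact header lines and number of columns in a real export may differ:

```
#FORMAT=WebAnno TSV 3
#Text=Mr. Smith visited Jakarta
1-1	0-3	Mr.	TITLE[1]
1-2	4-9	Smith	PERSON[2]
1-3	10-17	visited	_
1-4	18-25	Jakarta	LOCATION[3]
```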
Example (token-based with arrows)
Universal Dependencies use a revised version of the CoNLL-X format called CoNLL-U. Sentences consist of one or more word lines, and word lines contain the following fields:
sent_id: Sentence id.
text: Sentence.
ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
FORM: Word form or punctuation symbol.
LEMMA: Lemma or stem of word form.
UPOS: Universal part-of-speech tag.
XPOS: Language-specific part-of-speech tag; underscore if not available.
FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
HEAD: Head of the current word, which is either a value of ID or zero (0).
DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
MISC: Any other annotation.
Example
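A short illustrative sentence in CoNLL-U; the ten tab-separated columns follow the field order listed above, and the morphological features are simplified:

```
# sent_id = 1
# text = Alex likes tea
1	Alex	Alex	PROPN	NNP	Number=Sing	2	nsubj	_	_
2	likes	like	VERB	VBZ	Tense=Pres	0	root	_	_
3	tea	tea	NOUN	NN	Number=Sing	2	obj	_	_
```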
CoNLL_2003 is usually used for POS tagging and named entity recognition labeling. All data files contain one word per line with empty lines representing sentence boundaries. At the end of each line there is a tag which states whether the current word is inside a named entity or not. The tag also encodes the type of named entity. Each line contains four fields:
The word
Part of-speech tag
Chunk tag
Named entity tag
Note: Importing or exporting files with the conll_2003 format can be done if you check the following task settings.
Tokens and token spans should have at most one label.
Allow arrows to be drawn between labels. Checking this setting will activate the layer feature.
You could do POS tagging on Layer 0 and NER tagging on Layer 1. If you export the file with conll_2003, the result will be as shown in the sample file below.
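A brief illustration of the four-column layout (word, part-of-speech tag, chunk tag, named entity tag); blank lines separate sentences:

```
EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC
call NN I-NP O
. . O O
```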
YOLO (You Only Look Once) is a popular object detection algorithm known for its speed and accuracy. For that reason, it is often used in real-time object detection in videos and images.
A YOLO file is a text-based format used for storing annotations and labels for object detection tasks. Each line in a YOLO file represents one annotated object/label in an image. One label in a YOLO file is represented with the following format.
Class ID: An integer representing the object’s label class. The ID starts from 0
. Each Class ID corresponds to a label class’s 0-based index/order in the label set.
Bounding Box: Four floating-point numbers representing the coordinates of the bounding box in the image. The four numbers are the following.
x_center: the x (horizontal) coordinate of the bounding box's center point.
y_center: the y (vertical) coordinate of the bounding box's center point.
width: the width of the bounding box.
height: the height of the bounding box.
The coordinates are normalized values relative to the image’s width and height.
The “0, 0” point is the top-left of the image, while the “1, 1” point is the bottom-right of the image.
Example
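An illustrative annotation file with two labeled objects; the class IDs and normalized coordinates are hypothetical:

```
0 0.5125 0.4380 0.2500 0.3100
2 0.2050 0.7215 0.1200 0.0950
```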
Limitations for Export
A YOLO file can only represent labels in one image. Due to that nature, Datasaur has limitations when importing and exporting labels from a multi-page file (e.g. PDF, TIFF) to YOLO.
If you wish to create a pre-labeled Bounding Box Labeling project with a multi-page file + a YOLO file, the pre-labeled labels will only be applied to the first page.
If you wish to export a Bounding Box Labeling project with multi-page files to YOLO format, only labels from the first page will be exported.
LabelMe is an open-source format used for annotating images with labels for object detection and segmentation tasks. Each annotation file contains metadata about the image, a list of labeled objects, and their corresponding shapes and properties.
Objects in LabelMe are represented with polygonal shapes, which are defined by a series of vertices. This format can be used as an annotation file for bounding box labeling.
A LabelMe file contains the following data structures.
filename: The name of the image file being annotated.
folder: The directory or folder containing the image.
source: Information about the image and annotation's origin.
imagesize: The dimensions of the image.
object: Array of the annotated objects within the image.
As an annotation file, here are the fields used at Datasaur:
object
name: The label or class name of the object.
deleted: Indicates if the object is deleted (0 for no, 1 for yes).
verified: Indicates if the object's annotation has been verified (0 for no, 1 for yes).
occluded: Describes whether the object is occluded (blocked) by another object (yes or no).
date: The date the annotation was made (if provided).
id: A unique identifier for the object within this image.
polygon: Represents the points making up the bounding box surrounding the annotated object.
pt: A list of points making up the polygon.
x: The x-coordinate of the point in pixels.
y: The y-coordinate of the point in pixels.
username: The annotator's username (if provided).
attributes: A string containing additional attributes for the object.
Example
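A trimmed, illustrative LabelMe annotation; the file name, label, date, and coordinates are hypothetical, and element placement follows the fields described above:

```xml
<annotation>
  <filename>receipt_001.jpg</filename>
  <folder>invoices</folder>
  <source>
    <sourceImage>hypothetical dataset</sourceImage>
    <sourceAnnotation>Datasaur</sourceAnnotation>
  </source>
  <imagesize>
    <nrows>1080</nrows>
    <ncols>1920</ncols>
  </imagesize>
  <object>
    <name>total_amount</name>
    <deleted>0</deleted>
    <verified>0</verified>
    <occluded>no</occluded>
    <date>2024-01-15</date>
    <id>0</id>
    <polygon>
      <username>labeler</username>
      <pt><x>100</x><y>200</y></pt>
      <pt><x>300</x><y>200</y></pt>
      <pt><x>300</x><y>250</y></pt>
      <pt><x>100</x><y>250</y></pt>
    </polygon>
    <attributes>text=120000</attributes>
  </object>
</annotation>
```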
Limitations for Import
Due to the free-style nature of attributes, Datasaur currently does not support reimporting them. Datasaur will only read attributes with the text key, and set its value as the label's caption.
Pascal VOC is a widely used format for annotating images with labels for object detection tasks. Each annotation file contains metadata about the image, a list of labeled objects, and their corresponding bounding boxes.
Objects in Pascal VOC are represented with bounding boxes defined by the coordinates of their corners. This format can be used as an annotation file for bounding box labeling.
A Pascal VOC file may contain the following data structures.
filename: The name of the image file being annotated.
folder: The directory or folder containing the image.
source: Information about the image and annotation's origin.
size: Dimensions of the image.
width: The width of the image in pixels.
height: The height of the image in pixels.
depth: The number of color channels in the image.
segmented: Indicates if the image has been segmented (0 for no, 1 for yes).
object: Array of the annotated objects within the image.
As an annotation file, here are the fields used at Datasaur:
object
name: The label or class name of the object.
difficult: Indicates if the object is difficult to detect (0 for no, 1 for yes).
occluded: Indicates if the object is occluded (0 for no, 1 for yes).
truncated: Indicates if the object is truncated (0 for no, 1 for yes).
bndbox: The bounding box coordinates for the object.
xmin: The x-coordinate of the top-left corner of the bounding box.
ymin: The y-coordinate of the top-left corner of the bounding box.
xmax: The x-coordinate of the bottom-right corner of the bounding box.
ymax: The y-coordinate of the bottom-right corner of the bounding box.
attributes: Additional attributes for the object (if any).
attribute
name: The name of the attribute.
value: The value of the attribute.
Example
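A trimmed, illustrative Pascal VOC annotation; the file name, label, and coordinates are hypothetical, and the attributes block follows the fields described above:

```xml
<annotation>
  <folder>invoices</folder>
  <filename>receipt_001.jpg</filename>
  <source>
    <database>hypothetical dataset</database>
  </source>
  <size>
    <width>1920</width>
    <height>1080</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>total_amount</name>
    <difficult>0</difficult>
    <occluded>0</occluded>
    <truncated>0</truncated>
    <bndbox>
      <xmin>100</xmin>
      <ymin>200</ymin>
      <xmax>300</xmax>
      <ymax>250</ymax>
    </bndbox>
    <attributes>
      <attribute>
        <name>text</name>
        <value>120000</value>
      </attribute>
    </attributes>
  </object>
</annotation>
```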
Limitations for Export
Datasaur currently does not export the image's color information to depth; it will be exported as an empty tag.
Markdown is a lightweight markup language with plain-text-formatting syntax, created in 2004 by John Gruber with Aaron Swartz. Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor.
Example
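A short illustrative Markdown snippet:

```markdown
# Project Notes

This is **bold**, this is *italic*, and this is a [link](https://datasaur.ai).

- First item
- Second item
```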
Scalable Vector Graphics (SVG) is an Extensible Markup Language (XML)-based vector image format for two-dimensional graphics with support for interactivity and animation. SVG images and their behaviors are defined in XML text files. This means that they can be searched, indexed, scripted, and compressed. As XML files, SVG images can be created and edited with any text editor, as well as with drawing software.
A bitmap is a type of memory organization or image file format used to store digital images. The term bitmap comes from the computer programming terminology, meaning just a map of bits, a spatially mapped array of bits. Now, along with pixmap, it commonly refers to the similar concept of a spatially mapped array of pixels. Raster images in general may be referred to as bitmaps or pixmaps, whether synthetic or photographic, in files or memory.
Tagged Image File Format, abbreviated TIFF or TIF, is a computer file format for storing raster graphics images, popular among graphic artists, the publishing industry, and photographers. TIFF is widely supported by scanning, faxing, word processing, optical character recognition, image manipulation, desktop publishing, and page-layout applications
WebP is an image format employing both lossy and lossless compression. It is currently developed by Google, based on technology acquired with the purchase of On2 Technologies. As a derivative of the VP8 video format, it is a sister project to the WebM multimedia container format. WebP-related software is released under a BSD license.
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable trade off between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
Portable Network Graphics is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF). PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore non-RGB color spaces such as CMYK are not supported
Graphics Interchange Format (GIF) is a bitmap image format. The format supports up to 8 bits per pixel for each image, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other images with color gradients, but well-suited for simpler images such as graphics or logos with solid areas of color. Unlike video, the GIF file format does not support audio.
The Portable Document Format (PDF) is a file format developed by Adobe in the 1990s to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images and other information needed to display it. PDF was standardized as ISO 32000 in 2008, and no longer requires any royalties for its implementation.
PPTX is a zipped, XML-based file format that is part of the Microsoft Office Open XML (also known as OOXML or OpenXML) specification, introduced as part of Microsoft Office 2007 and later. PPTX is the default presentation file format for new PowerPoint presentations. Support for loading and saving PPT files is built into PPTX.
DOCX is part of Microsoft Office Open XML specification (also known as OOXML or OpenXML) and was introduced with Office 2007. DOCX is a zipped, XML-based file format. Microsoft Word 2007 and later uses DOCX as the default file format when creating a new document. Support for loading and saving legacy DOC files is also included.
URL is a file format that contains a list of URLs. A URI, on the other hand, is a standardized format used to identify and locate resources on the internet. This format is used to create a document labeling project with the URL Viewer, so you can label web pages through Datasaur.
HTML is a markup language used to create web pages and other types of online content. It provides a standardized way of defining the structure and appearance of web pages, including text, images, and multimedia elements like audio and video. HTML files are commonly used to create and publish websites, as well as to share content across the internet. To ensure your multimedia elements are rendered properly in Datasaur, please make sure they use full URLs instead of relative paths. For example, img tags should have their src attribute set like this:
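The URL below is just an illustrative placeholder:

```html
<img src="https://example.com/assets/photo.jpg" alt="product photo">
```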
MP4 is a video file format that is widely used for streaming and sharing videos online. It provides high-quality video and audio compression, making it a popular choice for digital content creators and viewers alike.
M4A, short for MPEG-4 Audio, is a file format used to store audio data. It is a part of the MPEG-4 container format, which can hold various types of media like audio, video, and text.
MP3 is a digital audio file format commonly used for storing music and other audio recordings. It is a compressed format that allows for high-quality sound while minimizing the file size.
FLAC is a lossless audio file format that preserves the original quality of the recording. It is often used by audiophiles and music producers who require the highest level of audio fidelity.
AAC is a file format for storing music or other sounds. It stands for Advanced Audio Coding or Advanced Audio Codec. It is one of the standard formats that comes from the MPEG organization, the same people who invented MP3.
WAV is a high-quality audio file format that is often used for storing uncompressed audio recordings. It is a popular format in professional audio production and is known for its high level of accuracy and fidelity.
SRT is a subtitle file format used for adding subtitles to video content. It contains the text of the subtitles along with timing information to synchronize them with the video.
VTT is a subtitle file format that is commonly used for adding captions and subtitles to video content. It is a newer format than SRT and supports more advanced features such as text styling and positioning.
LayoutLM (Layout Language Model) is a transformer-based model from Microsoft that is designed to process and understand documents by combining text and layout information.
A LayoutLM file is a .tsv file where each row represents a labeled element within a document. Each row includes the following fields:
text: the content of the label (caption).
xmin: x coordinate of the top-left position of the bounding box.
ymin: y coordinate of the top-left position of the bounding box.
xmax: x coordinate of the bottom-right position of the bounding box.
ymax: y coordinate of the bottom-right position of the bounding box.
width: the width of the bounding box.
height: the height of the bounding box.
label: the label class name.
page_index: index of the page if the document contains multiple pages.
In LayoutLM format, labels are expected to contain only single-word entries. If a label has a multi-word or multi-line caption, it will be disregarded by default. However, there is an exception for labels with the same number of words as associated shapes (e.g. merged labels): such labels are processed as multiple, distinct labels, each containing a single word and its corresponding shape.
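An illustrative excerpt; the assumption here is that the tab-separated columns appear in the order listed above, with a header row, and all values are hypothetical:

```
text	xmin	ymin	xmax	ymax	width	height	label	page_index
Invoice	120	80	310	120	190	40	title	0
TOTAL	90	900	210	940	120	40	field_name	0
```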