1.1. About this manual

1.1.1. Copyright Notice

All rights reserved. No part of this document may be reproduced, copied or transmitted in any form or by any means, electronic, mechanical or otherwise, without the permission of Marquise Technologies sàrl. If you are interested in obtaining permission for reproduction or excerpts, please contact us at

1.1.2. Trademarks

Marquise Technologies, the company’s logo and the products’ logos are trademarks of Marquise Technologies sàrl pending registration. All other trademarks mentioned herein are the property of their respective owners.

1.1.3. Notice of Liability

The information in this document is distributed and provided “as is” without warranty. While care has been taken during the writing of this document to make the information as accurate as possible, neither the author nor Marquise Technologies sàrl shall be held responsible for losses or damages to any person or entity resulting from the use of the instructions given in this document.

1.1.4. Conventions

This documentation makes use of several symbols and typographical conventions in order to differentiate various paragraphs from standard descriptive text. Here is the list of symbols and typographical styles used:

INFO : Additional information about the current topic.

WARNING : Important information that you should always take into consideration.

TOOL-TIP : Additional information about tool usage.

1.2. About ICE

ICE is a Reference player for the playback and QC of high-end content in any format from SD to 4K, including DCP as well as Interoperable Master Format (IMF) packages. Validation tools, audio & image analysis and support for automated QC reports complete the toolset.

Dedicated to post houses, broadcasters, archives and cinema operators, ICE natively plays all the formats used in the industry during the production, post-production and distribution phases, and also supports ACES and HDR content.

1.3. Documents & Resources

Information such as product brochures, white papers and video tutorials is referenced here:

The release notes, the latest available version of the software and the Knowledge Base are available on the Marquise Technologies support portal.

1.3.1. Creating an account on the Support Portal

Register yourself on our support portal to get access to the latest software releases and Knowledge Base articles.

1.4. Contacting Support

Support is available for customers under a valid support and maintenance program.

All the Support requests need to be sent using our ticketing system, accessible from the Support Portal.

To report an issue or ask a question related to product support, please go to TICKETS to create a new ticket.

Please report only one question or issue per ticket and indicate the version of the software you are using.

The more information you give us, the better we can help.

Please note that even urgent tickets are processed in order of arrival.


This chapter covers high-level information about ICE, and in particular:

  • Hardware Requirements

  • Software installation

  • License installation

2.1. Hardware Requirements

The way ICE plays back media depends heavily on the capabilities of the chosen hardware. Please make sure to select a workstation according to your needs.

Operating System

Microsoft Windows 10 64 bit

From January 2021 onwards, Windows 7, Windows 8 and Windows 8.1 are deprecated.

Supported GPU

NVIDIA P4000, P5000, P6000,
GTX 1080, GTX 1080ti,
RTX 4000, RTX 6000, RTX 2080, RTX 2080ti,
RTX 3080, RTX 3080ti, RTX A6000

All configurations MUST have 2 GPUs when the GPU is used for decoding JPEG2000 (DCP & IMF).
A single-GPU machine will not reach peak performance.

To ensure the proper functioning of NVIDIA GPU cards, please keep your drivers up to date and refer to the Multi-GPU Configuration article of the Knowledge Base.

Supported Video IO cards

Bluefish444 Kronos, Bluefish444 Supernova S+, Bluefish444 Neutron,
AJA Kona 5, AJA Kona 4, AJA Corvid 88,
Blackmagic DeckLink Studio 4K


Whatever type of storage is chosen (internal, NAS or SAN), its capacity and bandwidth will impact the playback speed of ICE.

Allow a 30% overhead on the bandwidth necessary for a given media.
For example, to ensure real-time playback of 800 Mbit/s content, the network must sustain 1040 Mbit/s.
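The 30% rule above is simple to compute. As an illustrative sketch only (not part of ICE), the required sustained bandwidth for a given media bitrate can be derived like this:

```python
def required_bandwidth(media_mbit: float, overhead: float = 0.30) -> float:
    """Sustained network bandwidth (Mbit/s) needed for real-time playback,
    applying the recommended 30% overhead."""
    return media_mbit * (1.0 + overhead)

# 800 Mbit/s content needs roughly 1040 Mbit/s of sustained bandwidth
print(required_bandwidth(800))
```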


A minimum of 1 screen with 1920 x 1200 resolution is required.
The second screen for dual display mode can have a resolution of 1920 x 1080.

For ease of use and optimal rendering, we recommend a DreamColor Z27 series or similar type of screen of 2560 x 1440.

2.2. Recommended Hardware configurations

2.2.1. Recommended Systems for DCP Playback



CPU: Dual Intel Xeon Gold 6130 processors or higher

RAM: Minimum 64GB

Video IO: Bluefish444 Supernova S+, AJA Kona 4 or higher

2.2.2. Recommended Systems for IMF Playback



CPU: Dual Intel Xeon Gold 6134 3.20GHz processors or higher

RAM: Minimum 128GB

Video IO: Bluefish444 Kronos ēlektron, AJA Kona 5, AJA Corvid 88

Supermicro SYS-7049GP-TRT

CPU: Dual Intel Xeon Gold 6134 3.20GHz processors or higher

RAM: Minimum 128GB

GPU: Minimum 3 x NVIDIA RTX 2080ti

Video IO: Bluefish444 Kronos ēlektron, AJA Kona 5, AJA Corvid 88

2.3. Video Output capabilities

ICE is able to achieve different video output workflows, according to the different supported devices.

Output Workflows and Recommended Devices

Dual 4K 60p in SDI:
  - AJA Kona 5 (12G SDI)
  - AJA Corvid 88 (3G SDI)
  - Bluefish444 Kronos ēlektron

Simultaneous 1 x 4K 60p in SDI and 1 x 4K 60p in HDMI 2.0 (including Dolby Vision tunneling):
  - AJA Kona 5 (12G SDI)

Single 8K in SDI:
  - AJA Kona 5 (12G SDI)
  - AJA Corvid 88 (3G SDI)
  - Bluefish444 Kronos ēlektron

Single 8K in HDMI 2.0

Single 4K 120p in HDMI 2.0

Video Over IP (SMPTE ST-2110):
  - Bluefish444 Kronos optikós
  - AJA Corvid 88 (3G SDI) + AJA IPT-10G2-SDI

2.4. Software Installation

The latest release of ICE can be downloaded from the Support Portal.

Follow the instructions of the Setup wizard to install the software on your computer:


2.5. License installation

After having installed ICE, you need to launch the software.


Select the "Install License" button.


You should have received two files: ice_license.dat and ice.lic. Sometimes, for special purposes, these files are delivered under different names; simply rename them as above.

2.5.1. Installing Primary License for ICE

As a first step, import the ice_license.dat file. Browse the file system to the place where you stored the licenses. The file is easier to find if you set the correct ".dat" filter in the "File Type" line:


If the license was imported successfully, new functions will appear in the starting menu:


2.5.2. Installing Auxiliary License for ICE

The Auxiliary License needs to be imported as well, so select the “Install License” button again to install the ice.lic file:


Again, browse the file system to where you stored the licenses, and select “.lic” as the file type to retrieve the file.


Your software is now correctly activated.


You can check your license and plugins by pressing F12 (About) or by pressing ctrl + right click on the viewer, when a project is open.



This section covers high-level information about ICE, and in particular:

  • ICE User Interface Structure

  • Interface Basics

3.1. Software Organization

ICE is built around the TimeLine. This backbone is the principal module, from which all the QC features are accessible.

The secondary module is the Project Management.

3.1.1. Modules

This structure has been specially designed to be as efficient as possible to support long hours of work.

A module in ICE is displayed on the full screen of the user interface. Each module interface differs from the others.

Each module can be accessed from any other module of the application using either keyboard shortcuts or the Module Radial Menu.

  • To access Project press F1

  • To access TimeLine press F3

3.1.2. Sub-modules

From the TimeLine module, it is possible to access Sub-Modules.

These feature-specific panels or windows are accessible from the bottom menu.

3.2. Overview of the Project Module

In ICE, a project designates the whole structure of directories and files that together make up your work. The project can then be seen as a shell for the various objects that you work with or work on, including assets (video, audio, timed text) and metadata.

Managing the content to playback per project is necessary when you work with component based media, and it allows ICE to support supplemental packages and multiple compositions when playing back DCP and IMF content. The general hierarchy of a project is based on various key elements that are used while working on a project. These elements are listed in the table below:

Project settings

Parameters that define the project at the top level such as default composition settings, user interface settings, video IO settings, etc.

Project media bin

The Project media bin references media before it can be used in a composition. It is accessible from the Media tab of the Command Panel.


Compositions

Each composition is an actual timeline with video and audio capabilities. Any playback will be done within a composition.

Various metadata

Other metadata stored in the project.

The Project Module is the first module to appear when starting ICE: you will have the choice to open an existing Project, or to create a new one.

This module is where the parameters for the project, such as its properties, are set.

Project Settings

This is also the place where you can set user interface preferences or select a control panel.

Project parameters can always be modified at a later moment, as you work.

This module is detailed under the chapter Project.

3.3. Overview of the TimeLine Module

The TimeLine module is the most important module of ICE. From there all the necessary tools such as the editing timeline, conforming, primary color grading, calibration, color analysis and playback functions are available.

It is also from this module that all the mastering capabilities and the export of deliverables are available.

The TimeLine Module workspace is composed of different elements:

  • The Menu Bar

  • The Command Panel

  • The Image Viewport

  • The Timeline


3.3.1. The Menu bar


This menu bar gives access to various capabilities of ICE, related to content management or specific features.

Content management



Access to Projects Manager / Exit the Project



Access to the Settings for the Project


Import metadata files (EDL, cutting list, color decision list, QC report, etc.), as well as content packages (DCP, IMF) and KDMs.


Export metadata files (EDL, cutting list, etc.) and reports


Access the Conforming panel



Show / hide the Events Viewer


Access to Color Grading


Access to Video Pipeline Diagram

3.3.2. The Command Panel

The Command Panel gives access to the assets of a project (media, compositions, metadata) as well as some tools for working with those assets.

From there you will also be able to import and manage the media for the project.

Accessing the Command Panel

By default, the Command Panel is displayed when a new project is opened. You can hide it using the COMMAND icon below the panel itself:

Command Panel Menu

On the left of the panel, a vertical menu bar gives access to the different features and tools:


manage MEDIA of the project


display the different compositions of the project


manage access to external repositories




display assets and compositions' metadata


manage the COMPOSITION SETTINGS : Output format, color management, overlays


provides information related to Storage, GPU and video IO card performance. See the section CONSOLE

Resizing the Command Panel

You can resize the Command Panel window horizontally and vertically:

  • Position the mouse cursor at the edge of the window and, when it changes appearance, click and drag to the desired size.


3.3.3. The Image Viewport

The Viewport layout and capabilities are detailed in the chapter VIEWPORT

3.3.4. The Timeline

The TimeLine behavior is detailed in the chapter TIMELINE.

3.4. Interface Basics

Whatever module or sub-module you are in, you may encounter the following interface displays.

3.4.1. Cursors

ICE mouse cursor changes appearance if an action with the mouse is possible:


Cursor in normal mode


Possibility to move an interface element horizontally


Possibility to move an interface element vertically


Possibility to extend or resize an interface element


Cursor in move mode


Possibility to set an IN point (in the timeline)


Possibility to set an OUT point (in the timeline) or to extend a clip duration (from the last frame of the clip)


Cursor in trim mode


Indicates that the application is busy performing an operation

3.4.2. Contextual Menus

A contextual menu is a menu in a graphical user interface (GUI) that appears upon user interaction, such as a right-click. This menu offers a selected set of choices that are available in the current state, or context, of the application.

The Contextual Menus are accessible from every module of ICE, at the current mouse cursor location.

Calling a contextual menu when the mouse cursor is located on a specific panel or interface element is not always possible. In that case, move the cursor to the nearest empty part of the user interface to call the desired menu.

ICE uses two different contextual Menus: Radial Menu and Drop-down Menus.

Module Radial Menu

The Modules Menu allows navigation through the different modules.

Calling the Module Radial Menu also allows you to exit the ICE application.

  • Press Ctrl + right-click to display the Module menu.

Drop-down menu

The drop-down menus give quick access to the possible actions for the specific area the mouse is located in.


3.4.3. Dialogue Windows

Dialogue windows are represented by large rectangles opening over the module you are in.

They are accessible using interface buttons.

Dialogue windows can have several tabs and sometimes sub-tabs where the user can set different parameters.


3.4.4. Warning Messages

Warning messages can appear in the different modules of ICE.

These messages interrupt the current work in order to inform the user about a critical point.

Warning messages always require an action from the user: press OK to continue or ESC to cancel.

Warning messages

3.4.5. Keyboard Shortcuts - Help

A lot of keyboard shortcuts are available in ICE.

  • Press the keyboard H key to display the Shortcuts List available for the current Module (available from the TimeLine module only)

A recapitulation of the available Keyboards Shortcuts for ICE is available in Appendix Keyboard Shortcuts.

3.5. Starting ICE

On opening, ICE displays a Radial Menu:

  • From the menu choose New

The New Project window appears.

  • Add a name and select your settings then click OK


Details about the settings can be found in the Project Management chapter.

The new project opens on an empty TimeLine.

Details about the TimeLine can be found in the TimeLine chapter.

3.6. Closing ICE

  • Press Ctrl + right mouse click to display the menu and select Exit to leave the Project.


Once you have exited the Project, choose Exit to close ICE.



4.1. Managing Projects

When the application is started, it opens on the Start menu, allowing you to either create a new project or open an existing one.


This Start Menu also permits exiting the application’s session.

The Import shortcuts available are described in the sections DCP Import and IMF Import.

4.1.1. Creating a New Project

  • To create a new project choose New on the Module Radial Menu.

  • Click in a text field to edit it and type your project information:

new project

The following characters are forbidden:
< (less than)
> (greater than)
: (colon)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)

  • Confirm with [OK] or cancel with Esc.

Some basic settings for the project can be chosen from this panel; however, the full list of settings is managed from the Project Module. Detailed information about these settings is given in the following sections.

Once the new project is created, the application switches to the TimeLine module.
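For scripted workflows, the forbidden-character rule above can be checked before a project name is submitted. The following standalone sketch is illustrative only and is not part of ICE:

```python
# Characters ICE forbids in project names (see the list above)
FORBIDDEN_CHARS = set('<>:"/\\|?*')

def is_valid_project_name(name: str) -> bool:
    """Return True if the name is non-empty and contains no forbidden character."""
    return bool(name) and not (set(name) & FORBIDDEN_CHARS)

print(is_valid_project_name("Feature_Reel_01"))  # True
print(is_valid_project_name("Reel 1/2"))         # False: contains '/'
```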

4.1.2. Opening an existing Project

  • On the Start Menu, select Open.

open project

On the left side of the panel the list of all the projects is displayed.

On the right side is the list of all the compositions saved for the selected project.

  • Select the project or the composition directly and click [OK].

  • A double click on the project will open the last saved composition.

4.1.3. Renaming a project

You can rename a project in the Project Settings / General / Title:

  • double-click on the text field to edit it

4.1.4. Deleting a Project

  • To delete a project, select it in the project list and click the delete icon at the bottom of the panel.

4.2. Project Settings

You can modify the Project settings at any time. The settings chosen will not affect the project’s existing compositions; they will only assign default parameters to newly created compositions.

  • To modify the Project settings, use the Project Settings icon in the menu bar of the TimeLine or press F1.

Project Settings

The Project Manager appears in a panel, composed of different tabs:


General

General information of the project (e.g. Production company, EIDR, ISAN, etc.)

Properties

Set the properties and type of the project.

Media

Manage the default media settings.

Video Output

Configure the video output settings of the Video I/O board (e.g. Bluefish444 or AJA).

Mastering Display

Setup the communication with the Reference Monitor.

User Interface

Set user interface preferences.

Control Surface

Connect and setup an existing control surface.


Misc

All the miscellaneous settings, like the auto-save setting.

Dolby Vision

Setup the properties of the CMU device.

VTR Emulation

Configure settings to turn ICE into a virtual telecine.

To leave the Project settings, click OK to go back to the TimeLine.

4.2.1. General Settings

For all projects, it is possible to enter additional information or metadata that can be used as reference information to identify a project:

Insert custom information like Production Company, Director’s name, etc.

  • Click on the edit field and type desired information.

4.2.2. Properties

In the Properties tab, set the default dimensions, frame rate or duration for all new compositions in the project.

Each new Composition of the project will be created with these parameters.

Properties tab

The settings of a particular composition can be modified at any time, see Composition Settings chapter.


Different output formats are available as presets:

  • Select the desired preset to automatically set the dimensions, frame rate and pixel aspect of the media.

Properties tab
Custom Output format

To define a custom format, select Custom in the Preset drop-down menu, and enter the desired format, frame rate and pixel aspect:


  • Click on the width digits to edit the text and enter the desired value.

  • Press tab to edit the height digits.

  • Finish with enter to save the new dimensions.

You can also change the dimensions by dragging the slider in the cyan bar one way or the other.

Frame rate

Choose whether the project will be played at 24, 25, 29.97 or 30 frames per second. It is important to bear in mind the destination of the final result when setting the frame rate, as it will affect the playback speed.

Frame rate
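To see how the chosen frame rate affects playback speed, consider one hour of content shot at 24 fps. This quick sketch is for illustration only (not ICE code):

```python
def runtime_minutes(frame_count: int, playback_fps: float) -> float:
    """Runtime in minutes when playing frame_count frames at playback_fps."""
    return frame_count / playback_fps / 60.0

frames = 24 * 60 * 60  # one hour of content shot at 24 fps
print(runtime_minutes(frames, 24.0))  # 60.0 minutes at native speed
print(runtime_minutes(frames, 25.0))  # 57.6 minutes: plays ~4% faster
```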

Pixel Aspect

Select the desired aspect ratio of the Custom project size: 4/3, 16/9, or any other ratio from 1.00:1 to 4.00:1.

Pixel Aspect

By default, the composition duration is set on 24h.

  • To change the project composition duration, click on the edit field and enter the desired composition duration for the project.


It is possible to set the beginning of the composition (the start) at a specific TimeCode.

By default, the Start is set on 00:00:00:00.

This setting affects the current composition only. To modify the Start TC for all new compositions in the project, see Default Start.

  • Choose whether your project is a Stereo3D project or a normal 2D one. By default, the project is set to Normal 2D.

Audio config

Allows you to choose a default audio configuration for all new compositions in the project.

Digital Intermediate settings

The following settings are useful only when working from film stock.

Film Stock

If you are working from film then select the stock type via the drop-down menu.

Film Type

Setting a Film type will define how the TimeLine is calculated in Feet + Frames (please refer to the chapter Changing the Timebase display).

Available film types are: 8mm and super 8mm, 16mm, 35mm (2, 3, 4 or 8 perforations) and 65mm (5, 8, 12 or 15 perforations).
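As an illustration of the Feet + Frames timebase, the conversion uses the standard frame counts per foot (16 for 35mm 4-perf, 40 for 16mm). This is a sketch only, not ICE code, and the display format shown is an assumption:

```python
def feet_and_frames(frame: int, frames_per_foot: int = 16) -> str:
    """Convert an absolute frame count to a Feet+Frames string.
    The default of 16 frames per foot corresponds to 35mm 4-perf."""
    feet, frames = divmod(frame, frames_per_foot)
    return f"{feet}+{frames:02d}"

print(feet_and_frames(100))      # "6+04" on 35mm 4-perf
print(feet_and_frames(100, 40))  # "2+20" on 16mm
```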

Film Polarity

Click on the film polarity in use. The selected one is highlighted in cyan.

Printer Lights

Set the printer lights to balance the color and density of the film stock. PL Point and PL 1/2 Point allow you to refine the values more accurately.

4.2.3. Media

Media Directory
  • The Media Directory allows you to define a default location for all media for this project.


When you extract the essence of a DCP package, this is also where it is stored.

Preview Directory

Choose where to store the video thumbnails (media previews) automatically generated when video content is referenced in a project. By default, they are stored in the Project directory.

Default frame rate

Some image sequences (e.g. DPX) do not carry frame rate information. In such cases, it is necessary to define a frame rate for this content.

Choose a frame rate from the drop-down menu for your image sequence before importing media into the library.

Default start

Defines a composition start by default for each new timeline in the Project.

Snapshot Directory

Defines a directory to save the snapshots.

See also the chapter Snapshots.

Name Template

Modifies the default naming convention for the snapshots.

File Request

a file saving window will appear to let you manually save the snapshot each time you take one:


uses default settings to automatically save the snapshots.


Automatically applies the LUT set for the Display or for the SDI output to your snapshot.

4.2.4. Codecs

This tab allows you to set the parameters for working with JPEG2000. Depending on the performance of your workstation, you can choose to use CPU or GPU power.

In CPU mode, you can also define the maximum number of CPU threads engaged in the encoding or decoding process.

CPU encoding: the number of threads varies with the available shared memory.

4.2.5. Video Output

In this panel, you can select the properties of the signal that you wish to output.

Once the workstation has been properly connected to a display device (i.e. A DCI compliant projector or a SDI reference monitor), you can setup the video output of your project to obtain the appropriate signal on your device.


Resolutions displayed in the list are relative to the formats supported and delivered by the I/O card.

Basic drivers are installed with the application. The .msi file installs the minimum configuration needed to run the video card. To access the video card’s proprietary tools, download the drivers from the manufacturer’s website. It is highly recommended to keep the drivers up to date.

The Video Output setting in the Project only defines the output through the video IO card. It has no relation to the project or composition output properties.

4.2.6. Mastering Display

This tab allows you to configure the communication between your Reference monitor and the application: the ST-2086 metadata selected in the Mastering Display tab of the COMPOSITION SETTINGS can be automatically sent to the Reference monitor, which is configured accordingly, sparing the operator the use of the monitor’s manual menu.

The following monitors can be remote controlled:

  • Canon HDR 4K monitors

  • Eizo HDR 4K monitors

  • TVLogic HDR 4K monitors


To physically establish the communication between an EIZO Prominence monitor and your workstation, connect a USB cable from the USB downstream port of the PC to the USB upstream port of the monitor before launching the software. The USB hub function is set up automatically upon connection of the USB cable.

When the application is launched, it takes control of the settings of the EIZO monitor. Each time a composition is loaded, the selected mastering display parameters are communicated to the EIZO monitor, which is automatically configured accordingly.

A change of mastering display preset will refresh the monitor. The new parameters used are indicated temporarily on the monitor.

TVLogic LUM-310R

To establish the communication between a TVLogic LUM-310R monitor and your workstation:

  1. Get a DB9 Female to RJ45 Male cable.

  2. Connect the cable from the RS-232 port of your computer to the RS-422 IN port of the TVLogic display.

  3. Open the .cfg file for your application in the directory C:\Users\$SessionName$\AppData\Roaming\Marquise Technologies\session.

  4. Add the following lines between <MTSessionConfig> and the </MTSessionConfig> tag:

    <DisplayMonitorConfig id="tvlogic">

The CommPort number to indicate depends on the CommPort available on your system.

The DeviceId number must be the same as indicated in the monitor’s menu settings in the GPI tab / Monitor ID. Change one or the other accordingly. The default DeviceId is number 1.

Save the file before leaving.
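Putting steps 3 and 4 together with the CommPort and DeviceId values described above, the edited session file might look like the sketch below. Attribute names other than id are assumptions for illustration; check the exact line provided with your installation:

```xml
<MTSessionConfig>
    <!-- Hypothetical attributes: CommPort and DeviceId as described above -->
    <DisplayMonitorConfig id="tvlogic" CommPort="1" DeviceId="1"/>
</MTSessionConfig>
```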

4.2.7. User interface

Select your preferences for the user interface:

Screen Layout

Allows you to display the User Interface on 1 or 2 monitors (Dual).
For now, only scopes can be placed on the second screen.

You must restart the software to apply this change.

Timeline Auto-Hide

automatically hides the Timeline during playback

Cursor Auto-Hide

automatically hides the mouse cursor during playback

Start play on Load

automatically starts playback of the content loaded on the timeline.

Resume Play after Scrub

resumes the playback after a scrub of the playhead with the mouse.

Keep Clips after Drops

keeps the clip attached to the mouse after performing a Drag and Drop on the timeline. This is useful when adding the same clip several times to the timeline.

Color Wheel

Allows you to choose the orientation of your color-grading wheels.

Default Keyframe

Selects the default Keyframe and interpolation settings based on your needs.

4.2.8. Control Surface

When using a control panel, this is where you can connect and configure it.

The Tangent Devices panels are supported.

  • Select the desired device from the drop-down menu.

Please be sure to have installed the control panel’s drivers beforehand.
Download "Marquise RAIN" from Tangent’s website.

4.2.9. Misc

Defines the auto-save settings.

4.2.10. Dolby

This section defines the default settings for Dolby technologies.

Dolby Vision

Allows you to configure Dolby Vision settings.

CMU Type

Select whether you want to use the Dolby iCMU (internal CMU, aka Software CMU) or a Dolby eCMU (external CMU device).

See also CMU in the HDR chapter.


If eCMU is selected, then enter the connection information between your workstation and the eCMU:

IP Address

Type the IP address of the connected eCMU.

Port number

Type the port number of the connected eCMU.

Device ID

Move the slider to change the device ID of the connected eCMU or double-click on the device number to edit it.


To use the iCMU, you need a valid Dolby Vision license from Dolby Laboratories.

  • To install the Dolby Vision license, choose Install License in the Starting Menu:

  • Select Dolby Vision in the File Type drop-down menu and browse your system to the location of your license.

  • Validate with OK.

Once the license is installed and its validity checked, the Dolby Vision license information is displayed:

Dolby Atmos

Defines a default playback output for Dolby Atmos tracks.



ICE allows you to import a large variety of media:
flat files, packages such as DCP, IMF or iTunes, image sequences, sidecar PDF and XML files, EDLs, camera magazines, etc.

In ICE, media are managed in the Media tab of the Command Panel.

5.1. Media Tab

The Media tab is where the essences for a specific project are manually or automatically referenced from disks or a SAN.

The essences can be any type of video, audio, or subtitles files.

The Media tab also permits the organization of the content for a project. From there, assets can be selected to be loaded on the timeline.

The MEDIA tab is composed of 2 columns: on the left, the folder tree; on the right, the related content, as in a standard file browser interface.

  • To adjust the width of the columns, use the vertical slider:


5.1.1. Organizing the Project Media


You can create folders to organize the project media according to your needs.

  • To add a folder, position the mouse in the left column and press the right mouse button. Choose New from the drop-down menu:


Alternatively you can use the icon "create new folder":


When adding other folders at root level, make sure that the root project is selected (highlighted in blue) and not a folder.

  • To add a sub-folder, select the desired folder and press the right mouse button. Choose New from the drop-down menu:


Alternatively you can use the icon "create new folder".

  • To delete a folder or a sub-folder, select the desired folder and press the right mouse button. Choose Delete from the drop-down menu.

  • To rename a folder or a sub-folder, select the desired folder and press the right mouse button. Choose Rename from the drop-down menu.

Moving media

You can rearrange the media in the folders once they have been imported into the Media bin:

  • Using the mouse, click on the media, then lift it with a quick swipe of the left mouse button. This process is called Lift, Carry & Drop. The clip will be attached to the mouse.

  • Select the new folder location:

  • In the folder bin, use right mouse button and select Move Here:


5.2. Import of Media

There are two ways to import media for a project:

  • Drag & Drop from Windows file system

  • Import function using the internal smart file browser

5.2.1. Drag & Drop files from Windows' file system

Drag & Drop from the Windows file system allows you to easily import flat files, packages (DCP, IMF, iTunes, etc.), image sequences, camera magazines, compositions, sidecar files, etc., as well as entire directories.

  • Hit the Windows key or press the button + at the bottom right of the panel to display the Windows file browser.

  • Select your files and drop them anywhere in the application’s window.

Once the media are properly referenced, they are displayed in the Media tab.

  • To import content into a specific folder of the project media, first select the desired folder before dragging & dropping the content.

Drag & Drop sequences

To import image sequences like DPX or TIFF, drag & drop the first image of the sequence and the full sequence will be imported automatically.

Because there is no frame rate information in DPX or TIFF images, first set the correct frame rate in the Properties of the Project.

Drag & Drop packages

The import of component-based packages like DCP, IMF or iTunes deliverables is also possible using drag & drop:

  • To import a package, drag & drop the entire package’s folder.

5.2.2. Import Panel

The Import panel is an internal smart file browser that has the capability of understanding complex media files and packages.

This is also the function you will use to import specific metadata files like EDLs or QC reports in XML.

Access import from the Media tab
  • Position the mouse in the left column and press the right mouse button. Choose Import from the drop-down menu:

  • Alternatively double-click on an empty area of the left column

  • To import content into a specific folder of the project media, first select the desired folder before importing the content.

Access from the IMPORT button

Click on the IMPORT button on the menu bar to display the Import panel.

Import panel

The Import panel is composed of 2 areas:

The left column, used to quickly navigate through the physical file volumes the system is connected to.
Bookmarks and Aliases are also displayed there.

The folder tree column, showing folders' organization within a volume.

When a package such as a DCP or an IMF is selected, the CPLs and the important metadata of the package are displayed automatically:


In order to ease the navigation, you can display only a specific file type.

  • Click on the file type to open the drop-down menu:

  • Select the desired type of file from the list:

  • Validate the import using OK.

Additional import methods with validation process are also available: refer to DCP and IMF Import sections for more information.

5.2.3. Delete Media

  • To delete a media clip, select it and press the - button at the bottom right of the Command Panel.

Alternatively, when the clip is selected, right-click and choose Delete from the drop-down menu.

Deleting a media in the Media tab removes it from the project media bin; it does not physically delete the file.

5.2.4. Import of Directory

It is possible to import an entire directory into the project media bin: the source directory organization, with the folders and subfolders and their related files, is reproduced in the media bin.

This functionality is particularly useful when importing a complex folder organization such as a camera magazine or DPX sequences organized in reels.

  • To import a directory, position the mouse in the left column and right-click. Choose Import Directory from the drop-down menu:


Navigate in the file browser to the desired directory, and validate the import using OK.
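The mirroring described above amounts to a recursive walk of the source directory; `mirror_tree` below is a hypothetical helper for illustration, not ICE's importer:

```python
import os

def mirror_tree(root):
    """Reproduce a source directory's organization -- folders, subfolders
    and their files -- as a nested structure, the way an imported
    directory is mirrored in the project media bin (illustrative sketch)."""
    tree = {"folders": {}, "files": []}
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            # Recurse so each subfolder keeps its own files and children
            tree["folders"][entry] = mirror_tree(path)
        else:
            tree["files"].append(entry)
    return tree
```

Running it on a camera magazine folder, for example, would yield one nested entry per reel with the frame files listed under each.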

5.2.5. Media views

The operator can display the media assets using different views:

  • Thumbnails

  • List details

  • List tiles

Click on the View icons on the upper right of the Command Panel to change the views:


5.2.6. Encrypted content

When importing an encrypted DCP, the assets are displayed with a closed lock and the media preview is black:


After importing the KDM (See Import Encrypted DCP), the locks are open and the media preview is enabled.


5.3. Relink Media

When a media referenced in a project has been moved from its original physical location, the application cannot access it and it is no longer possible to work with it.

In this case, a warning is displayed on the clip:


It is then necessary to proceed to a manual relink of the path to the files.

  1. Right-click on the clip in the Media tab to display the drop-down menu.

  2. Browse the file system to the new location of the file

  3. Click OK to validate

Your media is now relinked.
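Conceptually, relinking simply points the clip's stored reference at the file's new location while keeping the file name; a minimal sketch (the `relink` helper and paths are hypothetical):

```python
from pathlib import PurePosixPath

def relink(old_reference, new_directory):
    """Rebuild a clip's path reference after the file has moved,
    keeping the original file name (illustrative sketch only)."""
    return str(PurePosixPath(new_directory) / PurePosixPath(old_reference).name)

print(relink("/mnt/san_a/reel1/shot_0001.dpx", "/mnt/san_b/reel1"))
# -> /mnt/san_b/reel1/shot_0001.dpx
```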

5.4. Direct Playback from the Media tab

  • A double click on a media will automatically load the content on the timeline.

  • To start playback automatically when content is loaded on the timeline, enable the option Start play on Load in the Project Settings, User Interface.

  • To load media on the timeline manually, refer to the section Adding clips to a composition.

  • To start the playback of an IMF or DCP package, see the Composition tab below.

You can also right-click on a clip in the library to display the drop-down menu options:

Load as Composition

Load the clip on the timeline in a new Composition

Load as Source

Load the clip in the Source Viewport


Save the media metadata in EBU Core XML


Delete the clip from the project media bin


To relink a media

Open System Location

Opens the Windows file browser at the physical location of the media

Select All

Select all the clips in the current directory

Deselect All

Deselect all the clips in the current directory

5.5. Compositions Tab

This tab displays all the compositions available in the project, whatever their type: CPLs of an IMF or DCP package as well as project compositions.

5.5.1. Temporary Compositions

By default, when a media imported in the Media bin is played directly, a Temporary composition is automatically created.


This composition will be replaced by a new temporary composition if another media is chosen to be played, unless it is manually saved.

  • To save a temporary composition, right-click on it and enter a name in the text field:


5.5.2. Project Compositions

All the compositions saved for a project are accessible from this tab:


5.5.3. CPLs

When importing a DCP or an IMF package, CPLs are listed as compositions:


5.5.4. Sidecar assets

When a package contains SideCar files, they are displayed in the list Sidecar Assets.

  • To open a sidecar asset, double-click on it. If Windows recognizes the file type, the associated application is launched automatically.

5.5.5. Processing Graphs

OPLs present in an IMF package are listed under the Processing Graphs list.

5.5.6. Playback of the Composition

  • A double click on a Composition will automatically load all the assets referenced in the composition on the timeline.

5.5.7. Toggle compositions in the project

When a project contains several compositions, you can quickly toggle from one to another:

  • To toggle between compositions, double click on the desired one: the timeline automatically switches compositions.

5.5.8. Adjust columns

You can modify the width of the columns in the compositions list. Position the mouse next to the column you want to modify until the cursor changes appearance, then slide:


Read more on Compositions

5.6. Media Inspection

The Metadata Tab of the Command Panel allows the inspection of the metadata embedded in a file: COMPOSITION, CLIP, STATIC and DYNAMIC metadata.


settings of the composition


technical metadata of the clip


technical or descriptive metadata valid for the entire clip


technical or descriptive metadata valid for a frame or a scene.

The metadata displayed are those of the file loaded on the timeline. When multiple files are present, the metadata are displayed for the content on the active layers at the playhead location.

To inspect the metadata of a clip in the media tab, without modifying the current composition on the timeline, you can load it in the Source Viewport.

5.7. Media Metadata Exports

It is possible to export the metadata of a media file.

  • Right click on a media in the Control Panel / MEDIA tab and choose Save…​ from the drop down menu:


The Media Metadata Export window opens.

In the file type, select the desired Metadata export:


exports the media metadata in XML following EBUCore specification

Media Report

exports a Media Report in PDF or XML format

  • Add a name for the file, browse the file system to the desired location and validate with OK.

The Media Metadata Export window can also be accessed from the timeline:

  • Right click on the desired clip on the timeline and choose Save…​ from the drop down menu:



The TimeLine is the core feature of ICE.

From the TimeLine, you have access to a variety of tools for playing and QCing any type of content.

In this chapter you will learn the following:

  • Basics for the timeline

  • How to create new compositions

  • How to add clips to the compositions

  • How to load and save compositions

  • How to use the Event Viewer

6.1. Definitions

Below is an overview of the vocabulary frequently used in the Timeline section:


A project is a structure that is made of several compositions.


A composition is a structure made of different sorts of media assets: video, audio and timed text (e.g. closed captions or subtitles). The metadata associated with the assets are also part of the composition.
It is also defined by a format (width, height, bit depth, frame rate, sample rate, etc.) and a duration.
In addition other composition properties exist such as a marking zone, markers, PlayHeads, etc to help the editing process. Each sort of media is organized into layers: the video layers (located in the video layer stack) and the audio layers (located in the audio layer stack).
Editing of video layers and audio layers can be done separately or jointly.
Layers are composited together in their category (video layers together, audio layers together).
The result of a composition is a video stream and one or more audio streams (one for mono, two for stereo, etc). The rendering of a composition generates a new clip.


A layer is a placeholder for tracks. The number of tracks depends on the layer category, audio, video or other.

Video Layer

A video layer is made of 2 tracks: V1 (also called the A-Roll) is the main video track of the layer. V2 (also called the B-Roll) is the secondary video track and is generally used only when transitions are involved.

Audio Layer

An audio layer is made of one or several tracks, depending on the audio configuration. The audio configuration specifies the number of audible tracks, usually assigned to individual speakers in a spatial configuration. The following soundfields among others are supported: Mono, Stereo, 5.1, 6.1, 7.1.


A track is a placeholder for segments. Segments can be moved within the track, trimmed, slid (Slide operation) or slipped (Slip operation).


A segment is a basic unit of editing. It defines the start and end of a media source (audio, video or subtitles) in time. Transitions (video or audio) are special segments that do not represent any media source but rather blend two other segments (audio and audio or video and video).

6.2. Image Viewport

The Viewport is the part of the workspace where the image is displayed.

When a project opens, the Viewport is reduced, but you can enlarge it by closing the Command Panel or minimizing the timeline.

6.2.1. Navigate / Pan

To easily navigate in any area of the image, use the pan navigation:

  • Alt + click and maintain left mouse button pressed to move the image in every possible direction.

  • Press C to center the image in the Viewport.

6.2.2. Zoom

To zoom in a specific part of the image:

  • Scroll middle mouse button down to zoom IN, and scroll up to zoom OUT.


6.2.3. Viewport Options

From the GUI

Some controls for the Viewport are directly accessible from the GUI (in addition to keyboard shortcuts).



Show / hide Red channel


Show / hide Green channel


Show / hide Blue channel


Show / hide Alpha channel


Show / hide Mask


Show / hide Original image


Show / hide Zebra mode


Show / hide Dynamic Tone Mapping


Lock Fit Viewport


Gang the 2 Viewports




Show / Hide Dual Viewport


Lock Viewports playback together


Display guide lines on the Image Viewport:


Alt+C to show camera borders (project format)


Alt+A to display the Viewport axis

Safe Frames

Alt+F to show Action and Title safe areas according to the Active Area chosen.

Active Area

Alt+B to display the borders of the frame as per the frame aspect chosen in the Active Area tab.

Working Full Screen

To hide / show the timeline, use Page down and Page up keys.

6.2.4. Snapshots

It is possible to capture a snapshot of the content displayed in the viewport.

  • To capture a snapshot, press Ctrl+F12.

Settings for the snapshots are located in the Media section of the Project.

You can automatically define the naming convention for your snapshots as well as the directory to save them in, or opt for manual saving.

You can also add the LUT defined for the GUI or the SDI output.

6.3. Dual Viewport

The Dual Viewport allows you to display simultaneously two video tracks for comparison purposes. The two Viewports can also be synchronized together, for an accurate frame matching.

6.3.1. Adding media on the Dual Viewport

To use the Dual Viewport option, it is easier to start with an open project.

  • To open the Dual Viewport, click on DUAL on the TimeLine or use the shortcut Alt+X:

Dual Viewport

The Record Viewport (right) is for the Composition, and the Source Viewport (left) is used by the source media.

  • Select the source media from the Command Panel and press Ctrl + double mouse click: it will automatically be placed in the Source Viewport.

  • Alternatively, you can right click on a media and choose Load in Source to directly open it in the Source Viewport.

  • From the Compositions list in the Command Panel, you can also right-click on a composition and choose Load in Source Viewport.

All kinds of file formats can be loaded in the Source Viewport, including IMF and DCP packages.

  • To toggle from one Viewport to the other, click on the desired Viewport. The timeline displayed is the one for the Viewport outlined in grey (the active viewport).

  • You can also switch from one to the other with the X shortcut.

The navigation tools in the Viewport remain the same as for the single Viewport, applied to the selected Viewport. Refer to the chapter Viewport Manipulations.

6.3.2. Frame matching

It is possible to synchronize the timeline of the source with the one of the composition to do frame matching.

  • Select the viewport you want to use as the reference image, position the PlayHead on the desired location and click LOCK or press the G key.

The second timeline automatically places and locks its PlayHead at the same image number. You can also play back both timelines at the same time.

Frame matching depends on the duration of the two timelines: if one is shorter than the other, the last selected image of the shorter timeline remains frozen.
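The freeze behavior can be sketched as a simple clamp of each ganged PlayHead to its own timeline length (an illustration of the behavior described above; `ganged_positions` is a hypothetical helper, not ICE's code):

```python
def ganged_positions(ref_frame, ref_len, other_len):
    """Positions of two ganged PlayHeads: each is clamped to the last
    frame of its own timeline, so the shorter timeline freezes once
    the reference runs past its end (sketch of the behavior)."""
    return min(ref_frame, ref_len - 1), min(ref_frame, other_len - 1)

print(ganged_positions(120, 200, 100))
# -> (120, 99): the shorter viewport is frozen on its last frame
```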

6.4. Dual Video Output

It is possible to output simultaneously 2 video streams, up to 4K 60p each, using the video IO board.

The Video Routing matrix allows you to manage which viewport is outputted to which video channel.

  • Access the Video Routing in the Command Panel / Tools.

  • For each video channel output, you can choose to display either the Source Viewport or the Record Viewport.

  • Use the dropdown menu of the last column for managing Stereo 3D content and select which eye to display, with or without Tone Mapping applied.


Possible video output mappings:


Refer to section Video Output capabilities for more information on the supported devices.

6.5. Timeline Basics

The Timeline itself is composed of several parts:



TimeLine Background


Layers Control Box




Composition Timescale


TimeLine Controls


Transport Controls



6.5.1. Transport Controls

The commands for the Transport controls are the following:



Composition Start time


Mark IN point


Go back to first frame


Go to previous key frame


Go to previous frame


Play backward


Time Code at current frame / PlayHead position


Play / Stop


Go to next frame


Go to next key frame


Go to last frame


Mark OUT point


Composition End time


Playback Mode

6.5.2. TimeLine Controls

Some controls for the timeline are directly accessible from the GUI (in addition to keyboard shortcuts).

Editing controls



Show / Hide the Command Panel



Toggle Replace / Insert modes for clips



Snap clip to PlayHead or cuts


Add a video track


Add an audio track


Add a subtitle track


Add an auxiliary track

Undo / Redo
  • Undo and Redo are also in direct access on the timeline:


6.5.3. PlayBack & Speed information

The user interface displays playback information. It is also possible to change the playback speed on the fly.


6.5.4. Navigating the Timeline

Depending on the length of the composition, you may need to navigate through the composition back and forth, or change the display scale to reveal more or less of it.

Moving around in the Timeline

Moving around the timeline without changing the PlayHead position is done using the keyboard and mouse. The following procedure explains how to shift the timeline to the left or to the right to reveal the parts that do not fit on screen:

  • Press Alt on the keyboard, then click with the left mouse button and drag the mouse while keeping the button pressed.

The timeline will be shifted to the left or to the right, revealing the hidden regions.

Zooming the Timeline

You can display the complete timeline or a detailed part of it without changing PlayHead position by zooming in or out the timeline:

  • To zoom IN or OUT use the hotkeys Ctrl++ / Ctrl+-

  • Alternatively press the buttons Z+ and Z-.


You can also automatically fit the composition in the timeline:

  • To fit the composition in the timeline, use Ctrl+Shift+F or press the FIT button.

Using the PlayHead

This vertical yellow line indicates where in the timescale the current frame is located. It is also referred to as the “TimeMarker”.

  • To center the PlayHead in the TimeLine, use the Ctrl+Shift+C.

  • To highlight interesting times (cuts, markers, locators, etc.), scrub the PlayHead back and forth with Snap mode activated: the PlayHead will stop when it reaches an interesting time on any track. Scrubbing at full speed in Snap mode does not make the PlayHead stop.

  • To navigate through interesting times ONLY, press Ctrl while scrubbing the PlayHead.

  • To slow down scrubbing, in order to navigate more easily when there are many interesting times near the PlayHead, press Alt while scrubbing. This forces the PlayHead to move at half the normal speed and allows finer positioning.

TimeLine Navigation Shortcuts

To navigate more easily in the timeline some PlayHead shortcuts are available and detailed in the appendix Shortcuts.

6.5.5. TimeLine Configuration

The TimeLine can be configured to serve your needs depending on the projects you are working on. Possible configurations include:

  • Changing the Timebase

  • Modifying the Layers appearance and manipulating them

  • Managing layers and creating new ones

Changing the Timebase display

The Timebase (or timescale) is by default in time code mode.

It can be modified to display other time codes or frame information.

The Timeline can be displayed in the following modes:

Normal time Code

Feet + Frame

Frame Number

  • You can toggle the Timebase displays using Alt+T.

Timebase display in Time Code:


Timebase display in Feet + Frame:


Timebase display in Frame Number:

Modifying Layers appearance

The layer appearance can be modified to better display the tracks information if needed.

The TimeLine part can be expanded to display more layers:

  • To expand or collapse the TimeLine, place the cursor on the top of the timeline until it changes appearance and drag up or down.

  • To navigate in the different layers, use the Scroll Bar on the right of the Timeline.

  • To select several layers, press Ctrl + click and pick the desired layers.

  • To select all the layers, press Ctrl+A.

  • To deselect all the layers, press Ctrl+D.

In the user interface the tracks are separated by a "split". It can be moved up and down to reveal more or less of one of the stacks.


This action reveals additional information like the image resolution for the video layer, or the audio channel number for an audio layer.

  • To resize all the layers together, select the layers in the Control Box on the left of the TimeLine using Ctrl + click, keep Ctrl pressed and scroll the mouse wheel up or down.

Managing Layers

Some managing operations are available for each type of layer.

  • To display the drop-down menu for the layers, position the mouse on the Layers Control Box, select the layer you want to modify and right-click.


Allows you to rename the layer.


Insert a layer right above.

Merge Stereo 3D

Allows you to merge the left and right eyes into one track.


Delete the current layer. Deleting a track removes all clip instances on the track but does not affect source clips available in the library.


Lock the current layer.

Move Up/Down

Allows you to reorder your tracks by moving them up or down.

Select Clips

Selects all the clips in the track chosen.


You can split your audio configuration into mono or stereo tracks

Layers Manipulation

An important notion when manipulating the Timeline layers is the “Active” layer. This is especially important when editing the clips.

Active layers are labelled in blue:


In order to quickly manipulate the layers, you can use the Timeline Hot Box.

  • Place the mouse cursor on the TimeLine background and press Ctrl + right-click to display the Hot Box for the TimeLine, and select LAYER:


The Hot Box provides you with shortcuts to select or deselect the different types of layers.

The Layer Control Box also displays important icons:


Indicates that the layer is enabled. To change layer status to disabled, click on the icon.


Indicates that the layer is disabled. To change layer status to enabled, click on the icon.


Shortcut to lock the layer. Icon turns red when activated.

Create new Layers

At any time you can add additional layers to the composition. Layers can be added according to their type: video, audio, etc.


Insert a Video layer on top of all others


Add an audio layer at the bottom of all others


Add a timed text layer (for subtitles or closed captioning) on top of the video layers.


Add an auxiliary track at the bottom of the audio layers. Auxiliary tracks are used to display special metadata tracks like Dolby Atmos, DBox, etc…

About Auxiliary Tracks

Click X+ to display a drop-down menu and choose the appropriate auxiliary track:


HDR Dynamic Metadata

Dolby Vision Metadata v2.9

use this type for adding a Dolby Vision 2.9 Metadata ISXD track

Dolby Vision Metadata v4.0

use this type for adding a Dolby Vision 4.0 Metadata ISXD track

Dolby Vision Metadata v5.0

use this type for adding a Dolby Vision 5.0 Metadata ISXD track

Immersive Audio

Dolby Atmos Cinema

use this type for adding an Atmos track for a DCP package (see DCP with Dolby Atmos)

DTS-X Cinema v1/v2

use one of these types for adding a DTS-X Cinema immersive audio track for a DCP package. (see DCP with DTS-x)

Composite Immersive Audio

use this type for adding immersive audio metadata such as Dolby Atmos as an IAB track for an IMF package.

6.6. Compositions

Assembling a program from multiple clips is done in a composition. By default every project has a default composition, called “Temporary”. However, in a real-world project it is often necessary to create multiple compositions for various needs, such as different edits of the same program.

A composition is a structure made of different sorts of media assets: video, audio and timed text (e.g. closed captions or subtitles). The metadata associated with the assets are also part of the composition.
It is also defined by a format (width, height, bit depth, frame rate, sample rate, etc.) and a duration.
In addition other composition properties exist such as a marking zone, markers, PlayHeads, etc to help the editing process. Each sort of media is organized into layers: the video layers (located in the video layer stack) and the audio layers (located in the audio layer stack).
Editing of video layers and audio layers can be done separately or jointly.
Layers are composited together in their category (video layers together, audio layers together).
The result of a composition is a video stream and one or more audio streams (one for mono, two for stereo, etc). The rendering of a composition generates a new clip.

In this section, you will learn the following:

  • Adjusting a composition’s duration

  • Managing (Saving, loading) compositions

  • The Composition settings

6.6.1. Composition duration

Composition duration is displayed in light grey over the timescale:


By default, the composition is set to 24 hours.

The duration of the composition can be adjusted by a simple move (left mouse click and drag) of the small handles at each end of the line.

  • To auto adjust the composition duration to the clips on the timeline, press Alt+Ctrl+F

The Composition can also be manually adjusted by typing values for the beginning and the end of the composition:

  • Click in the Composition Start and/or End fields in the Shuttle Bar, and type the desired values.


The composition is automatically updated with the new duration.
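For non-drop-frame timecode, the resulting duration is simple arithmetic; here is a sketch of that calculation (assuming 24 fps and `HH:MM:SS:FF` values, not ICE's internal code):

```python
def tc_to_frames(tc, fps):
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def duration_frames(start_tc, end_tc, fps=24):
    """Composition duration, in frames, from typed Start and End values
    (non-drop-frame sketch only)."""
    return tc_to_frames(end_tc, fps) - tc_to_frames(start_tc, fps)

print(duration_frames("01:00:00:00", "01:00:10:12"))
# -> 252 (10 seconds and 12 frames at 24 fps)
```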

In order to quickly fit the composition, you can also use the Timeline Hot Box:

  • Press Ctrl + right-click on the Timeline itself to display the TimeLine Hot Box and select COMPOSITION. The Hot Box provides you with shortcuts to fit the different types of layers of the composition.


The duration of the composition affects the playback: if part of a media is outside the composition, it will not be played.

If you drag & drop an IMF or a DCP package containing multiple CPLs, ICE automatically creates a composition for each CPL.

6.6.2. Composition Management

A project can contain an unlimited number of compositions, each of them with a different output format, frame rate or image resolution.

Compositions reflect the content of a Timeline.

To manage a composition, select one and use right-click to display the drop-down menu:

Load in Record

Load this Composition in the Record Viewport

Load in Source

Load this Composition in the Source Viewport


Create a new composition, with an empty timeline. By default it’s named "NewComposition"


Delete a composition. This action only deletes the composition from within the software project and has no effect on the physical content's CPL.


Duplicate the entire composition into a new one. Assets and settings are all duplicated. The new composition has the same name with "copy" appended.


Allows you to rename a composition. Enter the name in the field and validate with Enter.

When you load a composition, the clips may appear in red for a short time while the links are renewed. In general, except for a graphical subtitle track (e.g. PNG files), the names of the clips are red when there is no associated media:


It is then necessary to either Relink or Conform the media.

Create a new composition
  • To create a new composition in an empty timeline, use Ctrl+N

Save a composition

The compositions are automatically saved when you leave the project.

  • To manually save a composition, use Ctrl+S

6.6.3. Composition Settings

Access the Composition Settings

Composition Settings are accessible from the Command Panel:


In the Composition Settings you will find different kinds of settings, presented in tabs:


defines the name, the type and the mode of your composition.


defines the way the media is outputted


Color Management System. This parameter either sets the composition in the native color space of the video clips on the timeline, or switches the composition to the ACES color space.


defines mastering display and Target Displays.


defines blanking information and image burn-ins

All the parameters of this panel will apply to the current Composition. Loading a new empty TimeLine will restore default project settings.

General settings

Name of the composition. Edit it with a click on the text field.


Choose the type of your composition. "Compositing" is set by default for new projects.
When importing a DCP or IMF original version package, the versioning mode is set automatically. This mode affects the way content is presented:


This parameter sets the composition in 3D STEREO or in normal 2D mode. In 3D STEREO mode, you can also choose the priority of the left and right eyes. More information about 3D mode is available in the 3D STEREO guide.


Enabled when in 3D Stereo mode.
LEFT/RIGHT : When you create a new track it will be the left eye and the new/upper track will be the right eye.

Film Type

Setting a film type defines how the TimeLine is calculated in Feet+Frames (see the TimeLine section in the Player chapter).
Available film types are: 8mm and Super 8mm, 16mm, 35mm (2, 3, 4 or 8 perforations) and 65mm (5, 8, 12 or 15 perforations).
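As an illustration of the Feet+Frames arithmetic: 35mm 4-perf runs at 16 frames per foot, and other film types simply use a different frames-per-foot count (the helper below is a sketch, not ICE's code):

```python
def to_feet_and_frames(frame_number, frames_per_foot=16):
    """Convert an absolute frame number to a Feet+Frames display.
    The default of 16 frames per foot corresponds to 35mm 4-perf;
    other film types use a different count (illustrative sketch)."""
    feet, frames = divmod(frame_number, frames_per_foot)
    return f"{feet}+{frames:02d}"

print(to_feet_and_frames(100))  # -> 6+04 (100 frames of 35mm 4-perf)
```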

EIDR and ISAN levels

Enter EIDR and ISAN identifiers here. The three different ID levels are supported.

Ad-ID & Clearcast

Enter Ad-ID identifier and Clearcast number here.

EIDR, ISAN, Ad-ID and Clearcast information entered here will be embedded in the content’s metadata.

Format Preset

Select the desired output preset in the drop down list:


When choosing a different image resolution, use the Pan and Scan function to fit the picture in the expected output canvas and avoid any undesired crop.

Format Custom

Choosing Custom in the preset list allows you to manually set the output:


Use the slider or click on the digits to edit the text and enter the desired value.


The aspect ratio of the canvas is automatically calculated based on the dimensions.
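The calculation amounts to reducing the width and height to their lowest terms; a sketch of that arithmetic (not ICE's code):

```python
from math import gcd

def canvas_aspect(width, height):
    """Aspect ratio derived from the canvas dimensions, both as a
    reduced ratio and as a decimal value (illustrative sketch)."""
    g = gcd(width, height)
    return f"{width // g}:{height // g}", round(width / height, 3)

print(canvas_aspect(4096, 2160))  # -> ('256:135', 1.896), i.e. DCI 4K
print(canvas_aspect(1920, 1080))  # -> ('16:9', 1.778)
```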

Frame rate

Choose the playback rate of the composition, from 14 up to 72 frames per second. This setting affects the playback speed. For more information refer to the TimeCode section of this manual.

Pixel Aspect

For anamorphic content: select the desired aspect ratio of the pixel.

Active Area Aspect

Select the active image area ratio from the drop-down menu to exclude letterboxing and pillarboxing from any processing or analysis.

  • To see the active image area on the viewport, press Alt+B to display a green border frame.


Burn in blankings (letterboxing and pillarboxing). Blankings are not affected by color or contrast corrections.


Choose more or less opacity for the blankings: double-click on the value to edit it or use the slider.

Safe Area Aspect

Select the safe area aspect to display.

Safe Action and Title

Define the percentage of tolerance for safe action and safe title in relation to the chosen safe area aspect. Double-click on the value to edit it or use the slider.

  • Press Alt+F to display Safe frame on the Viewport.
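The guides drawn for a given percentage amount to a centered rectangle scaled on each axis; here is a sketch of that arithmetic (`safe_rect` is a hypothetical helper, not ICE's code):

```python
def safe_rect(width, height, percent):
    """Centered safe rectangle covering `percent` of each dimension,
    as drawn for safe-action / safe-title guides (illustrative sketch).
    Returns (x, y, w, h) in pixels."""
    w = round(width * percent / 100)
    h = round(height * percent / 100)
    return (width - w) // 2, (height - h) // 2, w, h

print(safe_rect(1920, 1080, 90))  # -> (96, 54, 1728, 972)
```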


This tab is dedicated to the choice of a Color Management System for your current composition. For more information, please read the Color Management System chapter carefully.


The Display tab allows you to define a Mastering Display or a Target Display for your content.

This is also the place to manage the display’s LUTs.

For more information, please read the Mastering Display chapter carefully.


You can use Overlays to burn metadata onto the image: timecode, file name, and camera metadata.

Overlays work using text and metadata that you can modify to create burn-ins with a variety of styles.

Add a custom text Overlay
  • Click on + to add a new overlay. By default the tag "text" appears on the list.

  • Select the overlay on the list to edit it:

  • To modify the text, click in the editing text field:


and validate with Enter.

Add a Metadata Overlay

Some important metadata are directly available using metadata tags.

  • To add one, select the overlay in the list and click on +tag icon:


and select a metadata in the drop-down menu:


You can add several metadata tags for each overlay:


and mix custom text with metadata:

Modifying overlay positioning

The positioning of overlays in the image is defined by vertical and horizontal offsets.

  • Use the text alignment icons and the X and Y axis to position the overlays at the desired place.

Text Horizontal positioning:


Text Vertical positioning:

Saving Overlay templates
  • To save overlays settings as a template, click EXPORT on the timeline menu and select in File Type "Marquise Technologies Overlays Template":

  • Select a location to save the template (by default it will be saved in the temporary folder of the User).

  • To upload a saved Overlay template, click IMPORT on the timeline menu and browse the file system. You can use the File Type info to narrow the search.

To display an Overlays Template in the Overlays Template drop-down menu, it must be saved in the Software installation directory folder C:\Program Files\Marquise Technologies\$software_version$\resources\templates\overlays :


  • Click on Templates to open the drop down-menu and select a template:


A list of overlays for this template is displayed.

  • Click on the desired overlay to modify it.

6.7. Clips

In ICE, a clip is the visual representation of any type of asset: video, audio, subtitle or metadata file.

For some assets, audio tracks for example, the way clips are added to the Composition timeline matters.

6.7.1. Manually adding clips to a Composition

Once you have dropped your media in the Media tab of the Command Panel (see Media Tab), you can start adding the clips to the Timeline.

There are several ways to add clips to a composition and they all depend on the context. Some methods will be more appropriate than others in some particular cases.

Sequential Paste

This is the easiest method to manually drop clips onto the timeline.

The clips will be placed in the timeline automatically at the playhead position, one after the other, in the order you have selected them in the Media tab.

  1. Select the clip in the Media tab of the Command Panel (the clip turns grey when selected)

  2. Using the mouse, click on the thumbnail then lift it with a quick swipe of the left mouse button. This process is called Lift, Carry & Drop.

The Clip will be attached to the mouse:

Media tab

To import several clips, select them in the Media tab in the order you want them on the TimeLine using Ctrl + Left mouse button then click on any thumbnail and lift them with a quick swipe of the left mouse button:

Media tab
  • Once the clip(s) is/are attached to the mouse, press Ctrl+V to drop the clip(s) on the timeline.

The clips will position themselves on the layers, at the playhead position.

The Timeline must be configured with the right number of audio tracks prior to the import. This can be configured either from the Project properties, or by adding manually additional layers (press A+ or S+ for example).

  • To keep the clip(s) attached to the mouse for pasting them again, enable the option Keep Clips after Drop in the Project Settings / User Interface. When this option is enabled, press Esc to free the mouse.

Stacked Paste

In Stacked Paste mode, the clips will be placed in the timeline in a stack, one below the other, starting from the playhead location.

  • Select the clips in the Media tab as mentioned above

  • Once the clip(s) is/are attached to the mouse, press Ctrl+Shift+V to drop the clip(s) on the timeline.

  • Press Esc to free the mouse.

Extended Pasting

When none of the above methods is convenient, extended paste functions are available.

  • With the clip(s) attached to the mouse, press the right mouse button on an empty area of the timeline to display the drop-down menu:


This action opens the Paste Extended tool box:

  • Select the desired paste mode with the drop-down menu, to be applied either on the current layer or on a new one.

The various paste methods include:

By Source TC

will place the clips at specific positions in the composition, according to their source timecode.

By Increasing Source TC

will place the clips one after the other, sorted by their source timecode.

Alphanumeric order

will drop the clips one after the other, sorted by name in increasing alphabetical order.

Reverse Alphanumeric order

will drop the clips one after the other, sorted by name in reverse alphabetical order.

Clip and composition frame rate

After the import, if your clip appears in red on the TimeLine, it means that your composition settings do not have the same frame rate as your clip:

  • To modify them, go to the Composition settings in the Control Panel and change them manually, or

  • Press the right mouse button on the clip itself to display the Clip Operations drop-down menu:

  • Choose Set Composition Format to automatically adapt the composition settings to the clip properties.
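The red highlight is essentially a frame-rate comparison; a sketch of the idea, using exact rational rates to avoid 23.976 rounding traps (the function is illustrative, not ICE's internal logic):

```python
from fractions import Fraction

NTSC_FILM = Fraction(24000, 1001)   # 23.976 fps
FILM = Fraction(24, 1)              # 24 fps

def clip_matches_composition(clip_fps: Fraction, comp_fps: Fraction) -> bool:
    # A clip is flagged (shown in red) when its frame rate differs
    # from the composition's; comparing exact rationals makes
    # 23.976 vs 24 an unambiguous mismatch.
    return clip_fps == comp_fps
```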

6.7.2. Selecting clips

  • To select a clip on the track, click on it using the left mouse button. The clip will change color to light grey.

  • To select several clips, use Ctrl + left mouse button on each of them.

  • To select all the clips, select one clip then press Ctrl+A.

  • To deselect clip(s), use Ctrl+D.

  • To select a range of clips, position the mouse on the track next to a clip you would like to enclose and press Shift + left mouse button and drag the mouse over the desired clips. The covered range is bordered in blue.


Alternatively you can use the TimeLine Hot Box to quickly manipulate the clips:


6.7.3. Removing clips

You can remove an entire clip or a range of frames from the composition using several methods :

Delete clips
  • Select the clip(s) you need to delete and press Delete.

Delete Ripple

This deleting mode allows you to delete a clip without leaving a gap in the Timeline.

  • Select the clip(s), and press Backspace on the keyboard.

Lifting clips or a range of frames

Lifting is the process of removing one or more clips or a range of frames from the composition. The range of the composition to be lifted is defined by the mark in/out range. When lifted, the marked range leaves a gap of the same duration as the mark in/out range.

In order to remove a portion of the composition using the Lift operation, you first need to mark the range using the Mark In/Out tool:

  • To remove a marked region, press the right mouse button on the TimeLine background to display the Composition drop-down menu and select Clear Marked Range or Clear Mark:

  • Alternatively you can use the shortcuts Alt+I and Alt+O to set / remove the marked range.

Once the region is marked, to perform the lift you can:

  • press Ctrl+L

  • display the Composition drop-down menu by clicking the right mouse button on the TimeLine background and select Lift Marked Range.


Timeline after a LIFT operation:


This operation occurs on the active layers only.

Extracting clips or a range of frames

Extracting clips is a process similar to Lift, however there is no gap left by the removed marked range. The remaining clip parts or full clips that were on the right of the mark Out point are moved backwards to the left to fill the gap (also called ripple deletion).

To Extract a clip or a range of frames you must first mark a range and then :

  • press Ctrl+E or

  • display the Composition drop-down menu by clicking the right mouse button on the TimeLine background and select Extract Marked Range.


Timeline after an EXTRACT operation:


This operation occurs on the active layers only.

6.8. Markers

Markers define regions of a composition that have a specific meaning.

Typical information located by Markers includes, for example, First Frame of Credits or Commercial Break.

  • To access the Markers panel, click on TOOLS in the Command Panel:


6.8.1. Adding markers

  • To Add a Marker, position the Playhead on the desired timecode and in the Markers panel select a color and click +. The Marker is set for a default duration of 1 sec.

  • To mark a range, first define your range, then in the Markers panel click +.

  • To delete a Marker, select it in the Markers' list and click -.
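The 1-second default duration corresponds to a different number of frames depending on the composition frame rate; a quick sketch (the helper name is illustrative):

```python
from fractions import Fraction

def marker_duration_in_frames(seconds: int, fps: Fraction) -> int:
    # A marker lasting 'seconds' spans this many frames at the
    # composition frame rate, rounded to the nearest whole frame.
    return round(seconds * fps)
```

For example, the 1-second default spans 25 frames in a 25 fps composition but only 24 frames at 23.976 fps.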

Markers are represented in the TimeLine by colored triangles and positioned in the Marker’s track:


6.8.2. Navigating through Markers

You can navigate from Marker to Marker using different options:

  • Double click on a time code in the Markers list for the PlayHead to jump directly to the related frame in the TimeLine.

  • Use shortcuts Shift+U for Next Marker and Alt+U for Previous Marker

Additionally, when the PlayHead is positioned on a particular Marker, it is highlighted in the Marker’s list.

6.8.3. Defining Markers

SMPTE marker labels are supported. To add one, select the desired Marker in the list and click on Default in the Type column.

This action displays the Markers drop-down menu.


You can also add custom comments.

Select the desired Marker in the list, click on Default in the Comments column and add your custom text:


6.8.4. Editing Markers

You can move or change the duration of a marker directly in the timeline:

  • Position the mouse on the IN or OUT point of the marker until the cursor changes, then drag the marker to its new position.

6.8.5. Exporting / Importing Markers

Save Markers

It is possible to export the markers information in XML.

  • Click on the Export icon on the bottom right of the Markers panel to enter the Save Markers window.

  • Choose a location for your file and enter a name.

  • Finish with OK.

Import Markers
  • To import a Markers file, click on the LOAD icon on the bottom right of the Markers panel.

  • Browse the folder tree on your left, select the file and click OK.

Only Markers created using ICE or MIST can be loaded. If you want to import an external file, you can use the Locators.

6.9. Locators

Although similar to Markers, Locators are used only for custom comments.

  • To access the Locators panel, click TOOLS in the Command Panel:


Locators are represented in the TimeLine by colored squares and positioned on the Locators' track:


6.9.1. Adding Locators

  • To Add a Locator, position the Playhead on the desired timecode and in the Locators panel select a color and click +.

  • To delete a Locator, select it in the Locators' list and click -.

You can create several Locators at the same timecode, however on the TimeLine only the last Locator entered will be displayed.

6.9.2. Navigating through Locators

You can navigate from Locator to Locator using different options:

  • Double click on a time code in the Locators list for the PlayHead to jump directly to the related frame in the TimeLine.

  • Use shortcuts P for Next Locator and Alt+P for Previous Locator

Additionally, when the PlayHead is positioned on a particular Locator, it is highlighted in the Locators' list.

6.9.3. Defining Locators

  • Select the desired Locator in the list and click on the empty field in the Comments column and add your custom text.

  • To rename a Locator, double click on its name and enter the new text.

6.9.4. Editing Locators

You can move a Locator directly in the timeline:

  • Position the mouse on the Locator until the cursor changes, then drag the Locator to its new position.

6.9.5. Exporting / Importing Locators

Save Locators

It is possible to export the Locators information in XML.

  • Click on the Export icon on the bottom right of the Locators panel to enter the Save Locators window.

  • Choose a location for your file and enter a name.

  • Finish with OK.

Import Locators
  • To import a Locators file, click on the Load icon on the bottom right of the Locators panel.

  • Browse the folder tree on your left, select the file and click OK.

6.10. Reels Management

Reels or Segments are often present in DCP or IMF packages.

  • To access the Reels panel, click on the TOOLS tab of the Command Panel.

In ICE, the reels or segments are identified by a colored bar on the TimeLine.


6.10.1. Navigating through Reels

You can navigate from Reel to Reel using different options:

  • Double click on a time code in the Reels' list for the PlayHead to jump directly to the first frame of the reel in the TimeLine.

  • Use shortcuts Alt+Ctrl+Page Up for Next Reel and Alt+Ctrl+Page Down for Previous Reel

Additionally, when the PlayHead is positioned on a particular Reel, it is highlighted in the Reels' list.

6.10.2. Deleting Reels

In order to delete a reel, click on the desired reel and click - to validate the deletion of the selected reel.

WARNING : This action cannot be undone.

6.11. Playback

6.11.1. Playback Mode

There are several playback modes available:


Play Once

Plays the current composition just once.


Play ping pong

Plays the composition backward then forward, endlessly.


Play Loop

Plays back the current composition endlessly.

The “Play once” mode is set by default.

  • To toggle to the other modes, click on the icon until the desired mode is displayed.


6.11.2. Playing Back a Marked Region

To play a specific region of the composition, mark the desired range with IN and/or OUT points:

Depending on the IN and OUT points set, the following playback options are available (the icon and shortcut key for each mode are shown in the interface):

  • Play forward in the marked region

  • Play backward in the marked region

  • Play forward from the IN point to the end of the composition

  • Play backward from the end of the composition to the IN point

  • Play forward from the beginning of the composition to the OUT point

  • Play backward from the OUT point to the beginning of the composition

If the PlayHead is inside the marked range, the playback starts at the PlayHead origin

6.11.3. Playback Information

The timeline’s interface displays some playback information such as:


Current frame rate of the playback in frames per second.
If the frame rate is below the frame rate selected in the composition settings, it means that the hardware does not have sufficient power to achieve the required performance.
If the frame rate is above the selected frame rate, it might come from a display synchronization problem.


Defines the increment between frames:
Speed 100%: plays all the frames.
Speed 200%: plays one frame out of two.
The range goes from -800% to +800%.

  • Use JKL controls to modify the playback speed.
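The increment rule above can be sketched as follows (a simplified model: speeds are assumed to be multiples of 100%, and slow-motion frame repetition is ignored):

```python
def frames_visited(start: int, speed_pct: int, count: int) -> list[int]:
    # 100% steps by 1 frame, 200% by 2 (one frame out of two),
    # negative speeds step backward; the range is -800% to +800%.
    step = max(1, abs(speed_pct) // 100) * (1 if speed_pct >= 0 else -1)
    return [start + i * step for i in range(count)]
```

For example, at 200% only every other frame index is visited, while -100% walks backward one frame at a time.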

6.12. Event Viewer

6.12.1. Overview

One of the major differences between a colorist and an editor is the fact that they have different requirements regarding the display of the shots and how to navigate from one to the other. While the editor will be mostly using the timeline to navigate in the composition, a colorist is more likely to prefer another method. The reason for this is quite simple : when color grading a show, the focus is on the shots as individual pieces of content and their placement in the chronology of the show or the possible transitions between them is not relevant.

A classic editing timeline is not well suited to this purpose. Instead, another navigation tool is preferred: the Event Viewer (sometimes called the shot viewer).

The Event Viewer also displays the clips available on the timeline in a chronological manner; however, it represents each clip with a preview of one of the frames of the clip. The previews are displayed next to each other and their dimensions are always the same. As a matter of fact, the Event Viewer does not visually represent the length of a clip (or shot); again, this information is of little interest during the color correction process. The diagram below shows the difference between the timeline and the Event Viewer representations:


Besides the visual difference, there is also the fact that an Event Viewer allows only a limited number of editing operations. These editing operations are the common ones used by a colorist or an assistant and once again they only serve the purpose of permitting operations needed in this context.

It is also worth noting that the name (i.e. Event Viewer) refers to the objects it displays as “events” and not clips or shots. From the colorist's point of view, the timeline is an assemblage of pieces of material that have been spliced together in a chronological manner.

Each of these objects are called “events” because they are actually events occurring at a certain time during the playback of the final show. Once an event is over, it is immediately followed by another event (which can be another piece of material, a transition, or simply a black screen).

A colorist will work on each of these events and make sure that they are all consistent from the color point of view and correspond to the intentions of the director of photography.

In addition to the navigation shortcuts offered by the Event Viewer, there is another important reason to use it. The Event Viewer also gives you access to multiple grading versions per event.

In this chapter, you will learn the following things:

  • How to show and hide the Event Viewer.

  • Understand the information displayed in the Event Viewer.

  • Navigate through events.

  • Insert, delete and replace events.

  • Store and recall various color correction versions per event.

6.12.2. Accessing the Event Viewer

The Event Viewer is a key element during a grading session. As a matter of fact, colorists use it much more than the regular timeline. Therefore it must be accessible at any time by using all the connected input devices, i.e. the mouse or the stylus, the keyboard and of course the control surface.

Using the mouse

If you use the mouse or the stylus, then the Event Viewer can be accessed by clicking on EVENTS located on the lower right side of the screen.

Once you have clicked on EVENTS, its color will change to light blue to indicate that the Event Viewer is being displayed. Clicking once more on EVENTS will cause the Event Viewer to be hidden again.

Using the keyboard

When using the keyboard to show or hide the Event Viewer, use the E key shortcut. Pressing E once displays the Event Viewer; pressing E again hides it.

Using the control surface

Accessing the Event Viewer with the control surface is done via a dedicated key. Usually the same key is used to toggle the presence of the viewer on screen. Please refer to the documentation for your control surface provided with the software for more information on how to access the Event Viewer.

Once you have used one of the methods above to reveal the Event Viewer, it will be displayed at the bottom of the screen, as illustrated in the screenshot below:


6.12.3. Understanding the Event Viewer

The Event Viewer always displays the events on the currently active layer only. In a multi-layer composition, you must activate the layer you want to work on first. The Event Viewer is immediately updated with the events (clips or holes) existing on the active layer.

The events are arranged chronologically from the left to the right. The leftmost event is the first event on the timeline, while the rightmost is the last event on the timeline.

The current event (the one that you are working on) is always centered in the display. Towards the left you will find the preceding events (or past events), while to the right you will have the upcoming events (or future events).

The diagram below shows multiple events on the currently selected layers and their organization from left to right:


As previously explained, the current event (the one that you are working on) is always centered. To distinguish it from the other surrounding events, it has a light-blue blinking border.

Besides the indication of which event is currently being worked on, there are other indications carried by the current event that are worth looking at. The close-up screenshot below shows the current event centered and exposes some extra information:


The current event also has a display of the current frame number as well as the total number of frames. This indication is updated as you navigate through the event.

Moreover, each event has a “dirty” flag displayed in the top-left corner of the thumbnail. This flag indicates that the event has been modified and the modifications have not yet been confirmed or saved. If you do not save your composition yourself, the auto-save may do so, which makes the flag disappear.

6.12.4. Navigating through shots

One of the main purposes of the Event Viewer is to provide fast, visually-oriented navigation through the shots available in the currently active layer. Rather than locating a segment in a timeline, the Event Viewer allows you to visually locate the event you want to go to by looking at its thumbnail.

The navigation is performed either with the mouse or stylus, with the same navigation keyboard shortcuts as in the timeline, or with the transport controls of your control surface. In this section, only the Event Viewer-specific navigation methods are explained.

Using the mouse

The navigation to the neighboring shots (those actually visible) is done via the mouse or stylus. To quickly go to any of the visible shots, simply double-click on its thumbnail.

The PlayHead will immediately move to the selected event and its first frame will become the current frame. The new event also becomes the current one and it is centered in the Event Viewer. The various information usually displayed on the current event is also updated to reveal the current status.

Select an active track with the Event Viewer
  • Like the timeline in editing mode, you can move from one track to another using the shortcuts Ctrl+Page Up / Ctrl+Page Down. If the track is empty, the Event Viewer shows a single event marked “No clips on track !”.

Moving through the Event Viewer
  • As mentioned before, to move through the Event Viewer you can use the PlayHead on the timeline as in editing mode, or use the same shortcuts (e.g. press Shift+Left to move ten frames back). A cursor on top of the currently selected event lets you know where you are within the duration of the clip.

6.12.5. Inserting shots

During the grading process it can happen that a missing shot needs to be inserted in the timeline. This can happen for various reasons, for example a VFX shot was missing, or you simply need to insert some titles for mastering. Whatever the reason, the best way to edit the timeline is still by using the editing tools in the actual timeline or editing mode.

Nevertheless, for some simple operations it is also quite handy to insert shots by placing them between other shots via some quick methods. The event viewer offers the possibility to insert shots before or after a shot, by dropping one or more clips attached to your pointing device.

To insert before or after a shot through the Event Viewer, some conditions need to be met :

  • First of all, one or more clips must be attached to the cursor.

  • Then you must hover around one of the event’s drop zones.

  • Finally you must release the clips by clicking on the drop zone.

You must also first decide whether you want to insert or replace, by clicking on the button in the top left corner of the timeline to toggle from one to the other (or by pressing the Insert hotkey). The mode shown is the one that will be used:

Inserting before the current shot

In the screenshot on the left, the mouse cursor is hovering over the leftmost area of the event. A red bar with a left-oriented arrowhead appears to indicate that the clip can be inserted before the one you are on.

Inserting after the current shot

In the screenshot on the left, the situation is similar to the one described above, but this time the red bar and the arrowhead are oriented to the right. This indicates that the clip can be inserted after the one you are on.

Once you have clicked on one of the drop zones described above, the clip(s) attached to the cursor will be dropped either right before or right after the event you were hovering over.

6.12.6. Replacing shots

Just as important as inserting shots on the fly, without any conforming or editing operations, shots often simply need to be replaced. There are many ways to replace shots on the timeline, each with its pros and cons. While the timeline is more editing-oriented and the conforming operations are better adapted to building or adapting full compositions to media changes, these remain somewhat complex operations.

A simple method is provided by the Event Viewer, facilitating shot replacement without the need to understand editing or conforming.

To replace one or more shots through the Event Viewer, some conditions need to be met:

  • First of all, one or more clips must be attached to the cursor.

  • Then you must hover over the center of the event whose media you need to replace.

  • Finally you must release the clips by keeping the Alt key pressed and clicking on the center of the event.

If you have multiple clips attached to the cursor, then the next events will have their media replaced as well, starting with the event you started the operation on and continuing with the next events to the right. The operation is fully terminated when all the clips attached to the cursor are released.

6.12.7. Selecting shots

During the grading process, shots must be selected and deselected to perform various operations on them as a group. Shots can be selected in many places with the exact same effect; however, the Event Viewer also allows some basic selection operations for convenience. Below are the selection operations that can be performed directly in the Event Viewer.

Selecting and deselecting individual events

The most used type of selection is to select shots individually, one by one. This operation can be accomplished either by using the mouse or the stylus or the control surface.

Using the mouse

To select individual shots with the mouse or the stylus, hold the Ctrl key and click on the event. When doing so, the event gets a light blue shade to indicate that it is selected.


Using the control panel

Individual events can also be selected using the control panel. However, since a control panel does not allow you to pick events at random, position the PlayHead on the event you want to select and use the dedicated button on your control panel.

Selecting and deselecting all events

Another commonly used type of selection is selecting or deselecting all events at once. There is no dedicated method for this in the Event Viewer, so your only option is to use the keyboard as described below.

Using the keyboard

When using the keyboard, press Ctrl+A to select all the events and Ctrl+D to deselect them. Depending on the operation, the events will either all be highlighted with light blue or return to their normal state.

When selecting or deselecting all the events, the effect is not limited to the events visible in the event viewer. As a matter of fact, all the events in the timeline will be affected.


7.1. Audio in the TimeLine

Audio tracks and channels are represented in the Timeline as they are in the file.

The timeline allows an infinity of audio sounfields and individual audio channels.

Example of a composition with 2 soundfields:


7.1.1. Remapping audio channels

If channels of an audio track have been incorrectly mapped, it is possible to remap them automatically in the timeline:

  • Click on the audio channel to remap, then use the right mouse button to display the Clip drop-down menu.

  • In Channel Mapping, select the new channel to assign:


7.1.2. Display Audio Waveforms

  • Press the right mouse button on the TimeLine background to display the Composition drop-down menu and select Audio Waveforms:


7.2. Audio Monitoring

In ICE, audio is output on both the video IO card and the PC sound card.

When the PC sound card is used and system resources are low, some images may be dropped to keep up with the sound.

When using the video IO, audio and images are guaranteed to be in sync.

To be able to output audio, you first need to activate the audio channels in the Audio Routing panel.

7.2.1. Audio Routing

To activate the audio channels:

  • In the Command Panel, select TOOLS then ROUTING

  • Assign the audio layers of all the soundfields present on the timeline to the desired audio channels (up to 16) by clicking in the cell.

  • To change the channel assignment, click on the desired channel for each audio layer.
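Conceptually, the routing panel is a map from soundfield layers to output channels; a minimal sketch (the layer names and helper are illustrative — only the 16-channel limit comes from the text above):

```python
def assign_channel(routing: dict, layer: str, channel: int) -> None:
    # Map a soundfield layer to one of the (up to 16) output channels;
    # assigning again simply overwrites the previous mapping.
    if not 1 <= channel <= 16:
        raise ValueError("output channel must be between 1 and 16")
    routing[layer] = channel

routing: dict = {}
assign_channel(routing, "5.1 Left", 1)
assign_channel(routing, "5.1 Right", 2)
```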

7.2.2. Down-Mix

You can down-mix a 7.1 or 5.1 soundfield to stereo:

  • In the Command Panel, select TOOLS then ROUTING

  • In the Audio Routing section, use the DOWNMIX dropdown menu to select the desired downmix configuration:


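As background, a common ITU-R BS.775-style 5.1-to-stereo fold-down looks like the sketch below (ICE's actual downmix coefficients are not documented here, so treat the gains as an assumption):

```python
import math

def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs):
    # Fold centre and surround channels into the front pair at
    # -3 dB (a gain of about 0.707); LFE is typically discarded.
    k = 1 / math.sqrt(2)
    Lo = L + k * C + k * Ls
    Ro = R + k * C + k * Rs
    return Lo, Ro
```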
7.2.3. Audio Analysis

ICE offers a full set of audio monitoring tools for quality control. Refer to the chapter Audio Monitoring for details about the different audio scopes.

7.3. Immersive Audio

7.3.1. Importing Immersive Audio files

In ICE, Immersive Audio files are not managed as regular audio tracks.

When importing an immersive audio track into the Media tab, ICE recognizes the metadata and displays a different icon for the file:


The immersive audio files must be placed on an Auxiliary data track on the timeline.

If ICE has detected immersive audio metadata in the file, it will prevent the file from being dropped on a regular audio track.

  • Create a new Auxiliary data track by clicking on the X+ button on the left side of the timeline and drop the file on this track.


7.3.2. Dolby Atmos

ICE supports Dolby Atmos technology and allows QC and playback of Atmos files.

Supported Formats

ICE supports the following Atmos formats:

  • DAMF (Dolby Atmos Master Format)

  • BWF ADM (Broadcast Wave Format with Audio Definition Model)

  • IAB (Immersive Audio Bitstream) for DCP (including encrypted files) and IMF

These files can be imported in the timeline as stand-alone files or wrapped in an IMF or a DCP package (IAB).

When importing a stand-alone DAMF, select the file with the .atmos extension in the directory and drop it into ICE:

Dolby Atmos QC
Dolby Atmos Playback

It is possible to play back all supported Atmos files through the integrated Dolby Atmos Renderer, with the exception of encrypted IAB files for Digital Cinema Packages.

The Dolby Atmos Renderer permits the following channel-based audio outputs: 2.0, 5.1, 7.1 and 7.1.4.

  • To select an audio configuration for the playback of Dolby Atmos tracks, go to PROJECT SETTINGS / DOLBY / DOLBY ATMOS and select the desired configuration from the drop-down menu:

Atmos Metadata Inspection

Dolby Atmos metadata are displayed in the Metadata Inspector Panel:



Conforming is a step that is sometimes necessary in many and varied situations: relinking an offline edit, connecting subtitle tracks or image sequences to their original files, etc. It is also useful when media are present in the timeline but their sources have been deleted from the project media bin.

The application manages this crucial function with numerous options.

8.1. Conforming an AAF or an XML file

Once the AAF or XML file has been imported using IMPORT on the menu bar, a timeline appears with red unlinked media, as shown below:


If the media is already present in the project media bin, it will be linked automatically.

When importing an XML or an AAF, the tracks may be imported as invisible. Make sure the tracks are visible:


When a composition is loaded, the Playhead is positioned by default at TC 00:00:00:00. To go to the first frame of your content, press the Home key to position the playhead at the entry point of the composition.

You can access the Conform module by right-clicking on the viewport from the Timeline Module:

Click on CONFORM in the menu bar to access the Conforming panel.

In the Conforming panel, the media of the composition appear as a list, sorted by type. The columns containing media information can be resized by placing the cursor between two columns:


To see the remaining columns on the right of the window, click on the slider at the bottom of the list and scroll in either direction:


At the bottom of the panel, there are several tools used to configure how the content can be retrieved:

Search in

Choose if you want to look for media either in the project media bin (LIBRARY) or in the file system (SYSTEM).


Specifies the navigation path through the media, either in the project media bins or in system directories.


Specifies the navigation path through the project media bins, in order to reference the media in the desired folder.

Match name

Searches will be based on the CLIP, REEL or FILE name of the content.
By choosing NO, this search option is disabled.

Match time

Searches will be based on the IN, OUT or IN & OUT timecode of the content.
By choosing NO, this search option is disabled.

Time Type

Searches will be based on different types of timecode:
DEFAULT (original timecode),
ABSOLUTE (hour of the day of shooting),
EDGE (unique timecode per roll).


Choose whether the previous timecode’s options are for the TC SOURCE or ORIGIN.


This option allows you to match one timecode with another. For example, in the case of a media whose start time is different from the one in its edit list.
Click P to set a positive offset,
Click N to set a negative one.
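The P/N offset amounts to simple timecode arithmetic. As an illustration only (the helper names and the 24 fps example are assumptions, not ICE functions), the matching could be sketched as:

```python
def tc_to_frames(tc: str, fps: int) -> int:
    """Convert a HH:MM:SS:FF timecode string to a frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int) -> str:
    """Convert a frame count back to a HH:MM:SS:FF timecode string."""
    ff = frames % fps
    ss = frames // fps
    return f"{ss // 3600:02d}:{ss // 60 % 60:02d}:{ss % 60:02d}:{ff:02d}"

def apply_offset(tc: str, offset: str, fps: int, positive: bool = True) -> str:
    """Apply a positive (P) or negative (N) offset to a source timecode."""
    delta = tc_to_frames(offset, fps)
    base = tc_to_frames(tc, fps)
    return frames_to_tc(base + delta if positive else base - delta, fps)

# A media starting at 01:00:00:00 matched against an edit list at 00:59:58:00
print(apply_offset("01:00:00:00", "00:00:02:00", 24, positive=False))  # 00:59:58:00
```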

Sometimes the conforming process requires the media to be renamed, or completed when information is missing.

It is possible to rename the media by their Clip name, their Reel name or their File name.

  • To rename a clip, first select it in the list.

  • Enter a name and click SET:


With the TAG tool, the name can be based on an existing parameter. Clicking on it displays a list:


TAGs can be combined with each other and with typed text. For example, if the file name column has no existing information and we want to fill it with information from the clip column, we choose the clip tag and then type the file extension:


After clicking SET, the media’s FILE NAME column will be renamed:


You can apply the renaming function to several clips at the same time: select them using Ctrl+click.

You can also select all the clips with Ctrl+A and deselect them all with Ctrl+D.

Once the settings are made, click MATCH at the bottom of the module to start conforming.

After scanning, linked files should display a thumbnail as shown below:


If some files could not be conformed in this first pass, the operation can be repeated with any type of search without losing the previous conform.

When all the desired files are linked, press ESC to exit the module. In the project media bin, the conformed media have been referenced automatically.

In the project media bin, if a media has been moved at the operating system level, it will appear in red in list mode, or with a "media offline" notice in the thumbnail of the clip.

8.3. The EDL’s case

The Edit Decision List can be imported using IMPORT on the timeline:


Once the EDL has been chosen, a window will appear in order to select the appropriate settings before importing. All information about the video and audio tracks, their transitions, as well as the timecodes present in the EDL, will be displayed as a list. A slider on the right allows you to go up and down through this list:


At the bottom left of the window, with the Import As drop-down menu, it is possible to import the EDL as a new layer or a new composition.

In the case of a New layer, for obvious reasons, some settings will be grayed out and inaccessible. If a New composition is chosen, the dimensions, pixels and bit depth have to be set up, or a preset can be selected directly:


To the right, we can rename the Clip and Reel Name from the EVENT, REEL and COMMENTS columns:


A digit padding setting is available when choosing From Event, and you can choose whether or not to include the extension when using Comment:


Transitions can be generated by the application or interpreted as clips, at the discretion of the user. If the EDL contains color transformation parameters, it is possible to keep or ignore them with the ASC CDL drop-down menu.

  • Once all settings are made, click on OK.


ICE is able to import and manage a wide range of subtitles and captions for the broadcast and film industries. To see the full list of formats supported by ICE, please refer to the Appendix Supported Input formats.

In the following section, we will see how to import and inspect subtitles and captions.

9.1. Import subtitles and captions files

It is possible to import text subtitles and graphic subtitles. A dedicated track type for subtitle files sits on top of all the video layers in the composition.

9.1.1. Import text profile

Importing a standalone subtitle file is similar to importing a video or audio asset.

You can import the subtitle file directly from the OS browser by selecting it and dragging it into the Command Panel's Media bin.

9.1.2. Import image profile

The way to import image-profile subtitles and captions is identical to the text profile.

However, make sure the image files and the subtitle XML file are in the same root folder, so that the image files to which the XML subtitle file refers can be located automatically.

9.1.3. Place a subtitle file in a composition

To place your subtitles and captions in the timeline, create a subtitle track by clicking S+ at the top left of the timeline, then place your media like any other asset type or paste it directly at the playhead location.

In case you have selected multiple subtitle tracks, they will be stacked on top of each other.

Like video and audio tracks, you can rename the subtitle track by right-clicking on its name to call the LAYER OPERATIONS menu. Then click on Rename and type the desired text (by default, “Untitled”). This makes the tracks easier to identify in the export phase.

Subtitles and captions

9.1.4. Set the composition format according to the subtitle

It is possible to configure the composition size according to the properties of a subtitle file such as IMSC1. To do this, the file must have the attribute tts:extent defining the resolution of the image in its settings.

Most of the time, this attribute is used in subtitle files using the image profile and the pixel unit. It indicates the optimal resolution for using the subtitle, so that its quality is not degraded to the point of making it unusable:

  1. From the subtitle track, right-click on the subtitle file.

  2. Click on Set composition format.

The composition fits the dimensions of the subtitle file.

In the case of a subtitle using the text profile, the subtitle will use the same resolution as the current composition. Since the text is vector-based, there is no loss of quality.

9.2. Inspecting subtitles

Once a subtitle has been loaded in the timeline, it is ready for inspection.

9.2.1. Loading font for subtitles

The subtitle file can contain one or more blocks of text, each block itself able to contain several lines of text. Each block has several properties, including the font face, size and, of course, the start and end time of presentation.

As soon as the subtitle has been placed in the timeline, subtitles are displayed in the Viewport. If not, verify that the associated font has been loaded:

  • Go to the TOOLS panel in the Command Panel. A panel appears to allow you to modify reels, markers, subtitles, locators as well as tracks.

  • In the SUBTITLES tab, access the FONT tab.

If a font is present in the subtitle, it will automatically appear under the FONT tab.

If the font has not been loaded, no font will be listed and no subtitle will be displayed in the Viewport. Note that a subtitle file can have multiple referenced fonts in the list.


In order for a font associated with a subtitle to be automatically loaded, the font file must be in the same root folder as the subtitle text file. Otherwise, the Arial font is loaded by default.

  • Double-click on the font number to open the Font Library. Fonts present in this list are based on the OS font folder.

  • Select the desired font into the list and click on OK to validate your selection.

TrueType fonts (.ttf) larger than 640 KB will be displayed in yellow to warn you that they are not respecting the DLP Cinema subtitles specifications (CineCanvas) for DCP mastering.

9.2.2. Displaying multiple subtitles

Several active subtitle tracks can be displayed simultaneously.

Active tracks have their name highlighted in cyan.

If needed, you can hide the track by clicking on the check mark to deactivate it, and on the X to display it again:


In order to check the proper placement of subtitles on the image, you can display a safe-frame guide from the Viewport Hot Box (press Alt + right-click on the viewport).

9.2.3. Browse subtitles using the spotlist

It is possible to quickly navigate from one subtitle line to another one in the composition.

  • Access the SUBTITLES tab from the TOOLS panel and choose TEXT.

  • Double-click on the REC IN or REC OUT timecode:


Of course, this navigation is based on the timecode of the subtitle track, not on that of the composition. If your subtitle track does not start on the first frame of your composition, the timecode will be different.

If the Timecodes in the spotlist are displayed in yellow, it means that they have been adjusted from the original source timecode to match the actual composition.


10.1. About CMS in ICE

The Color Management System defines the working color space of the composition. It is possible to mix color pipelines within the same project; however, a composition can only refer to one CMS.

Color processing in ICE works in 32-bit floating point.

10.1.1. Color Management Systems

First, you need to define which color system you want to use for your composition.

ICE supports 3 different color management systems:

  • the Native CMS of the content.

  • the ACES color management.

  • MTCMS, the custom color management of ICE.

The selection of the Color Management System is done in the Composition Settings.

10.1.2. Workflows

Once the CMS is chosen, you need to define a workflow for color processing.

Available workflows are different according to the type of CMS chosen.

Workflows for Native and MTCMS

Allows you to manually define the Color Primaries, EOTF, etc.

Dolby Vision

Enables the Tone Mapping settings. When Dolby Vision mode is selected, the image viewport displays the Dolby Vision logo.


Automatically sets the Primaries and EOTF according to the HDR10 standard specification. When HDR10 is selected, the image viewport displays an HDR10 logo.


Automatically sets the Primaries and EOTF according to the HLG specification. When HLG is selected, the image viewport displays an HLG logo.

Workflows for ACES

In ACES mode, the Workflow defines the output device transform (also referred to as the ODT).

10.1.3. Safe Gamut

Safe Gamut

Displays the color boundaries in the scopes.

Light Levels

Defines the boundaries for the working color space. This information is used in the HDR validation and in some scopes.

10.1.4. Content Light Levels

Max FALL and Max CLL information can be entered here manually, if the media does not have that information embedded.

Should the media have this information, ICE will display it automatically.

When a HDR Analysis is performed, the Light levels will be filled/replaced by the result of the analysis.

10.1.5. Timed Text Color Management

ICE allows you to manage the subtitle luminance independently of that of the video track. This setting is effective only for an HDR composition (i.e. using a PQ or HLG EOTF) and when using the MTCMS system.

Color Space

When defining the text color space, the conversion to a Rec 2020 HDR color space will be performed properly, especially in the case of burned-in subtitles.


By default, a factor of 4 is applied in ICE (e.g. a 100 nits subtitle will appear at 400 nits). It is possible to check the luminance of the subtitles using the waveform scope.
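This default factor is a plain multiplication of the authored luminance; a minimal sketch (the function name is illustrative, not an ICE API):

```python
def subtitle_display_nits(authored_nits: float, gain_factor: float = 4.0) -> float:
    """Scale the authored subtitle luminance by the timed-text gain factor."""
    return authored_nits * gain_factor

print(subtitle_display_nits(100.0))  # a 100-nit subtitle is displayed at 400.0 nits
```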

When performing an HDR analysis, remember that displayed subtitles are taken into account in the analysis.

10.2. Native CMS

This system uses the native color space of the source clip and allows you to encode your content without any color conversion (e.g. when using a source already encoded in XYZ for DCP output). ICE manages the media as is.


This means that no processing is applied to the source media unless you decide to apply a color correction or use a LUT.

In Native mode, if you define a different output color space and export the media, ICE will only modify the metadata of the format, and no color transformation will happen.

10.3. ACES

ACES, for Academy Color Encoding System, has been developed by AMPAS (Academy of Motion Picture Arts and Sciences) with the intent of helping preserve the color integrity of the content from shooting to archiving.

Specific algorithms allow any kind of source to be processed within the controlled environment of the ACES color space, and for a specific output.


10.3.1. Input Device Transform

When selecting the ACES CMS, ICE will inspect the media metadata to automatically set the right IDT.

However, you may want to modify the IDT per clip. ICE offers a non-exhaustive list of IDTs, including the ones defined for cameras and a list of the main inverse ODTs.

From the GRADE PANEL in the Source tab, choose the IDT corresponding to the properties of each of your clips.


10.3.2. Output Device Transform

In the ACES system, you need to choose an Output Device Transform to display your content properly. ODTs are defined for standard outputs only, not for specific pieces of equipment.

The ODTs are chosen in the Workflow drop-down menu.

10.4. MTCMS

The Marquise Technologies' Color Management System is a custom color management system.


For more precision and reliability, MTCMS uses industry standards to compute on-the-fly the exact values for every possible color. No interpolation is done when using LUTs.

When using MTCMS, it is necessary to indicate the color parameters of the source clip(s) to the software, so that accurate color transformations can be performed.
The color primaries and the transfer curve of the source are set in the GRADE panel, SOURCE tab.


By default, source parameters are set to Rec709. However, if color metadata exists in the media, the software will recognize it and characterize the source accordingly.

  • In the MTCMS panel, specify a Workflow before setting the Color Primaries and EOTF.

10.4.1. Chromatic Adaptation

ICE supports different chromatic adaptation methods, also called Color Appearance Models (CAM). This adaptation makes it possible to match the original RGB color coordinates of the DSM to equivalent CIE XYZ coordinates. These colors do not match from a colorimetric point of view, but rather from a perceptual point of view. This is why ICE offers different methods that will meet the needs of each project. Select a method for adapting the white point of your source media to the one set in the MTCMS:

XYZ Scaling

XYZ Scaling is an older algorithm, generally considered less efficient than the newer ones.


Bradford is more advanced than Von Kries and XYZ Scaling. Because of the varying color constancy of the samples, the algorithm was designed so that corresponding colors represent the same appearance under the different illumination sources, and not necessarily the same sample.

Von Kries

The algorithm assumes that chromatic adaptation is indeed an independent gain control of the cone responses of the human visual system and that the scaling is based on the ratio of the cone responses of the illuminants.
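All of these methods share the same von Kries-style structure: convert the source and destination white points into cone responses, scale by their ratio, and convert back to XYZ. A pure-Python sketch using the published Bradford cone matrix (illustrative only, not ICE's implementation):

```python
# Bradford cone-response matrix (XYZ -> LMS)
BRADFORD = [[ 0.8951,  0.2664, -0.1614],
            [-0.7502,  1.7135,  0.0367],
            [ 0.0389, -0.0685,  1.0296]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_inv(m):
    # Adjugate-based inverse of a 3x3 matrix (cyclic cofactor formula).
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return [[(m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3]
            - m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]) / det
             for i in range(3)] for j in range(3)]

def adaptation_matrix(src_white, dst_white):
    """Build the XYZ-to-XYZ chromatic adaptation matrix (Bradford method)."""
    s = mat_vec(BRADFORD, src_white)   # source white in cone-response space
    d = mat_vec(BRADFORD, dst_white)   # destination white in cone-response space
    scale = [[d[0] / s[0], 0, 0], [0, d[1] / s[1], 0], [0, 0, d[2] / s[2]]]
    return mat_mul(mat_inv(BRADFORD), mat_mul(scale, BRADFORD))

D65 = [0.95047, 1.00000, 1.08883]
D50 = [0.96422, 1.00000, 0.82521]
adapted = mat_vec(adaptation_matrix(D65, D50), D65)  # maps the D65 white onto D50
```

By construction, the source white point is mapped exactly onto the destination white point; all other colors are shifted proportionally in cone-response space.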

10.5. Displays

10.5.1. Mastering Display

The Mastering display is used to describe the capabilities of the display used to master the content: CIE (x,y) chromaticity coordinates for RGB primaries, White Point, and min/max luminance of the mastering display.

The characterization of the mastering display based on the SMPTE ST-2086 standard is a key element when working in PQ (ST-2084) and Dolby Vision. This static metadata is critical for HDR mastering, and ICE carries this information through the whole process.

ST-2086 metadata is stored per composition, so in a project with multiple compositions, you can have different metadata settings for the mastering display.

ICE does not only support the mastering display metadata in files, but can also control display devices via their proprietary protocols to send the right metadata, avoiding the need for users to go through the monitor menus.

When loading a composition (or changing its settings) ICE will communicate the parameters to the display.

  • Select a Preset:


The list of presets varies according to the CMS workflow chosen.

If the monitor list is empty, it might be a problem with Windows access rights on the folder. To fix it, go to the ICE folder, right-click on the displays folder, go to Properties then Security, and grant Full control.

Do not confuse the Mastering Display of the composition settings, used for ST-2086 metadata, with the Mastering Display parameters in the Project settings, which configure the communication between ICE and the reference monitor.

If your monitor is not in the preset list, you can create one by using an existing XML sample and editing it. This XML has to be placed in: \program files\Marquise Technologies\ICE\resources\displays

  • Use Custom to define your own monitor characterization.


10.6. Source Characterization

As explained previously in the chapter Setting the CMS, the color management in ICE requires defining the colorimetric information of the sources placed on the timeline of the composition. With this information, ICE will be able to manage them appropriately for all future operations (e.g. conversions, color grading, export, etc.). Source settings can be managed individually per clip.

ICE reviews existing metadata of media. By default, settings are set on Rec709 and BT.1886 but if there is any color metadata present in the file, ICE will automatically load it as Source settings. If not, you need to characterize the source manually.

  • Access the GRADE panel (F9) at the bottom right of the timeline.

  • Select the clip(s) in the timeline

  • In the Source tab, select the source parameters.

From one colorimetric system to another, the required information may vary slightly.

  • To use the same settings on all the clips, select them on the timeline then use the GANG function.

10.6.1. in NATIVE mode


When you are in NATIVE mode, the most important setting you really need to pay attention to is the Range.

Set up the Range

ICE always computes in FULL range internally, meaning that:

  • If HEAD range is selected, ICE will scale the legal range to fill the FULL range.

  • If FULL range is selected, ICE will keep the native range of the media.

That’s why, if no range information is present in the media, FULL range is set by default. This avoids any additional compression when the source is indeed encoded as FULL content but carries no metadata saying so.
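The HEAD-to-FULL scaling follows the standard legal-range convention (codes 16–235 for 8-bit luma); a sketch for illustration, not ICE code:

```python
def head_to_full(code: int, bits: int = 8) -> float:
    """Scale a legal-range (HEAD) luma code value to full range."""
    lo = 16 << (bits - 8)        # legal black: 16 for 8-bit, 64 for 10-bit
    hi = 235 << (bits - 8)       # legal white: 235 for 8-bit, 940 for 10-bit
    full_max = (1 << bits) - 1   # 255 for 8-bit, 1023 for 10-bit
    return (code - lo) * full_max / (hi - lo)

print(head_to_full(16))   # 0.0   (legal black -> full black)
print(head_to_full(235))  # 255.0 (legal white -> full white)
```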

Be careful with this setting or your output may have levels that are not correct.

Use the Histogram (SHIFT+H) and check the SMPTE box option to see the scale of your source.

10.6.2. in MTCMS mode

  • If you use the MTCMS system, select the color space, the EOTF and the range corresponding to the nature of your source via the drop-down menu.

  • When converting SDR to HDR content, the HDR Gain is used to raise the levels of an SDR source. The percentage gain corresponds to its equivalence in nits (i.e. 100% = 100 nits).

10.7. Apply a Look up Table (LUT)

ICE supports different formats of LUTs including 3D LUT (.3dl), ARRI LUT (.xml) and Iridas LUT (.cube).

You can add your own LUTs in .cube (IRIDAS), .3dl (3d LUT) and .xml (ARRI) format by placing them in the folder \program files\Marquise Technologies\ICE_x.x.x.x\luts

10.7.1. Apply a LUT for the Viewport

ICE allows you to add a LUT for the Viewport.

  • In the Composition Settings, Displays tab, Viewport Display section, use the drop down menu to add a LUT:


10.7.2. Apply a LUT for a clip

ICE allows you to apply a LUT per clip.

  • The CMS must be set on NATIVE

  • Select the clip(s) in the timeline

  • In the GRADE panel / SOURCE, select the LUT using the drop-down menu on the right:

  • To use the same LUT on all the clips, select them on the timeline then use the GANG function.



ICE has a full set of analysis tools for the image, the audio, and also the content bitrate.

It can also load 3rd party QC reports.

11.1. Image Analysis

A large set of Analysis tools is available to perform image quality control.

11.1.1. The Histogram

A histogram is a graphical representation of the tonal distribution in a digital image.
It plots the number of pixels for each tonal value. By looking at the histogram for a specific image one is able to judge the entire tonal distribution at a glance.
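Conceptually, the computation is just a per-value pixel count; a minimal illustration (not ICE code):

```python
from collections import Counter

def histogram(pixels, bins=256):
    """Count how many pixels fall at each tonal value (0..bins-1)."""
    counts = Counter(pixels)
    return [counts.get(v, 0) for v in range(bins)]

# A tiny 8-bit example: mostly dark values with one highlight
h = histogram([0, 0, 1, 1, 1, 255])
print(h[0], h[1], h[255])  # 2 3 1
```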

  • You can use Shift+H to display the Histogram.




The Histogram can be set to display RGB information separately or overlaid (red, green and blue together).


Show the position in the Histogram of the pixel at the cursor’s location


For film:
Line 1 = Ref. black
Line 2 = Ref. Grey
Line 3 = Ref. white


Show Video Range (aka Head/Legal).


Show minimum and maximum values per color channel.

11.1.2. The Vectorscope

The vectorscope is used to visualize chrominance, which is encoded into the video signal as a subcarrier of specific frequency: it plots the Cb and Cr channels against each other, for the purpose of measuring and testing television signals.
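For reference, the Cb and Cr values plotted by the vectorscope can be derived from normalized R'G'B' with the BT.709 coefficients; an illustrative sketch, not ICE's implementation:

```python
def rgb_to_ycbcr_709(r: float, g: float, b: float):
    """Convert normalized (0..1) R'G'B' to Y'CbCr using the BT.709 matrix."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # one vectorscope axis
    cr = (r - y) / 1.5748   # the other vectorscope axis
    return y, cb, cr

# Neutral gray plots at the center of the vectorscope (Cb = Cr = 0)
print(rgb_to_ycbcr_709(0.5, 0.5, 0.5))  # approximately (0.5, 0.0, 0.0)
```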

  • You can use Shift+V to display the Vectorscope.




Show/Hide the color labels.


Show/Hide the color targets. They represent the maximal values according to the Matrix chosen.


Show the position in the Vectorscope of the pixel at the cursor’s location.

11.1.3. CIE 1931 Chromaticity Diagram

This diagram allows you to see how the signal is displayed within the color space chosen.

You can use Shift+G to display the CIE diagram.



Color Space

Displays the working color space for your content (see Settings | CMS).

Safe Gamut

Displays the safe area for a particular color space (see Settings | CMS).

Mastering Display

Displays the capabilities of the Mastering Display. Refer to the Mastering Display section for more information.



Show the position in the diagram of the pixel at the cursor’s location.


Show/Hide wavelengths in nanometers


Show/Hide Mastering Display Information

Safe Gamut

Show/Hide Safe Gamut information

11.1.4. The Waveform

The Waveform is used to measure and display the level of the brightness, or luminance, of the part of the image being drawn onto a screen at the same point in time.

  • You can use Shift+W to display the Waveform.




You can switch between the traditional video waveform and the ST2084 or HLG modes used for monitoring HDR content.


The Waveform can be set to display information in a large variety of styles.

Color Space

You can choose between the RGB or YCbCr color spaces.


Move the mouse on the image to show the level at the cursor’s position (yellow line). This value is displayed in percentage in video mode, or in Nits (Candela/sqm) in ST2084 and HLG modes.

When using the type ST2084 the waveform will display the scale in relation to the settings selected for the Mastering Display.

11.1.5. Zebra patterning

This tool is very similar to the camera zebra mode for controlling the exposure.

The Zebra mode displays in blue the pixels below the boundaries, and in red those above:

  • to display the Zebra, use Alt+Z

The Zebra boundaries are defined by the Mastering Display parameters in the Composition Settings. You can also set your own maximal and minimal Luminance values by choosing Custom Mastering display.

The image scopes are affected by the Zebra display, as they analyse the additional red and blue information on the image.

11.1.6. The Luminance Meter

The Luminance meter is used to measure the photometric brightness of an HDR image. It measures the amount of light that strikes a surface in the picture.

  • You can use Shift+N to display the Luminance meter.

Max Light Level

Informs you about the highest light level on the current frame.

AVG Light Level

Informs you about the average light level on the current frame.


Indicates the highest frame-average brightness (entire content).


Indicates the brightest pixel (entire stream).


Move the mouse on the image to show the level at the cursor’s position (white line).

Live view

Analyses on the fly the MaxFALL and MaxCLL values of the content while it is playing. You can have different nit scales by selecting the desired one with the drop-down menu.

Global view

Allows you to display the full graph statistics values after the launch of a global analysis. For more details refer to the section HDR.
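MaxCLL and MaxFALL follow their standard definitions: the brightest single pixel in the content, and the highest frame-average light level, respectively. A conceptual sketch, not ICE's implementation:

```python
def light_levels(frames):
    """Compute MaxCLL and MaxFALL from per-frame pixel luminance values (nits).

    MaxCLL: the brightest single pixel in the whole content.
    MaxFALL: the highest frame-average light level.
    """
    max_cll = max(max(frame) for frame in frames)
    max_fall = max(sum(frame) / len(frame) for frame in frames)
    return max_cll, max_fall

frames = [[100, 200, 300], [50, 50, 1000]]   # two tiny 3-pixel "frames"
mc, mf = light_levels(frames)
print(mc, round(mf, 1))  # MaxCLL 1000 nits, MaxFALL ~366.7 nits
```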

11.1.7. Pixel Inspector

You can display detailed color information for a specific area of the image (pixel accuracy).

  • To display or hide the Pixel Inspector, use Shift+K.


To display the color values, enlarge the size of the inspector window.

The information is displayed per color channel (R, G, B and A).

11.2. Audio Monitoring

There is a full set of Audio monitoring tools for quality control.

  • You can use Shift+A to display the Audio meter.

Select a Type of meter and the desired scale for the meter:

  • Click on OPTION to toggle the meters.

11.2.1. VU-Meter

The VU-Meter displays a representation of the signal level per audio channel.


11.2.2. Peak Meters

You can display Peak Meters information, Sample Peak and True Peak.

True Peak

This shows the peak level of the waveform no matter how brief its duration.

Sample Peak

This meter shows only peak sample values, not the true waveform peaks.


11.2.3. Loudness Meter

The Loudness meter measures the human perceived loudness of an audio content.

Here the Loudness Meter is based on the EBU R128 Loudness recommendation.


11.2.4. Surround Meter

In this meter, the positions of the full range loudspeakers are marked on a graticule and the amplitude distribution of the sound-field is used to modulate a visual representation, also called "jellyfish display".


11.2.5. Phase Meter

The phase relationships that exist between channels of a multi-channel audio system represent critical information to a quality-control engineer.


11.2.6. Room Meter

This meter allows a real-time 3D visualization of the immersive audio objects positions in the room.

  • Click anywhere in the scope with Alt pressed to orient the room in any direction.

11.3. Peak Signal-to-Noise Ratio

The PSNR computes the peak signal-to-noise ratio, in decibels, between two images. This ratio is often used as a quality measurement between the original and a compressed image. Using the same set of test images, different image enhancement algorithms can be compared systematically to identify whether a particular algorithm produces better results. The higher the PSNR, the better the quality of the compressed or reconstructed image.

The PSNR is usually expressed on a logarithmic decibel scale. However, you must follow the requirements given by the company asking you for a PSNR report, as each company has its own specifications.
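The computation itself is standard; a sketch for grayscale pixel lists (illustrative, not what ICE runs):

```python
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return math.inf               # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Two 8-bit pixel lists differing slightly: MSE = (4 + 16) / 2 = 10
print(round(psnr([52, 55], [50, 51]), 2))  # 38.13
```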

11.3.1. Launch a PSNR analysis

  • To launch a PSNR analysis, you need to have the two sources to compare in the Library. The two sources must come from the same content and be of different quality.

  • Open them in the Dual Viewport. For further information, please read the Dual Viewport chapter.

  • Proceed to a Frame Matching and click on LOCK.

  • Go to the Composition Analysis tool by pressing F6 and select PSNR from the Type drop-down menu.

  • Click on ANALYSE to launch the analysis.

11.3.2. Open the PSNR scope

As soon as the progress bar has finished, you can read the results of the analysis via the PSNR scope. To access it, press Shift+P.


11.3.3. Export a PSNR Report

If you want to export the PSNR report as a PDF file or an XML file, click EXPORT on the timeline.

  • Choose the file type from the File Type drop-down menu.


11.4. Bitrate Meter

The Bitrate Meter is used to measure the bitrate of the content playing.

  • You can use Shift+B to display the Bitrate meter.


Indicates the current bitrate in Mbps when playing.

Max / Min

Indicates the maximum and minimum bitrate in Mbps recorded by the meter.


Shows the average analysed bitrate.

The Bitrate meter is active only when you launch the playback. If you stop the playback, the counter will be reset.

  • You can select the desired scale of reference to read the measures in the drop-down menu.

11.5. Video Pipeline Diagram

The Video Pipeline Diagram allows you to have a quick look at the video pipeline set up.

  • Press Alt+F6 to display the diagram:


This very useful tool gives you an immediate view of your displays connection settings as well as your color pipeline.

Thanks to a specific color code, you can easily distinguish the different types of output signals: blue for SDI connections and cyan for HDMI.

11.6. File based QC Support

Marquise Technologies' solutions integrate with automated file-based QC tools, providing a human review of the error reports.

Currently we provide support for the following automated QC reports:

  • Aurora (Tektronix)

  • Baton (Interra)

  • Pulsar (Venera)

  • Vidchecker (Telestream)

When loading the XML reports from these automated QC solutions, the operator is able to manually inspect the errors flagged for a media by navigating on the timeline from error to error.

11.6.1. Loading a QC report

  • To load a QC report for the content you have on the TimeLine, click on the IMPORT button in the Menu bar of the TimeLine.

  • Browse your directories to select the error report, and select the desired QC tool in the FILE TYPE drop-down menu:


Once the report is loaded, the errors are displayed on the timeline on the Locators' track:

  • To navigate from error to error, open the TOOLS panel from the Command Panel access then go to the LOCATORS tab:

  • Click on a specific Timecode to jump to the location of the error.

  • To display the full error name, you can resize the Command Panel horizontally.


12.1. About HDR

12.1.1. What is High Dynamic Range?

“High dynamic range is specified and designed for capturing, processing, and reproducing scene imagery, with increased shadow and highlight detail beyond current SDR video and cinema systems capabilities.” (SMPTE® Study Group Report, High-Dynamic-Range (HDR) Imaging Ecosystem).

HDR offers the ability to capture, process, distribute, and display large contrast ranges, resulting in more realistic images. The images are not just brighter with contrasts artificially dilated.

The brightness of objects is more faithful, and the details in the highlights and the shadows are better represented.

HDR makes it possible to have images with more depth and better saturated highlights. Differences in brightness between indoor and outdoor scenes make more sense.

Capturing High Dynamic Range moving images has been a reality since the first high-end digital cameras. RAW images naturally have a very high dynamic range, with an average of 14 stops for the ARRI Alexa and the SONY F65. These camera manufacturers already offer wide-gamut capture and 6K or 8K resolution at up to 120 frames per second. The bottleneck for HDR was the post-production workflows, not ready because not yet standardized, and the absence of capable display devices.

Now that some pioneers, like Dolby, have led the way, and that standardization committees and industry alliances have done great work to specify what HDR is, deliveries in HDR have become a reality, pushed by consumer market opportunities.

  • The Blu-ray Disc Association has already published specific metadata and requirements for HDR, based on the HEVC codec, like the Ultra HD Blu-ray HDR disc format using HEVC, HDR10 and, optionally, Dolby Vision.

  • The Interoperable Master Format Studio Profile applications have been extended to support HDR content and metadata (also referred to as Application 2e+).

12.1.2. Standards

A variety of SMPTE standards specify the different types of HDR.


SMPTE ST-2084

ST-2084 is based on the “perceptual quantizer” (PQ) initially proposed by Dolby. It defines the EOTF (Electro-Optical Transfer Function, a gamma curve) for the HDR10 and Dolby Vision formats.

This non-linear curve defines how Luminance is increasing above the standard white reference (100 nits), in the spectral highlights. ST-2084 is defined up to 10’000 nits. (Current HDR display devices support a maximum of 4’000 nits – Dolby Pulsar).
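For reference, the PQ curve can be transcribed directly from the published ST-2084 constants (an illustrative transcription, not ICE code):

```python
# SMPTE ST-2084 (PQ) constants, as exact rationals from the standard
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Map a non-linear PQ code value (0..1) to display luminance in nits."""
    p = signal ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(nits: float) -> float:
    """Map display luminance in nits back to a PQ code value (0..1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

print(pq_eotf(0.0))   # 0.0 nits
print(pq_eotf(1.0))   # 10000.0 nits (the curve's defined maximum)
```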


SMPTE ST-2086

“Mastering Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut Images”: this standard accompanies ST-2084 and defines the static metadata embedded in the HDR content.

This metadata describes the capabilities of the display used to master the content: the CIE (x,y) chromaticity coordinates of the RGB primaries, the white point, and the minimum/maximum luminance of the mastering display.

This is a characterization of the hardware used and has nothing to do with the MaxFALL and MaxCLL metadata, which are statistical measures of the content.

These parameters are essential to know what you are looking at.
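For illustration, the static metadata set described above can be sketched as a simple record; the field names here are illustrative and do not reflect the SMPTE byte-level encoding:

```python
from dataclasses import dataclass
from typing import Tuple

# A minimal sketch of the ST 2086 static metadata set described above.
@dataclass
class MasteringDisplayMetadata:
    # CIE 1931 (x, y) chromaticities of the mastering display RGB primaries
    red_xy: Tuple[float, float]
    green_xy: Tuple[float, float]
    blue_xy: Tuple[float, float]
    white_xy: Tuple[float, float]   # mastering display white point
    min_luminance: float            # cd/m2 (nits)
    max_luminance: float            # cd/m2 (nits)

# Hypothetical example: a P3-D65 display mastered at 1000 nits
p3_d65_1000nits = MasteringDisplayMetadata(
    red_xy=(0.680, 0.320),
    green_xy=(0.265, 0.690),
    blue_xy=(0.150, 0.060),
    white_xy=(0.3127, 0.3290),
    min_luminance=0.0001,
    max_luminance=1000.0,
)
```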

SMPTE ST-2094 Dynamic Metadata for Color Volume Transform

The metadata are intended for transforming high dynamic range (HDR) and wide color gamut (WCG) image essence for presentation on a display having a smaller color volume than that of the mastering display. The metadata are content-dependent and can vary scene by scene or image by image.

Four technologies have been specified so far, although Applications 1 and 4 are currently the most widespread.

ST 2094-10 DMCVT – Application #1

A standardization of Dolby’s technology (Parametric Tone Mapping)

ST 2094-20 DMCVT – Application #2

A standardization of Philips’ technology (Parameter-based Color Volume Reconstruction)

ST 2094-30 DMCVT – Application #3

A standardization of Technicolor’s technology (Reference-based Color Volume Remapping)

ST 2094-40 DMCVT – Application #4

A standardization of Samsung’s technology (Scene-based Color Volume Mapping)


ITU-R BT.2100 Hybrid Log Gamma (HLG)

Specified in ITU-R BT.2100, the Hybrid Log Gamma curve comes from a joint development by the BBC and NHK. Its aim is to ensure backward compatibility with SDR devices and content, a key element for broadcasters in the adoption of, and transition to, HDR. The main differences with ST-2084 are that its definition reaches 5,000 nits and that it does not carry specific mastering metadata.
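As an illustration, the HLG OETF can be sketched with the constants given in ITU-R BT.2100; again, a conceptual sketch rather than a production implementation:

```python
import math

# ITU-R BT.2100 HLG OETF constants
a = 0.17883277
b = 1 - 4 * a                  # 0.28466892
c = 0.5 - a * math.log(4 * a)  # ~0.55991073

def hlg_oetf(e: float) -> float:
    """Non-linear HLG signal for normalized scene linear light e in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)   # square-root (gamma-like) segment
    return a * math.log(12 * e - b) + c   # logarithmic segment
```

The square-root lower segment is what provides the backward compatibility with conventional SDR gamma displays, while the logarithmic upper segment extends the range into the highlights.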


SMPTE ST 2067-21 Interoperable Master Format – Application #2 Extended

The ST 2067-21 Interoperable Master Format – Application #2 Extended has been extended to support HDR content and metadata (also referred to as Studio Profile).

12.1.3. Vocabulary

Below is an overview of the vocabulary frequently used with HDR technology, and used throughout this documentation:

  • WCG: Wide Color Gamut - Rec.2020 has 2x more colors than Rec.709.

  • HDR: High Dynamic Range TV (ITU-R BT.2100)

  • SDR: Standard Dynamic Range TV (Rec.601, Rec.709, Rec.2020)

  • HFR: High Frame Rate (100 & 120 fps)

  • HEVC: High Efficiency Video Coding (H.265) - about 2x more efficient than AVC

  • PQ: Perceptual Quantizer Transfer Function for HDR signals (SMPTE ST 2084, ITU-R BT.2100)

  • HLG: Hybrid Log Gamma Transfer Function for HDR signals (ITU-R BT.2100)

  • HDR10: 10-bit HDR using BT.2020, PQ and static metadata

  • DoVi: Dolby Vision – 12-bit HDR, BT.2020, PQ, Dolby Vision dynamic metadata

  • DMCVT: Dynamic Metadata for Color Volume Transforms SMPTE ST 2094

  • EOTF: Electro-Optical Transfer Function.

12.1.4. HDR Global Analysis

The HDR global analysis tool measures the light level of the composition range defined by the user to obtain the MaxFALL and the MaxCLL values.

The MaxFALL/MaxCLL information is mandatory for some deliverables, especially for HDR10.

Maximum Frame Average Light Level (MaxFALL) is a metadata value recording the average brightness of all the pixels in the brightest frame of a given program.

Maximum Content Light Level (MaxCLL) is a metadata value recording the nit level of the brightest pixel in the whole program.

Used in ST-2086 among others, these values are important in order to control and display the images accurately.
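As a rough illustration of these two definitions, the following Python sketch derives MaxCLL and MaxFALL from per-frame pixel luminances (in nits) restricted to the active image area. This is a conceptual sketch only; the formal measurement (CTA-861.3) operates on the maximum of the R, G, B components of each pixel rather than on a single luminance value:

```python
def hdr_light_levels(frames):
    """frames: list of 2-D lists of per-pixel luminance in nits,
    restricted to the active image area (blanking excluded)."""
    max_cll = 0.0   # brightest pixel anywhere in the program
    max_fall = 0.0  # highest frame-average light level
    for frame in frames:
        pixels = [p for row in frame for p in row]
        max_cll = max(max_cll, max(pixels))
        max_fall = max(max_fall, sum(pixels) / len(pixels))
    return max_cll, max_fall
```

Excluding the blanking is essential here: black letterbox rows would drag every frame average down and produce an artificially low MaxFALL, which is exactly why the active image area must be set before the analysis (see below).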

Setup the Active Image Area

Before launching the analysis, you need to define the aspect ratio of the content, so that any blanking area which could skew the results is excluded from the analysis.

  • Go to the Composition Settings then inside the FORMAT tab, select the Frame Aspect ratio to define the active area.

  • To see the active image area on the viewport, press Alt+B to display a green border frame:

Launch the analysis

Once the frame aspect ratio has been set up to give the appropriate active image area for your content, you can launch the analysis.

  • Go to the Composition Analysis by pressing the F6 key and select HDR Statistics with the Type drop-down menu:

  • Select the Source Color Space corresponding to the content and launch the analysis by clicking on the Analyze button:

View analysis results

As soon as the analysis has been completed, you can access the results of the analysis by revealing the Luminance Meter with the shortcut SHIFT+N.

  • Choose the Global type with the drop-down menu to display the full graph of the analysis:


The first values displayed at the top left are the MaxFALL and MaxCLL values of the current image displayed on the viewport. The values below concern all the analyzed content.

  • The Content Light Levels are reported automatically after an analysis into the Composition Settings in the CMS tab:


The Content Light Levels displayed in the CMS tab of the Composition Settings are updated after a new analysis.

It is also possible to manually type the MaxFALL and MaxCLL values into these fields.

Export an HDR statistics report
  • To export the HDR statistics as a PDF or an XML file, click on the EXPORT button. Define a file path, name the report and choose the desired File Type with the drop-down menu:


12.2. Dolby Vision

Dolby Vision™ is a proprietary HDR technology developed by Dolby Laboratories, Inc., using the PQ curve and based on the operating principle of Parametric Tone Mapping.

See the section Third-party Licenses of this manual regarding all obligations related to the use of Dolby™ technologies.

12.2.1. Dolby Vision Content Mapping and Metadata versions

Marquise Technologies' solutions support the different versions of Dolby Vision’s algorithms: Content Mapping version 2.9 (CMv2.9) and Content Mapping version 4.0 (CMv4, beta).

These algorithms are linked with different versions of Dolby Vision Metadata:

  • Metadata 2.0.5 (used by CMv2.9)

  • Metadata 4.0.2 (used by CMv4)

  • Metadata 5.1.0 (used by CMv4)

Users should select Metadata 2.0.5 for new projects unless otherwise instructed by the studio during the transition phase to CMv4 ecosystem adoption.

Projects started or created in one version cannot be converted to another version.

12.2.2. Content Mapping Unit (CMU)

The Dolby Vision Content Mapping Unit (CMU) is able to emulate a number of secondary display targets and aims to produce images adapted to those displays. It maps the content with the metadata for a specific display, whether at a standard brightness (e.g. SDR 100 nits) or higher.

The Dolby Vision PQ images as well as the color volume transform metadata are sent to the CMU, which "renders" the images before they are output to a connected display device.

Marquise Technologies' solutions support the eCMU (external CMU) as well as the iCMU (internal CMU). Both require a valid Dolby Vision license provided by Dolby Laboratories.

The preview of Dolby Vision content requires either an iCMU license or an eCMU device. Inspecting the metadata alone does not require Dolby’s CMU.

Setup of the eCMU

The Dolby Vision metadata are sent over the SDI to the eCMU. An Ethernet connection allows the software to control the eCMU.

The eCMU has to be set up following the Dolby Vision CMU user’s manual. The following picture shows a typical setup of the displays:

  • To enable the eCMU, select eCMU in the Project Settings, Dolby, Dolby Vision tab.

Communication between the computer and the eCMU can be verified on the eCMU Home Page, by typing its IP address into a web browser:


Sometimes the Firewall can interfere with the connection between the eCMU and the computer. Remember to configure it accordingly.
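If in doubt, basic network reachability can also be checked outside the browser. This hypothetical Python sketch simply tests whether a TCP connection to the eCMU's web port can be opened (the address shown is a placeholder, not a real eCMU IP):

```python
import socket

ECMU_ADDRESS = "192.168.1.50"   # hypothetical; use your eCMU's actual IP

def ecmu_reachable(host: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the eCMU's web port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # connection refused, timed out, or blocked (e.g. by a firewall)
        return False
```

If this check fails while the eCMU is powered on and on the same network, the firewall configuration is a likely culprit.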

The eCMU’s status can also be visualised in the Video Pipeline Diagram by pressing Alt+F6.

Go to the eCMU Home Page to control the eCMU:

To perform color corrections, the signal must be set on Normal.

If Pass through is selected, the image is displayed without the corrections (bypass).

Setup of the Video Output with the eCMU

In order to display Dolby Vision content appropriately, the video output settings in the Project Settings (F1) have to respect some rules:


  • Video format: the RGB 4444 format is mandatory.

  • SDI: choose level B.

  • Scaling: must be set to HEAD.

  • Pixel Format: select 16 bit.

Setup of the iCMU

See Project Settings / Dolby to install the license for the iCMU.

When using the iCMU, the tone mapping is automatically applied based on the Dolby Vision metadata before it is sent through the SDI or displayed on the image viewport of the software.

About HDMI Tunneling

This feature refers to the capability of outputting the Dolby Vision metadata through HDMI, using an AJA Kona5, directly to a Dolby Vision consumer TV device in order to simulate the behavior of the image.

If your video IO card is compatible, then HDMI tunneling for Dolby Vision happens automatically.

12.2.3. Composition settings for Dolby Vision

The Composition settings need to be carefully chosen for Dolby Vision content.

They are set in the Control Panel, SETTINGS.

Select the Versioning mode

It is important to set the Composition in versioning mode to help prevent undesired modifications on the image.

  • Go to the Composition settings / GENERAL tab. Select the VERSIONING type.

Select the Active Area Aspect

Active image area metadata, including the Canvas Aspect Ratio and Aspect Ratio Images information, are crucial and mandatory when using Dolby Vision technology.

These metadata are required to know the active image area of pixels to be processed. For example, this information is critical in cases of letterboxing and pillarboxing, in order to exclude those areas from any processing.

To select the Active Area, go to the FORMAT tab of the Composition settings and select your aspect ratio from the drop-down menu on your right:


To see the active image area on the viewport, press Alt+B to display a green border frame.

If you are importing Dolby Vision metadata, make sure that the Active Area Aspect settings of your composition match those used in the existing Dolby Vision MDF file.

Setup the color pipeline for Dolby Vision
  • To setup the Color pipeline, go to the Composition Settings / CMS and choose a color system:


Choosing the NATIVE mode for Dolby Vision content requires a source in PQ Rec2020 or P3. Also, be sure that no Look-Up Table has been applied to it.


Using MTCMS, you first need to define the color primaries and the transfer curve (EOTF):

  • To work in Dolby Vision, the Transfer Response Curve must be set on ST 2084 (PQ).

  • The color primaries can be either Rec2020 or P3, according to your source content.

  • You can also setup the Luminance Levels for minimum and maximum brightness.

  • Remember that when using MTCMS, it is always necessary to identify the source color space:
    Source color parameters for the clip(s) are set in the SOURCE tab of the GRADE Panel.

  • Whichever CMS you chose, select Dolby Vision in the Workflow list:


This will enable the Tone Mapping parameters.

  • Select the metadata version:

  • Setup the Mastering Display for Dolby Vision in SETTINGS / DISPLAYS:


For more information, please refer to the chapter Mastering Display.

Set the Mastering Display after having set the CMS properly, so that the right list of monitor characterizations is displayed.

If you are importing Dolby Vision metadata, make sure that the Mastering Display information of the composition matches that used in your existing Dolby Vision MDF file (i.e. if the Dolby Pulsar is used in the Dolby Vision MDF file, select the Dolby Pulsar from the template list).

If you need further information on Color Management, please read the chapter Color Management.

12.2.4. Import Dolby Vision metadata

Marquise Technologies' solutions support the two variations of Dolby Vision metadata:

  • Dolby Vision color volume transform metadata in an XML file, aka MDF file.
    Further referred to in this document as "Dolby Vision metadata XML file" or "Dolby Vision MDF file".

  • Dolby Vision color volume transform metadata in an ISXD track (MXF file), following the RDD 47 "Interoperable Master Format – Isochronous Stream of XML Documents (ISXD) Plugin" specification.
    Further referred to in this document as "Dolby Vision metadata MXF file" or "Dolby Vision metadata MXF track".

Import Dolby Vision metadata XML file

After the source content is added on the timeline and the composition settings properly set, you can add the corresponding Dolby Vision metadata XML.

  • Right-click on the desired clip on the timeline and choose Dolby Vision Metadata Import:

  • Choose the file through the browser. Select the type then validate with OK.
    A successful import will result in displaying a yellow bar on the concerned clip:


The Import Dolby Vision Metadata option is visible only if the Composition Settings are set to the Dolby Vision workflow and a tone mapping has been chosen.

Import Dolby Vision metadata MXF track

The RDD 47 ISXD file is an MXF file containing the Dolby Vision metadata. This MXF behaves like any other MXF essence, and is manipulated as a track.

First, you need to reference the MXF file in the Media bin of the project.

  • Drag&drop or import the file:


If the MXF file is correct, the Dolby Vision logo is displayed in the thumbnail.

To import the MXF on the timeline, you must first add a special Auxiliary layer.

  • Press X+ on the timeline and select the corresponding metadata MXF file:


This will add a new layer on the timeline:

  • Drag&drop the MXF file on the Auxiliary track:


The Dolby Vision metadata MXF track is now on the timeline:


12.2.5. Inspecting Dolby Vision metadata

To control the Dolby Vision metadata:

  • In the Command Panel / Metadata / Dynamic, the Dolby Vision metadata are displayed:


The same applies when using a Dolby Vision metadata MXF track file: the metadata are listed in the Auxiliary track section:


The presence of metadata in the timeline is shown as a color bar on each clip with metadata:


If there is no bar (and the dynamic tone mapping is properly selected), it means that your content has no metadata or that the values have been reset.

In the case of overlapping tracks, the Control Panel Metadata always displays the values of the top track.

If you want to be able to read the lower track metadata, you can hide the top track by clicking to the left of the layer control box:


12.2.6. Playback of Dolby Vision content with the CMU

The CMU allows playback of Dolby Vision content with metadata for the different targets present in the Dolby Vision metadata file.

All targets available for a Dolby Vision metadata file are available in the Composition Settings / DISPLAYS panel.

  • Choose a Target from the drop-down menu:


Target Displays are available in the DISPLAYS panel only if a valid Dolby license is loaded and the CMS workflow is set on Dolby Vision.

  • You can also activate the dynamic tone mapping display for the viewport:


In this case the iCMU is used to display the final result on the image viewport.

12.2.7. Dual output for Dolby Vision

It is possible to output 2 different streams of Dolby Vision content, with or without tone mapping applied:


Refer also to Dual Video Output for setting up the video output matrix.

12.2.8. Analyzing clips

Image Characteristics Analysis

The Global Analysis and the Image Characteristics Analysis are two different analyses. The behavior and the use of tone mapping are not the same.

The generation of Dolby Vision metadata is based on the Image Characteristics Analysis made on each clip.

The Image Characteristics Analysis is based on the levels of the image and not on the number of pixels of a given value.

Proceeding with the analysis

Setting the timeline for an Image Characteristics Analysis of clips is done like for the Global Analysis. Please read the chapter HDR Global Analysis for reference.

  • Select the clip you need to analyse and open the Composition Analysis panel using the F6 key and select HDR Statistics from the drop-down menu.

If you don’t select any clip, the Global Analysis will be processed instead.

The clip has to be included within the composition range to be analyzed:


12.2.9. Editing the metadata

Common editing operations also apply in a Dolby Vision workflow. Nevertheless, some small differences in the behavior of metadata are worth noting.

Adjusting the duration of a clip

It is possible to adjust the duration of the clips, in particular to extend a clip duration with the cursor:

Cut a clip

Cuts in Dolby Vision correspond to changes of metadata from one image to another, not from one shot to another. Although Dolby Vision cuts may coincide with narrative cuts, keep in mind that the reference unit for Dolby Vision is the frame, not the clip.

  • To cut a clip, right-click on the clip in the timeline to access the Clip operations menu. By clicking on Cut, the clip will be split at the Playhead position:


When cutting a clip with Dolby Vision metadata, the two newly generated halves of the clip keep the same metadata. The behavior is the same when using the Razor tool. Please read the chapter Cutting clips for more information on editing operations.

Join function

The join function is accessible via the Clip operations menu. Place the Playhead on the desired clip to be joined, right-click on the clip and select Join:


When joining two clips, the metadata of the first clip are applied to the second one.

Working with multiple video tracks

It is possible to work in Dolby Vision with multiple video tracks.

Because the composition is set in VERSIONING mode, the output of the composition will always be flattened.

The two following schemes will help you understand how multiple tracks are rendered:

1. The upper track overlays the gap


2. The top track overlays a part of the clip from the bottom track

In this case, the Dolby Vision metadata that are taken into account are ALWAYS the ones from the top track:

Management of gaps

Dolby Vision doesn’t allow any gap in the metadata. When there are gaps between shots on the timeline, you have to fill them with black frames matching the duration of the gap. Once done, launch an Image Characteristics Analysis on them.

Editing of a muxed Dolby Vision file

A muxed Dolby Vision file is an MXF JPEG2000 with the Dolby Vision metadata embedded. This file is described in a forthcoming SMPTE document, RDD 56 "Track File for JPEG 2000 Codestreams with Time-Synchronous Metadata".

In the timeline, such a file is represented with the metadata colored in white, meaning read-only.

To edit the file, you must first perform a cut:

  • Import your muxed file into the timeline, then right-click on the clip to access the Dolby Vision Metadata operations. Click on Cut:


The muxed file is now cut into several clips, and the clip color bar has changed from white to yellow, meaning the metadata are now editable.

Extract metadata from a muxed Dolby Vision file

To extract the metadata of a muxed Dolby Vision file, follow these steps:

  • Place the file on the timeline and right-click on the clip to access the Dolby Vision Metadata operations menu. Then click on Extract:

  • In the window Export Dolby Vision Metadata, select a location, name your XML file and select Dolby Vision MDF as File Type.
    Click on OK to validate.

Exporting a Dolby Vision MDF file

It is possible to export a Dolby Vision MDF file from the timeline.

The actual composition range is not taken into account for this operation: all the clips of the timeline will be included in the MDF file.

  • To export a Dolby Vision MDF file, click the Export button:

  • When the export window opens, choose the path of the file to export. Then, after having named it, select the Dolby Vision MDF extension from the File Type drop-down menu, in the EDIT DECISION LIST category:

  • Click on OK to export the file.

13. DCP QC

ICE offers special tools for a proper QC and validation process of DCP packages.

ICE is able to work with complex packages, including multi CPLs and multi PKLs content as well as supplemental packages.


13.1. Importing a DCP

Importing a DCP into a Project is similar to the process of ingesting a DCP into a DCI compliant server. Each DCP has at least one composition playlist (CPL) for the original version and possibly a number of sub versions, each with its own CPL.

There are different ways to import a DCP package:

  1. Drag the DCP (root folder) directly into the Timeline of an existing project.

  2. Use the Import function from the starting menu.

  3. Create a new project using the DCP Import tool.

Once the DCP is imported within a project, all the elements in the timeline are ready for screening, quality control, etc.

13.1.1. Drag & Drop a DCP

Refer to chapter Media Import Drag & Drop

13.1.2. Import a DCP with the IMPORT function

Refer to chapter Media Import Import Function

13.2. Importing an encrypted DCP

When a DPC is encrypted, the first step is to import the package, either by Drag & Drop or using the IMPORT button.

Then proceed to the import of the KDM itself, using the IMPORT button from the bottom menu.

Browse the directories, select the KDM and click OK.

You can filter your search by selecting the type of file:


13.3. DCP Validation

Using the DCP Import Tool from the starting menu, it is possible to validate the content of your package.

13.3.1. Contents tab

The Contents window displays what is present in the DCP: resources and metadata are clearly displayed for an immediate overview of the package. You can see at a glance how many CPLs are in the package and how many reels each contains, and for each of them check the presence of video, audio and even auxiliary tracks (e.g. Dolby ATMOS). The ENCRYPTION column also shows immediately whether the content has been secured with a KDM.


Click on the + and - of each category to expand or collapse the tree view.

13.3.2. Security tab

This tab can be ignored if the DCP is not encrypted. This is where the DCP Key Delivery Messages (KDMs) are to be loaded BEFORE the DCP is imported.

Make sure that the KDM for the CPL you are importing has been made using the software’s certificate.


There are two ways to import a KDM:

  • By loading it from the Marquise Technologies platform called "KeyMaster":

If you have a KeyMaster account and the KDM is available on the platform, it will be downloaded directly when clicking the KEYMASTER button.

  • By importing a KDM from the System:

When you import a secure DCP, the DKDM folder is scanned to find the KDM referenced by the secure DCP itself, in order to decrypt its RSA and AES keys as well as the AES-encrypted essences.

The following address shows where the DKDM folder is located in your system:


As long as the unaltered KDM is placed in this folder, it will automatically be parsed to determine the proper KDM for each DCP CPL that needs to be decrypted.

You may store as many KDMs in the DKDM folder as you like but it is a good idea to store the originals in another location and purge the DKDM folder periodically for the sake of organization.

  • To import the KDM, click on the Import KDM button, then in the newly opened window, navigate to your KDM file. The loaded key appears in the KDM list:


If the KDM is not correct and therefore cannot decrypt the content, the content will still be loaded in the timeline but the files will appear as corrupted (i.e. noisy pictures, etc.).

Even if you cannot access the content in the case of a faulty key, you will still be able to import the package to check its structure.

13.3.3. Validation tab

When importing a DCP, it is possible to launch a Validation in order to check the integrity of the package. Tests are based on the standards used by the DCI specifications.

  • To run a validation test, click on EXECUTE. The package is analyzed in its entirety, so depending on its size, the time required to analyze a DCP may vary.

The SELECT ALL and DESELECT ALL buttons allow you to select and deselect the tests in the list. The CLEAR button is used to erase the last status of past tests.

As soon as the validation test is finished, you have the possibility to export a detailed analysis as a PDF report. To do this, click SAVE and set the destination path for the report. The purpose of each test is detailed in the report, making it possible to rectify a possible error in the case of a failed test.

The report has several types of status:


Displayed in green. The test in question was successfully completed.


Displayed in red. The test has failed.


Displayed in yellow. Unlike the FAILED, this test result will not prevent the operation of the DCP. Nevertheless, it draws your attention to some results.


Displayed in white. The test is not relevant to the package; it can neither succeed nor fail (e.g. the Key Delivery test category for a non-encrypted DCP).


Test has not been performed yet.

An explanation of the validation tests can be found in the Appendix DCP Validation.

13.3.4. Log tab

If a DCP package is corrupted and cannot be opened, the Log window will open, showing the details of the errors found in the package.


13.3.5. Confirm the import

If you don’t need to run any validation test or load a KDM, you can go directly to this step. At this stage you are ready to begin the Import DCP process so click on the OK button.

By clicking OK, the package with its assets will be imported into the media bin of an existing project or as a new project. The new project will be built based on the structure of the DCP. Concretely, that means that every CPL in the package will be mapped to a composition with its own timeline. The video and audio tracks as well as the subtitle tracks will be added to these timelines according to what the DCP CPLs contain.

Although each DCP CPL will generate its own Composition, this does not apply to the media assets. In a DCP, a video, audio or subtitle track can be referenced by more than one composition. These tracks will only be imported once in the project media bin.

In the case of importing a Supplemental DCP, first import the original version before the supplemental version. Otherwise, some assets referenced by the supplemental version would be missing.

14. IMF QC

IMF packages require special tools for a proper QC and validation process. ICE has dedicated tools able to work with complex packages, including multi CPLs content, supplemental packages and sidecar files.

In addition to its own validation tools, ICE directly integrates the Photon validation solution.

14.1. Importing an IMF package

There are different ways to import an IMF package into a project:

  1. Drag the IMF (root folder) directly into the Timeline of an existing project.

  2. Use the Import function from the starting menu.

  3. Create a new project using the IMF Import tool.

Once the IMF is imported within a composition, all the elements in the timeline are ready for screening, quality control, etc.

14.1.1. Drag & Drop an IMF

Refer to chapter Media Import Drag & Drop

14.1.2. Import an IMF with the IMPORT function

Refer to chapter Import Media Import Function

14.1.3. Using the IMF Import Tool from the Start menu

The IMF Import tool automatically creates a Project and places the video and audio essences of the IMF CPL into a composition, with each clip arranged in the timeline just as it is referenced by the IMF CPL.

Accessing the Import IMF Tool from the Start menu

It is possible to create a new project based on the IMF to be imported. To do so, when starting the application click on Import IMF button from the Start menu:


If the IMF has differential or supplemental packages, then their additional essences (if any) are imported into the same chosen media folder as the original IMF’s essences.

Each CPL of the package will be a Composition of the Project.

This way, the operator may import an IMF with multiple CPLs in a single process and easily switch between the different Compositions.

Select the Package Directory
  • Select the IMF to import using the browse button:


Once you have selected the package folder and pressed OK, the contents are quickly analyzed and the various elements are displayed in the expanded directory tree view. You may scroll through the directory tree and expand or collapse individual elements by clicking on the plus and minus signs to the left of each item:


Do not modify the folder’s name or contents in any way or you risk destroying the IMF.

In the screen shot above, the various composition playlists are displayed. Each of them contains one or more reels. These reels also contain a video track and optionally audio tracks and subtitles.

The name of each asset is displayed in the left most column while additional metadata is displayed in the remaining columns as available or relative to the asset itself. This may help to quickly identify the contents of a particular IMF and its sub versions before going through the actual process of importing it, especially if the IMF name does not provide the information you need to do so. This may be the case with sub versions or with multi-segments IMFs.

Project Name

This is where you choose a name for the Project that will contain the entire contents of the IMF you wish to import. By default, the folder name of the IMF is displayed.

  • To modify the name of the Project, click on the Name text field, enter a new name and press Enter or click outside of the Name text field.

14.2. IMF Validation

Using the IMF Import Tool from the starting menu, it is possible to validate the content of your package.

14.2.1. Contents

The Contents window displays what is present in the IMF: resources, sidecars, metadata are clearly displayed for an immediate overview of the package.


It can be very useful, in the case of an HDR IMF, to check the presence of metadata at this step:


You can expand the details of each resource by clicking on the “+” button, or collapse them with the “-“ button:


Use the mouse wheel to go down to the asset list.

14.2.2. Output Profiles

The Output Profiles tab displays the list of the OPLs present in the package.


14.2.3. Validation

The Validation tool allows you to verify the integrity and compliance of the IMF package, as well as the type of application used. This is done by launching a number of analysis tests for each category. Each test is briefly described in the TEST column.

  • You can do a full scan by clicking SELECT ALL. A small yellow line appears next to the tests to be performed. Click on EXECUTE to start the analysis tests:


Once the validation test is complete, the status is displayed using the following labels:


the test was passed successfully.


the test is irrelevant. This does not cause the package to fail; it may mean that the package is not concerned by the test. For example, if an IMF does not use a digital signature, the X509 test will appear with a WARNING label in order to draw attention to it.


the test has failed.


the test is not relevant for the type of package being scanned.

The tests validating the Applications verify all supported applications. It is therefore normal that the applications not used by the IMF package appear as failed. Note that several applications may be present in the package.

  • You can save the results of the validation as a PDF file by clicking on the SAVE button. This file details all the test results, with explanations of the tests performed, allowing you to know which tests have failed.

An explanation of the validation tests can be found in the Appendix IMF Validation.

14.2.4. Photon

Photon is an open source tool for parsing, interpreting and validating the constituent files that make up an Interoperable Master Package. This validation test is a supplemental tool to check the IMP and is integrated into the Netflix ingestion pipeline, among others. Because this validation test is critical for some deliveries, especially for Application 2e, we frequently update the version of Photon.

  • To start an analysis, click on EXECUTE at the bottom left corner of the window.

Photon is not yet capable of validating an IMF Dolby Vision package.

14.2.5. LOG

If an IMF package is corrupted and cannot be opened, the Log window will open, showing the details of the errors found in the package.

14.2.6. Import

Performing a validation or a Photon validation is not a mandatory step for importing the package. At any time, you can click OK to finalize the import.

After the import, the IMF package is displayed in the Project media bin and the first composition is imported onto the TimeLine.


1. About ICE configuration

An ABOUT section in ICE gives you an essential overview of your system configuration.

It displays information about the installed version of the system and its release date. When asking for support, this information panel helps answer the first question: “what version of ICE are you working on?”

  • The ABOUT section is accessible by pressing the F12 key.

1.1. Release

This panel displays the version number of ICE currently running:

1.2. License Information

This panel recaps what options are included with the current License (if any).

1.3. System Information

This panel displays information about the system that runs ICE, such as the number of processors used or the type of GPU.

1.4. Plugins

This panel displays the plugins (options) that are currently installed with your version of ICE:

1.5. System Certificates

This panel displays information about the digital certificate that has been automatically created for your ICE and where it is located. This certificate must be provided in order to create KDMs for ICE.

2. Where to find certificates for ICE

2.1. Public Certificate containing its Public Key and Digital Signature




C:\ProgramData\Marquise Technologies\ICE\certificates


This file contains ICE’s public key and digital signature. It must be sent to any DCI mastering station that needs to create a secure DCP for use by ICE.

2.2. Public Certificate Chain




C:\ProgramData\Marquise Technologies\ICE\certificates


This file contains the entire certificate chain of digital signatures of the ICE. It is required if the certificate needs to be verified. It may be distributed freely along with the Public Leaf Certificate. (.pem = Privacy-Enhanced Mail)
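The certificate files are standard PEM documents, so generic tooling can inspect them. As an illustration, a certificate's fingerprint, which a KDM generator may ask for alongside the certificate itself, can be computed with Python's standard library alone (the file path in the comment is a placeholder, not the actual certificate file name):

```python
import hashlib
import ssl

def pem_fingerprint(pem_text: str) -> str:
    """Return the SHA-1 fingerprint of a PEM-encoded certificate."""
    # Strip the BEGIN/END markers and base64-decode to the DER bytes
    der = ssl.PEM_cert_to_DER_cert(pem_text)
    digest = hashlib.sha1(der).hexdigest().upper()
    # Group into colon-separated byte pairs, the usual fingerprint notation
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Hypothetical usage -- the path below is an example only:
# with open(r"C:\ProgramData\Marquise Technologies\ICE\certificates\leaf.pem") as f:
#     print(pem_fingerprint(f.read()))
```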

3. Input File Formats Support

3.1. Compositions

Format Name File Extension(s)

Advanced Authoring Format


D-Cinema Composition Playlist


EDL CMX 3600


Final Cut Pro


Final Cut Pro X


IMF Composition Playlist


3.2. Camera Files

Camera Models Format Names



h.264 QT MOV


Alexa (B6W, LF, SXR, SXT, XR, XT, 65)
Alexa Mini LF

ProRes MXF, ProRes QT MOV


Cinema, Pocket Cinema, URSA

Blackmagic RAW
Cinema DNG


EOS 1D / 5D / 7D
C100 / C200 / C300
C500 / C700 / C700FF

h.264 QT MOV
h.264 MXF
Canon XF-AVC



h.264 MP4


DSLR cameras



Varicam 35

Panasonic VRW RAW
Panasonic P2 MXF
h.264 QT MOV


4K Flex
HD Gold

.cine RAW


Epic, Epic Monochrome
Komodo 6K



F65, F55, F5
DSLR cameras

Sony Simple Studio Profile (SStP) MP4
Sony XDCam EX MP4

3.3. Image Sequences

Format Name File Extension(s) Comment





See DPX Support




.tif, .tiff




.sgi, .rgb


.jpg, .jpeg

JPEG2000 (J2K)

.j2k, .j2c



JPEG High Throughput






Photoshop PSD


Import composite image only.







Media Logic Artisan






Weisscam RAW


Canon RAW


Panasonic VRAW


3.3.1. DPX support

8 bit: UYVY, YUVA 4:2:2:4, YUV 4:2:2 b.e. V2, YUV 4:2:2 l.e. V2, YUV 4:4:4 b.e. V2, RGB b.e. V2, RGB l.e. V2, RGBA

10 bit: YUV 4:2:2 b.e. V2, YUV 4:2:2 Cineon b.e., YUV 4:4:4 b.e. V2, RGB Cineon b.e., RGBA

12 bit: RGB b.e., RGB b.e. V2, RGB l.e. V2

16 bit: RGB Cineon b.e.

Also supported: DPX Monochrome, DPX Alpha

b.e. = big endian / l.e. = little endian

3.4. Video

Format Name File Extension(s)

Advanced Systems Format (Windows Media Video)








AS-11 D10






Avid MXF















.mpg, .mpeg











Motion JPEG2000










3.5. Audio

Format Name File Extension(s)

Audio Interchange File Format (AIFF)

.aif, .aiff

Waveform Audio File Format (WAVE)


Free Lossless Audio Codec (FLAC)


Dolby Atmos


Digital Theater Systems (DTS-X)


3.6. Elementary Streams

Format Name File Extension(s)

H.265 (HEVC)


3.7. Subtitles & Captions

Format Name File Extension(s) Comments

Cheetah Closed Captions




Digital Cinema XML




EEG 708 Captions


Scenarist Closed Captions


Screen Electronics PAC


Sony BDN

.xml, .png, .tif

Spruce STL







.xml, .ttml

Apple iTunes Timed Text


Common File Format Timed Text



.xml, .dfxp



Internet Media Subtitles and Captions (IMSC)

.xml, .ttml

Support for v1.0 and v1.0.1. Animation not supported.





3.8. IMF Applications

The following IMF packages can be imported:

  • Application 2, 2e (Studio Profile)

  • Application 4 (Cinema Mezzanine)

  • Application 5 (ACES)

  • IMF ProRes RDD45

  • RDD 59-1 IMF Application DPP (ProRes)

  • RDD 59-2 IMF Application DPP (JPEG2000)

Supported JPEG2000 profiles:

  • Broadcast profiles, up to BPC L7

  • IMF profiles, up to 16 bit

4. Keyboard Shortcuts

Below is a recapitulation of the available Keyboard Shortcuts for ICE.




Go to Project



Go to Composition Analysis



Go to Video Pipeline Diagram



Go to Storyboard



Go to Timeline



Switch Calibrate/Timeline






Escape from current operation





Page Up

Show Timeline


Page Down

Hide Timeline



Exit Project






Show Clip Properties



Take Snapshot



Show/Hide Audio Mixer



Show/Hide Histogram



Show/Hide Vectorscope



Show/Hide Waveform



Show/Hide Audio levels



Show/Hide Bitrate



Show/Hide Luminance Meter



Show/Hide DMCVT Metadata



Show/Hide Dynamic Metadata



Show/Hide PSNR Meter



Show/Hide Zebra



Show/Hide Dynamic Mapping



Change time display



Fit Viewport



Toggle Full Screen Viewport



Center Viewport



Toggle Camera View



Toggle Safe Frames



Toggle Axis View



Show/Hide Color Picker Info



Toggle Color Picker Display Modes



Show/Hide Information



Toggle Red Channel



Toggle Green Channel



Toggle Blue Channel



Toggle Alpha Channel



Toggle Mask



Toggle Active Area



Toggle Mono/Stereo



Toggle Left/Right Display



Toggle Geometry Display



Toggle Dual Viewport



Toggle Single/Dual Viewport



Lock Dual Viewport



Toggle LUT Computer Display



Toggle LUT Mastering Display



Fit Timeline



Fit duration



Fit All durations



Center Timeline



Zoom In Timeline



Zoom Out Timeline



Toggle Clip Handles









Select all Clips



Deselect All Clips



Delete Selected Clips



Ripple Delete Selected Clips



Mark In



Mark Out



Clear Mark In



Clear Mark Out



Set Composition In



Set Composition Out



Clear Mark points



Razor at time marker



Lift marked region



Extract marked region






Toggle Insert/Replace Mode









Select Current Clip



Activate Layer Above



Activate Layer Below



Lock/Unlock Active Layer



Enable/Disable Active layer



Insert Dissolve



Insert from Source



Insert Audio layer



Insert Video Layer



Merge Stereo Video Layers



Paste Layered



Toggle Snap



Trim In -1 frame



Trim In +1 frame



Trim Out -1 frame



Trim Out +1 frame



Slip -1 frame



Slip +1 frame



Slide -1 frame



Slide +1 frame



J Pressed



J Released



K Pressed



K Released



L Pressed



L Released



Play Forwards



Play Backwards



Play Forwards Marked Range



Play Backwards Marked Range



Go to IN point



Go to OUT Point



Previous Frame



Next Frame



Previous 10 Frames



Next 10 frames



Previous 100 Frames



Next 100 Frames



Go to Previous Cut



Go to Next Cut



Go to First Frame of the Composition



Go to Last Frame of the Composition



Toggle Playback Mode



Go to Layer Start



Go to Layer End



Go to Clip Start



Go to Clip End


Alt+Page Up

Previous Audio Cut


Alt+Page Down

Next Audio Cut


Shift+Page Up

Previous Subtitle


Shift+Page Down

Next Subtitle



Go to Next Locator



Go to Previous Locator



Previous Composition Marker



Next Composition Marker



Previous Segment Marker



Next Segment Marker



Add Composition Marker



Add Segment Marker



Enter/Toggle Compare Mode



Exit Compare Mode



Copy Frame to Still Store



Toggle Comparator Visibility



Show/Hide Event Viewer


Alt+Ctrl+Page Up

Go to Previous reel


Alt+Ctrl+Page Down

Go to Next reel

5. Validation

In this section you will find detailed explanations of the different validation tests performed on DCP and IMF packages.

5.1. DCP Validation

5.1.1. Conformity

This test checks that all assets, including the ASSETMAP and the VOLINDEX, follow the same compliance (i.e. strictly either SMPTE or InterOp).

A failure on this test may prevent ingest or playback.

5.1.2. Composition Playlist

Is TKR (Theatre Key Retrieval) Enabled

This test checks that Theatre Key Retrieval (TKR) is enabled for the Composition Playlist(s).

A failure on this test DOES NOT prevent ingest or playback.
Reel & Track Intrinsic Durations Match

This test checks that the reel intrinsic durations and track file intrinsic durations match.

A failure on this test could prevent ingest or playback.
Reel Durations Are Valid

This test checks that the reel durations match the asset durations. A failure on this test could prevent ingest or playback.

Reel Tracks Are Homogeneous

This test checks that all the reels of a Composition Playlist have the same track configuration (i.e. picture/audio/subtitles).

A failure on this test could prevent ingest or playback on some systems.
Reels Have Audio

This test checks that the reels have an audio track.

A failure on this test could prevent ingest or playback.
Reels Have Picture

This test checks that the reels have a picture track.

A failure on this test could prevent ingest or playback.
Track Encryption is Homogeneous

This test checks that the tracks across the reels have a homogeneous encryption status.

A failure on this test should not prevent ingest or playback.

5.1.3. Key Delivery Message

Lifetime Is Valid

This test checks if the lifetime of the Key Delivery Message(s) matches the lifetime of the signing certificates.

A failure on this test will prevent the ingest of the KDM(s).
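The lifetime rule amounts to checking that the KDM validity window is fully contained within the signing certificate's validity window. A minimal sketch of that containment check (the parameter names are illustrative, not fields from the KDM schema):

```python
from datetime import datetime, timezone

def kdm_lifetime_is_valid(kdm_not_before: datetime, kdm_not_after: datetime,
                          cert_not_before: datetime, cert_not_after: datetime) -> bool:
    """True when the KDM validity window lies entirely inside the
    signing certificate's validity window."""
    return cert_not_before <= kdm_not_before and kdm_not_after <= cert_not_after

# Example: a two-week KDM inside a certificate valid for a year.
utc = timezone.utc
print(kdm_lifetime_is_valid(datetime(2024, 6, 1, tzinfo=utc),
                            datetime(2024, 6, 15, tzinfo=utc),
                            datetime(2024, 1, 1, tzinfo=utc),
                            datetime(2025, 1, 1, tzinfo=utc)))  # True
```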
Metadata Is Valid

This test checks if the metadata in the Key Delivery Message(s) is valid.

A failure on this test will prevent the ingest of the KDM(s).
X509 Signature

This test checks for the presence of the X509 digital signature in the Key Delivery Message(s).

The digital signature is mandatory; if it is not present, the KDM(s) are not valid and ingest will fail.

5.1.4. Trackfile(s)

Frame Boundaries Are Valid

This test checks that the first and last frame or sample of the track file can be accessed.

A failure on this test could prevent ingest or playback.
Trackfile(s) last at least one second

This test checks that the trackfiles last at least one second.

A failure on this test could prevent correct playback on some old servers.
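In frame terms, a trackfile of n frames at edit rate r lasts n / r seconds, so the one-second rule is simply n ≥ r. A sketch using exact rational edit rates (the function name is illustrative):

```python
from fractions import Fraction

def lasts_at_least_one_second(frame_count: int, edit_rate: Fraction) -> bool:
    """True when frame_count frames at edit_rate frames/second span >= 1 s."""
    return Fraction(frame_count) >= edit_rate

print(lasts_at_least_one_second(24, Fraction(24)))           # True
print(lasts_at_least_one_second(23, Fraction(24000, 1001)))  # False: < 23.976 fps
```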
Picture Characteristics Are Valid

This test checks that the characteristics of the picture trackfiles are valid.

A failure on this test could prevent ingest or playback.
Picture Data Rate Is Valid

This test checks that the bitrate of the picture trackfiles is valid.

A failure on this test could prevent correct playback.
Sound Characteristics Are Valid

This test checks that the characteristics of the sound trackfiles are valid.

A failure on this test could prevent ingest or playback.

Subtitle Font File Resources Are Valid

This test checks that the font file resources are valid (i.e. must be OTF or TTF format).

A failure on this test could prevent ingest or playback.
Subtitle Font File Sizes Are Valid

This test checks that the font file sizes are valid (i.e. do not exceed 640kB for InterOp).

A failure on this test could prevent ingest or playback.

5.2. IMF Validation

5.2.1. Composition Playlist

Has Extension Properties

This test checks the presence of the application ExtensionProperties in the Composition Playlist(s).

Has Timecode

This test checks the presence of the timecode information inside the Composition Playlist(s).

Edit Rate Is Homogeneous

This test checks if the EditRate of the resource(s) matches the Edit Rate of the Composition Playlist(s).

Has At Least One Content Version ID

This test checks if at least one ContentVersion instance is present in the Composition Playlist(s).

Has At Least One Main Audio Virtual Track

This test checks if at least one MainAudio virtual track is present in the Composition Playlist(s).

Has Exactly One Main Image Virtual Track

This test checks if one and only one Main Image virtual track is present in the Composition Playlist(s).

Has Homogeneous Virtual Track Durations

This test checks that all the virtual tracks in the Composition Playlist(s) have exactly the same duration.
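The homogeneity rule reduces to checking that all virtual track durations are equal. A trivial sketch (names are illustrative, and durations are assumed to be expressed in edit units):

```python
def virtual_track_durations_homogeneous(durations: list[int]) -> bool:
    """True when every virtual track has the same total duration."""
    return len(set(durations)) <= 1

print(virtual_track_durations_homogeneous([86400, 86400, 86400]))  # True
print(virtual_track_durations_homogeneous([86400, 86399]))         # False
```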

Segment Durations Are Valid

This test checks that the duration of segments is valid. The duration constraints depend on the edit rate: for non-integer edit rates, the duration of a segment must be a multiple of 5 frames.
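A sketch of the stated constraint, using an exact rational edit rate to decide whether it is an integer (function and parameter names are illustrative):

```python
from fractions import Fraction

def segment_duration_is_valid(duration_frames: int, edit_rate: Fraction) -> bool:
    """Apply the stated rule: for non-integer edit rates the segment
    duration must be a multiple of 5 frames; integer rates pass as-is."""
    if edit_rate.denominator == 1:  # integer edit rate, e.g. 24, 25, 30
        return True
    return duration_frames % 5 == 0

print(segment_duration_is_valid(100, Fraction(24)))           # True
print(segment_duration_is_valid(101, Fraction(30000, 1001)))  # False
print(segment_duration_is_valid(105, Fraction(30000, 1001)))  # True
```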

5.2.2. Output Profile List

Composition Playlist Exists In Volume

This test checks that the Composition Playlist referenced by the Output Profile List exists in the volume. A warning indicates that the Composition Playlist is external to this volume.

Has Exactly One Preset Macro

This test checks that Simple OPLs contain a single Preset Macro instance.

OPLs with more than one Preset Macro are invalid and result in an ingest failure.
Macro Names Are Unique

This test checks that each macro in the Output Profile List(s) has a unique name.

Duplicate names will result in an ingest failure.
Preset Macro Is Defined

This test checks that Preset Macro(s) use a predefined URI known to the local host.

Unknown URIs will not trigger any preset macro processing.
X509 Signature

This test checks for the presence of the X509 digital signature in the Output Profile Lists(s).

Digital signature is optional, unless the assets are encrypted.
XML Schema

This test checks the XML schema of the Output Profile List file(s).

An error in the XML schema validation may result in an ingest failure.

5.2.3. Trackfile(s)

Essence Boundaries Are Valid

This test checks that the first and last frame/sample of a track file can be accessed.

A failure on this test could prevent ingest or playback.
Image Characteristics

This test checks the image characteristics of the track files against the Application Specification constraints.

A failure on this test may prevent ingest and/or playback on some systems.

5.2.4. Asset Map

Contains All Volume Files

This test checks if all the files contained in the volume are referenced in the ASSETMAP file.

A volume that contains non-referenced files might fail to ingest.

This test checks the presence of the ASSETMAP file at the root directory of the volume.

A package cannot be opened without this file.
File Names

This test checks that all the files listed in the ASSETMAP have names compliant with the restrictions listed in ST 429-9:2014 Annex A. These restrictions imply that path segments and file names contain no characters other than: {a..z, A..Z, 0..9, ., _, -}.

Files whose names do not follow the rules listed in ST 429-9:2014 Annex A may fail to ingest and/or prevent playback.
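The character restriction can be expressed as a short regular expression. A minimal sketch (digits are included in the allowed set, as they appear in typical UUID-based package file names; the full Annex A also constrains details not covered by this check):

```python
import re

# Allowed characters for one path segment or file name:
# letters, digits, '.', '_' and '-'.
_ALLOWED = re.compile(r"^[A-Za-z0-9._-]+$")

def filename_is_compliant(segment: str) -> bool:
    """Check a single path segment or file name against the character rule."""
    return bool(_ALLOWED.match(segment))

print(filename_is_compliant("CPL_0a1b2c3d.xml"))    # True
print(filename_is_compliant("final cut (v2).mxf"))  # False: space and parentheses
```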
File Sizes

This test checks that all the files listed in the ASSETMAP have an on-disk size that exactly matches the declared size.

A file with a size different from the one declared in the ASSETMAP is probably corrupted and may fail to ingest.
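As a sketch, such a size check can be implemented by walking the Chunk entries of the ASSETMAP and comparing each declared Length with the on-disk size. Element names (ChunkList/Chunk/Path/Length) follow SMPTE ST 429-9; XML namespaces are deliberately ignored and chunks without a declared Length are skipped, so this is an illustration rather than a full implementation:

```python
import os
import xml.etree.ElementTree as ET

def _local(tag: str) -> str:
    """Strip any XML namespace prefix from a tag name."""
    return tag.rsplit("}", 1)[-1]

def check_assetmap_sizes(assetmap_path: str) -> list[str]:
    """Return the relative paths of files whose on-disk size differs
    from the Length declared in the ASSETMAP."""
    root_dir = os.path.dirname(os.path.abspath(assetmap_path))
    mismatches = []
    for elem in ET.parse(assetmap_path).iter():
        if _local(elem.tag) != "Chunk":
            continue
        path = length = None
        for child in elem:
            if _local(child.tag) == "Path":
                path = child.text
            elif _local(child.tag) == "Length":
                length = int(child.text)
        if path is None or length is None:
            continue  # skip chunks that do not declare a size
        if os.path.getsize(os.path.join(root_dir, path)) != length:
            mismatches.append(path)
    return mismatches
```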

6. Third-party Licenses

Marquise Technologies integrates third-party technology into its software solutions. Some of the technology listed below is available as options only.

The end-user is responsible for complying with any and all third-party terms that apply. Access to third-party software is provided for convenience only, and Marquise Technologies has no responsibility for such third-party software.


6.1. ARRIRAW

The ARRIRAW SDK is provided under the 2020 ARRI Partner Program Agreement.

Copyright © ARRI AG.

6.2. Avid DNxHD & DNxHR

Avid formats are provided by the Avid Media Toolkit SDK under Avid DNxHD Unified License Agreement.

Copyright © Avid Technology, Inc.

6.3. Dolby Technologies

Dolby Vision™ is provided under the Dolby Vision Content Solutions System Agreement.

Copyright: Dolby, Dolby Vision, and the double-D symbol are registered trademarks of Dolby Laboratories Licensing Corporation.

Confidential unpublished works. Copyright 2018-2020 Dolby Laboratories. All rights reserved.

6.4. DVO tools

DVO tools are provided under Digital Vision OEM License agreement.

Copyright © Digital Vision A.B.

6.5. Nexguard

NexGuard Pre‐release Video and Audio Watermark Embedder SDK is provided under the NexGuard Pre‐release Video and Audio Watermark Embedder Software Development License Agreement.

Copyright © Nagravision S.A.