Thank you for the request!

I will add it to the glossary within a few days if it is pertinent.

A

Additive color

Additive color, also called additive mixing, is a color model that describes how colors combine when mixing lights. With light, the more colors you add, the closer your result gets to bright white. Most of the time we consider three primary colors, Red, Green and Blue, and mixing two of them together gives the secondary colors Yellow, Cyan and Magenta. The opposite model is subtractive mixing, where the more colors you mix together, the darker your result gets, which applies to paint.
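
As a toy sketch (plain Python, channel values in the 0-1 range), additive mixing is literally per-channel addition of the light sources:

```python
# Additive mixing: each light source contributes per channel, and the
# contributions simply add up (clamped at 1.0 for display purposes).
def mix_lights(*colors):
    return tuple(min(1.0, sum(c[i] for c in colors)) for i in range(3))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)

print(mix_lights(red, green))        # (1.0, 1.0, 0.0) -> yellow
print(mix_lights(green, blue))       # (0.0, 1.0, 1.0) -> cyan
print(mix_lights(red, green, blue))  # (1.0, 1.0, 1.0) -> white
```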

Alpha

The alpha is a channel that describes opacity in an image. The three main channels, red, green and blue, make up the colors of the image, while the alpha defines which parts of the image should be opaque, meaning occluding the background, and which parts should be transparent. It is almost always 'tied together' with the color using the Premult and Unpremult operations, to keep the image coherent as a whole.
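
As a minimal sketch of the idea (plain Python on a single pixel, not Nuke's implementation), Premult multiplies the color by the alpha, and Unpremult divides it back:

```python
def premult(rgb, alpha):
    # Multiply each color channel by the alpha: transparent areas go to black.
    return tuple(c * alpha for c in rgb)

def unpremult(rgb, alpha):
    # Reverse the premult; alpha of 0 is left as-is to avoid dividing by zero.
    return rgb if alpha == 0 else tuple(c / alpha for c in rgb)

pixel = (0.8, 0.4, 0.2)
half = premult(pixel, 0.5)
print(half)                  # (0.4, 0.2, 0.1)
print(unpremult(half, 0.5))  # back to (0.8, 0.4, 0.2)
```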

Aperture

The aperture of a camera lens is the size of the hole created by the diaphragm. It is expressed in F-stop or T-stop and can be adjusted on the vast majority of lenses. The wider the hole, the smaller the F-stop (or T-stop) value, and the more light comes through the lens to reach the sensor.
A side effect of adjusting the aperture on a camera is the change in depth of field: the wider you open the aperture, the shallower and more pronounced the depth of field in your image will be.
Conversely, the smaller the aperture, the bigger the F-stop (or T-stop) number, and the less light reaches the sensor, resulting in a darker image as well as a wider depth of field.

Aspect ratio

An aspect ratio can define two things, depending on the context:

- it is basically the shape of an image. A square image has an aspect ratio of 1:1, meaning 1 unit of width by 1 unit of height, while other images can have an aspect ratio of 16:9 for example

- it can be specific to pixels, defining then the pixel aspect ratio, the shape of each pixel in your image. Most pixel aspect ratios are square, 1:1, as the image was shot using a spherical lens, while some images are shot using anamorphic lenses, which will then need a pixel aspect ratio of 2:1, rectangular, to compensate for that

It is important to note that the pixel aspect ratio and the image aspect ratio are completely independent. Despite having some industry standards, one doesn't enforce the other in any way.
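
The relationship can be sketched in a few lines of Python (illustration only): the shape you actually see on screen is the storage aspect ratio multiplied by the pixel aspect ratio.

```python
def display_aspect(width, height, pixel_aspect=1.0):
    # The on-screen shape combines the pixel grid shape with the pixel shape.
    return (width / height) * pixel_aspect

# 1920x1080 with square pixels displays as 16:9.
print(display_aspect(1920, 1080))       # ~1.778
# An anamorphic scan stored nearly square displays much wider with 2:1 pixels.
print(display_aspect(1828, 1556, 2.0))  # ~2.35
```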



B

Background

The background is the part of an image that was the furthest away from the camera when the image was taken. Following the same logic, it is often the element furthest away from your final result in compositing, at the top of the node tree, straight up from your final Write node, making up the trunk of your node tree, also called the B-pipe.

B-pipe

The B-pipe in the node graph is the main branch of your script, which is completely straight and goes from the furthest background element at the top of the script to the Write node that renders your final image at the very bottom. You can picture it as the trunk of your node tree.

The name B-pipe comes from the fact that the standard practice is to keep the B input always pointing up on Merge nodes, which makes for a straight line from the Background to the final Write node.

Black point

The black point has two different meanings:

- In digital color theory, it is literally the point at which a pixel is perfectly black, meaning it has a 0 value in each channel.

- In an image, we refer to the black point as the darkest area in the frame, usually a small area deep in shadows. It is particularly important to take a moment to analyse it in terms of brightness and hue, as changing the black point of an image can have a significant impact on other areas of a frame, even brighter ones, potentially forcing you to restart your grading process entirely.

Bokeh

A bokeh is an element of an image captured with a fairly shallow depth of field, when a bright light facing the sensor sits in an out-of-focus area. When out of focus, bright lights tend to 'eat' outwards at the image and grow in size compared to what is next to them, taking a circular or polygonal shape determined by the number of blades inside the diaphragm. The shallower the depth of field, the bigger the bokeh in an image can be.

Branch

A branch is a small part of a script, or node tree, that is added onto the B-pipe. It is usually an independent element like a character, smoke or an explosion alone, which you then merge with the rest of the scene.

Brightness

The brightness of a pixel or image is literally how bright or dark it is, regardless of its hue (aka color).

When receiving notes from a supervisor or simply discussing your work with a fellow colleague, it is common to make a distinction between brightness and hue, and be asked to increase the brightness while maintaining the same hue, or adjust the hue (sometimes roughly called temperature) without modifying the brightness.



C

CG

CG is an abbreviation for Computer Graphics, which refers to elements originally made digitally, using computers.

In compositing, we usually refer to CG as the elements made by our CG departments, such as Lighting, FX, Animation etc; anything that doesn't come from a real camera and isn't made as part of the compositing process.

For 2D elements that we add in the shot at the comp level, we do not use the term CG as these elements have been shot through a real camera, and therefore we have a different way of working with them.

Channel

A channel is a component of pixel information that contains one specific piece of information. It is a single value per pixel, often containing the red, green, blue or alpha information, but it can contain pretty much whatever you want, as long as it fits into a single number per pixel. When combined, the four red, green, blue and alpha channels are bundled into a layer, always called rgba, sometimes referred to as the main part of the EXR file.

Colorspace

A colorspace can be summed up roughly as a specific range and amount of colors possible to have in an image. It also has specific primaries (in our case Red, Green and Blue). You might have seen names like rec.709, sRGB or Log for example. More precisely, we should speak about color gamut and color gamma rather than colorspaces, but these are more advanced and conceptual notions so feel free to stick to the colorspace idea until you're more comfortable with color, and ready to tackle the abstract notions of gamut and gamma.

Compression

Compression is a way of reducing the size of a file by making compromises on the quality of that file. For example, instead of storing the entire value 3.14159265, you could choose to round it to the third decimal, only writing 3.142. You will lose some precision since you're not writing all the decimals, but you need less space on your disk to store that value. Multiply that by millions or billions of values, and the gain in space can be pretty significant.
There are many compression algorithms available and in research, whether dedicated to images, video, audio etc. They can range widely, from very heavy compression that loses a lot of quality but results in a file size of less than 1% of the original, to algorithms called lossless, where you don't lose any precision (aka quality) but obviously save less space. In the case of VFX, if there are only two algorithms to remember, they would be ZIP1, a lossless compression and the most common in compositing, and DWAA, which aims at dividing the original size by around 4 at the cost of a slight quality loss, but is so well designed that the loss is pretty much invisible to the eye. DWAA is such a great algorithm that lots of companies are starting to compress all the files they can with it to save space and improve reading speed, and it is the algorithm I chose to compress my exercises.
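
To make the lossless idea concrete, here is a small Python sketch using the standard zlib module (the same DEFLATE family of compression that EXR's ZIP modes are built on): redundant data shrinks dramatically, and the round trip is exact.

```python
import zlib

# A very repetitive "scanline": lossless compression thrives on redundancy.
data = bytes([10, 20, 30, 40]) * 1000   # 4000 bytes of raw pixel-ish data
packed = zlib.compress(data, 9)

print(len(data), "->", len(packed))     # 4000 -> a few dozen bytes
assert zlib.decompress(packed) == data  # lossless: the round trip is exact
```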



D

Depth of field

The depth of field is a side effect of the aperture of a camera. It can be quite shallow, meaning only your subject is in focus and everything behind and in front of it will be out of focus (i.e. blurry), or at the opposite quite wide, where the entire image is in focus, regardless of an element's depth compared to your subject.
It is purely a relationship of distance from the camera, so if another object is at the same distance from the camera as your subject, it will have the same focus level.

Despill

The despill operation is the process of removing the color spill of a green screen or blue screen and replacing it with the color that would be there if there were no screen to begin with. To replace the background in a scene, it is common to use green or blue screens behind actors to isolate them from their environment, but since light bounces around all the time, the green or blue color will very often bounce from the screens onto the actors, ground, other buildings etc, on top of the obvious edges of the characters. The despill focuses on that, removing the overly green or blue colors in the scene and replacing them with colors that better match the environment we're putting in place of the screens.
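
As one classic starting point among many (an illustrative Python sketch, not a production setup), a green despill can simply limit the green channel so it never exceeds the average of red and blue:

```python
def despill_green(rgb):
    # 'Average' green limiter: green may not exceed the mean of red and blue.
    r, g, b = rgb
    return (r, min(g, (r + b) / 2.0), b)

spilled = (0.8, 0.75, 0.5)     # a skin tone polluted by green bounce
clean = (0.8, 0.55, 0.5)       # a pixel already below the limit

print(despill_green(spilled))  # green clamped down to 0.65
print(despill_green(clean))    # untouched
```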

⚠️ Please note that the despill process is very often overlooked and underestimated. It deserves even more attention than the keying operation, and should even be started before the keying to have a decent result from the beginning. A lot of people try to fix despill issues through the keying process, not realizing that the key can only improve the alpha, the opacity of an object, and that color issues can only be fixed by the despill.

Diaphragm

The diaphragm of a lens is the element responsible for the control of the aperture. It is composed of multiple curved blades that rotate slightly to create a bigger or smaller hole. The number of blades will influence the aspect of the resulting bokeh in images, as the bokeh will have the same number of sides as the number of blades inside the diaphragm.

Downstream

Downstream can describe a department (compositing, layout, lighting etc) or part of a Nuke script. In both cases, it is a description relative to a reference point:

- In the case of a department, downstream means all the departments that come after the one you are currently talking about, the ones that depend on said department to work. For example, lighting is downstream of layout, as you can't really do much lighting work without at least a bit of layout work being done. Texturing is downstream of modeling, as you need something to be modeled before you can apply a texture to it.

- In the case of a node script, it is a similar story: the nodes described as downstream of a Blur node, for example, are all the nodes below that Blur node, which depend on it.



E

Exposure

The exposure is the characteristic of how bright or dark your overall image is. When you're adjusting your aperture, ISO or shutter angle for example, you're changing the overall brightness of the image, rather than a specific region. Some parts might still be very dark or very bright if you have lots of contrast in your scene, such as a window with bright sunlight seen from inside a building.
We often say to expose for something, meaning to change the exposure of the whole image to prioritize the subject being at the appropriate brightness.



F

F-stop

F-stop is the unit describing the amount of light entering a lens through the diaphragm, and is therefore controlled by the aperture. It is written f/number, where big numbers like f/22 mean a small aperture and a small amount of light entering the lens, while small numbers such as f/1.8 mean a big aperture and a big amount of light entering the lens. F-stop is more of a photography unit; in videography we prefer lenses rated in T-stop, which measures more precisely the amount of light reaching the sensor, rather than just entering the lens. Most of the time the lens, through reflection and absorption, will lose a tiny portion of the entering light and not redirect it all to the sensor.
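
Since the light gathered is roughly proportional to 1/N² (N being the F-stop number), the difference between two apertures can be expressed in 'stops' (illustrative Python, ignoring T-stop transmission losses):

```python
import math

def stops_between(n1, n2):
    # Light gathered is proportional to 1/N^2, and each stop doubles the
    # light, so the difference in stops is log2 of the ratio of the areas.
    return math.log2((n2 / n1) ** 2)

print(stops_between(2.0, 4.0))   # 2.0 -> f/2 gathers 4x the light of f/4
print(stops_between(1.8, 22.0))  # ~7.2 stops from wide open to stopped down
```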

Focal length

The focal length is the distance between the optical center of the lens itself and the camera sensor. It mostly determines the angle of view that is captured by the lens. A 10mm focal length will capture a very wide angle of a scene, while a 300mm focal length will only capture a very narrow angle, usually called a telephoto lens.

An induced effect of the focal length is the perceived perspective of the image. A long focal length will reduce the sensation of distance between elements, everything will appear right behind or in front of each other, while a very short focal length will do the opposite and exaggerate the sensation of distance, making your nose look much bigger than it actually is in real life, for example (I speak from experience...).

Another side effect, which isn't always true this time, is the amount of distortion the lens applies to the image (see lens distortion). When using a short focal length, lines that are straight in real life, such as buildings, will tend to be quite curved around the edges of the image, while a telephoto lens will do a better job at keeping straight lines straight. This is however not always the case, as some lens manufacturers do a great job at preventing or internally correcting this problem, and can offer short focal length lenses with a minimal amount of distortion.

Focus

The focus is the process of choosing which part of the image is going to be sharp. Unless you have a very small aperture (aka a big F-stop number), it is likely that some part of the image will fall out of focus and be blurred, so you adjust the focus distance on your camera or lens to match the distance of your subject, guaranteeing that it will be sharp.
If your subject is moving in depth, you will likely want to adjust the focus distance while recording to make sure the subject stays sharp the entire time; this is called a focus pull.

Footage

Footage is the name for all the material shot through a camera. It can be the plate itself, or other types like witness camera footage, chrome ball or grey ball images, HDRIs, clean plates, isolated smoke or fire elements etc.

Foreground

The foreground is the part of an image that was closest to the camera when the image was taken. Following the same logic, it is often the element closest to your final result in compositing, at the bottom of the node tree, added from the side onto the B-pipe as a separate branch.

Frame

A frame is a synonym for an image.

To frame a subject means to give an intention to its position in the image created through the camera. You can frame your subject in the center or following the rule of thirds for example.

Frame rate

The frame rate is the speed at which all your images are shown one after the other to give the illusion of a video. The standard for cinema is often 24 FPS (frames per second) or close to that, while internet videos tend to be around 30 FPS or a multiple of that (60 FPS, 120 FPS...). It is both an artistic and a technical choice, as it determines whether the recorded motion is close to human eye perception or not.

FX

FX is the name of the VFX department responsible for making all kinds of simulations. It can be a ship navigating through rough waters, fire or smoke simulations, a house being destroyed by a tornado, some peaceful grass flowing in the wind etc.

In compositing we usually take a shortcut and speak of the result of those simulations as 'the FX'.
E.g., "You just received a new FX, have a look and let me know if all is well with it."



G

Gain

The gain is a parameter that controls the white point of an image, aka the pixels having a value of 1 in each channel. It acts as a multiply, in the sense that it will not affect the black point at all (pixels with a value of 0), since anything multiplied by 0 equals 0.

Be careful to avoid having a binary vision of this parameter; even if it is meant to primarily control the white point, and therefore the bright areas of an image, it will still influence the darker areas in a more subtle way. Only the absolute 0 value will not be affected at all; anything close to that will still receive some kind of influence.
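
A minimal Python sketch of the idea (not Nuke's exact Grade math): gain is a straight multiply, so 0 is pinned while everything else scales.

```python
def apply_gain(value, gain):
    # A straight multiply: 0 stays at 0, 1 moves to exactly `gain`.
    return value * gain

for v in (0.0, 0.1, 1.0):
    print(v, "->", apply_gain(v, 2.0))
# 0.0 -> 0.0  (black point untouched)
# 0.1 -> 0.2  (dark values still shift, just less in absolute terms)
# 1.0 -> 2.0  (white point doubled)
```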

Gamma

The gamma is technically a transfer function used to convert images between linear and non-linear color spaces.

In a more practical way, and as commonly used within the Grade node, the gamma parameter primarily influences the midtones of your image, basically all areas that are not too bright and not too dark.
However, in the same way that the gain is not strictly limited to bright areas, the gamma is not limited to midtones, and will still influence the bright and dark values.
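
A common simplification of that behaviour (illustrative Python; Nuke's Grade node applies gamma as part of a larger formula) raises the value to the power 1/gamma, which pins 0 and 1 in place and moves the midtones the most:

```python
def apply_gamma(value, gamma):
    # 0 and 1 stay put; gamma > 1 brightens midtones, gamma < 1 darkens them.
    return value ** (1.0 / gamma)

print(apply_gamma(0.0, 2.0))   # 0.0 - black point unchanged
print(apply_gamma(0.25, 2.0))  # 0.5 - midtones move the most
print(apply_gamma(1.0, 2.0))   # 1.0 - white point unchanged
```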

Glow

A glow can have two different meanings depending on what is creating it:

- An atmospheric glow is caused by the diffusion of light from a defined light source due to a high humidity rate in the air or lots of dust for example. It can be seen by the human eye and depends on the environment around you.

- An optical glow is caused by the diffusion and reflection of light inside a camera lens, resulting in a glow uniformly formed around an intense light source directly seen by the camera. Light will tend to 'bleed' on darker areas of the image directly next to the light source, regardless of the humidity or environment around you.

Grade

A Grade is a node inside Nuke that allows you to change the colors of an image in different ways, via the gain, gamma and lift parameters for example.

To grade is the action of changing the colors of an image.



H

Hue

The hue is the color part of light, and therefore of pixels. When talking about the color of an image, we are actually conflating two quite independent things: the amount of light, and its actual color. A bright light can lack color, and a dim light can have a ton of color.

The hue is therefore the color towards which a pixel or image leans: red, green, blue, yellow, magenta, cyan, warm, cold etc, regardless of how bright it is.

When receiving notes from a supervisor or simply discussing your work with a fellow colleague, it is common to make a distinction between brightness and hue, and be asked to increase the brightness while maintaining the same hue, or adjust the hue (sometimes roughly called temperature) without modifying the brightness.



I

Image (digital)

A digital image is a grid of pixels (usually millions of them), each with its own color and opacity information, encapsulated in a layer (by default and implied, the RGBA one, for Red, Green, Blue, Alpha), and written down inside a file that can have many different extensions, meaning ways of writing it down, for example JPEG, PNG or EXR files.
An image also has its own implied colorspace. It is rarely written down inside the metadata, and is not interchangeable, so you must know what you are starting with, and which colorspace it is in at all times when you're working on images.

ISO (sensor sensitivity)

The sensor sensitivity, expressed in ISO (400 ISO, 3200 ISO...), determines how much amplification the camera gives to the light when it's hitting the sensor. Therefore it can be considered as an 'artificial' way to brighten the image, as it doesn't provide more light to the sensor, it only amplifies the electric signal made by the sensor when receiving light.

It is very handy in dark or night scenes where there is no other way for the camera to make an image bright enough, but it should only be used as a last resort, as it has a drawback: it will create more noise in your image (see signal-to-noise ratio and noise).

On high-end cameras the noise produced remains subtle across the whole sensitivity range, and is fairly minimal until you reach around 6400 ISO or even more, but on the majority of cameras, setting your ISO to 6400 will already have a significant impact on the noise, and a range below 3200 ISO is preferred.
Obviously this is very specific to each camera, and you should check what the manufacturer indicates as the 'native ISO'. The only rule of thumb is to remember to increase it only when your other options are not possible (see aperture and shutter speed); a higher ISO basically means a bit more noise in your image.



J



K

Kernel

A kernel in compositing is the 'texture' a camera lens can have, due to dust or hair stuck on it for example. While most cinema lenses are maintained in pristine condition to avoid any visual defect, sometimes they are not perfectly clean and will have some dust on them.

While that dust will be mostly invisible, it can appear in some cases, and particularly inside the bokeh created by the lens, where you will see the whole texture fairly sharp.

In compositing, when we defocus CG renders to match the proper depth of field, we often plug in a kernel image (a lens texture, if you will) to simulate that same effect and add another level of richness and breakup to the overly clean CG renders.

Key-to-fill ratio

The key-to-fill ratio is simply the ratio of the intensity of the key light over the fill light. It can be roughly considered as the contrast of the lighting: a high ratio means a very strong key light with little fill light, while a lower ratio, aka lower contrast, means a fairly present fill light and a key light that is not much stronger.

Key frame

A key frame is an image that holds particular importance, whether for the shot, with a composition that is particularly meaningful, or in animation, where you define precisely what position all elements should have, instead of letting them be automatically interpolated by curves.

Keying

Keying is the compositing process of creating a mask of opacity deduced directly from an input image, often stored in the alpha channel.

It is most commonly deduced using criteria such as luminance, saturation, red, green or blue hue, or even red, green or blue channels.
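
A toy luminance key can be sketched in a few lines of Python (illustration only; real keyers are far more sophisticated). The `low` and `high` thresholds here are made-up parameters for the example:

```python
def luma_key(rgb, low=0.3, high=0.7):
    # Rec.709 luminance weights; alpha ramps from 0 below `low`
    # to 1 above `high`.
    r, g, b = rgb
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return min(1.0, max(0.0, (luma - low) / (high - low)))

print(luma_key((0.1, 0.1, 0.1)))  # 0.0 - dark pixel keyed out
print(luma_key((0.9, 0.9, 0.9)))  # 1.0 - bright pixel fully opaque
```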

⚠️ Please note that in the case of a green or blue screen, keying is only part of the whole process to make an image look good, and should even be considered secondary to the despill process.



L

Layer

A layer can have two different meanings depending on the context:

- A layer is an element inside your script that is added over or under another element, e.g. a smoke element on top of a roof

- A layer is an image contained either in a multi-part EXR file or inside your script itself, e.g. the RGBA layer or the Normals layer

Lens (anamorphic)

An anamorphic lens, as opposed to a spherical lens, distorts the image in different proportions vertically and horizontally. The light that goes through the lens is squeezed horizontally before reaching the sensor, usually by a factor of 2. What this means is that while you capture a scene with a very wide aspect ratio, the image captured by the sensor is almost square, and everything looks squished horizontally. You then need to apply a change in the pixel aspect ratio, stretching each pixel wider than it usually is, to compensate for that effect and see the scene as it is in real life.

Lens (spherical)

A spherical lens is a common type of lens that distorts the image onto the camera sensor in the same proportions as it sees it. In other words, the light going through the lens is distorted in equal amounts vertically and horizontally. The shape of the resulting image matches the shape of the sensor capturing it, as opposed to an anamorphic lens.

Lens (telephoto)

A telephoto lens is a camera lens with a long focal length, therefore showing a narrow field of view of the scene you're capturing, but enabling you to see objects at a very long distance.

E.g., A 200mm lens

Lens distortion

Distortion of the footage caused by the glass elements inside the lens.

The lens distortion can be very strong or imperceptible depending on the lens used with the camera, with a general 'rule' that wide lenses tend to have more pronounced distortion than telephoto ones, due to the inherent constraints of wide-angle lenses.

It is a side effect, something the lens manufacturer doesn't intend to put in there, but is mostly a compromise they have to make to the benefit of another feature.

Lift

The lift parameter within a Grade node aims at controlling the black point of an image, aka the pixels having a value of 0 in each channel. It acts as a rotation around 1, in the sense that it will not affect the white point at all (pixels with a value of 1). In the same way as the gain and gamma parameters, it is not strictly limited to its intended range, and will also affect midtones and brighter values in a more subtle way.
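
The 'rotation around 1' can be sketched with a simple formula (illustrative Python; Nuke's actual Grade math combines lift with the other parameters): out = value + lift × (1 − value), so 0 moves to the lift value while 1 stays fixed.

```python
def apply_lift(value, lift):
    # Pivot around 1: black (0) is raised to `lift`, white (1) is untouched.
    return value + lift * (1.0 - value)

print(apply_lift(0.0, 0.1))  # 0.1   - black point raised
print(apply_lift(0.5, 0.1))  # ~0.55 - midtones nudged less
print(apply_lift(1.0, 0.1))  # 1.0   - white point untouched
```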



M

Merge

A Merge is a type of node in Nuke used to combine two elements together, using different operations such as over, multiply, plus etc.

To merge elements simply means combining them into a single image, one added onto the other, or put over it, or multiplied by it etc.

Multi-part EXR

A multi-part EXR is an OpenEXR file (extension .exr) that contains multiple layers inside, meaning multiple, independent images within the same file, as opposed to a single-part EXR which only contains one layer, the RGBA one.



N

Node

A node is a tool within Nuke that enables you to read, modify or write an image in almost any way you want. There are many nodes available, each with their own function, from the most basic ones - e.g. the Blur node to... blur an image - to advanced ones like the IBKGizmo, which performs many operations internally to help you key uneven green/blue screens (see keying).

Nodes tree

The nodes tree is the entire assembly of the nodes you have created and connected together in your script to make your final image. When laid out properly, it should resemble an actual tree, with the trunk being your B-pipe and the branches being all your elements (also called layers in this context) put on top of each other, from top to bottom, going from the background to the foreground.

Node graph

The node graph is an interface within Nuke, the tab that allows you to see your nodes tree and open the properties of any node, as well as manipulate nodes and create new ones. It is one of the three main tabs you will work in the most (see Viewer and Properties for the other two).

Noise

Camera noise:
Digital imperfection in the image caused by disturbances in the electrical signal on the camera sensor. It is a tiny pattern of one to a few pixels grouped randomly in small pockets, changing slightly in intensity and color between every frame.

It is mostly visible in dark areas of the image because in brighter areas there is so much light that it hides the noise; the fill or key light is so much brighter that the difference the noise makes becomes invisible, even though technically it is still there. When you have much less light, at night for example, the noise becomes more visible since the light doesn't overpower it.

In compositing, we take a shortcut and often speak about it using the term 'grain', which is a similar aspect of the image, but for analog cameras, referring to the literal grain of the film stock used, which captures light using tiny particles of silver. Even though the cause is different, it creates a similar artifact in the images.


Nuke node:
Noise is the name of a Nuke node used to create a noise pattern whose size, frequency etc you can control, to add spatial or temporal variation to another element.

E.g., "Add a noise to this mask to break up the edges"



O

Opacity

The opacity of an object is its capacity to hide what's behind it.

In compositing, we use a channel other than the Red, Green and Blue ones to store the opacity information, often called the alpha channel, or just alpha.

When elements are combined, those with an alpha channel will have some degree of opacity and therefore hide the elements put under them. If an element doesn't have any alpha, or its alpha is entirely black, the foreground element will still appear, but will lack any kind of opacity, resulting in a 'plus' operation, where both elements are simply added onto each other.

Over (Merge operation)

The merge operation 'over', simply put, can be seen as a combination of the 'stencil' operation followed by the 'plus' operation.

The 'stencil' uses the alpha of the foreground element to turn the background element black, effectively hiding it, and the 'plus' operation then adds the foreground to the background, regardless of the alpha.

Provided you properly premultiplied your foreground, you will end up with the illusion that the foreground is in front of the background, hiding it where it should.

In reality, the math behind it is slightly different, as the exact operation is an inverse multiplication by the alpha rather than a stencil, but if that's a bit too deep for your taste, the analogy works well enough.
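
The standard formula can be written out for one premultiplied pixel (a plain Python sketch): comp = A + B × (1 − alpha_A).

```python
def over(fg_rgb, fg_alpha, bg_rgb):
    # 'A over B' on premultiplied color: the background only shows through
    # where the foreground alpha leaves room for it.
    return tuple(a + b * (1.0 - fg_alpha) for a, b in zip(fg_rgb, bg_rgb))

fg = (0.5, 0.0, 0.0)      # premultiplied red element at 50% opacity
bg = (0.0, 0.0, 1.0)      # solid blue background

print(over(fg, 0.5, bg))  # (0.5, 0.0, 0.5) - a half-and-half mix
print(over(fg, 0.0, bg))  # (0.5, 0.0, 1.0) - with zero alpha, it's a plus
```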



P

Pane

A detachable and re-sizeable sub-window inside Nuke, used to create a particular layout that fits your needs. A pane can contain many tabs but only shows one at a time, and can be floating in a separate window or docked.

All the panes together make up the Nuke workspace.

Pixel

A pixel can be seen as a little box within your image; it is the smallest component of the image to contain individual color information. It can be square or rectangular (see aspect ratio), and once thousands of them are laid out in a grid pattern, they form your image. The more pixels, the bigger the resolution of your image.

These could be called virtual pixels, existing within images, while you also have physical pixels, which are basically the same thing, small boxes containing individual color information, but which physically emit light using three or four tiny lights; together they make up your monitor or screen.

Plate

Shot material that is going to be used as the main source for a movie. It is the base our VFX work will be built upon.

E.g., actors in front of a green screen, landscape drone shot etc.

Primaries

Base colors that are mixed together to create all the other possible color variants within a colorspace.

In VFX, whether you are in the ACES color system or not, our primaries are always Red, Green and Blue, the primary colors. Anything else, such as yellow, pink or brown for example, is simply Red, Green and Blue mixed in different amounts to give the desired result.

⚠️ Please note that for editors and colorists, primaries is a term commonly used to talk about the first, global color corrections applied to a frame.



Q



R

Resolution

The resolution of an image describes how many pixels it contains. The more pixels, the bigger the resolution. HD or 4K, for example, are standards defining a specific resolution. More complete names would be HD 1080p and 4K 2160p, which specify the number of rows of pixels, aka the height of the image. In this case HD has 1080 pixels in height and 4K has 2160. It is good practice to use the complete name, as 4K can actually cover different, although similar, resolutions and aspect ratios.

⚠️ Please note that the 'p' at the end of 4K 2160p or HD 1080p does not mean pixel but progressive. This is a video term (see scanning method). Confusing, I know. Since it only applies to video, we can disregard it when working with images, yet it is still good to know what it means if you're brave enough to dig into it.



S

Script

A script is the file Nuke saves your work in. We often refer to the nodes tree as the script as well, as it is the biggest part of the save file.

​Sensor (camera)

A camera sensor is the element that captures the light and converts it in a digital signal. It is the part that truly turns light into a digital image.

Some common characteristics of a sensor are its size (especially useful to know for camera tracking and depth of field), its range of sensitivity (ISO, which plays a part in how much digital noise there can be in an image) and its resolution (as it will determine the original resolution of your digital images, as well as their original aspect ratio).

​Single-part EXR

A single-part EXR is an OpenEXR file (extension .exr) which contains only a single layer, the RGBA layer, as opposed to multi-part EXR files.



T

Tab

A tab is a selectable part of a pane, containing a specific Nuke tool, such as the Viewer or the Node graph for example.

​Tracking

The process of following part of an image through time in order to capture its movement, relative to the image coordinates.

Regular tracking can be summed up as following one or multiple points in the image, while planar tracking is used to track a surface and its perspective.
Both are dedicated to that point or surface, and will only work on that specific part of the frame. If you attempt to use tracking data on an area of the frame other than the one it was captured from, you expose yourself to a lot of trouble, as the movement is likely different.

For actual tracking of the camera itself, which then lets you use a 3D space to place many objects accurately and get their precise movement, see camera tracking.

​Tracking (camera)

The operation of digitally recovering the motion of the camera that captured a shot, and creating a virtual camera that matches the real one as closely as possible.
It is an advanced 3D technique, usually done by a specialized department other than compositing (often called Matchmove) and only done when truly needed, due to its complex and time-consuming nature.

Some characteristics of the real camera are crucial to carry out this process successfully, such as the sensor size, the focal length used, a map of the distortion applied by the lens, an accurate scale of the real objects being shot, as well as some 3D geometry to act as a position reference in 3D space.
The process is split into two tasks: the first one is the tracking itself, meaning following points in the image and finding out where each point will be in the next or previous image; then comes the solving, which, based on the tracked points and the parallax between them, computes the position of the camera and its movement.

⚠️ A lot of senior compositing artists do not master this skill, for the good reason that they don't have access to all of this information, and they don't need it since the CG (virtual) camera is provided to them.
I would therefore strongly advise you to spend time learning this process only once you are very comfortable with regular tracking methods, as you will likely do it less often, and the material required to make a proper one is rarely provided.

​Tracking (planar)

Similar to tracking, except that instead of capturing the movement of points, planar tracking looks at a whole surface and tracks that.

Please note that it differs from a regular track with 4 points, at the corners of a phone screen for example, as planar tracking will also look at the middle of the screen since it follows the whole surface. It is therefore not always a good idea to use it if you have reflections, for example, since they move differently from the screen itself, but the planar tracker will still attempt to reconcile everything it captures, often ending in a bad track.

​T-stop

T-stop is the unit describing the amount of light reaching the sensor after it has passed through a lens. Most of the time the lens, through reflection and absorption, will lose a tiny portion of the light entering it and will not redirect it all to the sensor.

T-stop is mostly a videography unit, as opposed to F-stop, which is enough for photography as it needs a little less precision in the light amount rating. They do however follow the same logic, with small T-numbers meaning lots of light and a big aperture, and big T-numbers meaning little light and a small aperture.
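The relationship between the two units can be sketched with the commonly used formula T-stop = F-stop / √transmittance, where transmittance is the fraction of incoming light the lens actually passes through to the sensor. The function name and example values below are illustrative:

```python
from math import sqrt

# Sketch of the usual F-stop to T-stop relation:
#   T-stop = F-stop / sqrt(transmittance)
# where transmittance is the fraction of light the lens lets through (0-1).
def t_stop(f_stop, transmittance):
    return f_stop / sqrt(transmittance)

# A lens marked f/2.0 that transmits 90% of the light behaves like roughly T2.1:
print(round(t_stop(2.0, 0.90), 2))  # 2.11
```

In other words, a T-stop is always equal to or slightly bigger than the matching F-stop, since no lens transmits 100% of the light.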



U



V

​Viewer

The Viewer is the node inside Nuke which allows you to see your work and the resulting image coming out of your node tree. On top of being a node itself, it has a whole window to show you the image, along with lots of options to modify what you see without having to modify the actual image written on disk, for example quickly changing the exposure to check your work in the shadows or highlights.



W

​White point

The white point has two different meanings:

- In digital color theory, it is literally the point at which a pixel is perfectly white, meaning it has a value of 1 in each channel.

- In an image, we refer to the white point as the brightest area in the frame, usually a light source if it is seen directly by the camera (the sun, for example). It is important to take a moment to analyse its brightness and hue, as any element you add into the frame will have to look like it receives the same light in order to look like it belongs in the scene.
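The first meaning can be illustrated with a tiny check. This is only a sketch, assuming channel values normalized to the 0-1 range:

```python
# Illustrative check of the digital white point: a pixel is perfectly white
# when every channel reaches 1.0 (assuming values normalized 0-1).
def is_white_point(r, g, b):
    return r >= 1.0 and g >= 1.0 and b >= 1.0

print(is_white_point(1.0, 1.0, 1.0))  # True  -> perfect white
print(is_white_point(1.0, 0.8, 1.0))  # False -> one channel below 1
```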



X



Y



Z


