TERM 2 – EXPLORATORY PRACTICES
Assignment – Fire in London
Putting Smoke and adjusting the smoke color with the background:

I tried the YCbCr colorspace to match the shade of the smoke with the background, but I am not sure if this is exactly how it should look.
Matte painting can be done in both Nuke and Photoshop, but for now I am using Photoshop because I still get a little confused with the nodes, and adding more nodes might confuse me further.
But I will definitely try to do it in Nuke as well!

Tried Photoshop's generative AI to generate a broken wall and fire, and these are the results.

Adding smoke in the background
I used a Blur node to make it look like it is far away from the camera.

I was trying the building-collapse effect but I couldn't get it to work on the building in the background. So, I put in ready-made footage of a collapsed building
and added a bullet-hit effect.

Added a bomb-blast effect to make it look like the building blasts after the hit.

Adding smoke in the middle ground and applying a little less blur to show it is closer to the camera than the previous layer.
Here I think I should work on the roto of the front area, which is closest to the camera.
But if more smoke has to come in front, then the smoke and the roto edges would not matter much.

Adding more fire in the closer areas.

Adding the grey and black shades on the wall to show the burning effect.

Adding the fire for the window
And the card for the Photoshop AI-generated image of a broken building


News Channel edit

Node Graph

Final node graph
Link for submission
PU002772YA23/24: Gonzalo’s class | Moodle (arts.ac.uk)
Doubts:

Issues:

Even when the plate is redistorted, the issue is still there.

WEEK 15 – London city fire

Week 14 – London city Fire
Also, please have a look at this link for more footage element resources :)
Use the Retime node – to change the speed of footage
OFlow – manages the speed, with three different options

Kronos: another tool to slow down elements
TimeWarp loop – to make the footage run through the whole timeline even if it has fewer frames than the main footage
T_OverStack
Adding fake motion blur – depending on how close the plate is to the camera
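To get my head around what these retime tools are doing, here is a tiny Python sketch of the frame mapping behind a constant-speed retime and a loop. These are my own hypothetical helpers, not the actual Retime/TimeWarp nodes:

```python
def retimed_frame(out_frame, speed, first=1):
    # Constant-speed retime: which source frame feeds this output frame.
    return first + (out_frame - first) * speed

def looped_frame(frame, first, last):
    # Wrap a frame into [first, last] so a short clip keeps cycling,
    # the way the TimeWarp loop setup keeps a short element running.
    length = last - first + 1
    return first + (frame - first) % length
```

So a 0.5x speed at output frame 11 reads source frame 6, and a 10-frame clip asked for frame 12 loops back to frame 2.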

Using an expression to edit the luminosity

The Fibonacci glow setup adds more detail compared to the normal Glow node
Week 13 – London city part 2
Haze and Depth techniques
First Technique:
smoke LumaKey
Issues with this
We didn't treat the colour in the background of the smoke;
we just kept the alpha
Second Technique:
Luma key Inverted
Third Technique:
smoke LumaKey log
Loglin node
operation (loglin or linlog)
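To understand what the Log2Lin operation is doing under the hood, here is the Cineon-style conversion in Python. I am assuming Nuke's default Log2Lin values (black point 95, white point 685 as 10-bit code values, film gamma 0.6) – a sketch of the maths, not the node itself:

```python
# Assumed Nuke Log2Lin defaults: black 95, white 685 (10-bit), gamma 0.6.
BLACK, WHITE, GAMMA = 95.0, 685.0, 0.6

def log_to_lin(v):
    # v is a normalized log value (10-bit code value / 1023).
    offset = 10 ** ((BLACK - WHITE) * 0.002 / GAMMA)
    return (10 ** ((v * 1023 - WHITE) * 0.002 / GAMMA) - offset) / (1 - offset)
```

The white point maps to linear 1.0 and the black point to 0.0, which is why a LumaKey behaves so differently on log material.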
Fourth technique:
relight smoke
[way to add colors to the smoke]

Adding the average of both plates: the average of the BG onto the fog light
Matte painting – use the Project3D node – project it onto the card
WEEK 12 – London city
Project idea: try to change the look of the footage using stock footage, textures, smoke, etc., to show the city is under attack.
References for before and after of the footage:
What to do – tracking, projections, smoke, fog and no CG
Inspiration:
breakdown: “Battle Los Angeles – VFX Breakdown by Spin VFX (2011) (youtube.com)“
Ideas:


Smoke composition:
Fire, depth, grade of the smoke, motion blur.
Planning your shot Process:
Match move – extract camera, position, cards, placement.
Matte Painting – how to damage the building.
Next week – how to add smoke.
last week – Grade the shot
Lower third and animation of the TV news channel
Camera trackers edit:
Mask – mask alpha.
Focal length – known.
Lens – film back preset – film back size
Settings: Increase number of features
Error max – Min length – 3
Delete unsolved.
Delete rejected.
Use a point to define the ground, as we do not have a lot of information about the ground.
Let's pretend a point is the ground (take one point on the ground).
Define the scale distance.
Select the last point and define the axis.
WEEK 11 – Particles

CG Machine

WEEK 10 copycat
Intrinsically both node types are identical; both a GROUP and a GIZMO are the very definition of "nodes within nodes".
The key difference between the two is that a GIZMO is a referenced node stored externally to the Nuke script, while a GROUP node is saved inside the Nuke script and can also be saved as a ToolSet.

Glint node – poster line effect
Two ways to add content to a group:
1: Manage user knobs, add labels
2: Open the pencil above, then drag and drop; we can also drag and drop from one node to another if the pencil is turned on
What is a CopyCat?
CopyCat is a machine learning node which allows artists to train their own networks
WEEK 10 – Matte painting
Gizmos and Tools
WEEK 9 – Motion Vectors
The Smart Vector toolset allows you to work on one frame in a sequence and then use motion vector information to accurately propagate work throughout the rest of the sequence. The vectors are generated in the SmartVector node and then piped into the VectorDistort node to warp the work.
Smart vector – export write
WEEK 8 – Remove Markers
UV maps are just a representation of axes in Nuke
We use U and V instead of X and Y simply to avoid confusing the letters
U and V represent X and Y in Nuke; we don't need to actually import the 3D model into Nuke, because we can copy the information into a UV map – generally the red and green channels show X and Y, while blue would be Z, which is not available in a UV map
It is a 2D unwrapped version of a 3D model, so if you import a texture into Nuke, you can directly map it onto the UV map of a 3D object
We can work on textures in compositing with the help of a UV map – a kind of cheat – instead of re-rendering the textures
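The cheat can be sketched in plain Python: treat the red/green channels as U/V lookup coordinates into a texture. This is only an illustration of the idea (nearest-neighbour sampling, hypothetical helper name), not the real STMap node:

```python
def stmap_sample(texture, u, v):
    # The red (U) and green (V) channels say which texture pixel lands
    # at this output pixel. Nearest-neighbour sampling for simplicity.
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)  # U picks the column (X)
    y = min(int(v * h), h - 1)  # V picks the row (Y)
    return texture[y][x]

# A tiny 2x2 "texture" of brightness values.
texture = [[10, 20],
           [30, 40]]
```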
Removing markers – clean up

Regrain – to put the grain back on the plate
Plate grain – To just show where the grain is
Normalised Grain – flat grain pattern
Adapted grain – takes the shadows, midtones and highlights of the grain
Image Frequency Separation
Preserve and Reuse Details During Cleanup
Using RotoPaint on the low-frequency and high-frequency layers:
Low frequency – the light of the face – light and shadows
High frequency – the details of the face

Interactive_Light_Patch – to remove markers
Using the transform of the node – divide and multiply, then take the average of the transformed area
Curve tool – to measure the change in light
We copy the maximum luma values from the CurveTool node – copy link
(crop to the area which has the biggest changes in light)
Go to the Grade node – gain – paste absolute
CurveTool – copy the minimum values as links – paste them on the lift of the Grade node
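The maths behind linking CurveTool's measured min/max into a Grade's lift/gain is just a per-frame remap. A small sketch, with a hypothetical helper name:

```python
def deflicker(value, frame_min, frame_max, target_min, target_max):
    # Remap a pixel so this frame's measured min/max land on fixed targets --
    # the idea behind pasting CurveTool's min/max into a Grade's lift/gain.
    t = (value - frame_min) / (frame_max - frame_min)
    return target_min + t * (target_max - target_min)
```

Whatever the light does frame to frame, the measured extremes always land on the same target values, so the flicker evens out.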
Home Work – Green screen removal

Green Screen Space Man

WEEK 7 GREEN SCREEN
Clamp node: can be used to keep channel values between 0 and 1, even when we multiply two channels.
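Since the Clamp idea is just min/max maths, a one-line sketch (hypothetical helper, defaults matching the node's usual 0–1 range):

```python
def clamp(v, lo=0.0, hi=1.0):
    # Keep a channel value inside [lo, hi], like the Clamp node's defaults.
    return max(lo, min(hi, v))
```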
Denoise: the plate gets soft – you lose some details

ADDITIVE KEY is not an actual keyer but an image blending technique used
to recover fine detail in difficult areas such as wispy hair, soft transparencies and motion blur.
If combined with a good matte, edge treatment and despill, it can create very good results to better integrate your plate with the BG.
The additive key manipulates the lightness values in the fine details and ADDS them to the BG under the foreground plate.
To remove a green screen – first we remove the spill
Keylight (green channel = 1) – Merge (minus) – (A-B) – (Keylight – original footage)
– got the despill – saturate it down to zero
– roto out the areas you don't want the green despill removed from

– By subtracting the original from the despill plate you get the luminosity lost by removing the green
– Desaturate those values to remove unwanted colours
– Multiply the BG over those luminosity values
– Plus it over the original to add the BG into the transparencies
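The additive-key steps above can be sketched per pixel in Python. These are my own simplified helpers (real plates are whole images, and I'm assuming Rec.709 weights for the desaturation):

```python
def desaturate(rgb):
    # Rec.709 luma; strips colour so only lightness detail survives.
    r, g, b = rgb
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return (y, y, y)

def additive_key(despilled, original, bg):
    # Follow the class steps: the luminosity lost when despilling is
    # desaturated, multiplied by the background, then added back on top.
    lost = tuple(o - d for o, d in zip(original, despilled))
    detail = desaturate(lost)
    return tuple(d + dt * b for d, dt, b in zip(despilled, detail, bg))
```

Where nothing was despilled the pixel is untouched; where green was removed, the lost brightness comes back tinted by the background.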
AddMix – used to adjust the alpha at the edges
Week 6 Green Screen
Hue correction –
to change one colour and replace it with another.
Rsup – red suppress, used to manage the alpha
Keyer – we can select different channels and different areas of the same image – using the keyer operation channels.
YPbPr is the analog video signal carried by component video cable in consumer electronics.
The green cable carries Y, the blue cable carries PB and the red cable carries PR.
R = HUE: Hue literally means colour
G = Saturation: Saturation pertains to the amount of white light mixed with a hue.
B = Luminance: Luminance is a measure describing the perceived brightness of a colour
IBK stands for Image Based Keyer
It operates with a subtractive, or difference, methodology.
It is one of the best keyers in NUKE for getting detail out of fine hair and severely motion-blurred edges
IBKcolour – removes the subject and keeps the background
IBKgizmo –

The Colorspace node is used to convert a linear channel to HSV channels
HSV stands for hue saturation value.
R = HUE: Hue literally means colour.
G = Saturation: Saturation pertains to the amount of white light mixed with a hue.
B = Luminance: Luminance is a measure describing the perceived brightness of a colour
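Python's standard colorsys module follows the same mapping, which makes it easy to check a colour by hand:

```python
import colorsys

# Pure red sits at hue 0 with full saturation and value; pure green
# sits a third of the way round the hue wheel.
red_hsv = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
green_hsv = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)
```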
HueCorrect – can be used to replace a colour with another one.
HueShift – can be used to shift one colour with another.
Keyer operation – can help select one particular channel of the image and convert it to an alpha
Nodes used to remove Green screen:

Green despill – is not about removing the green colour of the background; it's about removing the green spill on the character.
Shot – Keylight – green channel to 1 – Merge difference (A - B) – removes the green spill – then we desaturate it – and merge back the shot (A + B) – we have removed the spill
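A simpler stand-in for this despill chain is the classic average-limit despill, where green may not exceed the mean of red and blue. This is not the Keylight setup above, just the common maths behind green-spill suppression:

```python
def green_despill(rgb):
    # Average-limit despill: green is clamped to the mean of red and blue,
    # so spill-contaminated pixels lose their green cast while neutral
    # pixels pass through unchanged.
    r, g, b = rgb
    limit = (r + b) / 2.0
    return (r, min(g, limit), b)
```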
Week 5 CG Nuke Machine
If the shot is approved by all the departments, then it is the compositor's responsibility to match the rendered comp with the surroundings – that is the line between the departments.
Compositor's role – adding highlights, midtones, defocus, shadows, grain, lens distortion
LensDistortion works only in NukeX – but if you export it as an STMap you can use it in other versions too.
To match whites between images –
Grade node – white point – select pixel – press Ctrl
Select gain – pick the colour pixel you want into the Grade node's gain
To match black points –
Grade node – black point – select area
Select lift – select area
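The white/black point matching relies on the Grade node mapping blackpoint to lift and whitepoint to gain. A sketch of that mapping (simplified – the real node has more knobs):

```python
def grade(v, blackpoint=0.0, whitepoint=1.0, lift=0.0, gain=1.0):
    # Grade-style remap: blackpoint lands on lift, whitepoint lands on gain,
    # everything in between is scaled linearly.
    a = (gain - lift) / (whitepoint - blackpoint)
    return a * (v - blackpoint) + lift
```

Sampling the BG's white into gain and the BG's black into lift is exactly what pins the two plates to the same range.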
Grade – whites and blacks
Color Correct
Hue correct – to select a particular colour and desaturate it – Ctrl + Alt – select the particular colour that needs to be desaturated
UV pass – retexture

WEEK 4 – Multipass composition

ID map – to change different elements in the comp footage
KeyID node – to select one colour from the ID
Normal pass – the red, green and blue channel info can be used to relight the footage
AO pass – to produce contact shadows
Motion vector – to adjust motion blur
Multiply comp –
subtractive method:
remove the channel from the original footage – that is, subtract it – using a Shuffle node – grade
AO – merge Multiply
Other channels – merge plus
Unpremultiplying channels helps keep the edges intact and unaffected
Position Pass –
The position pass represents the 3d scene in color values. Red is the x coordinate, green is y, blue is z. You can use a Position ToPoints node to visualize it in the 3d space. A depth channel stores the distance from a point in 3d to the position of the camera (and will be moving with the camera)
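The difference is easy to see in code: depth is just the distance from the position-pass sample (world XYZ stored in RGB) to the camera, so it changes when the camera moves:

```python
import math

def depth_from_position(position, camera):
    # Depth of a position-pass sample: Euclidean distance from its world
    # XYZ (stored in RGB) to the camera position.
    return math.dist(position, camera)
```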
WEEK 3 – Multipass composition
What is multipass compositing? It allows you to break down your shot into separate elements, such as diffuse, specular, and reflection passes, and work on each one individually. This not only gives you more control over the final look of your shot but also saves time and effort by eliminating the need for re-rendering the entire sequence.
Precomp – creating an intermediate sequence inside the project, rendered from the project
Select write node – then press R
You need not compute the lens distortion every time – it slows the processing down.
Apply LensDistortion only to the elements being added or the patches being removed, then redistort them back before merging into the whole footage.
Types of Projection
– Patch Projection

– Coverage Projections

– Nested Projection

-Combination of all above
.abc – alembic file import – to import animated footage in 3D form
Projection Artefacts:
Smearing: An image smearing or streaking across the glancing angle of the object.
Doubling: When a matte painting projects onto multiple geometries.
Resolution: Not enough resolution in the painting because the camera gets too close.
ModelBuilder – create card – create mesh – bake mesh
Rendering passes of the CG
The objective of unbuilding the beauty and rebuilding it via the passes (channels) is to have higher control over every pass and grade it according to the background image.
When we build a CG beauty we simply combine information from highlights, midtones and shadows. Pass naming differs depending on the render engine.
This information is contained in the passes (channels). We use the Shuffle node to
call the channel we need out of the EXR.
The rule to follow to rebuild the cg asset is ALWAYS plus lights and ALWAYS multiply
shadows
Diffuse +
Indirect +
Spec +
Reflections +
AO *
Shadows *
Every pass should be graded separately
A final grade can be applied to the entire asset if needed
+ stands for Merge (plus)
* stands for Merge (multiply)
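The rebuild rule above, written out for a single pixel (a sketch with made-up pass values – real passes are whole images):

```python
def rebuild_beauty(diffuse, indirect, spec, reflections, ao, shadows):
    # The class rule: ALWAYS plus the light passes,
    # ALWAYS multiply the shadow/occlusion passes.
    return (diffuse + indirect + spec + reflections) * ao * shadows
```

Grading any one input before the sum/multiply is exactly what "every pass should be graded separately" means.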


Ctrl + Shift – replace the node with another node
Shake the node – to disconnect it
HOMEWORK

In the third technique, is the Project3D node projecting every frame onto the card frame by frame, since we have not used a FrameHold?

Issue resolved: the rotopaint should be rechecked for all the frames



Why do we use the Project3D node in Nuke?
One of the handy features of the Project3D node in Nuke is that we can quickly texture a geometry through the use of camera projection
WEEK 2
Nuke 3D Matchmove advance
Different techniques of Camera projection
Lens distortion
detect the grid – solve – put the lens distortion
Technique 1:
3d patch – project on mm geo
HOLDING THE PATCH

PCG – analyse sequence – track points – (the viewer should be connected to the source node) –
double-click the camera – delete rejected points – vertex selection – add group – change vertex selection to node selection – select group – bake selected group to mesh
The Project3D node projects an input image through a camera onto the 3D object.

The blue camera is used to project the texture taken from the roto node onto the card –
to tell the scene which camera this texture is projected from
Technique 2
HOLD THE CAMERA

Use Replace for the roto

TECHNIQUE 3:
3d patch – project UV

MergeMat – the same as Merge, but in 3D
ModelBuilder and PointCloudGenerator can only be found in NukeX
Project 3D node
WEEK 1
Nuke 3D Matchmove
Avoiding reflections in the footage (water reflections, mirrors) by using roto gives better tracking of a 3D scene
To see the tracking points – mask – source alpha – settings – preview
Camera node
– mask – mask alpha prevents the masked areas from being tracked

attaching the mask input to mask out the reflection areas
settings
– number of features – 500
– refine feature location – check the box
– preview features – check the box
Press Track
Press Solve
Error – 0.9 – to solve the errors
Panel – auto tracks – error max – press F on the graph – reduce max error – delete unsolved – delete rejected – the solve error has been removed

camera tracker – export – scene (in option) – create












Adding grid


Point cloud generator – analyse sequence
track points





Select the group and then bake the selected group to mesh


Lens distortion – detect – solve – undistort
Use the same lens distortion that you used to detect the distortion:
copy-paste the same LensDistortion node and set it to Redistort
WEEK 10
Real scenarios in production
1: Review your work
2: Stages in post-production
Temps/Postviz – rough versions of how a shot is going to look
Trailers
Finals
QC –
Software project management
1: Google docs and sheets – free
2: Ftrack –
you can get all the project details, latest updates and all the information and data related to the project.

3: SG ShotGrid
ShotGrid is project management software owned by Autodesk, primarily used for visual effects and animation project management in television and movie production and in video game development.

Production Roles
Line producer
VFX producer – makes sure the project is completed on time and keeps the standard high.
VFX Dailies
An every-morning meeting
to make sure everything is working the right way.
Tech Check before publishing a version.
Desh Daily reviews
Small cinema dailies
Big cinema dailies – big-budget films (Avengers, FF)
WEEK 9
2D cleanup
P – shortcut for RotoPaint
RotoPaint – clone – hold Ctrl – paint


Shift – to change tool radius size
Hardness value



Check the box to see the alpha channel on the RotoPaint node –
2 ways to separate the paint from the RotoPaint tool


Regraining the patches
Using keymix

Grain matching – ensuring that all of the elements in a composite, including those which were digitally generated, look like they were shot on the same film stock
*Start with denoise
Nuke 3D Match move

press TAB – 3D/2D
Customization
Nuke – Preferences – viewer handles – 3D Navigation – Maya
Properties – Right click – Pixel Analyzer
Workspace – Save Workspace
preferences – startup – select customized Workspace
Scanline renders –

When connected to a Scene node, the ScanlineRender node renders all the objects and lights connected to that scene from the perspective of the Camera connected to the cam input (or a default camera if no cam input exists). The rendered 2D image is then passed along to the next node in the compositing tree, and you can use the result as an input to other nodes in the script.
WEEK 8
Channels and depth channels
Bounding box –

Everything you measure in Nuke is in pixels, not m/cm

Using Defocus, not Blur, to emulate a shot
Blur gives a smudgy look, whereas Defocus is more of a filmic shot look
How to control the midground, foreground and background depth/blur

ZDefocus – used to control depth channel of an image

Changing the Focal point to decide which area you want to focus
Changing the depth of field of the focal point to adjust the area of focus

To decide how smooth you want the progression to be


WEEK 7
Match moving – point tracking
2D TRACKER
This is a 2D tracker that allows you to extract animation data from the position, rotation, and size of an image. Using expressions, you can apply the data directly to transform and match-move another element. Or you can invert the values of the data and apply them to the original element – again through expressions – to stabilize the image.
This is the general process for tracking an image:
1. Connect a Tracker node to the image you want to track.
2. Use auto-tracking for simple tracks or place tracking anchors on features at keyframes in the image.
3. Calculate the tracking data.
4. Choose the tracking operation you want to perform: stabilize, match-move, etc.
Tracker – data describing something moving in X and Y
2D track – tracking in X and Y
2.5D track – getting the illusion of tracking in perspective
3D – match move (matching movement in X, Y and Z)
The nature of the shot decides which tracker we can use
2 ways to extract the data in X and Y:
1: add the tracker data to the plate that needs to be tracked and change the transform setting of the Tracker

2: create another node from the Tracker, copying the data into a match-move node.
Match move – 1 point
copying the transform data

4 point tracking
Match move 4 points – if the plate has slight variations in rotation and scale
copying all the data – T, R, S

Test Planar Tracking
Filtering and Concatenation
Filtering Algorithm
Image filtering algorithms exist to help us define what happens to our pixel values as we transform & process our images. In this article, we’ll break down all the image filters we have available, and what they’re doing to our images in Nuke.
Ctrl – pixel value
Ctrl + shift – average value of the pixels
Choosing a Filtering Algorithm – https://learn.foundry.com/nuke/content/comp_environment/transforming_elements/filtering_algorithm_2d.html
The Reformat node comes with a filter option which helps us decide how we are going to work with the pixels of the plate


Transform is a super useful node that deals with translation, rotation, and scale as well as tracking, warping, and motion blur.

CONCATENATION
Concatenation is the ability to perform one single mathematical calculation across several tools in the Transform family. This single calculation (or filter) allows us to retain as much detail as possible.
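Concatenation is essentially matrix multiplication: several transforms collapse into one matrix, so the image is only filtered once. A small homogeneous-2D sketch (my own hypothetical helper names):

```python
def mat_mul(a, b):
    # 3x3 matrix product (row-major lists) -- "concatenating" two transforms.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_xform(m, x, y):
    # Apply a homogeneous 2D transform to a point.
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

translate = [[1, 0, 10], [0, 1, 5], [0, 0, 1]]
scale = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]

# One combined matrix = one filtering pass, same result as two passes.
combined = mat_mul(scale, translate)
```

The combined matrix lands points exactly where the two separate transforms would, which is why a concatenating chain of Transform nodes keeps more detail than filtering at every step.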

WEEK 6
Merging and color matching
WEEK 5
Learning about Rotoscopy
The key workflow of rotoscopy is that we want to generate an alpha matte from the Roto node and use that alpha to cut out our footage or plate.
We can do that in three ways:
1:
Streamlined Workflow
- Normal node stacking
- Input the plate that you want to roto into the background input of the Roto node
- Draw the shapes and keep the output set to alpha
- Set the premult to alpha as well
- That way we have both the alpha channel of the Roto node and the RGB channels of the original plate on the same stream, but not yet multiplied
2:
Branched way
- Here you have the roto on a different branch
- You create a copy of the alpha channel for the original plate
- We use a Copy node, which copies the alpha from the Roto node onto the B input (the original plate)
- And then we use a Premult to multiply the alpha channel we got into the original plate
3:
Using merge node
- Using a Merge node instead of copying
- It does not actually copy the alpha channel; instead it just cuts the alpha channel from the A stream into the B stream
1: Streamlined Workflow
Benefits – you have a streamlined workflow
Limits – you cannot transform the node easily; it will affect the RGB channels as well
Premult: Is used to multiply the alpha channel with the rgb
2:Branched way
Benefits: it gives a bit more broken-down workflow of our script
Basic nodes for Rotoscoping
- Roto Node: The fundamental tool for rotoscoping. You can add paths matching on the object. To create a Roto node, press O.
- RotoPaint Node: This node allows for painting directly on the footage, facilitating the rotoscoping process. Using multiple RotoPaint nodes can be heavier compared to Roto nodes.
- Shuffle Node: The Shuffle node is vital for manipulating input and output channels. It is particularly useful when you need to remove an alpha channel. Simply disconnect the alpha channel from the output layer.
- Remove Node: This node empowers you to select the specific channel you want to remove.
I/O Points: Adjacent to the play button, the I/O button allows you to set intro and outro points within your composition.

Calculating motion blur in Nuke: 24 frames per second at a 1/60 shutter speed > 24/60 = 0.4
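That calculation as a small helper, so other fps/shutter combinations are easy to check:

```python
def shutter_fraction(fps, shutter_denominator):
    # Fraction of the frame interval the shutter stays open:
    # (1/shutter) / (1/fps) = fps / shutter_denominator.
    return fps / shutter_denominator
```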

Employ nodes like ‘EdgeDetect’ to create an outline of alpha, ‘Dilate’ to reduce or expand the alpha, and ‘FilterErode’ for alpha manipulation using filters. ‘Erode’ comes in handy when fine-tuning the outline with minimal changes (1-2 pixels).
Ensure that alpha values remain within the 0 to 1 range for accurate compositing.

Understanding why smaller shapes are important and how to use them the right way. The shape I used for the T-shirt part didn't work out very well – I changed its shape every other frame, so it turned into a very jiggly kind of animation. So I used a smaller shape at the shoulder and made sure it followed the track without changing from its original shape. As a result, I realised that this is much easier and more accurate, even if we have to use a lot of shapes.

WEEK 4
Intro to Digital Compositing and Nuke software interface
There are different types of software used for compositing and editing.
- Adobe After Effects – Compositing and visual effects
- Nuke – Node-based compositing and visual effects
- Blender – Open source software for 3D creation
- Autodesk Flame – 3D compositing
- Natron – Cross-platform
Adobe After effects – Layer based software
Nuke – Node based software
Basic shortcuts in Nuke –
- Edit-Project settings: check the settings before start working
- F: focus selected node
- Middle mouse: move
- Space: maximize selected workspace
- (select video)H: fit height
- Select node + number: show selected node on view
- Tab: floating search bar
- B: Create a blur node
- Read(R): create a node which can import components
- Write(W): create a node which render outcome
- O: create roto
Extension and preference
- .nk: default extension of Nuke
- .exr: native extension for nuke
- (Nuke doesn’t work well with MP4/QuickTime > it easily crashes)
- C:\Users\23008828.nuke : preference of Nuke, can copy to another person

Full-frame processing: play the video at full frame
File name > videoname.####.exr = starts from videoname.0000.exr
Press the Render button > you can set the frame range (input, global, custom) to render
TRAVEL
Short video describing your travel to understand the angles of videography, lights, focus and its importance in the footage.
WEEK 3
Cinematography Foundation II
Mise-en-scene
: The arrangement of everything that appears in the frame, such as actors, lighting, location, set design, props, costume, etc. A French term that means “placing on stage”. It is the filmmaker’s design of every shot, every sequence and ultimately the film. It is the tool filmmakers use to convey meaning and mood.

One of the most important aspects of the mise-en-scene is composition. Composition is the creative and purposeful placement of every element in the shot.

Every composition decision starts with the dimensions and proportions of your frame, which is called the aspect ratio. 2.35:1 is the standard for most films.

Every frame is basically two-dimensional, with the X and Y axes, but the Z axis can be suggested as the depth of the frame. Emphasizing the Z axis creates depth and makes the screen less boring and flat.

Rule of thirds is one of the conventions used to create a harmonious composition. It helps balance the frame and people pay attention to the size of the elements in the frame.

High angle: when the camera is placed above the eye line, it makes the subject look powerless and weak.

Low angle: when the camera is placed below the eye line, it makes the subject look confident, powerful and in control.

The four attributes of light
Intensity: The brightness of the light. The measurement establishes our exposure settings in camera which we set in F-stops.
Quality: Soft/Hard light. Shadows are key to understanding quality of light.

Hard light will produce harsh shadows with defined edges and little fall-off. The source of it could be the sun, candles or light bulbs.

Soft light spreads very quickly and produces soft-edged shadows. The source of it could be overcast skies or windows.
Three factors can control either hard or soft.
- Size of source: small sources make hard light and big sources make softer light.
- Distance from subject: the further away from the subject, the harder the light. The closer to the subject, the softer the light.
- Filtering: filtering light through a diffuser or bouncing it off a surface makes it soft.

Angle: The foundational lighting setup used in filmmaking is three-point lighting.
The convention tells us to put a main light (Key light) 45-degree angle towards the subject, a second auxiliary light (Fill light) at 45-degree angle opposite the key light and a third light behind the subject (Backlight).
Colour: Every light has its own native colour temperature or colour cast.

Colour grading is the process of manipulating the colour and contrast of images or video to achieve a stylistic look. There are colour schemes for colour grading:
- Complementary colours

- Analogous colours

- Triadic colours

- Monochromatic colours

Moodboard – Time
Designing a moodboard related to the concept of ‘time’.
For this I chose some of my favourite pictures from my camera roll that represent TIME for me.
1: Depicting a long way to go.
2: Timeless architecture.
3: A pigeon running for his next hunt for survival.
4: Just showing that time waits for no one.
5: Every moment, making sure the sky is the limit for any work you do.

WEEK 2
Cinematography Foundation
Cinematography: The word comes from Greek and
means “drawing movement”.
Cinema is the art of moving images, yet it could be said to be simply made up of a series of still images – a story told in pictures.


Exposure is a vital element of a picture. If I take a picture with the correct exposure, making some under- or over-exposed pictures from it is easy; however, the opposite is quite difficult.

The correct exposure value (EV) is a combination of three elements: ISO, Aperture, Shutter speed

ISO stands for “International Organization for Standardization”.
ISO refers to the sensitivity of film to light, and it is measured in numbers. The higher the number, the more sensitive to light. At first it was only for film cameras, but it was later adopted by digital cameras. As a high ISO makes a picture grainy, we try to use the lowest ISO value possible for the lighting conditions of the scene.

Aperture is known as F-value or F-stop. A large f-stop equals a small aperture opening, which lets through a small amount of light. A small f-stop equals a large aperture, which lets through a large amount of light.

We can adjust depth of field using the aperture. A large f-stop increases the depth of field (deep DoF) and a small f-stop decreases the depth of field (shallow DoF).


Shutter speed is the length of time the camera shutter is open, exposing light onto the camera sensor. It controls motion in the image. The slower the shutter speed, the brighter and more blurred the picture.
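The three elements combine into a single exposure value (EV). A sketch of the standard EV formula at ISO 100 – the constants are the usual photographic convention, not something from class:

```python
import math

def ev100(f_stop, shutter_seconds, iso=100):
    # Exposure value referenced to ISO 100: log2(N^2 / t),
    # shifted down one stop for every doubling of ISO.
    return math.log2(f_stop ** 2 / shutter_seconds) - math.log2(iso / 100)
```

f/1.0 at 1 second and ISO 100 is EV 0 by definition; doubling the ISO lowers the required EV by exactly one stop.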

We can check the metadata of a picture on a camera or phone, including ISO, shutter speed, and so on. Even if I forget the camera settings, I can check them later.

Frame rate is the number of individual frames that the camera captures per second. 24fps/25fps is the standard for video on the web or film, as those look the most natural to the human eye. For live TV or sports, 30fps is common. 60fps, 120fps and higher frame rates are used for recording video to be played back in slow motion.
- NTSC (National Television System Committee): 30fps, used in North America and Japan.
- PAL (Phase Alternating Line): 25fps, used in Europe and South East Asia.

Angle of view is the extent of the scene captured by the camera. Changing the focal length changes the angle of view: the shorter the focal length, the wider the angle of view. If I shoot everything at 8mm, it is hard to focus on a particular thing (appropriate for landscapes). A 50mm lens is similar to the normal view seen by the human eye.
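The focal-length/angle relationship can be checked with the standard pinhole formula (assuming a 36 mm full-frame sensor width):

```python
import math

def angle_of_view(focal_mm, sensor_mm=36.0):
    # Horizontal angle of view in degrees for a given focal length;
    # 36 mm assumes a full-frame sensor width.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))
```

An 8 mm lens gives a far wider angle than 50 mm, matching the note above.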
Digital file format is a type of standardized encoding to store digital video data on a computer. The file format container includes video and audio data.
- MP4: Most digital devices and platforms support it, which makes it the most universal video format around. MP4 is appropriate for non-professional videos such as Instagram, YouTube, etc. (not for film).
- MOV: Developed by Apple, it is the video format specifically designed for QuickTime Player. However, it takes up significantly more space as its quality is usually very high.
- AVI: One of the most versatile video formats. However, with such high quality, AVI files are large, which makes them better suited to storage and archival than to streaming or download (DNxHR/DNxHD).
- ProRes: A high-quality, lossy video compression format developed by Apple. It is used in post-production and supports up to 8K. It is widely used as a final delivery format for HD broadcast files in commercials, features, Blu-ray and streaming.