Final Major Project

Week 32

Initial Idea:

Hyperreality
Tracking, rotoscoping, compositing
(FX, compositing, CGI)

References
Experience hyperreality 2050

Starting from a room, walking out with headsets of hyper reality.
Games to play on the way.
https://youtu.be/mM4r2o-9kN4?si=kNFIeGs8VOnsQPbz

Green screen footage to show the virtual world
Unreal Engine to replace the green screen with a futuristic city
How is the building going to be constructed?
Add dinosaurs as CGI in a museum; cooking on an induction hob with a recipe

CAR REFERENCE FOR THE HYPER REALITY

FEEDBACK : Moodboard, storyboard, presentation
(Dreaming hyperrealistic)

Environment: what is the city going to look like? Narratives.
Check: A day made of glass

Modelling a flying car/Air Taxi

I really like modelling; I think with time I am getting better at adding specific details, something I didn't really care about before. It's the details that make any model look better: the more you focus on the small details, the better it looks.
For this model I want it to look exactly like it does in the picture. So far I have worked on making it symmetrical and adding every single detail as it is in the picture.
For the tyres, I followed this tutorial on making a honeycomb pattern on spheres.

I have been working on this model because I love modelling, and I feel my modelling skills have improved quite a bit, so I should use that in my FMP.

I am thinking of painting this in Maya itself, as it doesn't need a lot of detailed textures. My plan was also to add some advertising content to the car, so I might do that in Substance Painter or Photoshop.

This is what it looks like in the render with physical sky lights.

Adding details to the interior. This is something I am doing for the first time; I have never worked on interiors, so it is quite fun and new to learn. I need to learn how to assign multiple colours to a single geometry in Maya; I used to do it in Blender.

This is how it looks after adding all the materials in Maya itself. Quite satisfied, but the main task is to blend it with the real footage. I also got a perfect Canary Wharf HDRI from Poly Haven, as I am going to use footage shot there.

I have collected some footage to test the track and the placement of the car. I am planning to do the tracking in Nuke and then export the camera to Maya.

I didn't really work on tracking last term, so I think it is going to be difficult to track the many scenes I have planned according to the script.
Currently working on resolving tracking issues with the footage.

I have been working on tracking the scene above, and I realised I need markers on the floor to help track the plane closest to the camera. At present the camera solve can't track the floor, probably because of its texture.
I could also have shot the scene somewhere with some kind of markers on the floor.

I tried tracking another piece of footage, and the solve did a pretty good job.
It is still not like the previous shot, but I might use this as one of the shots.

Exporting the tracked scene to Maya.

Checking lights in the render.

It blends quite well with the background.
I can use this as one of the scenes in my story.
My next task is to present this as a taxi in the story. I recorded a few shots on the street in the morning and tracked them so that I can place the car in them.

I was looking for spots where I could find more markers or points to track on the road, and this is one of the best spots I found. It even has "TAXIS ONLY" written on it, which perfectly matches my requirements. Those road markings helped me get a good camera track.
I will be adding the character, the actor of the story, into this. I might shoot the character in front of a green screen, which will help me add more overlays.

I am working on rendering out the elements, and for some reason the render is missing some of the frames, so I rendered the missing frames separately.

Missing frames
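A quick way to find the frames the render skipped is to compare the files on disk against the expected frame range. This is a minimal sketch in plain Python; the `car_beauty.####.exr` naming is just an assumed example, not my actual file names.

```python
import re

def find_missing_frames(filenames, first, last):
    """Return the frame numbers in [first, last] with no rendered file."""
    rendered = set()
    for name in filenames:
        m = re.search(r"\.(\d+)\.exr$", name)  # e.g. car_beauty.0042.exr
        if m:
            rendered.add(int(m.group(1)))
    return [f for f in range(first, last + 1) if f not in rendered]

# Hypothetical render output with frames 3 and 5 missing:
files = ["car_beauty.0001.exr", "car_beauty.0002.exr", "car_beauty.0004.exr"]
print(find_missing_frames(files, 1, 5))  # [3, 5]
```

The resulting list can then be used as the frame range for the separate re-render.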

When doing the comp, I realised the glass material is so transparent that the cars look empty. I will reduce the transparency and also change the colour; maybe that works. The second thing I need to change is the rotation angle of the car: at the start it doesn't look as if it is going in the right direction.
After changing these two things, I will re-render it.

I am also adding another taxi to the footage above. I downloaded it from Sketchfab; the credit is below.

“VTOL Air Taxi” (https://skfb.ly/oTzIW) by Annelida is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/).


Tracking one more scene where I want a close-up shot of the cab while someone is wearing the headset.
I shot the scene from two angles and tried tracking both of them.

I am not quite satisfied with this track, as the solve is in the wrong orientation. Putting a car in it might be difficult, and I don't know if the orientation of the car can be changed to compensate. So I tracked another scene.

This is what the second shot looks like. If I can get this working well, I might reshoot it with the actor in it.

A good decision is to check a test render before rendering out the whole sequence on the render farm. This close-up scene had me going back and forth between Maya and Nuke through a number of renders. The lights seemed to work well when I rendered the previous scene, but I realised they weren't working here, so I fixed the lights and the shadows.

This is how it looks after n number of trials.
I think the size of the car and the shadow-catcher plane still have to be fixed.
The scene I tracked has a much smaller scale than the car I built, so I scaled the car down to match. I had been wondering why the lights weren't working well; it is only now that I realised it was because I scaled the whole car down.
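The scaling problem makes sense once you remember the inverse-square law: received light falls off with the square of distance, so uniformly scaling a scene (and the lights in it) changes how bright everything looks. A small sketch of the compensation, with made-up numbers:

```python
def compensated_intensity(intensity, scale_factor):
    """
    Keep the apparent brightness of a light constant when the whole
    scene is uniformly scaled. Received light ~ intensity / distance**2,
    so if every distance is multiplied by scale_factor, the intensity
    must be multiplied by scale_factor**2 to compensate.
    """
    return intensity * scale_factor ** 2

# Scaling the car and its lights down to a tenth of their size:
print(compensated_intensity(100.0, 0.1))
```

Renderers with physically based lights apply this falloff automatically, which is why the lights looked wrong as soon as the geometry was rescaled.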

The size looks much better. The interior light and render quality still need to be fixed.

That's the final rendered comp.

I want the third scene from the perspective of a viewer seated inside a building. I took a shot with the help of a friend who works in Canary Wharf.

This is one of the frames of the shot. I could track it, but I am not satisfied enough with the track: it has given me points too far from the camera, and I want the car to pass close to this glass.
The challenge is to roto the spaces in between and put the glass effect back over the car. Another issue is that the scene was shot in the evening and has lights, so I will have to make sure I get their reflections on the car.

For this scene I scaled the car up, while for the other scenes I had reduced its size.

I used a card with a Constant, and with the help of a Merge node I could reduce the transparency.
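Under the hood, Nuke's Merge (over) is just `A + B * (1 - alpha_A)` on premultiplied images, which is why scaling the card's alpha down makes it more see-through. A tiny single-pixel illustration (the colour values are made up):

```python
def over(a_rgb, a_alpha, b_rgb):
    """Premultiplied 'over' merge: result = A + B * (1 - alpha_A)."""
    return tuple(a + b * (1.0 - a_alpha) for a, b in zip(a_rgb, b_rgb))

def with_opacity(rgb, alpha, opacity):
    """Scale a premultiplied element's colour and alpha together."""
    return tuple(c * opacity for c in rgb), alpha * opacity

# A fully opaque glass card knocked down to 40% opacity over a grey plate:
glass_rgb, glass_alpha = with_opacity((0.8, 0.9, 1.0), 1.0, 0.4)
print(over(glass_rgb, glass_alpha, (0.2, 0.2, 0.2)))
```

Multiplying colour and alpha together (rather than alpha alone) keeps the element premultiplied, so the over operation still composites cleanly.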

Working on renders from the render farm.

I couldn't find some of the files even though the monitor showed the renders as completed. This has happened before; I rendered the missing frames separately to fix it.

FEEDBACK:
Try reflections on the car from outside, using an image as an overlay.
Could put in a CG character (I feel that would affect the script theme, as I am not using any other CG characters).
From Paul: add one more car flying in from behind.

FEEDBACK:
Match the grain, whites and blacks with the background.
Fix the shadows using samples and by rendering single frames.
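Matching blacks and whites is essentially a linear grade: mapping the CG element's black/white points onto the plate's, the same idea as a Nuke Grade node. A sketch of the maths with hypothetical sample values:

```python
def match_grade(cg_black, cg_white, plate_black, plate_white):
    """
    Gain and lift for a linear grade y = gain * x + lift that maps the
    CG element's black point onto the plate's black and its white point
    onto the plate's white.
    """
    gain = (plate_white - plate_black) / (cg_white - cg_black)
    lift = plate_black - gain * cg_black
    return gain, lift

# CG rendered over the full 0..1 range; plate blacks sit at 0.02,
# whites at 0.9 (example numbers, not measured from my plate):
gain, lift = match_grade(0.0, 1.0, 0.02, 0.9)
print(gain, lift)
```

Grain is the remaining part: after the levels match, regrained CG (e.g. a grain node sampled from the plate) stops the element looking too clean.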

I want information panels to pop up on the building in my first piece of footage. I am using After Effects to create the panel effects, as it is easier there.

This is not clearly visible. I will need to find some references for displaying these panels.
I took some references from YouTube videos, which I have added to my mood board.

This is the final look. I still need to render the cars and do some colour grading.
I had to reshoot the footage of booking a cab, as I had planned to put a green-screen shot of the actor into it. But as Manos suggested shooting it as live footage, I reshot it and am now working on its track again.

This is how the new shot looks like.
Camera tracking used to be quite challenging for me, but in this project, I’ve finally overcome that hurdle. I feel more confident now, and I’m no longer afraid to experiment with it. One of the most crucial aspects was setting up the camera accurately for tracking, including alpha masking, camera lens details, and lens distortion. Working on these details made me realize how essential it is to follow this process carefully to achieve the best results.
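The lens-distortion step matters because trackers assume straight lines stay straight, while real lenses bend them. A one-parameter radial model, similar in spirit to what Nuke's LensDistortion node fits (illustrative only, in normalised image coordinates):

```python
def distort(x, y, k1):
    """
    Simple radial distortion: each point is pushed outward (k1 > 0,
    barrel) or pulled inward (k1 < 0, pincushion) in proportion to
    its squared distance from the image centre.
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# A point halfway to the edge under mild barrel distortion:
print(distort(0.5, 0.0, k1=0.1))
```

Solving on undistorted footage (and re-applying the distortion afterwards) is what keeps the CG locked to the plate at the frame edges.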

After track results.

I have been working on two different scenes together. For the cab closeup view from the building, I have to show the reflection of the building in the car.

It is a bit challenging; I have never done this before. The roto was not working on this, maybe because of the merge operation I had done. I also tried premultiplying it, but that didn't work. The last option was to render it out and then do the roto.

After-roto results, with EdgeBlur added. This is how it looks, but I am not quite satisfied with the result. I think it needs more colour correction, or the merge blending modes need fixing.

Got it! Now I think the reflection looks good. The reflection on the front glass is still missing, and adding that would look good too.
I think this is good enough for this shot. I will still be adding some more cars to the same scene.

I am almost finished with the scene above. I added a few more cars in the background and some holograms to make it look more hyperrealistic. For all the scenes I worked on, I feel the most important post-production task is to add more and more holograms.
The close-up shot above is from the perspective of a person sitting in the building looking at the car. I added a hologram on the table.

Adding some holograms.

Adding more cars in the background.

The booking-a-taxi scene is also done.
Added the holograms.
Fixed the defocus.
Blurred the background when the car arrives.
Rendered the shadow and the car separately.

For the scene above, I added a few more cars. Also, following the feedback, the cars should appear at different sizes; I fixed that and it looks much better now.
I don't really like these panels, as they don't match my other two shots. The scene also needs more holographic information to make it look hyperrealistic.

I have thought of rewriting the whole story around the scenes I have created, and I am working on the script as well. I plan to show a world of 2050 that is very different from today: people have stopped using their phones and have adopted virtual headsets.

I downloaded a few holograms and am looking to add them to this scene.
Footage from: Hologram Videos – Pixabay

This is how it turned out. I found footage of exactly the concept I had in mind: I wanted some connecting dots to the building, so I added a card to the tracked scene and placed the footage on it. The cards seem to be having issues, as I placed two of them.
This scene also has slightly different lighting from the other two, so I need to work on grading. For the flying cars, I need to add a blur effect to the cars behind the front ones, and I wasn't sure how I was going to do that.
Then I remembered we used a ZDefocus node when Gonzalo taught us, and that resolved the issue.
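The idea behind ZDefocus is that blur size is driven by each pixel's distance from the focal plane in the depth channel. A toy version of that mapping (the exact curve Nuke uses differs, and the numbers here are made up):

```python
def blur_size(depth, focal_plane, depth_of_field, max_size):
    """
    Depth-driven blur radius: pixels within depth_of_field of the
    focal plane stay sharp, and blur grows with distance beyond that,
    clamped to max_size (in pixels).
    """
    offset = abs(depth - focal_plane) - depth_of_field
    return min(max_size, max(0.0, offset))

# Front car in focus; cars further back get progressively more blur:
for depth in (10.0, 12.0, 30.0):
    print(depth, blur_size(depth, focal_plane=10.0, depth_of_field=1.0, max_size=15.0))
```

With a rendered depth pass for the CG cars, one node handles all of them at once instead of blurring each car by hand.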

Node graph so far.

Feedback for the renders:

Make the car very tiny: re-render it.
The blue lines also need masking.

Remove the unnecessary hologram at the back of the wall.
Change the start and end points of the car coming into the frame.

Centre the hologram on the actor's headset.

Made the recommended changes.
Removed the hologram from the close-up shot of the car, and added a globe hologram downloaded from Pixabay to make it look like someone is working in the office with a headset on.

Hologram globe Video by Bellergy RC from Pixabay

I wanted to add one more scene. This is a random shot I took while walking, and I thought about adding some holograms and information to it. I was a bit worried about whether I could even track the scene, but I think I am getting better at it. The most important parts of tracking, I feel, are the LensDistortion setup and the alpha mask; these have helped me so much.
I took footage from Pixabay to put in as the background for the frame and did the editing in After Effects. The markers on the path also helped smooth the track.

The initial idea started as something that would promote advertising using VFX. But once I started working on it, I felt I wanted to work more on my compositing skills. After working on all these scenes I am much more confident using Nuke. I always liked modelling, but compositing was a difficult task for me, so I worked more on compositing, fixing renders, lights and reflections. Eventually I changed the story to suit the work I wanted to do and practise, rather than being very rigid about one particular script.

The story now goes like this: how our everyday life and habits will change by 2050. It starts by introducing the title on screen, asking "Are you ready for 2050?" It then begins with the actor leaving home with his headset on, leaving his phone behind on the table, showing that it is a time when we no longer use mobile phones. Walking towards the taxi, he can see and access the information panel. People use air taxis, and we see how they look in the air; the last scene shows the perspective of a person in an office conference room who can see the air taxis. To end the story, I used a rewind, as if it was all just imagination, and the story ends with a glitch.

Overall I am satisfied with the renders, but I think I could have done much better at making it a good story with a strong start and end. I decided to keep it short rather than long, so that whatever I worked on would be my best effort. It also helped me to get it reviewed by the professors ahead of time and make the necessary changes; with every round of changes and renders, I could see better results in the scene. Looking at the whole video, I think it still lacks colour grading and correction. It was fun to work on the project and to see it turn into a full video of the concept I visualised.

Here is the link to my final video.

FMP Thesis Proposal

In the first few weeks, our focus was on researching possible dissertation topics and analysing how to structure the dissertation proposal. Nigel also showed us how to construct a properly framed thesis, giving example arguments and an account of the academic nature of the thesis.

Thesis Proposal Structure
A thesis should contain all the following sections:

  • Research title or question
  • Draft Introduction
  • Key words searched.
  • Draft literature review
  • A draft chapter
  • General outline of each chapter
  • Indicative bibliography

In any research proposal, the goal is to present the author’s plan for the research they intend to conduct.
In a research proposal, the author demonstrates how and why their research is relevant to their field. They demonstrate that the work is necessary to achieve the following:

  • Filling a gap in the existing body of research on their subject
  • Underscoring existing research on their subject, and/or
  • Adding new, original knowledge to the academic community’s existing understanding of their subject

Types of research

  • Quantitative research. This type of research provides numerical data, analysed using mathematical methods or statistics. It is empirical research that explains trends, so it is more objective.
  • Qualitative research. This type of research produces findings without using quantitative methods and is more subjective. It explores people's perceptions, feelings and ideas.

Choosing your topic will involve a considerable amount of initial research. Research involves locating key sources.  There are two kinds of sources:

a) Primary or original sources.  These could be a mediaeval manuscript, a poem, a photograph, the records of an institution, a dress, an interview…

b) Secondary or interpretative sources i.e. papers/articles/books written about a subject. Each source/text is written/produced from a perspective and has a specific function. 

Research Resources

Google Scholar:  http://scholar.google.com/

Credo: https://search.credoreference.com/

The E-Library

The library subscribes to numerous electronic databases and journals.

http://www.arts.ac.uk/study-at-ual/library-services/e-library/

Finding E-Books Guide

https://web.microsoftstream.com/video/98d39c15-9b36-4966-813c-4a08dc7fe3a4?list=trending

Topic Research:

Researching starting a business in the VFX industry, the first thing that comes to mind is the heavy budget required to produce a film. There must be ways to improve things without always investing so much. Studying films that were truly memorable yet made on a very low budget would help me become a more efficient artist. The idea is to consider topics related to films that were made on low budgets and turned out to be successful hits.

TOPIC NAME:
Case Studies of Successful Low-Budget VFX Films: Analyzing Notable Indie Films and Lessons from DIY VFX Projects


Critical Practice

Topic for Critical Practice Thesis:
Ethical business practices for sustainable development.

Preparation for first tutorial:

· A research question or title
Ethical business practices for sustainable development.

· An Abstract 50-100 words

This study focuses on integrating sustainable development with profitability for a company. Through a comprehensive review of companies that have adopted sustainability as an integral part of their business, it showcases an ideal framework for promoting long-term sustainability. The study presents effective solutions and examples for companies looking for sustainable initiatives that maintain the moral principles of greener practices.

· A structure indicating chapter headings and subheadings.

Chapter 1: Introduction
objectives
questions
Significance of the study
Chapter 2: Literature review
Definition of ethical business practices
Chapter 3: Theoretical framework
Chapter 4: Methodologies
Chapter 5: Analysis and findings
Chapter 6: Conclusions

· Show development on Literature review.

· A bibliography

Feedback: be more specific about the VFX business and its sustainable practices, and use proper citations for words taken from different papers and websites.
 

Updated Topic:
Ethical business practices in VFX Industry

With so many sources of information for the critical practice topic, I couldn't choose the right one.

Ideas to choose the topic:

1: VFX other than films
Taking this as a topic is, I think, too broad; yet in my opinion, narrowing it to VFX in one particular field other than movies could be too narrow.

2: Measuring the ROI of VFX-intensive advertising campaigns
As a VFX artist, I naturally support VFX-driven ads as the best route to a higher ROI, which would bias me towards only seeing the good sides of VFX in advertising. I would prefer a topic where I can explore the varying fields of the VFX industry.

3: The evolution of virtual production in VFX studios
Virtual production in VFX is something I have always wanted to explore. This topic answers the questions I set for choosing a critical report topic, and it definitely motivates me to research and discover more about it.

Researching the research

Primary

Virtual production, in its early sense, started around 2008 and 2009, with James Cameron's developments on Avatar.
[Pioneering Virtual Production With Real-Time VFX: An Interview With DVP Asa Bailey | ActionVFX]
Information related to virtual production can be found in books published after 2011.

Research gate:
Bao, Yanrong. (2022). Application of Virtual Reality Technology in Film and Television Animation Based on Artificial Intelligence Background. Scientific Programming. 2022. 1-8. 10.1155/2022/2604408.

Proquest E book central

ProQuest Ebook Central – Reader
The VES Handbook of Visual Effects: Industry Standard VFX Practices and Procedures
VES, Jeffrey Okun and VES, Susan Zwerman
Page 330

Copying the Harvard citation:
[Okun, VJ, & Zwerman, VS (eds) 2020, The VES Handbook of Visual Effects : Industry Standard VFX Practices and Procedures, Taylor & Francis Group, Oxford. Available from: ProQuest Ebook Central. [25 May 2024].]
research materials

  • Planning the research
Phase / Task / Deadline

Initial planning and literature review
  • Deciding the topic – 19 May
  • Preliminary literature review – 19 May
  • Refining research question and objectives – 23 May
Proposal development and approval
  • Draft research proposal – 25 May
  • Submit for review – 25 May
  • Revise proposal based on feedback – 28 May
Data collection and analysis
  • Primary research – 01 June
  • Collect books, articles, papers – 02 June
  • Organize and clean data – 02 June
Writing and reporting
  • Draft initial report – 08 June
  • Edit report based on feedback – 16 June
Audio-visual presentation
  • Collecting the images and videos – 13 June
  • Recording the presentation – 13 June
Final review and submission
  • Final editing and proofreading – 18 June
  • Submit the report – 18 June

Timeline for critical report
  • Doing the research

Gathering data from all the different pages:

Virtual Production in Action (acm.org)
VIRTUAL PRODUCTION’S NEXT PHASE.: EBSCOhost (oclc.org)
VIRTUAL PRODUCTION.: EBSCOhost (oclc.org)
ProQuest Ebook Central – Reader
VFX – A New Frontier: The Impact of Innovative Technology on Visual Effects : WestminsterResearch
Movies like “Jurassic Park” (1993) showed the power of CGI
https://youtu.be/8r01mk6F_Pk?si=PVUcOuKfEaiofpWv

Making The Dinosaurs | Jurassic Park Documentary (1993) | Screen Bites

Available at: https://youtu.be/8r01mk6F_Pk?si=uw22cF4NAdDB6AhW

Audio-visual presentation

Taking notes from the documents provided
Create a story.
Set the scenario.
How does it end?
Pose a problem.
Outline the issue.
what resolution is required?
Ask a question
Ask us if we know enough.
How do we learn?

Taking image resources from:

Screen Bites (2024) Making The Dinosaurs | Jurassic Park Documentary (1993)
Available at: https://youtu.be/8r01mk6F_Pk?si=uw22cF4NAdDB6AhW

Bao, Yanrong. (2022). Application of Virtual Reality Technology in Film and Television Animation Based on Artificial Intelligence Background. Scientific Programming. 2022. 1-8. 10.1155/2022/2604408.

Zwerman, S. and Okun, J. A., eds. (2021) The VES Handbook of Virtual Production. New York: Routledge.

Zwerman, S. and Okun, J. A., eds. (2010) The VES Handbook of Visual Effects: Industry Standard VFX Practices and Procedures. Burlington, MA: Focal Press.

Westminster Research (2021) VFX – A New Frontier: The Impact of Innovative Technology on Visual Effects. Available at: https://westminsterresearch.westminster.ac.uk/ (Accessed: 12 June 2024).

ACM. Virtual Production in Action: A Creative Implementation of Expanded Cinematography and Narratives.
Available at: https://dl.acm.org/ (Accessed: 12 June 2024).

Creating a Storyboard:

Also include: the topic you have researched and the benefits of this research.

Draft Storyboard:

Imagine a world in fast forward, from the time virtual production was not used to now, when it has become an integral part of the VFX industry. From the old movies to the very new, unforgettable ones.

The story begins with Jurassic Park
How the technology changed
Use of green screens and postproduction works
Highlighting the issues
Introducing Virtual production
Need to learn and adapt
What is the future?


Collecting all the visuals from different sources.


  • Finishing the research

EP – Group Project

Ideas:

Final Idea:
THE BAG CHOICE

Script:
Minghan: Hi bro, how are you doing?
Mahadev: Good, good.
Is that a plastic bag?
Minghan: Yeahh (pause)
You know, these plastic bags are just so convenient. Who cares about reusable ones?
Mahadev:
I do.
Using reusable bags is so important for the environment.
It's about making small changes for a better future.

Minghan:
Oh come on. What's the harm in using plastic once in a while?

Mahadev: Imagine a world where everyone thought that way. Our cities would turn into chaotic messes, covered in plastic waste. Pollution would choke our air, and wildlife would suffer immensely.

Minghan: That's so true. What have we done?

Mahadev: Small choices can make a big difference. Choosing reusable over plastic isn't just about convenience; it's about preserving the beauty of our planet for generations to come.

Minghan: You're right. I never looked at it that way. Maybe it's time for a change.

We had a great time shooting our script.
I took on the primary role of shooting and directing the video. I was responsible for conceptualising the scenes, setting up the shots, and ensuring the footage captured the intended narrative and quality.

I have to create a green city symbolising a sustainable environment:
Setting up the buildings as meshes to grow plants on.
Trying Blender to create the plant growth.

Didn't turn out as expected.

Getting started with Houdini, so that I can use simulations for the plants if needed.

Quick notes:
G, H – focus the object in the viewport panel
Space + H – focus the object in the viewport panel
T – transform, R – rotate, E – scale
Right side of a node – display flag

POPs – Particle Operators

Minghan has done the green part that I was planning to do,
so I am moving on to the main composition, putting together all the shots.

Editing the intro effect – THE BAG CHOICE.
Edited all the clips together, as per the script and the dialogue.
Sound FX – Pixabay

The task is to put together all the files edited by the others, plus some more effects. It is challenging, as everyone builds their node graph in their own style and some files are missing.
We might have to plan a more efficient way to work together.

Updates from the group chat

Trying to understand and organize the script so that I can work on adding the effects temporarily

Putting together work from everyone's files.

Garbage disposal:

Trying to fit in the building work done by Yifang,
but it is quite difficult, as there are different things in front of the building, like cycles and cars.
I tried putting it onto a card, and a normal placement as well, but it doesn't sit well with the whole scene.
I am still figuring out ways to fix it;
for now I am going ahead with the rest of the things.

Roto-ing the tree

Compositing all the scenes and the edits:

Adding the needed sounds:
Sound FX source – Pixabay

Adding the building to the scene is quite a task. I was doing it in the previous file, but with too many nodes in one script it was taking very long to process and load,
so I have started a new file to put the building in.

Adding smoke to the plastic waste
Tracking the spots and adding the smoke.

This was the final color grade and look.

I did not get the concept of adding a broken building to the scene. Communication among the team members was difficult, as some of them do not understand English.
Overall, I am not at all satisfied with how it turned out. I think we all had better concepts in mind and couldn't express them to each other.
Also, we did not work properly within the given timeframe, so at the end it was difficult to make changes and ask everyone in the group to do specific tasks.

One major issue I faced was working on Minghan's Nuke file. Because of some of the nodes, the script took around 3–4 minutes for every single frame to load, which made it even more difficult to make changes and see how it had been edited.
I added the butterflies as Manos suggested, but not too many: I downloaded some footage and roto-ed out 2–3 butterflies to add to the scene, rather than all of them, to avoid gaudiness.

Breakdown:

TERM 2 PERSONAL PROJECT

Initial idea: to create a commercial advertisement promoting an application, using cool effects, animated transitions, green-screen VFX and some motion graphics to show the use of the app.

I have selected an app, CARUDYOG, which sells second-hand cars. The idea is to promote how the app works and why it is better than other apps, with the help of a short video using different techniques.

SCRIPT:

SHOT 1:
Hey there, car owners!
Forget the hassle of negotiating with just one buyer.
If you're thinking about selling your car and want top dollar, we've got the perfect solution for you!
Introducing CARUDYOG, the ultimate platform that connects you with multiple buyers, all competing to give you the best value for your beloved ride
(transition from one platform to another showing CARudyog)
SHOT 2:
Host:
With CAR UDYOG, selling your car is an easy Job. 
Here's how it works: simply download the app, enter your car's details, and watch as potential buyers start bidding for your vehicle. It's like having your very own car auction right at your fingertips!

With our app, choose the offer that suits you best, and let the selling begin!

These are some of the initial references for the background I want to model. The left-hand images show an old garage and the old ways of doing business. The right-hand image shows the transition to CARUDYOG.

Here's the first render, to analyse the references and the overall size and placement of objects in the scene.

Here's another sample model, with a sample green-screen image placed in to check the render dimensions.

Adding more details
Plants:

Unwrapping the plants

Starting a new file, as the previous one had a lot of glitches because of the animations and no proper grouping of objects.

Initial Modelling
Texturing in substance

Automatic texture import from substance
“https://youtu.be/F3c0BLv2pRk?si=tnYOG1tyKp_JwvaN”

Using Mocap for animations:

Combining character animations from mixamo:
“https://youtu.be/b5vsjV66nAs?si=6YpwwuiLE5v8VOJ_”

Using Mocap retargeting



Working on footage in Nuke, to be added to show what a live car auction would look like:

This was shot by me.
The plan is to add bids across different locations in the shot to show how the auction works. It is for the part of the script that says [watch as potential buyers start bidding for your vehicle. It's like having your very own car auction right at your fingertips!]

This is what the tracked footage looks like. I think this footage will not work for what I have planned.

Checking some footage online so that I can get a good track and add elements to it.
Downloaded footage from Adobe Stock with a free trial:
Royalty-Free Video Stock Footage, HD Video Loops & Clips | Adobe Stock

Tracking the scene and putting 3d elements to check the accuracy.

Using the track to place elements and adjust their attributes.

Working on adding a 3D element here

I had this idea of putting in character cards to show there will be multiple buyers across different locations in the city,
and this is what I came up with:

I did some quick modelling in Blender,
using red and white to echo the logo colours.

Here’s the nodes for the tracking

I can use both 2D tracking and 3D tracking to add more of the characters to the footage. But I think 3D tracking gives me depth information and more options for relocating the cards in the scene.

Using the same scale for all the cards should help show the distance between two different points.
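The depth advantage of a 3D track can be seen with a basic pinhole projection: two cards of the same real size at different depths project to different image sizes and positions, which is what sells the distance between them. A minimal sketch (the focal length and image centre are arbitrary example values):

```python
def project(point, focal, cx, cy):
    """Pinhole projection of a camera-space point (x, y, z) to pixels."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (cx + focal * x / z, cy + focal * y / z)

# Same offset from the camera axis, but at 5 and 10 units depth:
near = project((1.0, 0.0, 5.0), focal=1000.0, cx=960.0, cy=540.0)
far = project((1.0, 0.0, 10.0), focal=1000.0, cx=960.0, cy=540.0)
print(near, far)  # the farther point lands closer to the image centre
```

A 2D track can only slide a card around in screen space; the solved 3D camera applies this projection for me, so equally sized cards automatically read as nearer or farther.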

Following the feedback from Manos, I have thought of changing the characters to different faces, which would make a better distinction.
Below are the references I am looking at for creating the characters' faces.

Characters I modelled

Adding the bidding number plates


Editing the details for the animation and motion graphics that show the workings of the app,
and adding them to the footage where I talk about how to use it.

Using After Effects to add the graphics, as I find it easier for motion graphics work.

Pictures for the car
2021 Nissan Qashqai 1.3 DIG-T Acenta Premium (158ps) Xtronic – £13,590 – CarGurus.co.uk

Enhancing the audio of original footage
Adobe Podcast | AI audio recording and editing, all on the web
Sound FX
4.4 million+ Stunning Free Images to Use Anywhere – Pixabay

Fixing the animation files, as they are having issues with textures, and working on lighting.

Trying out different camera angles


I have been trying to do the animation for the last part of this scene, where I want different characters to pop up, showing the best offer they have for the car.

Unfortunately, this has been the 5th failed attempt at rendering the Mixamo characters.

This, for some reason, is not picking up the textures properly.
I tried to fix it in Hypershade but failed.
I might try to fix it again, or else I will have to get another character from some other website.
The other two characters I used in the script are working fine and render the textures properly.

I thought it might be a laptop issue, but this render is from a remote desktop. I think it is a Mixamo issue, or maybe it only allows 2-3 downloads.

After checking with multiple characters and different settings, I came to the conclusion that it is not the render settings affecting the render; rather, it is the cosine and reflectivity values in Hypershade that made the difference.

Re-Animating the character

So this has been a constant issue while working on these Mixamo character files.
It shows as corrected in the viewport, but after the render it always has this glossiness, even after changing so many settings.
After many trials I realised it was the specular value in Hypershade, along with a lot of other values, that needed to be changed to fix this issue.

I am quite satisfied with how it turned out. I could have done a better job with the initial part of the script. Animation and lighting are areas I need to learn and progress in. For this project I think I did a good job with the script and imagining the scenes properly, and I avoided the mistakes I made in previous projects. Taking help from more references and real-life measurements was something that was missing in my earlier work. Also, tracking the scene for the city area was quite interesting. I made changes wherever needed.
What worked for me in this project was a proper storyline, studying more reference styles, and a clear idea of what I wanted to make; regular discussion with course leaders is, I think, one of the most important parts of the process.

Breakdown:

Film Theory

Introduction To story

Houdini

3DEqualizer

Camera

https://vfxcamdb.com/
All the details of camera

Export buffer compression – allows you to buffer through frames

E – end frame
T – track
CTRL + left click and press T track
Select the track – where it stops – adjust the tracking frame – retrack – press T


config – horizon control – image controls – enable control labels – colour control enabled

ALT + right click – to select points

ALT + C – to see the tracking results

Deviation browser- how accurate your track is

calc – all from scratch

05/02/2024 – Week 2

deviation browser for image control panels

Turn On image control enabled
Static camera shots
Two ways to do the static camera track for points –

1 : put tracking points – and frame – track again the same point from the other frame – splining through the frame – tracks the point automatically
2 : Put tracking point – track another similar static point

fixed camera position – if you used tripod

Adjusting the lens distortion
– Adjust parameter of the camera

Create Different groups for points to export to maya

Object track is for the 3D object and camera track is for the footage

Face tracking –
points should be mirrored to the opposite side of the face

Add 3D models – addnew

Turn off survey data – for 3D model

VFX Fundamentals

For my term project, I’m working on creating a special virtual world where people can have presentations and work in unique spaces. Imagine it like a magical metaverse.
In this virtual place, I’ll be designing different spots and buildings with workstations, kind of like special desks for people to use. We’ll also make beautiful displays for products, just like in a shop or museum.
Also there will be a lovely outdoor area, like a park! It’s going to be a fantastic adventure in the world of virtual spaces.

Developing a virtual metaverse where people can access creative workspaces, product displays, and even explore outdoor environments. This project aims to offer a diverse and immersive virtual experience for presentations and work, combining different settings such as workstations, product galleries and outdoor spaces.

Designing the building

Creating a platonic mesh for sample buildings

Designing the basic layouts for building model

Layout to design general environment

Thinking of references to arrange the buildings

Reference for the building covers

Working on texture paint using substance

Deciding on colour shades

This looks better, as ivory is a modern architecture colour shade. Black is used to make the grille stand out against the building, as in highlighting it.

Using a wooden texture for the poles, but I think this is too dark and I would prefer lighter shades; as we add trees, the overall look will be enhanced anyway.

Exporting textures to maya

Render in Arnold skydome

Checking with lights and HDRIs

Deciding on the setup to do the lighting and render

Importing meshes into unreal

Adding textures to the meshes imported from substance
Adding base and plants in unreal
Adding water body
separating out the glass material mesh to apply glass material on it
And this is the final look
Checking different HDRI images

Here’s the output render

Breakdown

UNREAL

Exploratory Practices

Week 12

Lighting Aesthetics

Understanding the mood of the scene with the help of lights

Light Purposes and functions

  • mood and environment
  • indoor and outdoor environment
  • controlled shadows

Nature of Shadows

2 Types of Shadows

Attached Shadows

dependent on object

Cast Shadow

Independent of object
Creates drama and mood

Fall off – Slow and fast

High key lighting and low key lighting

Above eye level key lighting and below eye level key lighting

Outer orientation Function

  1. spatial orientation
  2. Tactile orientation
  3. Time orientation

Lighting in cinematography

WEEK 10

Lecture: Rendering with Movie Render Queue (continued)

WEEK 9

Introduction to Sequencer and Movie Render Queue (MRQ)

MRQ- Movie render queue

Movie Render Queue is a successor to the Sequencer Render Movie feature, and is built for higher quality, easier integration into production pipelines, and user extensibility.

Plugins needed

  • Movie Render Queue
  • Movie render queue render passes

Plugins needed for rendering in video formats

Important Concepts:

  • Aliasing In CGI
  • Deferred rendering
  • colour space

WEEK 8

Cameras and Post Process Volume (PPV)

WEEK 7

Lighting and Global Illumination Systems in UE5

WEEK 6

 Landscape Material, Introduction to Water and Foliage Systems

creating landscapes
using different tools to create landscape
Using ramp tool to create slopes

WEEK 5

Materials part 2 and Intro to Landscape

ORD – occlusion, roughness, displacement

WEEK 4

Building assets from Quixel Bridge

importing assets and using quixel bridge content

WEEK 3

Asset Gathering and Intro to Modeling Tools

Blueprints – Unreal Engine approach to Visual scripting

Shortcut Keys

G – hide objects temporarily
Ctrl + Space bar – Content browser
W – move
E – rotation
R – Scale
Ctrl + G – group Objects
Shift + G – ungroup

Shortcut keys

Snapping Tools

Using content browser assets and props
To reduce lag and increase performance of unreal
Go to settings - Engine settings - set post process to medium and then change it back to Epic once done working
Settings

GPU visualizer

Type command profileGPU

The coloured bars represent how much processing time each individual item needs on the GPU

Settings for better performance of Unreal
https://youtu.be/esrnQBq75qg?si=I09mnhuJ3AzvdDCF

WEEK 2

Epic Games marketplace, launcher and introduction to Quixel Bridge

Quixel Bridge plugin for Unreal Engine gives you full featured access to the Megascans library within the Level Editor. You can browse collections, search for specific assets, and add assets to your Unreal Engine projects.

Launching Bridge in Unreal Engine

  1. Click the Content dropdown in the toolbar and select Quixel Bridge.Access Quixel Bridge from the Content Menu
  2. From the top menu, select Window > Quixel Bridge.Access Quixel Bridge from the Window menu
  3. From the Content Drawer, right-click and select Add Quixel Content.Access Quixel Bridge from the Content Drawer

If you can’t find Quixel Bridge as directed above, select Edit > Plugins. Type Bridge in the search bar, and click the checkbox to enable the plugin.

You may be prompted to restart Unreal Engine for the changes to take effect.

Enable the Quixel Bridge Plugin

MAYA

WEEK 10

Realism

Temperature Lighting
How red and blue lights decide the temperature

using color temperature with the arnold lights

For shadows, use the Arnold shadow matte material


SKY DOME LIGHT

IN ORDER TO RENDER CREATE CAMERA FROM FRONT VIEW

AOVs in render settings are important; add a specular AOV as the object is metallic

add custom ambient occlusion
for variation in shadows

WEEK 9

Maya Animation 

Principles of Animation

  • Timing and spacing
  • Squash and stretch
  • Anticipation
  • Ease in And ease out
  • Follow through and overlapping
  • Arcs
  • Exaggeration
  • Solid drawing
  • Appeal
  • Straight ahead action
  • Secondary action
  • Staging

Auto keyframe toggle
Animation preferences

WEEK 8

Maya rigging & IK

IK – Inverse Kinematics

FK – Forward kinematics

  • changing pivot
  • Parenting – select child and then parent and press P
  • Another way to parent – select parent – then child – Animation – constraint – parent

Creating IK handles

IK handles for hands

Human IK

Rigged inbuilt model
definition
controls

Skin Binding

Using quick rig

Creating Bones and then IK handles
adding controllers for the overall movement with proper alignment

Adding Bones
Creating IK handles
Adding controllers

WEEK 7

Texturing with Substance Painter

WEEK 6

Texturing UV

WEEK 5

Organic modelling pt2

WEEK 4

 Organic Modelling

project and bring a mood board with references

For my term project, I’m working on creating a special virtual world where people can have presentations and work in unique spaces. Imagine it like a magical metaverse.

In this virtual place, I’ll be designing different types of workstations, kind of special desks for people to use. We’ll also make beautiful displays for products, just like in a shop or museum.

Also there will be a lovely outdoor area, like a park, and even a galaxy to explore! It’s going to be a fantastic adventure in the world of virtual spaces.

Sample models for exterior of the building

WEEK 3

3D modelling in Maya

Modelling Air balloon in Maya

Nuke – Gonzalo

TERM 2 – EXPLORATORY PRACTICES

Assignment – Fire in London

Putting Smoke and adjusting the smoke color with the background:

I tried the YCbCr colorspace to match the shade of the smoke with the background, but I am not sure if this is exactly what it should look like.
Matte painting can be done in both Nuke and Photoshop, but for now I am using Photoshop because I still get a little confused with the nodes, and adding more nodes might confuse me further.
But I will definitely try to do it in Nuke as well!

Tried Photoshop generative AI to generate a broken wall and fire, and this is what the results look like.

Adding smoke in the background
I used a blur node to make it look like it is far away from the camera.

I was trying the building collapse effect but I couldn’t get it to work on the building in the background. So, I put in ready-made footage of a collapsed building
and added the bullet-hit effect.

added a bomb blast effect to make it look like it explodes after the hit

Adding smoke in the middle part and applying a little less blur to show it is closer to the camera than the previous one.
Here I think I should work on the roto for the front area, which is closest to the camera.
But if more smoke has to come in front, then the smoke and roto edges would not matter much.

Adding more fire in the closer areas.

Adding the grey and black shades on the wall to show the burning effect.

Adding the fire for the window
And the card for the Photoshop generative AI image of a broken building


News Channel edit

Node Graph

Final node graph
Link for submission
PU002772YA23/24: Gonzalo’s class | Moodle (arts.ac.uk)

Doubts:

issues:

Even when it is redistorted, it still has the issue

WEEK 15 – London city fire

Week 14 – London city Fire

Also, please have a look at this link with more footage elements resources:)

Use the Retime node – to change the speed of footage
OFlow – used to manage the speed, with three different options

Kronos: another tool to slow down elements
TimeWarp_loop – to make the footage run through the whole timeline even if it has fewer frames than the main footage
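The loop-retime behaviour described above can be sketched in plain Python. This is only an illustration of the frame mapping, not the actual node's code; the function name is my own:

```python
def loop_frame(frame, first, last):
    """Map any timeline frame back into [first, last] so a short clip repeats."""
    length = last - first + 1
    return first + (frame - first) % length

# A 10-frame clip (frames 1-10) asked for timeline frame 25 plays its own frame 5
looped = loop_frame(25, 1, 10)
```

So however long the main footage runs, the short element just keeps cycling through its own frame range.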

T_OverStack
Adding fake motion blur – depending on how close the plate is to the camera

Using expression to edit the luminosity

The Fibonacci glow adds more detail compared to the normal Glow node


Week 13 – London city part 2

Haze and Depth techniques

First Technique:
smoke LumaKey

Issues with this
We didn't treat the colour in the background of the smoke;
we just kept the alpha

Second Technique:
Luma key Inverted

Third Technique:
smoke LumaKey log

Loglin node
operation (loglin or linlog)

Fourth technique:
relight smoke
[way to add colors to the smoke]

Adding average of both the plates: avg of the bg on to the fog light

Matte painting – use project 3d node – project it onto the card

WEEK 12 – London city

Project Idea: try to change the look of the footage using stock footages, textures, smoke, etc to show the city is under attack.

References for before and after of the footage:
What to do – tracking, projections, smoke, fog and no CG

Inspiration:

breakdown: Battle Los Angeles – VFX Breakdown by Spin VFX (2011) (youtube.com)

Ideas:

Smoke composition:
Fire, depth, grade of the smoke, motion blur.

Planning your shot Process:
Match move – extract camera, position, cards, placement.
Matte Painting – how to damage the building.
Next week – how to add smoke.
last week – Grade the shot
Lower third and animation of the TV news channel

Camera trackers edit:
Mask – mask alpha.
Focal length – known.
Length -film back preset – film back size
Settings: Increase number of features
Error max – Min length – 3
Delete unsolved.
delete rejected.
use a point to decide the ground as we do not have a lot of information of the ground.

Let’s pretend a point is a ground (take one point on the ground)
Define the scale distance.
select last point. and define the axis.

WEEK 11 – Particles

CG Machine

WEEK 10 copycat

Intrinsically both node types are identical; both a GROUP and a GIZMO are the very definition of “Nodes within Nodes”.

The key difference between the two is that a GIZMO is a referenced node stored externally to the NUKE script, while a GROUP node is saved inside the NUKE script and can also be saved as a ToolSet.

Glint node – poster line effect

Two ways to add the content to the group
1: manage user knobs, add labels
2: open the pencil above, drag and drop also we can drag and drop from one node to another if the pencil is turned on

What is a CopyCat?
CopyCat is a machine learning node which allows artists to train their own networks

WEEK 10 – Matte painting

Gizmos and Tools

WEEK 9 – Motion Vectors

The Smart Vector toolset allows you to work on one frame in a sequence and then use motion vector information to accurately propagate work throughout the rest of the sequence. The vectors are generated in the SmartVector node and then piped into the VectorDistort node to warp the work.

Smart vector – export write

WEEK 8 – Remove Markers

UV maps are just a representation of axes in Nuke.
We use U and V instead of X and Y just so we don't confuse the letters.
U & V are representations of X & Y in Nuke. We don't need to actually import the 3D model into Nuke; instead we can copy the information into a UV map. It is generally the red and green channels that show X and Y; blue is Z, which is not available in a UV map.

It is a 2D unwrapped version of a 3D model, so if you import a texture into Nuke, you can directly map it onto the UV map of a 3D object.

We can work on textures in compositing with the help of a UV map, just like a cheat, instead of re-rendering the textures.
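As a rough illustration of this cheat, here is what a per-pixel UV lookup does, sketched in plain Python with made-up data structures (nothing here is Nuke's API; real STMap-style sampling would also filter between texels):

```python
def stmap_lookup(uv_image, texture, x, y):
    """Sample the texture at the (U, V) stored in the red/green channels of uv_image."""
    u, v = uv_image[y][x]                      # red = U, green = V, both in 0..1
    tex_h, tex_w = len(texture), len(texture[0])
    tx = min(int(u * tex_w), tex_w - 1)        # nearest-texel lookup for simplicity
    ty = min(int(v * tex_h), tex_h - 1)
    return texture[ty][tx]

texture = [["a", "b"],
           ["c", "d"]]
uv_image = [[(0.9, 0.1)]]                      # one pixel pointing near the top-right texel
sample = stmap_lookup(uv_image, texture, 0, 0)
```

Swapping the texture and re-running the lookup re-textures the object without re-rendering anything.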

Removing markers – clean up

Regrain – to put back the grain on the node
Plate grain – To just show where the grain is
Normalised Grain – flat grain pattern
Adapted grain – takes the shadows, midtones and highlights of the grain

Image Frequency Separation
Preserve and Reuse Details During Clean
Using roto paint for low freq and high freq:
Low Frequency – light of the face – light and shadows
High Frequency – details of the face

Interactive_light_Patch – to remove markers
Using transform of the node – divide and multiply and taking the average of the transformed area

Curve tool – to manage the change in light
We copy values from maximum luma value of curve tool node – copy link
(crop the area which has maximum changes in light)
Go to grade node – gain – paste absolute

Curve tool – copy minimum links – paste on the lift of grade node


Home Work – Green screen removal

Green Screen Space Man

WEEK 7 GREEN SCREEN

Clamp node: can be used to keep channel values between 0 and 1 and not beyond, even when we multiply two channels.
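What the Clamp node does per channel value is easy to sketch in plain Python (illustrative only, not Nuke's implementation):

```python
def clamp(value, minimum=0.0, maximum=1.0):
    """Keep a channel value inside [minimum, maximum], like the Clamp node."""
    return max(minimum, min(maximum, value))

# Multiplying two bright channels can push a value past 1.0
product = 1.3 * 0.9          # 1.17
clamped = clamp(product)     # back to 1.0
```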

Denoise: the plate gets soft – you lose some details

Core – base – hair

ADDITIVE KEY is not an actual keyer but an image blending technique used
to recover fine detail in difficult areas such as wispy hair, soft transparencies and motion blur.
If combined with a good matte, edge treatment and despill, it can create very good results to better integrate your plate with the bg.
Additive Key manipulates lightness values in the fine details and ADDS them to the bg under the foreground plate.

To remove green screen – first we remove the despill
Keylight (green channel =1) – merge(minus) – (A-B) – (keylight – original footage)
– got the despill – saturate it down to zero
– roto out the areas you don't want to remove green despill from

Roto Invert the eyes from saturation

– By subtracting the original from the despill plate you get the luminosity lost by removing green
– Desaturate those values to remove unwanted colours
– Multiply the bg over those luminosity values
– Plus it over the original to add the bg in the transparencies
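Those steps can be sketched per pixel in plain Python. This is only an illustration: the green limit used here (average of red and blue) and the Rec. 709 luma weights are my own assumptions; real despill setups vary:

```python
def despill_green(r, g, b):
    """Average despill: green may not exceed the mean of red and blue (one common limit)."""
    limit = (r + b) / 2.0
    return (r, min(g, limit), b)

def luminance(r, g, b):
    """Desaturate to a single value using Rec. 709 luma weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def additive_key(fg, bg):
    """Add bg, scaled by the luminosity lost to despill, over the despilled plate."""
    despilled = despill_green(*fg)
    lost = tuple(a - b for a, b in zip(fg, despilled))   # original minus despill
    lum = luminance(*lost)                               # desaturated lost light
    return tuple(d + lum * c for d, c in zip(despilled, bg))

result = additive_key((0.4, 0.8, 0.3), (0.1, 0.2, 0.3))
```

Where no spill was removed, `lost` is zero and the bg contributes nothing; in the wispy spill areas the bg shows through, which is exactly the transparency recovery described above.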

Addmix – used to adjust the alpha at the edges


Week 6 Green Screen

Hue correction-
to change one colour and replace with another.
R-sup – red suppress, used to manage alpha

Keyer – we can select different channels and select different areas of the same image – using keyer operation channels.
YPbPr is the analog video signal carried by component video cable in consumer electronics.
The green cable carries Y, the blue cable carries PB and the red cable carries PR.

R = HUE: Hue literally means colour

G = Saturation: Saturation pertains to the amount of white light mixed with a hue.

B = Luminance: Luminance is a measure to describe the perceived brightness of a colour

IBK stands for Image Based Keyer
It operates with a subtractive or difference methodology.
It is one of the best keyers in NUKE for getting detail out of fine hair and severely motion-blurred edges

IBKcolour – remove the man and keep the background
IBKgizmo –


The Colorspace node is used to convert linear channels to HSV channels
HSV stands for hue saturation value.
R = HUE: Hue literally means colour.
G = Saturation: Saturation pertains to the amount of white light mixed with a hue.
B = Luminance: Luminance is a measure to describe the perceived brightness of a colour

HueCorrect – can be used to replace a colour with another one.
HueShift – can be used to shift one colour with another.
Keyer Operation – can help select one particular channel of the image to convert it to alpha
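Python's standard `colorsys` module does the same RGB-to-HSV conversion, which is a handy way to check what the three channels mean:

```python
import colorsys

# A greenish pixel: value is simply the largest channel,
# saturation measures how far the pixel is from grey
h, s, v = colorsys.rgb_to_hsv(0.2, 0.6, 0.4)

# Shifting the hue and converting back is essentially what HueShift does
r, g, b = colorsys.hsv_to_rgb((h + 0.1) % 1.0, s, v)
```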

Nodes used to remove Green screen:

Green despill – is not about removing the green colour or the background; it's about removing the green spill on the character.

Shot – keylight – green channel to 1 – merge the difference (A – B) – removes the green spill – then we desaturate it – and Merge back the shot (A + B) – we have removed the spill

Week 5 CG Nuke Machine

If the shot is approved by all the departments then it is the compositor's responsibility to match the rendered comp with the surroundings; that is the line between the departments.

Compositors role – Adding highlights, mid tones, defocus, shadows, grain , lens distortion

Lens distortion works only in NukeX; if you export it as an STMap you can use it in other versions as well.

To match whites between images –
Grade node – White point – select pixel – press ctrl
select Gain- select color pixel you want in to the grade node gain point

To match black points
Grade node – black point – select area
select lift – select area
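The white/black point matching above can be written as a simplified version of the Grade node's transfer (gamma and the multiply/offset controls are left out here; this is a sketch of the mapping, not Nuke's full formula):

```python
def grade(value, blackpoint=0.0, whitepoint=1.0, lift=0.0, gain=1.0):
    """Simplified Grade transfer: blackpoint maps to lift, whitepoint maps to gain."""
    return (value - blackpoint) / (whitepoint - blackpoint) * (gain - lift) + lift

# Source white 0.8 should land on the target plate's white of 0.9
matched_white = grade(0.8, whitepoint=0.8, gain=0.9)

# Source black 0.05 should land on the target plate's black of 0.02
matched_black = grade(0.05, blackpoint=0.05, lift=0.02)
```

That is why picking the source pixel into whitepoint/blackpoint and the target pixel into gain/lift lines the two plates up.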

Grade – whites and blacks
Color Correct
Hue correct – to select particular color and desaturate it – ctrl + alt – select the particular color need to be desaturated

UV pass – retexture

WEEK 4 – Multipass composition

Layer contact sheet

ID map – to change different elements in the comp footage

KeyID node – to select one colour from the ID
Normal pass – red, green , blue channel info can be used to relight the footage
AO pass – to produce contact shadows
Motion vector – to adjust motion blur

Multiply comp –
subtractive method:
remove the channel from the original footage (that is, subtract it) using a shuffle node and grade
AO – merge Multiply
Other channels – merge plus

Unpremultiplying channels helps keep edges intact and unaffected
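Premult and unpremult per pixel, sketched in plain Python (illustrative only): unpremultiplying before a colour correction and premultiplying after is what keeps the semi-transparent edges intact.

```python
def premult(r, g, b, a):
    """Multiply RGB by alpha (what the Premult node does)."""
    return (r * a, g * a, b * a, a)

def unpremult(r, g, b, a):
    """Divide RGB by alpha (Unpremult); grade in this state, then premult again."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)   # avoid dividing by zero in empty pixels
    return (r / a, g / a, b / a, a)
```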

Position Pass –
The position pass represents the 3d scene in color values. Red is the x coordinate, green is y, blue is z. You can use a Position ToPoints node to visualize it in the 3d space. A depth channel stores the distance from a point in 3d to the position of the camera (and will be moving with the camera)
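A tiny sketch of the depth relationship described here, in plain Python with made-up values:

```python
import math

def depth_from_camera(position, camera):
    """Depth channel value: straight-line distance from a world point to the camera."""
    return math.dist(position, camera)

# A point whose position pass reads (3, 4, 0), seen from a camera at the origin
d = depth_from_camera((3.0, 4.0, 0.0), (0.0, 0.0, 0.0))
```

The position pass stores the point itself in RGB; the depth channel stores only this one distance, so it changes as the camera moves.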

WEEK 3 – Multipass composition

What is multipass compositing?
It allows you to break down your shot into separate elements, such as diffuse, specular, and reflection passes, and work on each one individually. This not only gives you more control over the final look of your shot but also saves time and effort by eliminating the need for re-rendering the entire sequence.

precomp – creating a sequence within a project
Select the write node – then press R
You need not compute the lens distort every time – it slows down the computer's processing

Put the lens distort only on the elements that will be added, or the patches that will be removed, and put the lens redistort back before merging the whole footage

Types of Projection
– Patch Projection


– Coverage Projections


– Nested Projection


-Combination of all above

.abc – alembic file import – to import animated footage in 3D form

Projection Artefacts:
Smearing: an image smearing or streaking across the glancing angle of the object.
Doubling: when a matte painting projects onto multiple pieces of geometry.
Resolution: not enough resolution in the painting because the camera gets too close.

ModelBuilder – create card – create mesh – bake mesh

Rendering passes of the CG

The objective of unbuilding the beauty and rebuilding it via the passes (channels) is to have higher control over every pass and grade it according to the background image.
When we build a CG beauty we simply combine information from highlights, midtones and shadows. Pass naming differs depending on the render engine.
This information is contained in the passes (channels). We use the shuffle node to
call the channel that we need out of the EXR.
The rule to follow to rebuild the CG asset is ALWAYS plus lights and ALWAYS multiply
shadows
Diffuse +
Indirect +
Spec +
Reflections +
AO *
Shadows *
Every pass should be graded separately
A final grade can be applied to the entire asset if needed
+ stands for Merge plus
* stands for multiply merge
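The plus/multiply rule can be sketched per pixel with single-channel values (made-up numbers, illustrative only; in practice every pass is graded before combining):

```python
def rebuild_beauty(diffuse, indirect, spec, reflections, ao, shadows):
    """ALWAYS plus the light passes, ALWAYS multiply the occlusion/shadow passes."""
    lights = diffuse + indirect + spec + reflections   # merge plus
    return lights * ao * shadows                       # merge multiply

beauty = rebuild_beauty(0.4, 0.1, 0.2, 0.1, 0.9, 0.8)
```

Adding AO or shadows with a plus would brighten the image; multiplying them darkens it proportionally, which is why they sit on the multiply side of the rule.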

Ctrl + shift – replace the node with other node

shake node – to disconnect

HOMEWORK

copying alpha of roto and alpha of corner pin?

in the third technique is the project node projecting every frame onto the card as in frame by frame, as we have not used framehold?

Removing Markers

Issue resolved: rotopaint should be rechecked for all the frames

Markers removed
https://learn.foundry.com/nuke/content/reference_guide/3d_nodes/project3d.html

why do we use project 3d node in nuke?
One of the handy feature of the Project3D node in Nuke is that we can quickly texture a geometry in Nuke through the use of camera projection

WEEK 2

Nuke 3D Matchmove advance

Different techniques of Camera projection

Lens distortion
detect the grid – solve – put the lens distortion

Technique 1:

3d patch – project on mm geo

HOLDING THE PATCH

Point cloud generator

PCG – analyse sequence – track points – (viewer should be connected to source node) –

double click the camera – delete rejected points – vertex selection – add group – change vertex selection to node selection – select group — bake selected group to mesh

The Project3D node projects an input image through a camera onto the 3D object.

we added the roto paint on the card as image we added in last project

blue camera is used to project the texture to the card that is taken from the rotonode
to tell the camera that this texture is from

Technique 2

HOLD THE CAMERA

Holding camera

USE replace for the roto

uncheck – it will make the roto part alpha and not the whole image

TECHNIQUE 3:

3d patch – project UV

SCANLINE RENDER OPTIONS

MergeMat – same as Merge, but in 3D

Model builder, point cloud generator – can only be found in NukeX

Project 3D node

WEEK 1

Nuke 3D Matchmove

Avoiding reflections in the footage, like water reflections and mirrors, and using roto for better tracking of a 3D scene

to see the tracking points – mask – source alpha – settings – preview

Camera node

– mask -Mask alpha prevents the masked areas to be tracked

attaching the mask input to mask out the reflection areas

settings

– number of features – 500

– refine feature location – check X

– preview features check X

press – Track

press – solve

Error – 0.9 – to solve the errors

Panel – Auto tracks – error max – press F on graph – reduce max error- delete unsolved – delete rejected – solve error has been removed

camera tracker – export – scene (in option) – create

Tracked camera
Shift select points

defining the scale
adding a 3D element
adding the card checker board to the ground
Tracking cone is a custom node added here

Adding grid

Point cloud generator in old NukeX versions

Point cloud generator – analyse sequence
track points

hit delete
select vertex

select group and then bake selected group to mesh

This connected to point cloud generator baked node can help to export the camera axis to maya from nuke
ModelBuilder node

Lens distortion – detect – solve – undistort
Use the same lens distortion that you used to detect the distortion;
copy-paste the same lens distort – Redistort

WEEK 10

Real scenarios in production

1: Review your work
2: Stages in post-production
Temps/Postviz – rough versions of how a shot is going to look
Trailers
Finals
QC –

Software project management

1: Google docs and sheets – free
2: Ftrack –
you can get all the project details, latest updates and all the information and data related to the project.

3: SG Shotgun
Shot Grid is a project management software owned by Autodesk. Shot Grid is primarily used for visual effects and animation project management in television shows and movie production and video game development.

Shotgrid | Get Prices & Buy Shotgrid 2023 | Autodesk

Production Roles
Line producer
VFX producer – make sure to complete the project on time and set the standard high.

VFX Dailies

Every morning meeting.
to make sure everything is working the right way.

Tech Check before publishing a version.

Desh Daily reviews
Small cinema dailies
Big Cinema dailies – big budget films (avenger, ff)





WEEK 9

2D cleanup 

p – shortcut for rotopaint

Rotopaint – clone – ctrl – hold – paint

changing the opacity of the clone tool for more organic paint

Shift – to change tool radius size
Hardness value

to change the values of roto paint brush for a particular clone
Paint tool – premult

check the box to see the alpha channel on rotopaint node –

2 ways to separate out the paint from the roto paint tool

roto- merge (divide) – rotopaint- again merged
rotopaint- difference – copy alphas – premult – merge

Regraining the patches

Using keymix

Grain matching – ensuring that all of the elements in a composite, including those which were digitally generated, look like they were shot on the same film stock 

*Start with denoise



Nuke 3D Match move

press TAB – 3D/2D

Customization

Nuke – Preferences – viewer handles – 3D Navigation – Maya

Properties – Right click – Pixel Analyzer

Workspace – Save Workspace

preferences – startup – select customized Workspace

Scanline render –

When connected to a Scene node, the ScanlineRender node renders all the objects and lights connected to that scene from the perspective of the Camera connected to the cam input (or a default camera if no cam input exists). The rendered 2D image is then passed along to the next node in the compositing tree, and you can use the result as an input to other nodes in the script.

WEEK 8

Channels and depth channels

Bounding box –

Everything you measure in Nuke is about Pixels and not m/cm

Using Defocus to emulate a shot and not Blur
Blur gives a smudgy look, whereas defocus is more of a filmic shot look

How to control the midground, foreground and background depth/blur

Depth channel – visual representation of depth values for foreground, midground and background

ZDefocus – used to control depth channel of an image

Changing the Focal point to decide which area you want to focus

Changing the depth of field of the focal point to adjust the area of focus

To find out how smooth you want the progression
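A rough sketch of how a ZDefocus-style mapping from depth to blur size might work. The linear falloff and all parameters here are my own assumptions for illustration, not Nuke's actual curve:

```python
def defocus_size(depth, focal_point, depth_of_field, max_size=20.0):
    """Blur size for a pixel: zero inside the in-focus band, growing with distance outside."""
    distance = abs(depth - focal_point)
    half_band = depth_of_field / 2.0
    if distance <= half_band:
        return 0.0                               # inside the area of focus
    return min(max_size, distance - half_band)   # farther from focus = bigger blur

# Focal point at depth 5 with a depth-of-field band of 2 units
sizes = [defocus_size(d, 5.0, 2.0) for d in (5.0, 6.0, 10.0, 100.0)]
```

Moving the focal point slides the sharp band through the scene; widening the depth of field widens it; the "smoothness of the progression" corresponds to how quickly the size ramps up outside the band.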

layers

WEEK 7

Match moving – point tracking

2D TRACKER

This is a 2D tracker that allows you to extract animation data from the position, rotation, and size of an image. Using expressions, you can apply the data directly to transform and match-move another element. Or you can invert the values of the data and apply them to the original element – again through expressions – to stabilize the image.

This is the general process for tracking an image:

1. Connect a Tracker node to the image you want to track.
2. Use auto-tracking for simple tracks or place tracking anchors on features at keyframes in the image.
3. Calculate the tracking data.
4. Choose the tracking operation you want to perform: stabilize, match-move, etc.

Tracker – is an information of data that is moving in X and Y

2D track – tracking in X and Y
2.5 Track – getting the illusion of tracking in perspective
3D – match move ( matching movement in X, Y and Z)
Nature of the shot will be deciding what tracker we can use

2 ways to extract the data in X and Y
1: add the tracker data to the plate that needs to be tracked and change the transform settings of the tracker

2: Creating another node with the help of the tracker, copying the data into a match-move node.
Match move – 1 point
copying the transform data

4 point tracking
Match move 4 points – if the plate has slight variation in rotation and scale
copying all the data – T R S
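The match-move vs stabilize relationship can be sketched with made-up track data in plain Python: apply the track's per-frame offset to match-move an element, or invert it to stabilize the plate (translation only; a 4-point track would also carry rotation and scale):

```python
# Hypothetical tracked feature position per frame (x, y)
track = [(100.0, 50.0), (102.5, 51.0), (105.0, 52.5)]

def offset(frame):
    """How far the feature has moved since the first frame."""
    return (track[frame][0] - track[0][0], track[frame][1] - track[0][1])

def matchmove(position, frame):
    """Move an added element WITH the track."""
    dx, dy = offset(frame)
    return (position[0] + dx, position[1] + dy)

def stabilize(position, frame):
    """Apply the inverted track so the feature stays locked in place."""
    dx, dy = offset(frame)
    return (position[0] - dx, position[1] - dy)
```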

Test Planar Tracking

Filtering and Concatenation

Filtering Algorithm

Image filtering algorithms define what happens to our pixel values as we transform and process our images in Nuke.

Ctrl + click – sample the pixel value under the cursor in the Viewer
Ctrl + Shift + drag – sample the average value of a region of pixels

Choosing a Filtering Algorithm – https://learn.foundry.com/nuke/content/comp_environment/transforming_elements/filtering_algorithm_2d.html

The Reformat node comes with a filter option, which lets us decide how the pixels of the plate are resampled.

Transform is a super useful node that deals with translation, rotation, and scale as well as tracking, warping, and motion blur.

Concatenation – Nuke gathers the maths of consecutive transform nodes and applies it as one single calculation at the end of the chain.

CONCATENATION

Concatenation is the ability to perform one single mathematical calculation across several tools in the Transform family. This single calculation (or filter) allows us to retain as much detail as possible.
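A sketch of why concatenation preserves detail: two transforms can be multiplied into one combined matrix and applied in a single step, so the image only gets filtered once instead of once per node. This is plain Python with hand-rolled 3x3 matrices, purely to show the maths.

```python
def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

translate = [[1, 0, 10], [0, 1, 5], [0, 0, 1]]   # move by (10, 5)
scale     = [[2, 0, 0],  [0, 2, 0], [0, 0, 1]]   # scale by 2

# Concatenated: one combined matrix, so only one filtering step is needed.
combined = mat_mul(scale, translate)
print(apply(combined, (1.0, 1.0)))  # (22.0, 12.0)

# Same geometric result as applying the transforms one after the other,
# but done sequentially the image would be resampled (and softened) twice.
print(apply(scale, apply(translate, (1.0, 1.0))))  # (22.0, 12.0)
```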

WEEK 6

Merging and color matching

WEEK 5

Learning about Rotoscoping

The key workflow of rotoscoping is to generate an alpha matte from the Roto node and use that alpha to cut out our footage or plate.

We can do that in three ways

1: Streamlined workflow

Benefits – you have a streamlined workflow
Limits – you cannot transform the node easily, as it will affect the RGB channels as well

Premult: multiplies the RGB channels by the alpha channel
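Premultiplication is just a per-pixel multiply of RGB by alpha, sketched here on a single pixel (illustrative function, not Nuke's node):

```python
def premult(rgb, alpha):
    """Multiply each RGB channel by the alpha value (premultiplication)."""
    return tuple(channel * alpha for channel in rgb)

# A half-transparent pixel: every channel is scaled by alpha = 0.5.
print(premult((0.8, 0.4, 0.2), 0.5))  # approximately (0.4, 0.2, 0.1)
```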

2: Branched workflow

Benefits: it gives a more broken-down view of our script

Basic nodes for Rotoscoping
  • Roto Node: The fundamental tool for rotoscoping. You can draw shapes that match the object. To create a Roto node, press O.
  • RotoPaint Node: This node allows for painting directly on the footage, facilitating the rotoscoping process. Using multiple RotoPaint nodes can be heavier compared to Roto nodes.
  • Shuffle Node: The Shuffle node is vital for manipulating input and output channels. It is particularly useful when you need to remove an alpha channel. Simply disconnect the alpha channel from the output layer.
  • Remove Node: This node empowers you to select the specific channel you want to remove.

I/O Points: Adjacent to the play button, the I/O buttons let you set in and out points within your composition.

Calculating motion blur in Nuke: 24 frames per second with a 1/60 s shutter speed gives 24/60 = 0.4 of a frame of motion blur.
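That motion-blur figure is simply frame rate multiplied by shutter time (equivalently, frame rate divided by the shutter-speed denominator):

```python
def shutter_fraction(fps, shutter_time):
    """Fraction of a frame during which the shutter is open,
    i.e. the amount of motion blur per frame."""
    return fps * shutter_time

# 24 fps with a 1/60 s shutter -> 24/60 = 0.4 of a frame of blur.
print(shutter_fraction(24, 1 / 60))
```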

Employ nodes like ‘EdgeDetect’ to create an outline of alpha, ‘Dilate’ to reduce or expand the alpha, and ‘FilterErode’ for alpha manipulation using filters. ‘Erode’ comes in handy when fine-tuning the outline with minimal changes (1-2 pixels).

Ensure that alpha values remain within the 0 to 1 range for accurate compositing.

Understanding why smaller shapes are important, and how to use them the right way. The shape I used for the T-shirt didn't work out very well: I changed its shape every other frame, which produced a very jittery animation. So instead I used a smaller shape at the shoulder, made sure it followed the track, and kept the shape unchanged from what it was at the origin frame. As a result, I realised this approach is much easier and more accurate, even if it means using a lot of shapes.

WEEK 4

Intro to Digital Compositing and Nuke software interface

There are different types of software used for compositing and editing.

Adobe After Effects – layer-based software
Nuke – node-based software

Basic shortcuts in Nuke –

  • Edit > Project settings: check the settings before you start working
  • F: focus selected node
  • Middle mouse: move
  • Space: maximize selected workspace
  • (select video)H: fit height
  • Select node + number: show selected node on view
  • Tab: floating search bar
  • B: Create a blur node
  • Read (R): create a node that imports footage
  • Write (W): create a node that renders the outcome
  • O: create roto
Extension and preference
  • .nk: default extension of Nuke
  • .exr: native extension for nuke
  • (Nuke doesn’t work well with mp4/QuickTime > it easily crashes)
  • C:\Users\23008828.nuke : Nuke’s preferences folder, which can be copied to share settings with someone else
Sample VFX slate (VFX Slates & Overlays Guidelines – Netflix | Partner Help Center (netflixstudios.com))


Full-frame processing: play the video at full frame resolution
File name > videoname.####.exr = frames are written as videoname.0000.exr onwards
Press the Render button > you can set the frame range (input, global, custom) to render
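The #### in the file name is frame padding: each # stands for one digit of a zero-padded frame number. A quick sketch of how such a pattern expands (hypothetical helper, just to show the naming convention):

```python
def expand_padding(pattern, frame):
    """Replace a run of '#' characters in a filename pattern
    with a zero-padded frame number of the same width."""
    width = pattern.count("#")
    return pattern.replace("#" * width, str(frame).zfill(width))

print(expand_padding("videoname.####.exr", 0))    # videoname.0000.exr
print(expand_padding("videoname.####.exr", 123))  # videoname.0123.exr
```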

TRAVEL

Short video describing your travel to understand the angles of videography, lights, focus and its importance in the footage.

WEEK 3

Cinematography Foundation II

Mise-en-scene

: The arrangement of everything that appears in the frame, such as actors, lighting, location, set design, props, costume, etc. A French term that means “placing on stage”. It is the filmmaker’s design of every shot, every sequence and ultimately the film. It is the tool filmmakers use to convey meaning and mood.

Examples of composition(A beginner’s guide to composition – Work Over Easy)

One of the most important aspects of the mise-en-scene is composition. Composition is the creative and purposeful placement of every element in the shot.

Different aspect ratios(Understanding Video Aspect Ratios: A Complete Guide – Muvi One)

Every composition decision starts with the dimensions and proportions of your frame, which is called the aspect ratio. 2.35:1 is the standard for most films.

Screenshot of film Parasite (2019)

Every frame is basically two-dimensional, with an X and Y axis, but the Z axis can be suggested through the depth of the frame. Emphasizing the Z axis creates depth and makes the screen less flat and boring.

Screenshot of film Parasite (2019)

The rule of thirds is one of the conventions used to create a harmonious composition. It helps balance the frame and guides the viewer’s attention through the placement and size of elements in the frame.

Screenshot of film Parasite (2019)

High angle: when the camera is placed above the eye line, it makes the subject look powerless and weak.

Screenshot of film Parasite (2019)

Low angle: when the camera is placed below the eye line, it makes the subject look confident, powerful and in control.

Shot size(SHOT SIZES (weebly.com))
The four attributes of light

Intensity: The brightness of the light. The measurement establishes our exposure settings in camera which we set in F-stops.

Quality: Soft/Hard light. Shadows are key to understanding quality of light.

Screenshot of film Parasite (2019)

Hard light will produce harsh shadows with defined edges and little fall-off. The source of it could be the sun, candles or light bulbs.

Screenshot of film Parasite (2019)

Soft light spreads very quickly and produces soft-edged shadows. The source of it could be overcast skies or windows.

Three factors can control either hard or soft.

  • Size of source: small sources make hard light and big sources make softer light.
  • Distance from subject: the further away from the subject, the harder the light. The closer to the subject, the softer the light.
  • Filtering: filtering light through a diffuser or bouncing it off a surface makes it soft.
Screenshot of film Parasite (2019)

Angle: The foundational lighting setup used in filmmaking is three-point lighting.

The convention tells us to put a main light (key light) at a 45-degree angle to the subject, a second auxiliary light (fill light) at a 45-degree angle opposite the key light, and a third light behind the subject (backlight).

Colour: Every light has its own native colour temperature or colour cast.

Chart of colour temperature(Understanding Color Temperature (Kelvin) (inlineelectric.com))

Colour grading is the process of manipulating the colour and contrast of images or video to achieve a stylistic look. There are several colour schemes used in colour grading:

  • Complementary colours
Colour scheme from a shot of film La La Land (2016)
  • Analogous colours
Colour scheme from a shot of film La La Land (2016)
  • Triadic colours
The Psychology of Color in Film (anuirmyrdal.wixsite.com)
  • Monochromatic colours
Colour scheme from a shot of film La La Land (2016)

Moodboard – Time

Designing a moodboard related to the concept of ‘time’.

For this I chose some of my favourite pictures from my camera roll that replicates TIME for me.

1: Depicting a long way to go.
2: Timeless architectures.
3: Pigeon running for its next hunt, for survival.
4: Just showing that time waits for no one.
5: Every moment, making sure the sky is the limit for any work you do.

WEEK 2

Cinematography Foundation


Cinematography: The word comes from Greek and
means “drawing movement”.
Cinema is the art of moving images, yet it could be said to be simply made up of a series of still images, or a story told in pictures.

Similar structure of film camera(ishootfujifilm.com/film-101/articles/anatomy-of-a-film-camera)
an example of under/correct/over exposure(What is Overexposure in Photography & How to Fix It (studiobinder.com))

Exposure is a vital element of a picture. If I take a picture at the correct exposure, making under- or over-exposed versions of it is easy; however, the opposite is quite difficult.

ishootfujifilm.com/film-101/articles/aperture-shutter-speed-iso#

The correct exposure value (EV) is a combination of three elements: ISO, Aperture, Shutter speed

image about ISO and film speed

ISO stands for “International Organization for Standardization”.

ISO refers to the sensitivity of film to light and is measured in numbers. The higher the number, the more sensitive it is to light. Originally defined for film cameras, it was later adopted by digital cameras. As a high ISO makes a picture grainy, we try to use the lowest ISO possible for the lighting conditions of the scene.

Image about aperture

Aperture is known as F-value or F-stop. A large f-stop equals a small aperture opening, which lets through a small amount of light. A small f-stop equals a large aperture, which lets through a large amount of light.

Image about Depth of Field

We can adjust depth of field using the aperture. A large f-stop increases the depth of field (deep DoF) and a small f-stop decreases the depth of field (shallow DoF).

Example of different DoF(Guide to Depth of Field (+ Examples & Calculator) (shotkit.com))
Image about Shutter speed

Shutter speed is the length of time the camera shutter is open, exposing light onto the camera sensor. It controls motion in the image. The slower the shutter speed, the brighter and more motion-blurred the picture.

Information about the picture

We can check a picture’s metadata on a camera or phone, including ISO, shutter speed, and so on. Even if I forget the camera settings, I can check them later.

Image about Frame rate(Understanding Frame Rate In Video | by Vincent T. | High-Definition Pro | Medium)

Frame rate is the amount of individual frames that the camera captures per second. 24fps/25fps is the standard for video on web or film as those look the most natural to the human eyes. For live TV or sports, 30fps is common. 60fps, 120fps, and higher frame rates are used for recording video to be played back in slow motion.

  • NTSC (National Television System Committee): 30fps, used in North America and Japan.
  • PAL (Phase Alternating Line): 25fps, used in Europe and South East Asia.
Image about Focal length(How Focal Length Affects Viewing Angle | Digital Camera Know-Hows | Digital Camera | Digital AV | Support | Panasonic Global)

The angle of view is the extent of the scene captured by the camera. Changing the focal length changes the angle of view: the shorter the focal length, the wider the angle of view. If I shoot everything at 8mm, it is hard to focus on any particular thing (appropriate for landscapes). A 50mm lens is similar to the normal view seen by the human eye.
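The relationship between focal length and angle of view can be written as AOV = 2·atan(sensor width / (2·focal length)). A quick sketch, assuming a full-frame sensor 36 mm wide and a simple rectilinear lens:

```python
import math

def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view in degrees for a simple rectilinear lens:
    AOV = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(angle_of_view(50), 1))  # ~40 degrees: close to the natural human view
print(round(angle_of_view(8), 1))   # very wide: suits landscapes
```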

Digital file format is a type of standardized encoding to store digital video data on a computer. The file format container includes video and audio data.

  • MP4: Supported by most digital devices and platforms, making it the most universal video format around. MP4 is appropriate for non-professional video such as Instagram or YouTube uploads (not for film).
  • MOV: Developed by Apple, this is the video format specifically designed for QuickTime Player. However, it takes up significantly more space as its quality is usually very high.
  • AVI: One of the most versatile video formats. However, with such high quality, AVI files are large, which makes them better suited to storage and archiving than to streaming or downloading (DNxHR/DNxHD).
  • ProRes: A high-quality, lossy video compression format developed by Apple. It is used in post-production and supports up to 8K. It is widely used as a final delivery format for HD broadcast files in commercials, features, Blu-ray and streaming.