
On Color Grading

A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. To achieve that look, one of the things I had to learn from scratch was how to color grade.

Here, I’m sharing a bit about my further adventures (read: digging myself into a hole) in color grading with Blackmagic Design’s DaVinci Resolve Studio (the non-Studio version is a free download). It’s an incredible piece of software. (If you’re thinking about ditching Adobe Premiere – just do it! Go for it. I’ve never regretted it for a second.)

This is not a primer on color grading. I’m just writing up (OK, dumping) and sharing what I’ve learned and what works great for me. The following assumes you’re already familiar with some of the key concepts, or at least have a faint interest in them. If not, this post might not be for you (it will probably bore the living daylights out of you). However, if you wish to start (or continue) on a color grading learning journey with DaVinci Resolve, Cullen Kelly’s YouTube channel is a wonderful place for that.

What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a rediscovery of my passion for the cinematic image. (I did indeed start out studying to become a film director, albeit dropping out after two years – studying it wasn’t for me.)

And as a person most likely somewhere on the spectrum, of course I can’t stop digging until I get somewhere interesting – somewhere I feel a sense of mastery and understanding of the full stack (lighting, lenses, camera, cinematography, sound design, microphones, color grading, post-production), aka being able to produce predictable outcomes, and to make those outcomes look cinematic and pleasing (to me). It’s become sort of a new hobby (fine: obsession) of mine, in addition to spreading startup entrepreneurship education, of course. Still digging…

The quick & dirty setup for the above shoot:

  • Camera: A tiny (300 g, w 8.23 cm x d 7 cm x h 6.6 cm), old (launched in 2012!), and cheap (I paid less than EUR 600,- for it used on eBay, including an 8Sinn cage, handle, and octopus expansion cable) digital Super 16mm MFT-sensor Blackmagic Design Micro Cinema Camera (MCC). ISO 800 (native), 5600K, shutter at 180 degrees and 24 fps – and obviously exposed to the right (ETTR); see the quick shutter/ETTR math sketch after this list.
  • Lens: A tiny (this being the largest in the series, but still tiny compared to e.g. an EF lens), cheap (EUR 81,- on eBay, almost mint) vintage Pentax A110 (s16mm system) 70mm f2.8 fixed-aperture (in the this-lens-system-has-no-internal-iris! sense) lens on an MFT adapter, kitted with a 49mm metal lens hood sporting a 72mm ICE “IR/UV” filter (dirt cheap for the quality – and the MCC needs an IR filter if you’re shooting in any sunlight, unless you like pink and purple blacks) and a Lee 2-stop ProGlass IRND filter. Shot into the sun (I don’t have powerful enough lights to fight the sun) coming in on the far side of the face (actually it was overcast and raining).
  • Lights: Key: Godox UL150 (silent, great value for money) with an Aputure Lantern modifier. Fill: Godox SL60 (not entirely silent, but OK – and dirt cheap for the color accuracy) with an Aputure Light Dome Mini II softbox and honeycomb/grid modifier.
  • Image Acquisition: Blackmagic Design Film Generation 1 DNG (RAW, not to be confused with BRAW).
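
If the shutter-angle and ETTR shorthand above is unfamiliar, here’s a minimal arithmetic sketch of what those settings actually mean (plain Python, nothing camera-specific):

```python
# Shutter-angle to shutter-speed arithmetic for the settings above (180 degrees at 24 fps),
# plus the stop math behind "expose to the right" (ETTR). Pure arithmetic – nothing
# here talks to the camera.
import math

def shutter_speed(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time in seconds for a given shutter angle and frame rate."""
    return (shutter_angle_deg / 360.0) / fps

t = shutter_speed(180, 24)
print(f"180 degrees at 24 fps -> 1/{round(1 / t)} s")    # 1/48 s

# ETTR in stop terms: if you expose, say, 1 stop brighter than "correct" to protect
# the shadows from noise, you pull it back down by the same amount in the grade.
overexposure_stops = math.log2((1 / 24) / (1 / 48))      # doubling exposure time = +1 stop
print(f"doubling exposure time = +{overexposure_stops:.0f} stop")
```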

And with tiny – I mean TINY! (This is the A110 24mm, and the 70mm is much larger, but still tiny.)

Below is a teaser reveal of my “The Creator” franken-rig, Super 16mm ghetto style, that the clip above was shot with (in the studio, on the BMPCC4K). Yes, of course I also couldn’t help digging myself into another hole, obsessively over-engineering my own camera rig to fit my needs (read: feed my compulsions)…

This franken-rig is both shoulder- and tripod-mountable. On the shoulder it helps stabilize an otherwise jittery setup, and on the tripod I can also remote-control the camera with tracking (iOS app), or by joystick or 6-axis’ing with the remote (MasterEye).

This rig is now so heavy it has already given me months of seriously unpleasant neck and shoulder pain. Back to the drawing board, I guess; I’m now thinking about adding a Proaim Flycam steadicam vest.

All of which amounts to an incredibly stupid amount of rigging for an old, HD-only, tiny 300 g camera.

Let me know if I should do a video breakdown on the complete rig build.

Since the last post, I’ve changed my gamma output from 2.4 to 2.2 (because everything I deliver is for online consumption; 2.4 is the old “TV” standard, while 2.2 is more in line with phones, tablets, and computer monitors – or so I’ve been told).
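
For the curious, here’s a tiny sketch of why that choice matters: the same encoded code value decodes to a different on-screen brightness under a 2.2 vs 2.4 display gamma (simple power-law gamma only – Resolve’s actual Rec.709 handling has more nuance):

```python
# Why the 2.2 vs 2.4 output gamma choice matters: the same encoded code value decodes
# to a different relative luminance depending on the display gamma. Pure power-law
# gamma for illustration only.

def decode(code_value: float, display_gamma: float) -> float:
    """Normalized code value (0..1) -> relative display luminance (0..1)."""
    return code_value ** display_gamma

for cv in (0.10, 0.25, 0.50, 0.75):
    y22 = decode(cv, 2.2)
    y24 = decode(cv, 2.4)
    print(f"code {cv:.2f}: gamma 2.2 -> {y22:.4f}, gamma 2.4 -> {y24:.4f}")

# Shadows and mids come out darker on a 2.4 display, which is why a grade mastered
# for 2.4 can look lifted/flat on a 2.2 computer monitor, and vice versa.
```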

I’m now also using a “Video Monitor Lookup Table” by Cullen Kelly called “macOS Viewing Transform v1.3“, ensuring that what I’m watching while grading is as good as identical to what gets delivered – good enough for non-pros, and still good enough for someone like me who has been working with pixels for 40+ years and can spot by eye when a pixel differs by 1 in any of its RGB values from its neighbours. (YMMV if you don’t have a P3 Apple display – mine is a P3-calibrated Dell 5K, which uses the same LG panel as the iMac 5K, afaik.)
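
If you want to sanity-check that “what I grade is what gets delivered” claim on your own material, a quick-and-dirty way is to diff a still grabbed while grading against the matching frame pulled from the export. A rough sketch – the file names are placeholders, and it assumes both stills are the same resolution and 8-bit:

```python
# Quick-and-dirty check that an exported frame matches the still grabbed while grading:
# load both as 8-bit RGB and report the largest per-channel difference.
# "grade_still.png" and "export_frame.png" are placeholder names for your own grabs.
import numpy as np
from PIL import Image

graded_still = np.asarray(Image.open("grade_still.png").convert("RGB"), dtype=np.int16)
exported_frame = np.asarray(Image.open("export_frame.png").convert("RGB"), dtype=np.int16)

diff = np.abs(graded_still - exported_frame)
print("max per-channel difference:", diff.max())                 # 0-1 is effectively identical
print("pixels differing by more than 1:", int((diff > 1).any(axis=-1).sum()))
```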

I also use an old Samsung SyncMaster calibrated to Rec.709 / Gamma 2.2 as the “CleanFeed” monitor, to compare against what I’m seeing in the main viewer in DaVinci Resolve. BTW, can someone explain-it-like-I’m-5 how the DaVinci setting of Rec.709 Gamma 2.2 gets parsed – what the pipeline looks like – when viewing the main viewer on a P3 display with the viewing transform LUT applied? To the best of my knowledge (and brute-forcing experience): if I want the exported video, viewed locally on a P3 or Mac/iPad display, to look like it did when I graded it on the calibrated P3 display, I have to export as Rec.709/Rec.709-A. And if I want it to look like what I saw on the calibrated Rec.709/Gamma 2.2 CleanFeed monitor when playing it locally on the same, I have to export it as Rec.709/Gamma 2.2. All of which kind of makes sense. Now, the real headaches – the real mind-fucks – start when you upload your videos to video content platforms like YouTube and Vimeo: they all have different ways of interpreting (or ignoring) your color space / gamma metadata when re-encoding – and none of them really document how to handle this predictably. #FML
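
One thing you can at least control is knowing what color metadata your export actually carries before a platform re-encodes it. A small sketch using ffprobe (part of FFmpeg, assumed to be installed and on your PATH) from Python – the queried fields are standard ffprobe stream entries:

```python
# Inspect what color metadata an exported file actually carries before uploading,
# so at least the tagging side of the equation is known. Requires ffprobe on PATH.
import json
import subprocess

def color_tags(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=color_primaries,color_transfer,color_space,color_range",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)["streams"][0]

print(color_tags("graded_export.mov"))   # placeholder file name
# e.g. {'color_range': 'tv', 'color_space': 'bt709', 'color_transfer': 'bt709', ...}
```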

I’m also using an iPad Pro with the “DaVinci Monitor” app when grading. Make sure the iPad is on the same WiFi as your Mac running DaVinci Resolve Studio – a stupid and annoying limitation. And don’t get me started on the incredible hassle of having to copy and paste the session access string between devices every single session when using the remote monitor… #JFC This should be as easy as a click of the mouse or a tap of the finger! I mean, it’s all on the same network – I’m an adult, I can handle the security issues, just give me the option to always allow when on the same network. If it’s good enough for Apple AirPlay, it’s good enough for me – and you, Blackmagic Design.

Primaries & Secondaries, My Clip-Level Nodes

Here’s my latest default clip-level node tree for primaries and secondaries – it works very well for me:

This node tree is almost verbatim copied from Cullen Kelly – and that’s because it’s an AWESOME framework that works very intuitively for me (too) – and disciplines me to keep things really simple.

Also of note: I’ve found these Curve LUTs (esp. “Film 2”) get the RAT (ratio) node 90% “right” (to my taste) out of the box, and I adjust the rest depending on the clip – they’re made for DWG / Intermediate and haven’t broken anything so far. (If you want to adjust the ratio manually, don’t forget to set the right pivot point for your color space in the RAT node: DWG/Intermediate = 0.336.)
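
For anyone wondering what that pivot actually does, here’s a toy sketch of a pivoted contrast/ratio adjustment – this is not Resolve’s actual math, it just shows the role the 0.336 value plays:

```python
# What the pivot in a contrast ("ratio") adjustment does: values sitting at the pivot
# stay put, everything else gets pushed away from (or pulled toward) it.
# Toy linear version only – not Resolve's actual contrast math.
import numpy as np

def pivoted_contrast(x: np.ndarray, contrast: float, pivot: float = 0.336) -> np.ndarray:
    """Simple linear contrast around a pivot, in the working (log) encoding."""
    return (x - pivot) * contrast + pivot

signal = np.array([0.10, 0.336, 0.60, 0.90])
print(pivoted_contrast(signal, contrast=1.2))
# 0.336 is unchanged; values below it go down, values above it go up.
```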

Not shown in my default node tree above: sometimes I add the Sat Shaper+ DCTL after the SAT HSV node – or instead of it – if I’m not completely satisfied with the saturation (I’m lazy), just to try out some more options. Its “vibrancy” setting has also sometimes helped me get a more pleasing color separation/spread in one simple operation.
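
The general idea behind a vibrance-style control, as opposed to a flat saturation gain, is roughly this – not the actual Sat Shaper+ math, just the shape of the idea:

```python
# A vibrance-style saturation boost: pixels that are already saturated get pushed less
# than muted ones, which tends to spread colors apart without blowing out what is
# already colorful. NOT the actual Sat Shaper+ math – just the general concept.
import colorsys

def vibrance(r: float, g: float, b: float, amount: float) -> tuple:
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Boost scales with (1 - s), so saturated pixels move less,
    # and with s, so neutrals stay neutral.
    s = min(1.0, s + amount * (1.0 - s) * s)
    return colorsys.hsv_to_rgb(h, s, v)

print(vibrance(0.60, 0.45, 0.40, amount=0.5))  # muted skin-ish tone: noticeable boost
print(vibrance(0.90, 0.10, 0.10, amount=0.5))  # already-saturated red: barely moves
```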

Sometimes I also use the TETRA+ DCTL if there are clips with some gnarly color issues that I’m just too incompetent to adjust otherwise.

I find myself more in the HDR wheels when adjusting exposure in the EXP node these days. I don’t know if that’s considered kosher by the “pros” or not, but using the HDR controls for exposure feels so much more intuitive and natural to me – so I don’t really care.

My LOOK Node Tree, Timeline-Level Nodes

And this is my latest default timeline-level node tree for the overall “LOOK”:

Remember, you always want to be grading “underneath” your LOOK, aka always have your look nodes on the timeline level active when you start grading your primaries on the clip level.

BTW, I don’t always have internal grain activated in the Halation DCTL, nor do I use the DaVinci Film Grain plugin that often, as I find the MCC usually creates all the organic grain I need.

The idea behind the CST IN/OUT sandwiches is to be able to mix in creative and Film Print Emulation (FPE) LUTs that were not made for the DaVinci Wide Gamut / Intermediate color space I work in. The node directly in front of the sandwiches does take LUTs made for DWG / Intermediate. I often add more creative or “negative” LUTs made for other color spaces (usually from Arri – who doesn’t love Arri?!) when I feel like it, in addition to my first go-to taste tool, Cullen Kelly’s beautiful Voyager Pro v2 “taste” LUTs (worth every penny!). Here I’m also using a Fuji 3510 Film Print Emulation (FPE) by Cullen Kelly (free download, available for both DWG and ACES) and Sony’s Technicolor Collection FPEs. For film density I’m using DRT&T’s Film_Density_OFX (sometimes I also use Density+). Dehancer is a great plugin for creating the film look, but it’s deactivated in this example – it can produce nice results, but I find myself using it less at the moment, as I’m still not very good at getting predictable and consistent results with it.
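
Conceptually, the sandwich is just “convert in, apply the LUT, convert back out”. A structural sketch with placeholder transforms – in Resolve this is the CST node and the LUT itself, not hand-written code, and the LogC3/AWG3 target is only an example of a space a LUT might expect:

```python
# The shape of a CST "sandwich": the working image lives in DaVinci Wide Gamut /
# Intermediate, but a creative or film-print-emulation LUT expects another space.
# So: convert in, apply the LUT, convert back out. The transforms below are identity
# placeholders – in Resolve the CST node does the real gamut/transfer-function work.

def cst(image, from_space: str, to_space: str):
    """Placeholder for a Color Space Transform (gamut + transfer function)."""
    return image  # identity stand-in

def apply_lut(image, lut_path: str):
    """Placeholder for applying a 3D LUT baked for a specific color space."""
    return image  # identity stand-in

def lut_sandwich(image_dwg, lut_path: str, lut_space: str = "ARRI LogC3 / AWG3"):
    pre = cst(image_dwg, from_space="DaVinci WG / Intermediate", to_space=lut_space)  # CST IN
    graded = apply_lut(pre, lut_path)                                                 # the LUT
    return cst(graded, from_space=lut_space, to_space="DaVinci WG / Intermediate")    # CST OUT
```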

BTW, is there a DCTL / OFX plugin that ONLY does the FPE “analogue range limiter” part of Dehancer? That would make me happy!

Also deactivated by default is the Cullen Kelly YouTube export LUT (I only activate it if delivering for YT; I normally use Vimeo for distribution). I’ve found Rec.709 / Rec.709-A provides the best results when publishing on Vimeo, aka it looks most true to what I saw when grading after Vimeo has chewed on it and spat out its recompression.

There’s also a lazy “Global” node for anything I need to add as the last step for all clips – e.g. cool or warm it up a bit, take exposure up or down to taste, etc. It’s a handy node for quick-and-dirty experimenting with new ideas after I’m satisfied with the general look, without touching the main nodes.

My approach to getting the look and feel I want is “less is better”, but anything goes (fuck around & find out!) – as long as it doesn’t break things (e.g. introduce unpleasant artifacting, banding, etc.).

As I was writing this, I realized I should update my timeline node tree to include the MONONODES Balance and Clip utility DCTLs. I also added the False Color plugin to check exposure. So now I have five additional “utilities” (all turned off) at the end of my timeline-level nodes: False Color (intuitively, this one should probably be applied earlier in the pipeline to get the “true” exposure, but so far it works for me here at the end too – so whatever), Balance, White Clip, Black Clip, and Sat Clip. Just by turning them on and off I can check exposure, skin balance, and potential clipping across all shots (clips) really fast (select “refresh all thumbnails”).
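
For what it’s worth, the white/black/sat clip checks boil down to flagging pixels sitting at (or suspiciously near) the ends of the output range. A quick numpy sketch over a grabbed still – the thresholds are my own guesses, not whatever values the MONONODES utilities actually use:

```python
# What White Clip / Black Clip / Sat Clip style checks boil down to: flag pixels at
# (or very near) the ends of the output range. Thresholds below are my own guesses.
import numpy as np
from PIL import Image

frame = np.asarray(Image.open("grade_still.png").convert("RGB"), dtype=np.float32) / 255.0

white_clipped = (frame >= 0.999).any(axis=-1)            # any channel pinned at white
black_clipped = (frame <= 0.001).all(axis=-1)            # all channels crushed to black
saturation = frame.max(axis=-1) - frame.min(axis=-1)     # crude saturation proxy
sat_clipped = saturation >= 0.98

for name, mask in [("white", white_clipped), ("black", black_clipped), ("sat", sat_clipped)]:
    print(f"{name}-clip flagged pixels: {100.0 * mask.mean():.2f}%")
```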

Alternative grade: removed some funny saturation-clipping business in the yellowish greens in the background with a quick-and-dirty Hue vs Sat curve.

Some more examples

Below you’ll find some more color grading examples where I’m going for the “super 16mm film” aesthetic, intentionally not of the modern “shot-with-something-in-the-Sony-FX-camera-family” variety. (Maybe I’ll share some of my more “modern” and “corporate-friendlier” color grades shot on the BMPCC4K camera and the Sigma 18-35mm f1.8 Art DC HSM lens in a future post – for now, you can get an idea of what that looks like from my previous post on my streaming studio.)

The Input/Output screenshots below are not ICC color-profiled, so your results may vary a bit:

Above, BMD Micro Cinema Camera, DNG (RAW) Film G1, DaVinci Resolve Studio, color management bypassed (this is how it actually looks before you start color grading!)

Color management on (slight bias to green from a K&F Concept ND filter, I suspect)

Primaries and secondaries graded under the timeline-level LOOK nodes (LOOK nodes deactivated here – notice the bias toward magenta when the LOOK nodes are turned off. I just left this view in for reference; it’s not something I watch much when grading, as the LOOK nodes are always on.)

And timeline-level LOOK nodes on (unpleasant magenta-bias gone – this is why you grade underneath your LOOK, aka with your LOOK nodes on)

BMD Micro Cinema Camera, Pentax A110 f2.8 70mm lens with ND.
BMD Micro Cinema Camera, Pentax A110 f2.8 50mm lens with ND.

BMD Micro Cinema Camera, Pentax A110 f2.8 50mm lens with ND.

Above, some more examples (upsampled to 4K – would you have noticed if I hadn’t told you?), all shot with various ND’ed Pentax A110 lenses on the MCC (the close-ups of the eye made with a +2 diopter attached).
