
On Color Grading

A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. To achieve that look, one of the things I had to learn from scratch was how to color grade.

BTW, do you need help with creating a great custom “look” for your film or video production, your camera, or your podcast or stream? Give me a ping, and let’s talk.

Here, I’m sharing a bit about my further adventures digging myself into the color grading hole with Blackmagic Design’s DaVinci Resolve Studio (the non-Studio version is a free download). It’s an incredible piece of software. (If you’re thinking about ditching Adobe Premiere – just do it! Go for it. I’ve never regretted it for a second.)

This is not a primer on color grading. I’m just writing up and sharing what works great for me. The following assumes you’re already familiar with some of the key concepts or at least have a faint interest in them. If not, this post will bore the living daylights out of you. However, if you wish to start (or continue) on a color grading learning journey with DaVinci Resolve, Cullen Kelly’s YouTube channel is a wonderful place for that.

What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a rediscovery of my passion for the cinematic image (I did indeed start out studying to become a film director, albeit dropping out after two years – studying it wasn’t for me).

And as a person most likely somewhere on the spectrum, of course I can’t stop digging until I’m getting somewhere interesting, somewhere where I can feel a sense of mastery and understanding of the full stack (lighting, lenses, camera, cinematography, sound design, microphones, color grading, post production) – i.e., being able to produce predictable outcomes, and making those outcomes look cinematic and pleasing (to me). It’s become sort of a new hobby of mine (in addition to spreading startup entrepreneurship education, of course). Still digging…

Read on below for this long non-startup post.

The quick & dirty setup for the above shoot:

  • Camera: A tiny (300 g, w 8.23cm × d 7cm × h 6.6cm), old (launched in 2012!), and cheap (I paid less than EUR 600,- for it used on eBay, including an 8Sinn cage, handle, and octopus expansion cable) digital super 16mm MFT dual-gain-sensor Blackmagic Design Micro Cinema Camera (MCC). ISO 800 (native), 5600K, shutter at 180 degrees, and 24 fps – obviously, exposed to the right (ETTR)
  • Lens: A tiny (this being the largest in the series, but still tiny compared to e.g. an EF lens), cheap (EUR 81,- on eBay, almost mint) vintage Pentax A110 (s16mm system) 70mm f2.8 fixed-aperture lens (in the this-lens-system-has-no-internal-iris! sense) on an MFT adapter, kitted with a 49mm metal lens hood that sports a 72mm ICE “IR/UV” filter (dirt cheap for the quality – and the MCC needs an IR filter if you’re shooting with any sunlight, unless you like pink and purple blacks) and a Lee 2-stop IRND ProGlass filter. Shooting into the sun (I don’t have powerful enough lights to fight the sun) coming in on the far side of my face (actually it was overcast and raining).
  • Lights: Key, Godox UL150 (silent, great value for money) with an Aputure Lantern modifier. Fill, Godox SL60 (not entirely silent, but OK – and very cheap for the color accuracy at the time it came out) with an Aputure Light Dome Mini II softbox & honeycomb/grid modifier. (And I cannot thank Rob Ellis enough for being a fantastic source of inspiration, teaching how to make the most out of simple lighting setups and how to manipulate color temperatures to achieve a truly “cinematic” look even if you have next to no budget!)
  • Image Acquisition: Blackmagic Design Film Generation 1 DNG RAW, not to be confused with BRAW.
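For reference, the 180-degree shutter setting above is just a ratio: the shutter is open for half of each frame interval. A quick sketch of the arithmetic (the function name is mine, just for illustration):

```python
# Shutter angle -> exposure time per frame (the classic rotary-shutter rule).
def shutter_seconds(fps: float, angle_degrees: float = 180.0) -> float:
    """Exposure time per frame for a given shutter angle."""
    return (angle_degrees / 360.0) / fps

# 180 degrees at 24 fps gives the classic 1/48 s exposure.
t = shutter_seconds(24, 180)
print(f"1/{round(1 / t)} s")  # -> 1/48 s
```

That 1/48 s exposure is a big part of why 24 fps footage with a 180-degree shutter carries the motion blur we read as “cinematic”.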

And with tiny – I mean TINY! (This is the A110 24mm, and the 70mm is much larger, but still tiny.)

Below is a teaser reveal of my “The Creator” franken-rig, super 16mm ghetto style, that the above clip was shot with (in the studio on the BMPCC4K). Yes, of course I also couldn’t help digging myself into another hole, obsessively over-engineering my own camera rig to fit my needs…

This franken-rig is both shoulder- and tripod-mountable. On the shoulder it helps stabilize an otherwise jittery setup, and on the tripod I can also remote-control the camera with tracking (iOS app), or by joystick or 6-axis’ing with the remote (MasterEye) – and it features redundant hot-swappable power through V-mount batteries & D-tap.

This rig is now so heavy it’s already given me months of seriously unpleasant pinched nerves in the neck and god-awful shoulder pains. Back to the drawing board, I guess; I’m now thinking about adding a Proaim Flycam steadicam vest. To a shoulder rig… I’m a lost cause. No, I don’t want to hear a word about the sunk cost fallacy at this point.

All of which amounts to an incredibly stupid amount of rigging for an old HD-only, tiny 300 g camera.

Let me know if I should do a video breakdown on the complete rig build & let’s geek out together.

Since the last post, I’ve changed my gamma output from 2.4 to 2.2 (because all I deliver for is online consumption, and 2.4 is the old “TV” standard while 2.2 is supposedly more in line with modern-day phones, tablets, and computer monitors – I’ve been told).
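To see why that choice matters, here’s a toy comparison assuming a pure power-law display (real displays and BT.1886 are more involved, so treat this strictly as an illustration): the same encoded midtone comes out darker on a gamma 2.4 display than on a 2.2 one, which is why the delivery gamma should match what the audience actually views on.

```python
# Same encoded signal, decoded at display gamma 2.2 vs 2.4.
# Assumes an idealized pure power-law display for illustration only.
def display_luminance(encoded: float, gamma: float) -> float:
    """Relative luminance (0..1) an idealized power-law display produces."""
    return encoded ** gamma

v = 0.5  # a normalized midtone code value
print(round(display_luminance(v, 2.2), 3))  # -> 0.218
print(round(display_luminance(v, 2.4), 3))  # -> 0.189 (noticeably darker midtone)
```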

I’m now also using a “Video Monitor Lookup Table” by Cullen Kelly called “macOS Viewing Transform v1.3”, ensuring that what I’m watching when grading is as good as identical to what gets delivered (good enough for non-pros like me – and still good enough for someone like me who has been working with pixels for 40+ years and can spot by eye if one pixel differs by 1 in any of the RGB values from its neighbours). YMMV if you don’t have a P3 Apple display – mine is a P3-calibrated Dell 5K, which uses the same LG panel as the iMac 5K, afaik.

I also use an old Samsung SyncMaster calibrated to rec709 / Gamma 2.2 as the “CleanFeed” to compare to what I’m seeing in the main view in DaVinci Resolve.

BTW, can someone explain-it-like-I’m-5 how the DaVinci setting of rec709 Gamma 2.2 gets parsed – what does the pipeline look like when viewing the main view on a P3 display, with the viewing transform LUT applied? To the best of my knowledge (and brute-forcing experience), if I want the exported video to look like it did when I graded it on the calibrated P3 display when viewed locally on a P3 or Mac/iPad display, I have to export as rec709/rec709-A – and if I want it to look like what I saw on the calibrated rec709/Gamma 2.2 CleanFeed monitor when playing it locally on the same, I have to export it as rec709/Gamma 2.2. All of which kind of makes sense. I guess? I HAVE SO MANY QUESTIONS! Like, why is this so hard to grok? Still?

Now, the real headaches – the real mind-fucks – start when you upload your videos to video content platforms like YouTube and Vimeo: they all have different ways of re-interpreting (or ignoring) your color space / gamma metadata when re-encoding – and they’re not all really sharing how you’re supposed to handle this predictably. In my brute-forcing experience with Vimeo (yes, I tried encoding and uploading using most non-obscure color space and gamma metadata settings and combinations thereof; yes, that took an unhealthy amount of time; and yes, the results were completely disappointing), TL;DR: exporting to rec709/rec709-A is the most accurate fit to what I saw when grading.

I’m also using an iPad Pro with the “DaVinci Monitor” app when grading. Just make sure the iPad is on the same WiFi as your Mac running DaVinci Resolve Studio – a stupid & annoying limitation. (The Mac I’m grading on is usually only connected to the Internet & NAS/RAID via LAN, WiFi completely turned off, so I had a minute or two of frustration before finding that out.) And once you get it up and running, don’t get me started on the incredibly stupid hassle of having to copy and paste the session access string between devices every session when using the remote monitor… #JFC This should be as easy as a click of the mouse or a tap of the finger! I mean, it’s all on the same network – I’m an adult, I can handle the security issues, just give me the option to always allow when on the same WiFi network. If it’s good enough for Apple AirPlay, it’s good enough for me – and should be for you, Blackmagic Design.

Primaries & Secondaries, My Clip-Level Nodes

Here’s my latest default clip-level node tree for primaries and secondaries – it works very well for me:

This node tree is almost verbatim copied from Cullen Kelly – and that’s because it’s an AWESOME framework that works very intuitively for me (too) – and disciplines me to keep things really simple.

Also of note, I’ve found these Curve LUTs (esp. “Film 2”) get the RAT (ratio) node 90% “right” (to my taste) out of the box, adjusting the rest depending on the clip – they’re made for DWG / Intermediate and haven’t broken anything so far. (If you want to adjust it manually, don’t forget to set the right pivot point for your color space in your RAT node: DWG/Intermediate = 0.336.)
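If you’re wondering what that pivot actually does: a contrast adjustment pivots around a fixed point, so values at the pivot stay put while everything else stretches away from (or squeezes toward) it. A minimal sketch, using the 0.336 DWG/Intermediate pivot mentioned above (the function name and contrast value are just for illustration):

```python
# Contrast around a pivot: the pivot value is unchanged, everything else
# stretches away from it (contrast > 1) or squeezes toward it (< 1).
# 0.336 is the mid-grey pivot cited for DaVinci Wide Gamut / Intermediate.
def pivoted_contrast(x: float, contrast: float, pivot: float = 0.336) -> float:
    return (x - pivot) * contrast + pivot

print(pivoted_contrast(0.336, 1.5))          # pivot itself stays at 0.336
print(round(pivoted_contrast(0.5, 1.5), 3))  # -> 0.582 (pushed brighter)
print(round(pivoted_contrast(0.2, 1.5), 3))  # -> 0.132 (pushed darker)
```

Set the wrong pivot for your working space and a contrast move doubles as an unwanted exposure shift – which is exactly why the 0.336 value matters in DWG/Intermediate.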

Not shown in my default node tree above: Sometimes I add the Sat Shaper+ DCTL after the SAT HSV node or instead of it if I’m not completely satisfied with the saturation (I’m lazy), just to try out some more options – also its “vibrancy” setting has sometimes helped me get more pleasing color separation / spread in one simple operation.

Sometimes I also use the TETRA+ DCTL if there are clips with some gnarly color issues that I’m just too incompetent to adjust otherwise.

I find myself in the HDR wheels more when adjusting exposure in the EXP node these days. I don’t know if that’s considered kosher by the “pros” or not, but using the HDR controls for exposure feels so much more intuitive and natural to me – so I don’t really care.

My LOOK Node Tree, Timeline-Level Nodes

And this is my latest default timeline-level node tree for the overall “LOOK”:

Remember, you always want to be grading “underneath” your LOOK, aka always have your look nodes on the timeline level active when you start grading your primaries on the clip level.

BTW, I don’t always have internal grain activated in the Halation DCTL nor do I use the DaVinci Film Grain plugin so often, as I find the MCC is usually creating all the organic grain I need.

The idea behind the CST IN/OUT sandwiches is to be able to mix in creative and Film Print Emulation (FPE) LUTs that were not made for the DaVinci Wide Gamut / Intermediate color space I work in. The node directly in front of the sandwiches does take LUTs made for DWG / Intermediate. My go-to look tool is Cullen Kelly’s beautiful Voyager Pro v2 “taste” LUTs (worth every single penny!), applied here in the four nodes on the third line after DENSITY. When I feel like it, I add more creative or “negative” LUTs made for other color spaces to the mix (usually from Arri – who doesn’t love themselves some Arri?!); here I’m also using a Fuji 3510 FPE by Cullen Kelly (free download, available for both DWG & ACES) and Sony’s Technicolor Collection FPEs. For film density I’m using DRT&T’s Film_Density_OFX (sometimes I also use Density+; I’m undecided whether one is better than the other). Dehancer is a great plugin for creating the photochemical film look, here deactivated in the look node tree – it can produce nice results, but I find I don’t use it much, as I’m still not very good at getting predictable results with it and waste too much time trying to brute-force the look.
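Conceptually, a CST sandwich is just function composition: convert the working space into whatever space the borrowed LUT expects, apply the LUT, then convert back. A toy sketch of that order of operations (the three transforms below are made-up placeholders, not real color science):

```python
# The CST IN/OUT "sandwich" as plain function composition. The transforms
# are toy placeholders -- the point is only the order of operations:
# working space -> LUT's expected space -> LUT -> back to working space.
def cst_in(x: float) -> float:      # e.g. DWG/Intermediate -> LogC (placeholder)
    return x * 0.9

def creative_lut(x: float) -> float:  # the borrowed look (placeholder)
    return min(1.0, x * 1.1)

def cst_out(x: float) -> float:     # back to DWG/Intermediate (inverse of cst_in)
    return x / 0.9

def sandwich(x: float) -> float:
    return cst_out(creative_lut(cst_in(x)))

print(round(sandwich(0.5), 3))  # -> 0.55
```

Because `cst_out` inverts `cst_in`, whatever the middle node does lands back in the working space unharmed – which is why you can keep grading in DWG/Intermediate on either side of the sandwich.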

UPDATE: Cullen Kelly just launched the fantastic “Contour” film look builder plugin that is now at the top of my wishlist. That is to say this is a pro level plugin with a (fair) price tag that I’ll only allow myself to buy if and when someone will actually pay me for grading, lookdev, shooting, etc.

BTW, is there a DCTL / OFX plugin that ONLY does the FPE “analogue range limiter” part of Dehancer? That would make me happy. At least for a couple of minutes. Bueller… Bueller… Anyone?

Also deactivated by default is the Cullen Kelly YouTube export LUT – I only activate it when delivering for YouTube; I normally use Vimeo for distribution. (Like I mentioned above, I’ve found rec709 / rec709-A provides the best results when publishing on Vimeo, aka looks most true to what I saw when grading after Vimeo has chewed on it and spat out its recompression.)

There’s also a lazy “Global” node to abuse for anything I need to add as the last step for all clips, e.g. cool or warm it up a bit, take exposure up or down to taste, etc. – a handy node for quick and dirty experimenting with new ideas after I feel satisfied with the general look without touching the main nodes.

My approach for getting the look and feel I want is “less is better”, but anything goes (fuck around & find out!) – as long as I like it and it doesn’t break things (e.g. unpleasant skintones, artifacting, banding, etc), it’s a keeper.

As I was writing this, I became aware that I should update my timeline node tree to include the MONONODES Balance and Clip utility DCTLs, plus the False Color plugin to check exposure. So I did: I now have five additional “utilities” (all turned off) at the end of my timeline-level nodes: False Color (intuitively, this one should probably be applied earlier in the pipeline to get the “true” exposure, but so far it works for me here at the end too – so whatever), Balance, White Clip, Black Clip, and Sat Clip. Just by turning them on and off I can check exposure, skin balance, and potential clipping across all shots (clips) really fast (select “refresh all thumbnails”).
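For intuition, here’s a rough sketch of the kind of check those White Clip / Black Clip / Sat Clip style utilities perform: flag pixels at or beyond a threshold. The thresholds and the crude saturation measure below are illustrative assumptions of mine, not the actual math inside the MONONODES DCTLs:

```python
# Illustrative clip checks: count pixels that are crushed to black,
# blown to white, or at maximum saturation. Thresholds and the
# saturation measure are assumptions, not the real DCTLs' math.
def clip_report(pixels, black=0.0, white=1.0):
    report = {"black_clip": 0, "white_clip": 0, "sat_clip": 0}
    for r, g, b in pixels:
        if max(r, g, b) <= black:          # all channels crushed
            report["black_clip"] += 1
        if max(r, g, b) >= white:          # at least one channel blown
            report["white_clip"] += 1
        mx, mn = max(r, g, b), min(r, g, b)
        if mx > 0 and (mx - mn) / mx >= 0.99:  # crude chroma-range saturation
            report["sat_clip"] += 1
    return report

pixels = [(0.5, 0.5, 0.5), (1.0, 0.2, 0.2), (0.0, 0.0, 0.0), (0.8, 0.0, 0.0)]
print(clip_report(pixels))  # -> {'black_clip': 1, 'white_clip': 1, 'sat_clip': 1}
```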

Alternative grade above: removed some funny saturation-artifacting business in the yellowish greens (introduced by pushing the density of green too hard) with a quick & dirty Hue vs Sat curve (anchored red and green, pulled yellow down).
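That “anchor red and green, pull yellow down” move can be sketched as a curve over hue that returns a saturation multiplier. Resolve’s actual curves are smooth splines; the linear interpolation and the anchor values below are made up just to show the shape of the operation:

```python
# A Hue vs Sat curve sketched as piecewise-linear interpolation over hue
# (degrees). Anchors at red (0) and green (120) hold saturation at 1.0;
# yellow (60) is pulled down. Values are illustrative, not my real grade.
def hue_vs_sat(hue_deg, anchors=((0, 1.0), (60, 0.7), (120, 1.0))):
    """Return a saturation multiplier for a given hue."""
    if hue_deg <= anchors[0][0] or hue_deg >= anchors[-1][0]:
        return 1.0  # outside the anchored span: leave saturation alone
    for (h0, m0), (h1, m1) in zip(anchors, anchors[1:]):
        if h0 <= hue_deg <= h1:
            t = (hue_deg - h0) / (h1 - h0)
            return m0 + t * (m1 - m0)

print(hue_vs_sat(60))   # yellow pulled down -> 0.7
print(hue_vs_sat(0))    # red anchored -> 1.0
print(hue_vs_sat(30))   # halfway red-to-yellow -> 0.85
```

The anchors are the whole trick: without them, desaturating yellow would bleed into the reds and greens you wanted to leave alone.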

And here’s a more extreme grade below that I made after discovering the highly interesting 2499 DRT DCTLs from Juan Pablo Zambrano. This was my first try screwing around with them (I’m looking forward to playing around with the 2499 tools much more):

Now with a quick and dirty 2499 test – does it look more “cinematic” to you? This split tone thing adds a little something to the highlights, IMO. Oh, and I applied some experimental MTF simulation, zero noise reduction applied.

Some more examples

Below you’ll find some more color grading examples where I’m going for the “super 16mm film” aesthetic – intentionally not of the modern slick “shot-with-something-in-the-Sony-FX-or-Alpha-camera-family” variety. Maybe I’ll share some of my more “modern” and “corporate-friendlier” color grades shot on the BMPCC4K camera and the Sigma 18-35mm f1.8 Art DC HSM lens in a future post – for now, you can infer what that looks like from my previous post on my live streaming studio and this screenshot:

The more “modern” look, a screenshot from my live streaming studio (BMD PCC4K, Sigma Art 18-35mm f1.8, Viltrox EOS EF M2 SpeedBooster, custom “look” LUT by me from scratch, here shown with noise reduction added to simulate the recompressed – smeared by Zoom – streamed output. The input signal is MUCH sharper). The light tubes with barn doors in the background are real – I’m not using virtual backgrounds or green screens.

BTW, this is what the camera signal looks like before applying my custom studio grade and look:

My studio feed without the in-camera look LUT applied.
Before & After color grading footage. This particular look node tree includes an official Sony Technicolor LUT. They also have many more free look LUTs for you to download.

Above, BMD Micro Cinema Camera, DNG (RAW) Film G1, DaVinci Resolve Studio, color management bypassed (this is how it actually looks before you start color grading!)

Color management on (slight bias to green from the K&F Concept ND filter I used on this shot, I suspect – today I only use Lee Filters IRND ProGlass, as it saves me work in post; they deliver as-good-as-perfect color results for an ND. They are expensive, but you might, like me so far, get away with the 100mm x 100mm versions, depending on your camera and lens setup – the smaller squares are less than 1/3 the price of the larger “cine” rectangular ones.)

Primaries and secondaries graded under the timeline-level LOOK nodes (LOOK nodes deactivated – notice the bias towards magenta when the LOOK nodes are turned off. I just left this view in for reference; it’s not something I often watch when grading, as the LOOK nodes are mostly on.)

And timeline-level LOOK nodes on (unpleasant magenta-bias gone – this is why you grade “underneath” your LOOK, aka with your LOOK nodes on). Also of note, there are some gnarly distortions in the saturation of the greens here that in hindsight I should have corrected for.

BMD Micro Cinema Camera, Pentax A110 f2.8 70mm lens with ND. Just a quick natural light shoot to test a filter and lens combo and create some test footage to practice grading and looks on.
BMD Micro Cinema Camera, Pentax A110 f2.8 50mm lens with ND. The first shots I graded where I sort of knew what I was doing (thanks to Cullen Kelly & Darren Mostyn) AND was pleased with the “cine” look achieved. Lighting anything but natural: Key + Kicker/Backlight + Fill + Hair light.
Let’s take it over the top; You go! No, you go! CKC Look LUTs, CKC Fuji FPE LUT – AND DEHANCER! Push it! Push it real good! BMD Micro Cinema Camera, Pentax A110 f2.8 50mm lens with ND.

Arri color science has entered the chat. Experiment with official Arri LogC4 look LUTs applied to BMD Film Gen1 footage using Color Space Transform DCTLs to go in and out of LogC4.
An alternative grade / look made exclusively with the 2499 DRT DCTLs (plus Film Look Creators native Halation and Bloom on).

Above, some more examples (actually – accidentally, because of how Apple’s pixel-density thingamajig works when taking screenshots – upscaled to 4K and 5K, although the MCC “only” shoots 1080p. Would you have noticed it was upscaled and not native if I hadn’t told you?). Most shots were made with various ND’ed (Kood) Pentax A110 lenses on the MCC if not stated otherwise. The close-ups of the eye were achieved by using a +2 diopter (Kood) attached.
