Cine, Lessons Learned

On my Color Grading in 2024

Second grade / look pass

A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. To achieve that look, one of the things I had to learn from scratch – in addition to operating a digital cinema camera and using lighting properly – was how to color grade.

For reference: Non-graded image without look and effects applied, DNG RAW (BMD Film Gen 1) to rec709 / Gamma 2.2, exposed to the right (ETTR).

Below, after some grading, applying a ton of stuff for the look, and even throwing in some effects to emulate anamorphic edge distortions and a fake film gate crop for good measure:

BTW, do you need help creating a great custom “look” for your film or video production, your camera, or your podcast or stream? Give me a ping, and let’s talk. I wasted a silly amount of time and money making all kinds of mistakes starting out, so I’m happy to help you avoid that.

In this post, I’m sharing a bit about my further digging-myself-into-a-hole adventures in color grading with Blackmagic Design’s DaVinci Resolve Studio (the non-studio version is a free download). It’s an incredible piece of software, by the way. If you’re thinking about ditching Adobe Premiere – just do it! It’s a joy to work with and I’ve never regretted it for a second.

This is not a primer on color grading. It’s just me writing up and sharing what I’ve learned that works best for me so far. If you too wish to start (or continue) on a color grading learning journey with DaVinci Resolve, Cullen Kelly’s YouTube channel is probably the best place for that.

The following assumes you’re already familiar with some of the concepts of color grading – or at least have a faint interest in how to create a cinematic image with digital tools. If not, fair warning: this post will bore the living daylights out of you.

What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a path of rediscovery, reigniting my passion for the cinematic image. Fun fact: you might not know that I actually started out studying cinema with the goal of becoming a film director – but I dropped out after only two years, as university and studying film wasn’t really my thing – and then the commercial Internet happened, and the rest is history.

As a person most likely somewhere on a spectrum of some kind, of course I can’t, won’t, and don’t stop digging until I get somewhere interesting – somewhere I can feel a sense of mastery and understanding of the full stack (in this case: lighting, lenses, physics, camera sensors, cinematography, color grading, look development – everything that goes into the sausage factory of a nice digital “cine” image). In other words: being able to produce predictable outcomes, and to make those outcomes look cinematic and pleasing – to me. It’s become sort of a new time-sink obsession hobby of mine (in addition to helping other startup founders go farther faster, of course).

And I’m still digging…

Read on below for this long non-startup (but hey – still full of tech & geekery) post.

A lot going on under the hood here.

The Setup (for the footage above)

Camera: A tiny (300g, 8.23cm wide x 7cm deep x 6.6cm high), old (launched in 2012!), and cheap (I paid less than EUR 600,- for it used on eBay, including an 8Sinn cage, handle, and the octopus expansion cable) digital super 16mm MFT dual-gain-sensor Blackmagic Design Micro Cinema Camera (MCC), set at ISO 800 (native?), 5,600K, shutter at 180 degrees and 24 fps – and obviously exposed to the right (ETTR).

The tiny BMD Micro Cinema Camera
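
Since shutter angle trips a lot of people up: it’s just the fraction of each frame’s duration during which the shutter is “open”. The standard conversion, as a tiny sketch:

```python
# Shutter angle -> exposure time: the shutter is open for
# (angle / 360) of each frame's duration.
fps, angle = 24, 180
exposure_s = (angle / 360) / fps
print(exposure_s)  # ~0.0208 s, i.e. the classic 1/48 s
```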

One Little Remote v4.x – This device is the single reason why I’m still using the display-less, incredibly-cumbersome-to-change-settings-on Micro Cinema Camera: I managed to score one of the very last of these directly from the inventor, Phil Lemon, himself. It’s an INSANELY helpful device if you own and operate an MCC! Thank you, Phil! It’s a damned shame they’re not available anymore. (Wouldn’t it be cool to have a USB-C / BT remote like this for the Micro Studio 4K G2 too?!) Image credit: Amazon Australia

And by tiny – I mean TINY! (This is the A110 24mm; the 70mm is much larger, but still tiny.)

Lens: A tiny (this being the largest in the series, but still tiny compared to e.g. an EF lens), cheap (EUR 81,- on eBay, mint) vintage Pentax A110 (s16mm system) 70mm f2.8 fixed-aperture (in the zomg-this-lens-system-has-no-internal-iris! sense) lens on an MFT adapter.

Filters, etc.: The lens is kitted with a 49mm metal lens hood that sports a 72mm ICE “IR/UV” filter (dirt cheap, fair quality), as the MCC needs a lot of IR filtering if you’re shooting with any sunlight in the frame and don’t love pink and purple blacks. On top of that: a Lee Filters 2-stop ProGlass IRND 100mm x 100mm filter (the best ND filter I’ve ever used, and the IR part also helps a great deal on this Micro Cinema Camera, which has a very basic native IR filter) on a cheap-ass Zomei 100mm filter holder, so I could shoot into the sun (classic backlit, since I don’t own lights powerful enough to fight the sun) coming in on the far side of my face (actually it was overcast and raining).
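
As an aside, since ND naming confused me for a while: “ND64”-style numbers are light-reduction factors, while “0.7”-style numbers are optical densities. Both convert to stops with standard math – a quick decoder, for reference:

```python
import math

# ND naming decoder: factor-style (ND64) vs density-style (ND 0.7).
def stops_from_factor(factor):    # e.g. ND64 cuts light to 1/64
    return math.log2(factor)

def stops_from_density(density):  # e.g. ND 0.7 optical density
    return density * math.log2(10)

print(stops_from_factor(64))     # 6.0 stops
print(stops_from_density(0.7))   # ~2.3 stops
```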

<rant>

I don’t use variable NDs, as every single “variable” ND I’ve tested so far produces unpredictable results, aka effs up the colors. To be fair, I have yet to test any “variable ND” above 200 bucks, so this rant is to be taken with a sizable helping of salt – but please do tell me if you know of a variable ND filter that delivers results as good as the Lee Filters ProGlass IRND. Until then, I’ll take the inconvenience and inflexibility of fixed and stacked ND filters over “variable” any effing day.

</rant>

Lights: Key, Godox UL150 (silent, great value for money when it came out) with an Aputure Lantern modifier. Fill, Godox SL60 (not entirely silent, old-ish but OK – one of the best budget options for color accuracy at the time it came out) with an Aputure Light Dome Mini II softbox & honeycomb / grid modifier.

Image Acquisition: Blackmagic Design Film Generation 1 DNG RAW (not to be confused with BRAW).

Mental note to self: Never forget to turn off all other light fixtures, especially LED sources (halogen & tungsten are usually OK – yay, physics). More than once I’ve caught myself forgetting to turn off existing interior lights, only to discover it after the fact when grading the footage. It usually manifests as disgusting skin tones that are nearly impossible to balance to a satisfyingly natural place.

A big thanks to Rob Ellis for being a fantastic source of inspiration, teaching how to make the most of simple lighting setups and how to manipulate color temperatures to achieve the “cinematic” look even on next to no budget.

Also a huge thanks to Patrick “WanderingDP” O’Sullivan for teaching n00bs like me “the framework” and how to achieve non-flat, cinematic-looking images consistently (and for keeping me giggle-snort-level entertained). Hint: Shoot into the L (the angle) of the room, shoot backlit, get your ratios right, etc.

TL;DR: if your image looks “flat” or just doesn’t look right – not very cinematic at all – it’s probably because you’re not shooting backlit, your lighting ratios (key to fill, key to background, etc.) are out of whack, you’re shooting dead-on instead of at an angle, you don’t have enough “layers”, and you don’t have enough “salt & pepper” in the shot. #sheersandblinds

Rigging Deeper

And of course I also couldn’t help digging myself into another hole, obsessively over-engineering my own camera rig to feed my compulsions (I mean, fit my needs)…

Below is a teaser reveal of my “The Creator” franken-rig, super 16mm ghetto style, that the above clip was shot with (the shot of the rig was done in the studio with the BMPCC4K):

Update: In the video above, you can see the franken-rig was jiggling a bit when rotating on the tripod. I later fixed this by adding a longer Manfrotto base plate that ties more of the rig parts together, making it more structurally sound. (See the Kit.co page for details.)

I think it looks more like a jury-rigged weapon out of “Wolfenstein II – The New Colossus” than a camera rig.

My franken-rig is shoulder- and tripod / crane-mountable. On the shoulder, it helps stabilise an otherwise micro-jittery setup; on the tripod, I can also remote control the camera with tracking (iOS app), or by joystick or 6-axis’ing with a remote (the MasterEye or the Remote Handle) – and it features redundant hot-swappable power through V-mount batteries & D-tap.

The rig is so obnoxiously heavy that it has given me months of seriously unpleasant pinched nerves in the neck and god-awful shoulder pains. Back to the drawing board. I’m now thinking about adding a cheap-ish Proaim Flycam steadicam vest. To a shoulder rig… Yes, I’m a lost cause. No, I don’t want to hear a stinking word about the sunk cost fallacy at this point.

All of which amounts to an incredibly stupid amount of rigging for a tiny old 300g camera that “only” shoots 1080p – but I am in love with the images that come out of that tiny camera, once you get rid of the micro-jitters, light it, and grade it properly.

Why not a DJI gimbal and a standard Tilta light shoulder mount, I get asked a lot. The sunk cost fallacy and pride (I mean, price and availability) is my answer: I already had a Zhiyun Weebill S lying around, this gimbal setup cost about 1/3 of acquiring an equivalent new setup from DJI, and I also had many of the rigging parts that went into it lying around too. And of course, using an off-the-shelf rig would deprive me of the fun of building one myself – at least once (the next shoulder rig I’d probably buy stock, though).

Maybe I’ll geek out further and do a breakdown on the complete franken-rig build in a future post.

My Grading Today

Since the last post, I’ve changed my gamma output from 2.4 to 2.2, because everything I deliver so far is for online consumption, and Gamma 2.2 should be more in line with modern phones, tablets, and computer monitors – good ole’ Gamma 2.4 is more in line with cathode ray tube TVs, I guess.
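
To make that choice concrete, here’s a minimal sketch of why the 2.2 vs 2.4 mismatch matters – illustrative only, using plain power functions and ignoring the piecewise encodings real standards use:

```python
# Same mid-gray, two encoding gammas: the code values differ, and a
# mismatch between encode and display gamma shifts the perceived tones.
linear = 0.18  # scene-ish mid gray on a 0..1 scale

code_24 = linear ** (1 / 2.4)
code_22 = linear ** (1 / 2.2)
print(f"encoded for gamma 2.4: {code_24:.3f}")  # ~0.489
print(f"encoded for gamma 2.2: {code_22:.3f}")  # ~0.459

# A 2.2-encoded master decoded by a 2.4 display comes out darker:
print(f"2.2 master on a 2.4 display: {code_22 ** 2.4:.3f}")  # ~0.154 vs 0.18
```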

I’m still working “color managed” in DaVinci Wide Gamut / Intermediate, mostly not manually in a CST sandwich, as my shots are almost all shot by myself in BMD DNG or BRAW. Update: I’ve since switched from letting DaVinci Resolve manage color to managing it myself with a CST sandwich. I know this is a bit pedantic – in fact very unnecessary, if we’re being honest – as I’m currently only doing my own stuff, only shooting Blackmagic DNG and BRAW, but hey: I now get to pretend I’m color grading like the pros. #lulz

And why am I not working in, say, ACES? I could work in ACES – sure, I guess. There would be no huge difference in my workflow, but as I’m currently only shooting projects in native Blackmagic formats, by myself, for myself, Blackmagic DWG/Intermediate is where I’m at.

Here are my DaVinci Resolve color management settings:

Update: As I’m now doing manual color management using a Color Space Transform sandwich, “DaVinci YRGB” sans “Color Managed” is now my color science setting.

I’m now also using a “Video Monitor Lookup Table” by Cullen Kelly called “macOS Viewing Transform v1.3“, pushing what I see in the DaVinci Resolve GUI when grading closer to what is actually delivered. (Good enough for non-pros like me – and still good enough for someone who has worked with pixels for 40+ years and can spot by eye if one pixel differs by 1 in any of its RGB values from its neighbours.) YMMV with this viewing transform LUT if you don’t have a P3 Apple display; my main display is a P3-calibrated Dell 5K, which afaik uses the same LG panel as the iMac 5K.

I also use an old Samsung SyncMaster calibrated to rec709 / Gamma 2.2 as the “CleanFeed” to compare against what I’m seeing in the main view in DaVinci Resolve – representing your “average” LCD monitor out there.

Update: I’ve also started using a BMD UltraStudio Monitor 3G, piped via SDI through a Blackmagic Design MiniConverter 6G SDI to HDMI LUT box (I had several of these lying around already, for applying LUTs when live streaming with cameras that can’t apply LUTs in-camera), connected via HDMI to a 100% rec709 Asus ProArt FHD monitor – complete with an LX1 bias lighting strip attached around the back of the frame. That monitor is in turn calibrated using an X-Rite i1 with DisplayCAL and Argyll to create a monitor LUT that is loaded onto the 6G LUT box. This setup now serves as my “source of truth” monitoring solution until perhaps one day I get a “real” reference monitor (probably from Flanders or osee – the latter could also enticingly double as a neat field monitor) – but someone has to pay me before I jump to that next level.

I might even get Nobe OmniScope Pro one of these days and run the signal from the Monitor 3G (if it can do dual SDI and HDMI output at the same time – if not, via the SDI loop out on the 6G LUT box) to an older Mac mini running OmniScope, via SDI into the Blackmagic Design UltraStudio Mini Recorder Thunderbolt (the one this whole streaming “cine” journey started with), and pipe the OmniScope screen back to my DaVinci Resolve Mac using Apple’s Remote Desktop (should be fast enough on my LAN) or, worst case, by HDMI splitter to a dedicated monitor. At least that’s the idea – I have yet to test whether this theoretical wiring will actually work as intended, and I have no idea what I’m doing.

<RANT>

Having worked as a digital “pro” with color-accurate pixels for 30+ years (without a dang “reference” monitor), I do not subscribe to the level of “reference” monitor hype that a lot of colorist “pros” tend to spew at you when you ask them the simplest of questions. On my grading journey to date, I’ve found that a lot of them peddle reference monitors as the single answer to all your color grading woes – which is demonstrably not true – and I’ve come to find this a useful heuristic for instantly identifying idiots.

I think there are several reasons – all bad – why so-called “pros” do this.

One: they are intimidated by any and all n00bs, because they see us as potential zero-sum competition, and they use expensive solutions as their instant moat or flex of choice.

Another reason, I suspect, is that a lot of the real geezers in the biz are from the ancient analogue broadcast world and haven’t really gotten the memo on how digital works, how you can compensate for not having a $2K+ “reference” monitor setup, and how to get to “good enough” in the digital paradigm.

A third reason, I think, is that many may actually not know any better – just because you get paid to do what you do doesn’t mean you know much about how things work under the hood. I’ve met many “pros” in different kinds of businesses who are only going through the motions, never knowing (or caring to know) what’s going on underneath the surface of things – but still getting paid, often very well.

It reminds me a lot of my experience with the AVID fanatics coming to digital and NLE from an analogue broadcast and Betacam world, introducing legacy jargon to digital that still makes zero sense today. It also reminds me of the advertising industry “pro” people, like ADs and copywriters, I had the misfortune to encounter in the late 80s to early 90s, who had just started to use Macs and insisted that their monitors had 1,200 DPI resolution just because there was a system setting somewhere where you could select 1,200 DPI – never mind that DPI is a printing resolution (and not the same as PPI), or that the absolute max resolution on a high-end Mac CRT at the time was more like 72 pixels per inch. (Coming from a bigoted “pro” Amiga background myself, no wonder we Amiga people always thought Mac and PC people were a bunch of complete luddite buffoons. #smileyface)

Discussions where one side totally lacks even the most basic understanding of how things actually work (and doesn’t really care to know) – but is high on its own (legacy) entitlement – never end well.

On the other hand – of course it’s very nice and very helpful to have a real “reference” monitor, but I know you do not need one to get great and consistent results in the digital age when starting out (if you sort of know what you’re doing). If you’re at the point of delivering for Netflix or consideration by the Academy, you’re already using one, though.

Also, I think the same is true about the need for a control surface: it’s nice to work with one, but you don’t need one to create great results. Besides, Lift / Gamma / Gain trackballs are more of a residual analog relic than a modern color grading interface optimized for digital & RAW. It’s fabulous that they’re getting cheaper and more accessible each year, but it’s an interface still in need of a rethink for the digital domain, in my highly biased opinion. The three-trackball paradigm just feels dated: today we’re tweaking exposure in HDR and doing color balancing in linear Gain – not juggling all of Lift / Gamma / Gain up and down, back and forth, to compensate for and counter each and every move. Today, in the digital domain, I see no sense in fighting ourselves like that to get to a proper color grade.

</RANT>

FWIW, all of the examples on this page were graded before I bought a dedicated monitoring setup of any kind – I’d been using generic Samsung SyncMaster LCD monitors as second and third displays for 14+ years, only calibrated to rec709 / Gamma 2.2 with the relatively cheap X-Rite i1 Display calibration doodad in recent years. My old FHD Samsung SyncMaster monitors “only” needed a +1 shift in blue from the calibrated-by-eye settings I’d used for a decade before that “actual” calibration – so there’s that for you. (Yes, I do know these monitors don’t cover the whole rec709 gamut – and that’s the point: they were “good enough” for me nonetheless.)

An iPad Pro with the “DaVinci Monitor” app provides me with a further reference when grading, just to see what it looks like on iDevices in real time. If you want to use it too, just make sure the iPad is on the same WiFi as your Mac running DaVinci Resolve Studio (update: see below) – a weird & annoying limitation. (The Mac I’m grading on is usually connected via LAN only, WiFi completely turned off – so I might have had a minute or two of frustration before finding that out.)

<RANT>

Here we go again: Once you do get the app up and running, don’t get me started on the incredibly stupid hassle of having to copy and paste a session access token string thing between devices when using the remote monitor – per f*ing session! This should be as easy as a click of the mouse or a tap of the finger! I mean, it’s all on the same network! I’m an adult, I can handle the security issues. Just give me the option to always allow when on the same WiFi network. (And what’s the point of a Blackmagic Cloud in the first place if it can’t connect across different networks? I did check my firewall, and there’s nothing technically blocking the connection.) If it’s good enough for Apple AirPlay, it’s good enough for me – and should be for you too, Blackmagic Design.

</RANT>

Update: You can work around this by checking “Use Remote Monitoring Without Blackmagic Cloud” in the main DaVinci Resolve Preferences AND changing the (hidden) settings in the iPad app via the general iPadOS Settings -> Apps -> DaVinci Monitor, switching to “Use without Blackmagic Cloud”. This way, instead of an access token per session, you enter the numeric IP address of the computer you’re grading on (one that is reachable from your iPad) – after you’ve started the remote monitoring session in Resolve. YMMV, but I’ve also noticed a slight performance gain from bypassing the Blackmagic Cloud.

And now for the kicker: Regardless of the monitoring setup, more fun new color and gamma antics usually await when uploading videos to the different content platforms like YouTube, Vimeo, Meta, TikTok, et al. They all have their own wicked little ways of re-interpreting your color space / gamma metadata when re-encoding – basically straight up pooping on your work and your choices – and not all of them publicly share what they expect to go in and what comes out of their sausage factories (or publish timely updates on how they handle it), making a predictable end result a trial-and-error ordeal.

In my brute-forcing experience with my platform of choice, Vimeo (yes, I tried encoding and uploading with most not-totally-obscure color space and gamma metadata settings and combinations thereof; yes, that took an idiotic amount of time; and yes, the results were not exciting), exporting to rec709/rec709A or rec709/Gamma 2.2 is the most accurate fit to what I see when grading. (Yes, I know rec709A is not a “real thing” and only spoofs Apple QuickTime into displaying something closer to what I see when grading on the Mac – but again: if it works, who cares if it’s “correct”? I don’t.)

I grade my content on a Mac and for digital displays only – YMMV.

Here’s rec709, rec709A on Vimeo:

And here’s rec709, Gamma 2.2 on Vimeo:

Is there a difference between uploading rec709A or Gamma 2.2 on Vimeo? You tell me.

My Clip-Level Nodes, Primary & Secondary 

Here’s my latest default clip-level node tree for primaries and secondaries – it works very well for me:

Default Clip Nodes V4

This node tree is copied almost verbatim from Cullen Kelly – because it’s an AWESOME framework that works very intuitively for me (too) – and it disciplines me to keep things really simple.

Top row = primary grade, second row = secondary grade (mostly power windows for finessing levels).

NR = Noise Reduction. This is the most CPU-intensive node, placed at the start because of how proxies are generated and cached, with “UltraNR” as the default. (I grade without proxies activated, to work on the full captured image, but I also do my own edits, where I do use proxies.)

EXP = Exposure, in the HDR palette. Often I’m not touching Lift, Gamma, or Offset at all (if the footage is somewhat properly exposed) – though I often still use a touch of Gain. Using the HDR controls for exposure feels more intuitive to me than effing around with the Primaries wheels. (I also occasionally use the HDR Global wheel for color balancing by adjusting Temperature – YOLO! Don’t tell anyone.)
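
Part of why exposure-in-stops feels so clean is the underlying arithmetic – a hedged sketch of the scene-linear idea, not Resolve’s actual internals:

```python
# In scene-linear, an exposure change of N stops is a plain multiply:
# +1 stop doubles the light, -1 stop halves it.
def expose(linear_rgb, stops):
    factor = 2.0 ** stops
    return [channel * factor for channel in linear_rgb]

print(expose([0.18, 0.18, 0.18], +1))  # [0.36, 0.36, 0.36]
print(expose([0.18, 0.18, 0.18], -1))  # [0.09, 0.09, 0.09]
```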

RAT = Ratio, or Contrast Curve. Lately, I’ve been using these ready-made curve LUTs (esp. the “Film 2” LUT) to get the RAT (ratio) node 90% “right” (to my taste) out of the box on most shots, adjusting the rest to taste depending on the clip. They’re made for DWG / Intermediate and don’t seem to break anything. Don’t forget to set the right pivot point for your color space in your ratio node (e.g. DWG/Intermediate = 0.336) if you want to push and pull it manually too.
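
For the curious, the pivot is just the point the contrast stretch hinges on. A minimal sketch of the usual pivot-contrast formula (my assumption of what the control does, not Resolve’s exact math):

```python
# Contrast around a pivot: values at the pivot stay put, everything
# else is stretched away from (or squeezed toward) it.
PIVOT = 0.336  # mid gray in DWG / Intermediate, per the text above

def contrast(value, amount, pivot=PIVOT):
    return (value - pivot) * amount + pivot

print(contrast(0.336, 1.2))  # 0.336 -- unchanged at the pivot
print(contrast(0.500, 1.2))  # ~0.533 -- pushed brighter
print(contrast(0.200, 1.2))  # ~0.173 -- pushed darker
```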

BAL = (Color) Balance. This node is set to use Linear gamma, and I’m using Gain to color-balance.
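
Why Gain in linear? Because balancing then boils down to per-channel multipliers – a small sketch of the idea, assuming you have a patch in frame that should be neutral:

```python
# Per-channel Gain in linear: choose multipliers so a known-neutral
# patch lands equal in R, G, and B (normalized to green here).
patch = [0.21, 0.18, 0.15]  # linear RGB of something that should be gray
gains = [patch[1] / channel for channel in patch]
balanced = [channel * gain for channel, gain in zip(patch, gains)]

print(gains)     # [~0.857, 1.0, 1.2]
print(balanced)  # [0.18, 0.18, 0.18] -- neutral
```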

I also use the TETRA+ or Crosstalk DCTLs from time to time, especially on clips with subtly iffy color issues that I’m just too incompetent to adjust otherwise. I find myself using them especially for skin tones and skin “roll-off”. (I have no fundamental knowledge of what I’m doing when operating these – I just know I can sometimes achieve better results (to me) by using them.)

SAT HSV = Saturation in Hue Saturation Value mode (the node set to operate in an HSV color space, with only Channel 2 – the S for “Saturation” – activated).
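
The point of the channel restriction is that only S moves while hue and value stay put – a minimal Python sketch of the same operation:

```python
import colorsys

# Saturation-only adjustment: convert to HSV, scale S, convert back.
# Hue (H) and value (V) are untouched, which is what enabling only
# channel 2 on an HSV node buys you.
def adjust_saturation(rgb, factor):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(s * factor, 1.0), v)

print(adjust_saturation((0.6, 0.4, 0.3), 1.3))  # more saturated, same hue
```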

Update 1: SAT HSV is currently deactivated, and I’ve added a Color Slice node to do its bidding instead. Let’s see if this works out over time.

Sometimes I (still) add the Sat Shaper+ or the Saturator DCTL (I like Saturator’s “preserve highlights” option) instead of, or in addition to, the “SAT HSV” node on the clip level, when I’m not completely satisfied with the saturation – and sometimes also to modify the color density (yes, I’m lazy). Their “vibrancy” setting has sometimes helped me get more pleasing color separation or spread with one simple slider (not shown in my default node tree above).

Update 2: Now trying the new “Color Slice” to do the bidding of Sat Shaper or Saturator for me, but I keep reverting to the two DCTLs above and / or the old SAT HSV node.

Sharpen = Radius to taste. (This node is placed after the primary and secondary grades to avoid potential artifacting that could happen if you put it inside the primary or secondary grade, by way of how the parallel nodes get combined.) As I’m often using soft vintage lenses combined with softening MTF shenanigans, a bit of Radius sharpening can make the image pop just the right way on footage shot with e.g. Pentax A110 lenses.

Update 3: I’ve added a compound node with a hodgepodge of effects jury rigged to create more or less of the kind of vintage anamorphic lens distortions that I like. (Keep on reading below for the details.)

First pass, first grade / look
The actual clip level nodes used in the above footage (Lens Degrader deactivated, though).

MTF Curves

Did they tell you about “Modulation Transfer Function Curves“? I didn’t know anything about any of this! After stumbling into MTF – more or less by random chance – it’s like a whole new black hole secret world of additional geekery has been revealed! Why didn’t I know about all of this sooner? Is this a secret guild initiation ritual type of thing or something?

I accidentally discovered this was the missing piece of the puzzle for achieving what my brain recognises as a “cinematic” image – by random effing around and finding out with the native “Soften and Sharpen” OFX (S&S). I wanted to figure out what that OFX actually does to the image, motivated by the need for something that helps with noise reduction without taxing the CPU as much as heavy native NR. By chance, I found out that – WOW – I can often achieve that vintage lens-y, soft-yet-sharp cine-feelz by finessing the settings in this S&S thing!

Sometimes the S&S can be hard to tweak for just the right subtle effect, though. Searching further, I learned that the effect I’d achieved is related to the MTF curves of vintage “cine” glass, and I found a very helpful video on how to achieve MTF simulation; I added that too as a compound node (default off) to my default clip-level node tree:

And I find that sometimes using both a dash of the S&S OFX and the compound MTF simulation node in series can produce very pleasing (at least to me) results.

At times, adding a bit of Radius sharpening after S&S and MTF simulation can help sell the effect.

WARNING: It can also be very useful to apply some grain in the Film Look Creator before tweaking these MTF-like settings; sometimes the results of S&S plus MTF simulation alone look like a heap of meh until you slap some light film grain on top.
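
If you want a mental model for what these tools are doing, here’s my rough, hedged sketch of an MTF-curve-style adjustment – my guess at the general idea (band-splitting with Gaussian blurs), not what S&S actually does internally:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Split a luma plane into coarse / mid / fine frequency bands, then
# weight each band: soften the finest detail, gently boost the mid
# "structure" band that reads as sharpness -- a vintage-lens-ish MTF shape.
def mtf_tweak(img, fine_gain=0.6, mid_gain=1.15, sigma_fine=1.0, sigma_mid=4.0):
    no_fine = gaussian_filter(img, sigma_fine)  # fine detail removed
    coarse = gaussian_filter(img, sigma_mid)    # mid detail removed too
    fine = img - no_fine
    mid = no_fine - coarse
    return coarse + mid * mid_gain + fine * fine_gain

luma = np.random.rand(64, 64)  # stand-in for a real frame's luma plane
out = mtf_tweak(luma)
```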

(I’ve also played around with this MTF Curve DCTL, but so far I’m happier with the results from the two methods mentioned above. I need to play around with this DCTL some more, though.)

And I guess the MTF effects should go on the clip level, being related to the characteristics of each respective lens and to the high-frequency “noise” of each respective lens / shot, right?

Right? (Cue “Phantom Menace” meme).

The first footage (a stand-in thing, experimenting with lighting setups – so the focus and the camera are all over the place) that I shot and graded in late 2023, when I knew a bit more about what I was doing (thanks to Rob Ellis, Patrick O’Sullivan, Cullen Kelly, and Darren Mostyn) – and I was sort of pleased with the “cine”-like outcome. Also, I think the cine feelz were in no small part achieved because I accidentally stumbled into a kind of MTF simulation by applying the “Soften & Sharpen” FX for the first time (effing around and finding out FTW). Lighting anything but natural: key + kicker/backlight + fill + hair light. Godox lights, Aputure modifiers. BMD Micro Cinema Camera, Pentax A110 50mm f2.8 lens with Kood ND6 or Tiffen ND 0.7.
Here are my S&S settings that often deliver nice vintage MTF characteristics (on 1080 footage – adjust to taste depending on your sensor, lens, and resolution).

My Default Timeline-Level Nodes, My “LOOK” Node Tree

And this is the kitchen sink (I mean, my latest default timeline-level node tree) for the “LOOK”:

Default Timeline Nodes v9

PSA: As Cullen Kelly says, you always want to be grading “underneath” your LOOK, aka always have your look nodes on the timeline level active (on) when you start grading your primaries on the clip level. This is one of the things I wish I’d known sooner. It’s such a simple thing, but it changed my grading game from painful brute-forcing to predictable fun: I used to think (not knowing any better) that you had to grade your primaries first before adding your look – but that’s a recipe for headaches and unnecessary work, often delivering crap results to boot, compared to turning on your look sauce in your timeline nodes before you start grading your primaries.

Update: I use grain in the Film Look Creator (FLC). (How many places can you add film grain natively in DaVinci now? Three or four? Do you want grain to go with that grain on that grain?)

Update: I’m also using a node with the joo.works Halation DCTLs. It can work very well instead of the native Halation OFX, but I find joo.works’ Halation often breaks the illusion on clips with a lot of fine or high-frequency detail, e.g. close-ups with hair, eyebrows, beard stubble, etc. (regardless of trying to compensate for high-frequency detail with MTF simulation). I’m no expert, but I don’t think the real photochemical halation effect looks like that on high-frequency noise – it just looks very “off”, with an overly pronounced orangey-red halo on each hair. No bueno. That’s when I might revert to the native Halation FX (the Film Look Creator’s halation seems very limited in how it affects the image, with few options to tune it to my liking), which can also help create a more vibrant skin glow – if wanted.

The idea behind the additional Color Space Transform (CST) IN/OUT sandwiches is to be able to mix in creative LUTs (would that be the “negative” equivalent of Film Print Emulation LUTs? I have questions…) and Film Print Emulation (FPE) LUTs that were not made for the DaVinci Wide Gamut / Intermediate color space I work in. (The node directly in front of the sandwiches does take LUTs for DWG / Intermediate, should I happen to use one made for my actual working space.)

Update: As of my Timeline Nodes v9, I’m not using additional CST sandwiches for LUTs and DCTLs intended for color spaces other than the one I’m working in (DWG/Intermediate). I’m just right-clicking the node in question and selecting the correct color space and gamma directly (when going from scene space to scene space, or to display space only for utility / checking purposes). This does hide complex and potentially image-breaking settings away a bit more – but it makes for a less messy and bloated node tree.

However, I am now using a CST sandwich for manual color management: a first node in the clip-level node tree that sets the footage color space and gamma (most often BMD Film Gen 1 / BMD Film), and a last node in the timeline-level node tree going from the scene / working color space (DWG/Intermediate) to the desired display space (usually rec709 / Gamma 2.2) – both not shown in the node tree screenshot above. This also means I’ve disabled automatic color management in the settings, as mentioned.

There are some really good (and free) look LUTs available from reputable sources – like directly from most camera makers themselves – just google it (and remember to use the correct settings – the color space and gamma it was intended for – either with a CST sandwich or by changing it with a right-click directly on the LUT node).

For setting up a CST sandwich that renders results as intended, you need to know when to apply “Forward OOTF” and “Inverse OOTF” – and when not to. These CST options had always been a mystery to me, until Cullen Kelly recently explained them.

To wit, for my future CST sandwich reference (also written down as code after the three rules below):

Forward OOTF = ON when going from a scene (or working) color space to a display color space, with tone mapping set to luminance mapping, “custom max input” checked and set to 10,000, and gamut mapping set to saturation compression.

Inverse OOTF = ON when going from a display (or screen) color space to a scene (working) color space.

Both OOTF = OFF when going from a scene space to another scene (working) color space (tone and gamut mapping off).
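
And the same three rules as a tiny lookup, so future me stops second-guessing them (this is just an encoding of the rules above; “scene” covers working spaces like DWG/Intermediate, “display” covers things like rec709 / Gamma 2.2):

```python
# The three OOTF rules from above, as a function of transform direction.
def ootf_settings(src_kind, dst_kind):
    if src_kind == "scene" and dst_kind == "display":
        return {"forward_ootf": True, "inverse_ootf": False}
    if src_kind == "display" and dst_kind == "scene":
        return {"forward_ootf": False, "inverse_ootf": True}
    return {"forward_ootf": False, "inverse_ootf": False}  # scene -> scene

print(ootf_settings("scene", "display"))  # output CST of the sandwich
print(ootf_settings("display", "scene"))  # input CST for a display-referred LUT
```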

In addition to my go-to look tool, Cullen Kelly’s beautiful Voyager Pro v2 “taste” LUT pack – worth every single penny – I often add additional creative or “negative” LUTs made for other color spaces to the mix (just a dash, using the “Key Input Gain” in the “Node Key” settings to limit the amount of influence) – like look LUTs from Arri (who doesn’t love Arri colors?). I also generously use the Key Input Gain in the Node Key to “walk back” the grade / look when I’m about to go overboard. More on that later.

And my go-to Film Print Emulation (FPE) is a Fuji 3510 by Cullen Kelly (free download, available both for DWG & ACES, fantastic quality) – in the pre-DSLR days, I always shot Sensia and Velvia on my “OG” Nikon FM, so I’m a big-bias Fuji fanboi here. I also sometimes go crazy with Sony’s Technicolor Collection.

For global (timeline-level) color density, I’m alternating between DRT&T’s Film_Density_OFX and Density+, as I’m still undecided which one I actually prefer.

Dehancer is another great plugin for creating a photochemical film look, but I keep it deactivated in the look node tree, as I find myself wasting too much time trying to brute-force a look with it. I’m still not very good at creating predictable results with it, and I haven’t really invested enough time to learn how to use it properly – yet.

Sidenote: Is there a DCTL / OFX that does ONLY the FPE “analogue range limiter” part of Dehancer? That would make me happy. At least for a little while. Bueller… Bueller… thatcherfreeman… Anyone?

Also deactivated by default is the Cullen Kelly YouTube export LUT, which I only turn on for YT delivery (duh). I mostly use Vimeo for distribution, and like I mentioned earlier, I’ve found rec709 / rec709A provides the best results when publishing on Vimeo – aka looks most true to what I saw when grading, after Vimeo has chewed on my upload and spat out its version of it. YMMV.

There’s also a lazy “Global” node to abuse for anything I need to apply as a last step across all clips – e.g. cool or warm things up a bit, take exposure up or down to taste, etc. It’s a handy node for quick-and-dirty experimenting with new ideas after I feel satisfied with the general look, without touching the main nodes. It’s also there to remind me to keep trying different things – even when I’m satisfied with the look.

My approach to getting the look and feel I want is “less is better”, but anything goes (eff around & find out is the best method yet). As long as I like it and it doesn’t break things (e.g. unpleasant skin tones, artifacting, banding, etc.), it’s a keeper – I ain’t too proud (or “pro”) to keep piling it on. (And you know what – nobody cares what goes on in the sausage factory as long as what comes out is tasty.)

My timeline look node tree also includes the MONONODES Balance and Clip utility DCTLs (so worth it – incredible productivity boosters, and they keep getting updated with great new features!) plus the native False Color plugin, as five, no, six “utility” nodes: “Zones” (aka a DIY EL Zone System – don’t tell EL), Balance, White Clip, Black Clip, and Sat Clip. (UPDATE: I’ve added MONO’s great new “Heatmap” for exposure checks in lieu of an “EL Zone System”, and also this “False Colors” DCTL.) By just turning them on and off, I can check exposure and ratios, skin balance, and unwanted clipping across all shots (clips) really fast: turn the utility node on, select “refresh all thumbnails”, go to the Lightbox – and BOOM! – you’re checking all clips in seconds. I might also add this “blanking checker” utility DCTL in the future if I’m working on projects with footage from different sources.

<rant>

To the person(s) who invented it: I hate “False Color” visualisations, as I have absolutely no connection whatsoever to the color gradients I’m seeing – especially with Blackmagic Design False Colors (green, pink, and 80%+ of the rest of the color scheme is shades of gray? IRE? WTF? Get out of here! You’ve got me bristling with ire, alright!). I have never understood how somebody could think this is a good and helpful visualisation for exposure. But what do I know. I thought I might be dumb, or just missing some important clue they only share with you once you’re in the secret guild – but it turns out smarter and much more experienced people than me also don’t like False Colors. Enter the EL Zone System – finally a “false colors” scheme that makes intuitive sense! I wish this scheme were implemented – either by contributing to EL’s patent coffers (I mean, retirement fund) or by implementing something similar (see the “Zones” compound node in my node tree) – as a native DaVinci Resolve FX AND a BMD Video Assist hardware monitoring feature, in addition to the not-so-useful “False Colors” option.

</rant>
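
For the curious, the core of an EL-Zone-style readout is trivial – luminance expressed as whole stops from 18% gray, one color per stop. A hedged sketch of that mapping (my DIY approximation, not EL’s actual implementation):

```python
import math

# Express linear luminance as stops above / below 18% mid gray --
# quantize to whole stops and you get one color bucket per stop.
def stops_from_mid_gray(linear_luminance):
    return math.log2(linear_luminance / 0.18)

for lum in (0.045, 0.09, 0.18, 0.36, 0.72):
    print(f"{lum:.3f} -> {stops_from_mid_gray(lum):+.0f} stops")
# 0.045 -> -2, 0.09 -> -1, 0.18 -> +0, 0.36 -> +1, 0.72 -> +2
```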

UPDATE 1: Cullen Kelly launched the fantastic “Contour” film look builder plugin, which is now at the top of my wishlist. That is to say, it’s a pro-level plugin with a (fair) price tag that I’ll only allow myself to buy if and when someone actually pays me for grading, lookdev, shooting – anything. For now, I’m playing around with the free (watermarked) demo version to get some mileage with it.

Look made entirely with Contour (demo version), no FPE LUT, plus my Lens Degrader power grade FX – I already love how easy it is to make skin look really good with Contour (this look took just 1-2 minutes of tweaking).

UPDATE 2: I’ve also added the new native DaVinci Resolve “Film Look Creator” (FLC) as a default node, with the “clean slate” setting, to my default timeline node tree.

UPDATE 3: I’ve also added the 2499 Custom Curves to the default timeline node tree to add a little something-something – OFF by default, to experiment with and evaluate. So far I feel a hint of this beats the Film Look Creator’s split-toning results. Felt cute, might delete later.

Update 5: I’ve changed how I use DaVinci Resolve’s native Halation effects. It’s still in the node tree, but deactivated – I’ve added a Joo.Works Halation power grade instead, which looks more like real photochemical film halation to me. (It was made for ACES but works just fine in DWG/Intermediate, with or without a Color Space Transform sandwich, and with or without changing the parameter in the power grade’s CST to DWG/Intermediate.)

And a heads-up: the S (as in small) power grade compound node might render wonky in DWG (I’ve notified Joo.Works about it). Just wire up the nodes in the S power grade instance like in the M and L versions, and you’re good to go.

And if I want to abuse halation to make skin subtly glow or irradiate (something I find the Joo.Works Halation doesn’t really do for me), I can activate the old native Halation FX and check if it works for that purpose. (On writing this, I realised the deactivated native Halation FX should probably go on the clip level instead, for this clip-centric purpose.)

I’ve also played around with this Halation DCTL, but so far I’ve had no luck generating the results I was looking for.

Below is another grade / look I made after discovering the interesting 2499 DRT DCTLs from Juan Pablo Zambrano. If you’re looking for instructions on how to apply them, I found this helpful post from the author over on liftgammagain (why this info isn’t included in the documentation is a mystery – or I’m too stupid to find it on GitHub). This was my first try screwing around with them, adding them to my look node mix. Looking forward to digging myself into another hole playing around with the 2499 DRT tools some more:

Above, a quick and dirty 2499 DRT test added to the look mix. Does it look more “cinematic” to you? This split-tone thing adds a little something-something to the highlights, I think. I like it. Could the new native Film Look Creator do the same? Maybe. I’ve also applied some experimental MTF emulation and switched to the new native ULTRA noise reduction with automatic analysis.

More Grading / Look Examples

Below you’ll find some more color grading examples where I’m going for a super 16mm or 35mm photochemical film aesthetic – not so much the more “modern”, shot-with-something-in-the-Sony-FX-or-Alpha-camera-family aka “influencer” (or “OnlyFans-pr0n-streamer”) look – because the vintage “cine” look is my kind of kink. (Although I am a big fan of the look of the movie “The Creator“.)

Also, an enormous thanks to Nikolas “Media Division” Moldenhauer for the most inspiring, highest-quality – and seriously geeked-out (in the best cinephile way possible) – content on them Interwebs. This is the level of production quality most “creators” can only aspire to – to me, it’s the benchmark. It’s amazing:

Here, some great vintage Canon FD S.S.C. 50mm f2.8 glass – one of my Canon K35 cine prime budget-“ersatz” lenses – on a Metabones 0.58x SpeedBooster, featuring digital MTF simulation, S&S, and my edge-distortion “Lens Degrader” power grade.
Adventures in anamor-fake (sans oval bokeh): MCC, great vintage Canon FD SSC 24mm glass – another Canon K35 cine prime budget-“ersatz” – on a Metabones 0.58x super16mm SpeedBooster. The Film Look Creator provides the film gate “letterboxing” crop and 65mm grain in post, my “Lens Degrader” power grade messes up the edges, and digital MTF simulation and S&S are added to compensate for the FD lenses’ sharpness. The goal here was to shoot reference material to compare my DIY real micro vintage anamorphic lens project against – more on that in a later post.
Input footage for reference. Lighting at 5,600K: Godox UL150 with an Aputure Lantern modifier, Godox SL60 with a gridded Aputure Light Dome Mini II. (You can see how they were positioned from the two catchlights in my eyes: the UL150 as backlight, trying to sell an illusion of natural light coming in from the sky; the SL60 as fill, bringing up the level on the shadow side of my face to a light-to-dark ratio that creates shape without crushing the blacks or blowing out the highlights.)
Above, an actual REAL DIY anamorphic 24mm vintage lens for s16mm / the MCC (here at beyond f22 & ISO 1600). All of this for around 50 bucks, weighing in at less than 150 grams? Is it even possible? I’m still working on getting it sharp enough for actual production – stay tuned.

Maybe I’ll share some of my more “modern” color grades shot on the BMPCC4K with a Sigma 18-35mm f1.8 Art DC HSM lens in a future post – for now, you can infer what that looks like from my previous post and this screenshot from my live streaming studio:

A more “modern” look; a screenshot from my live streaming studio (BMD PCC4K, Sigma Art 18-35mm f1.8, Viltrox EOS EF M2 II SpeedBooster, custom “look” LUT made by me from scratch – shown here with noise reduction added to simulate the streamed, recompressed, smeared-by-Zoom output; the input signal upstream is much sharper, to account for the degradation from the compression Zoom applies downstream). And the light tubes with barn doors in the background are real, btw – I’m not using virtual backgrounds or green screens (people always ask).

And this is what the signal looks like before applying my custom studio grade / look LUT in-camera:

My studio feed without the in-camera look LUT applied – also proof that the lights in the bg are real (I get asked a LOT). And yes, exposure is baked into my LUT for “legacy” reasons. (I thought I was being clever at the time. Turned out to be more stupid than clever. But hey – it works! I’ll probably make a new LUT with a balanced input if and when I change the look.)
Before & After color grading footage: This particular look includes an official Sony Technicolor LUT in a CST sandwich. Sony has many more free LUTs for you to download too.
Who needs vintage Cooke when you’ve got vintage Asahi A110? Lulz.

Above for reference: this is how it actually looks before color grading. (BMD Micro Cinema Camera, DNG RAW, BMD Film G1, Pentax A110 70mm f2.8, K&F Concept ND64 + Tiffen ND7, ICE IR Cut.)

And here’s how it looks and moves after color grading and applying a look. This is just a quick natural-light thing, shot to test a filter and lens combo – and to create some test footage to practice my grading and looks on. (BMD Micro Cinema Camera, Pentax A110 70mm f2.8 lens with K&F Concept ND64 + Lee Filters ND6, ICE IR Cut filter.)
My neighbours on a cloudy day, Franken-Rig shoulder mounted
I went back around a year later, in 2024, and regraded some of that footage where I’d accidentally stumbled into S&S and MTF simulation, basically just applying what I’d learned since. Here I also used Chromatic Adaptation instead of HDR Temp (camera 3,200K and light 5,600K temperatures were mismatched – n00b mistake), added some density to the skin, added more contrast with a curve LUT and a manual push, added the CKC Fuji FPE, replaced native Halation with Joo.Works Halation S, added my lens distortion effects thingy – and a partridge in a pear tree.
Another year, another grade.
Also added a bit of Film Look Creator and MTF emulation.
Another one.
And another one.
And MOAR!
Let’s take it over the top; You go! No, you go! CKC look LUTs, CKC Fuji FPE LUT – AND DEHANCER! Push it! Push it real good! BMD Micro Cinema Camera, Pentax A110 50mm f2.8 lens with Kood ND6. Also, pulling manual focus while operating the camera handheld with a fixed-iris f2.8 lens is a crapshoot.
Arri color science has entered the chat; experimenting with official Arri LogC4 look LUTs applied to BMD Film Gen 1 footage, using a Color Space Transform (CST) sandwich to go in and out of LogC4 within my own look node tree, which operates in DaVinci Wide Gamut / Intermediate.
An alternative B&W grade / look made exclusively with the 2499 DRT DCTLs (added the Film Look Creator’s native halation and bloom plus MTF simulation for subtle effects, also added the new native Ultra noise reduction).
Alternate version with anamorphic blur simulation, using the Chromatic Aberration Removal FX (sans Prism FX) to actually ADD chromatic distortions in the blurry edges, like a real vintage anamorphic – here without a fake FLC “ultra-wide” aspect crop. 4K upsampled. (This version still uses the 3rd-party plugins for saturation and density, which I’ve since replaced with the new native tool “Color Slice”; if you wish to compare, the version at the top of this post uses the new Color Slice instead of the 3rd-party DCTLs.)
A sample from the first longer-form content I ever shot with the BMD MCC and a Pentax A110 50mm lens – graded and shot-matched in DaVinci Resolve.
Some test footage I use to practice my grading on and to audition new stuff for my default look nodes. I keep coming back to this clip to see if I can make it more interesting – to see if I can level up with the new stuff I’ve recently learned. BMD MCC, Pentax A110 70mm f2.8, Tiffen ND6, Kood +2 diopter. 4K upsampled. Some sort of ambience or music added by AI. In this clip I’m using the native Halation FX.
UPDATE: Using an alternative Halation power grade from Joo.Works (made for ACES, although it works just fine in DWG/Intermediate, with or without an ACES CST sandwich). I like how Joo.Works’ halation looks more “real” than the native DaVinci Resolve halation in the FLC or the stand-alone Halation FX. Also added a bit of S&S MTF simulation, Film Look Creator (FLC), and 2499. Compare this halation effect to the clip above to decide which version you like better.

Simulating Lens Artifacts, enter my “Lens Degrader”

Now with 100% more digital lens degradation! (Another look, not using Contour; added the CKC Fuji FPE.)

Lately I was also inspired by Barrett Kaufman’s video to screw around with lens distortion simulation effects, and I added some chromatic edge distortion to the mix using the Chromatic Aberration Removal FX and the Prism Blur FX. If you want to tinker with this yourself, below is how I set up the nodes in a compound clip. There are probably better ways to do this – to wire the nodes up correctly and to achieve more optically correct chromatic distortions – I’m not claiming I know what I’m doing here – so be warned: doing this may severely break things and/or ruin your day. Tweak to taste:

Caveat Emptor: This wiring may break stuff.
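
If you’d rather reason about the trick in code first, here’s a hedged numpy sketch of the core idea – faking transverse chromatic aberration by scaling the red and blue channels slightly against green, so the misregistration grows toward the edges. This is my own approximation of the effect, not what the Resolve FX chain above literally does:

```python
import numpy as np
from scipy.ndimage import zoom

# Fake transverse chromatic aberration: scale R and B very slightly
# against G, so the channels misregister more and more toward the frame
# edges (zero shift at the center), then crop / pad back to size.
def fake_ca(rgb, r_scale=1.002, b_scale=0.998):
    h, w, _ = rgb.shape
    out = rgb.copy()
    for ch, s in ((0, r_scale), (2, b_scale)):
        scaled = zoom(rgb[..., ch], s, order=1)
        sh, sw = scaled.shape
        if s >= 1.0:  # channel grew: crop the center back out
            y0, x0 = (sh - h) // 2, (sw - w) // 2
            out[..., ch] = scaled[y0:y0 + h, x0:x0 + w]
        else:         # channel shrank: pad the edges back up
            py, px = h - sh, w - sw
            out[..., ch] = np.pad(scaled, ((py // 2, py - py // 2),
                                           (px // 2, px - px // 2)), mode="edge")
    return out

frame = np.random.rand(270, 480, 3)  # stand-in for a linear RGB frame
degraded = fake_ca(frame)
```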

One Little Trick

One intervention heuristic I’ve started applying when I’ve dug myself too far into the hole on a grade / look: go back, first make another version of the clip and timeline nodes, then dial everything going into the look and any effects back 50% to create a new starting point to finesse from. More often than not, the resulting feelz are more pleasing to me (also after sleeping on it and comparing later).

Because digging myself into a hole when grading is the rule, not the exception, for me – I always manage to dig way too deep, down to that special place where sunk cost fallacy and tunnel vision meet up to move goalposts and host your personal echo chamber of mediocrity.

Maybe you too know that special place where the mind starts to go “Wow – this is my best work yet – I’ll just keep adding even more shit and it’ll be perfect!” – and then you take your eyes off the grade for a couple of seconds to watch something else, return to the exact same “perfect” look, and recoil with a visceral “yikes – wtf is this crap?!”.
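
In node terms, I mostly do the 50% walk-back with Key Input Gain, which (as far as I understand it) behaves like a blend between a node’s input and output – so “dial everything back 50%” is roughly this:

```python
# Blend each graded value halfway back toward the ungraded input --
# my mental model of pulling a look node's Key Input Gain to 0.5.
def dial_back(ungraded, graded, amount=0.5):
    return [u + (g - u) * amount for u, g in zip(ungraded, graded)]

print(dial_back([0.18, 0.18, 0.18], [0.30, 0.22, 0.14]))  # [0.24, 0.20, 0.16]
```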

Reference, the unadulterated BMD Gen 1 Film image converted to display space, color managed
Certainly a “look”, but still a bit too “clean” for my taste
Also certainly a “look”, but rapidly approaching the too-much-of-everything depth of the hole
Now with everything dialed back 50% from “too much”. It’s usually not good to go as-is, but it’s often a much more helpful place to continue finessing from than digging sideways – or even further down – from the bottom of the hole.
A better place to finesse further from, IMO

More Examples, Stills Gallery

Accidentally upscaled to 4K or 6K – because that’s how Apple’s pixel-density Retina voodoo works when taking screenshots around here. The MCC “only” shoots 1080p. (Would you have noticed it was upscaled and not native if I hadn’t told you?) The footage was shot with various ND’ed (K&F Concept, Kood, Tiffen, or Lee Filters ProGlass IRND) tiny vintage Pentax A110 lenses on the MCC, where not stated otherwise. The close-up of the eye was shot with a 49mm +2 diopter from Kood.
