cine, grading, Rants, video

On My Color Grading in 2025

In another blog post that nobody asked for, I thought I’d summarise what I’ve learned and updated in my color grading hobby over the last year (instead of updating the 2024 post ad absurdum).

Above: no new test footage, as I haven’t really made any in the last year or so, but this is the “Director’s Cut” for 2025 – as good (or bad) as it gets for my grading and look skills (tastes) so far. It may not be for everyone, but this is my kind of kink rn.

What changed since 2024? TL;DR – not much.

Mostly because of two things:

1. I feel I’ve reached a happy place where I (at least think I) know what I’m doing and can consistently and predictably get to results that I like (thus radically reducing the urge and curiosity to keep digging – 80/20 and all that, I guess).

2. I’ve been busy doing other things (aka “work“), leaving less time for the “hobby”.

So what am I doing differently in 2025 compared to 2024?

Current default clip-level node tree:

Default Clip Node Tree v6.4 – actually v6.5 now; just a bit of housekeeping done

Of note, I’ve added Pixel Tool’s “Prime Grade” plugin (it was on sale @ a YOLO price point) to experiment with one node to rule all primary grading, as it looked like it would save me a lot of time and work. So far, I find it gets me to a great spot faster than hopping around in more nodes and places in DaVinci Resolve manually, making great results easier and quicker to achieve. It’s a keeper. (I only wish I 100% understood how and what Prime Grade actually does to achieve its effects – e.g. to make sure I won’t break stuff when using other DCTLs or grading techniques – so I will have to dig deeper into that at some point.)

But I’ve also left my old default nodes for primaries in there to lean on just in case – a comfortable and easy fall-back to what I already know how to use – should I ever get lost using this new “Prime Grade” thing.

My default clip node tree (v6.4) in 2025 vs 2024 (compare to v6.5 above for the minor updates)

In the RATio/CONTrast node, I’ve pre-added a .336 pivot point on the Custom Curves (as I’m working in DaVinci Wide Gamut / Intermediate) using Cullen Kelly’s middle gray / exposure DCTL, locking down my middle gray as a default (a no-brainer – no idea why I didn’t do this sooner). BTW, you can also use the excellent – and also FREE – Middle Gray DCTL from MONONODES.
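As a side note, that .336 figure isn’t arbitrary – it’s simply where 18% middle gray lands when encoded with the DaVinci Intermediate log curve. A quick sketch (using the curve constants from Blackmagic’s published DaVinci Wide Gamut / Intermediate white paper – verify against the current version before relying on them):

```python
# Sketch: where the 0.336 pivot comes from. Encoding linear 18% middle
# gray with the DaVinci Intermediate log curve lands at ~0.336.
# Constants are from Blackmagic's DaVinci Wide Gamut white paper.
import math

DI_A, DI_B, DI_C = 0.0075, 7.0, 0.07329248  # log segment constants
DI_M, DI_CUT = 10.44426855, 0.00262409      # linear segment below the cut

def davinci_intermediate_encode(x: float) -> float:
    """Linear scene value -> DaVinci Intermediate log value."""
    if x <= DI_CUT:
        return x * DI_M
    return (math.log2(x + DI_A) + DI_B) * DI_C

pivot = davinci_intermediate_encode(0.18)
print(round(pivot, 3))  # ~0.336
```

So pivoting contrast at .336 in DWG/I keeps middle gray anchored while the curve rotates around it.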

The rest, like my custom “MTF Sim” and “Lens Degrader” compound nodes, I’ve previously described in the 2024 post. (Let me know in the chat below if you would be interested in my “Lens Degrader” DCTL – I might make it available to everyone…)

I’ve also been experimenting with turning OFF the “use S-Curve” setting in DaVinci. I’m undecided so far whether to keep it ON or OFF. I guess I need more time with OFF to decide – it has been ON for the vast majority of my time in DaVinci Resolve so far, so I’m heavily biased.

Current default Timeline node tree:

Default Timeline Node Tree v9.4 – now v9.6 (v9.6 = housekeeping & some changes to better map to the logical processing order of nodes & how photochemical film behaves – I think)

The main difference in my current Timeline node tree is a bit more thought and organising going into separating the “Creative” or “Look” part(s) from the “Print” (FPE – Film Print Emulation) parts, and getting them in a more “correct” order – I think. Kinda. Maybe?

The thinking goes: add photochemical aspects like Halation, Bloom, and Grain as early as possible in the pipeline (so the effects get dragged through the look and FPE process – not added on top). Then I do the LOOK via a combination of taste LUTs or a plugin like the native Film Look Creator or Contour (also opting to throw a tad of a creative LUT, like one from Arri, into the mix, and using e.g. the 2499 Custom Curves DCTL to add some split-toning micro-adjustment secret sauce). Only then comes the film print emulation (FPE), either via a DWG/I FPE LUT, an ACES FPE LUT, or a plugin like Genesis or Dehancer.
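The ordering can be sketched as a simple function chain – all stage names here are placeholders I made up purely to illustrate the order, not actual node or plugin names:

```python
# Hypothetical sketch of the timeline ordering described above: the
# photochemical artefacts (halation/bloom/grain) go in early so the
# look and film print emulation stages act on them too, rather than
# having them pasted on top at the end. All stages are stubs.
def halation(img):   return img  # placeholder; a real node would
def bloom(img):      return img  # transform the image here
def grain(img):      return img
def look(img):       return img  # taste LUTs / FLC / Contour
def split_tone(img): return img  # e.g. 2499 Custom Curves micro-adjust
def fpe(img):        return img  # film print emulation LUT or plugin

PIPELINE = [halation, bloom, grain, look, split_tone, fpe]

def grade(img):
    for stage in PIPELINE:
        img = stage(img)
    return img
```

The point being: if grain sits *after* the FPE, the print emulation never “sees” it, and it reads as a digital overlay instead of something baked into the stock.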

Also, I’ve changed settings to use “neutral” grays in the UI preferences (less of a color tone bias in the DaVinci Resolve UI that could trick the eye).

My default timeline node tree (v9.4) in 2025 vs 2024 (see more current v9.6 update above)

Density is still occupied by Iridescent Color’s Density DCTL, as I tend to stay away from the native ColorSlice options (good intentions, faulty execution – the sat model is good, though) for fear of breaking the image in horrible ways.

My main “goto” for the Look is (still) the native Film Look Creator (FLC), Cullen Kelly’s Voyager Pro v2 taste LUTs, the official Arri LogC3/4 Look LUTs, and just a hint of JP’s Custom Curves for LOG2499 dctl (applied in DWG/I, without going into Log 2499 first – YOLO!) for micro adjustments in the highlights.

I’ve also kept the free trial version of Cullen Kelly’s wonderful “Contour” lookdev plugin in there to get more mileage with it – should I decide to get even deeper out of pocket with this hobby in the future.

The ACES FPE node is a compound node first going into ACES via a CST and then into ADX via an ACES Transform before applying the FPE LUT, to get the most out of the native DaVinci FPE LUTs (they are actually pretty good when used this way!), as suggested by Cullen Kelly.

Just make sure to turn OFF OpenDRT (or whatever old CST / DRT you’re using to go out to display space) if you turn this compound node ON – the native FPE LUTs supplied with DaVinci Resolve have their own working-space-to-rec709-display transform baked in:

When using a single FPE LUT compatible with or made for DWG/I, my main goto is still the Fuji 3510 FPE LUT by Cullen Kelly. I’m a Fujicolor Fanboi at heart – still.

That huge parallel node fan thing is just a way for me to better organise a bunch of on/off “checkers” like Zones, Heatmap, and SweetSpot (in lieu of “False Colors” – which I hate), Skintones, Blanker, etc – basically all the utility dctls I use put in a single stack in parallel to fit them all on one screen and to keep the whole Timeline node tree somewhat usable.

The Halation FLC H&B node is DaVinci Resolve’s native FLC plugin with only Halation & Bloom turned on.

I’ve left free trial versions of both Cullen Kelly & Co’s brilliant “Genesis” FPE plugin and the older (but still very good) Dehancer plugin in there to eff around with and get some experience with – if and only if this hobby somehow turns into paid ops in the future, justifying – to myself as a non-pro – the purchase(s).

In v9.6 I’ve also added the wonderful (AND FREE) DCTLs RGB Chips by Thatcher Freeman and Cullen Kelly’s CKC Grayscale Ramp to better visualise what is actually happening to my image as I manipulate it.

Of note, what is completely new is that I’m using the latest version of OpenDRT to convert out to rec709 / Gamma 2.2 (yes, I’m still exporting for screens only, mostly – no Gamma 2.4 for me) from working in DaVinci Wide Gamut / Intermediate. I find that the later versions of OpenDRT provide more pleasing results (waaaaay more pleasing than using the native CST – no going back by now), arguably also nicer and faster to work with than the fantastic (and also FREE) 2499 DRT.
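For anyone wondering why the 2.2 vs 2.4 delivery choice is worth mentioning at all, here’s a toy sketch of what it does to the encode of the same linear value (just straight power functions, ignoring the full BT.1886 machinery):

```python
# Sketch: why the Gamma 2.2 vs 2.4 delivery choice matters. The same
# linear middle gray needs a different display code value depending on
# which gamma the viewer's screen decodes with: a 2.4 display decodes
# more aggressively (dim-surround home theatre assumption), so the
# encode must compensate with a slightly higher code value.
mid_gray = 0.18  # linear scene-referred middle gray

encoded_22 = mid_gray ** (1 / 2.2)
encoded_24 = mid_gray ** (1 / 2.4)

print(round(encoded_22, 3))  # ~0.459
print(round(encoded_24, 3))  # ~0.489
```

Deliver a 2.4-mastered image to a 2.2 screen (i.e. most laptops and phones) and the shadows lift and wash out – hence the screens-only Gamma 2.2 choice above.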

The results I get on my own footage by just using a provided OpenDRT preset like “High Contrast” usually do it for me – and on the off-chance they don’t, I find it easier and faster to tweak to taste than with the other DRT candidates.

Have a look and decide for yourself – DRT-only, no grade, look, or FPE applied:

The source material above is Blackmagic DNG Film Gen 1 shot on a s16mm BMD Micro Cinema Camera (MCC) using a vintage Canon FD S.S.C. 50mm f1.4 lens on a 0.58x Metabones SpeedBooster with a 2 stops Lee Filters proglass IRND filter.

Compare 2499 DRT vanilla to OpenDRT with the “High Contrast” preset applied:

Comparison of DRTs – 2499 out of the box VS OpenDRT “High Contrast” preset

Compare OpenDRT “High Contrast” to the native DaVinci Resolve Color Space Transform (CST):

Comparison of DRTs – OpenDRT “High Contrast” preset VS DaVinci Resolve’s native CST, rec709 / Gamma 2.2

<rant>

I do think the native CST provides more of a “what the camera actually saw” kind of look, but do I actually want that as I’m always going to be grading for that “cine” photochemical vibe? Hell no!

I do get the color science nerds on the interwebs who keep going on about getting the colors out of the camera as correct or neutral as scientifically possible – a fine and interesting hobby (a guilty pleasure I do enjoy & indulge in, enjoying their content from the sidelines) – but they are working hard on refining things on the <20% end that NOBODY ACTUALLY COLOR GRADING [worth listening to] FERKING CARES ABOUT! In short, why the hell would I NOT use something like OpenDRT instead of a “camera LUT” or a more “scientifically correct” CST when it gets my job done effortlessly, pulling me at least 50% over the hill instantly?

I’m trying to bend the image to my tastes with the least amount of effort here – not trying to win an effing science fair competition!

Of course, I had to find this out the hard way: I sure have my fair share of “perfected” camera or color conversion LUTs – some I painstakingly and time-consumingly brute-forced “crafted” myself, others (waaaay better ones) I bought from reputable sources like Leeming. But don’t get me started on how many of the other so-called “camera LUTs” out there are total nonsense when working color managed in a wider intermediate color space: they either expect some unknown color space going in – or much worse, they were created within / for a limited display space, degrading the best result you can ever possibly achieve by using them.

Today, scientifically color accurate camera LUTs just do not matter at all (to me).

When shooting, I may shoot a grey card (if I remember – most often I forget, as I’m not that organised and it’s that unimportant to me when shooting short sequences on the same camera with the same lighting setups), and I focus intently on getting the blocking, exposure, and ratios in a good spot instead of worrying about “colors”. The only thing I do try to remember is to turn off any questionable incidental / existing light fixtures, or to make sure skin gets most of its illumination from more color accurate (97+ CRI) added artificial “cine” lights, as consumer home LED lights suck ass when it comes to color accuracy – and in my experience, it can make grading contaminated footage (especially the skin tones) into a usable place a next-to-impossible, and definitely zero-fun, task.

YMMV, but I shoot on Blackmagic Design cameras in DNG and BRAW, and their native DaVinci Resolve Color Space Transform (CST) profiles for going from camera LOG (aka DNG BMD Film Gen 1 or BMD BRAW Film Gen 5) to DWG / Intermediate work great – if you work color managed AND have set up your color management pipeline correctly – and I don’t have to worry about matching different cameras with wildly different color science.

I mean, riddle me this: I’m shooting in DNG/BRAW log, for myself, and using a huge intermediate color space to grade in, bending the image to my (extremely) subjective taste – so why on earth would I worry about getting the colors to line up on a scientifically accurate chart, like ever? Why? Why? WHY??!!11

</rant>

Also new in the hardware department: an additional dedicated ASUS ProArt display with a custom DIY middle gray background and LX1 Bias Lighting backlights. The input signal comes from a BMD UltraStudio Monitor 3G, calibrated with DisplayCAL and Argyll using my X-Rite i1, with the resulting correction LUT applied to the signal by running it through a Blackmagic Design MiniConverter 6G SDI to HDMI LUT Box.

Now, did adding all of this fancy (albeit non-pro / non-reference) hardware improve my color grading? Not really – at least not in direct proportion. I would say that it has enabled me to be bolder, though. I now push things that I would previously have been kind of hesitant to. Being more confident that the results should hold up feels a bit liberating – but it’s more an incremental refinement in the 20% than a significant “in the 80% of the job” improvement for me.

<RANT>

Dear Blackmagic Design: The UltraStudio Monitor 3G box gets so hot I could probably fry an egg on it – so I feel I need to disconnect it when I am not using DaVinci Resolve. Which is a hassle, because when I start DaVinci Resolve again later, I will have forgotten to first reconnect the device – which forces me to restart DaVinci Resolve again. Could you perhaps do something to make the device not feel like it will burn down the entire neighbourhood (fueling a fear in me that it will just stop working prematurely from thermal weathering and decay) if I leave it connected when not in use? Like, how about a non-nuclear-meltdown-temperature “sleep” mode when DaVinci Resolve is not running?

</RANT>

That’s about it for my color grading in 2025.

TODOs:

I’m interested in checking out MONONODES’ new v2 of Film Elements demo in-depth – when I get the time – for a couple of reasons:

1. I hear the Film Grain is awesome. (I’ve never been 100% happy with the results of native or FLC grain on my own content).

2. The Vignette, Chromatic Aberration, Lens Blur, and Lens Distortion dctls included are bound to be much better at doing their things than my own “Lens Degrader” power grade kludge that is only (ab)using DaVinci Resolve native stuff.

3. CAN HAZ MTF DCTL!!! ZOMG – MOAR Modulation Transfer Function Curves!

No seriously, that part got me super excited – I get VERY excited about anything MTF-related – because MTF is THE ONE THING that makes digital content (that I’ve shot myself using my own gear, at least – which is currently 100% of what I grade) instantly look more analogue or “cine”. It’s the one thing that finally tips the scale, my missing ingredient to actually sell the illusion (for me).

<rant>

If you can’t tell already, MTF is where I really get my nerd-rocks off, because the science is incredibly fascinating and it’s a terribly undervalued and under-appreciated aspect of digital film grading. I guess that is mainly because most “pro” colorists are not the DP of what they are grading, and are thus used to grading “pro” footage shot on “pro” cameras using “pro” “cine” glass by “pros”. In short, MTF is just not a thing pro colorists have to deal with on a regular basis, as there’s been a “pro” DP involved upstream, making sure – or so my theory goes – that the MTF part of what goes into creating that “cine” feelz is already baked in by the DP’s choice of [quite possibly outrageously expensive & exotic] optics.

But what do I know…

For those of us who are our own DPs and colorists currently without access to outrageously sexy optics (or working with DPs of similar dispositions), MTF is an aspect your “cine feelz” grading pipeline should cover.

</rant>

Currently, I’m still using an MTF emulation power grade inspired by Marieta Farfarova to feed my MTF addiction – and the results are great, but it is a bit cumbersome to use and not very intuitive when changing settings around (which lends itself poorly to encouraging more eff-around-and-find-out experimentation), so any MTF tool that is easier to use, provides more intuitive control at the UX surface level – and possibly delivers even better results – would be extremely welcome in my default node tree!
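To make the MTF idea concrete for anyone new to it: a lens’s falling MTF curve means fine detail is reproduced at lower contrast than broad structures. A very crude way to fake that (NOT the power grade mentioned above, just a toy 1-D illustration) is blending in a blurred copy of the image:

```python
# Crude illustration of the MTF idea: attenuate high spatial
# frequencies by blending the original with a blurred copy, which is
# roughly what a lens's falling MTF curve does to fine detail.
# 1-D box blur on a toy signal; numpy only. Not the actual power grade.
import numpy as np

def box_blur(signal, radius=2):
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

def mtf_rolloff(signal, strength=0.5):
    """Blend original with a blurred copy: fine detail loses contrast,
    broad structures survive - a first-order MTF-style falloff."""
    return (1 - strength) * signal + strength * box_blur(signal)

# A high-frequency alternating pattern loses amplitude after the pass.
hi = np.tile([0.0, 1.0], 50)
print(mtf_rolloff(hi).std() < hi.std())  # True
```

Real MTF emulations are far more sophisticated (frequency-dependent, per-channel, radially varying across the frame), but this is the core mechanism: contrast loss that scales with spatial frequency.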

Also on the subject of emulating lens characteristics (of lenses we cannot get our hands on), I need to get more time with Lewis Potts’ new and exciting “Lens Node” plugin (a shame it seems the free trial has a limited number of available lenses to emulate? Why? Just watermark and get on with it!).

I might also look into effing around more with the bottom of the image, aka how to achieve more nuance in the blacks. However, I’m OK with a little “crunch” in there (or I guess “compression” would be a more technically correct term for what I usually have going on at my low end, not rolling off to black in a linear way), so I’m not sure I need to “see into my blacks” more – but it’s worth experimenting with, though. It’s on my to-do list under custom curves experimentation, aka creating more looks and feelz using just my own custom curves and no plugins/LUTs/DCTLs/etc.

Update: OK. OK. OK. I went back later to eff around with an old grade, trying to iron out a lot of the crunch to get a more pleasing transition between light and dark, a less exaggerated “feelz” in the shadows, a more pleasing skin color, and some more “interest” in the dark void on the right – overall, trying to “see into the dark” more. First pass results below. You be the judge:

An old grade revisited. The one with less crunch is the new grade.

Gear used: Blackmagic Design Micro Cinema Camera, Pentax A110 50mm f2.8, Godox FL150S Bicolor LED Mat (footage part of a series I shot to test the Godox light mat and learn how to best use it).

RGB Chips

Also, one of these days I’ll add Thatcher Freeman’s brilliant RGB Chips utility DCTL to my default node tree – or at least as a convenient power grade.

Update: While writing this, I did just add “RGB Chips” to my default Timeline node tree, default OFF. This is a wonderful utility DCTL for checking and visualising what you are actually doing to the colours in your image. In my experience, it’s also great for putting unknown “found” LUTs on the casting couch, screening them for unwanted side-effects (instantly getting a better, more intuitive understanding of what they do to the image) before you apply them.

OK, some new 2025 footage:

Only light source in addition to the natural / incidental light is a GVM SD600D-II (also gotten at a YOLO bargain, priced as if to accelerate the end of western civilisation), native reflector attached, pointed at the ceiling. This wasn’t a staged thing – pure spur of the moment – so I didn’t have time to go fetch & rig any negging / shaping options before the window of opportunity closed, OK Patrick? OK?

Shot on my BMD MCC, vintage s16mm Pentax A110 50mm f2.8 glass with 2 stops of Lee Filters proglass IRND. In Resolve I used my custom standard node trees, including OpenDRT (high contrast preset), MTF emulation, and my own “Lens Degrader” power grade applied, etc:

Oh, I almost forgot – I did also eff around a bit with JPLog2, the free DCTLs that are supposed to offer a working color space with higher dynamic range, like 2–3 stops more than DWG. It’s pitched mostly at the new ARRIs – which is arguably going to do eff all for my legacy BMD MCC footage, but I didn’t let that discourage me from effing around with it anyway.

I still don’t understand how you can go to a wider color space *AFTER* you’ve already done an IDT/CST to the “inferior” working color space, though.

But what do I know…

I just do not get it. It’s not like DWG is not already wide enough to accommodate their own (Blackmagic Design’s) 16-stops-of-dynamic-range cameras, is it? IS IT? Because that seems to be what JPLog2 is trying to sell me on. I’m not buying it, though. Nah. Nope.

Update: It seems I’m not alone

Above: the JPLog2 to ACEScct DCTL applied right after the IDT CST (BMD Film G1 / BMD Film to ACES (AP1) / ACEScct), and the JPLog2 to ACEScct DCTL right before the ODT OpenDRT (ACEScg / ACEScct out to rec709 / Gamma 2.2 using the High Contrast / Creative White D65 presets). (I did regrade the footage – the main differences being no “Prime Grade” & extensive use of the native Color Slicer – and added some power windows to take down the exposure in the background, aka increase the ratio between key and background, so it’s not directly comparable to the other DRT examples above.)

Standard
cine, Lessons Learned, video

My Cinematic Streaming Studio v3.1

By popular demand, I’ve jotted down some details about my updated “cinematic” Live Streaming Studio V3 setup and gear. I’ve also shared some Lessons Learned at the end of this article that might be helpful if you also want to achieve a more professional or “cinematic” look for your streaming or Zoom calls.

UPDATE 2025: On my color grading in 2025
UPDATE 2024: On my color grading in 2024

UPDATED to Lighting & Look v3.1: Above, a screenshot from my current v3.1 setup.

See for yourself in the video below what v3.0 actually looked like when recorded, and check out the comprehensive (constantly updated) list of the gear I’m currently using to achieve the look on this page.

For v3.1 I’ve added back some contrast – some more definition or 3D-ness – by returning to a higher ratio / difference between key and fill light, as opposed to v3.0, which had a tad too little definition (and was a little too heavy on the red/warm side), as seen in the video below.

Of note, video compression in conferencing or streaming smears the image a whole lot (that’s why I have the camera output set to be so sharp – more details in = more details out when compressed in Zoom), and depending on the conferencing software and the operating system, things happen to your saturation and gamma (here desaturated and less contrast-y – which makes me think it was not captured on a Mac).

Fun fact: One of the other changes from v3 to v3.1 is the choice of microphone. Can you hear it? (One costs 1.600,- Euros, the other 117,-). I’m actually now using the cheap-ass microphone(!) instead of my (still beloved) Neumann. Check out this list of gear for the deets.

The path to getting there

Continue reading

Standard
entrepreneurship, startup, video

Interviewed by TechHustlers

Recently I got to sit down with TechHustlers‘ (@techhustlers) Eric Strait (@ericstrait) for a conversation via Skype across the Atlantic, between Cologne, DE and San Marcos, TX.

From the post:

I recently had the pleasure of sitting down with my new friend, Vidar Andersen, & founding genius behind the “people magnet” app Gauss! We recently had the craziest happenstance way of connecting and believe it or not, we meet each other via Glancee, one of his competitors app at SXSW! He happened to be the closest SXSW event attendee staying near my home, so I thought I would connect. Little did I know, he was actually launching their direct competitor app Gauss. Gauss is a mobile app that let’s you discover and connect with interesting people around you. They describe it as a “People Magnet for your pocket.” I would not just say this and just blow smoke up your a$%, but after Vidar demoed Gauss to me, I was blown away at it made Glancee look like an amature! []

Wow! Thank you, Eric!

Make my day: Get Gauss on the Apple App Store for free and start discovering new people around you today! :)

 

Standard
Events, open source, Software, video

Ryan Dahl on node.js @cowoco [VIDEO]

I recently attended a brilliant and funny talk by Ryan Dahl (@ryha) on node.js at @cowoco, a great co-working/co-location initiative in Cologne, Germany.

Here’s my recorded live stream from the event. (I apologise for the quality and the Flash format of the videos. I hope USTREAM implements HTML5 video soon too.)

Part 1

Part 2

Part 3

Standard
Hardware, Software, video

Autodesk’s ‘Lustre’ at IBC 2008

I happened to stumble in on an interesting presentation of Autodesk’s software-based colour grading tool ‘Lustre’ at IBC 2008 and bagged a small part of it. Have a look for yourself in the videos below.

Apologies for not catching the name of the presenters and for the poor sound quality of my Samsung NV8.

Standard