A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. And to achieve that look, one of the things I had to learn from scratch was how to color grade.
Below: the result after some grading, applying a ton of stuff for the look, and even throwing in some effects to emulate anamorphic edge distortions and a fake film-gate crop for good measure:
BTW, do you need help with creating a great custom “look” for your film or video production, your camera, or your podcast or stream? Give me a ping, and let’s talk. I wasted a silly amount of time and money making all kinds of mistakes starting out, so I’m happy to help you avoid that.
In this post, I’m sharing a bit about my further adventures digging myself ever deeper into the color grading hole with Blackmagic Design’s DaVinci Resolve Studio (the non-Studio version is a free download). It’s an incredible piece of software, by the way. If you’re thinking about ditching Adobe Premiere – just do it! It’s a joy to work with and I’ve never regretted it for a second.
This is not a primer on color grading. It’s just me writing up and sharing what I’ve learned that works best for me so far. If you too wish to start (or continue) on a color grading learning journey with DaVinci Resolve, Cullen Kelly’s YouTube channel is probably the best place for that.
The following assumes you’re already familiar with some of the concepts of color grading – or at least have a faint interest in how to create a cinematic image with digital tools. If not, fair warning: this post is probably not for you.
What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a path of rediscovery, reigniting my passion for the cinematic image. Fun fact: you might not know that I actually started out studying cinema with the goal of becoming a film director – but I dropped out after only two years, as university and studying film weren’t really my thing – and then the commercial Internet happened, and the rest is history.
As a person most likely somewhere on a spectrum of some kind, of course I can’t, won’t, and don’t stop digging until I get somewhere interesting – somewhere where I can feel a sense of mastery and understanding of the full stack (in this case lighting, lenses, physics, camera sensors, cinematography, color grading, look development – everything that goes into the sausage factory of a nice digital “cine” image), aka being able to produce predictable outcomes and make those outcomes look cinematic and pleasing – to me. It’s become sort of a new time-sink obsession hobby of mine (in addition to helping other startup founders go farther faster, of course).
And I’m still digging…
Read on below for this long non-startup (but hey – still full of tech & geekery) post.
The Setup (for the footage above)
Camera: A tiny (300 g, 8.23 cm wide × 7 cm deep × 6.6 cm high), old (launched in 2012!), and cheap (I paid less than EUR 600 for it used on eBay, including an 8Sinn cage, handle, and octopus expansion cable) digital Super 16mm MFT dual-gain-sensor Blackmagic Design Micro Cinema Camera (MCC). ISO 800 (native?), 5600K, shutter at 180 degrees and 24 fps – obviously – exposed to the right (ETTR)
Lens: A tiny (this being the largest in the series, but still tiny compared to, e.g., an EF lens), cheap (EUR 81 on eBay, mint) vintage Pentax A110 (S16mm system) 70mm f2.8 fixed-aperture lens (in the this-lens-system-has-no-internal-iris sense) on an MFT adapter
Filters, etc.: The lens is kitted with a 49mm metal lens hood that sports a 72mm ICE “IR/UV” filter (dirt cheap, good quality), as the MCC needs a lot of IR filtering if you shoot in any sunlight and don’t love pink and purple blacks. On top of that sits a Lee Filters 2-stop ProGlass IRND 100mm × 100mm filter (the best ND filter I’ve ever used – and the IR part also helps a great deal on this Micro Cinema Camera, which has a very basic native IR filter) in a cheap-ass Zomei 100mm filter holder, so I can shoot into the sun (classic backlight, since I don’t own lights powerful enough to fight the sun) coming in at the far side of my face (actually, it was overcast and raining).
<rant>
I don’t use variable NDs, as every single “variable” ND I’ve tested so far produces unpredictable results, aka effs up the colors. I have yet to test any “variable ND” above 200 bucks, so take this with a grain of salt – but do tell if you know of a variable ND filter that delivers results as good as the Lee Filters ProGlass IRND. Until then, I’ll take the inconvenience and inflexibility of fixed and stacked ND filters over “variable” any day.
</rant>
Lights: Key, Godox UL150 (silent, great value for money when it came out) with an Aputure Lantern modifier. Fill, Godox SL60 (not entirely silent, old-ish but OK – one of the best budget options for color accuracy at the time it came out) with an Aputure Light Dome Mini II softbox & honeycomb/grid modifier.
Mental note to self: Never forget to turn off all other light fixtures, especially LED sources (halogen & tungsten are usually OK – yay, physics). More than once I’ve caught myself forgetting to turn off existing interior lights, only to discover it after the fact when grading the footage. It usually manifests in disgusting skin tones that are nearly impossible to balance to a satisfyingly natural place.
Image Acquisition: Blackmagic Design Film Generation 1 DNG RAW (not to be confused with BRAW).
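Nerd aside: the exposure math behind the camera settings above fits in a few lines. A minimal Python sketch (nothing camera- or Resolve-specific – the numbers are the ones from my setup):

```python
fps = 24
shutter_angle = 180.0  # degrees

# Shutter angle -> exposure time: the fraction of each frame interval
# that the (virtual) shutter is open.
exposure_time = (shutter_angle / 360.0) / fps
print(f"Exposure time: 1/{round(1 / exposure_time)} s")  # -> 1/48 s

# ND filters in stops: each stop halves the light reaching the sensor.
nd_stops = 2  # e.g. the 2-stop Lee ProGlass IRND
print(f"{nd_stops}-stop ND passes {1 / 2 ** nd_stops:.0%} of the light")  # -> 25%
```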
A big thanks to Rob Ellis for being a fantastic source of inspiration, teaching me how to make the most out of simple lighting setups and how to manipulate color temperatures to achieve the “cinematic” look even on a next-to-no budget.
Also a huge thanks to Patrick “WanderingDP” O’Sullivan for teaching n00bs like me “the framework” and how to achieve non-flat, cinematic-looking images consistently. Hint: Shoot into the L (the angle) of the room, shoot backlit, get your ratios right, etc. E.g. if your image looks “flat” or just doesn’t look right – not very cinematic at all – it’s probably because your lighting ratios (key to fill, key to background, etc) are out of whack.
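If your ratios are the suspect, the stops math is trivial to sanity-check. A tiny Python sketch (the 4:1 below is just an illustrative number, not a recommendation):

```python
import math

# Key-to-fill ratio expressed in stops: each stop is a doubling of light.
key, fill = 4.0, 1.0   # illustrative relative intensities (e.g. meter readings)
stops = math.log2(key / fill)
print(f"Key-to-fill {key / fill:.0f}:1 = {stops:.1f} stops difference")  # -> 2.0 stops
```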
Rigging Deeper
And of course I also couldn’t help myself from digging into another hole, obsessively over-engineering my own camera rig to fit my needs…
Below is a teaser reveal of my “The Creator” franken-rig, Super 16mm ghetto style, that the above clip was shot with (the shot of the rig was done in the studio with the BMPCC4K):
Update: In the video above, you can see the Franken-rig was jiggling a bit when rotating on the tripod. I later fixed this by adding a longer Manfrotto base-plate that helps tie more of the rig parts together, making it more structurally sound. (See the Kit.co page for details).
I think it looks more like a jury-rigged weapon out of “Wolfenstein II – The New Colossus” than a camera rig.
My Franken-rig is shoulder and tripod / crane mountable. On the shoulder it helps with stabilising an otherwise micro-jittery setup, and on the tripod, I can also remote control the camera with tracking (iOS app), and by joystick or 6-axis’ing with a remote (the MasterEye or the Remote Handle) – and it features redundant hot-swappable power through v-mount batteries & d-tap.
The rig is so obnoxiously heavy that it has given me months of seriously unpleasant pinched nerves in the neck and god-awful shoulder pains. Back to the drawing board. I’m now thinking about adding a Proaim Flycam steadicam vest. To a shoulder rig… Yes, I’m a lost cause. No, I don’t want to hear a word about the sunk cost fallacy at this point.
All of which amounts to an incredibly stupid amount of rigging for a tiny old 300g camera that “only” shoots 1080p, but I am in love with the images that come out of that tiny camera – when you get rid of the micro-jitters, light it and grade it properly.
Why not a DJI gimbal and a standard Tilta light shoulder mount, I get asked a lot. Price and availability (and, fine, the sunk cost fallacy and pride) is my answer – I already had a Zhiyun Weebill S lying around, and this gimbal setup cost about a third of acquiring a new equivalent setup from DJI; many of the rigging parts that went into it were lying around too. And of course, using an off-the-shelf rig would deprive me of the fun of building one myself at least once (the next shoulder rig I would probably buy stock, though). Maybe I’ll geek out further and do a breakdown of the complete franken-rig build in a future post.
My Grading Today
Since the last post, I’ve changed my gamma output from 2.4 to 2.2, because everything I deliver so far is for online consumption, and Gamma 2.2 should be more in line with modern-day phones, tablets, and computer monitors than good ole’ Gamma 2.4, which is more in line with cathode-ray-tube TVs – I guess.
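To see why that matters in plain numbers, here’s a simple power-law sketch (real display EOTFs are a bit more nuanced, but this captures the gist):

```python
# The same encoded code value comes out brighter on a Gamma 2.2 display
# than on a Gamma 2.4 one -- most visible in the midtones.
signal = 0.5  # an encoded (non-linear) code value
for gamma in (2.2, 2.4):
    print(f"Gamma {gamma}: 0.5 encoded -> {signal ** gamma:.3f} relative luminance")
# Gamma 2.2: ~0.218, Gamma 2.4: ~0.189 -- the 2.4 display renders midtones darker.
```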
I’m still working “color managed” in DaVinci Wide Gamut / Intermediate – mostly not manually via a CST sandwich – as almost all my shots are my own, captured in BMD DNG or BRAW. Update: I’ve since switched from being color managed by DaVinci Resolve to managing color myself with a CST sandwich. And why am I not working in, say, ACES? I could – there would be no substantial difference in my workflow, afaik – but as I’m currently only delivering projects shot by myself, for myself, I’m happy with DWG/Intermediate.
Here are my DaVinci Resolve color management settings:
Update: As I’m now doing manual color management using a Color Space Transform sandwich, “DaVinci YRGB” sans “Color Managed” is now my color science setting.
I’m now also using a “Video Monitor Lookup Table” by Cullen Kelly called “macOS Viewing Transform v1.3“, which pushes what I see in the DaVinci Resolve GUI when grading closer to what is actually delivered. Good enough for non-pros like me – and still good enough for someone who has been working with pixels for 40+ years and can spot by eye when one pixel differs by a value of 1 in any RGB channel from its neighbours. (YMMV with this viewing-transform LUT if you don’t have a P3 Apple display. My main display is a P3-calibrated Dell 5K, which afaik uses the same LG panel as the iMac 5K.)
I also use an old Samsung SyncMaster calibrated to rec709 / Gamma 2.2 as the “CleanFeed” to compare to what I’m seeing in the main view in DaVinci Resolve, representing your “average” LCD monitor out there.
Update: I’ve also started using a BMD UltraStudio Monitor 3G. (The first one I ordered arrived brand new as a dud – cue sad trombone – and that fact, combined with the very large number of non-working Monitor 3Gs available on eBay, leaves me with serious doubts about the longevity of this product.) It’s piped via SDI through a Blackmagic Design MiniConverter 6G SDI-to-HDMI LUT box (I had several of these lying around already, used to apply LUTs when live streaming with cameras that can’t apply LUTs in-camera), and connected onward via HDMI to a 100% r709 Asus ProArt FHD monitor. That monitor is in turn calibrated using an X-Rite i1 with DisplayCAL and Argyll to create a monitor LUT that’s loaded into the 6G LUT box to correct the signal. This setup now serves as my “source of truth” monitoring solution until perhaps one day I get a “real” reference monitor (probably from Flanders or OSEE – the latter could also enticingly double as a neat field monitor) – but someone has to pay me before I jump to that next level.
I might even get Nobe OmniScope Pro one of these days and run the signal from the Monitor 3G (if it can do dual SDI and HDMI output at the same time; if not, via the SDI loop-out on the 6G LUT box) to an older Mac mini running OmniScope, via SDI into the Blackmagic Design UltraStudio Mini Recorder Thunderbolt (the one I started this whole streaming “cine” journey with), and pipe the OmniScope screen back to my DaVinci Resolve Mac using Apple’s Remote Desktop (should be fast enough on my LAN) or, worst case, via an HDMI splitter to a dedicated monitor. At least that’s the idea – I have yet to test whether this theoretical wiring will actually work as intended.
<RANT>
Having worked as a digital “pro” with color-accurate pixels for 30+ years (without a dang “reference” monitor), I do not subscribe to the level of reference-monitor hype that a lot of colorist “pros” tend to spew at you when you ask them the simplest of questions. On my grading journey to date, I’ve found that a lot of them peddle reference monitors as the single answer to all your color grading woes – which is demonstrably not true – and I’ve come to find this a great heuristic for instantly identifying idiots.
I think there are several reasons (all bad) why they do this. One: they are intimidated by any and all n00bs, whom they see as potential zero-sum competition, and use expensive solutions as their instant moat or flex of choice. Another reason, I suspect, is that a lot of the real geezers in the biz come from the ancient analogue broadcast world and haven’t gotten the memo on how digital works – how you can compensate for not having a $2K+ “reference” monitor. A third reason, I think, is that many may actually not know any better; just because you get paid to do what you do doesn’t mean you know much about how things work under the hood. I’ve met many “pros” in different kinds of businesses who are only “going through the motions”, never knowing what’s going on underneath the surface of things and why – but still getting paid (sometimes even very well).
It reminds me a lot of my experience with AVID fanatics coming to a digital NLE from the analogue Betacam world, or advertising-industry “pro” people like ADs and copywriters in the late 80s to early 90s who had just started using Macs, insisting that their monitors had 1,200 DPI resolution just because there was a system setting somewhere where you could select 1,200 DPI – never mind that DPI is a printing resolution (and not the same as PPI), or that the absolute max resolution on a high-end Mac CRT at the time was more like 72 pixels per inch. Discussions where one side totally lacks even the most basic understanding of how things actually work – but is high on its own legacy entitlement – never lead anywhere good.
On the other hand: of course it’s very nice and very helpful to have a real “reference” monitor, but I don’t think you need one to get great (and consistent) results in the digital age (if you know what you are doing) – especially when starting out. If you’re delivering for Netflix or for consideration by the Academy, you’d probably use one, though.
Also, I think the same is true about the need for a control surface: it’s nice to work with one, but you don’t need one to create great results. (Besides, Lift / Gamma / Gain trackballs are more of a residual analogue relic than a modern color grading interface optimized for digital & RAW, if you ask me.)
</RANT>
FWIW, all of the examples on this page were graded before I bought a dedicated monitoring setup of any kind – just on a really old stock Samsung SyncMaster LCD monitor calibrated to r709 / Gamma 2.2 using the relatively cheap X-Rite i1 Display calibration doodad. And my 14+ year old stock FHD Samsung monitor “only” needed a +1 shift in blue from the calibrated-by-eye settings I had used for sRGB static images for a decade before that “actual” calibration – so there’s that for you. (Yes, I do know this monitor doesn’t cover the whole r709 gamut – that’s the point: it was “good enough” regardless.)
An iPad Pro with the “DaVinci Monitor” app provides me with further reference when grading, just to see what it looks like on iDevices in real time. If you want to use it too, just make sure the iPad is on the same WiFi (update: see below) as your Mac running DaVinci Resolve Studio – a weird & annoying limitation. (The Mac I grade on is usually connected via LAN only, WiFi completely turned off – so I may have had a minute or two of frustration before finding that out.)
<RANT>
Here we go again: Once you do get the app up and running, don’t get me started on the incredibly stupid hassle of having to copy and paste a session access-token string between devices when using the remote monitor – per f*ing session! This should be as easy as a click of the mouse, a tap of the finger! I mean, it’s all on the same network! I’m an adult, I can handle the security issues. Just give me the option to always allow when on the same WiFi network. (And what’s the point of Blackmagic Cloud in the first place if it can’t connect across different networks? I did check my firewall, and there’s nothing technically blocking the connection.) If it’s good enough for Apple AirPlay, it’s good enough for me – and should be for you too, Blackmagic Design.
</RANT>
Update: You can work around this by checking “Use Remote Monitoring Without Blackmagic Cloud” in the DaVinci Resolve main Preferences AND changing the (hidden) setting in the iPad app via the general iPadOS Settings -> Apps -> DaVinci Monitor, switching it to “Use without Blackmagic Cloud”. This way, you enter the numeric IP address of the computer you’re grading on in DaVinci Resolve (one reachable from your iPad) instead of an access token for each session – after you’ve started the remote monitoring session in Resolve. YMMV, but I’ve also noticed a slight performance gain from bypassing Blackmagic Cloud.
And now for the kicker: regardless of the monitoring setup, more fun new color and gamma antics usually await when uploading videos to the different content platforms – YouTube, Vimeo, Meta, TikTok, et al. They all have their own wicked little ways of re-interpreting your color space / gamma metadata when re-encoding – basically straight-up pooping on your work and your choices – and not all of them publicly share what they expect to go in and what comes out of their sausage factories (nor do they all publish updates on how they handle this in a timely manner), making a predictable end result a trial-and-error ordeal.
In my brute-forcing experience with my platform of choice, Vimeo (yes, I tried encoding and uploading using most not-totally-obscure color space and gamma metadata settings and combinations thereof; yes, that took an idiotic amount of time; and yes, the results were not exciting), exporting to rec709/rec709A or rec709/Gamma 2.2 is the most accurate fit to what I see when grading. (Yes, I know rec709A is not a “real thing” and only spoofs Apple QuickTime into displaying something closer to what I see when grading on the Mac – but again: if it works, who cares if it’s “correct”? I don’t.)
I grade my content on a Mac and for digital displays only – YMMV.
Here’s rec709, rec709A on Vimeo:
And here’s rec709, Gamma 2.2 on Vimeo:
Is there a difference on Vimeo? There used to be – now I’m not so sure.
My Clip-Level Nodes, Primary & Secondary
Here’s my latest default clip-level node tree for primaries and secondaries – it works very well for me:
This node tree is copied almost verbatim from Cullen Kelly – and that’s because it’s an AWESOME framework that works very intuitively for me (too) – and disciplines me to keep things really simple.
Top row = primary grade, second row = secondary grade (mostly power windows for finessing levels).
NR = Noise Reduction. This is the most CPU-intensive node, placed at the start because of how proxies are generated and cached; I use “UltraNR” as the default. (I grade with proxies deactivated to work on the full captured image, but I also do my own edits, where I do use proxies.)
EXP = Exposure in the HDR palette. Often I’m not touching Lift, Gamma, or Offset at all (if the footage is somewhat properly exposed) – though I often still use a touch of Gain. Using the HDR controls for exposure feels more intuitive to me than effing around with the Primaries wheels. (I also occasionally use the HDR Global wheel for color balancing by adjusting Temperature – YOLO! Don’t tell anyone.)
RAT = Ratio or Contrast Curve. Lately, I’ve been using these ready-made Curve LUTs (esp. the “Film 2” LUT) to get the RAT (ratio) node 90% “right” (to my taste) out of the box on most shots, adjusting the rest to taste per clip. They’re made for DWG / Intermediate and don’t seem to break anything. Don’t forget to set the right pivot point for your color space in your ratio node (e.g. DWG/Intermediate = 0.336) if you want to push and pull it manually too.
BAL = (Color) Balance. This node is set to use Linear Gamma and I’m using Gain to color-balance.
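For the curious, here’s a toy sketch of what I understand these three nodes (EXP, RAT, BAL) to be doing mathematically. This is my mental model in Python, emphatically not Resolve’s actual internals:

```python
import numpy as np

def exposure(rgb_linear, stops):
    # EXP: +1 stop doubles linear light, -1 stop halves it.
    return rgb_linear * 2.0 ** stops

def contrast(rgb_log, amount, pivot=0.336):
    # RAT: pivoted contrast in a log space; 0.336 is the pivot for DWG/Intermediate.
    # Values above the pivot get pushed up, values below get pushed down.
    return (rgb_log - pivot) * amount + pivot

def balance(rgb_linear, gains):
    # BAL: per-channel gain in linear, e.g. to neutralize a color cast.
    return rgb_linear * np.asarray(gains)

patch = np.array([0.20, 0.18, 0.15])               # a slightly warm "gray" patch, linear
print(balance(patch, patch[1] / patch))            # gains forcing R = G = B -> all 0.18
print(exposure(patch, 1.0))                        # one stop up: every value doubles
print(contrast(np.array([0.2, 0.336, 0.5]), 1.2))  # the pivot value stays put
```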
I also use the TETRA+ or Crosstalk DCTLs from time to time, especially on clips with subtle, iffy color issues that I’m just too incompetent to adjust otherwise. I find myself using them especially for skin tones and skin “roll-off”. (I have no fundamental knowledge of what I’m doing when operating these – I just know I can sometimes achieve better results (to me) by using them.)
SAT HSV = Saturation in Hue Saturation Value mode (the node set to operate in an HSV color space, with only Channel 2 – the S for “Saturation” – activated).
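In miniature, that node boils down to something like this (a toy sketch using Python’s colorsys, not Resolve’s implementation):

```python
import colorsys

# Convert to HSV, scale only the S channel (Resolve's "Channel 2"), convert back.
def scale_saturation(r, g, b, amount):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, min(s * amount, 1.0), v)

print(scale_saturation(0.8, 0.4, 0.3, 1.2))  # same hue & value, 20% more saturated
```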
Update 1: SAT HSV is currently deactivated; I’ve added a Color Slice node to do its bidding instead. Let’s see if this works out over time.
Sometimes I (still) add the Sat Shaper+ or the Saturator DCTL (I like Saturator’s “preserve highlights” option) instead of or in addition to the “SAT HSV” node on the clip level, when I’m not completely satisfied with the saturation – and sometimes also to modify the color density (yes, I’m lazy). Their “vibrancy” setting has sometimes helped me get more pleasing color separation or spread with one simple slider (not shown in my default node tree above).
Update 2: Now trying the new “Color Slice” to do the bidding of Sat Shaper or Saturator for me, but I keep reverting to the two DCTLs above and/or the old SAT HSV node.
Sharpen = Radius to taste (this node is placed after the primary and secondary grades to avoid potential unwanted artifacting that could happen if you put it inside the primary or secondary sections, by way of how the parallel nodes get combined). As I often use soft vintage lenses combined with softening MTF shenanigans, a bit of Radius sharpening can make the image pop just the right way on footage shot with, e.g., Pentax A110 lenses.
Update 3: I’ve added a compound node with a hodgepodge of effects jury-rigged to create more or less of the kind of vintage anamorphic lens distortions that I like. (Keep reading below for the details.)
MTF Curves
Did they tell you about “Modulation Transfer Function Curves“? I didn’t know anything about any of this! After stumbling into discovering MTF – more or less by random chance – it’s like a whole new black hole secret world of additional geekery has been revealed! Why didn’t I know anything about all of this sooner? Is this a secret guild initiation ritual type of thing or something?
I accidentally discovered this was the missing piece of the puzzle for achieving what my brain recognises as a “cinematic” image – by random effing around and finding out with the native “Soften and Sharpen” OFX (S&S). I wanted to figure out what that OFX actually does to the image, motivated by the need for something that helps with noise reduction without taxing the CPU as much as heavy native NR. By chance, I found out that – WOW – I can often achieve that vintage-lens-y, soft-yet-sharp cine-feelz by finessing the settings in this S&S thing!
Sometimes the S&S can be hard to tweak for just the right subtle effect, though. Searching further, I learned that the effect I’d achieved is related to the MTF curves of vintage “cine” glass, and I found a very helpful video on how to achieve MTF simulation; I added that one too as a compound node (default off) in my default clip-level node tree:
And sometimes using both a dash of the S&S OFX and the compound MTF-simulation node in series can produce very pleasing (at least to me) results.
At times, adding a bit of radius sharpen after S&S and MTF simulation can help sell the effect.
WARNING: It can also be very useful to apply some grain in the Film Look Creator before tweaking these MTF-like settings: sometimes the results of S&S plus MTF simulation alone look like a heap of meh until you slap some light film grain on top.
(I’ve also played around with this MTF Curve DCTL, but so far I’m happier with the results from the two other methods mentioned above. I need to play around with this DCTL some more, though.)
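For what it’s worth, my mental model of S&S (and MTF shaping in general) is band-splitting: separate the image into frequency bands and gain each band independently. A 1-D toy sketch in Python – emphatically not BMD’s actual algorithm:

```python
import numpy as np

def box_blur(x, radius):
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(x, kernel, mode="same")

def soften_and_sharpen(signal, mid_gain, high_gain):
    low = box_blur(signal, radius=8)            # large structures
    mid = box_blur(signal, radius=2) - low      # medium detail
    high = signal - box_blur(signal, radius=2)  # the finest detail / grain
    return low + mid * mid_gain + high * high_gain

row = np.random.rand(64)  # one scanline's worth of fake image data
# Vintage-lens feel, roughly: keep the mids crisp, roll off the finest detail.
out = soften_and_sharpen(row, mid_gain=1.2, high_gain=0.6)
```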
And I guess the MTF effects should go on the clip level, being tied to the characteristics of each respective lens – and to adjusting the high-frequency “noise” of each respective lens/shot, right?
Right? (Cue “Phantom Menace” meme).
My Default Timeline-Level Nodes, My “LOOK” Node Tree
And this is the kitchen sink, aka my latest default timeline-level node tree for the “LOOK”:
PSA: As Cullen Kelly says, you always want to be grading “underneath” your LOOK, aka always have your look nodes on the timeline level active (on) when you start grading your primaries on the clip level. This is one of the things I wish I had known sooner. It is such a simple thing, but it changed my grading game from painful brute-forcing to predictable fun: I used to think (not knowing any better) that you first had to grade your primaries before adding your look – but that’s a recipe for a lot of headaches and unnecessary work, often delivering crap results to boot, compared to turning on your look sauce in your timeline nodes before you start grading your primaries.
Update: I use Grain in the Film Look Creator (FLC). (How many places can you now add film grain in DaVinci natively? 3 or 4 places? Do you want grain to go with that grain on that grain?)
Update: Also using a node with the joo.works Halation DCTL. It can work very well instead of the native Halation OFX, but I find joo.works’ halation often breaks the illusion on clips with a lot of fine or high-frequency detail, e.g. close-ups with hair, eyebrows, beard stubble, etc. (regardless of trying to compensate for high-frequency details with MTF simulation). I’m no expert, but I do not think real photochemical halation looks like that on high-frequency detail – it just looks very “off”, with an overly pronounced orangey-red halo on each hair. No bueno. That’s when I might revert to the native Halation FX (the Film Look Creator’s halation seems very limited in how it affects the image, with few options to tune it the way I want), which can also help create a more vibrant skin glow – if wanted.
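My mental model of halation, as a toy sketch (the threshold, blur, and tint values are made-up illustrative numbers, and real implementations are far more sophisticated):

```python
import numpy as np

# Bright areas bleed a soft red-orange halo back into the image --
# light bouncing off the film base and re-exposing the red-sensitive layer.
def halation(rgb, threshold=0.8, strength=0.15):
    highlights = np.clip(rgb - threshold, 0.0, None)   # isolate the bright bits
    halo = highlights.mean(axis=-1, keepdims=True)     # luminance of the spill
    for _ in range(4):                                 # crude neighbour-averaging blur
        halo = (halo + np.roll(halo, 1, 0) + np.roll(halo, -1, 0)
                + np.roll(halo, 1, 1) + np.roll(halo, -1, 1)) / 5.0
    tint = np.array([1.0, 0.35, 0.1])                  # the orange-red bias
    return np.clip(rgb + strength * halo * tint, 0.0, 1.0)

frame = np.random.rand(32, 32, 3)  # fake (H, W, 3) image, values in 0..1
print(halation(frame).shape)       # -> (32, 32, 3)
```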
The idea behind the additional Color Space Transform (CST) IN/OUT sandwiches is to be able to mix in creative LUTs (would those be the “negative” equivalent to Film Print Emulation LUTs? I have questions…) and Film Print Emulation (FPE) LUTs that were not made for the DaVinci Wide Gamut / Intermediate color space I work in. (The node directly in front of the sandwiches does take LUTs made for DWG / Intermediate, should I happen to use one made for my actual working space.)
Update: As of my Timeline Nodes v9, I’m no longer using additional CST sandwiches for LUTs and DCTLs intended for color spaces other than the one I’m working in (DWG/Intermediate). I just right-click the node in question and select the correct color space and gamma directly (when going from scene space to scene space, or to display space only for utility/checking purposes). This does hide complex and potentially image-breaking settings away a bit – but it makes for a less messy and bloated node tree.
However, I am now using a CST sandwich for manually selecting footage color space and gamma (most often BMD Film Gen 1 / BMD Film) as the first node in the clip-level node tree, and going from the scene/working color space (DWG/Intermediate) to the desired display space (usually r709, Gamma 2.2) as the last node in the timeline-level node tree (both not shown in the node-tree screenshot above). This means I’ve also disabled automatic color management in the settings, as already mentioned.
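So, spelled out in signal order, the whole chain looks like this (a plain sketch of my own setup, not a universal recipe):

```python
# My manual color-management chain, first node to last:
pipeline = [
    "1. CST  : BMD Film Gen 1 / BMD Film -> DWG / Intermediate   (first clip node)",
    "2. GRADE: clip-level nodes (NR, EXP, RAT, BAL, SAT, ...)",
    "3. LOOK : timeline-level nodes (taste LUTs, FPE, grain, halation, ...)",
    "4. CST  : DWG / Intermediate -> Rec.709 / Gamma 2.2         (last timeline node)",
]
print("\n".join(pipeline))
```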
There are some really good (and free) look LUTs available from reputable sources – like directly from most camera makers themselves – just google it (and remember to use the correct settings – the right color space and gamma the LUT was intended for – either with a CST sandwich or by changing it with a right-click directly on the LUT node).
For setting up a CST sandwich that renders results as intended, you need to know when to apply “Forward OOTF” and “Inverse OOTF” – and when not to. These CST options had always been a mystery to me – until recently, when Cullen Kelly explained them.
To wit, for my future CST Sandwich reference:
Forward OOTF = ON when going from a scene (or working) color space to a display color space; tone mapping set to luminance mapping, with custom max input checked and set to 10,000; gamut mapping set to saturation compression.
Inverse OOTF = ON when going from a display (or screen) color space to a scene (working) color space.
Both OOTF = OFF when going from a scene space to another scene (working) color space (tone and gamut mapping off).
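Encoded as a tiny lookup for future me (my reading of the rules above – double-check against your actual CST node):

```python
# "scene" = scene-referred (camera log, DWG/Intermediate, ACES, ...);
# "display" = display-referred (Rec.709 / Gamma 2.2, sRGB, ...).
def ootf_flags(src, dst):
    if (src, dst) == ("scene", "display"):
        return {"forward_ootf": True, "inverse_ootf": False}
    if (src, dst) == ("display", "scene"):
        return {"forward_ootf": False, "inverse_ootf": True}
    return {"forward_ootf": False, "inverse_ootf": False}  # scene -> scene

print(ootf_flags("scene", "display"))  # e.g. the final CST out of DWG/Intermediate
print(ootf_flags("scene", "scene"))    # e.g. BMD Film Gen 1 -> DWG/Intermediate
```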
In addition to my go-to look tool – Cullen Kelly’s beautiful Voyager Pro v2 “taste” LUT pack, worth every single penny – I often mix in additional creative or “negative” LUTs made for other color spaces (just a dash, using the “Key Input Gain” in the “Node Key” settings to limit the amount of influence) – like look LUTs from Arri (who doesn’t love Arri colors?). I also generously use the Key Input Gain in the Node Key to “walk back” the grade/look when I’m about to go overboard. More on that later.
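My mental model of what Key Input Gain effectively does is a crossfade between the node’s input and its full effect (a toy sketch – the real mechanism scales the node’s key, which amounts to the same thing for a full-frame key):

```python
import numpy as np

# 0.0 = no effect at all, 1.0 = the LUT at full strength.
def mix_with_key_gain(node_input, node_output, key_gain):
    return node_input + key_gain * (node_output - node_input)

before = np.array([0.40, 0.35, 0.30])
after_lut = np.array([0.48, 0.33, 0.22])          # what the creative LUT would do
print(mix_with_key_gain(before, after_lut, 0.3))  # just a 30% dash of that look
```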
And my go-to Film Print Emulation (FPE) is a Fuji 3510 by Cullen Kelly (free download, available for both DWG & ACES, fantastic quality). In the pre-DSLR days, I always shot Sensia and Velvia on my “OG” Nikon FM, so I’m a big-bias Fuji fanboi here. I also sometimes go crazy with Sony’s Technicolor Collection.
For global (timeline-level) color density, I alternate between DRT&T’s Film_Density_OFX and Density+, as I’m still undecided which one I actually prefer.
Dehancer is another great plugin for creating a photochemical film look, but I keep it deactivated in the look node tree as I find myself wasting too much time trying to brute-force a look with it. I’m still not very good at creating predictable results with it and I haven’t really invested enough time to learn how to use it properly – yet.
Sidenote: Is there a DCTL / OFX that ONLY does the FPE “analogue range limiter” part of Dehancer? That would make me happy. At least for a little while. Bueller… Bueller… thatcherfreeman… Anyone?
Also deactivated by default is the Cullen Kelly YouTube export LUT, which I only turn on for YT delivery (duh). I mostly use Vimeo for distribution, and as mentioned earlier, I’ve found rec709 / rec709A provides the best results when publishing on Vimeo – aka looks most true to what I saw when grading, after Vimeo has chewed on my upload and spat out its version. YMMV.
There’s also a lazy “Global” node to abuse for anything I need to add as the last step for all clips – e.g. cool or warm things up a bit, take exposure up or down to taste, etc. It’s a handy node for quick-and-dirty experimenting with new ideas after I feel satisfied with the general look, without touching the main nodes. It’s also in there to remind me to keep trying different things – even when I’m satisfied with the look.
My approach for getting the look and feel I want is “less is better”, but anything goes (eff around & find out is the best method yet). As long as I like it, and it doesn’t break things (e.g. unpleasant skin tones, artifacting, banding, etc.), it’s a keeper – I ain’t too proud (or “pro”) to keep piling it on. (And you know what – nobody cares what goes on in the sausage factory as long as what comes out is tasty.)
My timeline look node tree also includes the MONONODES Balance and Clip utility DCTLs (so worth it – incredible productivity boosters – and they keep getting updated with great new features!), plus the native False Color plugin, as six “utility” nodes: “Zones” (aka DIY EL Zone System – don’t tell EL), Balance, White Clip, Black Clip, and Sat Clip. (UPDATE: Added MONO’s great new “Heatmap” for exposure checks in lieu of an “EL Zone System”, and also this “False Colors” DCTL.) By just turning them on and off, I can check the exposure and ratios, skin balance, and unwanted clipping across all shots (clips) really fast: turn the utility node on, select “refresh all thumbnails”, go to the Lightbox – and BOOM! – you’re checking all clips in seconds! I might also add this “blanking checker” utility DCTL in the future if I’m working on projects with footage from different sources.
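In miniature, the Clip-style checks boil down to flagging out-of-range pixels. A toy sketch (illustrative thresholds, and definitely not MONONODES’ actual math):

```python
import numpy as np

def clip_report(rgb, black=0.0, white=1.0, max_sat=0.9):
    # rgb: (H, W, 3) float array, display-referred values in 0..1
    white_clip = (rgb >= white).any(axis=-1)   # any channel at/above the ceiling
    black_clip = (rgb <= black).all(axis=-1)   # all channels crushed to the floor
    sat = rgb.max(axis=-1) - rgb.min(axis=-1)  # crude saturation proxy
    return {"white %": white_clip.mean() * 100,
            "black %": black_clip.mean() * 100,
            "sat %": (sat >= max_sat).mean() * 100}

frame = np.random.rand(16, 16, 3)
print(clip_report(frame, white=0.95))
```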
<rant>
To the person(s) who invented it: I hate “False Color” visualisations, as I have absolutely no connection to the color gradients I’m seeing whatsoever – especially with Blackmagic Design False Colors (green, pink, and 80%+ of the rest of the color scheme is shades of gray? IRE? WTF? Get out of here! You’ve got me bristling with ire, alright!). I have never understood how somebody could think this is a good and helpful visualisation of exposure. But what do I know. I thought I might be dumb, or just missing some important clue they only share with you once you’re in the secret guild – but it turns out smarter and much more experienced people than me also don’t like False Colors. Enter the EL Zone System – finally a “false colors” scheme that makes intuitive sense! I wish this scheme would be implemented (either by contributing to EL’s patent-coffers-slash-retirement-fund or by implementing something similar – see the “Zones” compound node in my node tree) as a native DaVinci Resolve FX AND as a BMD Video Assist hardware monitoring feature, in addition to the not-so-useful “False Colors” option.
</rant>
UPDATE 1: Cullen Kelly launched the fantastic “Contour” film look builder plugin, which is now at the top of my wishlist. That is to say, this is a pro-level plugin with a (fair) price tag that I’ll only allow myself to buy if and when someone actually pays me for grading, lookdev, shooting – anything. I’m playing around with the free (watermarked) demo version to get some mileage with it for now.
UPDATE 2: I’ve also added the new native DaVinci Resolve “Film Look Creator” (FLC) as a default node with the “clean slate” setting to my default timeline node tree.
UPDATE 3: I’ve also added the 2499 Custom Curves to the default timeline node tree to add a little something-something – OFF by default, to experiment with and evaluate. So far, I feel a hint of this beats the Film Look Creator’s split-toning results. Felt cute, might delete later.
Update 5: I’ve changed how I use DaVinci Resolve’s native Halation effect. It’s still in the node tree, but deactivated – I’ve added a Joo.Works Halation power grade instead, which looks more like real photochemical film halation to me. (It was made for ACES but works just fine in DWG/Intermediate, with or without a Color Space Transform sandwich, and with or without changing the parameter in the power grade’s CST to DWG/Intermediate.)
And a heads up: the S (as in small) power grade compound node might render wonky in DWG (I’ve notified Joo.Works about it). Just wire up the nodes in the S power grade instance like in the M and L versions and you’re good to go.
And if I want to abuse halation to make skin subtly glow or irradiate (something I find the Joo.Works Halation doesn’t really do for me), I can activate the old native Halation FX and check if it works for that purpose. (On writing this, I realised the deactivated native Halation FX should probably go on the clip level instead, given this clip-centric purpose.)
I’ve also played around with this Halation dctl, but so far I’ve had no luck in generating the results I was looking for.
Below is another grade / look I made after discovering the interesting 2499 DRT DCTLs from Juan Pablo Zambrano. If you’re looking for instructions on how to apply them, I found this helpful post from the author over on liftgammagain (why this info isn’t included in the documentation is a mystery – or I am too stupid to find it on GitHub). This was my first try screwing around with them, adding them to my look-nodes mix. Looking forward to digging myself into another hole playing around with the 2499 DRT tools some more:
More Grading / Look Examples
Below you’ll find some more color grading examples where I’m going for a Super 16mm or 35mm photochemical film aesthetic – not so much the more “modern” or “shot-with-something-in-the-Sony-FX-or-Alpha-camera-family”, aka the “influencer” (or “OnlyFans-pr0n-streamer”) look – because the vintage “cine” look is my kind of kink. (Although I am a big fan of the look of the movie “The Creator“.)
Also an enormous thanks to Nikolas “Media Division” Moldenhauer for the most inspiring, highest quality – and seriously geeked out (in the best cinephile way possible) content on them Interwebs. This is the level of production quality most “creators” can only aspire to – to me, it’s the “benchmark”. It’s amazing:
Maybe I’ll share some of my more “modern” color grades shot on the BMPCC4K camera with a Sigma 18-35mm f1.8 Art DC HSM lens in a future post – for now, you can infer from my previous post what that looks like and this screenshot from my live streaming studio:
And this is what the signal looks like before applying my custom studio grade / look LUT in-camera:
Above for reference: this is how it actually looks before color grading. (BMD Micro Cinema Camera, DNG (RAW) BMD Film G1, Pentax A110 70mm f2.8, K&F Concept ND64 + Tiffen ND7, ICE IR Cut)
Simulating Lens Artifacts: Enter My “Lens Degrader”
Lately, I was also inspired by Barrett Kaufman’s video to screw around with lens-distortion simulation effects, and I added some chromatic edge distortion to the mix using the Chromatic Aberration Removal FX and the Prism Blur FX. If you want to tinker with this yourself, below is how I set up the nodes in a compound clip. There are probably many better ways to do this – to wire the nodes up correctly and to achieve more optically correct chromatic distortions – and I’m not claiming I know what I’m doing here, so be warned: doing this may severely break things and/or ruin your day. Tweak to taste:
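And if you’d rather see the gist in code than in nodes: the cheapest fake of lateral chromatic aberration is to scale the R and B channels slightly differently around the image centre, so the fringes grow toward the edges. A crude numpy sketch (nearest-neighbour sampling, nothing optically correct about it):

```python
import numpy as np

def fake_ca(rgb, r_scale=1.004, b_scale=0.996):
    h, w, _ = rgb.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    out = rgb.copy()
    for ch, scale in ((0, r_scale), (2, b_scale)):
        # Sample each channel from slightly scaled coordinates around the centre.
        sy = np.clip(((yy - cy) / scale + cy).round().astype(int), 0, h - 1)
        sx = np.clip(((xx - cx) / scale + cx).round().astype(int), 0, w - 1)
        out[..., ch] = rgb[sy, sx, ch]
    return out

frame = np.random.rand(64, 64, 3)  # fake (H, W, 3) image
print(fake_ca(frame).shape)        # -> (64, 64, 3)
```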
One Little Trick
One intervention heuristic I’ve started applying when I’ve dug myself too far into the hole on a grade / look: go back, first make another version of the clip and timeline nodes, then dial everything going into the look – and any effects – back 50% to create a new starting point to finesse from. More often than not, the resulting feelz are more pleasing to me (also after sleeping on it and comparing later).
Because digging myself into a hole when grading is the rule, not the exception, for me – I always manage to dig way too deep. Deep down to that special place where sunk cost fallacy and tunnel vision meet up to move goalposts and host your personal echo chamber of mediocrity.
Maybe you too know that special place where the mind starts to go “Wow – this is my best work yet – I’ll just keep adding even more shit and it’ll be perfect!” – and then you take your eyes off the grade for a couple of seconds to look at something else, return to the exact same “perfect” look, and recoil with a visceral “yikes – wtf is this crap?!”.
More Examples, Stills Gallery
Accidentally upscaled to 4K or 6K – because that’s how Apple’s pixel-density Retina voodoo works when taking screenshots around here. The MCC “only” shoots 1080p. (Would you have noticed it was upscaled and not native if I hadn’t told you?) The footage was shot on the MCC with various ND’ed (K&F Concept, Kood, Tiffen, or Lee Filters ProGlass IRND) tiny vintage Pentax A110 lenses where not stated otherwise. The close-up of the eye was shot with a 49mm +2 diopter from Kood.