A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. And to achieve that look, one of the things I had to learn from scratch was how to color grade.
BTW, do you need help with creating a great custom “look” for your film or video production, your camera, or your podcast or stream? Give me a ping, and let’s talk. I wasted a silly amount of time and money making all kinds of mistakes starting out, so I’m happy to help you avoid that.
Here, I’m sharing a bit about my further digging-myself-into-a-hole adventures in color grading with Blackmagic Design’s DaVinci Resolve Studio (the non-Studio version is a free download). It’s an incredible piece of software, by the way. If you’re thinking about ditching Adobe Premiere – just do it! Go for it. I’ve never regretted it for a second. It’s a joy to use.
The following assumes you’re already familiar with some of the concepts of color grading – or at least have a faint interest in how to create a cinematic image with digital tools. (If not, this post will bore the living daylights out of you and is probably not for you.)
What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a path of rediscovery, reigniting my passion for the cinematic image. (Fun fact: You might not know that I actually started out studying to become a film director – but I dropped out after only two years, as university and studying film wasn’t really my thing. And then the commercial Internet happened.)
And as a person most likely somewhere on the spectrum, of course I can’t, won’t, and don’t stop digging until I’m getting somewhere interesting – somewhere where I can feel a sense of mastery and understanding of the full stack (lighting, lenses, camera – cinematography – color grading, look development, etc), aka being able to produce predictable outcomes, and making those outcomes look cinematic and pleasing – to me. It’s become sort of a new obsession hobby of mine (in addition to helping other startup founders go farther faster, of course).
I’m still digging…
Read on below for this long non-startup (but hey – it’s still full of tech geekery) post.
The Shoot Setup (for the footage above)
Camera: A tiny (300g, 8.23cm w × 7cm d × 6.6cm h), old (launched in 2012!), and cheap (I paid less than EUR 600,- for it on eBay used, including an 8sinn cage, handle, and octopus expansion cable) digital super 16mm MFT dual gain sensor Blackmagic Design Micro Cinema Camera (MCC). ISO 800 (native?), 5600K, shutter at 180 degrees and 24 fps – and, obviously, exposed to the right (ETTR)
Lens: A tiny (this being the largest in the series, but still tiny compared to e.g. an EF lens) and cheap (EUR 81,- on eBay, almost mint) vintage Pentax A110 (s16mm system) 70mm f2.8 fixed aperture (in the this-lens-system-has-no-internal-iris sense) on an MFT adapter
Filters, etc: The lens is kitted with a 49mm metal lens hood that sports a 72mm ICE “IR/UV” filter (dirt cheap, good quality), as the MCC needs a lot of IR filtering if you’re shooting with any sunlight in the frame and don’t love pink and purple blacks. Add to that a Lee Filters 2-stop IRND ProGlass 100mm x 100mm filter (best ND filter I’ve ever used, and the IR part also helps a great deal on this Micro Cinema Camera with its very basic native IR filter) on a cheap-ass Zomei 100mm filter holder to shoot into the sun (classic backlit, since I don’t own lights powerful enough to fight the sun) coming in at the far side of my face (actually it was overcast and raining). <rant>I don’t use variable NDs, as every single “variable” ND I’ve tested so far (nothing above 200 bucks yet, though) produces unpredictable results, aka effs up the colors. Let me know if you know of a variable ND filter that delivers as-good-as results as the Lee Filters ProGlass IRND – and until then, I’ll take the inconvenience and inflexibility of fixed ND filters over “variable” any day.</rant>
Lights: Key: Godox UL150 (silent, great value for money) with an Aputure Lantern modifier. Fill: Godox SL60 (not entirely silent, old-ish but OK – one of the best budget options for color accuracy at the time it came out) with an Aputure Light Dome Mini II softbox & honeycomb/grid modifier.
Mental note to self: Never forget to turn off all other light fixtures, especially LED sources (Halogen is usually OK). It’s been more than once that I’ve caught myself forgetting to turn off existing interior lights, only to discover it after the fact when grading the footage. It usually manifests in disgusting skin tones that are nearly impossible to balance to a satisfyingly natural place.
Image Acquisition: Blackmagic Design Film Generation 1 DNG RAW (not to be confused with BRAW).
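For my own future reference: the 180-degree shutter mentioned above is simple arithmetic – exposure time per frame follows directly from frame rate and shutter angle. A generic sketch, nothing camera-specific:

```python
# Shutter angle -> exposure time per frame (the classic formula).
def shutter_seconds(fps: float, angle_deg: float) -> float:
    return (angle_deg / 360.0) / fps

# 180 degrees at 24 fps = 1/48 s, the classic "cinematic" motion blur.
print(shutter_seconds(24, 180))  # → 1/48 s ≈ 0.0208
```

Handy when moving off 24 fps: at 60 fps, the same 180 degrees gives 1/120 s, which is why higher frame rates look "crisper" at the same angle.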
A big thanks to Rob Ellis for being a fantastic source of inspiration, teaching how to make the most out of simple lighting setups and how to manipulate color temperatures to achieve the “cinematic” look even on a next to no budget.
Also a huge thanks to Patrick “WanderingDP” O’Sullivan for teaching n00bs like me “the framework” and how to achieve non-flat, cinematic-looking images consistently. Hint: Shoot into the L of the room, shoot backlit, get your ratios right, etc. E.g. if your image looks “flat” or just doesn’t look right – not very cinematic at all – it’s probably because your lighting ratios (key to fill, key to background) are out of whack (and maybe you’re not shooting into the L too).
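If, like me, you need the ratio math spelled out: lighting ratios are just stops, and every stop doubles the light. A quick back-of-the-napkin sketch (my own illustration, not part of Patrick’s framework):

```python
import math

# Lighting ratio expressed in stops: each stop doubles the light level.
def ratio_in_stops(key_level: float, fill_level: float) -> float:
    return math.log2(key_level / fill_level)

# A 4:1 key-to-fill ratio is 2 stops of difference -- reads fairly "moody";
# 2:1 (1 stop) reads much flatter.
print(ratio_in_stops(4, 1))  # → 2.0
print(ratio_in_stops(2, 1))  # → 1.0
```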
Rigging Deeper
And of course I also couldn’t help digging myself into another hole, obsessively over-engineering my own camera rig to feed my compulsions, er, fit my needs…
Below is a teaser reveal of my “The Creator” franken-rig, super 16mm ghetto style, that the above clip was shot with (the shot of the rig was done in the studio with the BMPCC4K):
In the video above, you can see the Franken-rig was jiggling a bit when rotating on the tripod. I later fixed this by adding a longer Manfrotto base-plate that helps tie more of the rig together, making it more structurally sound. (See the Kit.co page for details).
The Franken-rig is shoulder and tripod mountable. On the shoulder it helps with stabilizing an otherwise jittery setup, and on the tripod, I can also remote control the camera with tracking (iOS app), and by joystick or 6-axis’ing with the remote (MasterEye) – and it features redundant hot-swappable power through v-mount batteries & d-tap.
The rig is so obnoxiously heavy that it has given me months of seriously unpleasant pinched nerves in the neck and god awful shoulder pains. Back to the drawing board. I’m now thinking about adding a Proaim Flycam steadicam vest. To a shoulder rig… Yes, I’m a lost cause. No, I don’t want to hear a word about the sunken cost fallacy at this point.
All of which amounts to an incredibly stupid amount of rigging for a tiny old 300g camera that “only” shoots 1080p. Maybe I’ll geek out further and do a breakdown on the complete franken-rig build in a future post.
But I am in love with the images that come out of that tiny camera – when you get rid of the micro-jitters, light it and grade it properly.
My Grading Today
Since the last post, I’ve changed my gamma output from 2.4 to 2.2. All I deliver for so far is online consumption, and Gamma 2.2 should be more in line with modern-day phones, tablets, and computer monitors – good ole’ Gamma 2.4 is more in line with cathode ray tube TVs, I guess.
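If you want to see what that gamma difference actually does, the math is tiny: the same encoded pixel value simply renders darker through a 2.4 display gamma than through 2.2 (a simplified power-law sketch, ignoring the finer points of real display EOTFs):

```python
# Display gamma as a simple power law: encoded value (0..1) -> light out.
def to_light(encoded: float, gamma: float) -> float:
    return encoded ** gamma

mid = 0.5
print(to_light(mid, 2.2))  # ≈ 0.218
print(to_light(mid, 2.4))  # ≈ 0.189 -- darker midtones on a 2.4 display
```

Same file, two displays, visibly different midtones – which is exactly why the delivery gamma choice matters.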
I’m still working “color managed” in DaVinci Wide Gamut / Intermediate, mostly not manually in a CST sandwich, as my shots are almost all shot by myself in BMD DNG or BRAW. Why am I not working in say, ACES? I could certainly also work in ACES (there would be no real difference in my workflow, afaik), but as I’m basically only delivering projects shot by myself for myself right now – DWG/Intermediate serves perfectly for me.
Here are my DaVinci Resolve color management settings:
Update: I’m now doing manual color management using a Color Space Transform sandwich, “DaVinci YRGB” sans “Color Managed” as color science setting (not shown above).
I’m now using a “Video Monitor Lookup Table” by Cullen Kelly called “macOS Viewing Transform v1.3”, ensuring that what I’m watching when grading is as good as identical to what gets delivered (well, good enough for non-pros like me – and still good enough for someone who has been working with pixels for 40+ years and can spot by eye when a pixel differs by 1 in any of its RGB values from its neighbours). YMMV if you don’t have a P3 Apple display with this viewing transform LUT. My main display is a P3-calibrated Dell 5K, which uses the same LG panel as the iMac 5K, afaik.
I also use an old Samsung SyncMaster calibrated to rec709 / Gamma 2.2 as the “CleanFeed” to compare to what I’m seeing in the main view in DaVinci Resolve, representing your “average” LCD monitor out there.
I am also considering getting a BMD UltraStudio Monitor and a 100% r709 Asus ProArt FHD monitor as a dedicated monitoring solution, a next step on the journey to a “real” reference monitor (probably Flanders or osee – the latter could enticingly double as a field monitor) setup.
Warning: More fun color and gamma matching antics await you when uploading your videos to the different content platforms like YouTube, Vimeo, Meta, TikTok, et al., as they all have their own little wicked ways of re-interpreting (basically just ignoring) your color space / gamma metadata when re-encoding – and not all of them publicly share what they expect to go in and what comes out of their sausage factories, making a predictable end result a trial-and-error ordeal. In my brute-forcing experience with my platform of choice, Vimeo (yes, I tried encoding and uploading using most non-obscure color space and gamma metadata settings and combinations thereof; yes, that took an idiotic amount of time; and yes, the results were not exciting), exporting to rec709/rec709-A is the most accurate fit to what I see when grading (YMMV – I grade on a Mac and for digital displays only).
An iPad Pro with the “DaVinci Monitor” app is providing further reference when grading. If you want to use it too, just make sure the iPad is on the same WiFi as your Mac running DaVinci Resolve Studio – a weird & annoying limitation. (The Mac I’m grading on is usually only connected via LAN, WiFi completely turned off – so I might have had a minute or two of frustration before finding out).
<RANT>Once you get the app up and running, don’t get me started on the incredibly stupid hassle of having to copy and paste a session access token string thing between devices when using the remote monitor – per f*ing session! This should be as easy as a click of the mouse, tap of the finger! I mean it’s all on the same network! I’m an adult, I can handle the security issues. Just give me the option to always allow when on the same WiFi network. If it’s good enough for Apple AirPlay, it’s good enough for me – and should be for you too, Blackmagic Design. </RANT>
My Clip-Level Nodes, Primary & Secondary
Here’s my latest default clip-level node tree for primaries and secondaries – it works very well for me:
This node tree is copied almost verbatim from Cullen Kelly – and that’s because it’s an AWESOME framework that works very intuitively for me (too) – and disciplines me to keep things really simple.
Top row = primary grade, second row = secondary grade (mostly power windows for finessing levels).
NR = Noise Reduction (this is the most CPU intensive node, placed at the start because of how proxies are generated and cached, using “UltraNR” as default).
EXP = Exposure in the HDR palette. Often I’m not touching Lift, Gamma, or Offset at all (if the footage is somewhat properly exposed) – though I often still use a touch of Gain. Using the HDR controls for exposure feels more intuitive to me than effing around with the Primaries wheels. (I also occasionally use the HDR Global wheel for color balancing by adjusting XY, Tint, or Temperature – YOLO! Don’t tell anyone.)
RAT = Ratio or Contrast Curve. Lately, I’ve been using these ready-made Curve LUTs (esp. the “Film 2” LUT) to get the RAT node (ratio) 90% “right” (to my tastes) out of the box, adjusting the rest to taste depending on the clip. They’re made for DWG / Intermediate and don’t seem to break anything. Don’t forget to set the right pivot point for your color space in your ratio node (e.g. DWG/Intermediate = 0.336) if you want to push and pull it manually too.
BAL = (Color) Balance. This node is set to use Linear Gamma and I’m using Gain to color-balance.
I also use the TETRA+ or Crosstalk dctls from time to time, especially if there are clips with some subtle iffy color issues that I’m just too incompetent to adjust otherwise. I find myself using it especially for skin tones and skin “roll-off”.
SAT HSV = Saturation in Hue Saturation Value mode (the node set to operate in a HSV color space and with only Channel 2 – the S for “Saturation” activated).
Update 1: SAT HSV is currently deactivated, added a Color Slice node to do its bidding instead. Let’s see if this works out over time.
Sometimes I add the Sat Shaper+ or the Saturator DCTL (I like Saturator’s “preserve highlights” option) instead of, or in addition to, the “SAT HSV” node on the clip level when I’m not completely satisfied with the saturation – and sometimes also to modify the color density (yes, I’m lazy). Their “vibrancy” setting has sometimes helped me get more pleasing color separation or spread with one simple slider (not shown in my default node tree above).
Update 2: Now trying to use the new “Color Slice” to do the bidding of Sat Shaper or Saturator for me, but I keep reverting to the two above dctls and / or the old SAT HSV node.
Sharpen = Radius to taste (this node is placed after the primary and secondary grades to avoid potential unwanted artifacting that could happen if you put it inside the primary or the secondary, by way of how those parallel nodes get combined). As I’m often using soft vintage lenses combined with softening MTF shenanigans, a setting between 44 and 48 might make the image pop just the right way on footage shot with my Pentax A110 lenses.
Update 3: I’ve added a compound node with a hodgepodge of effects jury rigged to create more or less of the kind of vintage anamorphic lens distortions that I like. (Keep on reading below for the details.)
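To demystify the pivot setting in the RAT node above: a contrast adjustment just pushes values away from (or pulls them toward) the pivot, so the pivot value itself never moves. A minimal sketch of that idea (my own illustration; 0.336 is the DWG/Intermediate pivot mentioned earlier):

```python
# Contrast around a pivot: the pivot stays put, everything else gets
# pushed away from it (amount > 1) or pulled toward it (amount < 1).
def contrast(value: float, amount: float, pivot: float = 0.336) -> float:
    return (value - pivot) * amount + pivot

print(contrast(0.336, 1.5))  # → 0.336 (the pivot is untouched)
print(contrast(0.5, 1.5))    # ≈ 0.582 (brights pushed up)
```

This is why a wrong pivot shifts overall brightness when you add contrast: you’re pivoting around the wrong anchor point for your working space.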
MTF Curves
Have you heard about “Modulation Transfer Function Curves”? I didn’t know anything about this – it’s like a whole new black hole secret world of additional geekery has been revealed! Why didn’t I know about all of this sooner? Is this a secret guild initiation ritual type of thing or something?
I accidentally discovered this was the missing piece of the puzzle to achieve what my brain recognises as a “cinematic” image – by random effing around and finding out. I was screwing around with the native “Soften and Sharpen” ofx (S&S) to figure out what that thing actually does to the image, motivated by trying to find something that helps with noise reduction without taxing the CPU as much as heavy native NR. And I found out that – WOW – I can often achieve that vintage lens-y, soft-yet-sharp cine-feelz by finessing the settings in this S&S thing!
Sometimes the S&S can be hard to tweak for just the right subtle effect, though. So, searching further, I learned the effect I achieved is related to the MTF curves of vintage glass, and I found this very helpful MTF simulation video – and I’ve since added it (too) as a compound node (default off) to my default clip-level node tree.
And I find sometimes using both a dash of the S&S ofx and the compound MTF simulation node serially can produce very pleasing (at least to me) results.
At times, adding a bit of radius sharpen after S&S and MTF simulation can help sell the effect.
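For the curious, here’s my loose mental model of what a soften/sharpen pass does – emphatically NOT Blackmagic’s actual implementation, just the classic unsharp-mask idea in one dimension: blur the signal, then add back (or subtract) the difference.

```python
# Simple 1D box blur (stand-in for whatever blur the real effect uses).
def box_blur(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# amount > 0 sharpens (adds back detail), amount < 0 softens.
def soften_sharpen(signal, amount):
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 1, 1, 1]
print(soften_sharpen(edge, 0.5))   # edge overshoots -> reads "sharper"
print(soften_sharpen(edge, -0.5))  # edge smears -> reads "softer"
```

The overshoot on the sharpened edge is also why overdone sharpening rings and halos so badly – the same mechanism, just cranked too far.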
WARNING: It can also be very useful to apply some grain in the Film Look Creator before tweaking these MTF-like settings; sometimes the result alone looks just plain awful until you slap some grain on top of it.
(I’ve also played around with this MTF Curve dctl, but so far I’m happier with the results from the two other above mentioned methods. I need to play around with this one some more.)
And I guess the MTF effects should go on the clip level, being an effect related to the characteristics of each respective lens, and to adjust high frequency “noise” related to each respective lens / shot, right?
Right?
My Default Timeline-Level Nodes, My “LOOK” Node Tree
And this is the kitchen sink, aka my latest default timeline-level node tree for the “LOOK”:
PSA: As Cullen Kelly says, you always want to be grading “underneath” your LOOK, aka always have your look nodes on the timeline level active (on) when you start grading your primaries on the clip level. (Another thing I wish I would have known sooner. Such a simple thing. Changed the grading game from brute-forcing to predictable for me.)
I don’t have internal grain activated in the Halation FX, nor do I use the DaVinci Film Grain fx much, as I mostly grade my own footage shot with the MCC, which usually creates all the graininess I need all by itself.
Update: I use Grain in the Film Look Creator (FLC) now, when needed. (BTW, how many places can you add grain in DaVinci natively by now? 3 or 4 places? Do you want grain with that grain with that grain?)
Update: Also using a node with joo.works Halation dctls. It can work very well instead of the native Halation ofx, but I find joo.works’ Halation often breaks the illusion on clips with a lot of fine or high frequency detail, e.g. close-ups with hair, eyebrows, beard stubble, etc (regardless of trying to compensate for high frequency details with MTF simulation). I’m no expert, but I do not think the real photochemical halation effect looks like that on high frequency noise – it just looks very “off”, with an overly pronounced orangy-red halo on each hair. No bueno. That’s when I might revert to using the native Halation fx (as the Film Look Creator’s halation seems to be very limited in how it affects the image, with few options to tune it the way I want), which can also help create a more vibrant skin glow – if wanted.
The idea behind the Color Space Transform (CST) IN/OUT sandwiches is to be able to mix in creative LUTs (would that be the “negative” equivalent of Film Print Emulation LUTs? I have questions…) and Film Print Emulation (FPE) LUTs that were not made for the DaVinci Wide Gamut / Intermediate color space that I work in (the node directly in front of the sandwiches takes LUTs for DWG/Int, should I happen to use one made for my actual working space). There are lots of really good (and free) look LUTs available from reputable sources, like directly from most camera makers themselves – just google it (and remember to use the correct settings in the CST sandwich, i.e. the color space and gamma the LUT was intended for).
Update: As of my Timeline Nodes v9, I’m not using CST sandwiches for LUTs and DCTLs intended for different color spaces than the one I’m working in (DWG/Intermediate). I’m just right-clicking the node in question and selecting the correct color space and gamma directly (when going from scene to scene space or to display space only for utility / checking purposes). This does make complex and potentially image-breaking settings a bit more opaque – but it makes for a less messy and bloated node tree.
However, I am now using a CST sandwich for manually selecting footage color space and gamma (often BMD Film Gen 1 / BMD Film) as the first node in the clip level node tree and going from scene / working color space (DWG/Intermediate) to the desired display space (usually r709, Gamma 2.2) as the last node in the timeline level node tree (both not shown in the node tree screenshot above). This means I’ve also disabled automatic color management in the settings.
For setting up a CST sandwich that renders results as intended, you need to know when to apply “Forward OOTF” and “Inverse OOTF” – and when not to. These CST options have always been a mystery to me – until recently when it was explained by Cullen Kelly.
To wit, for my future CST Sandwich reference:
Forward OOTF = ON when going from a scene (or working) color space to a display color space; tone mapping set to luminance mapping with “custom max input” checked and set to 10,000; gamut mapping set to saturation compression.
Inverse OOTF = ON when going from a display (or screen) color space to a scene (working) color space.
Both OOTF = OFF when going from a scene space to another scene (working) color space (tone and gamut mapping off).
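The three rules above, condensed into a tiny lookup I can sanity-check my CSTs against (my own summary of Cullen Kelly’s explanation, not official Blackmagic documentation):

```python
# src / dst are "scene" (camera or working space) or "display" (delivery).
def ootf_settings(src: str, dst: str) -> dict:
    if src == "scene" and dst == "display":
        return {"forward_ootf": True, "inverse_ootf": False}
    if src == "display" and dst == "scene":
        return {"forward_ootf": False, "inverse_ootf": True}
    # scene -> scene (and display -> display): leave both off.
    return {"forward_ootf": False, "inverse_ootf": False}

print(ootf_settings("scene", "display"))  # forward ON, inverse OFF
```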
In addition to my go-to look tool, Cullen Kelly’s beautiful Voyager Pro v2 “taste” LUT pack (worth every single penny), I’m often adding additional creative or “negative” LUTs made for other color spaces to the mix (just a dash, using the “Key Input Gain” in the “Node Key” settings to limit the amount of influence) – like look LUTs from Arri (who doesn’t love Arri colors?). I also generously use the Key Input Gain in the Node Key to “walk back” the grade / look when I’m about to go overboard. More on that later.
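For what it’s worth, my mental model of what Key Input Gain effectively does – a straight mix between the node’s input and its full-strength output (my understanding, not confirmed BMD internals):

```python
# Blend a node's input ("before") with its full-strength output ("after")
# by the key gain (0.0 = node bypassed, 1.0 = full effect).
def keyed_mix(before: float, after: float, key_gain: float) -> float:
    return before * (1.0 - key_gain) + after * key_gain

# Dialing a too-strong LUT back to 30% influence on one channel value:
print(keyed_mix(0.40, 0.70, 0.3))  # ≈ 0.49
```

One slider, and a screaming look becomes a whisper of itself – which is exactly how I use it to walk a grade back.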
And my go-to Film Print Emulation (FPE) is a Fuji 3510 (in the pre-DSLR days, I always shot Sensia and Velvia on my “OG” Nikon FM, so I’m a big-bias Fuji fanboi here) by Cullen Kelly (free download available for both DWG & ACES, fantastic quality), and I also sometimes go crazy with Sony’s Technicolor Collection.
For global (timeline-level) color density I’m using DRT&T’s Film_Density_OFX and Density+ alternately, as I’m still undecided which one I actually prefer.
Dehancer is another great plugin for creating a photochemical film look, but I keep it deactivated in the look node tree as I find myself wasting too much time trying to brute-force a look with it. I’m still not very good at creating predictable results with it and I haven’t really invested enough time to learn how to use it properly – yet.
Sidenote: Is there a DCTL / OFX plugin that ONLY does the FPE “analogue range limiter” part of Dehancer? That would make me happy. At least for a little while. Bueller… Bueller… Anyone?
Also deactivated by default is the Cullen Kelly YouTube export LUT, only turned on for YT delivery. I normally use Vimeo for distribution, and, like I mentioned earlier, I’ve found rec709 / rec709-A provides the best results when publishing on Vimeo, aka looks most true to what I saw when grading after Vimeo has chewed on my upload and spat out their version – YMMV.
There’s also a lazy “Global” node to abuse for anything I need to add as the last step for all clips, e.g. cool or warm it up a bit, take exposure up or down to taste, etc. – a handy node for quick and dirty experimenting with new ideas after I feel satisfied with the general look without touching the main nodes.
My approach for getting the look and feel I want is “less is better”, but anything goes (eff around & find out is the best method yet). As long as I like it, and it doesn’t break things (e.g. unpleasant skintones, artifacting, banding, etc), it’s a keeper. (Nobody cares what goes on in the sausage factory as long as what comes out tastes good).
My timeline look node tree also includes the MONONODES Balance and Clip utility DCTLs (so worth it – incredible productivity boosters – and they keep getting updated with great new features!), plus the native False Color plugin, as five six “utility” nodes: “Zones” (aka DIY EL Zone System – don’t tell EL), Balance, White Clip, Black Clip, and Sat Clip. (UPDATE: I’ve added MONO’s great new “Heatmap” for exposure checks in lieu of an “EL Zone System”, and also this “False Colors” dctl.) By just turning them on and off, I can check exposure and ratios, skin balance, and unwanted clipping across all shots (clips) really fast – turn the utility node on, select “refresh all thumbnails”, go to the Lightbox – and BOOM! – you’re checking all clips in seconds! I might also add this “blanking checker” utility DCTL in the future if I’m working on projects with footage shot on several different cameras.
<rant>I hate “False Color” visualisations, as I have absolutely no connection whatsoever to the color gradients I’m seeing, especially with Blackmagic Design False Colors (green, pink, and 80%+ of the rest of the color scheme is shades of gray? WTF?! IRE? WTF? Get out of here!). I have never understood how somebody could think this is a good and helpful visualisation for exposure. I thought I might be dumb or just missing some important clue, but it turns out smarter and much more experienced people than me also don’t like False Colors. Enter the EL Zone System – finally a “false colors” scheme that makes intuitive sense! I wish this scheme would be implemented (either by contributing to EL’s patent coffers retirement fund or by implementing something similar, see the “Zones” compound node in my node tree) as a native DaVinci Resolve fx AND BMD Video Assist hardware monitoring feature – in addition to the not-so-useful “False Colors” option.</rant>
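To illustrate why stop-based zones click for me where IRE-based False Colors don’t, here’s a toy zone mapper in the spirit of the EL Zone System: stops from middle gray map to a small set of labels. The breakpoints and labels are entirely my own illustration – NOT Ed Lachman’s actual (patented) scheme:

```python
# Map "stops away from middle gray" to a coarse exposure label.
# Breakpoints and labels are made up for illustration only.
def zone_color(stops: float) -> str:
    zones = [(-4.0, "deep shadow"), (-2.0, "shadow"), (0.5, "mid gray"),
             (2.0, "highlight"), (4.0, "near clip")]
    for upper, label in zones:
        if stops <= upper:
            return label
    return "clipped"

print(zone_color(0.0))   # → "mid gray"
print(zone_color(-3.0))  # → "shadow"
print(zone_color(5.0))   # → "clipped"
```

The point: thinking in stops matches how I already expose and light, so the mapping needs no mental translation – unlike IRE percentages.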
UPDATE 1: Cullen Kelly launched the fantastic “Contour” film look builder plugin that is now at the top of my wishlist. That is to say, this is a pro level plugin with a (fair) price tag that I’ll only allow myself to buy if and when someone will actually pay me for grading, lookdev, shooting, anything. Playing around with the free (watermarked) demo version to get some mileage with it for now.
UPDATE 2: I’ve also added the new native DaVinci Resolve “Film Look Creator” (FLC) as a default node with the “clean slate” setting to my default timeline node tree.
UPDATE 3: I’ve also added the 2499 Custom Curves to the default timeline node tree to add a little something something, OFF by default, to experiment and evaluate. So far I feel adding a hint of this beats the Film Look Creator’s split toning results. Felt cute, might delete later.
Update 5: I’ve changed how I use DaVinci Resolve’s native Halation effects. It’s still in the node tree, but deactivated – replaced with the Joo.Works Halation power grade, which looks more like real photochemical film halation to me. (It was made for ACES but works just fine in DWG/Intermediate, with or without a Color Space Transform sandwich, and with or without changing the parameter in the power grade’s CST to DWG/Intermediate. And a heads up: the S (as in small) power grade compound node might render wonky in DWG (I’ve notified Joo.Works about it). Just wire the nodes in the S power grade instance up like in the M and L versions instead, and you’re good to go.) And if I want to abuse halation to make skin subtly glow or irradiate (something I find the Joo.Works Halation doesn’t really do for me), I can activate the old native Halation FX and check if it works for this purpose. (On writing this, I realised the deactivated native Halation FX should probably go on the clip level instead for this clip-centric purpose. I’ve also played around with this Halation dctl, but I’ve had no luck generating the results I was looking for so far.)
Below is another grade / look I made after discovering the interesting 2499 DRT dctls from Juan Pablo Zambrano. If you’re looking for instructions on how to apply them, I found this helpful post from the author over on liftgammagain (why this info isn’t included in the documentation is a mystery – or I am too stupid to find it on github). This was my first try screwing around with them, adding them to my look nodes mix. Looking forward to digging myself into another hole playing around with the 2499 DRT tools some more:
More Grading / Look Examples
Below you’ll find some more color grading examples where I’m going for a super 16mm or 35mm photochemical film aesthetic, not so much a more “modern” or “shot-with-something-in-the-Sony-FX-or-Alpha-camera-family” (aka the “OnlyFans-Look”) – because the vintage “cine” look is my kind of kink. (Although I am a big fan of the look of the movie “The Creator“).
Maybe I’ll share some of my more “modern” color grades shot on the BMPCC4K camera with a Sigma 18-35mm f1.8 Art DC HSM lens in a future post – for now, you can infer from my previous post what that looks like and this screenshot from my live streaming studio:
And this is what the signal looks like before applying my custom studio grade / look LUT in-camera:
Above for reference: this is how it actually looks before you start color grading, color management bypassed in DaVinci Resolve. (BMD Micro Cinema Camera, DNG (RAW) BMD Film G1, Pentax A110 70mm f2.8, K&F Concept ND64 + Tiffen ND7, ICE IR Cut)
Simulating Lens Artifacts
Lately I was also inspired by Barrett Kaufman’s video to screw around with lens distortion simulation effects and I added some chromatic edge distortion to the mix using the Chromatic Aberration Removal FX and the Prism Blur FX. If you want to tinker with this yourself, below is how I set up the nodes in a compound clip. There are probably many better ways to do this, to wire the nodes up correctly and also to achieve more optically correct chromatic distortions – I’m not claiming that I know what I’m doing here – so be warned doing this may severely break things and/or ruin your day. Tweak to taste:
One Little Trick
One intervention heuristic I’ve started applying when I’ve dug myself too far into a hole on a grade / look is to go back, first make another version of the clip and timeline nodes, and then dial everything back 50% to create a new starting point to finesse from. More often than not, the look that results is more pleasing to me (also after sleeping on it).
Because digging myself into a hole when grading is the rule, not the exception for me – I always manage to dig myself down way too deep. Deep down to that special place where sunken cost fallacy and tunnel vision meet up to form a personal echo chamber of mediocrity.
Maybe you too know that special place where the mind starts to go “Wow – this is my best work yet – I’ll just keep adding even more shit and it’ll be perfect!” – and then you take your eyes off the grade for a couple of seconds to watch something else, return to the exact same “perfect” look, and recoil with a visceral “yikes – wtf is this crap?!”.
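The 50% walk-back is dead simple arithmetic, by the way – pull every adjusted parameter halfway back toward its neutral value and restart from there (a generic sketch of the idea, not any particular Resolve control):

```python
# Pull an adjusted parameter partway back toward its neutral/default value.
def dial_back(current: float, neutral: float, amount: float = 0.5) -> float:
    return neutral + (current - neutral) * (1.0 - amount)

# e.g. saturation cranked to 1.8 (neutral = 1.0) comes back to 1.4:
print(dial_back(1.8, 1.0))  # ≈ 1.4
```

Do it across the board and the look keeps its character but loses the shouty excess – a new, saner baseline to finesse from.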
More Examples, Stills Gallery
Accidentally upscaled to 4K or 6K – because that’s how Apple’s pixel density Retina voodoo works when taking screenshots around here. The MCC “only” shoots 1080p. (Would you have noticed it was upscaled and not native if I hadn’t told you?) The footage was shot with various ND’ed (K&F Concept, Kood, Tiffen, or Lee Filters ProGlass IRND) tiny vintage Pentax A110 lenses on the MCC unless stated otherwise. The close-up of the eye was shot with a 49mm +2 diopter from Kood.