
On Color Grading

A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. To achieve that look, one of the things I had to learn from scratch was how to color grade.

Here, I’m sharing a bit about my further adventures (read: digging myself into a hole) in color grading with Blackmagic Design’s DaVinci Resolve Studio (the non-studio version is a free download). It’s an incredible piece of software. (If you’re thinking about ditching Adobe Premiere – just do it! Go for it. I’ve never regretted it for a second.)

This is not a primer on color grading. I’m just writing up (OK, dumping) and sharing what I’ve learned and what works great for me. The following assumes you’re already familiar with some of the key concepts, or at least have a faint interest in them. If not, this post might not be for you (read: will bore the living daylights out of you). However, if you wish to start (or continue) on a color grading learning journey with DaVinci Resolve, Cullen Kelly’s YouTube channel is a wonderful place for that.

What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a rediscovery of my passion for the cinematic image (I did indeed start out studying to become a film director, albeit dropping out after two years – studying it wasn’t for me).

And as a person most likely somewhere on the spectrum, of course I can’t stop digging until I’m getting somewhere interesting – somewhere where I can feel a sense of mastery and understanding of the full stack (lighting, lenses, camera, cinematography, sound design, microphones, color grading, post production), aka being able to produce predictable outcomes – and make those outcomes look cinematic and pleasing (to me). It’s become sort of a new hobby (OK, obsession) of mine (in addition to spreading startup entrepreneurship education, of course). Still digging…

The quick & dirty setup for the above shoot:

  • Camera: A tiny (300g, w 8.23cm x d 7cm x h 6.6cm), old (launched in 2012!), and cheap (I paid less than EUR 600,- for it used on eBay, including an 8sinn cage, handle, and octopus expansion cable) digital super 16mm MFT sensor Blackmagic Design Micro Cinema Camera (MCC). ISO 800 (native), 5600K, shutter at 180 degrees and 24 fps – obviously, exposed to the right (ETTR)
  • Lens: A tiny (this being the largest in the series, but still tiny compared to e.g. an EF lens) and cheap (EUR 81,- on eBay, almost mint) vintage Pentax A110 (s16mm system) 70mm f2.8 fixed-aperture (in the this-lens-system-has-no-internal-iris! sense) lens on an MFT adapter, kitted with a 49mm metal lens hood that sports a 72mm ICE “IR/UV” filter (dirt cheap for the quality – and the MCC needs an IR filter if you’re shooting in any sunlight, unless you like pink and purple blacks) and a Lee 2-stop IRND ProGlass filter. Shot into the sun (I don’t have powerful enough lights to fight the sun) coming in at the far side of the face (actually it was overcast and raining).
  • Lights: Key, Godox UL150 (silent, great value for money) with an Aputure Lantern modifier. Fill, Godox SL60 (not entirely silent, but OK – and dirt cheap for the color-accuracy) with an Aputure Light Dome Mini II softbox & honeycomb / grid modifier.
  • Image Acquisition: Blackmagic Design Film Generation 1 DNG (RAW, not to be confused with BRAW).
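For reference, the 180-degree shutter in the settings above is just a ratio: exposure time equals (angle / 360) / fps, so 180 degrees at 24 fps is the classic 1/48 s. A quick sketch of the relationship (my own illustration, not tied to any camera API):

```python
def shutter_time(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time in seconds for a rotary shutter angle at a given frame rate."""
    return (shutter_angle_deg / 360.0) / fps

# 180 degrees at 24 fps = 1/48 s, the classic "cinematic" motion blur
print(shutter_time(180, 24))  # → 0.0208333... (i.e. 1/48 s)
```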

And with tiny – I mean TINY! (This is the A110 24mm, and the 70mm is much larger, but still tiny.)

Below is a teaser reveal of my “The Creator” franken-rig, super 16mm ghetto style, that the above clip was shot with (the studio shots are on the BMPCC4K). Yes, of course I also couldn’t help digging myself into another hole, obsessively over-engineering my own camera rig to fit my needs (read: feed my compulsions)…

This franken-rig doubles as shoulder and tripod mountable. On the shoulder it helps with stabilizing an otherwise jittery setup, and on the tripod, I can also remote control the camera with tracking (iOS app), and by joystick or 6-axis’ing with the remote (MasterEye).

This rig is now so heavy it’s given me months of seriously unpleasant neck and shoulder pain already. Back to the drawing board, I guess; I’m now thinking about adding a Proaim Flycam steadicam vest.

All of which amounts to an incredibly stupid amount of rigging for an old, HD-only, tiny 300g camera.

Let me know if I should do a video breakdown on the complete rig build.

Since the last post, I’ve changed my Gamma output from 2.4 to 2.2 (because all I deliver for is online consumption, and 2.4 is the old “TV” standard while 2.2 is more in line with phones, tablets, and computer monitors – I’ve been told).
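To see why the 2.2 vs 2.4 choice matters: a display applies roughly out = in^gamma, so the same encoded code value lands darker on a gamma 2.4 display than on a 2.2 one. The numbers below are just the pure power functions (a simplification that ignores each standard’s full transfer curve):

```python
def displayed_luminance(code_value: float, gamma: float) -> float:
    """Simple power-law display model: relative light output for an encoded value."""
    return code_value ** gamma

# The same mid-grey code value comes out noticeably darker on a 2.4 display:
print(round(displayed_luminance(0.5, 2.2), 3))  # → 0.218
print(round(displayed_luminance(0.5, 2.4), 3))  # → 0.189
```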

I’m now also using a “Video Monitor Lookup Table” by Cullen Kelly called “macOS Viewing Transform v1.3“, ensuring that what I’m watching when grading is indeed as good as identical (good enough for non-pros, and still good enough for someone like me who has been working with pixels for 40+ years and can spot by eye if a pixel differs by 1 in any of its RGB values from its neighbours) to what gets delivered (YMMV if you don’t have a P3 Apple display – mine is a P3-calibrated Dell 5K, which uses the same LG panel as the iMac 5K – afaik).

I also use an old Samsung SyncMaster calibrated to rec709 / Gamma 2.2 as the “CleanFeed” to compare to what I’m seeing in the main view in DaVinci Resolve. BTW, can someone explain-it-like-I’m-5 how the DaVinci setting of r709 Gamma 2.2 gets parsed – what the pipeline looks like – when viewing the main view on a P3 display, and with the viewing transform LUT applied? To the best of my knowledge (and brute-forcing experience): if I want the exported video to look like it did when I graded it on the calibrated P3 display when viewed locally on a P3 or Mac/iPad display, I have to export as rec709/rec709-A – and if I want it to look like what I saw on the calibrated rec709 / Gamma 2.2 CleanFeed monitor when playing it locally on same, I have to export it as rec709 / Gamma 2.2. All of which kind of makes sense. Now, the real headaches – the real mind-fucks – start when you upload your videos to video content platforms like YouTube and Vimeo: they all have different ways of interpreting (or ignoring) your color space / gamma metadata when re-encoding – and they don’t all share how to handle this predictably. #FML

I’m also using an iPad Pro with the “DaVinci Monitor” app when grading. Make sure the iPad is on the same WiFi as your Mac running DaVinci Resolve Studio – a stupid & annoying limitation. And don’t get me started on the incredible hassle of having to copy and paste the session access string between devices when using the remote monitor per session… #JFC This should be as easy as a click of the mouse, tap of the finger! I mean it’s all on the same network – I’m an adult, I can handle the security issues, just give me the option to always allow when on the same network. If it’s good enough for Apple AirPlay, it’s good enough for me – and you, Blackmagic Design.

Primaries & Secondaries, My Clip-Level Nodes

Here’s my latest default clip-level node tree for primaries and secondaries – it works very well for me:

This node tree is almost verbatim copied from Cullen Kelly – and that’s because it’s an AWESOME framework that works very intuitively for me (too) – and disciplines me to keep things really simple.

Also of note, I’ve found these Curve LUTs (esp. “Film 2”) get the RAT (ratio) node 90% “right” (to my tastes) out of the box, adjusting the rest depending on the clip – they’re made for DWG / Intermediate and haven’t broken anything so far. (Don’t forget to set the right pivot point for your color space in your RAT node if you want to adjust it manually: DWG / Intermediate = 0.336.)
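For the curious, the pivot is simply the anchor a contrast adjustment rotates around. In a simplified linear form (a sketch of the idea, not Resolve’s actual tone curve math), a ratio/contrast tweak looks like this:

```python
PIVOT_DWG = 0.336  # pivot for DaVinci Wide Gamut / Intermediate, as noted above

def contrast_about_pivot(x: float, contrast: float, pivot: float = PIVOT_DWG) -> float:
    """Scale values away from (contrast > 1) or toward (contrast < 1) the pivot.
    Values sitting exactly at the pivot are left untouched."""
    return (x - pivot) * contrast + pivot

print(contrast_about_pivot(0.336, 1.5))         # → 0.336 (pivot is unchanged)
print(round(contrast_about_pivot(0.5, 1.2), 4)) # → 0.5328 (pushed away from pivot)
```

Setting the wrong pivot means midtones drift brighter or darker every time you touch contrast, which is why the 0.336 value matters for DWG.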

Not shown in my default node tree above: Sometimes I add the Sat Shaper+ DCTL after the SAT HSV node – or instead of it – if I’m not completely satisfied with the saturation (I’m lazy), just to try out some more options. Also, its “vibrancy” setting has sometimes helped me get more pleasing color separation / spread in one simple operation.

Sometimes I also use the TETRA+ DCTL if there are clips with some gnarly color issues that I’m just incompetent to adjust otherwise.

I find myself more in the HDR wheels when adjusting exposure in the EXP node these days. I don’t know if that’s considered Kosher by the “pros” or not, but using the HDR controls for exposure feels so much more intuitive and natural to me – so I don’t really care.

My LOOK Node Tree, Timeline-Level Nodes

And this is my latest default timeline-level node tree for the overall “LOOK”:

Remember, you always want to be grading “underneath” your LOOK, aka always have your look nodes on the timeline level active when you start grading your primaries on the clip level.

BTW, I don’t always have internal grain activated in the Halation DCTL nor do I use the DaVinci Film Grain plugin so often, as I find the MCC is usually creating all the organic grain I need.

The idea behind the CST IN/OUT sandwiches is to be able to mix in creative and Film Print Emulation (FPE) LUTs that were not made for the DaVinci Wide Gamut / Intermediate color space I work in. The node directly in front of the sandwiches does take LUTs made for DWG / Intermediate. I often add more creative or “negative” LUTs made for other color spaces (usually from Arri – who doesn’t love Arri?!) on top of my first go-to taste tool, Cullen Kelly’s beautiful Voyager Pro v2 “taste” LUTs (worth every penny!). Here I’m also using a Fuji 3510 Film Print Emulation (FPE) by Cullen Kelly (free download, available both for DWG & ACES), as well as Sony’s Technicolor Collection FPEs. For film density I’m using DRT&T’s Film_Density_OFX (sometimes I also use Density+). Dehancer is a great plugin for creating the film look, deactivated in this example (it can produce nice results, but I find myself using it less at the moment as I’m still not very good at getting predictable and consistent results with it).
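The sandwich idea itself is just function composition. A toy sketch (nothing Resolve-specific – the transforms here are stand-ins, not real color space math):

```python
# A CST "sandwich": convert out of the working space, apply a LUT built for a
# foreign color space, then convert back into the working space.
def cst_sandwich(pixel, cst_in, foreign_lut, cst_out):
    return cst_out(foreign_lut(cst_in(pixel)))

# Toy stand-ins: cst_in and cst_out invert each other, and the LUT is an
# identity (bypassed). The sandwich then passes the image through untouched --
# which is exactly the point: only the foreign LUT's contribution remains.
result = cst_sandwich(0.42, lambda x: x * 2.0, lambda x: x, lambda x: x / 2.0)
print(result)  # → 0.42
```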

BTW, is there a DCTL / OFX plugin that ONLY does the FPE “analogue range limiter” part of Dehancer? That would make me happy!

Also deactivated by default is the Cullen Kelly YouTube export LUT (I only activate it if delivering for YT; I normally use Vimeo for distribution). I’ve found rec709 / rec709-A provides the best results when publishing on Vimeo, aka looks most true to what I saw when grading after Vimeo has chewed on it and spat out their recompression.

There’s also a lazy “Global” node for anything I need to add as the last step for all clips, e.g. cool or warm it up a bit, take exposure up or down to taste, etc. – a handy node for quick and dirty experimenting with new ideas after I feel satisfied with the general look without touching the main nodes.

My approach for getting the look and feel I want is “less is better”, but anything goes (fuck around & find out!) – as long as it doesn’t break things (e.g. introduces unpleasant artifacting, banding, etc).

As I was writing this, I became aware that I should update my timeline node tree to include the MONONODES Balance and Clip utility DCTLs. I also added the False Color plugin to check exposure. So now I have five additional “utilities” (all turned off) at the end of my timeline-level nodes: False Color (I guess, at least intuitively, this one should be applied earlier in the pipeline to get the “true” exposure, but so far it works for me here at the end too – so whatever), Balance, White Clip, Black Clip, Sat Clip – and just by turning them on and off I can check the exposure, skin balance and potential clipping across all shots (clips) really fast (select “refresh all thumbnails”).

Alternative grade, removed some funny saturation clipping business in the yellowish greens in the bg with a quick & dirty Hue vs Sat curve.

Some more examples

Below you’ll find some more color grading examples where I’m going for the “super 16mm film” aesthetic, intentionally not of the modern “shot-with-something-in-the-Sony-FX-camera-family” variety. (Maybe I’ll share some of my more “modern” and “corporate-friendlier” color grades shot on the BMPCC4K camera and the Sigma 18-35mm f1.8 Art DC HSM lens in a future post – for now, you can refer to my previous post on what that looks like for my streaming studio.)

The Input/Output screenshots below are not ICC color-profiled, so your results may vary a bit:

Above, BMD Micro Cinema Camera, DNG (RAW) Film G1, DaVinci Resolve Studio, color management bypassed (this is how it actually looks before you start color grading!)

Color management on (slight bias to green from a K&F Concept ND filter, I suspect)

Primaries and secondaries graded under the timeline-level LOOK nodes (LOOK nodes deactivated here – notice the bias towards magenta when the LOOK nodes are turned off. I just left this view in for reference; it’s not something I watch much when grading, as the LOOK nodes are always on.)

And timeline-level LOOK nodes on (unpleasant magenta-bias gone – this is why you grade underneath your LOOK, aka with your LOOK nodes on)

BMD Micro Cinema Camera, Pentax A110 f2.8 70mm lens with ND.
BMD Micro Cinema Camera, Pentax A110 f2.8 50mm lens with ND.

BMD Micro Cinema Camera, Pentax A110 f2.8 50mm lens with ND.

Above, some more examples, all shot with various ND’ed Pentax A110 lenses on the MCC (the close-ups of the eye made with a +2 diopter attached).


My Cinematic Streaming Studio v3

By popular demand, I’ve jotted down some details about my updated “Cine” Live Streaming Studio V3. I’ve shared some Lessons Learned at the end of this article that might be helpful if you also want to achieve a more professional or “cinematic” look for your streaming or Zoom calls.

BTW, here’s an UPDATE (07.06.2024):

UPDATE: Above, a screenshot from the current 2023 v3.1 setup.

See for yourself what v3 actually looks like in a Zoom call below and feel free to check out the comprehensive list of the gear I’m currently using to achieve the look on my Kit.co page.

Because my studio setup has a price tag that makes it relevant only for professionals (aka people who make money using their setup in any way or form), it may not be that applicable to your average home office webcam setup.

That’s why I also did some experimenting to come up with a much less expensive (YET FULL-FRAME! ZOMG!) streaming “cine” setup that could be accessible to more people (also, no color grading or LUTs needed). Here are the results of my more budget-friendly setup (less than $500, aka less than the price of about any entry-level cinema camera body alone) in the video below:

You can also find the gear used for this more budget friendly setup on my Kit.co page

The path to getting there

The different versions over the last years. v0 – v0.5 used a Blackmagic Design Micro Cinema Camera with a 12mm Samyang f2 MFT lens, V1 and beyond a Blackmagic Design Pocket Cinema Camera 4K with a Sigma 18-35mm f1.8 Art DC HSM APS-C lens on a Viltrox EF-EOS M2 speed booster adapter.


In the week of the first COVID-19 lockdown in March 2020, a live streaming studio became a necessity for me to keep serving my customers, so I instantly started to build the first version of my studio. It was my first decent attempt at a cinematic webcam that I would actually use to stream live.

Why “cinematic”? Well, partly because I’ve always been interested in cinema – fun fact: I originally studied to become a film director way back when before dropping out and pursuing a completely different career path – and this seemed like a good way to combine passion with “work” again.

Mostly because my customers were already used to paying for the highest quality of live / in-person content – and I shuddered at the thought of serving them online with just a standard shoddy webcam stream. I felt I owed the people who put their trust and money with me the best quality of experience possible when delivering online as a Zoom stream as well.

Before v1 there were also a lot of incremental versions during spring and summer 2020 – lots and lots of trial and error while getting the hang of the very basics.

v0 was just natural light (which obviously doesn’t work if it’s overcast, or in the afternoon/night) or cheap lights and no successful color grading; the results ranged from not good to worse, with the cheap lights destroying any possibility of ever getting a clean grade.

You can e.g. see the terrible green tint of cheap lights in v0.2, and I couldn’t get rid of it completely when grading, so I had to get more color-accurate lights. While obviously more expensive than my green-tinted no-name lights, IMO getting color-accurate lights easily 10x’ed the results.

v0.5 was the first attempt to shape the light cinematically using my first new professional cinema lights. I went for the cheapest very color-accurate lights I could find (Came-TV Boltzen 30W Fresnels) – and thus they were in the end underpowered for my needs (the fresnels didn’t help either for my lighting setup – too narrow a beam). I also started color correcting and grading my own LUT using an actual X-Rite ColorChecker color chart. It took me a couple of months of experimenting and learning just to get to this point.

To me, this was the “now we’re finally getting somewhere” moment. IMO, a pleasing “cine”-like look, but too “flat” for my taste and way too edgy for my target audience and purpose.

v0 through v0.5 were made with the tiny Blackmagic Micro Cinema Camera super 16mm MFT camera on a cheap-ish Samyang MFT 12mm f2 lens. I used a Blackmagic Design UltraStudio Mini Recorder Thunderbolt (only works in a few apps like e.g. Zoom, but delivers better image quality than the HD60 S+: 8-bit 4:4:4 and 12-bit 4:2:2 over HDMI – it also has SDI with 12-bit 4:4:4) and a Corsair Elgato Game Capture HD60 S+ for the rest (emulates a webcam; somewhat lower quality signal, 4:2:0 chroma subsampling) until the release of the ATEM Mini Pro.

The gist of my lighting setup remains the same to date:

A key light close left (my side) of my face, a top/hair light just above my head, and a kicker further behind me to my right side (like a mirror version of the key light). This is the classic three-point film shot setup. I’ve later also added various filler tube lights to help shape and warm up the light on my face.

The 2023 V3 studio lighting setup illustrated above. For some reason, the back light and key light are on the wrong sides in this illustration. Key should be on the right hand side looking at this illustration, back light left.

v1, the “2020” look

This was the first serious “look” upgrading to more powerful (300W key and 150W kicker) professional cinema lights, adding softboxes with diffusers and grids to help shape the light, upgrading to the Blackmagic Pocket Cinema Camera 4k super 35mm camera and adding a pro Sigma Art zoom lens. Why 4K when the streaming standard is still mostly 1080p? Because more information going in equals better quality coming out in the downsampled 1080p signal, and to future-proof my camera setup.

The grading was updated to the new camera and lighting, but it was very similar to v0.5 with no “cinematic” look. Good contrast, shape and skin colors, perhaps – but lacking that little certain something that the brain recognizes as “cinematic”.

I had zero idea what I was doing with color grading at the time. It was also a bit too dark – it looked fine to me, but sometimes participants and customers reported it was a bit dark depending on their monitor and operating system.

Also, there was no “motivation” to where the light was coming from – just a black void. Which was what I was going for at the time, but in hindsight it is very boring to look at over time.

v2, the “2021 – 2022” look

I later tried to refine the shaping using two filler lights, one for the shadow side of the face and one at the front of the face, both set to 3200K color temperature to add some warmth, all the other lights 5600K, the camera set to 4400K by walking it back from the original 5600K to taste by monitoring in studio.

I practiced a bit with grading in DaVinci Resolve (rapidly becoming an industry standard, and a free download; unless you need e.g. “Color Space Transform” (CST) nodes, the free version is awesome – if you do need CST, you’ll need the “Studio” version, which comes free with Blackmagic Design’s cameras or as a paid upgrade) and found some trustworthy educators: Gerald Undone for no-nonsense technical information and grading with a color chart (he also introduced me to the Leeming LUTs), Rob Ellis for simple, affordable – yet beautiful – cinematic looks with lighting setup tutorials, and especially Darren Mostyn and Cullen Kelly if you’re getting seriously into grading in DaVinci Resolve. A big thank you to all of them for making my life easier and way better informed.

CAVEAT EMPTOR: Most videos on youtube on how to grade in DaVinci Resolve are made by click-seeking BUFFOONS with no fundamental knowledge of color science, photochemical film science, and how grading actually works, or even how the technology or software works – let alone any sense of cinematic aesthetics. Thank you, massively lowered barrier to entry with free software and Plato / Dunning–Kruger, I guess. Heuristic: if they never speak about how they color manage (or always grade in linear / r709), avoid avoid avoid!

I was then able to make a more cinematic grading LUT. I also had to adjust the grade after adding a teleprompter to the setup (yes, that added glass has an effect – about half an f-stop darker). I still didn’t know what I was doing, though – an incredible amount of painstaking trial and error (brute-force) followed.

I also added the three lights in the background (approximately 2m further behind me) for “motivation”, aka fooling your brain to think that these are sources that light is coming from (no significant light actually reaches me from these lights, though – photons, inverse square law and all that), and it added some “interestingness” instead of just the dark void. It was still a tad bit too dark to account for variations in participants’ setup, though. In hindsight, I also find it a bit too saturated – especially in the highlights and shadows.
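The inverse square law mentioned above, in numbers (a rough sketch for point-ish sources; real fixtures with modifiers deviate, and the distances here are just illustrative):

```python
def relative_illuminance(distance_m: float, reference_m: float = 1.0) -> float:
    """Illuminance relative to the same source measured at the reference distance."""
    return (reference_m / distance_m) ** 2

# A "motivating" practical ~2 m away delivers a quarter of what it would at
# 1 m, and at 3 m only about a ninth -- which is why those background lights
# contribute no significant light to the subject.
print(relative_illuminance(2.0))            # → 0.25
print(round(relative_illuminance(3.0), 3))  # → 0.111
```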

v3, late 2022 and beyond look

The major hardware change was replacing the individual fill lights with a light tube system that could be remotely controlled in concert (it was a major headache to always have to finesse each fill light manually). I also replaced the top/hair light with two tubes in the same system and built a custom softbox around them. The main reason for this, however, was that the new lights had a bit more power than the previous ones and would enable me to lighten the look or “wrap” the light around further.

I also changed the DOF (depth of field), going from f2.2 to f2.8 to better help stay in focus when naturally moving my head (yes, there is absolutely no autofocus in my cinematic setup). I then relit the whole thing, first cranking the ISO up from 200 to 800 to properly expose to the right (GEEK ALERT: more dynamic range using native ISO, allegedly – though it feels like this should not be applicable when piping the 3G/FHD signal over HDMI to the ATEM Mini; I mean, it’s not a RAW signal – but I guess the more information going in, the more comes out?), and to enable using less power from the lights to achieve the same result (less eye-strain, more flexibility than when the lights are already maxed out at 100%, and of course less energy consumed and less heat generated in the studio).
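The exposure arithmetic behind that relight, expressed as stops (my own back-of-the-envelope helpers, nothing camera-specific): each ISO doubling gains one stop, and light through the lens scales with aperture area, i.e. 1/N².

```python
import math

def iso_stops(iso_from: float, iso_to: float) -> float:
    """Stops gained by raising ISO: each doubling is one stop."""
    return math.log2(iso_to / iso_from)

def aperture_stops(n_from: float, n_to: float) -> float:
    """Stops lost by stopping down: light scales with 1/N^2, hence 2*log2."""
    return 2 * math.log2(n_to / n_from)

print(iso_stops(200, 800))                 # → 2.0 (two stops faster)
print(round(aperture_stops(2.2, 2.8), 2))  # → 0.7 (about two-thirds of a stop slower)
```

So ISO 200 → 800 buys two stops, of which the f2.2 → f2.8 change spends roughly 0.7, leaving the rest for dialing the lights down.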

The major grading change was, aside from adjusting to the new lights and the ISO change, updating to Blackmagic Gen 5 color science (pain in the ass, had to regrade everything – but not as hard as the first time now that I knew a little bit more about how to actually grade and could replicate steps instead of brute-forcing it) and a brighter, less “edgy” or stylized, look that still tries to retain that “cinematic” quality to it.

It is now bright enough to accommodate for the differences in participants’ setups. Some report it is also a more pleasing look than the previous one. I think it is definitely less “edgy”.

V3.1 2023

Update: I’ve since incrementally updated this look to a v3.1 (screenshot at the top of this post) – only by changing the lighting values, the ratio between dark and bright, bringing back a bit more contrast between the light and dark sides of the face for, IMO, more “definition” and interestingness.

Davinci Resolve color grading nodes

My Davinci Resolve node tree for color grading above. (Screenshot showing a more edgy grade than my live LUT; disregard the “Grain” node.) Discontinued – the screenshot above was my own brute-forced node tree that I previously used, described below:

The first node in my grading above is the “Leeming LUT Athena III – Blackmagic Design Pocket 4K – Gen5 Film”. The “Video to Full” node uses the “Leeming LUT Fixie – Video to Full Range”, as I find it adds to the cinematic quality. You can then add your creative cinematic grade in the “Creative LUT” node, either manually or by applying a cinematic LUT. Keep in mind that the creative LUT you apply should expect the same color science you are using. In my case, the “Leeming LUT Athena III – Blackmagic Design Pocket 4K – Gen5 Film” LUT converts the color space to something as close to rec709 as possible, so any LUT expecting a rec709 input will work – but any LUT expecting a different color space input will look like utter garbage. If you find a LUT that you like but it’s for a different color space than you have set up (say Arri Log-C instead of rec709) – or conversely, you’ve found a LUT but it looks like crap when applied and you don’t know what input it expects – you can always add a node with a “Color Space Transform” effect in front of the Creative LUT node and experiment with converting your current color space to different ones to see if you can find something usable for the LUT to use as an input. Oh, and those “Limit Sat” nodes are there to make sure no colors snuck into the highlights or shadows during my grading process (I’m not going to claim I fully know what I’m doing here; there must be ways to do this more professionally), to mimic how photochemical film behaves.

Update: Now my basic node tree looks much more like that of Cullen Kelly’s (the updated version of his node tree, see his newer videos for the changes, e.g. no sharpening or smoothing modifiers in the secondaries anymore, added instead immediately after primaries and secondaries join together).

Update 2: See this update post for my latest node trees.

My previous (then-current) node tree below:

My new default clip-based node tree above. Of note: my skin tone usually renders weirdly, so I have a custom skin correction node to adjust to taste, and I also have an HSV node with only the S channel activated to temper saturation subtly to taste if needed. Noise Reduction is added at the start as an option if needed. The last node after the mix node is for any sharpening or blurring (technically these two types of transforms should not be in the primaries or the secondaries, to avoid potential unwanted artifacting; also, I don’t think this node has any effect on the LUT, and I would leave it turned off when exporting the LUT – as with, obviously, any of the secondaries).

My new default timeline-level nodes establishing the overall look, using taste LUTs from Cullen Kelly’s Voyager Pro pack and optionally the Dehancer plugin when I want to mimic real photochemical film stock when exporting video. Definitely leave Dehancer off when exporting a camera streaming LUT unless you know what you’re doing (aka first turning off all the features that do not translate into a LUT and checking if you’re still happy with the results).

You should also check out the Leeming LUT Pro (IMO the best color transform LUTs for the Blackmagic Cinema cameras out there by far!) before going crazy in Resolve yourself – worth every single buck. Update: Switching to a color managed workflow made the camera-specific Leeming color transform LUTs sort of obsolete for me for my Blackmagic cameras. I do still find that the Leeming Fixies LUT “Video to Full” can be helpful to achieve a better starting point for a cinematic grade when dealing with stuff already in Rec709, like my Canon 5D Mark II DSLR HDMI out, and I do still use the Leeming LUTs when grading for my GoPros.

I now use Cullen Kelly’s Voyager Pro pack to create a look of several “taste” LUTs instead of a single creative LUT (in the “timeline” nodes to make the look apply globally to all clips). They are really, REALLY good – and also made to work perfectly with a color managed DWG / Intermediate workflow (which is not the case with the majority of LUTs out there – so be advised if some other LUT you purchased looks like utter crap with your color management workflow and/or camera).

Now, for the not-so-rocket-science of exporting a new grade as a LUT for the streaming studio camera: refer to the manual or just google how to export a LUT, and remember that if you are using a color managed Wide Gamut / Intermediate workflow in DaVinci, you have to add a Color Space Transform (CST) node or set the output to match your camera’s intended output color space and gamma – which of course varies. E.g. for my BMD PCC4K I use rec709 / Gamma 2.4 (which, I have been informed, used to be the industry standard to deliver in). However, my BMD Video Assist 5″ 12G seems to expect a P3 / D65 color space LUT, so YOLO.

For me and for my camera, before exporting the LUT I have to add a CST node as the last “clip” node, converting the color space from Timeline to rec709 / Gamma 2.4 explicitly, and I also set Tone Mapping to “Luminance”, Gamut Mapping to “Saturation” – and most importantly check the box “Apply Forward OOTF” under advanced. (The image is going to look terrible in DaVinci, but don’t worry – it’s going to be correctly interpreted in the camera or LUT box! Trust me – sort of.)

GEEK ALERT: Theoretically, and to the best of my knowledge, this CST node should not be necessary as I’m already operating in DaVinci Color Managed DWG / Intermediate timeline set to a rec709 / Gamma2.4 output color space, BUT THIS IS THE *ONLY* WAY the exported LUT will look right when imported to my camera for me. Be advised.
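If you’ve never peeked inside a LUT file, the .cube format is just plain text. Here’s a minimal sketch that writes a 1D LUT baking a gamma 2.4 → 2.2 shift – purely to demystify what the exported file contains; your actual LUTs should of course come out of Resolve with the CST setup described above:

```python
def write_gamma_shift_lut(path: str, size: int = 1024,
                          gamma_in: float = 2.4, gamma_out: float = 2.2) -> None:
    """Write a 1D .cube LUT that decodes gamma_in and re-encodes as gamma_out.
    A .cube file is just a header line plus one "R G B" sample per line."""
    with open(path, "w") as f:
        f.write(f"LUT_1D_SIZE {size}\n")
        for i in range(size):
            x = i / (size - 1)
            y = (x ** gamma_in) ** (1 / gamma_out)  # to linear, then re-encode
            f.write(f"{y:.6f} {y:.6f} {y:.6f}\n")

write_gamma_shift_lut("gamma_24_to_22.cube")
```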

See for yourself what v3 actually looks like in live streaming action below and check out the comprehensive list of the gear I’m currently using to achieve the look on my Kit.co page.

Let me know if you have any questions!

Lessons Learned:

  • The quality of the output equals the quality of the input: camera, light, lens, and color grade / look matters equally
  • Use a camera with enough dynamic range, something with RAW / Log or similar capabilities instead of a baked-in r709 output, to be able to deliver a cinematic image at all
  • Only use lights that are very color-accurate
  • Diffusion is a prerequisite for that cinematic, pleasing, soft shadows “wrapped-around-the-skin” light, see also grid / honeycomb
  • All you need to know about diffusion is that you can either use a white shower curtain, a sheet of bleached muslin – or add a more productified version called a “soft box” (and add a “honeycomb grid” to avoid light spill) on it to your main lights for your light to become wonderfully diffused – or google “roger deakins cove lighting” for an alternative soft and wonderful wrap-around technique (that may or may not work in your studio setting)
  • All you need to know about Aputure Lightstorm lights vs Godox VL is that Godox is cheaper and provides the same quality of light (or even better) for this type of indoor, static studio use (today, the latest updated Aputure Amaran 100d/200d might be a better budget option, though – and let’s face it – The Aputure “Sidus Link” app is just fantastic – I love it! I can’t remember ever using the Godox app after testing it once.)
  • Where you put your lights MATTERS A LOT – study what they’ve been doing in Hollywood for eons, experiment with placement and angles, and take the time to “f*ck around and find out” what the optimal light positioning is for achieving the look you want – it’s going to pay off massively (get a stand-in doll, a friend, or your better half to stay in shot while you move the lights around – and a video assist monitor you can hold in your hand (wireless or cable) to immediately review the results instead of having to run back to the camera constantly)
  • Optionally, use my ghetto wireless monitoring solution to monitor the camera output with your smartphone or tablet over wifi (not good enough to pull focus with, but a good enough signal to evaluate a lighting setup – and for around $20 on AliExpress, can you really complain?)
  • Motivated lighting is a thing – and you might want to consider it in your setup
  • Use a lens that will support the creative vision of your output (shocker: all lenses are different) – probably not a slow “kit lens”, more likely an f2.8 or faster prime lens (or a fixed t/f-stop zoom lens, e.g. the Sigma 18-35mm f1.8 Art DC HSM for APS-C like I’m using in my v3 setup, which gives you awesome image quality and some flexibility in field of view and depth of field, at a very reasonable price for the pro quality)
  • All you need to know about the f-stop factor is that the lower the f or t number, a) theoretically the less light is needed to light the scene for a decent exposure (so less expensive, lower-wattage lights can suffice), and b) theoretically the shallower the depth of field (blurred-out background / separation of foreground and background / bokeh) you can achieve
  • Crop factors, full-frame vs MFT vs APS-C, etc are all words you are going to learn to hate – it’s already a terrible stupid mess, and adding a speedbooster to the mix will just kill your will to live and make you give everything up (well, it’s not too hard to actually re-calculate it but it is such a killjoy for me – if what I see on the screen works for me, fuck it, I’ll shoot with it)
  • All you need to know about crop factors is to take the lens in question and mount it on your camera – if it fits (sometimes an adapter is needed) and if it looks good to you (no serious vignetting, you get the field of view you need, the depth of field, the smoothness or sharpness, the character you’re looking for) then it’s a keeper (screw the calculations – unless you’re working on a real movie production set and it matters) – oh, and never get into a discussion of crop factors and M43 vs full frame online or offline ever (a very bad time is to be had if you indulge)
  • All you need to know about speed boosters (actually “tele compressors”) is that, if you add one, whatever mm and t/f stop is printed on your lens is now out the window (don’t worry, it’s all fine if the image and field of view now coming out of the lens look good to you) and you need to adjust the lighting accordingly to taste (although feel free to cheat by using Zebras and False Color, don’t let the zealots tell you otherwise)
  • All you need to know about Metabones vs Viltrox “speed boosters” is that the Viltrox is almost an order of magnitude cheaper and will most likely be fine for your “cine” streaming studio or office webcam setup – I bought FIVE new Viltroxes (important: the newer EF-M2 II version) on ebay for the price of a single used Metabones adapter
  • Contrary to common “cine” aesthetics, apply high sharpening in-camera if you intend to stream (compression garbles details so you want to have more details going in upstream), and no – any sharpening added in DaVinci will not transfer to your exported LUT
  • If you’re not happy with what your image looks like from your camera using any of the manufacturer’s settings / looks, you’ll probably need to know what LUT files are and how to properly use them
  • If your camera doesn’t support LUTs (or only in limited ways) you’re not necessarily out of luck – you could use a LUT box between the output (camera) and the input (capture card/box), provided that the signal coming out of your camera still has enough information in it (see e.g. log and chroma subsampling above)
  • When using someone else’s LUT, you need to know which color space it was intended for; many LUTs expect an r709 input to be able to transform into whatever look they sold you on, some don’t – like the film looks supplied with DaVinci – and some tell you in their file name or in the first couple of lines of the file (use a text editor to open the .cube file). It boggles my mind that there is no standard metadata tag in the LUT file that lets the recipient device or software automatically recognize which color space the LUT is intended for
  • WARNING: If you’re going to create your own grades / look LUTs, be aware that you need to grade using the actual output signal. E.g. the signal coming directly out of the camera via usb-c to the Mac/PC is going to be different from the signal captured via an Elgato 4K stick or a Blackmagic Ultrastudio Mini Recorder, because they all have different ways of interpreting and compressing it – so use a recording of the output of your actual signal chain setup when grading (otherwise you will get nasty surprises and unusable results). E.g. I needed to create different grades / LUTs for my ATEM Mini Pro (current setup), Elgato HD60 S+, Elgato Camlink 4K, and BMD Ultrastudio Mini Recorder (and, to complicate matters further, also for each of the different cameras in combination with the respective capture solutions to match the look across them)
  • Color grading is an art, not a science – but a “cinematic” grade takes basic principles from photochemical film science into account, AND you have to be aware that several aspects of a real photochemical film look – like “Halation”, “Grain”, and “Glow/Bloom” effects – do not translate into a LUT (I would disable all plugin effects in DaVinci when exporting a LUT, except a Color Space Transform if you’re using one at the end of the node chain to convert to your camera or monitor color space). Also, even if you could, adding grain to a streaming input would probably garble the compressed output in a way that you do not want
  • Learn how to use color management and Wide Gamut / Intermediate workflow if you intend to grade with DaVinci Resolve – it will take a lot of guessing and headaches out of the equation, making grading a faster and a much more fun and predictable process (thank me later)
  • Forget brute-forcing with a hundred million custom nodes per clip – use something more like this simple node tree for clip-level grading (primaries: Exposure, Ratio/Contrast, and Balance, plus secondaries as parallel nodes) as your go-to starting point, see the screenshots of my node tree above, and consider using a separate timeline-level node tree for your overall “look”
  • WARNING! If you are on an Apple Mac, you need to know this before stepping into grading with DaVinci Resolve: if you don’t change a certain setting, there will forever be a difference between what you see in DaVinci and what the exported video or grade looks like! WARNING! (I wish I had known sooner! It would have taken away 90% of the painstaking trial and error; googling for this artifact gives you zero answers – only entitled industry asshats claiming you need better display hardware – tl;dr you don’t – it’s an Apple Mac software thing – obviously – and BMD has finally addressed it.) Update: using the latest Resolve and a color managed DWG / Intermediate workflow, I now get identical WYSIWYG results to my calibrated r709 Gamma 2.2 clean view monitor when exporting, and with DaVinci Resolve > v18.5 I don’t think this is much of an issue anymore. I’m also using a video monitor lookup table, Cullen Kelly’s “macOS Viewing Transform v1.3”, ensuring that what I’m watching when grading is as good as identical to what gets delivered (only for Apple displays)
  • Save yourself even more pain and time by investing in an X-Rite ColorChecker Video chart to properly white balance and check exposure – record it at the start of EVERY SHOT / setup, just in case! (also found on my kit.co page)
  • Update on my color grading post here
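Since a couple of the bullets above hand-wave the crop factor and speed booster arithmetic (the “it’s not too hard to actually re-calculate it” part), here’s what that back-of-the-envelope math looks like in Python. The 1.5x APS-C crop and 0.71x booster factor are just illustrative assumptions – check the specs of your own gear:

```python
# Rough crop factor / focal reducer ("speed booster") math.
# Illustrative only -- verify against your own sensor and adapter specs.

def full_frame_equivalent(focal_mm, crop_factor, booster_factor=1.0):
    """Full-frame-equivalent focal length of a lens on a cropped sensor,
    optionally mounted behind a focal reducer."""
    return focal_mm * booster_factor * crop_factor

def boosted_f_stop(f_stop, booster_factor):
    """A focal reducer concentrates the light: the effective f-stop scales
    with the booster factor (a 0.71x booster gains roughly one stop)."""
    return f_stop * booster_factor

# Sigma 18-35mm f/1.8 on an APS-C body (~1.5x crop), no booster:
print(full_frame_equivalent(18, 1.5))        # 27.0 -> ~27mm equivalent wide end
# Same lens behind an assumed 0.71x focal reducer:
print(full_frame_equivalent(18, 1.5, 0.71))  # ~19.2mm equivalent
print(boosted_f_stop(1.8, 0.71))             # ~f/1.28 effective
```

Which is exactly why “screw the calculations” is a defensible position – the numbers only tell you what the image will already show you on the monitor.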
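And since the LUT bullet above suggests opening .cube files in a text editor to hunt for color space hints, here’s a minimal Python sketch of that peek. The .cube format only standardizes a few keywords like TITLE and LUT_3D_SIZE; any color-space note lives in free-form “#” comment lines, which is exactly the missing-metadata problem I’m complaining about:

```python
# Minimal sketch: read the header of a .cube LUT file and surface whatever
# the author left behind (title, 3D size, free-form comments).

def read_cube_header(path, max_lines=20):
    """Return title, LUT_3D_SIZE, and comment lines from the top of a .cube file."""
    info = {"title": None, "size": None, "comments": []}
    with open(path) as f:
        for i, line in enumerate(f):
            if i >= max_lines:
                break  # header keywords live at the top of the file
            line = line.strip()
            if line.startswith("#"):
                info["comments"].append(line.lstrip("# ").strip())
            elif line.upper().startswith("TITLE"):
                info["title"] = line.split(None, 1)[1].strip('"')
            elif line.upper().startswith("LUT_3D_SIZE"):
                info["size"] = int(line.split()[1])
    return info
```

If a LUT’s intended input color space is documented at all, it will usually show up in the title or in one of those comment lines – otherwise you’re back to guessing r709 and eyeballing the result.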

Standard
innovation, Lessons Learned, Rants

Talking about corporate innovation and the pandemic

Recently I was on Fabian Böck‘s BOECKBX podcast and talked a bit about corporate innovation and the effects of the pandemic on businesses – including my own.

Have a look and listen.

TL;DR

Based on my own experience working in outsourced innovation with governments, organisations and some of the largest companies in the world on and off between 1996 and 2010, I do not think it is a good idea to outsource (business model) innovation.

That’s the whole premise of my company, +ANDERSEN & ASSOCIATES.

From the +ANDERSEN mission statement:

“… we enable companies to manage and run innovation for tomorrow inside their own company, using their own people today.

Because innovation in your company will never happen by outside consultants.

It has to come from your most valuable assets – the people you already have on the inside.”

corporate entrepreneurship, innovation, Lessons Learned

An update from +ANDERSEN & ASSOCIATES

As some of you might know, I also run a company called +ANDERSEN & ASSOCIATES on the side. We help medium and large companies and organisations around the world get serious about innovation and delivering actual results instead of PowerPoints.

I thought I’d take a couple of minutes to update you on how we’ve been adapting to better meet your needs in these trying times.

tl;dr – We now offer all our programs and services also via remote (in the best possible quality available today) as an option for you.

Whether you’re back in the regular office or still working remotely from home, we’re providing our popular programs directly to you in the conferencing tools you already use, in the quality you’ve come to expect.

Everything you’re seeing in this video is taken directly from our streaming studio. This is how it actually looks in your normal conferencing app.

And for the highest quality possible over the Internet today, we also offer a completely new direct point-to-point streaming solution – in full HD, with no dropped frames, no screen freezes, and no audio issues.

Also, by popular demand we’re now offering moderation and management of internal and public online events for those of you looking for quality outsourced alternatives.

For those of you who are new to remote working and online collaboration, have no fear, we of course offer you training, both for your managers and for all of the program participants in advance.

Thank you for your time, stay safe.

Vidar 
