cine, Lessons Learned

On my Color Grading in 2024

Second grade / look pass

A while back, I posted about my Live Streaming Studio V3.1 setup, because many people wanted to know what gear I’m using and how I get the “cinematic” look on a live Zoom call. And to achieve that look, one of the things I had to learn from scratch – in addition to operating a digital cinema camera and using lighting properly – was how to color grade.

For reference: Non-graded image without look and effects applied, DNG RAW (BMD Film Gen 1) to rec709 / Gamma 2.2, exposed to the right (ETTR).

Below, after some grading, applying a ton of stuff for the look, and even throwing in some effects to emulate anamorphic edge distortions and a fake film gate crop for good measure:

BTW, do you need help with creating a great custom “look” for your film or video production, your camera, or your podcast or stream? Give me a ping, and let’s talk. I wasted a silly amount of time and money making all kinds of mistakes starting out, so I’m happy to help you avoid that.

In this post, I’m sharing a bit about my further adventures (aka digging myself ever deeper into a hole) with color grading in Blackmagic Design’s DaVinci Resolve Studio (the non-Studio version is a free download). It’s an incredible piece of software, by the way. If you’re thinking about ditching Adobe Premiere – just do it! It’s a joy to work with, and I’ve never regretted it for a second.

This is not a primer on color grading. It’s just me writing up and sharing what I’ve learned that works best for me so far. If you too wish to start (or continue) on a color grading learning journey with DaVinci Resolve, Cullen Kelly’s YouTube channel is probably the best place for that.

The following assumes you’re already familiar with some of the concepts of color grading – or at least have a faint interest in how to create a cinematic image with digital tools. If not, fair warning: this post is probably not for you (it will bore the living daylights out of you).

What started as a necessity during the lockdown era (aka building a professional-looking online tele-presence) turned into a path of rediscovery, reigniting my passion for the cinematic image. Fun fact: You might not know that I actually started out studying cinema with the goal of becoming a film director – but I dropped out after only two years, as university and studying film wasn’t really my thing – and then the commercial Internet happened, and the rest is history.

As a person most likely somewhere on a spectrum of some kind, of course I can’t, won’t, and don’t stop digging until I get somewhere interesting – somewhere I can feel a sense of mastery and understanding of the full stack (in this case lighting, lenses, physics, camera sensors, cinematography, color grading, look development – everything that goes into the sausage factory of a nice digital “cine” image), aka being able to produce predictable outcomes and make those outcomes look cinematic and pleasing – to me. It’s become sort of a new time sink obsession hobby of mine (in addition to helping other startup founders go farther faster, of course).

And I’m still digging…

Read on below for this long non-startup (but hey – still full of tech & geekery) post.

A lot going on under the hood here.


cine, Lessons Learned, video

My Cinematic Streaming Studio v3.1

By popular demand, I’ve jotted down some details about my updated “Cine” Live Streaming Studio V3. I’ve shared some Lessons Learned at the end of this article that might be helpful if you also want to achieve a more professional or “cinematic” look for your streaming or Zoom calls.

UPDATE 1: Above, a screenshot from my current 2024 v3.1 setup.

UPDATE 2: Latest changes to how I color grade to get the “cinematic” look

And if you could use some help with putting your studio, podcasting, or event setup together (which camera, lenses, lighting, sound, live switchers, etc. are right for you, and how to cable, set up, and install it all) – or creating that special signature “look” for your videos or streams – give me a ping, and let’s talk about it. I spent an idiotic amount of time and money doing things all sorts of wrong in the beginning, so I’m happy to help you not do the same.

See for yourself in the video below what v3 actually looked like when recorded, and check out the comprehensive list (constantly updated) of the gear I’m currently using to achieve the look on my Kit.co page.

Here’s what v3.1 looks and sounds like in an actual real-life Zoom call interview situation:

 

Of note: compression smears the image a whole lot (that’s why I have the camera output set to be so sharp – more detail in = more detail out when compressed in Zoom), and depending on the conferencing software and the operating system, things happen to your saturation and gamma (here desaturated and less contrast-y – which makes me think it was not captured on a Mac).

Fun fact: One of the other changes from v3 to v3.1 is the choice of microphone. Can you hear it? (One costs 1.600,- Euros, the other 117,-). I’m actually now using the cheap-ass microphone(!) instead of my (beloved) Neumann. Check out my v3.1 kit.co page for the deets.

My studio setup has a price tag that makes it mostly relevant for professionals (aka people who make money using their setup in any shape or form) – or crazy people – so it may not be that applicable to your average home office webcam setup.

That’s why I also did some experimenting to come up with a much less expensive (YET FULL-FRAME! ZOMG!) streaming “cine” setup that could be accessible to more people (also, no color grading or LUTs needed). Here are the results of my more budget-friendly setup (less than $300 for the camera, cheap-ass lighting, decent budget sound) in the video below:

You can also find the gear used for this more budget-friendly setup on my Kit.co page.

The path to getting there

The different versions over the last years: v0 – v0.5 used a Blackmagic Design Micro Cinema Camera with a 12mm Samyang f2 MFT lens. v1 and beyond used a Blackmagic Design Pocket Cinema Camera 4K with a Sigma 18-35mm f1.8 Art DC HSM APS-C lens on a Viltrox EF-EOS M2 speed booster adapter.

 

A live streaming studio became a necessity in the first week of the COVID-19 lockdown in March 2020, to keep serving my customers. I instantly started to build the first version of my tele-presence studio. Thus my quest to achieve a cinematic-looking output began.

Why “cinematic”? Well, partly because I’ve always been interested in cinema. Fun fact: I originally studied to become a film director way back when, before dropping out and pursuing a completely different career path – and this seemed like a good way to combine passion with “work” again.

Also because my customers were already used to paying for the highest quality of live / in-person content – and serving them online with just a standard shoddy webcam wasn’t an option. I felt I owed the people who put their trust and money in me the best quality of experience possible, also when delivering online on Zoom.

Before v1 there were also a lot of incremental versions during spring and summer 2020 – lots and lots of idiotic trial and error while trying to get the hang of the very basics.

v0 was just natural light (which obviously doesn’t work when it’s overcast or in the afternoon/at night) or cheap lights and no real color grading, with results ranging from not good to worse. Using cheap lights also destroyed any possibility of ever getting a clean grade to begin with.

You can e.g. see the terrible green tint of the cheap lights in v0.2, and since I couldn’t get rid of it completely when grading, I had to get more color-accurate lights. While obviously more expensive than my green-tinted no-name lights, IMO getting color-accurate lights easily 10x’ed the results.

v0.5 was the first attempt to shape the light cinematically using my first professional cinema lights. I went for the cheapest yet very color-accurate lights I could find (Came-TV Boltzen 30W Fresnels) – which in the end were underpowered for my needs, and the Fresnels’ narrow beam didn’t help my lighting setup either. I also started color correcting and grading my own LUT using an actual X-Rite ColorChecker color chart. It took me a couple of months of experimenting and learning just to get to this point.

To me, this was the “now we’re finally getting somewhere” moment. IMO, a pleasing “cine”-like look, but too “flat” for my taste and way too edgy for my target audience and purpose.

v0 through v0.5 were made with the tiny Blackmagic Micro Cinema Camera (a Super 16mm MFT camera) on a cheap-ish Samyang MFT 12mm f2 lens. I used a Blackmagic Design UltraStudio Mini Recorder Thunderbolt (it only works as a webcam in a few apps like Zoom, but delivers better image quality than the HD60 S+: 8-bit 4:4:4 and 12-bit 4:2:2 over HDMI, plus SDI with 12-bit 4:4:4) and a Corsair Elgato Game Capture HD60 S+ for the rest (emulates a webcam, somewhat lower quality signal, 4:2:0 chroma subsampling) until the release of the ATEM Mini Pro.

The gist of my lighting setup remains the same to date:

A key light close left (my side) of my face, a top/hair light just above my head, and a kicker further behind me to my right side (like a mirror version of the key light). This is the classic three-point film shot setup. I’ve later also added various filler tube lights to help shape and warm up the light on my face.

The 2023 v3.1 studio lighting setup illustrated above. ATTENTION: For some reason, the back light and key light are on the wrong sides in this illustration. Looking at the illustration, the key should be on the right-hand side and the back light on the left.

v1, the “2020” look

This was the first serious “look” upgrade: moving to more powerful (300W key and 150W kicker) professional cinema lights, adding softboxes with diffusers and grids to help shape the light, upgrading to the Blackmagic Pocket Cinema Camera 4K (a Four Thirds sensor camera), and adding a pro Sigma Art zoom lens. Why 4K when the streaming standard is still mostly 1080p? Because more information going in equals better quality coming out in the downsampled 1080p signal, and to future-proof my camera setup.
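The “more information in = better 1080p out” idea is essentially supersampling; here’s a toy sketch (pure Python, hypothetical pixel values – not my actual pipeline) showing how a 2x downscale averages out per-pixel noise:

```python
import random

def downscale_2x(pixels: list[list[float]]) -> list[list[float]]:
    """Average each 2x2 block into one pixel (a naive box filter)."""
    h, w = len(pixels), len(pixels[0])
    return [[(pixels[y][x] + pixels[y][x + 1] +
              pixels[y + 1][x] + pixels[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

random.seed(42)
# A flat mid-grey (0.5) patch with +/-0.1 random sensor noise:
noisy = [[0.5 + random.uniform(-0.1, 0.1) for _ in range(8)] for _ in range(8)]
flat = downscale_2x(noisy)
# Each downscaled pixel averages four noisy samples, which halves the
# noise standard deviation – one reason a 4K capture downsampled to
# 1080p looks cleaner than a native 1080p capture.
```

Real scalers use better filters than a box average, but the noise-averaging argument is the same.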

The grading was updated for the new camera and lighting, but it was very similar to v0.5, still without a “cinematic” look. Good contrast, shape, and skin colors, perhaps – but lacking that certain little something the brain recognizes as “cinematic”.

I had zero idea what I was doing with color grading at the time. It was also a bit too dark – it looked fine for me, but sometimes participants and customers reported it was a bit dark depending on their monitor, device, and operating system.

Also, there was no “motivation” for where the light was coming from – just a black void. Which was what I was going for at the time, but in hindsight it is very boring to look at over time.

v2, the “2021–2022” look

I later tried to refine the shaping using two filler lights, one for the shadow side of the face and one at the front of the face, both set to 3200K color temperature to add some warmth, with all the other lights at 5600K and the camera set to 4400K – walked back from the original 5600K to taste while monitoring in the studio.

I practiced a bit with grading in DaVinci Resolve (rapidly becoming an industry standard, and a free download – unless you need e.g. “Color Space Transform” (CST) nodes, the free version is awesome; if you do need CST, you’ll need the “Studio” version, which comes free with Blackmagic Design’s cameras or as a paid upgrade) and found some trustworthy educators: Gerald Undone for no-nonsense technical information and grading with a color chart (he also introduced me to the Leeming LUTs), Rob Ellis for simple, affordable – yet beautiful – cinematic looks with lighting setup tutorials, and especially Darren Mostyn and Cullen Kelly if you’re getting seriously into grading in DaVinci Resolve. A big thank you to all of them for making my life easier and way better informed.

CAVEAT EMPTOR: Most videos on YouTube on how to grade in DaVinci Resolve are made by click-seeking (or well-meaning but still not knowledgeable) BUFFOONS with no fundamental knowledge of color science, photochemical film science, or how grading actually works – or even how the technology or software works – let alone any sense of cinematic aesthetics. Thank you, massively lowered barrier to entry with cheaper cameras and free software, Plato, Dunning–Kruger, and all that. Heuristic: if they never speak about how they color-manage – if they don’t color-manage in any way or form – avoid, do not watch, do not read!

And after a while I was then able to make a more cinematic-looking LUT. I also had to adjust the grade after adding a teleprompter to the setup – yes, that added glass has an effect: about half an f-stop less light. I still didn’t know what I was doing, though – and an incredible amount of painstaking (brute-force) trial and error followed.

I also added the three light tubes in the background (approximately 2m behind me) for “motivation”, aka fooling your brain into thinking that these are the sources the light is coming from (no significant light actually reaches me from them, though – photons, the inverse square law, and all that), and it added some “interestingness” instead of just the dark void. It was still a tad too dark to account for variations in participants’ setups, though. In hindsight, I also find it a bit too saturated – especially in the highlights and shadows.
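The inverse square law mentioned above is easy to sanity-check with a quick sketch (the distances here are hypothetical ballpark figures, not measurements from my studio):

```python
# Illuminance from a (roughly) point source falls off with the
# square of the distance between the source and the subject.

def relative_illuminance(distance_m: float, reference_m: float = 1.0) -> float:
    """Illuminance relative to the same source placed at the reference distance."""
    return (reference_m / distance_m) ** 2

print(relative_illuminance(1.0))  # key light at 1 m -> 1.0 (the reference)
print(relative_illuminance(2.0))  # tube at 2 m -> 0.25, a quarter of the light
```

Double the distance, quarter the light – which is why low-powered background tubes a couple of meters away contribute essentially nothing to the exposure on my face.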

v3, late 2022 and beyond look

The major hardware change was replacing the individual fill lights with a light tube system that could be remotely controlled in concert (it was always a major headache having to finesse each fill light manually). I also replaced the top/hair light with two tubes in the same system and built a custom softbox around them. The main reason for this, however, was that the new lights had a bit more power than the previous ones and would let me lighten the look or “wrap” the light around further.

I also changed the DOF (depth of field – went from f2.2 to f2.6) to help me stay in focus when naturally moving my head (yes, there is no autofocus in my cinematic setup; get a Sony Alpha or FX family camera instead if AF is important to you). I then relit the whole thing, first cranking the ISO up from 200 to 800 to properly expose to the right (GEEK ALERT: more dynamic range using the native ISO) and to enable using less power from the lights to achieve the same result (less eye strain, more flexibility than lights already maxed out at 100%, and of course less energy consumed and less heat generated in the studio).
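For the record, the exposure bookkeeping behind those changes is simple logarithms; a quick sketch using the f-numbers and ISOs mentioned above (the formulas are standard photography math, the interpretation is mine):

```python
import math

def aperture_stops(f_from: float, f_to: float) -> float:
    """Stops of light lost when stopping down from f_from to f_to
    (one stop per doubling of the square of the f-number)."""
    return 2 * math.log2(f_to / f_from)

def iso_stops(iso_from: int, iso_to: int) -> float:
    """Stops of sensitivity gained when raising ISO (one stop per doubling)."""
    return math.log2(iso_to / iso_from)

lost = aperture_stops(2.2, 2.6)   # ~0.48 stops less light at f2.6
gained = iso_stops(200, 800)      # exactly 2 stops more sensitivity
print(f"net exposure change: {gained - lost:+.2f} stops")
```

So the ISO bump more than pays for the smaller aperture, leaving roughly a stop and a half of headroom to dim the lights.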

The major grading change was, aside from adjusting to the new lights and the ISO change, updating to Blackmagic Gen 5 color science (a pain in the ass, as I had to regrade everything – but not as hard as the first time around, now that I knew a little more about how to actually grade and could replicate steps instead of brute-forcing them) and a brighter, less “edgy” or stylized look that still tries to retain that “cinematic” quality.

It is now bright enough to accommodate the differences in participants’ displays. Some report it is also a more pleasing look than the previous one. I think it is definitely less “edgy”, more rounded.

v3.1, 2023

UPDATE 1: I’ve since incrementally updated this look to v3.1 (screenshot at the top of this post) – only by changing the lighting values, the ratio between dark and bright, bringing back a bit more contrast between the light and dark sides of the face for (IMO) more “definition” and interestingness.

UPDATE 2: Latest changes to how I color grade to get the “cinematic” look in 2024

Davinci Resolve color grading nodes

My previous DaVinci Resolve node tree for color grading above (screenshot showing a more edgy grade than my live LUT; disregard the “Grain” node). Discontinued – this was my own brute-forced node tree, described below:

The first node in my grading above is the “Leeming LUT Athena III – Blackmagic Design Pocket 4K – Gen5 Film”. The “Video to Full” node uses the “Leeming LUT Fixie – Video to Full Range”, as I find it adds to the cinematic quality. You can then add your creative cinematic grade to the “Creative LUT” node, either manually or by applying a cinematic LUT.

Keep in mind that the creative LUT you apply should expect the same color science you are using. In my case, the “Leeming LUT Athena III – Blackmagic Design Pocket 4K – Gen5 Film” LUT converts the color space to something as close to rec709 as possible, so any LUT expecting a rec709 input will work – but any LUT expecting a different color space input will look like utter garbage. If you find a LUT that you like but it’s for a different color space than you have set up (say Arri Log-C instead of rec709) – or, conversely, you’ve found a LUT that looks like crap when applied and you don’t know what input it is based on – you can always add a node with a “Color Space Transform” effect in front of the Creative LUT node and experiment with converting your current color space to different ones, to see if you can find something usable for the LUT to use as an input.

Oh, and those “Limit Sat” nodes are there for me to make sure no colors snuck into the highlights or shadows during my grading process, to mimic how photochemical film behaves (I’m not going to claim I fully know what I’m doing here; there must be ways to do this more professionally).
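Since a .cube file is plain text, you don’t even need a text editor to peek at it. Here’s a small sketch that prints any title or comment lines from the top of a LUT file – often the only place a creator hints at the expected input color space (the format has no standardized tag for it):

```python
def lut_header_hints(path: str, max_lines: int = 20) -> list[str]:
    """Collect TITLE and comment lines from the top of a .cube LUT file.
    The .cube format has no standardized color-space tag, so this is
    best-effort: hints, if any, live in comments or the TITLE line."""
    hints = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for _, line in zip(range(max_lines), f):
            line = line.strip()
            if line.startswith("#") or line.upper().startswith("TITLE"):
                hints.append(line)
    return hints
```

Run it against a LUT (say, a hypothetical `my_creative_look.cube`) and you might see something like `# Expects Rec.709 input` – or nothing at all, in which case the CST-node experimentation described above is your fallback.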

Update: Now my basic node tree looks much more like Cullen Kelly’s (the updated version of his node tree – see his newer videos for the changes, e.g. no sharpening or smoothing modifiers in the secondaries anymore; they are added immediately after the primaries and secondaries join together instead).

Update 2: See this update post for my latest node trees.

What my previous node tree looked like below:

My new default clip-level node tree above. Of note: my skin tone usually renders weirdly, so I have a custom skin correction node to adjust to taste, and I also have an HSV node with only the S channel activated to temper saturation subtly if needed. Noise reduction is added at the start as an option, and the last node after the mix node is for any sharpening or blurring (technically these two types of transforms should not be in the primaries or the secondaries, to avoid potential unwanted artifacting; I also don’t think this node has any effect on the LUT, and I would leave it turned off when exporting the LUT – as with, obviously, any of the secondaries).


My new default timeline-level nodes establishing the overall look, using taste LUTs from Cullen Kelly’s Voyager Pro pack and, optionally, the Dehancer plugin when I want to mimic real photochemical film stock when exporting video. Definitely leave Dehancer off when exporting a camera streaming LUT, unless you know what you’re doing (aka first turning off all the features that don’t translate into a LUT and checking whether you’re still happy with the results).

You should also check out the Leeming LUT Pro (IMO the best color transform LUTs out there for the Blackmagic Cinema cameras) before going crazy in Resolve yourself – worth every single buck. Update: Switching to a color-managed workflow made the camera-specific Leeming color transform LUTs obsolete for me and my Blackmagic cameras. I do still find that the Leeming Fixies “Video to Full” LUT can be helpful for a better starting point for a cinematic grade when dealing with footage already in rec709, like the HDMI out of my Canon 5D Mark II DSLR, and I still use the Leeming LUTs when grading for my GoPros.

I now use Cullen Kelly’s Voyager Pro pack to create a look out of several “taste” LUTs instead of a single creative LUT (in the “timeline” nodes, to make the look apply globally to all clips). They are really, REALLY good – and also made to work perfectly with a color-managed DWG / Intermediate workflow (which is not the case with the majority of LUTs out there – so be advised if some other LUT you purchased looks like utter crap with your color management workflow and/or camera).

Now, for the not-so-rocket-science of exporting a new grade as a LUT for the streaming studio camera: refer to the manual or just google how to export a LUT, and remember that if you are using a color-managed Wide Gamut / Intermediate workflow in DaVinci, you have to add a Color Space Transform (CST) node or set the output to match your camera’s intended output color space and gamma – which of course varies. E.g. for my BMD PCC4K I use rec709 / Gamma 2.4 (which, I have been informed, used to be the industry standard to deliver in). However, my BMD Video Assist 5″ 12G seems to expect a P3 / D65 color space LUT, so YOLO.

For me and my camera, before exporting the LUT I have to add a CST node as the last “clip” node, converting the color space from Timeline to rec709 / Gamma 2.4 explicitly. I also set Tone Mapping to “Luminance” and Gamut Mapping to “Saturation” – and, most importantly, check the box “Apply Forward OOTF” under advanced. (The image is going to look terrible in DaVinci, but don’t worry – it will be correctly interpreted in the camera or LUT box! Trust me – sort of.)

GEEK ALERT: Theoretically, and to the best of my knowledge, this CST node should not be necessary, as I’m already operating in a DaVinci color-managed DWG / Intermediate timeline set to a rec709 / Gamma 2.4 output color space – BUT THIS IS THE *ONLY* WAY the exported LUT will look right when imported into my camera. Be advised.

See for yourself what v3 actually looks like in live streaming action below and check out the comprehensive list of the gear I’m currently using to achieve the look on my Kit.co page.

Let me know if you have any questions!

Lessons Learned:

  • The quality of the output equals the quality of the input: camera, light, lens, and color grade / look matter equally
  • Use a camera with enough dynamic range – something with RAW / Log or similar capabilities instead of a baked-in rec709 output – to be able to deliver a cinematic image at all
  • Only use lights that are very color-accurate
  • Diffusion is a prerequisite for that cinematic, pleasing, soft shadows “wrapped-around-the-skin” light, see also grid / honeycomb
  • All you need to know about diffusion is that you can either hang a white shower curtain or a sheet of bleached muslin in front of your main lights – or add a more productified version called a “soft box” (with a “honeycomb grid” to avoid light spill) – for your light to become wonderfully diffused. Or google “roger deakins cove lighting” for an alternative soft and wonderful wrap-around technique (that may or may not work in your studio setting)
  • All you need to know about Aputure Lightstorm lights vs Godox VL is that Godox is cheaper and provides the same quality of light (or even better) for this type of indoor, static studio use (today, the later Aputure Amaran 150c and 300c might be a better budget option, though – and let’s face it – The Aputure “Sidus Link” app is just fantastic – I love it! I can’t remember ever using the Godox app after testing it once.)
  • Where you put your lights MATTERS A LOT – study what they’ve been doing in Hollywood for eons, experiment with placement and angles, get someone to help you move lights while you stay in the shot, and take the time to “f*ck around and find out” what the optimal light positioning is for achieving the look you want – it’s going to pay off massively (get a stand-in doll, a friend, or your better half to stay in the shot when moving the lights around – and a video assist monitor you can hold in your hand (wireless or cabled) to immediately review the results, instead of having to run back to the camera constantly)
  • Optionally, use my ghetto wireless monitoring solution to monitor the camera output with your smartphone or tablet over wifi (not good enough to pull focus with, but good enough signal to evaluate lighting setup, and for around $20 on AliExpress –  do you really get to complain?)
  • Motivated lighting is a thing – and you should consider it in your setup
  • Use a lens that will support the creative vision of your output (shocker: all lenses are different), but it should probably not be a slow “kit lens” – more likely an f2.8 or faster prime lens (or a fixed t/f-stop zoom lens; e.g. the Sigma 18-35mm f1.8 Art DC HSM for APS-C that I’m using in my v3.1 setup gives you awesome image quality and some flexibility in field of view and depth of field, at a very reasonable price for the pro quality)
  • All you need to know about the f-stop is that the lower the f or t number, a) theoretically the less light is needed to light the scene for a decent exposure (so less expensive, lower-wattage lights can suffice), and b) theoretically the shallower the possible depth of field (blurred-out background / separation of foreground and background / bokeh)
  • Crop factors, full-frame vs MFT vs APS-C, etc are all words you are going to learn to hate – it’s already a terrible stupid mess, and adding a speedbooster to the mix will just kill your will to live and make you give everything up (well, it’s not too hard to actually re-calculate it but it is such a killjoy for me – if what I see on the screen works for me, fuck it, I’ll shoot with it)
  • All you need to know about crop factors is to take the lens in question and mount it on your camera – if it fits (sometimes an adapter is needed) and if it looks good to you (no serious vignetting, you get the field of view you need, the depth of field, the smoothness or sharpness, the character you’re looking for) then it’s a keeper (screw the calculations – unless you’re working on a real movie production set and it matters) – oh, and never get into a discussion of crop factors and M43 vs full frame online or offline ever (a very bad time is to be had if you indulge)
  • All you need to know about speed boosters (or actually “Tele Compressors“) is that, if you add them, whatever the mm and t/f stop printed on your lens says is now out the window (don’t worry, it’s all fine if the image and field of view now coming out of the lens looks good to you) and you now need to adjust the lighting accordingly to taste (although feel free to cheat by using Zebras and False Color, don’t let the zealots tell you otherwise)
  • All you need to know about Metabones vs Viltrox “Speed Boosters” is that the Viltrox is almost an order of magnitude cheaper and will most likely be fine for your “cine” streaming studio or office webcam setup – I bought FIVE new Viltroxes (important: the newer EF-M2 II version) on ebay for the same price of a single used Metabones adapter
  • Contrary to common “cine” aesthetics, apply high sharpening in-camera if you intend to stream (compression garbles details so you want to have more details going in upstream), and no – any sharpening added in DaVinci will not transfer to your exported LUT
  • If you’re not happy with what your image looks like from your camera using any of the manufacturer’s settings / looks, you’ll probably need to know what LUT files are and how to properly use them
  • If your camera doesn’t support LUTs (or in only limited ways) you’re not necessarily out of luck – you could use a LUT box between the output (camera) and the input (capture card/box), provided that the signal coming out from your camera has enough information still in it (see e.g. log and chroma subsampling above)
  • When using someone else’s LUT, you need to know which color space it was intended for. Many LUTs expect a rec709 input to be able to transform it into whatever look they sold you on; some don’t – like the film looks supplied with DaVinci – and some tell you in their file name or in the first couple of lines of the file (use a text editor to open the .cube file). It boggles my mind that there is no standard metadata tag in the LUT file that would let the receiving device or software automatically recognize which color space the LUT is intended for
  • WARNING: If you’re going to create your own grades / look LUTs, be aware that you need to grade using the actual output signal. E.g. the signal coming directly out of the camera via USB-C to the Mac/PC is different from the signal captured via an Elgato 4K stick or a Blackmagic UltraStudio Mini Recorder, because they all interpret and compress the signal differently. So use a recording of the output of your actual signal chain when grading (otherwise you will get nasty surprises and unusable results). E.g. I needed to create different grades / LUTs for my ATEM Mini Pro (current setup), Elgato HD60 S+, Elgato Cam Link 4K, and BMD UltraStudio Mini Recorder – and, to complicate matters further, for each of the different cameras in combination with the respective capture solutions, to match the look across them
  • Color grading is an art, not a science (although a lot of color science goes into it). A “cinematic” look takes basic principles from photochemical film into account when grading, AND you have to be aware that several aspects of a real photochemical film look – like “halation”, “grain”, and “glow/bloom” effects – do not translate into a LUT (I would disable all plugin effects in DaVinci when exporting a LUT, except a Color Space Transform at the end of the node chain if you’re using one to convert to your camera or monitor color space). Also, even if you could, adding grain to a streaming input would probably garble the compressed output in a way you do not want
  • Learn how to use color management and Wide Gamut / Intermediate workflow (or ACES) if you intend to grade with DaVinci Resolve – it will take a lot of guessing and headaches out of the equation, making grading a faster and a much more fun and predictable process (thank me later)
  • Forget brute-forcing with a hundred million custom nodes per clip; use something more like this simple clip-level node tree (primaries: Exposure, Ratio/Contrast, and Balance + secondaries as parallel nodes) as your go-to starting point – see the screenshots of my node tree above – and consider using a separate timeline-level node tree for your overall “look”
  • WARNING! If you are on an Apple Mac, you need to know this before stepping into grading with DaVinci Resolve: if you don’t change a certain setting, there will forever be a difference between what you see in DaVinci and what the exported video or grade looks like! (I wish I had known sooner! It would have taken away 90% of the painstaking trial and error; googling for this artifact gives you zero answers – only entitled industry asshats claiming you need better display hardware. tl;dr: you don’t – it’s an Apple Mac software thing, obviously – and BMD has finally addressed it.) Update: Using the latest Resolve and a color-managed DWG / Intermediate workflow, I now get identical WYSIWYG results to my calibrated rec709 Gamma 2.2 clean view monitor when exporting, and from DaVinci Resolve v18.5 onwards I don’t think this is much of an issue anymore. I’m also using a video monitor lookup table, Cullen Kelly’s “macOS Viewing Transform v1.3”, ensuring that what I’m watching when grading is as good as identical to what gets delivered (only for Apple P3 displays)
  • Save yourself even more pain and time by investing in an X-Rite ColorChecker Video chart to properly white balance and check exposure; record it at the start of EVERY shot / setup – just in case! (Also found on my kit.co page.) I don’t really use it much anymore, as I’m working color-managed and have very color-accurate lights – but it will help you take the guesswork out of the equation when you’re starting out, providing a ground truth, a reference, when you’re testing your lighting and starting to grade your look
  • Update on my color grading in 2024 post here
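For those who do want to indulge in the crop-factor math dismissed above, it really is just multiplication. A sketch using the Sigma 18-35 and the Viltrox 0.71x reducer from my setup (the ~1.9x crop figure for the Pocket 4K is an approximation, and the “equivalent f-stop” here describes depth of field, not light loss – a focal reducer actually makes the lens gather more light):

```python
def full_frame_equivalent(focal_mm: float, f_stop: float,
                          crop: float, booster: float = 1.0) -> tuple[float, float]:
    """Full-frame equivalent focal length and (depth-of-field) f-stop.
    A focal reducer ("speed booster") multiplies the effective crop factor."""
    effective_crop = crop * booster
    return focal_mm * effective_crop, f_stop * effective_crop

# Sigma 18-35mm f1.8 at 18mm on a Pocket 4K (~1.9x crop, approximate),
# through a Viltrox EF-EOS M2 0.71x focal reducer:
focal_eq, f_eq = full_frame_equivalent(18, 1.8, crop=1.9, booster=0.71)
print(f"~{focal_eq:.0f}mm f/{f_eq:.1f} full-frame equivalent")
```

Roughly a ~24mm full-frame-equivalent field of view at the wide end – but as said above, if what you see on screen works for you, the numbers don’t much matter.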

cine, Hardware, News

A-cam dII World Premiere

Yes, I’m back – and as it happens I’m attending the International Broadcast Convention 2008 in Amsterdam. I’ll be posting some impressions over the following days.

The launch of the A-Cam dII from Ikonoskop was my personal highlight today. Check out the friendly presentation given to me at the IBC in the video below. This may even be a world first.

To recap, Ikonoskop launched a Super 16mm film revival when they produced the A-Cam SP-16 – a highly affordable and modern Super 16 motion picture camera – back in 2004. I don’t know about you, but I was quite ecstatic about the camera back then.

Then came the digital revolution with the RED ONE. Sure, I was quite ecstatic about that camera too. However, it was not like you could ever afford one any-day-real-soon-now.

Then some of us creamed our pants when Nikon finally launched a DSLR with HD video capabilities, the D90 (which was a little late and should have been included with the D300 already, if you ask me).

Today Ikonoskop launched their digital heir (or rather companion) to the A-Cam SP-16, the A-Cam dII. They call it a ‘High Definition RAW Format Motion Picture Camera’. It feels rock solid and very user friendly to the touch.

It’s available from December this year at 6.950,- EUR plus VAT, including all you need to get started. That’s HALF the price of a RED ONE body only. You could preorder here.

Many thanks to Daniel Jonsäter who let me record his presentation. Thanks, Daniel!

Contrary to popular belief, Swedes and Norwegians do get along just fine. ;)
