Excited and honored to be back yet again as a mentor for the WHU Accelerator program!
Who would have thought… From very humble beginnings of the Rheinland Pitch – to becoming Germany’s largest startup pitching event!
Over ten years ago, in early 2013, Matthias & Lorenz Gräf and I were sitting at what would soon open as STARTPLATZ in Cologne, DE, discussing how we could help the founders in the region be more competitive when pitching to startup investors.
And keeping with the unpretentious “no frills” spirit of the Rheinland Pitch, we thought it didn’t feel right to be celebrating ourselves – but someone had acquired some festive balloons. ;)
We were (and still are) convinced that the region is a great place to found a startup and that we were (and still are) encountering many competent founders with great ideas here.
However, back then there were two pressing issues of concern:
- Every time we listened to their pitches, we didn’t understand what they were about
- The general public was not well informed about what a startup is and how it is different from other types of more traditional new businesses.
So we came up with the idea to create an event that would be open to everyone and:
- Curate and highlight the most interesting startups in the region
- Provide training to the applying startups so they would be more competitive, on their own terms
- Bring more mainstream acceptance and understanding of startup entrepreneurship
- Serve as a regular platform to catch up with and network across all sides of the community
I think we’ve succeeded beyond anybody’s expectations. In fact, we didn’t have any particular expectations back at the start. We just said, ok – let’s do it. And a month later the first Rheinland Pitch event was held to a standing-room-only full house.
And the rest is history.
As you might know, Germany has decided to phase out brown coal as an energy source – and even faster than initially planned: by 2030 instead of 2038. This will obviously have an effect, especially on the regions that depend on coal for cheap industrial electricity and as a major source of employment, stability, and prosperity.
To help alleviate the transitional pain, public funds have been made available and regional projects have been formed to support the affected regions in many different ways.
One way is to support and stimulate innovation – to enable an environment that can facilitate the creation of exciting new opportunities for the industries of the future; a second industrial revolution, if you will, for when the coal that powered the first one – generating unprecedented prosperity (and unprecedented damage to the planet) – comes to an end.
That’s where I and my company +ANDERSEN & ASSOCIATES come in: I was there to give a teaser overview of some of the methodologies anyone can use (startups as well as established industry) to create disruptive innovation 50 (!) times faster (and with fewer resources) than previous methods – to inspire, and also to showcase what is indeed possible today; what is actually actionable and teachable.
Special thanks to WFMG – Wirtschaftsförderung Mönchengladbach, Zenit GmbH, and Zweckverband LANDFOLGE Garzweiler for inviting me and to all of the very enthusiastic and motivated participants that came to my fully-booked workshops.
Looking forward to being of further service to the region going forward!
(And yes, that is an actual original Junkers Ju 52 airplane inside the event location – Hugo Junkers was born in Rheydt in today’s Mönchengladbach.)
Excited and honored to have been invited to mentor participating startups in the Jacobs Startup Competition (JSC) founded and managed by students at the Jacobs University (now known as the Constructor University) in Bremen, DE.
By popular demand, I’ve jotted down some details about my updated “Cine” Live Streaming Studio V3. I’ve shared some Lessons Learned at the end of this article that might be helpful if you also want to achieve a more professional or “cinematic” look for your streaming or Zoom calls.
See for yourself what v3 actually looks like in a Zoom call below and feel free to check out the comprehensive list of the gear I’m currently using to achieve the look on my Kit.co page.
By the way…
Because my studio setup has a price tag that makes it mostly relevant for professionals (aka people who make money using their setup in some way or form), it may not be that applicable to your average home-office webcam setup.
That’s why I also did some experimenting to come up with a much less expensive (YET FULL-FRAME! ZOMG!) streaming “cine” setup that could be accessible to more people (also, no color grading or LUTs needed). Here are the results of my more budget-friendly setup (less than $500, aka less than the price of about any entry-level cinema camera body alone) in the video below:
You can also find the gear used for this more budget-friendly setup on my Kit.co page.
The path to getting there
In the week of the first COVID-19 lockdown in March 2020, a live streaming studio became a necessity for me to keep serving my customers, so I immediately started building the first version of my studio. It was my first decent attempt at achieving a cinematic webcam that I would actually use to stream live.
Why “cinematic”? Well, partly because I’ve always been interested in cinema – fun fact: I originally studied to become a film director way back when before dropping out and pursuing a completely different career path – and this seemed like a good way to combine passion with “work” again.
Mostly, though, because my customers were already used to paying for the highest quality of live / in-person content – and I shuddered at the thought of serving them online with just a standard shoddy webcam stream; I felt I owed it to the people who put their trust and money in me to provide the best quality of experience possible, also when delivering online as a Zoom stream.
Before v1 there were also a lot of incremental versions during spring and summer 2020 – lots and lots of trial and error while getting the hang of the very basics.
v0 was just natural light (which obviously doesn’t work when it’s overcast, or in the afternoon/night) or cheap lights and no successful color grading; the results ranged from not good to worse, with the cheap lights destroying any possibility of ever getting a clean grade.
You can e.g. see the terrible green tint of the cheap lights in v0.2, and since I couldn’t get rid of it completely when grading, I had to get more color-accurate lights. While obviously more expensive than my green-tinted no-name lights, IMO getting color-accurate lights easily 10x’ed the results.
v0.5 was the first attempt to shape the light cinematically using my first professional cinema lights (I went for the cheapest very color-accurate lights I could find – Came-TV Boltzen 30W Fresnels – and thus they ended up underpowered for my needs; the Fresnels didn’t help my lighting setup either, as the beam was too narrow). I also started color correcting and grading my own LUT using an actual X-Rite ColorChecker color chart. It took me a couple of months of experimenting and learning just to get to this point.
To me, this was the “now we’re finally getting somewhere” moment. IMO, a pleasing “cine”-like look, but too “flat” for my taste and way too edgy for my target audience and purpose.
v0 through v0.5 were made with the tiny Blackmagic Micro Cinema Camera (Super 16mm, MFT mount) on a cheap-ish Samyang MFT 12mm f2 lens. For capture I used a Blackmagic Design UltraStudio Mini Recorder Thunderbolt (only works in a few apps like e.g. Zoom, but delivers better image quality than the HD60 S+: 8-bit 4:4:4 and 12-bit 4:2:2 over HDMI – it also has SDI with 12-bit 4:4:4) and a Corsair Elgato Game Capture HD60 S+ for the rest (emulates a webcam, somewhat lower-quality signal, 4:2:0 chroma subsampling) until the release of the ATEM Mini Pro.
The gist of my lighting setup remains the same to date:
A key light close left (my side) of my face, a top/hair light just above my head, and a kicker further behind me to my right side (like a mirror version of the key light). This is the classic three-point film shot setup. I’ve later also added various filler tube lights to help shape and warm up the light on my face.
v1, the “2020” look
This was the first serious “look”: upgrading to more powerful (300W key and 150W kicker) professional cinema lights, adding softboxes with diffusers and grids to help shape the light, upgrading to the Blackmagic Pocket Cinema Camera 4K (Micro Four Thirds) and adding a pro Sigma Art zoom lens. Why 4K when the streaming standard is still mostly 1080p? Because more information going in equals better quality coming out in the downsampled 1080p signal – and to future-proof my camera setup.
The grading was updated to the new camera and lighting, but it was very similar to v0.5, with no real “cinematic” look. Good contrast, shape, and skin colors, perhaps – but lacking that certain little something that the brain recognizes as “cinematic”. I had zero idea what I was doing with color grading at the time. It was also a bit too dark – it looked great to me, but sometimes participants and customers reported it was a bit dark depending on their monitor and operating system.
Also, there was no “motivation” to where the light was coming from – just a black void. Which was what I was going for at the time, but in hindsight it is very boring to look at over time.
v2, the “2021–2022” look
I later tried to refine the shaping using two filler lights, one for the shadow side of the face and one at the front of the face, both set to a 3200K color temperature to add some warmth. All the other lights were at 5600K, with the camera set to 4400K – walked back from the original 5600K to taste by monitoring in the studio.
I practiced a bit with grading in DaVinci Resolve (rapidly becoming an industry standard, and a free download – unless you need e.g. “Color Space Transform” (CST) nodes, the free version is awesome; if you do need them, you’ll need the “Studio” version, which comes free with Blackmagic Design’s cameras or as a paid upgrade) and found some trustworthy educators: Gerald Undone for no-nonsense technical information and grading with a color chart (he also introduced me to the Leeming LUTs), Rob Ellis for simple, affordable, and yet beautiful cinematic looks with lighting setup tutorials – and especially Darren Mostyn and Cullen Kelly if you’re getting seriously into grading in DaVinci Resolve. A big thank you to all four for making my life easier and way better informed.
BEWARE: Most videos on YouTube on how to grade in DaVinci Resolve are made by click-seeking buffoons with no fundamental knowledge of color science, photochemical film science, or how grading actually works – or even how the technology or software works – let alone any sense of cinematic aesthetics. Thank you, massively lowered barrier to entry and Plato / Dunning–Kruger, I guess.
I was then able to make a more cinematic grading LUT. I also had to adjust the grade after adding a teleprompter to the setup (yes, that added glass has a noticeable effect: about half an f-stop darker). I still didn’t know what I was doing, though – an incredible amount of painstaking trial and error (brute force) followed.
I also added the three lights in the background (approximately 2m further behind me) for “motivation”, aka fooling your brain into thinking that these are the sources the light is coming from (no significant light actually reaches me from them, though – photons, inverse square law and all that), and it added some “interestingness” instead of just the dark void. It was still a tad too dark to account for variations in participants’ setups, though. In hindsight, I also find it a bit too saturated – especially in the highlights and shadows.
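As a side note, the inverse-square falloff mentioned above is easy to sanity-check numerically. A minimal sketch – the distances here are illustrative assumptions, not measurements from my studio:

```python
# Inverse-square law: light intensity falls off with the square of distance.
# Distances below are illustrative, not actual studio measurements.
def relative_intensity(distance_m: float, reference_m: float = 1.0) -> float:
    """Intensity at distance_m relative to the intensity at reference_m."""
    return (reference_m / distance_m) ** 2

# If a key light sits ~1 m away and a background "motivation" light ~3 m
# away, the background light contributes only about 11% as much light:
print(round(relative_intensity(3.0), 2))  # → 0.11
```

In other words, doubling the distance quarters the light – which is why those background lights read as sources without meaningfully lighting my face.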
v3, the “late 2022 and beyond” look
The major hardware change was replacing the individual fill lights with a light tube system that could be remotely controlled in concert (it was a major headache to always have to finesse each fill light manually); I also replaced the top/hair light with two tubes in the same system and built a custom softbox around them. The main reason, however, was that the new lights had a bit more power than the previous ones and would let me lighten the look, or “wrap” the light around further.
I also changed the DOF (depth of field, going from f2.2 to f2.8) to help stay in focus when naturally moving my head (yes, there is absolutely no autofocus in my cinematic setup). I then relit the whole thing, first cranking the ISO up from 200 to 800 to properly expose to the right (GEEK ALERT: more dynamic range using the native ISO, allegedly – though it feels like this should not be applicable when piping the 3G/FHD signal over HDMI to the ATEM Mini; I mean, it’s not a RAW signal – but I guess the more information going in, the more comes out), and to enable using less power from the lights to achieve the same result (less eye strain, more flexibility – you can’t adjust up if the lights are already maxed out at 100% – and of course less energy consumed and less heat generated in the studio).
The major grading change was, aside from adjusting to the new lights and the ISO change, updating to Blackmagic Gen 5 color science (pain in the ass, had to regrade everything – but not as hard as the first time now that I knew a little bit more about how to actually grade and could replicate steps instead of trial and error) and a brighter, less “edgy” or stylized, look that still tries to retain that “cinematic” quality to it.
It is now bright enough to accommodate the differences in participants’ setups. Some report it is also a more pleasing look than the previous one. I think it is definitely less “edgy”.
Update: I’ve since incrementally updated this look to a v3.1 (screenshot at the top of this post) – only by changing the lighting values, i.e. the ratio between dark and bright, bringing back a bit more contrast between the light and dark sides of the face for, IMO, more “definition” and interestingness.
My current DaVinci Resolve node tree for color grading above. (Screenshot showing a more edgy grade than my live LUT; disregard the “Grain” node.)

Discontinued – the screenshot above was my own brute-forced node tree that I previously used, described below: The first node in my grading is the “Leeming LUT Athena III – Blackmagic Design Pocket 4K – Gen5 Film”, the “Video to Full” node uses the “Leeming LUT Fixie – Video to Full Range” (as I find it adds to the cinematic quality), and you can then add your creative cinematic grade to the “Creative LUT” node, either manually or by applying a cinematic LUT.

Keep in mind that the creative LUT you apply should expect the same color science you are using. In my case, the “Leeming LUT Athena III – Blackmagic Design Pocket 4K – Gen5 Film” LUT converts the color space to something as close to Rec.709 as possible, so any LUT expecting a Rec.709 input will work – but any LUT expecting a different color space input will look like utter garbage. If you find a LUT that you like but it’s for a different color space than you have set up (say Arri Log-C instead of Rec.709) – or, conversely, you’ve found a LUT but it looks like crap when applied and you don’t know what input it is based on – you can always add a node with a “Color Space Transform” effect in front of the Creative LUT node and experiment with converting your current color space to different ones to see if you can find something usable for the LUT to use as an input.

Oh, and those “Limit Sat” nodes are there for me to make sure that no colors snuck into the highlights or shadows during my grading process (I’m not going to claim I fully know what I’m doing here – there must be ways to do this more professionally), to mimic how photochemical film behaves.
Update: My basic node tree now looks much more like Cullen Kelly’s (the updated version of his node tree – see his newer videos for the changes, e.g. no sharpening or smoothing modifiers in the secondaries anymore; they are instead added immediately after the primaries and secondaries join together).
My current node tree below:
You should also check out the Leeming LUT Pro (IMO by far the best color transform LUTs out there for the Blackmagic cinema cameras!) before you start going crazy in Resolve yourself – worth every single buck. Update: Switching to a color-managed workflow made the camera-specific Leeming color transform LUTs sort of obsolete for me for my Blackmagic cameras. I do still find that the Leeming Fixies LUT “Video to Full” can be helpful for achieving a better starting point for a cinematic grade when dealing with older color science footage, like the Blackmagic Design Gen 1 of my BMD Micro Cinema Camera S16mm, or stuff already in Rec.709, like my Canon 5D Mark II DSLR – and I do still use the Leeming LUTs when grading for my GoPros.
I now use Cullen Kelly’s Voyager Pro pack to create a look from several “taste” LUTs instead of a single creative LUT (in the “timeline” nodes, to make the look apply globally to all clips). They are really, really good – and also made to work perfectly with a color-managed DWG / Intermediate workflow (which is not the case with the majority of LUTs out there – so be advised if some other LUT you purchased looks like utter crap with your color management workflow and/or camera).
Now for the not-so-rocket-science of exporting a new grade as a LUT for the streaming studio camera: refer to the manual or just google how to export a LUT, and remember that if you are using a color-managed Wide Gamut / Intermediate workflow in DaVinci, you have to add a Color Space Transform (CST) node or set the output to match your camera’s intended output color space and gamma – which can of course vary. E.g. for my BMD PCC4K I use Rec.709 / Gamma 2.4 (which, I have been informed, is the industry standard).
For me and my camera, before exporting the LUT I have to add a CST node as the last “clip” node, explicitly converting the color space from Timeline to Rec.709 / Gamma 2.4; I also set Tone Mapping to “Luminance” and Gamut Mapping to “Saturation” – and, most importantly, check the box “Apply Forward OOTF” under advanced. (The image is going to look terrible in DaVinci, but don’t worry – it’s going to be correctly interpreted in the camera or LUT box! Trust me.)
GEEK ALERT: Theoretically, and to the best of my knowledge, this CST node should not be necessary, as I’m already operating in a DaVinci color-managed DWG / Intermediate timeline set to a Rec.709 / Gamma 2.4 output color space – BUT THIS IS THE *ONLY* WAY the exported LUT will look right when imported to my camera for me. Be advised.
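If LUT files still feel like magic: a 3D LUT is just a lattice of RGB-in → RGB-out samples. Here is a hedged, minimal sketch of the .cube text format using nearest-neighbor lookup only – real software (Resolve, cameras, LUT boxes) interpolates between lattice points (trilinearly or tetrahedrally) instead:

```python
# Minimal .cube parser + nearest-neighbor lookup, for illustration only.
def parse_cube(text):
    """Return (size, table) from .cube text; red varies fastest in the table."""
    size, table = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("TITLE"):
            continue
        if line.startswith("LUT_3D_SIZE"):
            size = int(line.split()[1])
        elif len(line.split()) == 3:
            table.append(tuple(float(p) for p in line.split()))
    return size, table

def apply_lut(rgb, size, table):
    # Snap each 0..1 channel to the nearest lattice point (no interpolation).
    r, g, b = (min(size - 1, round(c * (size - 1))) for c in rgb)
    return table[r + g * size + b * size * size]

# A 2-point identity LUT: output equals input at the lattice corners.
identity_cube = "LUT_3D_SIZE 2\n" + "\n".join(
    f"{r} {g} {b}" for b in (0.0, 1.0) for g in (0.0, 1.0) for r in (0.0, 1.0)
)
size, table = parse_cube(identity_cube)
print(apply_lut((1.0, 0.0, 1.0), size, table))  # → (1.0, 0.0, 1.0)
```

A creative or color-space-conversion LUT is exactly this, just with a bigger lattice (typically 33 points per axis) and non-identity values – which is also why a LUT built for the wrong input color space produces garbage: the lattice positions it samples no longer mean what it assumes.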
See for yourself what v3 actually looks like in live streaming action below and check out the comprehensive list of the gear I’m currently using to achieve the look on my Kit.co page.
Let me know if you have any questions!
Lessons Learned

- The quality of the output equals the quality of the input: camera, light, lens, and color grade / look all matter equally
- Use a camera with enough dynamic range – something with RAW / Log or similar capabilities instead of a baked-in Rec.709 output – to be able to deliver a cinematic image at all
- Only use lights that are color-accurate
- Diffusion is a prerequisite for that cinematic, pleasing, soft shadows “wrapped-around-the-skin” light
- All you need to know about diffusion is that you can either hang a white shower curtain or a sheet of bleached muslin in front of your main lights – or add a more productified version called a “softbox” (plus a “honeycomb grid”) to them – for your light to become wonderfully diffused; or google “roger deakins cove lighting” for an alternative soft and wonderful wrap-around technique (that may or may not work in your studio setting)
- All you need to know about Aputure Lightstorm vs Godox VL lights is that Godox is cheaper and provides the same quality of light (or even better) for this type of indoor, static studio use (today, the latest updated Aputure Amaran 100d/200d might be a better budget option, though – and let’s face it, the Aputure “Sidus Link” app is just fantastic – I love it! Can’t remember ever using the Godox app after testing it once.)
- Where you put your lights MATTERS A LOT – study what they’ve been doing in Hollywood for years – experiment with placement and angles, get someone to help you move lights while you stay in the shot, take the time to “fuck around and find out” what the optimal light positioning is for achieving the look you want – it’s going to pay off massively (get a stand-in doll or a friend or your better half to stay in shot when moving the lights around – and a video assist monitor you can hold in your hand (wireless or cable) to immediately review the results instead of having to run back to the camera constantly)
- Optionally, use my ghetto wireless monitoring solution to monitor the camera output with your smartphone or tablet over wifi (not good enough to pull focus with, but good enough signal to evaluate lighting setup, and for around $20 on AliExpress – I can’t really complain)
- Motivated lighting is a thing and you might want to consider it in your setup
- Use a lens that will support the creative vision of your output (shocker: all lenses are different), but it should probably not be a slow “kit lens” – more likely an f2.8 or faster prime lens, or a fixed f/t-stop zoom lens; e.g. the Sigma 18-35mm f1.8 Art DC HSM for APS-C that I’m using in my v3 setup gives you awesome image quality and some flexibility in field of view and depth of field, at a price below the camera brands’ own lenses at this pro level
- All you need to know about the f-stop is that the lower the f or t number: a) theoretically, the less light (meaning less expensive, lower-wattage lights) is needed to light the scene for a decent exposure, and b) theoretically, the shallower the depth of field (blurred-out background / separation of foreground and background / bokeh) that is possible
- Crop factors, full-frame vs MFT vs APS-C, etc are all words you are going to learn to hate – it’s already a terrible stupid mess, and adding a speedbooster to the mix will just kill your will to live and make you give everything up (well, it’s not too hard to actually re-calculate it but it is such a killjoy for me – if it works for me, fuck it, I’ll shoot with it)
- All you need to know about crop factors is to take the lens in question and mount it on your camera – if it fits (sometimes an adapter is needed) and if it looks good to you (no serious vignetting, you get the field of view you need, the depth of field, the smoothness or sharpness, the character you’re looking for) then it’s a keeper (screw the calculations – unless you’re working on a movie production set and it matters) – oh, and never get into a discussion of crop factors and M43 vs full frame online or offline ever (it’s like discussing religion with zealots)
- All you need to know about speed boosters (actually “tele compressors”) is that, if you add one, whatever mm and t/f-stop printed on your lens is now out the window (don’t worry – it’s all good if the image and field of view now coming out of the lens look good to you) and you now need to adjust the lighting accordingly, to taste (although feel free to cheat by using Zebras and False Color)
- All you need to know about Metabones vs Viltrox “Speed Boosters” is that the Viltrox is almost an order of magnitude cheaper and will most likely be fine for your “cine” streaming studio or office webcam setup – I bought FIVE new Viltroxes (important: the newer EF-M2 II version) on eBay for the price of a single used Metabones adapter
- Contrary to common “cine” aesthetics, apply high sharpening in-camera if you intend to stream (compression garbles details, so you want more detail going in upstream) – and note that any sharpening done in DaVinci will not transfer to your LUT
- If you’re not happy with what your image looks like from your camera using any of the manufacturer’s settings / looks, you’ll probably need to know what LUT files are and how to properly use them
- If your camera doesn’t support LUTs (or only in limited ways) you’re not necessarily out of luck – you could use a LUT box between the output (camera) and the input (capture card/box), provided that the signal coming out of your camera still has enough information in it (see e.g. log and chroma subsampling above)
- WARNING: If you’re going to create your own grades / look LUTs, be aware that you need to grade using the actual output signal. E.g. the signal coming directly out of the camera via USB-C to the Mac/PC is going to be different from capturing via an Elgato 4K stick or a Blackmagic UltraStudio Mini Recorder, because they all have different ways of interpreting and compressing the signal – so use a recording of the output of your actual signal chain when grading (otherwise you will get nasty surprises and unusable results). E.g. I needed to create different grades / LUTs for my ATEM Mini Pro (current setup), Elgato HD60 S+, Elgato Cam Link 4K, and BMD UltraStudio Mini Recorder (and, to complicate matters further, for each of the different cameras in combination with the respective capture solutions, to match the look across them)
- Color grading is an art, not a science – but a “cinematic” grade takes basic principles from photochemical film science into account, AND you have to be aware that several aspects of a real photochemical film look – like “Halation”, “Grain”, and “Glow/Bloom” effects – do not translate into a LUT (I would disable all plugin effects in DaVinci when exporting a LUT, except a Color Space Transform if you’re using one at the end of the node chain to convert to your camera or monitor color space); also, even if you could, adding grain to a streaming input would probably garble the compressed output in a way you do not want
- Learn how to use color management and Wide Gamut / Intermediate workflow if you intend to grade with DaVinci Resolve – it will take a lot of guessing and headaches out of the equation, making grading a faster and a much more fun and predictable process (thank me later)
- Forget brute-forcing with a hundred million custom nodes per clip; instead use something more like this simple clip-level node tree (primaries: Exposure, Ratio/Contrast, and Balance + secondaries as parallel nodes) as your go-to starting point – see the screenshots of my node tree above – and consider using a separate timeline-level node tree for your overall “look”
- WARNING! If you are on an Apple Mac, you need to know this before stepping into grading with DaVinci Resolve: if you don’t change a certain setting, there will forever be a difference between what you see in DaVinci and what the exported video or grade looks like! WARNING! (I wish I had known sooner! It would have eliminated 90% of the painstaking trial and error; googling for this artifact gives you zero answers – only entitled industry asshats claiming you need better display hardware – tl;dr you don’t – it’s an Apple Mac software thing – obviously – and Blackmagic Design has finally addressed it.) Update: Using the latest Resolve and a color-managed DWG / Intermediate workflow, I now get identical WYSIWYG results to my calibrated Rec.709 Resolve clean view monitor when exporting, and with DaVinci Resolve > v18.5 I don’t think this is much of an issue anymore
- Save yourself even more pain and time by investing in an X-Rite ColorChecker Video chart to properly white balance and check exposure; record it at the start of EVERY shot / setup – just in case! (also found on my Kit.co page)
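For the curious: the crop-factor and speed-booster re-calculation dismissed above really is just multiplication. A hedged sketch – the 0.71x reduction factor is typical for Viltrox/Metabones-style focal reducers, and the 1.5x/2.0x crop factors are the usual APS-C/MFT values, but verify against your own gear:

```python
# Full-frame-equivalent field of view and depth of field after applying
# a crop factor and an optional focal reducer ("speed booster").
# All numeric values below are illustrative assumptions.
def equivalent(focal_mm, f_stop, crop_factor, booster=1.0):
    """booster < 1.0 for a focal reducer, 1.0 for a plain adapter."""
    eff_focal = focal_mm * booster * crop_factor
    eff_stop = f_stop * booster * crop_factor  # depth-of-field equivalence
    return round(eff_focal, 2), round(eff_stop, 2)

# Sigma 18-35mm f/1.8 at 18mm on an APS-C sensor (crop ~1.5x), no booster:
print(equivalent(18, 1.8, 1.5))        # → (27.0, 2.7)
# Same lens at 18mm through a 0.71x reducer on MFT (crop ~2.0x):
print(equivalent(18, 1.8, 2.0, 0.71))  # → (25.56, 2.56)
```

Note that the f-stop above is the depth-of-field equivalence only; for exposure, a 0.71x booster actually makes the lens about one stop brighter at the sensor (1.8 × 0.71 ≈ f/1.3) – which is why the lighting needs re-adjusting after adding one.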