cine, grading, Rants, video

On My Color Grading in 2025

In another blog post that nobody asked for, I thought I’d summarise what I’ve learned and updated in my color grading hobby over the last year (instead of updating the 2024 post ad absurdum).

Above: no new test footage, as I haven’t really made any in the last year or so, but this is the “Director’s Cut” for 2025 – as good (or bad) as it gets for my grading and look skills (tastes) so far. It may not be for everyone, but this is my kind of kink rn.

What changed since 2024? TL;DR – not much.

Mostly because of two things:

1. I feel I’ve reached a happy place where I (at least think I) know what I’m doing and can consistently and predictably get to results that I like (thus radically reducing the urge and curiosity to keep digging – 80/20 and all that, I guess).

2. I’ve been busy doing other things (aka “work”), leaving less time for the “hobby”.

So what am I doing differently in 2025 compared to 2024?

Current default clip-level node tree:

Default Clip Node Tree v6.4 – actually v6.5 now; just a bit of housekeeping done

Of note, I’ve added Pixel Tool’s “Prime Grade” plugin (it was on sale @ YOLO price point) to experiment with one node to rule all primary grading, as it looked like it would save me a lot of time and work. And so far, it gets me to a great spot faster than hopping around between nodes and places in DaVinci Resolve manually, making great results easier to achieve. It’s a keeper. (I only wish I 100% understood how and what Prime Grade actually does to achieve its effects – e.g. to make sure I won’t break stuff when using other dctls or grading techniques – so I will have to dig deeper into that at some point.)

But I’ve also left my old default nodes for primaries in there to lean on just in case – a comfortable and easy fall-back to what I already know how to use – should I ever get lost using this new “Prime Grade” thing.

My default clip node tree (v6.4) in 2025 vs 2024 (compare to v6.5 above for the minor updates)

In the RATio/CONTrast node, I’ve pre-added a 0.336 pivot point on the Custom Curves (as I’m working in DaVinci Wide Gamut / Intermediate) using Cullen Kelly’s middle gray / exposure DCTL, locking down my middle gray as a default (no-brainer – no idea why I didn’t do this sooner). BTW, you can also use the excellent – and also FREE – Middle Gray dctl from MONONODES.
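The math behind a pivoted contrast is trivial, by the way. Here’s a minimal DCTL-style sketch of my own (an illustration only – NOT Cullen Kelly’s or MONONODES’ actual DCTL), assuming input in DaVinci Intermediate log, where 18% gray encodes to roughly 0.336:

```c
// Toy pivoted-contrast sketch (illustration only, not the DCTLs named above).
// Assumes DaVinci Intermediate log input, where 18% gray sits at ~0.336.

DEFINE_UI_PARAMS(contrast, Contrast, DCTLUI_SLIDER_FLOAT, 1.0, 0.25, 4.0, 0.01)
DEFINE_UI_PARAMS(pivot, Pivot, DCTLUI_SLIDER_FLOAT, 0.336, 0.0, 1.0, 0.001)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    // Values at the pivot are unchanged, so middle gray stays locked
    // while contrast expands or compresses around it.
    float r = (p_R - pivot) * contrast + pivot;
    float g = (p_G - pivot) * contrast + pivot;
    float b = (p_B - pivot) * contrast + pivot;
    return make_float3(r, g, b);
}
```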

The rest, like my custom “MTF Sim” and “Lens Degrader” compound nodes, I’ve previously described in the 2024 post.

I’ve also been experimenting with turning OFF the “use S-Curve” setting in DaVinci. Undecided if I should keep it ON or OFF so far. I guess I need more time with OFF to decide – it has been ON for the vast majority of my time spent in DaVinci Resolve so far, so I’m heavily biased.

Current default Timeline node tree:

Default Timeline Node Tree v9.4, now v9.6 (v9.6 = housekeeping & some changes to better map to the logical processing order of nodes & how photochemical film behaves – I think)

The main difference in my current Timeline node tree is that a bit more thought and organising has gone into separating the “Creative” or “Look” part(s) from the “Print” (FPE – Film Print Emulation) part(s), and into getting them in a more “correct” order – I think. Kinda. Maybe?

The thinking goes: add photochemical aspects like Halation, Bloom, and Grain as early as possible in the pipeline (so the effects get dragged through the look and FPE process – not added on top). Then do the LOOK via a combination of taste LUTs or a plugin like the native Film Look Creator or Contour (optionally throwing a tad of a creative LUT, e.g. from Arri, into the mix, and using e.g. the 2499 Custom Curves DCTL to add some split toning micro adjustment secret sauce). Then, finally, add the film print emulation (FPE), either via a DWG/I FPE LUT, an ACES FPE LUT, or a plugin like Genesis or Dehancer.
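To make the ordering concrete, here’s the chain as function composition in a DCTL-style sketch – the stages are empty, hypothetical stand-ins for the actual nodes and plugins named above; only the order is the point:

```c
// Hypothetical identity stand-ins for the real stages - only the ORDER matters here.
__DEVICE__ float3 photochemical(float3 c)   { return c; } // halation, bloom, grain
__DEVICE__ float3 creative_look(float3 c)   { return c; } // taste LUTs / FLC / Contour
__DEVICE__ float3 print_emulation(float3 c) { return c; } // FPE LUT / Genesis / Dehancer

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    float3 c = make_float3(p_R, p_G, p_B);
    c = photochemical(c);   // first, so the artefacts get dragged THROUGH...
    c = creative_look(c);   // ...the look...
    c = print_emulation(c); // ...and the print - not pasted on top of them.
    return c;
}
```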

Also, I’ve changed settings to use “neutral” grays in the UI preferences (less of a color tone bias in the DaVinci Resolve UI that could trick the eye).

My default timeline node tree (v9.4) in 2025 vs 2024 (see more current v9.6 update above)

Density is still occupied by Iridescent Color’s Density dctl, as I tend to stay away from the native ColorSlice options (good intentions, faulty execution – the sat model is good, though) for fear of breaking the image in horrible ways.

My main “goto” for the Look is (still) the native Film Look Creator (FLC), Cullen Kelly’s Voyager Pro v2 taste LUTs, the official Arri LogC3/4 Look LUTs, and just a hint of JP’s Custom Curves for LOG2499 dctl (applied in DWG/I, without going into Log 2499 first – YOLO!) for micro adjustments in the highlights.
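Speaking of split toning and highlight micro adjustments: the basic concept is easy to sketch. Here’s a toy DCTL of my own (explicitly NOT the 2499 Custom Curves dctl – just the underlying idea): a gentle warm push into the highlights and a cool push into the shadows, weighted by a crude luma mask:

```c
// Toy split-toning sketch (illustration only): warm (R+, B-) in highlights,
// cool (B+, R-) in shadows, weighted by a crude Rec.709 luma estimate.

DEFINE_UI_PARAMS(amount, Amount, DCTLUI_SLIDER_FLOAT, 0.02, 0.0, 0.1, 0.001)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    // Luma doubles as the highlight mask; its complement masks the shadows.
    float luma = 0.2126f * p_R + 0.7152f * p_G + 0.0722f * p_B;
    float hi = _clampf(luma, 0.0f, 1.0f);
    float lo = 1.0f - hi;

    float r = p_R + amount * (hi - lo); // warmer up top, cooler down low
    float g = p_G;
    float b = p_B - amount * (hi - lo);
    return make_float3(r, g, b);
}
```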

I’ve also kept the free trial version of Cullen Kelly’s wonderful “Contour” lookdev plugin in there to get more mileage with it – should I decide to get even deeper out of pocket with this hobby in the future.

The ACES FPE node is a compound node that first goes into ACES using a CST, and then into ADX using the ACES Transform, before applying the FPE LUT – to get the most out of the native DaVinci FPE LUTs (they are actually pretty good when used this way!), as suggested by Cullen Kelly.
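Spelled out, the inside of that compound node looks like this (my own setup as described above – double-check the exact color space labels in your own project):

```
1) CST:            DaVinci Wide Gamut / Intermediate  ->  ACES
2) ACES Transform: ACES                               ->  ADX
3) FPE LUT:        a native DaVinci FPE LUT (rec709 display-out baked in)
```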

Just make sure to turn OFF OpenDRT (or whatever old CST/DRT you’re using to go to display space) when you turn this compound node ON – the native FPE LUTs supplied with DaVinci Resolve have their own transform to rec709 display space baked in.

When using a single FPE LUT compatible with or made for DWG/I, my main goto is still the Fuji 3510 FPE LUT by Cullen Kelly. I’m a Fujicolor Fanboi at heart – still.

That huge parallel node fan thing is just a way for me to better organise a bunch of on/off “checkers” – Zones, Heatmap, SweetSpot (in lieu of “False Colors”, which I hate), Skintones, Blanker, etc. – basically all the utility dctls I use, put in a single parallel stack to fit them all on one screen and keep the whole Timeline node tree somewhat usable.

The Halation FLC H&B node is DaVinci Resolve’s native FLC plugin with only Halation & Bloom turned on.

I’ve left free trial versions of both Cullen Kelly & Co’s brilliant “Genesis” FPE plugin and the older (but still very good) Dehancer plugin in there to eff around with and get some experience with – if and only if this hobby somehow turns into paid ops in the future, justifying the purchase(s) to myself as a non-pro.

In v9.6 I’ve also added two wonderful (AND FREE) DCTLs – RGB Chips by Thatcher Freeman and Cullen Kelly’s CKC Grayscale Ramp – to better visualise what is actually happening to my image as I manipulate it.

Of note, what is completely new is that I’m using the latest version of OpenDRT to convert out to rec709 / Gamma 2.2 (yes, I’m still exporting to screens only; mostly no Gamma 2.4 for me) from working in DaVinci Wide Gamut / Intermediate. I find that the later versions of OpenDRT provide far more pleasing results (waaaay more pleasing than the native CST – no going back by now), and arguably nicer results, faster, than the fantastic (and also FREE) 2499 DRT.

The results I get on my own footage by just using a provided OpenDRT preset like “High Contrast” usually do it for me – and on the off-chance they don’t, I find OpenDRT easier and faster to tweak to taste than the other DRT candidates.

Have a look and decide for yourself – DRT-only, no grade, look, or FPE applied:

The source material above is Blackmagic DNG Film Gen 1 shot on a s16mm BMD Micro Cinema Camera (MCC) using a vintage Canon FD S.S.C. 50mm f1.4 lens on a 0.58x Metabones SpeedBooster with a 2-stop Lee Filters ProGlass IRND filter.

Compare 2499 DRT vanilla to OpenDRT with the “High Contrast” preset applied:

Comparison of DRTs – 2499 out of the box VS OpenDRT “High Contrast” preset

Compare OpenDRT “High Contrast” to the native DaVinci Resolve Color Space Transform (CST):

Comparison of DRTs – OpenDRT “High Contrast” preset VS DaVinci Resolve’s native CST, rec709 / Gamma 2.2

<rant>

I do think the native CST provides more of a “what the camera actually saw” kind of look, but do I actually want that as I’m always going to be grading for that “cine” photochemical vibe? Hell no!

I do get the color science nerds on the interwebs who keep going on about getting the colors out of the camera as correct or neutral as scientifically possible – a fine and interesting hobby (a guilty pleasure I do enjoy & indulge in, enjoying their content from the sidelines) – but they are working hard on refining things on the <20% end that NOBODY ACTUALLY COLOR GRADING [worth listening to] FERKING CARES ABOUT! In short, why the hell would I NOT use something like OpenDRT instead of a “camera LUT” or a more “scientifically correct” CST when it gets my job done effortlessly, pulling me at least 50% over the hill instantly?

I’m trying to bend the image to my tastes here – not trying to win an effing science fair competition!

Of course, I had to find this out the hard way: I sure have my fair share of “perfected” camera or color conversion LUTs – some I painstakingly and time-consumingly brute-forced “crafted” myself, others (waaaay better ones) I bought from reputable sources like Leeming. But don’t get me started on how many of the other so-called “camera LUTs” out there are total garbage when working color managed in a wider intermediate color space, as they either expect some unknown Log going in – or, much worse, were created in and for (some unknown variety of) rec709.

Today, scientifically color accurate camera LUTs just do not matter at all to me.

When shooting, if I remember (most often I forget – I’m not that organised, and it’s that unimportant to me when shooting short sequences on the same camera with the same lighting setups), I may shoot a gray card, and I intently focus on getting the blocking, exposure, and ratios in a good spot instead of worrying about “colors”. The only thing I do try to remember is to turn off any questionable incidental / existing light fixtures, or to make sure skin gets most of its illumination from more color accurate (97+ CRI) added artificial “cine” lights, because consumer home LED lights suck a donkey’s ass when it comes to color accuracy – and based on my own incompetence, er, experience, grading contaminated footage (especially skin tones) into a good spot can be a next-to-impossible, zero-fun task.

YMMV, but I shoot on Blackmagic Design cameras in DNG and BRAW, and their native DaVinci Resolve Color Space Transform (CST) profiles for going from camera LOG (aka DNG BMD Film Gen 1 or BMD BRAW Film Gen 5) to DWG / Intermediate work great – if you work color managed AND have set up your color management pipeline correctly – and I don’t have to worry about matching different cameras with wildly different color science.

I mean, riddle me this: I’m shooting in DNG/BRAW log, for myself, and using a huge intermediate color space to grade in to my (extremely) subjective taste – so why on earth would I worry about getting the colors to line up on a scientifically accurate chart, like ever? Why? Why? WHY??!!11

</rant>

Also new in the hardware department: an additional dedicated ASUS ProArt display with a custom DIY middle-gray background and LX1 Bias Lighting backlights. The input signal comes from a BMD UltraStudio Monitor 3G, calibrated with DisplayCAL and Argyll using my X-Rite i1, with the resulting correction LUT applied to the signal by running it through a Blackmagic Design MiniConverter 6G SDI to HDMI LUT box.
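For reference, the whole monitoring chain (exactly as described above) ends up looking like this:

```
DaVinci Resolve (Mac)
  -> BMD UltraStudio Monitor 3G (output signal)
  -> BMD MiniConverter 6G SDI to HDMI (applies the DisplayCAL/Argyll correction LUT)
  -> ASUS ProArt display (middle-gray backdrop, LX1 bias lighting)
```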

Now, did adding all of this fancy (albeit non-pro / non-reference) hardware improve my color grading? Not really – at least not in direct proportion. I would say it has enabled me to be bolder, though. I now push things that I would previously have been hesitant to, and being more confident that the results will hold up feels a bit liberating – but it’s more of an incremental refinement in the 20% than a significant “in the 80% of the job” improvement for me.

<RANT>

Dear Blackmagic Design, could you please fix the meh user experience using the BMD UltraStudio Monitor 3G with DaVinci Resolve on a Mac?

Now, to be able to use it in DaVinci Resolve again, I have to first connect the device – again – then start DaVinci Resolve (again, because usually I forget to connect the device first and only notice once I’ve started DaVinci Resolve), go into preferences and select the monitoring device, then QUIT DaVinci Resolve for the changes to take effect, and finally START DaVinci Resolve AGAIN to be able to use the device for monitoring the output?

Seriously? Why? Why? Why? This feels like a shitty MSFT Windows 95 experience – in 2025!
Do. Not. Want.

The device also gets so hot you could fry an egg on it (I can’t touch it for more than 2-3 seconds for fear of a burn) – so I feel I need to disconnect it when I am not using DaVinci Resolve. Could you also perhaps do something to make the device not feel like it will burn down the entire neighbourhood (and instill a fear in me that it will stop working prematurely because of thermal weathering / decay) if I leave it connected when not in use? Like, why isn’t the device in a non-nuclear-meltdown-temperature “sleep” mode when not active?

</RANT>

That’s about it for my color grading in 2025.

TODO:

I’m interested in checking out the demo of MONONODES’ new Film Elements v2 in depth – when I get the time – for a couple of reasons:

1. I hear the Film Grain is awesome. (I’ve never been 100% happy with the results of native or FLC grain on my own content).

2. The Vignette, Chromatic Aberration, Lens Blur, and Lens Distortion dctls included are bound to be much better at doing their things than my own “Lens Degrader” power grade kludge that is only (ab)using DaVinci Resolve native stuff.

3. CAN HAZ MTF DCTL!!! ZOMG – MOAR Modulation Transfer Function Curves!

No seriously, that part got me super excited – I get VERY excited about anything MTF-related – because MTF is THE ONE THING that instantly makes digital content (at least the content I’ve shot myself using my own gear, which is currently 100% of what I grade) look more analogue or “cine”. It’s the one thing that finally tips the scale, my missing ingredient to actually sell the illusion (for me).

<rant>

If you can’t tell already, MTF is where I really get my nerd-rocks off, because the science is incredibly fascinating and it’s a terribly undervalued and underappreciated aspect in digital film grading – and I guess that is mainly because most “pro” colorists are used to grading “pro” footage shot on “pro” cameras using “pro” “cine” glass by “pros” – MTF is just not a thing they have to deal with on a regular basis, as most likely there’s been a “pro” DoP involved upstream, making sure – in theory – that the MTF part of what goes into creating that “cine”feelz is already checked off by the choice of [quite possibly outrageously expensive & highly exotic] optics used. But what do I know…

</rant>

Currently, I’m still using an MTF emulation power grade inspired by Marieta Farfarova to feed my MTF addiction – and the results are great, but it is a bit cumbersome to use and not very intuitive when changing settings around (which lends itself poorly to encouraging more eff-around-and-find-out experimentation). So any MTF tool that is easier to use, provides more intuitive control at the UX surface level – and possibly delivers even better results – would be extremely welcome in my default node tree!
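For the uninitiated: the crudest possible way to fake a lower-MTF lens/film chain digitally is to knock down micro-contrast at the finest detail level. Here’s a toy DCTL sketch of my own to illustrate just that idea (explicitly NOT the Farfarova power grade and NOT the MONONODES MTF dctl – a real MTF emulation shapes contrast per spatial frequency band; this just blends in a small box blur):

```c
// Toy "MTF-ish" softener: mix in a small box blur to reduce micro-contrast
// at the highest spatial frequencies. Illustration only - a proper MTF
// emulation attenuates each frequency band by a different amount.

DEFINE_UI_PARAMS(amount, Amount, DCTLUI_SLIDER_FLOAT, 0.3, 0.0, 1.0, 0.01)
DEFINE_UI_PARAMS(radius, Radius, DCTLUI_SLIDER_INT, 1, 1, 4, 1)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            __TEXTURE__ p_TexR, __TEXTURE__ p_TexG, __TEXTURE__ p_TexB)
{
    float sumR = 0.0f, sumG = 0.0f, sumB = 0.0f;
    int count = 0;

    // Box-average a small, edge-clamped neighbourhood around the pixel.
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int x = p_X + dx;
            int y = p_Y + dy;
            if (x < 0) x = 0;
            if (x > p_Width - 1) x = p_Width - 1;
            if (y < 0) y = 0;
            if (y > p_Height - 1) y = p_Height - 1;
            sumR += _tex2D(p_TexR, x, y);
            sumG += _tex2D(p_TexG, x, y);
            sumB += _tex2D(p_TexB, x, y);
            count++;
        }
    }
    float n = (float)count;

    // Blend original and blurred: amount = 0 leaves the image untouched.
    float r = _tex2D(p_TexR, p_X, p_Y) * (1.0f - amount) + (sumR / n) * amount;
    float g = _tex2D(p_TexG, p_X, p_Y) * (1.0f - amount) + (sumG / n) * amount;
    float b = _tex2D(p_TexB, p_X, p_Y) * (1.0f - amount) + (sumB / n) * amount;
    return make_float3(r, g, b);
}
```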

Also on the subject of emulating lens characteristics (of lenses we cannot get our hands on), I need to get more time with Lewis Potts’ new and exciting “Lens Node” plugin (a shame the free trial seems to have a limited number of lenses available to emulate – why? Just watermark and get on with it!).

I might also look into effing around more with the bottom of the image, aka how to achieve more nuance in the blacks. However, I’m OK with a little “crunch” in there (or I guess “compression” would be a more technically correct term for what I usually have going on at my low end – not rolling off to black in a linear way), so I’m not sure I need to “see into my blacks” more – but it’s worth experimenting with more. It’s on my to-do list under custom curves experimentation, aka creating more looks and feelz using just my own custom curves and no plugins/LUTs/dctls/etc.
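To illustrate what I mean by “crunch” versus a linear roll-off, here’s a toy DCTL of my own (all names and values arbitrary, purely for illustration): below a threshold, the shadows get compressed toward black with a power curve instead of descending linearly:

```c
// Toy "crunchy toe": compress shadows below `toe` with a power curve
// (crunch > 1 pushes them down toward black); everything above is untouched.
// The curve is continuous at x == toe. Defaults are arbitrary.

DEFINE_UI_PARAMS(toe, Toe, DCTLUI_SLIDER_FLOAT, 0.15, 0.0, 0.5, 0.005)
DEFINE_UI_PARAMS(crunch, Crunch, DCTLUI_SLIDER_FLOAT, 1.5, 1.0, 3.0, 0.01)

__DEVICE__ float crunch_toe(float x, float t, float c)
{
    if (t <= 0.0f || x <= 0.0f || x >= t) return x; // only touch 0 < x < toe
    return t * _powf(x / t, c); // remap [0, toe] onto itself, non-linearly
}

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    return make_float3(crunch_toe(p_R, toe, crunch),
                       crunch_toe(p_G, toe, crunch),
                       crunch_toe(p_B, toe, crunch));
}
```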

Also, one of these days I’ll add Thatcher Freeman’s brilliant RGB Chips utility dctl to my default node tree – or at least as a convenient power grade.

Update: Writing this, I just did add “RGB Chips” to my default Timeline node tree, default OFF. It’s a great utility dctl for checking what your grading and look / print are actually doing to your image. In my experience, it’s also great for auditioning unknown “found” LUTs for unwanted side effects before you apply them – you instantly get a better, more intuitive understanding of what they do to the image.

Some 2025 footage:

The only light source in addition to the natural / incidental light is a GVM SD600D-II (also gotten at a YOLO bargain, suspiciously priced as if to accelerate the end of western civilisation), native reflector attached, pointed at the ceiling. This wasn’t a staged thing, pure spur of the moment, so I didn’t have time to go fetch & rig any negging / shaping options before the window of opportunity closed, OK Patrick?

Shot on my BMD MCC, vintage s16mm Pentax A110 50mm f2.8 glass with 2 stops of Lee Filters ProGlass IRND. In Resolve I used my standard custom node trees, including OpenDRT (High Contrast preset), MTF emulation, my own “Lens Degrader” power grade, etc.:

Oh, I almost forgot – I did also eff around a bit with JPLog2 (free dctls that are supposed to offer a working color space with higher dynamic range, like 2-3 stops more than DWG). It’s mostly for new ARRIs, which is arguably going to do f*all for my legacy BMD MCC footage – and I don’t quite understand how you can go to a wider color space *AFTER* you’ve already done an IDT/CST to the “inferior” working color space, but what do I know… I don’t get it. And it’s not like DWG isn’t already wide enough to accommodate Blackmagic Design’s own 16-stops-of-dynamic-range cameras, is it? IS IT? Because that seems to be what JPLog2 is trying to sell me on. I’m not buying it, though. Nah. Uh-uh. Nope.

Above: the JPLog2 to ACEScct DCTL applied right after the IDT CST (BMD Film Gen 1 / BMD Film to ACES (AP1) / ACEScct), and the JPLog2 to ACEScct DCTL right before the ODT OpenDRT (ACEScg / ACEScct out to rec709 / Gamma 2.2, using the High Contrast / Creative White D65 presets). (I did regrade the footage – the main differences being not using “Prime Grade” and making extensive use of the native ColorSlice – and added some power windows to take down the exposure in the background, aka increase the ratio between key and background, so it’s not directly comparable to the DRT examples above.)
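For clarity, this is how I read that chain (note: the direction of the first DCTL instance is my assumption – I’m guessing it runs inverted, ACEScct to JPLog2, so the grade happens in JPLog2):

```
IDT CST:  BMD Film Gen 1 / BMD Film  ->  ACES (AP1) / ACEScct
DCTL:     ACEScct -> JPLog2              (assumed inverted, to get INTO JPLog2)
          ...grade in JPLog2...
DCTL:     JPLog2 -> ACEScct
ODT:      OpenDRT, ACEScg / ACEScct  ->  rec709 / Gamma 2.2
          (High Contrast / Creative White D65 presets)
```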

AI, Failure, innovation, open source, Rants, startup

On the OpenAI coup

I of course have zero special insight into the matter (other than mostly publicly available information – growing by the hour – and having lived some), but that has never deterred me from mouthing off my opinions (and being substantially more right than wrong) in the past – so here are my two cents on the latest OpenAI developments that you never asked for.

The Doomer vision of General AI, created by generative AI

The board (read: Ilya Sutskever, Adam D’Angelo – who is also behind Poe, a ChatGPT competitor, of all things to have as an independent director – Toner, and McCauley) has had plenty of time to explain, but has so far chosen not to. Until I learn otherwise, I’ll assign ego and zealotry – two of the universe’s most destructive forces – as the reasons for this mess.

As long as there is no public explanation from the board as to why exactly they fired co-founder and CEO Sam Altman, I am going to assume the reason is one big f*ing nothing burger — and they know it. (Happy to change my opinion if facts to the contrary appear — which I think is unlikely at this point, but it would make for a great plot twist.)

I guess the most likely scenarios so far are a) the decel faction of the board getting cold feet about the current speed of things, b) D’Angelo feeling snubbed by ChatGPT’s new AppStore stealing Poe’s thunder, or c) a combination of both, with or without an active coup conspiracy — although I lean heavily in the active conspiracy direction; envy, spite, and zealotry (aka ego) are, in my experience, the usual suspects when it comes to the motivation behind cataclysmically stupid decision making.

Now, OpenAI is not operating in a vacuum. OpenAI does not have a monopoly on creating AGI; other projects are trying to develop AGI — safely or otherwise. If AGI is going to happen, it will happen — with or without OpenAI, and with or without OpenAI playing it safe & slow. Applying game theory would tell you OpenAI is playing a heavily biased game (which I guess you could also call hubris or ignorance — or, if you’re a decel, “acting on [your religious sci-fi fantasy cult-like] conviction”). OK, so now someone else has developed AGI — what now, OpenAI? Congrats, you’ve become irrelevant. Who cares if you warned the world, who cares if you developed safety guidelines, who cares if you developed a “safe”, neutered AGI now? I think this is also why I don’t lean towards “decel” as the real or only motivation behind the ousting, but it may have been used as an argument by the coup faction to sway the zealots on the board into making an insanely stupid decision.

Also, applying Game Theory 101 to the aftermath of ousting an “everybody’s darling” CEO with no outside or bottom-up support indicates a kind of thinking (or perhaps a complete lack of thinking?) that they could somehow survive this (which I guess means they have the legal paperwork to back them up — for now). And to be fair, if the direction for OpenAI is to regress into a [somewhat irrelevant] research organisation, I think they’ll achieve it quicker than they could ever dream of — and in the process, I also think they’ve already potentially achieved one of the fastest & largest destructions of value in history — at least for now.

Update: I’m not the only one noticing the board flunked Game Theory 101 – Kindergarten Edition

Also, as a Norwegian, I’ll never pass on an opportunity to dunk on Swedes ;), so I guess I should also assign some blame and shame to the harebrained economist Nick Boström for fueling the AGI danger scare – I’m only half joking, though: Boström has been highly influential in turning Musk and others into doomers with his (IMO stupid, and in blatant disregard of game theory basics) book “Superintelligence”. And the kicker: Boström has now come to regret his scaremongering doom & gloom. Heuristic: never believe a word an economist says. Ever.

Now, Sam and Greg “joining” MSFT sounds like MSFT creating an interim CYA (Cover Your Ass) vehicle (aka a story for the moment, not necessarily anything set in stone yet) to protect their AI interests (and more importantly, their share price) until the situation gets sorted out. MSFT is not exactly known for creating great products and daring innovations, so I don’t see those two thriving under the MSFT (or any) corporate yoke in the long run. I’d bet against it.

I think MSFT definitely won the narrative so far, but I’m not so sure about actual value (did they sign / formalise anything yet?). Anyway, MSFT with Nadella taking a stance of seemingly extreme optionality was a good move, perhaps the only move, to preserve any upside potential.

Update 2: Proof that the clown car will keep on clowning: Ilya Sutskever has regrets. Regrets about the actions of the board that he was supposedly in control of. Give me a fucking break. I just can’t… How was / is any of this possible?

That said, hindsight of course having perfect vision, there was bound to be an enormous amount of friction building between the intentions behind the founding of OpenAI as a safety-first non-profit and the commercial, entrepreneurial ambitions and speed of someone like Altman as CEO — not to mention between egos, and perhaps from having what seems like a competitor as an independent director on the board.

I can’t remember who said it on X/Twitter, but I agree with the sentiment that the board might have remained a floppy kludge that Altman, as CEO and co-founder, didn’t pay too much mind to tightening up, since “no CEO was ever fired by a board for being too successful”.

Now, what’s next for me as an OpenAI / ChatGPT customer? What can we expect going forward? Neutered nanny ChatGPT v4 forever? The greatest comeback since Steve Jobs? I think it’s time for me to look into the current state of the alternatives again, regardless.

I don’t feel this soap opera has been cancelled just yet, though. I expect at least one more season of twists and turns. Stay tuned — I have a feeling The Sam & Greg Show can still surprise us.

Update: Yes, they did. Although, not so much of a surprise as a big ado about nothing in the end.

innovation, Lessons Learned, Rants

Talking about corporate innovation and the pandemic

Recently I was on Fabian Böck‘s BOECKBX podcast and talked a bit about corporate innovation and the effects of the pandemic on businesses – including my own.

Have a look and listen.

TL;DR

Based on my own experience working in outsourced innovation with governments, organisations and some of the largest companies in the world on and off between 1996 and 2010, I do not think it is a good idea to outsource (business model) innovation.

That’s the whole premise of my company, +ANDERSEN & ASSOCIATES.

From the +ANDERSEN mission statement:

“… we enable companies to manage and run innovation for tomorrow inside their own company, using their own people today.

Because innovation in your company will never happen by outside consultants.

It has to come from your most valuable assets – the people you already have on the inside.”

Rants, speaking, startup

On the Gründerstipendium.NRW so far

Panel debate about the Gründerstipendium.NRW at Startplatz Düsseldorf, April 1st 2019 (Image © MWIDE NRW/R. Sondermann)

Yesterday I gave feedback on a panel with Andreas Pinkwart, Minister for Economic Affairs of the state of NRW, and others, based on my experience as a juror for the Gründerstipendium.NRW scholarship – the state-driven program aiming to help more people start new innovative businesses in the German state of North Rhine-Westphalia by awarding EUR 1.000,- monthly for 12 months to new innovative companies and founders.

My main feedback so far was:

1. Better, more transparent labelling or identification of each jury’s main competencies and domain experience (there are several juries spread across the state, each with its own set of main competencies). This would make it easier for a startup or founder to select the jury best able to judge their degree of innovation and viability, instead of having to travel criss-cross the whole state until they luck into a jury that understands their domain.

And by that I don’t imply that some juries are better than others. What I am saying is that most of them have different expertise and experience. Judging a new retail store concept is not the same as judging a new nanoparticle coating material, and judging the viability of a new social network for tweens is a whole different ball game altogether.

It would be better if applicants could identify, in advance, which jury is best suited to evaluate their innovation – and thus where they should apply.

2. A stronger network between alumni, coaches, and supporters to facilitate swift help and to avoid new founders making the same basic beginner mistakes over and over again. A “private” social network à la Slack, MS Teams, or Facebook for Work springs to mind as a way to facilitate this. To avoid getting caught up in ministerial red tape, I’d suggest the participating networks set this up by and among themselves.

3. A regulation like an official cool-down period to limit how many times a founder or startup can apply within a set amount of time could be helpful. E.g. x rejections in y time = z cool-down time before a new application is accepted, giving you time to work on your metrics or presentation. Maybe this will improve by itself if the first point is addressed (see above).

4. On a personal note, I shared that I feel some ethical ambivalence when recommending a founder or startup for the scholarship that has already received funding. (Side note: more of a libertarian than a liberal, I have an ethical reservation about government handouts for private enterprises. Full stop. Incentives – M’kay. Handouts – Nyah.) In my view, the market has already voted for a funded company; they shouldn’t be needing this on top, so I find it questionable to give them government handouts out of my tax money (because let’s face it – this scholarship is the German taxpayers’ money at play) – or, put another way, to support leeches gaming the (public) system.

I understand and accept the counter-argument that those cases could count as “collateral damage”, and that if they are successful they will return the tax money invested many times over.

What I do not understand and do not readily accept is the counter-argument that limiting the eligible applicants to founders and companies who have not already taken on investment – not previously raised a round – would complicate the application process. You just need to look at the plethora of criteria already imposed on applicants to see that that argument is more rhetoric than logic. A simple yes/no checkbox on the application form, relying on scout’s honour, would suffice.

Now that wouldn’t be very complicating, would it?
