If Dunning and Kruger had done that, they might have found that the subject had already been examined in game theory in the '90s, where it's known as the problem of incomplete information: a player has to make decisions without knowing everything that's relevant.
Well, as Lydiot pretty much nailed it: that's what probability theory is for. I don't know for a fact what the weather will be in Munich on the 21st of August next year. It is entirely possible that there will be freezing rain. But if I had to commit now to what I'll wear on that day, I would pick light summer clothes anyway, because the probability of summer weather is much higher than the probability of freezing rain.
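A toy expected-utility calculation makes the clothing bet concrete. All the probabilities and payoff numbers here are made up for illustration, not actual Munich climate data:

```python
# Toy expected-utility calculation for the clothing decision above.
# All probabilities and payoffs are assumed, purely for illustration.
P_WARM = 0.95        # assumed chance of warm weather in Munich in late August
P_FREEZING = 0.05    # assumed chance of freezing rain

# Utility of each outfit under each weather outcome (arbitrary scale)
utility = {
    "summer clothes": {"warm": 10, "freezing": -20},
    "winter clothes": {"warm": -5, "freezing": 5},
}

def expected_utility(outfit):
    u = utility[outfit]
    return P_WARM * u["warm"] + P_FREEZING * u["freezing"]

best = max(utility, key=expected_utility)
print(best)  # summer clothes: 8.5 expected utility vs -4.5 for winter clothes
```

Even though freezing rain would be much more unpleasant in summer clothes, its low probability means summer clothes still win on expectation.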
In game theory, when faced with incomplete information, you adopt a probabilistic strategy: you choose certain courses of action with certain probabilities, turn by turn. So we know how to solve that one; there's no mystery. We know what the best strategies in incomplete-information situations are (we might not be able to estimate the probabilities correctly, but that's a different issue).
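The classic textbook illustration of why you'd play with probabilities at all is matching pennies (it demonstrates mixed strategies in general, not incomplete information specifically). The only optimal strategy is to randomize 50/50, and any deterministic choice can be exploited:

```python
# Matching pennies: the "matcher" wins +1 if both coins show the same
# face, loses -1 otherwise. The optimal strategy is genuinely random.

def expected_payoff(p, q):
    """Matcher's expected payoff when the matcher plays heads with
    probability p and the opponent plays heads with probability q."""
    p_match = p * q + (1 - p) * (1 - q)
    p_mismatch = p * (1 - q) + (1 - p) * q
    return p_match * 1 + p_mismatch * (-1)

# At p = 0.5 the matcher's expected payoff is 0 no matter what q is:
for q in (0.0, 0.3, 0.7, 1.0):
    assert abs(expected_payoff(0.5, q)) < 1e-9

# A deterministic strategy can be fully exploited: always-heads (p = 1)
# loses every round against always-tails (q = 0).
print(expected_payoff(1.0, 0.0))  # -1.0
```

The 50/50 mix is the unique equilibrium here; that's the sense in which "choose certain actions with certain probabilities" is a solved problem.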
The problem with bringing this argument into an online discussion is that people who don't know probability theory fail to recognize what it might be good for. They argue that if I can't prove that X is absolutely right, adopting premise Y makes just as much sense, even if X has a 99% chance of being right and Y a 1% chance. They complain about all the useless tech talk, and in general they argue that it's all bullshit because after all that babble I still can't tell them what's definitely right and what's definitely wrong.
Dunning-Kruger isn't about dealing with incomplete information; it's about the step before that: recognizing the limits of your information. If you don't do that, you never adopt a strategy for dealing with incomplete information, because you hold the belief that you are in possession of complete information.
Which actually proves the Black Swan theory, because it implicitly says that events are unpredictable when the relevant information is unknown.
I don't need a line of math to figure that one out... but that's not the Black Swan thing.
At its core, the Black Swan thing is a criticism of economic modeling: models use the wrong probability distributions for risk assessment because that makes the math simpler. You then systematically underestimate rare events, believing they might occur every 100,000 years while under the correct distribution they actually occur every 100 years.
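A rough sketch of how badly a thin-tailed model can underestimate extremes. The choice of a standard Gaussian versus a Pareto with tail exponent 3 is an assumption for illustration, not a claim about any particular economic model:

```python
import math

# Compare the probability of a ">= 10 units" extreme under a Gaussian
# tail versus a power-law (Pareto) tail. Both distributions and the
# threshold are assumed purely for illustration.

def gaussian_tail(k):
    """P(Z > k) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=3.0):
    """P(X > k) for a Pareto distribution with minimum 1 and tail exponent alpha."""
    return k ** -alpha

k = 10.0
print(gaussian_tail(k))  # ~7.6e-24: effectively "never happens"
print(pareto_tail(k))    # 1e-3: roughly once per thousand observations
```

Same threshold, a difference of about twenty orders of magnitude, which is exactly the "every 100,000 years vs. every 100 years" mistake: the event isn't rarer in reality, the model just assumed a tail that dies off too fast.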
However, natural scientists know these things... The probability of a nuclear fusion reaction in any single collision of two hydrogen nuclei in the sun is astronomically small. The overwhelming majority of the time, nothing happens. But they collide so often, and there are so many of them, that in the end the whole nuclear reaction is sustained by these tiny probabilities. So the tails of a distribution matter.
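The arithmetic behind "tiny probability times enormous N" is trivial but worth seeing once. The numbers below are placeholders to show the shape of the calculation, not actual solar physics values:

```python
# Back-of-envelope: a tiny per-event probability times an enormous
# number of events still yields a huge expected count.
# Both numbers are assumed for illustration, not measured solar values.
p_fusion = 1e-26            # assumed chance a single collision leads to fusion
collisions_per_sec = 1e38   # assumed number of collisions per second

expected_fusions = p_fusion * collisions_per_sec
print(expected_fusions)  # on the order of 1e12 fusion events per second
```

An event that is "impossible" for any single collision happens a trillion times a second in aggregate, which is the same reason tail events matter in risk models.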
To argue that the unknown unknown can screw everything up is a truism, but you can't do anything about it in your risk assessment. A Kamazozzupf event might destroy Earth tomorrow, but since we don't know what it is (it's so rare it has never been witnessed, so science knows nothing about it), how are we supposed to guard against it?
So there may be situations in which you are well aware that you're not in possession of all the relevant information but still decide not to do anything about it, because there's really no sensible course of action available; you just have to accept some risks. And that's very different from Dunning-Kruger.