The world of culinary arts needs to be tested with scientific rigor.
Kenji, America’s Test Kitchen, Serious Eats, Ethan Chlebowski, and many others on YouTube post the results of their experiments on various techniques, like boiling eggs or cooking steaks. They do a decent job of optimizing variables under controlled conditions.
Where they fail is their sample size. They don’t repeat their experiments with sufficient technical or biological replicates before drawing firm conclusions. The reason, I suspect, is some combination of the time and expense of labor and reagents.
The reason they get away with drawing conclusions from insufficient data (too few samples, too few technical and biological replicates) is that there is no process of peer review prior to publication on YouTube or in print. This is the equivalent of publishing in non-peer-reviewed literature, which is essentially not science.
The people doing these experiments are very intelligent, some have scientific backgrounds, and they definitely mean well. However, even if they’re sure they’ve answered certain questions to their own satisfaction, and have published thick cookbooks with their results, they’ve left the door open to doing all that work only to draw incorrect, or at least imprecise, conclusions.
Chief among the areas for improvement is the issue you’ve brought up: the evaluation of the cooking results is generally not done by masked graders.
Ideally, the cooking results should be evaluated by trained, masked graders. The preparation of the food probably can’t be masked from the cook, but at least the experimental results should be assessed by graders who are masked to the preparation being studied, and who are also trained to detect subtle taste differences. Furthermore, experiments should be repeated several times (technical replicates) and run with several different batches of raw materials, such as chickens, cows, eggplants, and yams from different farms (biological replicates).
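To make the design concrete, here is a minimal sketch of how such a masked, replicated tasting could be organized. Everything specific in it is assumed for illustration: two hypothetical preparations ("brine" vs. "no-brine"), three batches standing in for biological replicates, three cooks per batch as technical replicates, and randomly generated scores in place of real grader data.

```python
import random
import statistics

random.seed(0)

preparations = ["brine", "no-brine"]   # assumed treatments being compared
batches = ["farm1", "farm2", "farm3"]  # biological replicates (different sources)
cooks_per_batch = 3                    # technical replicates (repeated cooks)

# Enumerate every sample, then assign each an opaque random code so the
# graders are masked to which preparation they are scoring. Only the
# coordinator keeps the code-to-sample key.
samples = [(prep, batch, cook)
           for prep in preparations
           for batch in batches
           for cook in range(cooks_per_batch)]
codes = random.sample(range(100, 1000), len(samples))
masked = dict(zip(codes, samples))

# Graders score each coded sample; fake Gaussian scores stand in for data.
scores = {code: random.gauss(7.0, 1.0) for code in masked}

# After all grading is done, unmask and summarize per preparation.
by_prep = {prep: [] for prep in preparations}
for code, (prep, batch, cook) in masked.items():
    by_prep[prep].append(scores[code])

for prep, vals in by_prep.items():
    print(prep, "mean score:", round(statistics.mean(vals), 2), "n =", len(vals))
```

The point of the random codes is that a grader never knows which treatment is in front of them; the point of crossing batches with repeated cooks is that any difference between the two means can't be blamed on a single lucky chicken or a single good day at the stove.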