Two extremely influential theories of economics rest on a mistaken understanding of statistics.
Expected utility theory was created, in part, to solve a problem that doesn't exist in the way people thought it did. At the time, knowledge of statistics was limited; many key concepts were discovered only later. It shouldn't surprise us that the theory makes false predictions.
One attempted correction of EUT is prospect theory (the foundation of behavioral economics). It doesn't fix the initial mistake but instead adds further assumptions and logic to explain the contradicting observations. Consequently, it is also deeply flawed.
That leading economists ignore a foundational error in their theories highlights how deeply flawed economics is as a science. These theories are not really falsifiable, for many reasons.
Which cognitive bias in understanding statistics (there are actually quite a few of them) are you talking about for EUT?
Also note:
Behavioral economics moved beyond basic prospect theory a long time ago (prospect theory dates to the 1970s),
and Karl Popper's principles are not necessarily the gold standard:
"The origin of falsification was simple: Popper realized that no amount of data can really prove a theory, but that even a single key data point can potentially disprove it. The two scientific paradigms which were reigning then - quantum mechanics and relativity - certainly conformed to his theory. Physics as practiced then was adept at making very precise, quantitative predictions about a variety of phenomena, from the electron's charge to the perihelion of Mercury. Falsification certainly worked very well when applied to these theories. Sensibly Popper advocated it as a tool to distinguish science from non-science (and from nonsense).
But in 2014 falsification has become a much less reliable and more complicated beast. Let's run through a list of its limitations and failures. For one thing, Popper's idea that no amount of data can confirm a theory is a dictum that's simply not obeyed by the majority of the world's scientists. In practice a large amount of data does improve confidence in a theory. Scientists usually don't need to confirm a theory one hundred percent in order to trust and use it; in most cases a theory only needs to be good enough. Thus the purported lack of confidence in a theory just because we are not one hundred percent sure of its validity is a philosophical fear, more pondered by grim professors haunting the halls of academia than by practical scientists performing experiments in the everyday world.
Nor does Popper's exhortation that a single incisive data point slay a theory hold any water in many scientists' minds. Whether because of pride in their creations or because of simple caution, most scientists don't discard a theory the moment there's an experiment which disagrees with its main conclusions. Maybe the apparatus is flawed, or maybe you have done the statistics wrong; there's always something that can rescue a theory from death. But most frequently, it's a simple tweaking of the theory that can save it. For instance, the highly unexpected discovery of
CP violation did not require physicists to discard the theoretical framework of particle physics. They could easily save their quantum universe by introducing some further principles that accounted for the anomalous phenomenon. Science would be in trouble if scientists started abandoning theories the moment an experiment disagreed with them. Of course there are some cases where a single experiment can actually make or break a theory but fortunately for the sanity of its practitioners, there are few such cases in science."
https://blogs.scientificamerican.com/the-curious-wavefunction/falsification-and-its-discontents/
Based on your logic above, Pedro, particle physics (and most of modern science) would also be deeply flawed.
Not that economics is perfect, but it's not flawed for the reasons you mentioned.