via Otto Scharmer
The premise behind “field structures of attention”: we can’t solve level-4 problems with level 1–3 mechanisms. This is consistent with the general theory of complex adaptive systems.
(Also consistent with Gödel’s incompleteness theorems, translated from mathematical logic to complex systems and computer architecture: a computer of a given size can model only a smaller computer; it cannot model itself. If it modeled a computer of its own size and complexity, the model would fill it entirely, leaving no resources free to actually run the model. A corollary: we may never fully understand the human brain by using the human brain to understand it.)
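The resource argument above can be sketched as a toy counting exercise (this is an illustrative sketch, not Gödel’s formalism; the memory figures and the `can_model` helper are hypothetical):

```python
# Toy illustration of the self-modeling limit: a host machine with a fixed
# memory budget tries to hold a bit-for-bit model of another machine. The
# interpreter that steps the model consumes part of the budget, so the
# largest machine the host can fully model is strictly smaller than itself.

HOST_MEMORY = 1024          # total cells available to the host (arbitrary units)
INTERPRETER_OVERHEAD = 64   # cells the simulation loop needs for bookkeeping

def can_model(target_memory: int) -> bool:
    """A full-fidelity model must store every cell of the target machine,
    plus the interpreter that executes the model."""
    return target_memory + INTERPRETER_OVERHEAD <= HOST_MEMORY

assert can_model(HOST_MEMORY - INTERPRETER_OVERHEAD)  # a smaller machine fits
assert not can_model(HOST_MEMORY)  # a machine of the host's own size does not
```

The point of the sketch is only that the overhead term is always positive: whatever resources the modeling process itself consumes are unavailable to the model, so full self-modeling never quite closes.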