What programmers can learn from economists: fundamentals and models
or: The frustrating atechnicality and rigidity of programmers
Economics and computer science, in some respects, share similarities. Both are heavily prone to assuming can openers, suffer from scarce reproducibility, make it easy for laymen to deceive themselves into believing they possess knowledge, and are especially susceptible to politicization.
Where they differ sharply is that economists are starkly concerned with fundamentals, starting assumptions and axioms, whereas computer programmers (and often computer scientists) are hopelessly devoid of any. Economics is divided into many schools, each leaning neoclassical, heterodox or otherwise. Thus, an economist will often operate under a Grand Theory (much as one might design a system under a Grand Abstraction) and derive their conclusions differently depending on the operational framework. Compare Minsky, Mises, Sraffa, Okishio and Friedman. Monetary theory alone has broadly endogenous and exogenous interpretations, each with its own branches carrying differing consequences for the role of central banking and of private and public debt.
Where economics and other disciplines tend to be structured bottom-up, programming is often top-down. In fact, top-down is too generous. It is more akin to arbitrarily traversing floors of a leaning structure, which also happens to be a purgatorial realm where inhabitants are continuously reincarnated in a vicious cycle of samsara.
Programmers are not good at proposing and critiquing models, and therein lies a weakness. Programmers are also highly ahistorical (economists, in contrast, may be a bit too reliant on history, hence the Lucas critique in response). This leaves them unable to pinpoint prior art, to tell what is and is not innovative, or to learn from case studies, and also unable to generalize patterns, trading generalization for specific comparisons to the tools of the trade. Where the economist can dissect a model based on the relations between its variables, the validity of its axioms, its definition of terms, its departure from other interpretations and so forth, the programmer, blind to any general theory, has no recourse but extremely flimsy heuristics drawn from observing the external interface of a software architecture they happen to use. Few are able to, for instance, decouple the idea of a file from a file system, or a file system from a hierarchical file system, a file from an on-disk file, consider how a file might be modeled in terms of a single-level store, ask why files would even be desirable compared to data structures like boxes, or whether a file must obey an inode, and so forth. Thus the cruel irony is that in a science revolving around layers of indirection and abstraction, all too many programmers are incapable of thinking abstractly enough and take for granted the models presented by the systems they use.
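To make the file example concrete, here is a minimal, purely illustrative sketch (the names `File`, `MemoryFile` and `FlatStore` are my own inventions, not from any real system) of a “file” stripped to its essence - a stream of bytes - with no hierarchy, no inode and no disk in sight:

```python
from abc import ABC, abstractmethod

# A "file" reduced to its essence: a readable, writable sequence of bytes.
# Nothing here demands a hierarchical namespace, an inode, or a disk.
class File(ABC):
    @abstractmethod
    def read(self) -> bytes: ...

    @abstractmethod
    def write(self, data: bytes) -> None: ...

# One possible backing: bytes held in memory, closer in spirit to a
# single-level store than to an on-disk file.
class MemoryFile(File):
    def __init__(self) -> None:
        self._data = b""

    def read(self) -> bytes:
        return self._data

    def write(self, data: bytes) -> None:
        self._data = data

# A file "system" that is merely a flat mapping of names to files -
# no directories, no paths, no metadata blocks.
class FlatStore:
    def __init__(self) -> None:
        self._files: dict[str, File] = {}

    def open(self, name: str) -> File:
        # Create the file on first open; reuse it thereafter.
        return self._files.setdefault(name, MemoryFile())

store = FlatStore()
f = store.open("notes")
f.write(b"files need not live on disk")
print(store.open("notes").read())
```

Nothing about the `File` interface forces the `FlatStore` backing; a hierarchical, on-disk, inode-based implementation is just one point in the design space, which is precisely the distinction the paragraph above argues most programmers never draw.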
The reason for this lack of a priori thinking or generalization from first principles is that, for the most part, programmers do not pick up their craft based on any principles whatsoever. Moreover, a lot of programmers consider it total anathema to even have any. Lack of any measurement, elegance or rigor (and perhaps even an active mismeasurement) is heralded in the form of a philosophy that encourages people to “move fast and break things” (where “moving fast” means reinventing the flat tire, not performing any research or innovation) and to “do whatever seems to work” (for a low threshold of “works”).
Underpinning this is an intense subjectivism and anti-intellectualism. It is virtually impossible to properly critique a piece of software. Not because there are intrinsic barriers to doing so, but because much of the audience reading such a critique either has insufficient background and derides any technicality as obscurantism, or will rebut with slogans like “It works for me!” and a knuckleheaded superstition that there are no truths and facts in CS, only differing and mostly equally valid opinions. With a climate like this, why would anyone even bother pursuing such an endeavor? They know well enough they will not be taken seriously. Programmers’ willingness to butcher terms with established meanings (e.g. “multitasking” and “real-time”) only complicates things further by letting people endlessly drift away on semantics, even when prior definitions were given. Couple this with a culture that fetishizes “pragmatism” and demonizes academia, creating a disincentive to be informed of terminology in the first place, and you get disastrous results.
Speaking of the latter, the fetishistic devotion to “pragmatism” is quite baffling. It is never defined; rather, the pragmatic solution to a problem tends to be the one that best reaffirms the programmer’s implicit mental model (all too rarely are they capable of explicitly reifying it and then analyzing others). The inferior will trump the superior in the name of pragmatism. “Academics” are the ivory tower snobs allegedly unaware of the industry’s problems, never mind that the pragmatists are stuck reinventing what academics perfected decades ago and have long since rightfully deemed obsolete. Not that many are inclined to even read the literature.
The industry is such that a programmer can make a living starting from the zeitgeist of their time (Nygaard-style, class-based OO), using the tools of the trade (PHP, Apache, MySQL), maturing into the trends of their grace period (DevOps, containers, SPAs, ES6), having a slight bout of heterodoxy still firmly within the echo chambers of the common industry (pure functional programming, unikernels), and encountering throughout their career next to no instances of true innovation, at least none in wide use. Instead, they will deceive themselves into believing that they are the pioneers of a fast-moving and constantly evolving tech industry where innovation is rampant and old ideas are rapidly dethroned, believing that framework, tool and methodology churn is somehow innovative - a view only someone who seldom strays outside the industry bubble can hold.
An oddity is that it seems a lot of programmers are content in furthering their self-destruction through the mass promotion of the “learn to code” movement. Note the focus on “code”. Writing code is the rote work of computer science and software design. Anyone can pick up an ALGOL-esque proof notation quickly. Understanding the fine aspects of software and computation takes devotion like any other field. Yet legislators, lobbyists, entrepreneurs and J. Random Hackers are clamoring with the supposed urgency that “coding is the new literacy” which all youth must learn. This rests on the fallacious assumption that learning the ALGOL flavor of the week will grant students newfound understanding of the software-driven world around them. It most evidently will not, any more than learning English makes one understand linguistics. A false sense of knowing is worse than not knowing. Forcing a notoriously inefficient public schooling system into teaching programming (commonly and incorrectly conflated with computer science by the ones making proposals) will end badly. Contrary to proponents’ claims, there are in fact subjects more important to a 21st century layman than programming. The two I can name are statistics and personal finance. Ultimately, the posturing over learning to code only devalues the field further by portraying its end goal as the churning out of executable symbols, and will lead youth into the rat race of the software industry.
Regarding the aforementioned rat race, one of the potent forces driving the vicious cycle is startup culture. The entrepreneurial hacker and the MIT-style hacker are two different beasts. The entrepreneur sees all through the lens of a cost function, the result being the relegation of fault tolerance and core engineering discipline to cost centers. A culture that encourages fast iteration to the point where it is more of an unguided sprawl. A culture that heralds the minimum viable product, where the viability is strongly in question. Startups overwhelmingly focus on end user-facing web applications or tiered web-based SaaS, and hence have helped proliferate “hackathons” (not the venerable OpenBSD-type hackathons, but thinly veiled public recruitment drives) and boot camps. Accelerated by the rise of commodity cloud platforms, this has given rise to the DevOps movement. A movement that claims many noble goals, but in reality is a pompous term for a sysadmin who cargo cults from a toolbox, thinking only in terms of rigid, specific programs (Puppet, Ansible, Capistrano, Fabric, Docker, Vagrant…) and not general ideas or abstractions - a sysadmin who does not know their stack. A lot of programming activity here ends up being the gluing of discrete black-box components. Consequences include the stagnation of systems software research (despite a few players like rump kernels or MirageOS gaining some attention, it is still as dead as Pike proclaimed it in 2000) owing to it not being a revenue generator, and the death of precise technical language in landing pages for software targeted at programmers.
Instead of concrete details, most DevOpsware and the like relies on vague smoke-and-mirrors descriptions aimed at drumming up a sort of problem-reaction-solution imminence, so as to entice the programmer to dive in without thought, much less any a priori analysis. Entire fields of research are boiled down to simplistic memes. Reliability becomes distilled to static typing, functional programming and the elimination of mutable state - nice, easily repeatable slogans, alas also just proactive techniques, and a few among many.
Programmers and other STEM folk are often keen on taking jabs at philosophy. Yet programmers, at least, might do themselves a great service by studying it. If philosophy involves questioning things to absurdly high lengths, then programming must be questioning things to absurdly low lengths.
What does all of this mean? Many programmers would doubt there even are fundamentals. I find this naive. Of course there are, and they become increasingly elucidated as one studies the literature, though I must concede that the way computing is usually learned is incompatible with learning computing as a field. All of this knowledge is scattered across university FTP servers and directory listings, in conference papers and journals, some of it rotted away or locked behind paywalls. Above all, the crucial skills are systems thinking and a priori reasoning - being able to infer properties of a software architecture from its internal design, being keenly aware of its prime inefficiencies and judging it by holistic means, as opposed to simply observing its external API and settling for that. This is the minimum that must be employed for software debates to actually look like principled discussions with solid theoretical foundations, and not the tired flame wars, anecdotes and cliches that are the norm. One must also be aware of how the abstractions surrounding the software one uses shape one’s thinking for the worse; the question above of what a “file” is, is one of many such examples. Question everything, hence my philosophy recommendation.
Once we begin seeing true software thinkers, true software philosophers even (economics is just as much philosophy as science, and I believe computer science is too), who debate on the basis of theoretical system models, can cohesively discuss architectures in light of their internal workings, know the historical prior art, and can separate what is intrinsic to a problem domain from what is intrinsic to a particular implementation - then we will break samsara.