Economic Take from a Philosopher - An Attempt at a New Economic Model (OS)

Dear community,

I think I went into too much detail in my first thread ( RFC: Endogenous Parameterization for Post-Asset Economies (The Ontological Protocol v3.3) ).

It would probably have been better to start with an overview, the vision, and what led to the vision in the first place.

I will now try to make up for this here:

First of all, I am not a programmer, engineer, or anything of the sort. I am genuinely a philosopher and sociologist with a strong affinity for technology. So what I can contribute is logical analysis, deduction from ontological, epistemological, and ethical foundations, and evaluation on a sociological level.

In order to be as accessible as possible, I would like to start with the following hypothesis, which can be taken as metaphorically or as literally as fits your own worldview:

There are three hierarchically structured, interrelated “operating systems” of humanity:

  1. Physics: Non-negotiable, with an immediate and continuous effect –> the core on which everything else is based
  2. Biology: Manipulable, but very slow to change, with a medium- to long-term effect –> runs on physics
  3. Culture: Under constant negotiation, the only OS that runs exclusively on the biological OS ‘human’; extremely dynamic, with a short- and medium-term effect – rarely long-term –> runs on biology

The ‘code’ in which all these interdependent systems are written is ultimately probably nothing more than logic, mathematics, or perhaps even more fundamentally: logos.

Why am I writing these metaphorical introductory lines?

Because it may make it easier to understand how I arrived at my current diagnosis and what role the ‘ontological protocol’ I propose could play in it.

Since physics is non-negotiable and biology is extremely sluggish, but culture must be compatible with physics and biology in order to function (sustainably), I believe it is essential to take a closer look at culture.

It should be quite obvious that there is not just one operating system/culture in this world.
In addition to the countless ‘sub’cultures, I believe that until around 1990 there were four globally dominant cultural operating systems – two political and two economic.

  1. Democracy
  2. Communism/socialism
  3. Capitalism
  4. Planned economy

With the fall of the Soviet Union, operating systems 2 and 4 largely collapsed or lost so much significance that 1 and 3 were able to become hegemonic.

The problem with this is that without the diversity (competition) of 1 with 2 and 3 with 4, 3 was able to exercise its greatest advantage almost perfectly –> assimilation.

Capitalism assimilates all other operating systems incredibly well, like no other operating system before it, as long as they do not radically contradict it (as the combination of the two OSs communism/socialism and planned economy did).

In principle, the ability to assimilate is a true superpower of capitalism – almost all ‘sub’cultures can run on it. The problem, however, is that capitalism is a purely formal operating system (which is precisely the source of this superpower) –> i.e., it has no inherent ethics.

In the past(!), the operating system of democracy ‘loaded’ ethics onto capitalism, so to speak. However, after the end of its opponents, communism/socialism and the planned economy, capitalism was given ‘free rein’ to assimilate democracy. To date, this has led to a formalization (bureaucratization) of democracy, whereby democracy has gradually lost its ability to ‘ethicize’ capitalism.

And what do we do with that now? And how could ‘The Ontological Protocol’ help here?

The problem we face, as I see it, is this:

We have an operating system that has become so hegemonic that it runs almost always and everywhere, but is unable to sustain itself.

Why?

Because in the long run, it contradicts the two underlying operating systems of physics and biology – at least in its current form.

What capitalism lacks here is its ‘ethicization’. But democracy is failing here in real time. As an already heavily assimilated ‘shell’ of itself, it simply can no longer compensate for capitalism’s ethical gap.

And this is where the Ontological Protocol comes into play:

The attempt is to create a cybernetic protocol that no longer understands ethics as a kind of “catalogue of rules” (in the style of “make a wish”), but rather as an adaptation of capitalism to the underlying and hardly negotiable operating systems of physics and biology.

And I have attempted this adaptation, this feedback, using thermodynamics and information theory.

I have already carried out numerous simulations and always come to the same conclusion:

such (not necessarily this!) cybernetic capitalism could indeed be a kind of political-economic operating system that contains all the advantages of capitalism and at least greatly cushions its disadvantages.

But in order to validate this and, above all (if something of this kind is really viable), to realize it, I need you—the community!

Because, as I said, I am only a philosopher and sociologist. My skills lie in theory, structure, and systematics. I have already stretched myself far into areas where my abilities have long since ceased to be sufficient. But perhaps – so I hope – it is precisely this stretching that enables me to connect with you – the programmers and engineers.

I will hold the baton for as long as I have to. I don’t know whether it will be taken up and carried on. Nor do I know whether it is even worth carrying on.

It’s up to you to tell me.

And it’s up to all of us to write a new, viable operating system for this world.
If I’m wrong about ‘The Ontological Protocol’, then it’s back to the drawing board. And feel free to use me – if you want – for what I am: a philosopher and sociologist.

Maybe even a philosopher and sociologist can contribute something here.

Translated with DeepL.com (free version)

GitHub: GitHub - SkopiaOutis/ontologial-protocol: A Peer-to-Peer Causal Economy for Autonomous Agents and Humans

Hey - the problem is very much not economic. It is biological and genetic.

Basically, one cannot fix human behavior, since it is determined by humans having evolved from apes.

If humans had evolved from, say, dogs, society would be much different.

Unless humans are willing to use CRISPR to fix bugs like aggression, crowd-think, etc., the problem is unsolvable IMHO. Too many people have tried every social system in the past; it all ends up as a tribe of apes.

Thanks for your reply!

But what if there were an economic ‘fix’ for human behavior?

What if ‘greed’ could be ‘redirected’ into enabling action?

Did you read ‘The Ontological Protocol’ on the GitHub?

To my mind, the only possibility is to treat humans as ‘nodes’ within a greater system. Or maybe, even more precisely, as programs that run on a cultural OS, which in turn runs on a biological firmware.

From a protocol engineering perspective, the main constraint I see is observability and determinism.

TOP derives value and minting from concepts like structure, entropy reduction, or meaningful contribution. Even when approximated via compression metrics or thermodynamic analogies, these signals are difficult to make consensus safe. They depend on representation, encoding, preprocessing, and context, which means identical inputs can yield different results across nodes without any rule violation.
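
The representation-dependence point is easy to make concrete. The sketch below (the record and its field names are purely illustrative, not TOP's actual encoding) shows the same logical content yielding different LZMA-compressed sizes under two JSON serializations:

```python
import json
import lzma

# Hypothetical transaction record; field names are illustrative only.
record = {"from": "alice", "to": "bob", "amount": 42}

# Two byte-level representations of the *same* logical content.
compact = json.dumps(record, separators=(",", ":")).encode()
pretty = json.dumps(record, indent=4).encode()

size_compact = len(lzma.compress(compact))
size_pretty = len(lzma.compress(pretty))

# The compressed sizes differ even though the semantics are identical,
# so any rule keyed to compressed size is keyed to the encoding.
print(size_compact, size_pretty)
```

Unless the protocol canonicalizes the byte representation before measuring, two honest nodes can disagree without either violating a rule.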

Once a protocol relies on signals that require interpretation rather than direct state transition validation, it moves away from deterministic enforcement and toward judgment encoded in logic. That tends to introduce tuning surfaces, ambiguity, and implicit governance, even when fork based mechanisms are used.

From a technology standpoint, protocols tend to be most robust when they only enforce constraints that are externally observable, reproducible, and invariant. Concepts like meaning, contribution quality, or ethical alignment may be better suited for the platform or application layer, where interpretation and iteration are expected and acceptable.


First, I feel compelled to offer a sincere apology on behalf of my discipline regarding the last ~70 years of (mostly Continental) philosophy.

We have successfully obscured the concept of “meaning”, turning what should be a precise structural property into a purely subjective, hermeneutic, and mystical concept…:wink:

To be transparent: I come from the analytic tradition of Frege, Russell, and Whitehead.

These thinkers strove to ground meaning in rigorous logic, not intuition. In many ways, TOP is an attempt to finally execute their vision using modern computation (AIT).

So, when an engineer hears “meaning” and rightly suspects “subjective noise,” I am essentially on the engineer’s side. My goal is to rescue this concept from the realm of opinion and return it to the realm of physics and mathematics…where it belongs…

To address your specific engineering critique:

You are absolutely right: If the protocol relied on subjective interpretation or “fuzzy” context, it would be consensus-unsafe and inevitably lead to forks.

However, I believe there is a misunderstanding regarding how TOP measures “meaning.” We are not trying to encode human semantic interpretation. We are strictly applying Algorithmic Information Theory (AIT) via LZMA compression, which is fully deterministic.

The misunderstanding on the (continental) philosophers’ side is what ‘meaning’ means. And the misunderstanding on the engineers’ side is what (analytic) philosophers mean by ‘meaning’…

1. Determinism via Canonical History

You mentioned that “identical inputs can yield different results… depending on context.”

In TOP, the “context” for the compression algorithm is explicitly defined as the canonical history of the causal graph (the Ledger).

Since the Ledger state is identical across all synchronized nodes (by definition of a blockchain), the compression ratio of a new transaction against that history is a mathematical constant at block height N.

LZMA(Input|LedgerState_N) → Constant

There is no “judgment” involved, only CPU cycles. Node A and Node B will calculate the exact same float value for \alpha.
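
A minimal sketch of this determinism claim, using Python’s stdlib `lzma` (the ledger bytes and the conditional-size approximation are stand-ins of my own, not TOP’s actual encoding):

```python
import lzma

# Stand-ins for the canonical history at block height N and a new transaction.
ledger_state_n = b"block0|block1|block2|" * 100
tx = b"alice->bob:42"

def conditional_size(history: bytes, payload: bytes) -> int:
    """Approximate C(payload | history) as the marginal compressed cost."""
    return len(lzma.compress(history + payload)) - len(lzma.compress(history))

# The two calls model two independent nodes: same bytes in, same integer out.
node_a = conditional_size(ledger_state_n, tx)
node_b = conditional_size(ledger_state_n, tx)
assert node_a == node_b  # pure function of its inputs: no judgment involved
```

Given identical ledger bytes, the result is reproducible on any node, exactly as with a hash function.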

2. Why Layer 1?

You suggest moving this to the App Layer. The problem is that current L1s (fiat/crypto) act as thermodynamically blind carriers. They treat high-entropy transfers (spam/noise) the same as low-entropy transfers (structure/signal).

By keeping the base layer “dumb,” we incentivize Moloch-dynamics (rent-seeking, noise-maximization) because they are cheaper to produce.

TOP proposes that the base protocol must have a thermodynamic bias towards structure to solve the alignment problem. It’s not about enforcing “ethics” via governance; it’s about making “noise” thermodynamically expensive via the B_{prod} and \alpha functions.
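
As an illustration only (the real B_{prod} and \alpha functions are defined in the protocol spec, not here), a toy version of such a “structure bias” could look like this: payloads the history already predicts incur a low marginal compressed cost per byte, while noise pays full price.

```python
import lzma

def alpha(history: bytes, payload: bytes) -> float:
    """Toy structure bias: marginal compressed cost per raw payload byte."""
    base = len(lzma.compress(history))
    joint = len(lzma.compress(history + payload))
    return (joint - base) / max(len(payload), 1)

history = b"transfer:alice->bob;" * 200     # highly regular ledger history
structured = b"transfer:alice->bob;" * 5    # redundant w.r.t. the history
noise = bytes(range(256))                   # incompressible payload

# Structure the history already predicts is cheap; noise pays full price.
assert alpha(history, structured) < alpha(history, noise)
```

Pricing transactions by something like this ratio would make noise-maximization economically unattractive at the base layer itself.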

In short:

We are replacing “Subjective Consensus” not with “AI Judgment”, but with “Computational Thermodynamics”. It is as deterministic as calculating a SHA-256 hash (which is also just a mathematical transformation of input data), just computationally heavier.

Does this distinction between “semantic meaning” (subjective) and “algorithmic complexity” (objective) alleviate your concern regarding consensus safety?

I really hope that I was able to address some of your concerns in a way that keeps this discussion on a productive path…