Jose Arrieta

Salaried Algorithms

Bureaucracies abound. They are useful things but not important ones. Nor good ones. In an ideal and overly simplified world there would be no bureaucracies. People would just behave.


This might work in places without migration or poverty, such as Denmark or Japan. It does not work in most other countries. Bureaucracies exist. And they are not always bad. Yet, when something bad happens, it tends to be because of a bureaucrat running loose.


If you have not noticed, I have a problem with bureaucracy. Two, actually. One is that I like the name, and so I forget its evil. The second is its evilness. Below I try to amend the first problem and, in doing so, hopefully lessen the burden of the second.


Bureaucratic problems usually come along when an eager-beaver bureaucrat forgets why they enforce the processes someone set in stone. They believe that their databases and red flags matter more than the people they control. These eager beavers are what Hannah Arendt called the banality of evil: they just do what they are asked and dutifully carry out genocide and many other major and minor acts against humanity.


In machine learning we have a term for this: an alignment problem. An alignment problem emerges when an optimization system performs an action that scores highly on the metric you gave it to maximize, but in taking that action it does something you did not want. It is in some ways related to the incomplete-contract problem of transaction cost economics. To solve an alignment problem one needs to change the objective function or add constraints. This effectively changes the algorithm and thus the problem.
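As a toy illustration (not from the post; the policies, metric names, and numbers below are all invented), here is a minimal sketch of how changing the objective function changes which behaviour the optimizer rewards:

```python
# A "bureaucratic" optimizer that simply picks whichever policy scores best
# on the metric it was given. Policies and numbers are made up for illustration.

policies = {
    "review carefully":     {"cases_closed": 40, "wrongful_denials": 1},
    "rubber-stamp denials": {"cases_closed": 95, "wrongful_denials": 30},
}

def naive_objective(outcome):
    # The metric the designer wrote down: close as many cases as possible.
    return outcome["cases_closed"]

def patched_objective(outcome):
    # The amended objective: same metric, plus a heavy penalty for the
    # corner case the designer actually cared about.
    return outcome["cases_closed"] - 10 * outcome["wrongful_denials"]

def best_policy(objective):
    return max(policies, key=lambda name: objective(policies[name]))

print(best_policy(naive_objective))    # "rubber-stamp denials" -- misaligned
print(best_policy(patched_objective))  # "review carefully"
```

The optimizer itself never changes; only the objective does, which is the whole point.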


I present this idea because it highlights a commonality between the digital algorithms that control our lives (e.g. machine learning) and the bureaucrats that do the same. I thus propose a metacategory that includes both forms of algorithmic performance. On one side we have digital algorithms run by Amazon, SAP, and others. On the other we have salaried algorithms run by your local eager-beaver bureaucrat.


The proposed nomenclature allows us to use the same solutions for alignment problems: change the objective function and add corner cases and exceptions. But more importantly, it shows us the crucial part: the problem is NOT the algorithm. The problem is not the bureaucrat. The problem is whoever let them loose in the first place. An algorithm will do inhumane acts in order to maximize its objective. That is the fundamental problem of AI safety and the fundamental moral failure that leads to inhumane acts on a daily basis.


By bundling digital and salaried algorithms together we take away some of the power from the owners of a desk. They are no longer bureaucrats. They are just one step away from being software running on a server farm. They are not important; they are just useful servants for tasks we have not managed to fully digitize yet. Soon we will.


