Rise Of The Machines?
The true complexity of this world is in explaining and understanding.
Furthermore, a lot of human abilities stem from this quality.
Corollary one, on science/engineering/technology and on engineering jobs.
Engineering, especially software engineering, may well be the essence of the art of explaining things in formal terms. After all, computers are just that: ultra-performant entities with zero “prior knowledge” of how to interpret our “instructions”.
Thus, programming has to be absolutely precise. The ultimate challenge in formal explanations, IMHO, is computer science algorithms.
When one can formulate a CS problem well, chances are they can explain its solution well.
The reverse is even more true: once one can clearly articulate how a CS algorithm works, their explanation of the problem it solves tends to be sharp and concise as well.
Corollary two, on replacing jobs with AI.
There is an emerging trend of “no code” in my industry, the flagship of which today is the demo of Codex by OpenAI. Outside software engineering, self-driving cars are a notable example, along with automated cashiers and clerks.
Here I am going to voice an unpopular opinion: Codex will not change the world substantially, at least not any time soon.
The reason, I think, is simply that the set of assumptions to keep in mind grows exponentially once the problem domain gets broader than putting elements into a DOM tree.
My experience building an NLP engine to convert English utterances into database queries proves this beyond reasonable doubt: human language is inherently ambiguous, and the challenge an automated system faces is not in understanding every word, but in capturing their meaning. That meaning is a) far greater than the sum of the words and b) very much context-dependent.
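To illustrate the point, here is a toy sketch (not the actual engine; the utterance, tables, and queries are hypothetical): the same words can map to entirely different database queries, and nothing in the words themselves tells the system which one is meant.

```python
# Hypothetical illustration: one utterance, two plausible database queries.
# Neither reading is recoverable from the words alone; context decides.

UTTERANCE = "show old customers"

# Reading 1: "old" refers to the customer's age.
QUERY_BY_AGE = "SELECT * FROM customers WHERE age >= 65"

# Reading 2: "old" refers to how long they have been customers.
QUERY_BY_TENURE = "SELECT * FROM customers WHERE signup_date < '2015-01-01'"

def interpret(utterance: str, context: str) -> str:
    """Toy disambiguation: identical words yield different queries
    depending on a piece of context the utterance does not contain."""
    if context == "marketing to seniors":
        return QUERY_BY_AGE
    if context == "loyalty program":
        return QUERY_BY_TENURE
    raise ValueError("ambiguous without context")

print(interpret(UTTERANCE, "loyalty program"))
```

A word-by-word mapping cannot choose between the two readings; the meaning lives in the surrounding context, not in the tokens.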
Corollary three, on implicit defaults.
One of the biggest killers of quality explanations is when the subject matter itself contradicts our intuitions. This alone may be a good reason why theoretical physics attracts people of certain mindsets: one has to have their intuition calibrated in a certain way to work with seemingly contradictory observations in order to see the big picture.
When humans communicate there are two “successful” modes and a death valley in between.
The first successful mode is solving a math problem: the conversation is analytical, the arguments clear, and all the parties effectively incorporate them into their mental models.
The second “successful” mode is discussing something ambiguous, such as what qualifies for oppression, with adepts of a certain mindset. Their default state is to agree; such people tend to agree with virtually everything, since the sense of mutual agreement matters more to them than making sure all parties understand what exactly they are claiming to agree with.
Oftentimes their defaults are close enough for the differences to not matter.
The death valley in between is when two people are communicating in two different “successful” modes and their defaults are out of sync. This is what leads to disasters down the road. It is also why big decisions are best formalized, documented, and cross-checked.
This also happens to be exactly why I believe today’s AI can neither understand nor explain things; unless it is a very narrow AI, but we are talking AGI here, of course. GPT-3 can easily talk about what qualifies for oppression in the modern day.
GPT-3 would fail miserably at solving anything nontrivial, as “just guessing” would not get one very far there. Try asking it to talk to you only in sentences with an even number of words.
The bright conclusion is that humans still matter, and we will for quite a while from now on.