The holy grail of software architecture can be described in a single word: abstraction. Whether the problem lies in scalability, performance, security, or user experience, taming the complexity is always a matter of finding the right level of abstraction.
In recent years (2000+) we've witnessed a gigantic number of abstractions. Lately, the word "lightweight" seems to have arrived in software architecture (or been adopted for marketing purposes). Lightweight software (a library, a system, or a service) does not imply zero abstraction. Usually, it just implies the right level of abstraction, i.e., doing one task (and one task only) right with the least amount of abstraction - where "least" still means more than zero. The question now is: what is the right amount of abstraction?
Let's start with an analogy. In a kitchen there is a countertop - a surface one could use for, e.g., slicing bread or cutting meat. However, one would never do such things directly on the countertop. Instead, we prefer to use chopping boards. Why is that? Obviously, the chopping board is a level of abstraction. But - and this point is important - we don't want this level of abstraction to be virtual. We need a physical abstraction because the reason for the abstraction is physical in nature. We want to be able to replace a (cheap) chopping board instead of permanently damaging the (more expensive) countertop.
The analogy is useful because it shows two things: there are multiple kinds of abstraction (determined by the requirements), and abstractions are not artificial but appear quite naturally. With this analogy in mind, let's go back to the original question: what is the right amount of abstraction? Obviously, we don't want to stack multiple chopping boards. Still, in the software world we sometimes do exactly that. If the additional "chopping board layers" were all virtual, I wouldn't see much of a problem, but programming languages like Java or C# always go for the physical abstraction alone. There is no way to have transparent abstractions - as in C++, D, or Rust - in the language. A virtual abstraction exists only at compile time; stacking virtual layers is like talking about multiple stacked chopping boards while the actual work is always done on a single one.
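To make the distinction concrete, here is a minimal Rust sketch; the trait `Slicer` and its implementation are hypothetical, chosen to echo the kitchen analogy. The generic function is a virtual abstraction in the sense used above - it is resolved entirely at compile time through monomorphization - while the trait-object version is a physical abstraction whose vtable indirection survives into the running program.

```rust
// Hypothetical trait used only for illustration.
trait Slicer {
    fn slice(&self, input: &str) -> Vec<String>;
}

struct WhitespaceSlicer;

impl Slicer for WhitespaceSlicer {
    fn slice(&self, input: &str) -> Vec<String> {
        input.split_whitespace().map(str::to_owned).collect()
    }
}

// "Virtual" abstraction in the sense used above: the generic parameter is
// resolved at compile time (monomorphization), so the abstraction leaves
// no trace in the generated code.
fn run_static<S: Slicer>(slicer: &S, input: &str) -> Vec<String> {
    slicer.slice(input)
}

// "Physical" abstraction: the trait object is a runtime entity with a
// vtable; the indirection survives into the compiled program, like the
// chopping board that physically sits on the countertop.
fn run_dynamic(slicer: &dyn Slicer, input: &str) -> Vec<String> {
    slicer.slice(input)
}

fn main() {
    let s = WhitespaceSlicer;
    println!("{:?}", run_static(&s, "slice this bread"));
    println!("{:?}", run_dynamic(&s, "slice this bread"));
}
```

Both calls do the same work; the difference is whether the abstraction still exists once the program runs.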
Why is it time to distinguish between these two forms of abstraction? Because we can't keep rewriting software in the future. We are already sitting on an enormous amount of software that requires vast manpower to be tamed. Creating yet another version of the same software (e.g., an operating system) just because one component or another is not as lightweight as it should be is not future-proof. Instead, we need to write software once, in such a manner that it has the right virtual abstractions to make reading and working with the code easy. Additionally, the right physical abstractions have to be in the code to make working with the software system (i.e., OS API, devices, services, ...) easy.
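As a rough illustration of where a physical abstraction earns its cost, the following Rust sketch puts a hypothetical `Clock` trait in front of a system resource; the trait name and both implementations are made up for this example, not part of any real OS API. The application code depends only on the boundary, so the concrete implementation can be swapped at runtime - much like the replaceable chopping board in front of the expensive countertop.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical boundary in front of a system resource.
trait Clock {
    fn now_unix_secs(&self) -> u64;
}

// The real resource behind the boundary.
struct SystemClock;

impl Clock for SystemClock {
    fn now_unix_secs(&self) -> u64 {
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system time before UNIX epoch")
            .as_secs()
    }
}

// A fixed clock that can replace the real one in tests - the cheap,
// swappable "chopping board" in front of the system resource.
struct FixedClock(u64);

impl Clock for FixedClock {
    fn now_unix_secs(&self) -> u64 {
        self.0
    }
}

// Application code depends only on the boundary, not on the OS behind it.
fn log_line(clock: &dyn Clock, message: &str) -> String {
    format!("[{}] {}", clock.now_unix_secs(), message)
}

fn main() {
    println!("{}", log_line(&SystemClock, "booted"));
    println!("{}", log_line(&FixedClock(0), "deterministic test output"));
}
```

Here the indirection is deliberate: the reason for the abstraction is external to the code, so it has to exist at runtime.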
As you may have expected, I don't have a general answer to give (yet?). All I know is that determining the right amount of abstraction (as prophetic as that may sound) is a crucial part of modern software architecture. Yet without the right tools - such as a programming language that lets the mode of abstraction be chosen - even the most beautiful and well-thought-out architecture cannot be implemented successfully.