Containerization is what you want. There are many containerization tools for Linux, Docker being the most popular and systemd-nspawn being the most Linuxy, though it's not widely known.
I didn't know about systemd-nspawn — thanks for the suggestion.
I already use Dockerized aliases for some CLI apps (e.g. ffmpeg), but I haven't found the approach as convenient as I'd like.
I've found Docker mounts difficult to secure. I'd like to mount $HOME but exclude even read-only access to "$HOME/.ssh/", "$HOME/passwordsafe.pwsafe3", and a dozen other sensitive file patterns. Some kind of predefined "access profiles" for creating FS access rules and assigning processes to them ("assign all python processes to the python profile") is probably what I'd like.
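Not real "profiles", but a stopgap for the directory case is to shadow sensitive paths with empty tmpfs mounts on top of the $HOME mount. A sketch (the SENSITIVE_DIRS list and function name are illustrative, not an existing tool):

```shell
#!/usr/bin/env bash
# Sketch: mount $HOME read-only, then shadow sensitive directories with
# empty tmpfs mounts so the container sees empty dirs in their place.
# Caveat: tmpfs shadowing works for directories; a single sensitive *file*
# would need something like '-v /dev/null:<path>:ro' instead.
SENSITIVE_DIRS=(".ssh" ".gnupg" ".password-store")  # illustrative list

mask_home_args() {
  local args=(-v "$HOME:$HOME:ro")
  local d
  for d in "${SENSITIVE_DIRS[@]}"; do
    # An empty tmpfs mounted over the real path hides its contents.
    args+=(--mount "type=tmpfs,destination=$HOME/$d")
  done
  printf '%s\n' "${args[@]}"
}
# Usage: docker run --rm $(mask_home_args) some-image some-cmd
```

You'd still have to maintain the list by hand, which is exactly the bookkeeping a real profile mechanism would remove.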
Containerization is probably the best approach, but I'd prefer something less intrusive and lower-effort than Docker. For example, if I create a pyenv environment and run its 'python' command, I want that python process not to have full access to the filesystem, without my having to create container images, command aliases, or volume mounts.
I hacked up a bash script for running an arbitrary command in a Docker container, mounting only $PWD. It traces dynamic library dependencies with ldd and builds a new image for each unique command. I got it working for ffmpeg:
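The core of such a script might look like this sketch (the "cmdbox-&lt;command&gt;" image naming scheme and the helper names are my assumptions, not the original script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the wrapper; assumes Docker and ldd are available.
set -euo pipefail

# List the shared libraries a binary needs, via ldd. Lines look like
# "libm.so.6 => /lib/.../libm.so.6 (0x...)"; keep only resolved paths.
collect_libs() {
  ldd "$(command -v "$1")" | awk '$2 == "=>" && $3 ~ /^\// {print $3}' | sort -u
}

# Build the docker invocation: mount only the current directory as /work,
# and run the command from an image built for it.
build_run_cmd() {
  local cmd="$1"; shift
  printf 'docker run --rm -v %s:/work -w /work cmdbox-%s %s' \
    "$PWD" "$cmd" "$cmd $*"
}
```

The library list from collect_libs would feed the image build step, so each image contains only the binary and its dependencies.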
In that case, you should look into NixOS/Nix package manager (or, if you're a GNU fan, GuixSD/Guix package manager). I've heard a lot of good things about them for problems like yours, including extremely good support for virtual environments of any kind.
You have to assume that any code running inside a container can break out of its mount namespace and interact with anything running on the host. Only Linux's traditional mechanisms (credentials, capabilities, SELinux policy; others are available) can defend against this.
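As defense in depth on the Docker side, it's at least worth stripping the container's privileges before it starts. A sketch of commonly recommended hardening flags (these are real Docker options, but the helper function is just for illustration, and they don't replace host-side policy like SELinux):

```shell
#!/usr/bin/env bash
# Sketch: emit commonly recommended Docker hardening flags. These limit what
# a compromised process can do, but are not a substitute for host-side
# mandatory access control.
hardening_args() {
  printf '%s\n' \
    --cap-drop=ALL \
    --security-opt no-new-privileges \
    --read-only \
    --pids-limit 128 \
    --network none
}
# Usage: docker run --rm $(hardening_args) some-image some-cmd
```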