With the advent of GPT and other "code facilitation tools", I wonder how companies will adapt to measure a person's aptitude. As many others before me have lamented, this may force the industry to rethink the underpinnings of such assessments.
If an AI can write perfectly acceptable and optimal "leet code", then testing a human on it is pointless. I suppose we'll have to shift to a model where we test thought processes; you know, the things that come before a prompt: softer skills like communication, attention, and awareness.
I had the pleasure of doing an interview that consisted simply of writing an article about a proposed technology (think ADR). This was interesting, but given that large language models are great at giving the intended audience what it wants, with appropriate prompting, this too falls victim to the same issues described above.