Proctoring is hard, expensive and often doesn't prove squat. (Source? I used to work in a directly adjacent field, providing e-learning platforms to major UK universities.)
This is a major topic of discussion in academia at present, but honestly it's not being taken the way you might think. There are the traditionalists who take the view that AI is a scourge and should be eliminated, and to be fair it's not that hard to tell if a paper is AI-generated if you regularly interact with the student, because you get a feel for their vocabulary, style and so on.
There are people out there who get pissy that students use spell checkers, or tools like Grammarly, to improve their ability to communicate. How *dare* they use the tools out there to improve their output! I've even seen people demand work on paper as 'proof' of not cheating, but all that proves is that, unless it's done under invigilation conditions, students will just use whatever tools they were going to use anyway and then copy the output down by hand before hand-in.
Then there are the technologists, who assume that a) students are going to use this stuff, therefore b) the test is not 'does the student use tools' but 'is the student using the tools effectively rather than as a crutch'. This is a much more interesting question: if a student uses ChatGPT, does that make them a fraud? The answer is, perhaps surprisingly, 'not necessarily'. Prompt engineering *is* a skill, as is the ability to edit and refine the generated content until it says what actually needs to be said.
This is, of course, all the more pointed when you consider how much time in academia is spent writing research grant applications rather than actually doing research, a task for which ChatGPT is a much more appropriate tool.