Not especially, because AI will inevitably gravitate towards producing the statistically most common words for a given intent and outcome.
But that’s not even the worst of it: there are two much larger problems.
Firstly, volume. You as a human cannot keep up with the volume of content an AI can produce. We’re going to drown in it soon enough - Kindle already is.
Secondly, hallucinations. Because there's no actual understanding, just statistical repetition of what it was trained on, it will often produce things that *look* correct but aren't. We've seen lawyers sanctioned for filings that cite precedents that don't exist (but are in the right format). We've seen a man told by ChatGPT that he'd passed away, complete with a link to the Guardian website supposedly hosting his obituary. The link had the correct structure and format for a Guardian obit, but the page didn't exist and never had.
So not only are we going to drown in information, much of it is likely to be misinformation at the same time.