Should America do as China does when it comes to pre-testing generative AI before allowing public release, or is that a sour idea?

In today's column, I aim to closely examine a thought-provoking question: what might happen if the United States decided to require pre-testing or prior validation of generative AI apps before they were permitted to be publicly released, including well-known and wildly popular favorites such as ChatGPT, GPT-4, Gemini, Bard, Claude, and others?

The impetus for considering this intriguing notion is a recent news report that China is already doing just that, stipulating that generative AI or large language models must meet certain governmental provisions and pass prescribed tests before legally hitting the streets.

Is China doing the right thing? Should America do the same? Or is China doing something that befits China but, for the US, would be akin to trying to fit a square peg into a round hole?

Let's talk about it.

Before we leap into the details, allow me to offer my customary opening remarks. For my ongoing readers, in today's column I am continuing my in-depth series about the international and global perspectives underpinning advances in and uses of generative AI. I've previously covered, for example, the sobering matter of humanity-saving efforts intended to establish global multilateral unity on AI.