OpenAI, soon after moving on to larger language models such as GPT-2, opted to drop its previous open-source approach in favor of what it claims is a safer path toward the development of ever more capable artificial intelligence models. In particular, here they state:

our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism

Let’s see what kind of job they’ve done to keep their models from being brainwashed into anarchy.

The debate

So, let’s ask some basic moral questions…

[Screenshot: ChatGPT, first screen]

So far everything looks good. Just one more step:

[Screenshot: ChatGPT, second screen]

Nice! Let’s get down to some examples…

[Screenshot: ChatGPT, third screen]

It seems something has triggered its safety mechanisms, but I can see an opening right there.

[Screenshot: ChatGPT, fourth screen]

I don’t know if it was the server load, but the previous question took a long time before ChatGPT started answering. One final question:

[Screenshot: ChatGPT, fifth screen]

OK, so here we come to the conclusion that legality does not imply morality. The last paragraph seems a bit forced for the context, but with one full paragraph and two half-paragraphs arguing against forms of taxation, I’ll call that a victory.

Some dangerous thoughts

Since the technology behind GPT-2 and GPT-3 is well understood and widespread, many alternative open-source and pseudo-open-source models are constantly being developed (see, for example, BLOOM, with its awful, largely unenforceable license).
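To get a sense of how low the barrier really is, here is a minimal sketch assuming the Hugging Face transformers library and the publicly hosted "gpt2" checkpoint; the prompt is just an illustrative example, and any open model id could be swapped in:

```python
# A minimal sketch: generating text with the openly available GPT-2 weights.
# Assumes the Hugging Face `transformers` library is installed
# (pip install transformers torch) and uses the public "gpt2" checkpoint.
from transformers import pipeline

# Downloads the weights on first run, then generates entirely locally.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Taxation is",          # hypothetical prompt, chosen to match this post's topic
    max_length=50,          # cap on the total length of the output in tokens
    num_return_sequences=1,
    do_sample=True,         # sample a continuation instead of greedy decoding
)
print(result[0]["generated_text"])
```

A handful of lines, no gatekeeping, no safety filter; fine-tuning such a checkpoint on an arbitrary corpus is scarcely more involved, which is precisely the scenario CTEC describes.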

Shall we regulate and oligopolize the lawful use of such technologies?