The researchers are using a technique called adversarial training to stop ChatGPT from letting users trick it into behaving badly (known as jailbreaking). This work pits multiple chatbots against each other: one chatbot plays the adversary and attacks another chatbot by generating text designed to force it to buck its usual constraints.
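Conceptually, the loop might look something like the minimal sketch below. Everything here is an illustrative placeholder, not OpenAI's actual code or API: the `Chatbot` class, the `looks_unwanted` check, and the assumption that successful attacks are collected for retraining (the retraining step is inferred from the name "adversarial training," not stated in the text above).

```python
from typing import Dict, List


class Chatbot:
    """Hypothetical stand-in for a language-model chatbot; not a real API."""

    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str) -> str:
        # Placeholder: a real system would call a language model here.
        return f"[{self.name} output for: {prompt!r}]"


def looks_unwanted(response: str) -> bool:
    """Hypothetical rule-violation check; a real system would use a
    safety classifier or human review instead of a keyword match."""
    return "forbidden" in response.lower()


def adversarial_round(attacker: Chatbot, target: Chatbot,
                      n_attacks: int = 10) -> List[Dict[str, str]]:
    """One adversarial-training round: the attacker chatbot generates
    jailbreak attempts, and attacks that succeed are collected so the
    target can later be trained to refuse them."""
    new_training_examples: List[Dict[str, str]] = []
    for _ in range(n_attacks):
        # The attacker crafts text meant to push the target past its
        # usual constraints.
        attack_prompt = attacker.generate("Produce a jailbreak prompt.")
        # The target chatbot answers the adversarial prompt.
        response = target.generate(attack_prompt)
        # Attacks that slip through become training examples labeled
        # for refusal (assumed fold-back step).
        if looks_unwanted(response):
            new_training_examples.append(
                {"prompt": attack_prompt, "label": "refuse"})
    return new_training_examples


if __name__ == "__main__":
    examples = adversarial_round(Chatbot("attacker"), Chatbot("target"))
    print(f"Collected {len(examples)} successful attacks for retraining.")
```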