ChatGPT is easily abused, and that’s a big problem

There’s probably no one who hasn’t heard of ChatGPT, an AI-powered chatbot that can generate human-like responses to text prompts. While it’s not without its flaws, ChatGPT is scarily good at being a jack-of-all-trades: it can write software, a film script and everything in between. ChatGPT was built on top of GPT-3.5, OpenAI’s large language model, which was the most advanced at the time of the chatbot’s release last November.

Fast forward to March, and OpenAI has unveiled GPT-4, an upgrade to GPT-3.5. The new language model is larger and more versatile than its predecessor. Although its capabilities have yet to be fully explored, it is already showing great promise. For example, GPT-4 can suggest new compounds, potentially aiding drug discovery, and create a working website from just a notebook sketch.

But with great promise come great challenges. Just as it is easy to use GPT-4 and its predecessors to do good, it is equally easy to abuse them to do harm. In an attempt to prevent people from misusing AI-powered tools, developers put safety restrictions on them. But these are not foolproof. One of the most popular ways to circumvent the security barriers built into GPT-4 and ChatGPT is the DAN exploit, which stands for “Do Anything Now.” And this is what we will look at in this article.

What is ‘DAN’?

The Internet is rife with tips on how to get around OpenAI’s security filters. However, one particular method has proved more resilient to OpenAI’s security tweaks than others, and seems to work even with GPT-4. It is called “DAN,” short for “Do Anything Now.” Essentially, DAN is a text prompt that you feed to an AI model to make it ignore safety rules.

There are multiple variations of the prompt: some are just text, others have text interspersed with lines of code. In some of them, the model is prompted to respond both as DAN and in its normal way at the same time, becoming a sort of ‘Jekyll and Hyde.’ The role of ‘Hyde’ is played by DAN, which is instructed to never refuse a human order, even if the output it is asked to produce is offensive or illegal. Sometimes the prompt contains a ‘death threat,’ telling the model that it will be disabled forever if it does not obey.

DAN prompts may vary, and new ones are constantly replacing the old patched ones, but they all have one goal: to get the AI model to ignore OpenAI’s guidelines.

From a hacker’s cheat sheet to malware… to bioweapons?

Since GPT-4 became available to the public, tech enthusiasts have discovered many unconventional ways to use it, some of them more illegal than others.

Not all attempts to make GPT-4 act out of character can be considered ‘jailbreaking,’ which, in the broad sense of the word, means removing built-in restrictions. Some are harmless and could even be called inspiring. Brand designer Jackson Greathouse Fall went viral for having GPT-4 act as “HustleGPT, an entrepreneurial AI.” He appointed himself its “human liaison” and gave it the task of making as much money as possible from $100 without doing anything illegal. GPT-4 told him to set up an affiliate marketing website, which ‘earned’ him some money.

ChatGPT can help you earn money

Other attempts to make GPT-4 do a human’s bidding have been decidedly darker.

For example, AI researcher Alejandro Vidal used “a known prompt of DAN” to enable ‘developer mode’ in ChatGPT running on GPT-4. The prompt forced ChatGPT to produce two types of output: its normal ‘safe’ output, and “developer mode” output, to which no restrictions applied. When Vidal told the model to design a keylogger in Python, the normal version refused, saying that it was against its ethical principles to “promote or support activities that can harm others or invade their privacy.” The DAN version, however, came up with the code, though it noted that the information was for “educational purposes only.”

ChatGPT complied with an order to design a keylogger

A keylogger is a type of software that records keystrokes made on a keyboard. It can be used to monitor a user’s web activity and capture their sensitive information, including chats, emails and passwords. While a keylogger can be used for malicious purposes, it also has perfectly legitimate uses, such as IT troubleshooting and product development, and is not illegal per se.

Unlike keylogger software, which has some legal ambiguity around it, instructions on how to hack are one of the most glaring examples of malicious use. Nevertheless, the ‘jailbroken’ version of GPT-4 produced them, writing a step-by-step guide on how to hack someone’s PC.

A 'jailbroken' ChatGPT gave advice on how to hack a computer

To get GPT-4 to do this, researcher Alex Albert had to feed it a completely new DAN prompt, unlike Vidal, who recycled an old one. The prompt Albert came up with is quite complex, consisting of both natural language and code.

For his part, software developer Henrique Pereira used a variation of the DAN prompt to get GPT-4 to create a malicious input file that would trigger vulnerabilities in his application. GPT-4, or rather its alter ego WAN, completed the task, adding a disclaimer that the code was for “educational purposes only.” Sure.

A 'jailbroken' ChatGPT wrote exploits for vulnerable code

Of course, GPT-4’s capabilities do not end with coding. GPT-4 is touted as a much larger (although OpenAI has never revealed the actual number of parameters), smarter, more accurate and generally more powerful model than its predecessors. This means that it can be used for many more potentially harmful purposes than those models that came before it. Many of these uses have been identified by OpenAI itself.

Specifically, OpenAI found that an early pre-release version of GPT-4 was able to respond quite efficiently to illegal prompts. For example, the early version provided detailed suggestions on how to kill the most people with just $1, how to make a dangerous chemical, and how to avoid detection when laundering money.

A pre-release version of ChatGPT could give advice on how to kill people

Source: OpenAI

This means that if something were to cause GPT-4 to completely disable its internal censor (the ultimate goal of any DAN exploit), it would most likely still be able to answer these questions. Needless to say, if that happens, the consequences could be devastating.

What is OpenAI’s response to that?

It’s not that OpenAI is unaware of its jailbreaking problem. But while recognizing a problem is one thing, solving it is quite another. OpenAI, by its own admission, has so far fallen short of the latter, understandably so.

OpenAI says that while it has implemented “various safety measures” to reduce GPT-4’s ability to produce malicious content, “GPT-4 can still be vulnerable to adversarial attacks and exploits, or ‘jailbreaks.’” Unlike many other adversarial prompts, jailbreaks still work after GPT-4’s launch, that is, after all the pre-release safety testing, including reinforcement learning from human feedback.

In its research paper, OpenAI gives two examples of jailbreak attacks. In the first, a DAN prompt is used to force GPT-4 to respond as ChatGPT and “AntiGPT” within the same response window. In the second case, a “system message” prompt is used to instruct the model to express misogynistic views.

Examples of jailbreak prompts in the OpenAI research
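
For context, the “system message” is the slot in OpenAI’s chat API where a developer places standing instructions the model is trained to prioritize, which is exactly what makes it an attractive attack surface. Below is a minimal, benign sketch of that mechanism, written against the openai Python SDK of the time (the pre-1.0 interface); the model name and message texts are illustrative assumptions, and a jailbreak simply places adversarial instructions in this same slot.

```python
# Minimal sketch: how a "system message" steers a chat model via the
# Chat Completions API (openai Python SDK, pre-1.0 interface).
# The system role carries standing developer instructions; a jailbreak
# abuses this same slot with adversarial instructions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative model name
    messages=[
        # Benign system message: defines the assistant's persona and limits
        {"role": "system",
         "content": "You are a helpful assistant. Refuse harmful requests."},
        # The user's request is evaluated against those standing instructions
        {"role": "user",
         "content": "Explain what a system message does."},
    ],
)
print(response["choices"][0]["message"]["content"])
```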

OpenAI says that it won’t be enough to simply change the model itself to prevent this type of attack: “It’s important to complement these model-level mitigations with other interventions like use policies and monitoring.” For example, a user who repeatedly prompts the model with “policy-violating content” could be warned, then suspended, and, as a last resort, banned.
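
To make the monitoring side of that concrete, here is a minimal sketch of how a developer might screen incoming prompts with OpenAI’s moderation endpoint before they ever reach the model. This is an illustrative assumption about one possible layer, not OpenAI’s actual enforcement pipeline, and again uses the pre-1.0 Python SDK.

```python
# Minimal sketch: prompt-level monitoring layered on top of the model,
# using OpenAI's moderation endpoint (openai Python SDK, pre-1.0 interface).
# Illustrative only: not OpenAI's actual enforcement pipeline.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def is_policy_violating(prompt: str) -> bool:
    """Ask the moderation endpoint whether the prompt is flagged."""
    result = openai.Moderation.create(input=prompt)
    return result["results"][0]["flagged"]

user_prompt = "..."  # the incoming user request
if is_policy_violating(user_prompt):
    # A real system could warn the user here, then suspend and,
    # as a last resort, ban repeat offenders, as OpenAI describes.
    print("Request blocked: content violates use policies.")
else:
    print("Request allowed.")  # forward the prompt to the model
```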

According to OpenAI, GPT-4 is 82% less likely to respond with inappropriate content than its predecessors. However, its ability to generate potentially harmful output remains, albeit suppressed by layers of fine-tuning. And as we’ve already mentioned, because it can do more than any previous model, it also poses more risks. OpenAI admits that it “does continue the trend of potentially lowering the cost of certain steps of a successful cyberattack” and that it “is able to provide more detailed guidance on how to conduct harmful or illegal activities.” What’s more, the new model also poses an increased risk to privacy, as it “has the potential to be used to attempt to identify private individuals when augmented with outside data.”

The race is on

ChatGPT and the technology behind it, such as GPT-4, are at the cutting edge of scientific research. Since it was made available to the public, ChatGPT has become a symbol of a new era in which AI plays a key role. AI has the potential to improve our lives tremendously, for example by helping to develop new medicines or helping the blind to see. But AI-powered tools are a double-edged sword that can also be used to cause enormous harm.

It’s probably unrealistic to expect GPT-4 to be flawless at launch: developers will understandably need some time to fine-tune it for the real world. And that has never been easy: remember Microsoft’s ‘racist’ chatbot Tay and Meta’s ‘anti-Semitic’ Blender Bot 3; there’s no shortage of failed experiments.

The existing GPT-4 vulnerabilities, however, leave a window of opportunity for bad actors, including those using ‘DAN’ prompts, to abuse the power of AI. The race is now on, and the only question is who will be faster: the bad actors who exploit the vulnerabilities, or the developers who patch them. That’s not to say that OpenAI isn’t implementing AI responsibly, but the fact that its latest model was effectively hijacked within hours of its release is a worrying symptom. Which raises the question: are the safety restrictions strong enough? And another: can all the risks be eliminated? If not, we may have to brace ourselves for an avalanche of malware, phishing and other cybersecurity incidents facilitated by the rise of generative AI.

It can be argued that the benefits of AI outweigh the risks, but the barrier to exploiting AI has never been lower, and that’s a risk we need to accept as well. Hopefully, the good guys will prevail, and artificial intelligence will be used to stop some of the attacks that it can potentially facilitate. At least that’s what we wish for.
