Home » There is a “small” problem for those who want to earn with ChatGpt

There is a “small” problem for those who want to earn with ChatGpt

by admin
There is a “small” problem for those who want to earn with ChatGpt

The day OpenAI’s “personalized” AIs were announced, Called GPTmany were unable to contain their enthusiasm.

The GPTs, in summary, are “helpers” equipped with generative artificial intelligence. They are based on the same algorithms that allow a ChatGpt to express themselves in natural language. But unlike ChatGpt – trained on a disparate amount of data to potentially suit any purpose – GPTs are designed to deal with extremely specific tasks.

Artificial intelligence ChatGpt multiplies: everyone will be able to create their own AI and will also be able to sell it by Pier Luigi Pisa 06 November 2023

GPTs are like apps, in short. But unlike those we use on our smartphone, or any other mobile device, GPTs are created extremely simply.

All you need to do is have a good idea and enter – step by step – the information requested by ChatGpt: the name you intend to give to your GPT, for example, a short description of its capabilities and a longer list of instructions regarding what the Custom AI must – but it also *doesn’t* have to – fare.

To create a GPT, in short, You don’t have to be a programmer. It is sufficient to explain your idea clearly and precisely. This is what, for some, makes GPTs “revolutionary”: anyone can make one. And soon it will also be able to sell it: OpenAI announced, during its first DevDay, the advent of a GPT Store which will collect all GPTs created by users. In short, you can earn money based on the frequency with which a GPT is used by other people.

One week after the GPT announcement, the first problems emerged. And the term “revolution” has lost some wind in its sails.

First problem – The copyright of GPT

See also  focus on intelligence and sustainability

“There is a small problem…” writes Borriss – a programmer and X user – regarding the uniqueness of the GPTs that are created. There is the possibility, in fact, that users who use a GPT will be able to trace the information and prompts that its creator used to generate his AI.

In his post below, Borriss suggests an instruction for each GPT to make it “safer.” You can try to implement it, but the result is not so obvious: we have already seen how users often manage to bypass AI filters and censorship with linguistic tricks. And therefore it is likely that there will always be someone capable of overriding the protections and looting the precious instructions.

It is clear that an easily replicable GPT has no market value. It’s as if the Coca-Cola formula was suddenly available to everyone after a few sips. Until OpenAI finds a solution to this problem, GPTs will certainly be valid helpers but they will never be able to aspire to the status of a “product” capable of generating revenues.

Second problem – ChatGpt competition

It may seem absurd, but a limit on use – for payment – of a GPT could be ChatGpt itself, which for many remains the point of reference for carrying out any operation, therefore even the more specific ones promised by the GPT. In short, for many, GPTs would simply be useful interfaces for accessing the ChatGpt algorithms in a more orderly and effective way.

See also  Ubisoft releases trailer and release date of the game

The “personalization” among other things, was already available on ChatGpt. We are referring to the “custom instructions” that since last July ChatGpt Plus users can use to communicate more contextual information to the AI ​​- their work or skills, for example – and to ask it even more precisely what to do – or not to do – during a conversation, so that you don’t have to repeat these instructions every time.

So what really makes GPTs different from ChatGpts? For now, the data that a user is able to provide firsthand. During the creation phase, in addition to the textual instructions, up to 10 documents (max 512 Mb each) can be loaded with information useful for the AI ​​to carry out a specific task.

ChatGpt has been trained on a huge amount of texts, but not infinite. It may therefore not have the data we have. The rarer this data is, the more valuable it will be: in fact, it will allow the GPT that uses it to perform actions that the “normal” ChatGpt is unable to do. In short, data is increasingly the most precious commodity in the AI ​​field. But also the one most at risk. And in the next point we will understand why.

Image created with Dall-E 3 by OpenAI

Third problem – Data security and their copyright

We have already talked about the vulnerability of GPTs regarding the instructions with which they are created. These fragilities can also affect documents that are uploaded to OpenAI’s servers. There is also still a gray area regarding the nature of the data a user can share with AI.

See also  Photos: Meteorite fragments of the asteroid "2023 CX1" discovered in France | The Epoch Times

OpenAI’s terms of service make no reference to prohibitions regarding copyrighted materials. So, hypothetically, anyone could use articles from New York Times – one of the publications that banned OpenAI from scraping its data – to train its own GPT.

Insight How much do we really know about how artificial intelligence works? by Andrea Daniele Signorelli 30 October 2023

Fourth problem – The abundance of GPT

A technology that does not present particular access barriers is a profoundly democratic technology. Anyone can create a GPT with little effort – in terms of time and money.

But this entails a side effect, namely an uncontrolled growth of the offer which will make each GPT poorly visible in the future GPT Store and therefore poorly profitable. Added to this is the fact that it is currently very simple to replicate – at least in part – the name, appearance and skills of a GPT. The results will never be identical, but many copies of a successful GPT can be enough to taint the popularity of the original.

It should then be underlined the lack of what, in economics, is defined as a “moat”, i.e. the ability of a company to protect its competitive advantages and therefore its profits in the long term. In fact, GPTs can be defrauded

It’s an interesting reasoning that one user shared with the official OpenAI developer community: “First, prompts are hardly a moat, because no matter how complex: you can’t prevent users from inferring basic instructions through prompts. conversations with a GPT. Can the loaded materials act as a moat? Perhaps, but the uniqueness and rarity of the content must be exceptionally high.”

@ppisa

You may also like

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More

Privacy & Cookies Policy