I opted out of AI training. Does this reduce my future influence?


If we all start opting out of having our posts used to train models, doesn't that reduce the influence of our voices and unique perspectives on these models? Increasingly, the models will be everyone's main window onto the rest of the world. It seems like the people least concerned about these issues will be the ones whose data ends up shaping the models' default behavior.

—Flat influencer

Honestly, it's frustrating that internet users are opted into AI training by default and forced to opt out. Wouldn't it be better for affirmative consent to be the rule for generative AI companies as they scrape the web, and any other data repository they can find, to build ever-larger frontier models?

But unfortunately, that's not how it works. Companies like OpenAI and Google argue that if fair use of all this data were taken away from them, the technology wouldn't even be possible. For now, users who don't want to contribute to generative models are stuck navigating a patchwork of opt-out processes across different websites and social media platforms.

Even if the current bubble surrounding AI pops, as the dotcom bubble did after a few years, the models powering all these new AI tools won't go extinct. So the ghosts of niche forum posts and social media threads defending strongly held convictions will live on inside software tools. You're right that opting out means actively trying not to be woven into a potentially long-lasting piece of culture.

To address your question directly and realistically: these opt-out processes are basically useless in their current state. Even those who opt out right now have already influenced the models. Say you fill out a form asking a social media site not to use or sell your data for AI training. Even if that platform honors the request, there are countless startups in Silicon Valley that won't think twice about scraping the data posted to that platform, even when they're technically not supposed to. As a rule of thumb, you can assume that anything you've posted online has probably made its way into multiple generative models.

Okay, but let's say you could realistically block your data from these systems, or demand that it be removed after the fact. Would doing so diminish your voice or your impact on AI tools? I've been mulling over this question for a few days, and I'm still torn.

On one hand, your individual contribution is only an infinitesimally small part of the vastness of the dataset, so your voice, as a non-public figure or author, probably doesn't sway the model one way or the other.

From this perspective, your data is just another brick in the wall of a 1,000-story building. And it's worth remembering that data collection is only the first step in creating an AI model. Researchers spend months fine-tuning the software to get the results they want, sometimes relying on low-paid workers to label datasets and rate the quality of outputs for improvement. These steps can further abstract the data and reduce your individual impact.

On the other hand, what if we compare it to voting in an election? Millions of votes are cast in a national election, yet most people still believe each one matters. It's not a perfect metaphor, but what if we saw our data as having a similar impact? A small whisper amid the cacophony of noise, but still an influence on what the AI model produces.

I'm not fully convinced by this argument, but I don't think the perspective should be dismissed outright, either. Especially for subject-matter experts: your views, and the way you approach information, are uniquely valuable to AI researchers. Meta wouldn't have gone to the trouble of using all those books in its new AI model if any old data would have done the trick.

Looking to the future, the true impact of your data on these models will likely be diluted by "synthetic" data. As the companies building generative AI systems run out of quality human-made information to scrape, they'll turn inward: they'll start using generative AI to replicate human data, which is then fed back into the system to train the next AI model to better replicate human responses. As long as generative AI exists, just remember that, as a human, you will always be a small part of the machine, whether you want to be or not.
