AI-generated art has caused quite a stir in the artist community. Software companies have been training AI models on art samples scraped from across the internet, and in many cases, artists don't get a say in it.
Some artists have even discovered their own unique style (the very thing that made their art stand out) completely replicated by an algorithm. Others found traces of signatures and watermarks in the generated images – the AI isn't perfect after all.
However, with no regulatory framework in place, it has become very hard, and even frightening, for artists to protect their artistic identity.
Are there any companies solving these issues?
Internet content has always flowed ‘freely’: everything you post online belongs to the internet, and proving your ownership is nearly impossible. Before NFTs, ownership could only be proven for music or films with real-world licenses attached to them (and those are not accessible to everyone).
NFTs made it possible to protect your rights to a piece of content. However, NFTs are still far from mass adoption, and all major social networks still operate on free content – awarding revenue only to those who qualify.
So these days, the internet has become a dangerous place for artists, whose livelihoods depend on a stylistic identity that can easily be replicated by AI.
So, are artists really doomed?
Current social media platforms don’t offer ownership of content the way they should. If social networks had a way of protecting the content rights of every single post, artists wouldn’t be threatened by AI-generated art.
Think about it – if every piece of content on the internet had a digital license or was minted as an NFT, artists would have nothing to fear from imitations of their work.
Should AI be banned?
Actually, no. The answer might surprise you, but let’s not forget that AI is the way forward. It unlocks huge creative potential for all of us. That’s why artists should be able to use AI (ethically) to create works of art.
But if AI-generated content is rejected altogether as fake or unoriginal, that creative potential simply dies out. Artists should also be allowed to monetize AI-generated art without compromising the works of human artists.
That’s where Paysenger comes in
Paysenger, a social marketplace, has always been a big advocate of ‘minting’ every piece of internet content. Creators deserve the right to own their memes, photos, audio, and videos, and they should be paid royalties for content that goes viral. The same goes for AI-generated art (and non-AI-generated art alike).
To address this ethical dilemma, Paysenger partnered with Dr. Tamay Aykut, a Stanford professor with a PhD in AI, to develop a tool that lets creators build their own machine-learning models that generate art in their own style.
The model takes data from an artist's portfolio and social profiles to map their artistic personality. And since every photo, video, and post on Paysenger is mintable, artists can rest assured that the rights to their artworks will always be protected (and clearly differentiated from AI-based content).
Paysenger uses EGO tokens for monetization, and AI-generated content will also be eligible. What’s your take on this? Should artists be allowed to monetize AI-generated art? Should they be paid less, or more? Let us know in the comments!