As many of you have likely noticed, there has been an increasing number of accounts that create characters solely to spam messages like “selling gold at www…” for a few minutes before disappearing.
I’m not sure how this issue is currently being addressed by the development team, but I would greatly appreciate the addition of a simple in-game option to report these accounts. This type of spam significantly breaks immersion and detracts from the overall experience.
My suggestion: Please consider adding a reporting feature directly to the game interface. I’d be more than happy to help flag these accounts — just give us the tools, and the community will support you in keeping the game clean.
Alternatively, if there are concerns about potential abuse or security risks in exposing this feature to all players, perhaps it could be made available to a group of trusted community members. For instance, users who have linked their accounts to Discord and have undergone some form of pre-verification could be granted access.
Yes, it would be nice to have an option to do just that.
Yesterday I even tried to do this, but ended up just blocking the gold-selling spammer.
But I guess this should be done with caution, since this could lead to badly-intended people abusing it for mass-reporting legitimate players, flooding the system just “for fun” or for trying to harm someone’s account.
Probably because any sort of automated filter will always eventually end up filtering legit players, or otherwise be so lax that it will be ineffective.
So it’s easier to simply have humans making those decisions.
AI is quite reliable at a task like this these days, especially when it’s targeted at gold selling and website links, which aren’t normal human messaging.
You can also use a two-step process, automated flagging followed by human review, to ensure you don’t filter out legit players, instead of relying on purely manual moderation like it is now.
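As a purely illustrative sketch of that two-step idea (the keywords and the `triage` function are my own invention, not anything EHG actually runs): suspicious messages are held for a human moderator rather than auto-blocked, so a false positive delays a message instead of silently banning a legit player.

```python
import re

# Hypothetical heuristic: "gold"/"rmt" near a URL-ish token.
# The pattern is illustrative only, not a real game's filter.
SUSPICIOUS = re.compile(r"(gold|rmt).{0,40}(www\.|https?://|\.com)", re.IGNORECASE)

def triage(message: str) -> str:
    """Step 1: auto-flag likely spam. Step 2 (human review) happens
    off to the side, so nothing is permanently blocked by the machine."""
    if SUSPICIOUS.search(message):
        return "hold_for_review"  # a moderator confirms before any action
    return "deliver"
```

The point of the design is that the automated step only decides *who gets reviewed*, never who gets punished.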
There is no excuse to not have adequate spam filtering in today’s world.
RMTers have always been clever to overcome these types of situations. You block a pattern, they change the pattern to something else.
I’m sure EHG has tools to monitor this, especially because, according to all reports so far, no single account gets to spam for long.
Preemptively filtering messages from legit players pending approval by mods will simply create bad feelings when players get blocked or their messages are delayed and no longer fit the flow of conversation.
It has been proven over time. Policing is always one step behind.
Yes, you could probably use AI to try to filter this out, but they would also just use AI to get around it. They have access to the same tools devs have and they can always get around them.
No, I’m someone who has been playing online games for over 30 years, and I’ve seen this very issue develop ever since the D2 days.
What you’re saying is no different from people saying back in the day that you could just filter things out with regex. And then RMTers getting around the regex pattern.
And then filter via adaptive scripts. And RMTers getting around that.
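The regex arms race described above is easy to demonstrate. Here is a minimal sketch (the pattern and the obfuscated message are made up for illustration): a static filter catches the original phrasing, and a trivial separator trick already slips past it.

```python
import re

# A static filter of the kind used in the old days (illustrative only).
PATTERN = re.compile(r"buy\s+gold", re.IGNORECASE)

def is_blocked(message: str) -> bool:
    """True if the message matches the static spam pattern."""
    return bool(PATTERN.search(message))

# "BUY GOLD at example.com"       -> blocked
# "b.u.y g.o.l.d at example.com"  -> sails right through the same filter
```

Every widened pattern invites the next variant, which is exactly the cycle the post describes.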
Any online game will always have RMT and you’ll always have spam bots. The only way to prevent having spam bots is by making something so restrictive that half the player messages are also filtered out.
Because no matter what you come up with, they always just change the pattern to get around that. Because they can use the same tools devs use and they can also make their own to respond to those tools.
AI isn’t a static solution to everything when you can also use AI to combat those solutions.
I am a programmer. I’m quite aware of the tools available to both sides of this issue.
Every time there is a new technology emerging, people will always say you can use it to block spam bots. But they all just forget that that exact same technology is also available to RMTers.
And I have 20 years of experience in software engineering, a recent master’s in AI, and run 2 businesses dealing with AI. But that’s not what makes or discredits any argument; that’s just empty puffery.
No solution is 100% perfect, and pointing that out is just stating the obvious. But LE can easily stop the vast majority of the obvious spam if they wanted to. They can very easily improve the current situation.
So how about this: explain specifically how implementing modern AI will fail to improve the situation, and demonstrate that you are not simply ignorant of modern AI and making generic empty statements. Being a negative defeatist is easy; producing value and solutions is more challenging. The current manual, after-the-spam human solution is failing. I see obvious spam every time I get on the game.
Because, just like you can train AI to find patterns for spam bots, so can they train their AI to avoid those same patterns. The tool is available for both sides.
So you’ll end up in mostly the same spot. The current chat already has some filtering capacities. And I’m sure they keep adding patterns. But the bots will come up with new patterns.
So yes, you can create an AI tool to filter the current obvious ones. Just like RMTers can create an AI tool to come up with different patterns. It won’t stop them. It will just change them.
Completely flawed logic you have there. There comes a point where their adaptation becomes useless and they can’t even convey their URL or service coherently. They don’t spam to get around spam filters for fun, they spam to get you to a website. AI can easily prevent that. Again, you are describing the reality of 30 years ago, not today’s.
That was already true with the previous iterations. You could already filter out patterns for obvious urls. And you could keep doing it over and over again until they couldn’t convey their URL properly. And yet it kept happening.
And this is mostly because, if you do filter it out in such a way that they can no longer function, then you’re also filtering out players from doing the same thing.
Pointing you to urls would become impossible for a player, even if it was a legit site (like a wiki or some build or a youtube video).
For years, it’s always been an arms race. Those on the good side were always one step behind the ones on the other side.
AI won’t change this. It will just become an arms race between AIs and not humans.
I honestly don’t know how you can think otherwise and believe AI is a magical tool that will stop all evil in the world, when evil also has AI.
Again with using the thinking of the past to incorrectly discredit modern solutions. AI goes well beyond the simple ‘patterns’ you are using to defend your point. You are also literally helping prove my point: the spammers lessen the impact of their message with each adaptation. There comes a point where their spam is simply not worth sending at all, and we have the capability to bring us there today.
Ironically, 30% of the chat messages in my game at the moment are RMT spam and it is clear as such. Something AI could easily and instantly remove. Your system is clearly failing. I’m not really sure why you want to defend such a clear and obvious failure status quo when it can easily be made so much better.
You still don’t get the point. Yes, we can get to the point where we can filter out most of their ability to convey information, but it will happen at the cost of players also partially losing their ability to communicate.
If spammers can’t properly convey URLs, neither can legit players who might be pointing you to a wiki, a build guide, or just a meme, because they will be flagged for it as well.
Unless you also prevent every player from posting a link, it really can’t. There are always ways around it.
All systems fail. It’s naive to think that you can stop illegal activities completely and they won’t simply adapt and keep doing the same thing.
If you fully block spam bots, you’re also blocking regular conversations.
If you don’t fully block spam bots, then they’ll just keep on working around your measures.
There really isn’t anything you can do that they can’t circumvent. And this is, again, because they have access to the same tools you do. And because they know that the only way to fully stop them is to disable anything that can resemble a url, which will also do the same for regular players.
Nothing really changes. We already had the tools to stop them before. The problem is that you can’t stop them without also stopping players from communicating properly.
If they are spamming about gold sales, then they should be blocked. It doesn’t matter if a message is from a real player or a bot if it’s spamming about RMT or gold sales.
I never said fully, I have always said improve the situation.
I never suggested ‘blocking URLs.’ In fact, AI agents can also assess links when needed.
Ignorance of modern AI applications and strawmen are still not a valid counterargument on reducing obvious RMT spam.
Sure. That won’t prevent much, though. It’s not too hard to find a workaround. For example, you edit the wiki and add a page of your choice about RMT that points to your site. You can even set an AI to monitor it and create new pages when they get taken down.
So, are you going to block the whole wiki?
The only real difference you’ll achieve is changing the messages from “buy gold <<—http:yrl>>” or whatever to “Hey guys, check out this great way to beat the game: https://wiki.LE.com/BuyGold” or “GetGold” or “G_E_T_G_O_L_D”.
And training an AI to do that isn’t that hard either.
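For what it’s worth, the separator trick in those examples is also the easiest one to counter; a sketch, with made-up keywords, of normalizing a message before matching (which is itself a trade-off, since aggressive normalization raises exactly the false-positive risk discussed in this thread):

```python
import re

def normalize(message: str) -> str:
    """Collapse separator tricks like 'G_E_T_G_O_L_D' before matching.
    Strips whitespace, dots, underscores, hyphens, pipes, and asterisks."""
    return re.sub(r"[\s._\-|*]+", "", message).lower()

def mentions_gold_selling(message: str) -> bool:
    """Illustrative check against a couple of normalized keywords."""
    normalized = normalize(message)
    return "getgold" in normalized or "buygold" in normalized
```

Of course, the next move in the arms race is simply a new encoding this normalizer doesn’t handle, which is the whole point being argued above.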
The truth is that we already could have prevented spam bots 20 years ago. But doing so would come at the cost of players not being able to communicate freely because there would be lots and lots of messages that would get falsely flagged.
And there really isn’t any way around that. Spam bots phrase their messages the way they do because it calls for attention a lot more. But if you successfully manage to block those types of messages, they’ll just switch to more natural-sounding messages and spam those instead.