Eleventh Hour Games, are you children?

Yes, but you neither demand that everyone wear a chastity device of some sort, nor that they put on a straitjacket when going near an animal, or similar things.
That would be proactive protection against it.
Instead you get punished for doing it.

Those filters, which have become so nonsensically common, however, are proactive handling of a situation: you don’t punish people for doing something, you prevent any option for the situation to even happen… with the result of also making many other non-malevolent actions impossible.

You’ve aptly described it: not everyone is offended by everything, but some things are generally accepted as being offensive or ‘unwanted’ to be seen, and hence they’re kept out of the respective societal frameworks… but you (the general ‘you’, as a reminder) theoretically could do it, you just don’t, since you have at least a modicum of social competence.

Yes, but the issue is that this is a private game. Much like when I’m at home I have the right to kick out someone that is using language that I don’t like or discussing things I don’t want discussed.
I will censor people in my home because I have that right.
Much like EHG has the same right.
Much like Facebook or twitter have the same right.

Absolutely fair!
Once more, reactive handling, you don’t tape their mouth shut so they can’t voice something you don’t like :stuck_out_tongue:

It’s the method of handling it, not that it’s handled.
It’s unfitting and always was.

And no, you actually can’t censor people in your home; you can throw them out when they don’t follow your demands.
Those systems, though, do actively censor… which is a necessary evil to a degree, as the manpower needed otherwise is simply unsustainable… but there are very simple systems available for that (which are fine), there are more complex systems available which cause issues at times (which already are not fine anymore), and then we have what we have in LE, which is so far beyond any reasonable implementation of such a system that it’s baffling.

I don’t have to accept it in my house. My house, my rules. Democracy and a free society more or less end at my doorstep. If you as a guest cannot behave as I expect of you, you are not welcome, sorry.

You said that what I describe is toxic positivity. It is not. Toxic positivity is dismissing and disallowing negative feelings and attitude. I do not dismiss negative feelings in favour of a permanent happy sunshine attitude.

It’s okay to feel frustrated by fellow players, for example if they play subpar and that is the reason your team lost. The way some people express this frustration, especially on the internet, is not okay. This has nothing to do with toxic positivity.

I really do enjoy steak, especially the ramp cut, but replace a with u in the word ramp for this filter does not like the original word.

I agree in general; I have brought up this argument very often when people want to ban a word outright. You don’t even need to forbid the old term, you just need to introduce a new one.

The game chat is not a journal to discuss medical terms or etymology though.

It’s an integral aspect of life that unexpected changes of routine happen at some point. Running a company is an aspect of life. Gaming is an aspect of life.

Nope, I got it quite right. You made a statement that it does more harm than good, presented as a fact. So, do you have proof to back your claim up?

I do not make claims about the overall effectiveness of their system.

No, I am correct.

Language is not a natural but a learned ability. Every word people ever came up with had to be explained in some way. Pointing at an object and saying its name, for example.

Your explanation of fernweh is itself proof of how this works. Yes, fernweh puts a specific concept in a single word. But the word is meaningless if not explained in some capacity. By a description, or we derive the meaning from context, for example as the opposite of heimweh (being homesick). But what is homesick? We have learned the meaning of homesick: the urge to return home if you are away for a prolonged time. But from the symbols alone, it could also mean that it hurts to be home.

Edit: I didn’t intend to send the message yet; apparently pressing shift+ctrl+enter sends the reply.

Yes, specific words are a lot faster to use. They convey a whole bunch of implications with a few letters. But this is not the only way to convey all that information.

I work as a translator and editor. I often encounter words or phrases that are not as easy to say in German. In a novel, this is a problem. Prose does not lend itself well to long explanations for a short catchphrase. But it is possible to explain it, the context surrounding it, etc.

Because I can totally get behind the lines:
“Watch my six.”
“Oh, I am doing so for a while.”

I had to explain the meaning to the translator I edited this for, because they were pretty bad at doing their job and doing their research. They translated it literally.

The joke might not be funny any more if you have to explain it, but you can explain it with different words and why it is supposed to be funny. And people might laugh the next time they encounter a similar joke, now getting it.

Not in this specific example, but if I create a discord server, I’m free to use whichever filter I want and censor whatever I feel like. People that join either accept that or not.
Likewise, EHG could have even not given us a chat. It wouldn’t have been a good option, but it would be totally their right. Players would then decide if that was a dealbreaker or not.

This is basically the same thing. They’re using a system to keep chat cleaner. My guess is that it’s using AI and LLMs and they’re still fine-tuning it to figure out context better. But even without that, they could just use a static filter and block words they don’t want discussed, including Trump, Kamala and dozens of other words. That would be their prerogative.

So rather than dropping a system that is still learning and being fine-tuned, they can stick with it until it does what they want. Which is why they’re asking for reports of false positives. And yes, it’s annoying at times, but the benefits of the final product far outweigh that.

Personally, I’d rather not have any filter at all. I’m used to D2 chat, after all. But I can definitely understand that they don’t want that kind of toxicity in their game. So while I personally would rather not have a filter, I completely support their stance on this.

I mean, that’s the basic reason why dictionaries (and synonym dictionaries) exist in the first place. :stuck_out_tongue:

So, to remove the option for someone to touch a specific thing in your home, you hence put them into a straitjacket, shackle their arms behind their back, or do something else to ensure they can’t?
Or you’re basically taping their mouth shut so they can’t speak the respective things you don’t want to hear?

Yes, you’re right, your house, your rules.
Reactive handling of it though, not proactive one. For a reason.

Yes, the filter is not good, hence you can insult someone nonetheless: just add spacing, exchange single letters… and perfect, you’re back on track!

So as a repeat: Does the filter actually do the job? Or does it mostly make it annoying to have a normal conversation?

See above.
It doesn’t hinder actually using derogatory terms if you decide to, hence it doesn’t fulfill the function.
It does, though, make normal conversation cumbersome, hence it is a detriment there.

I would say that’s more than enough to say ‘more harm than good’ when the baseline functionality is not fully provided, yet it has clear-cut effects depending on what you talk about and how you try to formulate things.

Not to mention that the point would still hold otherwise, because if we couldn’t circumvent it by exchanging letters or adding symbols, then it would intrinsically cause issues with words that contain those strings as well. ‘Assassin’ is a great example there… after all it contains ‘ass’, and hence I couldn’t even ‘ass’ume anything working at all anymore.

More harm than good; need some more?

Yeah, but translation is quite a different matter than direct communication in your likely one and only language. If words you use on a daily basis are taken away from you, for example ‘assume’, then you’ll be thrown majorly off-kilter. How will you, at that exact moment, re-describe the word ‘assume’ on the spot? It’s not like people are used to managing the specific flavor of speech unique to them. That can, and will, stop quite a bit of communication up-front, never happening because it’s too much of a bother in the first place. And in the best case it simply makes things frustrating, like not being used to typing on a phone while autocorrect changes what you want to say willy-nilly and you have no idea how it came to be.

Well, I might put certain things away and lock the room proactively. And I will certainly take away any kitchen knives my guests brought before blood ruins my carpet. Kitchen knives belong in the kitchen, not in the living room while we have a party.

And if I throw a large party with hundreds of expected guests that I don’t know, I will probably hire security personnel / bouncers to check if someone tries to bring a kitchen knife. Proactive, better safe than sorry.

You try to evade the point. What is your proof that it does more harm than good?

Do safety inspections prevent people from circumventing safety measures? Should we stop doing safety inspections or having safety measures and regulations in the first place?

People can ignore seat belts, so should we stop putting them in cars? It could be an optional upgrade for those who want them, after all. Furthermore, people unfortunately still die in car crashes despite using seat belts. So, since they only reduce the average damage, should we ignore them?

Does everything have to be 100% perfect and fail-proof before we use it? Is an improvement not enough to justify some measures?

Yes, all of those are false positives. As I said, the system they use seems to build context to decide if the message gets blocked, judging from the little experiment I did in March.

There you go. It is not as simple as ‘ass’ in 'ass’ume.

It is the core concept of explaining A with B, C, and D. It does work, and it is not relevant whether it is in the context of translation or in the same language. Translation was just an example for showing that this does actually work, contrary to your claim that I was wrong.

I forgot to mention one thing in my reply: your point about language being complex. That is actually a benefit. Because language is complex, we can use many ways to say the same thing.

I assume that you think that this is hard to do.
I think that you assume that this is hard to do.

I assume the game will run more smoothly with a different engine.
I have no proof, but I believe that the game will run more smoothly with a different engine.

The main problem is that they might not even realize why their message was blocked in the first place.

As is finding new creative ways to reformulate the insult or slur that was just blocked. At least it makes people stop and think for a moment.

I can be quite a git occasionally. It isn’t as much a problem in written form, but every so often I wish I had an implemented filter that stops me saying certain things and makes me reconsider my words.

What if the filtering was disabled when messaging friends?
I don’t remember if being friends is mutual in LE or like in PoE (where you can have someone on your friends list without being on theirs). If it’s the latter, then it would make sense to not filter messages when the recipient has the sender as a friend.

This obviously doesn’t fix the main issue raised here, but it would be a small improvement for that specific case.
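
In rough Python, the check could look like this; every name here is hypothetical, it’s just a sketch of the one-directional idea, not how LE actually stores friend lists:

```python
# Hypothetical sketch of the proposed bypass: skip the filter when the
# recipient already has the sender on their (one-directional) list.
def should_filter(sender: str, recipient: str,
                  friend_lists: dict[str, set[str]]) -> bool:
    # friend_lists[user] = accounts that user added, PoE-style: no mutual
    # acceptance required, so the recipient has opted in on their own.
    return sender not in friend_lists.get(recipient, set())

friend_lists = {"alice": {"bob"}}
print(should_filter("bob", "alice", friend_lists))   # False: alice added bob
print(should_filter("alice", "bob", friend_lists))   # True: bob never added alice
```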

Sorry, are you saying that using the in-game bug reporting tool, where you have to come out of chat and type the offending message & username from memory, is easier than right clicking a chat message/user & selecting “report user”?

Is the bug reporting tool equipped with some chat functionality that would make this easier? I’ve never used it but I can’t see a “report user for being a dick” option.

That’s not the only thing that can happen, but sure, let’s blame the victim ’cause it’s clearly their fault. Other, bigger companies want to protect their customers from other customers being abusive dicks; that doesn’t mean that it’s “carebearing” or that one side is just a “snowflake”. Remember when I said you had all the compassion & empathy of a brick? Or is this one of the times when you’re playing devil’s advocate?

Yes.

If by that you mean there shouldn’t be anything at all? No, not really. Should it be less excessive & turn-off-able? Yes.

While I agree with everything else, it would help to “rephrase my perfectly innocent message” if the system told me why it was blocked.

The other person doesn’t have to accept your friend request. While your idea is a decent one, it would require a change to how the friend list works.


Yes, that would be really helpful for the user. It might be difficult if the system tries to build context based on a poor Llama-clone language model with limited reasoning capacity. I assume that is how their system works, and EHG said in this thread that this is not simply a regexp system, so I might be right.

People often forget that friend lists do not equal friends by necessity. Most often in video games, it is just a list of contacts.


Ok, I’ll reframe the whole thing and get into more detail hence:

What would a proper filter look like in modern times? I’ll provide an example plus a breakdown of the major issues:

Example start

A basic filter would involve:

  • Building a list of applicable profanities
  • Developing a method of dealing with derivations of profanities

A moderately complex filter would involve (in addition to a basic filter):

  • Using complex pattern matching to deal with extended derivations (using advanced regex)
  • Dealing with Leetspeak (l33t)
  • Dealing with false positives

A complex filter would involve a number of the following (In addition to a moderate filter):

  • Whitelists and blacklists
  • Naive Bayesian inference filtering of phrases/terms
  • Soundex functions (where a word sounds like another)
  • Levenshtein distance
  • Stemming
  • Human moderators to help guide a filtering engine to learn by example or where matches aren’t accurate enough without guidance (a self/continually-improving system)
  • Perhaps some form of AI engine

Example end

As we can see, there are 3 major categories (btw, that’s a modelling example copied from an older post by someone working professionally as a coder for profanity filters).
They are ‘simple filter’, ‘moderate filter’ and ‘complex filter’.

A basic filter does a great job and catches a good 70% of profanities. This is the part which causes the least interference with general communication. That is fine. I’m absolutely for it: it works, it’s proven, it eases workload substantially.
Good to implement!
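
For illustration, a minimal sketch of what such a basic filter might look like (the wordlist is a placeholder and the suffix handling is my own assumption, not anyone’s actual implementation):

```python
import re

# Placeholder list; a real filter would load a curated wordlist.
PROFANITIES = {"badword", "curse"}

# Word boundaries so 'assume' is never caught by an 'ass'-style substring
# match, plus a few naive suffixes to cover simple derivations.
PATTERN = re.compile(
    r"\b(?:" + "|".join(map(re.escape, PROFANITIES)) + r")(?:s|es|ed|ing)?\b",
    re.IGNORECASE,
)

def is_blocked(message: str) -> bool:
    return PATTERN.search(message) is not None

print(is_blocked("you badword!"))  # True
print(is_blocked("I assume so"))   # False; boundaries protect innocent words
```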

Then we have the moderate filter; that one already becomes iffy. The pattern matching easily causes problems, l33t is a given to be taken into account (see the sketch below), and I would say it should nowadays actually fall under ‘basic’.
False positives? Well… there’s the crux of the problem. How do you ensure that? You can’t. You can’t even reliably establish that something is a positive, so going a step further and declaring it a false positive, when we can’t even decide whether it was a positive in the first place, is already a major issue. Here the filters start to break into the area where they become bothersome and nonsensical, able to cause issues.
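
To make the l33t point concrete, and to show why this is exactly where false positives creep in, a small normalization sketch (the mapping table is a simplified assumption):

```python
# Simplified l33t mapping; real tables are much larger.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def normalize(message: str) -> str:
    # Fold l33t characters, then drop separators used to split words apart.
    folded = message.lower().translate(LEET_MAP)
    return "".join(c for c in folded if c.isalpha() or c.isspace())

print(normalize("b4d.w0rd"))  # 'badword' -- the evasion is undone
print(normalize("1st c00l"))  # 'ist cool' -- innocent text gets mangled too,
                              # which is where the false positives begin
```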

Complex filters… we don’t need to speak about those. White- and blacklists should generally be in the basic area again; I don’t know why they aren’t, that’s a given, and using a dictionary to implement them should be standard practice anyway.
For the others? Naive Bayesian inference is a mess which can cause a ridiculous number of false positives, Soundex can cause even more issues, and Levenshtein distance (sketched below) is decent but also makes it so that when you phrase your sentence differently and nothing in it is seemingly ‘odd’… that sentence will still get blocked when it shouldn’t have been in the first place. Stemming, we don’t even need to think about how that can cause issues… what if the derived word has a completely different meaning from the stem? Oof, that is basically a false-positive machine.
And AI is not advanced enough to handle it either; deriving context is still nigh impossible for it. With simple situations it can help, but with anything beyond the simplest things it causes a mess.
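
Since Levenshtein distance came up, here’s the textbook implementation plus the kind of near-miss that makes threshold-based fuzzy matching block sentences that looked perfectly fine (purely illustrative, not LE’s code):

```python
def levenshtein(a: str, b: str) -> int:
    # Textbook dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# A fuzzy filter that blocks everything within distance 1 of a flagged
# word ('hell', say) also blocks plenty of harmless neighbours:
print(levenshtein("hello", "hell"))  # 1
```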

And that was roughly 8 years ago in terms of complexity for profanity filters.

Which leads me to this:

Yes, and that’s all that’s needed to stop a large portion of people from going further with it. That singular step to make them re-think.

Aligns with ‘simple filter’ and needs nothing beyond.

Now, on to why I am saying ‘it’s more harm than good’ (the next major point beyond this is the direct links and explanations of concepts):

Generally, profanity filters are inherently unable to filter out 100% of it. You can insult, harass, defame and so on, each in a myriad of creative or simple ways; removing all of them would utterly neuter communication, as context is nigh impossible to infer with an automated system, even nowadays.

That means beyond the ~70% capture rate you increase the rate of false positives substantially more than you reduce the actual number of proper positives. That means the efficiency ratio of the filter goes down by design… maybe it won’t in a few years, but currently that’s still the status quo.
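
A toy calculation of what I mean by the efficiency ratio going down; apart from the ~70% above, every number here is invented purely for illustration:

```python
# Assume 100 actual profanities in some batch of messages.
basic_tp, basic_fp = 70, 5       # basic filter: ~70% capture, few false alarms
complex_tp, complex_fp = 90, 60  # complex filter: more capture, far more false alarms

print(basic_tp / (basic_tp + basic_fp))        # ~0.93 precision
print(complex_tp / (complex_tp + complex_fp))  # 0.60 precision: ratio went down
```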

Next up is that a profanity filter has to be adjusted to the respective product’s userbase. In our case this is actively stated in the LE ToS under Paragraph 10 III ‘Account Creation’: You are over eighteen (18) years of age;
This means that terms which wouldn’t be fine to use in any environment with minors can instead be used freely here. Hence, for example, ‘sex’ should be a freely usable term, unless it directly relates to the depiction of those acts rather than the overall mature topic, as should any other terms which could come up in a discussion between informed adult people without depicting it and hence going over the line… which is definitely a blurry one, a major issue for overall moderation in general.
The same goes for terms describing ethnic groups, for example, or physical traits people can have.

As an end result, the filter is only allowed to actively weed out distinct words which in basically 99% of situations are used as… well… a profanity. Anything beyond that limits general communication. Specific things like religious or political topics can also fall under that, though it needs to be done properly and across the board, otherwise we would get favoritism, and that’s a big big ‘oof’ again.

Next up, unclear language in the rules:
As the Rules of Conduct for ‘Harassment, defamation, and abusive language’ describe it: While using the forums and in-game chat, all users must treat others with respect. People have the right to participate in the discussions which take place here without receiving abuse or harassment of any kind. This also includes Bullying, Name calling, Baiting, and slander and will not be tolerated in a public or private fashion.

Well then… what exactly is ‘Abuse’ or ‘Harassment’, and specifically, as mentioned, ‘Bullying’, ‘Name Calling’, ‘Baiting’ and ‘Slander’?
By which law? Is it the respective law of the user’s country, or US law, since EHG has its seat in the US?

For example, the US has so-called ‘Freedom of speech’ while my country has ‘Freedom of opinion’, quite different terms with different meanings, importantly so.

This, for example, means that in my country nobody is allowed to stop me from voicing any form of opinion in any way… but! If it’s voiced in distinct ways (harassment, defamation and so on) then I’ll have to face the consequences of doing so; nobody, though, is theoretically allowed to hold my mouth shut to keep me from speaking the words. So… while I’m acting against the rules of conduct and my message will be removed henceforth… the voicing of my message itself is not to be hindered, it’s to be punished afterwards.
Quite an important aspect.

Now, let’s move over to why it’s harmful directly, supported by studies:

Conflict tends to decrease when expression is unhindered; this has already been shown scientifically. The more freedom of expression there is, the more likely it is that conflict won’t happen, as the ability to resolve it before escalation is provided.

https://www.ifn.se/media/dvsnxius/wp1473.pdf <— this is a study on how freedom of expression and social conflict are correlated with each other.

This means that inhibiting the ability for free expression, hence the usage of distinct words rather than how they’re used, commonly has a negative effect on a community rather than a positive one. Meaning that when measures are taken, they have to be as limited as possible.

Next up is the definition of what counts as ‘harassment’, for example. That is commonly defined as ‘the intentional annoyance, threatening or demeaning of a victim’. Hence, a distinction has to be made for un-intentional behavior, i.e. compulsive behavior as well as uninformed actions which lead to an unwanted outcome.

This can’t be handled through a qualitative profanity filter; it has no meaning in that regard. The biggest aspect to hinder there is the pure impulse of doing it, not letting it come to realization, which causes any complex filtering and limitations to have no effect in the first place. Hence, beyond the basic premise, it solely reduces the ability for ‘normal’ communication.

Defamation is also a problem; that one can’t be handled with a filter in the first place. It’s the spread of false information about a person, business or organization; libel and slander, hence. How would a filter decide whether information is false or true? It can’t, unless the operators of the program implement those judgments themselves… which is a clash of interests by design.
So also a non-viable approach.

Bullying also can’t be handled with a filter, since it’s ‘repeated and intentional use of words and actions against someone or a group to cause distress and risk to their well-being’. Hence it is repeated harassment, which is already covered.

Name calling… oof, that’s a tough one. ‘A form of argument in which insulting or demeaning labels are directed at an individual or group’.
Well, what’s ‘insulting’? And what’s ‘demeaning’?
Insulting would be ‘an expression, statement or behavior that is deliberately disrespectful, offensive, scornful, or derogatory towards an individual or a group’. Aha! Once again, intent has to be there. So an ‘insulting’ term can only be one when it’s used intentionally so; otherwise it is not insulting, no matter the word.
Now we start to get into major problem areas already.
Demeaning, on the other hand, is ‘behavior or speech that is belittling, insulting or derogatory towards someone or something’. Well, we covered insults already, and ‘belittling’ is ‘the intentional act of making another feel worthless, empty and dismissed’. Once again, intention.
So that leaves us with the term ‘derogatory’ as the last one: ‘a word or grammatical form expressing a negative or disrespectful connotation, a low opinion or a lack of respect towards someone or something. **It is also used to express criticism, hostility or disregard.**’
The bold part is an issue.
How do we make the distinction? Well, that’s not clear, which is why it’s a problem. It’s the only aspect which works without intent, but intent matters. That’s why it’s so often used: because it can just be applied ‘whenever’. Also, everything can be derogatory depending on circumstance, which means everything is derogatory by default. Someone messes something up and you say ‘Well done!’, and that term has become derogatory, a disrespectful connotation.

But… should decisions be made based on how much you’re respected? No, they shouldn’t. Respect is not an inherent thing you’re simply given; respect has to be earned through actions, which also means the negative form of it can be earned through actions as well. Should you voice it, though? It might become important; most of the time, though, ‘no’.

So yes, it’s generally detrimental, as automated filters can’t take ‘intent’ into consideration; we can only infer ‘likely intent’, which has to suffice, as anything beyond that would have the negative effect of automatically labelling everyone as ‘ill-intended’ by design.

Never said that. As a victim, you nonetheless have to provide the information that ‘you’re a victim now’.
If others get to decide beforehand whether you’re a victim or not, that takes all the responsibility and authority out of your hands; it doesn’t ‘empower’ you as it should, it ‘depowers’ you.

That’s very important for someone who’s gone through a traumatizing experience; they often feel powerless, so don’t play into that further and cement it through your actions.

Never said that either.
Carebearing is being overzealous, which, yes, EHG’s filter is. Don’t you agree? I think you agreed earlier, didn’t you? So why not anymore?
Also, I never spoke ill about anyone or called them ‘snowflake’. The problem arises when you try to save 1 person and instead throw 1000 under the bus at the same time. That’s not how it works; that’s detrimental.

You’ve got to use appropriate measures to ensure this doesn’t happen.
That’s all I’m saying here and always have. Profanity filters are a necessary evil, as we don’t have a better system yet, but since they’re nonetheless an evil they need to be used as sparingly as possible so as not to infringe on fundamental societal aspects.

Neither/nor. First of all, thanks for providing a prime example of ‘derogatory’ wording; it’s a really good example.

Secondly, I’m advocating for proper measures being provided. You can’t run out and scream a meme like ‘But think of the children!’ at every problem… because that’s not how it goes.

Yes, all the aforementioned issues are… issues. Major ones, societal ones, mostly systemic ones. And you need to provide according options to ensure that when something happens it is handled appropriately. You cannot prevent it happening in the first place; that’s currently impossible with our world-wide knowledge base, though maybe the future changes that, hopefully so. Hence… you need to adjust to that circumstance, as shitty as it is.

Also, do you think the majority of those cases are in any way, shape or form ‘easy’ to handle?
Was it intentional? Has the ‘victim’ in such a case actively gone into the situation knowing it would happen? (A form of enabling, btw, as little as we like to see it. If you wear a shirt stating ‘punch me’ while running around screaming at everyone to punch you… you shouldn’t be surprised when you actually do get punched.) Is it ‘solely’ a clash of cultural differences leading to it? What sort of infringement has actually been done?

You have to take those things into consideration individually, that’s why it’s so darn hard to handle those things.
And also, there are often 2 perpetrators and 2 victims at the same time. Obviously not every time… but we’re not talking about rape victims here… and even in that case there’s so much baffling stuff going on that it’s a mess.
Such situations are never simple. And it’s definitely not within the remit of a game dev to have the capacity to handle them in the first place. That’s to be done by legal authorities and therapists respectively.
The devs should solely report it.

Now we could talk about how effective those reports often are, but that’s another topic in itself and a mess in many areas of the world currently.

And it would be a good change, I would argue.
Having a simple ‘contact list’ and a ‘friends list’ separately would go a long, long way, especially in handling harassment, as for a contact you wouldn’t inherently see whether they’re online or not; you’d basically just have the ‘quick dial’ button available.

I initially liked that idea. But thinking it through, I don’t believe this should be facilitated like this…
Probably would do more harm than good at the moment. I mean, imagine how many times some people would incorrectly use that tool and spam the bug report system because they want to type Trump in chat.
If the current tool is already able to fetch several pieces of data, probably including recent messages, then I think this is perfect already.

Actually, you did, I even quoted it, but I’ll do it again:

Yup. Possibly even a list of people you played with recently as well.

Never said you did.

I’m not sure that adding a functional profanity filter is “dis/de/whatever the correct prefix is-empowering” people/victims. Feels like quite a big stretch IMO.

I agree that it’s overzealous (as I’ve said before), but I wouldn’t use a term with negative connotations like that, hence “overzealous.”

Indeed, & part of what’s lacking in the current implementation is feedback as to why the comment didn’t get through.

And yes, they are a necessary evil.

What’s funny is that you’re OK with the company removing the ability for one to type in chat COMPLETELY, in order to protect the community as a whole, but at the same time, is offended if a single message gets censored in order to protect the same community as a whole.

My husband is a psychologist. I quoted this sentence to him; he laughed and said: “This person is wrong… a therapeutic environment is the exact place where it belongs. A therapy session is the best place for you to offend, slur and be angry towards people you don’t like, because no one will judge, nor censor you. Pretty much like those Rage Rooms, where you can go and break things to let go of the anger.”

So, if one feels like they need to be releasing all that through a video game chat, maybe what they need is therapy or a rage room, not gaming.

No thanks, that’s a huge problem already, we don’t need that being reinforced.

Edit:

Ok, now you’re crossing the line and have started talking about something you clearly know nothing about, judging by what you wrote above.
Completely confusing skin color with race/ethnicity.
No, it’s not offensive to say someone is black. Only if you attach some kind of detrimental opinion towards black people.

Please, keep your comments within the boundaries of what you’ve studied or know about.


Not only did I not say that, but you got the issue wrong. We’re not discussing reporting other people. We’re discussing reporting your own message that got filtered by a false positive. And you don’t need to remember your message. It’s in the game files. It goes with the report.

However, it doesn’t look likely that LE is using either of your examples. Going by the current state of technology and the statements made by EHG in this thread, I’d say that they don’t have “perhaps some form of AI engine”, but that the filter is mainly built on AI and LLMs. Which means that the model needs to learn over time (which is why they ask for false positive reports) until the success rate is close to 100%.

And the reason I know this is because in my job we are currently implementing our own AI engine and it has the exact same issues.

This isn’t true, though. It all depends on the context you’re feeding your LLM, which is what keeps growing and teaches the AI how to behave. You can already use one for a purpose like this today.
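
As a rough sketch of what ‘feeding context’ means in practice; the prompt, the policy text and the `complete` callback are all hypothetical stand-ins for whatever model API is actually used, not EHG’s real pipeline:

```python
# Hypothetical shape of context-based LLM moderation.
MODERATION_PROMPT = """You moderate chat for a mature-rated (18+) game.
Given the recent history and a new message, answer ALLOW or BLOCK.
Block insults, slurs and harassment; allow profanity used non-abusively.

History:
{history}

New message: {message}
Answer:"""

def moderate(history: list[str], message: str, complete) -> bool:
    """Return True when the message may be posted."""
    prompt = MODERATION_PROMPT.format(history="\n".join(history),
                                      message=message)
    return complete(prompt).strip().upper().startswith("ALLOW")
```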

That study doesn’t make a distinction between online and offline expression and conflict. The two are very different. The anonymity of online media makes conflicts much harder to prevent. Basically, without any moderation, you end up with 4chan.

I skipped most of that, since it’s based on a false premise, namely that AI can’t handle things like this already.

Is this only for chat messages? The word “Kuma” is currently blocked in offline character creation. If I understand the issue correctly, it’s because it’s a word used to describe genitalia in some foreign language, but if it’s offline, I really don’t get why it’s blocked when I am the only one who is ever going to see it, and it’s not a problematic word in any language I know.

I mean, I am just wanting to call the character “Bear” in Japanese.

I’m in full support of a filter, but it should work so that if you don’t want to see foul language, the filter blocks it (or masks the foul part), and if you don’t mind it, you turn the filter off. Yes, “foul” is subjective, but we all know a great number of words or content/phrases that are considered foul, and those are the ones the filter should focus on.

The current system IS too aggressive and there SHOULD be a way to know what caused your message to get blocked, as well as a way to put in a ticket without having to retype your message (and the block message should tell you what caused the block in the first place).

I know that EHG has already clarified a couple of those points:
POST #16 by @ServerCryptid
POST #22 by @EHG_Kain

You’re misinterpreting it there.

Is EHG responsible for it? Yes or no? It’s a ‘no’. That’s not their job and shouldn’t ever be.

Also, it’s a factual statement: if someone actively writes to you with the intention of harassment and you don’t block them as it happens… what is EHG supposed to do? It’s not their task to do that, it’s not their task to limit exposure to potentially malicious people; it’s their task to provide options to handle it when it happens.

Doing otherwise is not only nonsensical but actively dangerous as it infringes on the absolute core principles of modern western society.

Wasn’t in direct correlation to the profanity filter.

It was about that:

You don’t rush in at the first sign of ‘something’ happening; you take action when you can ensure that your actions won’t have a further detrimental effect on the person… and after making 100% sure that the person actually is a victim at that moment and you’re not simply being overzealous.

You can do so… soooo much harm by acting wrongly in those situations, and even professionals struggle to handle them reliably. They’re extremely complex in their setup, in what’s behind them and in what has to be taken into consideration.

It’s not the place for a company to act on behalf of any person there; it’s hence solely for them to forward it, so that professionals who work directly with such situations handle them instead, to minimize the risk of causing further damage.

In those cases it’s literally about saving lives at times, or livelihoods, as long-term trauma can severely impact every aspect of your life for years or for the remainder of your living days. It’s not an ‘oops, I made a little mistake!’ situation; minuscule things can have massive effects, and it should hence be handled with the appropriate care rather than rashly doing ‘something’.

You seem to have a problem with discerning the situation here:

Has the person caused an infringement of any sort? No? All fine. Don’t limit them.
Does the person cause an infringement? Now you take action, which is a respective punishment fitting the position the person is in, which is a user of the product infringing on the rules given to them. If harm is being caused then the authorities have to be informed; if not, then access to the function is removed.

You don’t limit people, you give them consequences. Our whole society works solely on consequences, otherwise people would go around willy-nilly killing people left and right. That’s how society has always worked, how it works nowadays and how it likely will keep working for at least several tens of thousands of years going forward.

Yeah, it was badly worded, I’ll rephrase it:

If you go into… for example, something like ‘Alcoholics Anonymous’, then you’ll commonly leave those things outside. You don’t yell at and insult the other people in the group, as it’s distinctly set up not to judge but instead to share between people, providing an environment where they can tell their experiences as they want and not be under scrutiny.

Obviously confrontational therapeutic sessions are a big thing and exist, which is the opposite side of it.

Was badly framed from my side.

Depends on the type as well. If someone curses, that’s often a coping mechanism and not targeted; those words get censored even though they don’t fall into the framework of being insulting, defamatory or anything else.

Outright attacking other people is another topic by itself and obviously falls under harassment and is not to be allowed.

Distinctions there are extremely important.

In some areas… it is :slight_smile: Luckily not in mine, but yes… it is.
In my country, calling someone a ‘Gypsy’, even if they’re a Gypsy by heritage, would be a slur nonetheless, which is a ridiculous notion. Many countries have something of that sort in different shapes and forms.

Which by itself would pose a major problem, as mentioned… they’re not able to properly work with context yet and are extremely inefficient resource-wise as well.

Obviously you need an environment for the model to learn in… but you don’t implement it in a half-baked state, as that causes more issues than good.
Not to mention that the current situation is extremely fast-paced in how legislation and research develop, and hence in which things are actively allowed or not allowed. Your models become outdated very quickly, or in parts outright work against the expected behavior. Which will only get expedited in the coming years, since ‘working it out at least half-way properly’ is becoming an ever bigger focus by now, seeing as the internet has caused many things to become socioeconomic issues which before its existence weren’t even a thing, as no environment existed world-wide for them to happen in.

Which is why I’m an advocate for providing basic private information to those companies while enforcing proper privacy laws onto them so it can’t be shared with anyone at the same time.

Upholding a safety system while enforcing accountability on the user.

I don’t know. Maybe they could create a filter (that is still being fine-tuned) to prevent the toxic behaviour in the first place? So the chat isn’t as toxic as D2’s was or many in PoE are?

Does it? Many organizations have lists of words that can’t be used in any communication. If you want to work in a call center, same thing. There are plenty of examples where private companies censor their employees.
This is not even going into online media like Facebook that auto-censor many posts.

Seems like modern western society’s core principles are mostly: “want to use my product, you have to follow my arbitrary rules”.

Yes, they are. You are the one feeding them the context and fine-tuning them. The broader the scope, the more training it requires, but they’re already able to work with context just fine. That is the main point of LLMs, really: to decipher context from sentences.

Considering that more than 99% of good phrases go through, I’d say it’s way better than half-baked. False positives are an outlier, not the norm. They’re not even common enough for you to advocate tearing down the system.

You speak from theoretical and formal definitions, which is fine. Yes, not everything people say is meant as an insult, even if it is perceived that way. As some other user in this forum put it, and I am paraphrasing here: “I am gay. I usually deactivate game chats because I don’t want to read people calling everything that is not to their liking gay.”

While people might claim that this is not intended as an actual insult, their use of language, if we look at it favourably, mindlessly hurts and discriminates against gay people. The ‘culprits’ might not even be aware of the effect this has. And you will find people who know exactly what they are doing.

Going purely by book definitions does not help with the reality of what happens. Unmoderated chat quickly devolves into a cesspool of foul behaviour that will make other customers uncomfortable or actually harm them emotionally.

EHG does not want this in their service. THEY decide what they consider appropriate or not, and as a private company, they are well within their right. Society at large is under no threat by this.

Do you really consider applying a chat filter as throwing people under the bus? What is the real harm for those people whose messages get blocked? What is more important to protect? Your right to speak as you want in an in-game chat where you are only a guest, or the dignity of fellow customers?

The goal of this filter is to automatically moderate the chat in the way EHG wants. It is overzealous, yes, creating false positives. But I bet it has more real positives. LE’s chat seemed pretty tame compared to other game chats I’ve used.

By my definition, if it did more harm than good, it would increase the amount of the very stuff it tries to moderate. The ‘detrimental effect’ you proclaim is a mild inconvenience when a message is wrongfully blocked. That’s all there is to it. It does not make communication impossible, and it does not remove your ability to express your opinion in a place where it is more welcome.

As for this paper you linked: have you read it? 15 pages of self-references, where the authors mostly cite their own findings from prior papers to back themselves up. Sounds pretty biased at first glance, but I did not put in the effort to check everything.
Even if this paper is a quality product, we have to consider the applicability.

Do you think you can apply a large-scale sociological study that does not really go into detail to a limited social media environment? Don’t you think that there are different stakes and social mechanisms between an in-game chat and the whole political landscape of a country?

What many people don’t seem to understand about the freedom of expression/opinion:
In Germany, you have the right to express and disseminate your opinion. You have no right to be published by someone who does not want to publish your opinion*. You are free to publish it yourself or find someone who wants to publish it.

  • Minor limitations regarding political parties, as far as I am aware: broadcast stations are mandated to show political spots for parties they don’t agree with.

If we talk ethnicities, they are Sinti and Roma, not Gypsies. Two distinct groups.
