I saw a comment on the internet that LE will be focusing on AI slop in the future. Has EHG or Krafton announced anything to that effect? Because focusing on the latest tech bubble is often a death sentence for any sort of game.
Krafton's announcement was "AI first", that exact term even.
They announced that they want to focus on implementing AI in as many areas as possible: internal communication, having all member studios (of which EHG is one; they're owned by Krafton) use AI in their processes, and so on.
So yes, the chance is very high that there'll be a massive failure ahead.
EHG hasn't; Krafton has said that they want to focus on "AI first", so it could go either way and depends on how they use it (as a tool to make the artists' jobs easier versus replacing the artists).
Like 3D was back in the 90s? Hardly any games did well using 3D, and it's barely used now. Same with 3D acceleration cards.
To be entirely clear… the early 3D titles had a very mixed reception overall, especially since several were criticized for looking bad because of their blocky geometry, having odd camera movement, and even causing headaches.
Much like the first VR headsets, which were extremely niche and heavily criticized.
Or the first alternatives to controllers and joysticks (which had been established by then), which were utter garbage: the Power Glove (which almost never worked), the Duck Hunt gun, which only worked on specific screens (but at least worked), or even the first Wii controller when its functionality was used for anything other than Wii Sports (Metroid Prime was a control-wise disaster, as a "prime" example).
Almost universally, new technology is badly received, and not only in gaming. It's more a tech showcase than anything else, vanity placed above functionality.
Heck… self-driving cars like Tesla's are a great example in that regard: severely outdated and extremely unsafe compared to other companies, since those have improved on the concept considerably.
AI-based tools are still very limited in design and often faulty in what they provide as output, leading to more work rather than less, depending on the use case.
And declaring "AI first" means a supreme focus on that aspect, one which will surely be a very powerful tool in the future but is not remotely polished in its current state for those use cases.
And yet it's only through continued use of a technology that it actually evolves into something that becomes almost required, because most players expect it. Like ray tracing or DLSS.
Sure, some technologies don't survive (like 3D glasses or the Power Glove), but if studios didn't try to innovate with the latest technologies, we'd all still be playing 8-bit games.
It's only through several iterations of previous uses of a technology that it has a chance to bloom and actually become an asset.
That's not to say whether AI will become an asset or not, but it's only through studios attempting to use it that it will either sink or swim.
And, to be honest, there are several areas where AI will very likely become standard soon in games, like asset creation.
Nobody said otherwise.
Do you want to be the beta tester, though, or do you want a polished product?
This is not a new product, after all; it's already established. So moving ahead with unproven concepts is a risk beyond the initial investment: it's a risk to the sustained customer base.
That's not good business practice.
There are plenty of places where they could use AI and you wouldn't even notice it. Voice acting, for example. Even asset generation, as long as there is quality control over the result.
They could even get their graphic designers to use AI to generate the base asset much faster and then just polish it. Meaning we'd finally get all the missing unique renders, and more MTX, faster.
Besides, in a live service game, everything is a beta test. PoE treats leagues as a beta test playground, so they know how much to tweak to add to core, factions were a beta test, woven echoes were a beta test, etc.
Most stuff added to this genre is a beta test. It's all just an "I had this nifty idea, let's see how it goes and if players like it".
That's not how Krafton stated it, or what they stated.
Internal communication, overall usage of all possible tools.
The way they described it is basically "wherever AI can be used, it will be used". Which is obviously not a good thing.
Had it been proclaimed differently, it would also be a different matter.
That depends entirely on how it's actually used. If they start doing vibe coding, that's not a good thing.
If they use vibe coding with human oversight afterwards (meaning fixing all the bugs/nonsense), it can speed up development. Or it can not. I dislike vibe coding, so I can't really say.
But knowing how many indie studios and solo programmers are using AI for asset creation, that can certainly be a good thing.
And if they decide to use AI for their support line (lots of call centers already do that anyway), for their internal processes and whatnot, that will not affect LE. Lots of companies are already doing that and more.
I expect they won't use AI for everything. Getting AI to bring them coffee would likely not be possible, and not a good use of it, so it's likely that won't happen.
Just because they said "AI first" doesn't mean every single thing will now be AI. Plenty of things still can't be done with AI.
It's also not likely that they can just turn to the studios they own and say "You can't have developers anymore, you have to use AI for everything" (especially when we consider that LE is actually still hiring new people), so I don't expect much of an impact on LE.
If you read something on the internet, it's true.
Sure it does, that's obvious!
Given the proclivity of larger companies to use such tools in non-optimal ways (to put it mildly) and Krafton's track record in general, it is simply more likely that they screw it up than that it becomes something great.
Which direction it goes is open, but the potential issues are plentiful, after all.
Mark my words: if in the next two years any sort of layoffs happen related to LE, then you know which direction it went.
I still remember getting my first 3dfx voodoo 1 card while at uni.
And yet if it doesn't happen, then there's no improvement.
Sorry, what LE badge do you have?
Voodoo was a nice one, aside from the stability issues.
While improvement is important, it's for those people who willingly go into forming areas, not those who should already have received a finished and polished product (both of which LE still is not).
And me being a beta player was about going into it fully knowingly and willingly. The beta is over, though, and now I'm not willing anymore.
Me "only" reading it on the internet was why I asked the LE forums instead of blindly believing it.
There was an announcement regarding the use of agentic AI. This type of AI is used for decision-making with minimal human intervention. It won't, and can't, be used in many areas, and the areas where it will be used will most likely not impact the development of the games directly.
AI nowadays is a big buzzword, but AI is a very big and broad topic.
The areas where agentic AI will get used will most likely not even be noticed by us from the consumer perspective.
Don't worry, there will be no "AI slop" implemented in the game directly anytime soon.
While agentic AI is not the most visible to a customer, it is the one with the highest potential long-term impact, both positive and negative, depending on usage (for the moment still mostly negative, though that is shifting as it improves).
While not directly visible, it can have severe consequences for the customer.
Even if it's used as a supporting tool to make people's work easier and save on positions, while overseeing the viability of decisions and how action is taken… the premise here is still a primary focus on AI usage, not a supporting role for it.
This means decision-making is likely to be taken over by AI to a degree, with less human oversight than needed, often leading to erroneous decisions based on the algorithm the system works under, since it cannot deal with the details of situations, only the big picture (at which those systems excel compared to a human). This type of decision-making is great short-term but faulty long-term, and extremely frustrating for the individuals put in the position of directly interacting with the system.
For example, financial risk assessment is a big aspect of agentic AI. Should Krafton use that in a sector like gaming, then we'll see a surge of generic products and development methods.
Why?
Because risk is a necessary aspect of creativity. To become successful in any entertainment or creativity-based sector you need to take inherent risks. Depending on the setup of their system, this means a starvation of creativity and hence risk-averse behaviour, which leads to products becoming more generic and hence failing, as they don't provide any form of unique experience. Pure mainstream focus.
Agentic AI is also used directly in game development. We're talking about NPC behaviour being designed with it; content design is also agentic; game balancing (actually, it would prove positive for EHG to use it there) as well as bug testing and, most importantly, support all fall into agentic AI use cases.
So:
That isn't really holding up. You're 100% right, it's a lot of buzzwords… but sadly, especially in the case of a large company, the vague basis those buzzwords create allows them to potentially apply to a large range of things as well.
They're specifically designed to create a specific narrative, after all, one based on easing worries or painting a picture in the company's favor, while leaving open the potential for usage that would be deemed controversial.
Don't mistake what was proclaimed there for any form of security for us as customers; it's deeply worrying to see, given the prevalent methods of even agentic AI usage. We don't even need to talk about generative AI usage here, which has the same issues… just different in how they present themselves to the customer in the end result.
One is more direct than the other, but both are just as likely to cause the same types of issues.
If you look at the biggest blockbusters of recent years, you'll see that most are mainstream-focused and they weren't made with AI. Fast & Furious XX29 - Cars in Space is just formulaic slop, and those are the ones that perform the best.
I'd say that AI can't do worse than that.
First off:
What's your point with this:
It serves no point, no purpose.
It points to the argument "if they were made with AI, they would be better", as it inherently states "because it was made without AI".
I don't know what point you want to make, but it showcases that the point itself is meaningless; it has no basis.
Fast & Furious always was story slop; good action scenes, though, they do those well.
That's their whole premise. Good action, end of line.
So they do something very well and focus on that, which is visual presentation. Perfect for an action movie. You don't go for a deep, enthralling storyline when you watch that; you go there for the adrenaline-surging action scenes, and as long as the premise isn't too moronic to even follow, it works.
Do you think, for example, Pulp Fiction should have followed a pre-established, no-risk approach to making a movie instead? Or Mad Max? Inception? Fight Club?
All of them took a unique premise for their stories, which was intriguing; they're not top-tier in visual fidelity or the "classic" established model of their genre, but they are the memorable works shaping the next generation of creation in their segment.
This is not, yet, something a heavily AI-leaning system can reasonably do. What those systems provide is more "same-ish", risk-averse stuff, like Fast & Furious, but never something enticing.
For a long-term game that exists in competition with other products, this is a failing methodology to follow. Unlike a two-hour (or three-hour) movie, the product is based on spending thousands of hours in it. Over that sort of timeframe it needs to provide a reason for existing beyond being mediocre… otherwise people vanish to the next mediocre product for the short window of novelty it provides.
No. You said using AI in creative roles will lead to mainstream slop. I was just pointing out that we already make mainstream slop without AI.
You can use AI to create mainstream slop or you can use it to create something more risky/creative. That will depend on what you ask of it.
Just like you can use writers to create mainstream slop or you can use them to create something more risky/creative. That will depend on what you ask of them.
My point is simply that using AI is not inherently very different from what we already have. It all comes down to what you task it to do.
If you ask AI to "write a book", you'll likely get mainstream slop. But if you ask it to "write a book using as a premise that X and Y and Z", then you'll get something else entirely. Just like with a human writer.
And if you then follow up on what the AI produces and further guide it with more prompts, you can actually get something different.
Sure, it's more work than "write a book". It will take longer. But you can use AI in a way that won't simply produce mainstream slop. Which was your argument.
In fact, you can guide it to create something completely niche according to your vision.
I said the likelihood is high. I specifically didn't state it as a guarantee.
Does anything point to a different outcome?
Yes… but what's the point?
"We already have bad products, so it doesn't matter"?
I don't get what you want to say with it.
All fair and good.
Those processes for achieving proper usage and the right outcomes with AI are not common protocol yet, though. It's starting to form, but it's not established in the industry. There are basically no standardized practices available; it's still a new field. Hence the rate of oversights and misuse is extremely high compared to established processes.
That's the whole point.
Which leads back to the start of this post here:
Does anything point to a different outcome?