The Age of Winter timeline at 555 corruption, cleared to see all the types of nodes there

In general I kind of feel like the node target farming is too much in tension with the other systems of the monolith. To push corruption and/or get stability for bosses, you want to just go as far away in a straight line as possible, so exploring around for a handful of specific nodes might not even be worth it. That there aren’t even that many of them is kind of baffling. The system really feels like it needs an update at some point.


Missed this part.

The main difference is that with a weight table you have to analyze it and figure out where you want to squeeze your new item.
With LE’s system you just have to decide how often you want it to drop. It’s a much simpler process that actually is completely separate from the rest. You don’t need to worry how often a red ring drops when adding a new ring. You just need to decide how often you want the new one to drop.

You have a base weight value of ‘1000’, which is the ‘normal’ drop rate, and anything deviating from that expresses how much more or less often the item should drop relative to that baseline.

Want an item to be more common than the base? 1200
Want it to be a very rare thing? 100
Want it to be extremely rare? sub 100.

It’s an extremely simple system. Any increase in the overall size of the pool dilutes everything proportionally; you don’t need to analyse the pre-existing entries in the pool.
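As a minimal sketch (item names and weights made up, not LE’s actual data), such a table is just rows of ‘name + weight’, and adding a new item is one extra row whose only decision is its weight:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical weight table: adding a new item is one extra row, and every
// other item's share shrinks proportionally (the "dilution" mentioned above).
struct Entry { std::string name; double weight; };

int main() {
    std::vector<Entry> table = {
        {"common ring",   1000.0},  // baseline drop rate
        {"uncommon ring",  400.0},
        {"rare ring",      100.0},
        {"new ring",        50.0},  // new item: just pick its weight
    };
    double total = 0.0;
    for (const auto& e : table) total += e.weight;
    for (const auto& e : table)
        std::printf("%-14s %6.2f%%\n", e.name.c_str(), 100.0 * e.weight / total);
}
```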

So:

That’s wrong.

It’s simply a single small piece of code which is done poorly, deal with that. There is no upside to the method EHG presented; it has the same function but is less efficient. It is inferior to the named one in 100% of situations.
The only exception is when someone doesn’t understand how it works… but that’s the same with everything.

I asked Mike about this and he said “There is a really good math reason that I always forget.” So excuse me if I believe the senior developer of the game over you.

Again, the operation cost of looping 20 or 100 times is negligible in this instance. And since there’s a “good math reason”, I’d say there are no downsides to this.

Lately you just seem to have become “radicalized” in your opinions and now everything LE does that differs from your viewpoint of the game is suddenly crappy trash from incompetent devs. Maybe you should tone down on that?

The game has serious flaws. The devs aren’t talking about fixing those flaws. That’s why people are negative. I wouldn’t say the devs are incompetent, however, I would describe them as not very competent at game design. The game, as it stands, is just not very good.


No one argues it doesn’t, although most people disagree on the severity of those flaws and exactly which ones they are.

That doesn’t mean they aren’t working on them.

That is just your opinion. My opinion, for example, is that LE is the ARPG I enjoy the most right now and the only one I’m playing regularly. And also that, even if they never changed anything in it anymore, I would still play it for years to come.

So what you’re saying is just subjective. You don’t really like the game. It’s fine. But there are people that do. And maybe changing the game to something you would like would make the game something they wouldn’t (including the devs).

Yeah, the guy selling this idea was good, I give him that :stuck_out_tongue:

Doesn’t make the outcome better.

You see, for decades there has been a general ‘weight system’ used for item generation, and that has a reason.
EHG went and re-invented the wheel, and now it’s square.
That’s the situation.

I nitpick at the small stuff.
I’ve always had issues with some things, but there’s a difference between ‘flavor’ and outright ‘bad design’ in specific areas.

Yeah, 90% of the things I mention are ‘no big deal’, but… they are ‘a deal’ and could be improved.
Much much much could be improved.
Much is good.
But this is the feedback section and not the ‘praise devs to the sky’ section.

So let me re-iterate my stance:
You’ll hear 99% negative things from me here in this section, now and in the future.
You’ll nonetheless hear a single positive statement from time to time, and that is the major take-away which frames my core position: ‘Last Epoch is a good game’. Because… it is!
It functions.
It has better mechanics than 99% of other games in the same or adjacent genres.
The effort which went into it is visible.

What you’ll hear from me in the feedback section, though, ranges from ‘this part is fine and works, no need for change’ through ‘this part works for now’ to ‘this part is not working and needs changes’.

Since feedback focuses mostly on the negative aspects (few people go out of their way to simply say ‘yeah, found nothing’), what you read from me will lean that way as well.

This is not ‘radicalizing’, this is ‘perception bias’.
And it was actually you personally who made me go and find more things to nitpick :slight_smile:
Our arguments about software engineering in another topic, plus not having worked in the sector for years upon years anymore, rekindled my urge to learn and to shift from my former direction specifically towards game design and game development, so I’ve picked that up as a hobby again (for however long).
So, since I’m directly into it again and refreshing my knowledge, I’ve found several things which are just… wonky… not well executed. As you would expect! There are tons of those in every game, and if a game has only a few, it gets called a ‘masterpiece’. And I hope we can agree that LE is not a ‘masterpiece’, it’s a ‘well made game’. ‘Masterpieces’ are rare, few and far between.

So back to the actual thing which is:

What is it then?
I don’t take ‘Trust me bro’ arguments.
As long as no argument is there I’ll uphold my personal argument until proven wrong.
I neither trust my colleagues blindly, nor my family, nor my friends, nor my life partner and less so companies or government without fact-checking myself.
Critical thinking is the highest thing for me.

And since I’m specifically digging into itemization systems and the algorithms behind them: the simplest and most effective one is the ‘roulette-wheel selection’ method.
You can make it an O(1) operation by adding stochastic acceptance to it as well.
The stochastic acceptance part would likely be the ‘math reason’ provided; without it, the usual roulette-wheel selection method has O(N) or O(log N) complexity.
But that goes deeper than we need to here.
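For reference, roulette-wheel selection via stochastic acceptance is roughly this (a minimal sketch with made-up weights, not anyone’s actual implementation): pick a candidate uniformly, then accept it with probability weight / max weight, otherwise try again. The expected number of tries stays a small constant for reasonably flat weight distributions.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Roulette-wheel selection via stochastic acceptance:
// pick a uniform candidate, accept it with probability weight / max weight.
// Assumes at least one strictly positive weight.
int pick_stochastic_acceptance(const std::vector<double>& weights, std::mt19937& rng) {
    const double w_max = *std::max_element(weights.begin(), weights.end());
    std::uniform_int_distribution<std::size_t> pick_index(0, weights.size() - 1);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    for (;;) {
        std::size_t i = pick_index(rng);                 // uniform candidate
        if (unit(rng) < weights[i] / w_max)              // accept with probability w_i / w_max
            return static_cast<int>(i);
    }
}
```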

Suffice it to say that most programming languages already have extremely efficient built-in functions for this.
For C++ it would be ‘mt19937’ from the <random> library, used in conjunction with ‘discrete_distribution<>’ over the respective weight vector.

This is extremely efficient, the distribution is already very well done from a mathematical and seeding standpoint, and code-wise it’s a total of 4 lines rather than several interconnected functions with a likely far less efficient outcome.
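Something along these lines (weights are made up for illustration):

```cpp
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::vector<double> weights = {10, 10, 8, 2};              // illustrative weights
    std::mt19937 rng{std::random_device{}()};                  // seeded Mersenne Twister
    std::discrete_distribution<int> pick(weights.begin(), weights.end());
    std::printf("dropped item index: %d\n", pick(rng));        // index drawn proportionally to weight
}
```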

What EHG does is essentially ‘sampling without replacement’ in an extreme form: the rerolled sample is removed from the pool entirely rather than merely reduced in chance. (A natural ‘pity system’ turned on its head.)

This is not needed and goes against the KISS principle (Keep it simple, stupid!). You don’t need to make things more complex when complexity isn’t needed.
And before you say something against the KISS principle and it not being the first and utmost principle to be used in any project: it’s used in the most important sectors world-wide, and not only in coding.
Or: “Efficient game design is an exercise in modeling elements only to the minimum level necessary to create the desired experience.”

Which is not done in that case specifically. It’s a complexity level beyond what is needed and it actively works against ease of implementation, as you need to work from entirely different premises than those which have been refined and optimized over decades by now.

Not to speak of the scaling issues with EHG’s system. The number of uniques implemented is not static. With a re-roll chance instead of a static drop chance relative to the items available, your overall chance to get a given item automatically gets worse with every added item, since the cumulative probability shrinks as the sample size you start with grows.

It’s also extremely hard to grasp just how much the two approaches differ unless you know the math for cumulative probability and write out a formula for the different situations.

So I’ll do just that!

A table with 4 items:
the first doesn’t re-roll,
the second doesn’t re-roll,
the third re-rolls 20% of the time,
the fourth re-rolls 80% of the time.

This corresponds to a weight distribution of:
10, 10, 8, 2
Total is 30.
Each unit of weight is a 3.333…% probability, so item 3 (weight 8) is 26.67% and item 4 (weight 2) is 6.67%.

With that you’re done: you get your finished probabilities by adjusting the weights for the ‘re-roll’, that’s it.

Now with EHG’s system we have to take the re-roll chance into consideration.
We get the following table (with a 5th value for ‘re-roll’):
10, 10, 8, 2, 10 (the re-roll bucket is 10 − 8 = 2 from item 3 plus 10 − 2 = 8 from item 4)
40 units in total, 2.5% for each, hence a 25% chance for a re-roll.
Should that 25% happen we get a new table. Say the ‘8’ part of the re-roll bucket was hit, so item 4 is removed:
10, 10, 8, 2 (items 1 and 2, plus item 3’s keep and re-roll portions)
Which obviously changes the probability for everything else to drop: each unit of weight is now 3.333…%.

This needs to be done for every possible state of the outcome. And since the cumulative chance keeps changing with every further outcome, there is literally no reasonable way to see what any new input will realistically change, unless you have a testing environment just to tinker with the numbers until they ‘seem fine’, because they don’t follow any easily understandable logic unless you’re a math genius.
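That ‘testing environment’ doesn’t need to be much, to be fair. A minimal sketch that simulates both approaches (the weights, the reroll chances, the uniform pick and the remove-on-reroll rule are all taken from the example above, not from EHG’s actual data):

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Monte Carlo sketch comparing (a) a plain weight table where the reroll
// chance is folded into the weight, against (b) a reroll-and-remove loop.
// Numbers are the illustrative ones from the post, not EHG's real data.
struct Item { double weight; double reroll_chance; };

int main() {
    std::vector<Item> items = {{10, 0.0}, {10, 0.0}, {10, 0.2}, {10, 0.8}};
    std::mt19937 rng{12345};
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    const int trials = 1'000'000;

    // (a) Static weight table: pre-multiply each weight by (1 - reroll chance).
    std::vector<double> adjusted;
    for (const auto& it : items) adjusted.push_back(it.weight * (1.0 - it.reroll_chance));
    std::discrete_distribution<int> table(adjusted.begin(), adjusted.end());
    std::vector<int> hits_table(items.size(), 0);
    for (int t = 0; t < trials; ++t) ++hits_table[table(rng)];

    // (b) Reroll-and-remove loop: pick uniformly, maybe reroll, drop rerolled items.
    std::vector<int> hits_loop(items.size(), 0);
    for (int t = 0; t < trials; ++t) {
        std::vector<int> pool = {0, 1, 2, 3};                   // indices still in the pool
        for (;;) {
            std::uniform_int_distribution<std::size_t> pick(0, pool.size() - 1);
            std::size_t slot = pick(rng);
            int idx = pool[slot];
            if (unit(rng) >= items[idx].reroll_chance) { ++hits_loop[idx]; break; }
            pool.erase(pool.begin() + slot);                    // rerolled: remove and try again
        }
    }

    for (std::size_t i = 0; i < items.size(); ++i)
        std::printf("item %zu: weight table %.2f%%   reroll loop %.2f%%\n",
                    i, 100.0 * hits_table[i] / trials, 100.0 * hits_loop[i] / trials);
}
```

With these made-up numbers the two columns come out slightly different (the no-reroll items end up a bit above 33.3%, the 80% re-roller a bit below 6.67%), which is exactly the divergence being argued about; without simulating, you only get the ballpark.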

Makes it darn hard to balance anything.

In comparison, a static system with a single weight value per item stays understandable, and it’s easy to deduce whether adjustments have to be made, and where.

It’s nonsensical and abstract. And you only abstract things to make things easier, you don’t abstract for abstraction’s sake.

By that reasoning, nothing should be changed.
By that reasoning, for decades there has been a general “resistances system” used for damage mitigation, so EHG should have never changed it to what they use.
By that reasoning, for decades there has been a free, toxic, opportunistic trade system, so EHG should have never changed it.
By that reasoning, people that don’t want to trade have always got the shaft. EHG should have never changed it.
That’s the situation.

You usually laud them for making innovations you like, but then you go and trash them when they do the same thing and you just don’t like the result.

I don’t mean that. I mean outright saying code is trash, devs are incompetent, that type of verbiage.

The results their system provides do the same thing as a weight table. The processing used in it is negligible. So what does it matter to the player? If they feel they have a reason for it and it has no impact on the game, why start attacking that?

EDIT:

This is no different from simply adding a new value to your table without adjusting the other values. And even adjusting the values, it will always change probabilities of at least some. It’s impossible not to.

First check if a change is needed, then change.

Was it needed?

It’s not a perception thing, it’s an efficiency and coding thing.

In math you don’t ‘invent’ random things. You either implement the existing solutions or you create a completely new one solving another problem.

What’s the problem they went to solve?
They just made an alternative, and less efficient, solution to a problem that already has one.

Once more, not perception, it’s a code related topic.

I’m saying ‘yes, this type of code is not good’.
Why not?
What’s against that?
Look into it, see if it’s the case, change it if statement is true.
If it’s not true then leave it be, optimally with an explanation as to why the statement is false (albeit obviously we won’t get that; time is a factor and answering everything is just not possible).

I attack it from a coding standpoint, not a player standpoint.

Also different thing.

Which is a static change in outcome.

If you add a binomial distribution with several layers into it, then that’s not the case anymore.
It flips the second you re-roll, by then every state afterwards is less likely to occur.

After all, if the table has a 20% total chance to re-roll and it does… that 20% gets adjusted.

Now it could be 18% to re-roll… or 16%, depending on which piece was picked.
Also, each individual item left in the table now has a different chance compared to before, depending on which specific item was picked. So your ‘no re-roll’ item which formerly had a ~5% chance to be picked is now at 7%… or 9%. Well… which is it? We can’t know!

So first, we have a so-called ‘Markov chain’: each state depends on the former state.
Then we need to put that into a ‘stochastic matrix’ and do the math.
But since we don’t have a simple ‘it drops’ or ‘it doesn’t drop’ outcome but a variety of outcomes, that means we have a stochastic matrix made up of stochastic matrices to get our actual outcome.

The stochastic matrix itself is a great tool for making a multi-layered execution process with random distribution more efficient… but I damn well hope that EHG doesn’t run things through it without actually visualizing the outcomes… and hopefully uses those outcomes as the starting point for each area rather than the chain itself.
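‘Visualizing the outcomes’ is cheap for a pool this small, to be clear. A sketch that enumerates every possible re-roll sequence exactly (same made-up numbers as before; what the real system does when the whole pool re-rolls away is unknown, so that mass is simply left unassigned here):

```cpp
#include <cstdio>
#include <vector>

// Exact enumeration of the reroll-and-remove process for a small pool:
// walks every possible reroll sequence and accumulates each item's final
// drop probability. Illustrative reroll chances only.
struct Item { double reroll_chance; };

void enumerate(const std::vector<Item>& items, std::vector<int> pool,
               double prob, std::vector<double>& out) {
    for (std::size_t slot = 0; slot < pool.size(); ++slot) {
        const int idx = pool[slot];
        const double p_pick = prob / pool.size();                  // uniform pick within the pool
        out[idx] += p_pick * (1.0 - items[idx].reroll_chance);     // kept: item drops
        if (items[idx].reroll_chance > 0.0) {
            std::vector<int> next = pool;                          // rerolled: remove it and recurse
            next.erase(next.begin() + slot);
            enumerate(items, next, p_pick * items[idx].reroll_chance, out);
        }
    }
}

int main() {
    std::vector<Item> items = {{0.0}, {0.0}, {0.2}, {0.8}};
    std::vector<double> out(items.size(), 0.0);
    enumerate(items, {0, 1, 2, 3}, 1.0, out);
    for (std::size_t i = 0; i < out.size(); ++i)
        std::printf("item %zu: %.2f%%\n", i, 100.0 * out[i]);
}
```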

You can still use that outcome in a useful manner by splitting the vector search into segments based on the vector’s length, with nested ‘if’ branches deciding which half to continue in, i.e. repeatedly halving the search range. I don’t recall off-hand whether this has a dedicated library function in C++, otherwise I would name it; it’s essentially a binary search over the array you search through.
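In C++ that halving search over cumulative weights is covered by std::lower_bound; a minimal sketch (in practice you would build the cumulative vector once at load time rather than per call):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Weighted pick via binary search: build prefix sums of the weights, roll a
// uniform number in [0, total), and locate it with std::lower_bound (O(log N)).
// Weights are illustrative; build the prefix sums once at load time in practice.
int pick_by_binary_search(const std::vector<double>& weights, std::mt19937& rng) {
    std::vector<double> cumulative;
    double running = 0.0;
    for (double w : weights) cumulative.push_back(running += w);   // prefix sums

    std::uniform_real_distribution<double> roll(0.0, cumulative.back());
    auto it = std::lower_bound(cumulative.begin(), cumulative.end(), roll(rng));
    return static_cast<int>(it - cumulative.begin());
}
```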

Edit:

My big question simply is:
Is that layer of abstraction and math necessary, when it only really becomes relevant for excessively large data pools, which EHG clearly doesn’t have yet?

I don’t know why you wouldn’t just build the weight table on node init. Then, like this loop, it’s in memory. My biggest ‘what?’ with the reroll is that you don’t know how long it will take. And it’s a synchronous call. I wonder if they’ve ever logged it looping thousands of times? It’s not taking nanoseconds. I wonder how long each call takes.

This sounds like somebody really wanted to try their hand at recursion and found math to justify it.


It’s simple: roll 1 random number for which item it is, then roll again if that item has a reroll chance. If it rerolls, remove it from the list and start again. It’s not recursion, it’s just a loop (which isn’t the same thing) of, at most, (# items with reroll chance + 1) iterations. Modern computers can run thousands of these calculations per second.
The biggest cost in this whole operation is simply loading the list into memory (which is usually done at the start of the game and stays that way through the whole game), and that cost is the same for both.
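Read literally, that loop would look something like this (a sketch of the described behaviour, not EHG’s actual code; whether the initial pick is uniform or weighted isn’t stated, so the sketch uses weights, which covers both cases):

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Reroll-and-remove loop as described above: pick an item, roll its reroll
// chance, and on a reroll remove it from a working copy of the pool and pick
// again. Runs at most (#items with a reroll chance + 1) times.
struct Unique { double weight; double reroll_chance; };

int roll_unique(const std::vector<Unique>& uniques, std::mt19937& rng) {
    std::vector<std::size_t> pool(uniques.size());
    for (std::size_t i = 0; i < pool.size(); ++i) pool[i] = i;     // working copy of indices

    std::uniform_real_distribution<double> unit(0.0, 1.0);
    while (!pool.empty()) {
        std::vector<double> weights;
        for (std::size_t idx : pool) weights.push_back(uniques[idx].weight);
        std::discrete_distribution<int> pick(weights.begin(), weights.end());
        const std::size_t slot = static_cast<std::size_t>(pick(rng));
        const std::size_t idx = pool[slot];
        if (unit(rng) >= uniques[idx].reroll_chance)
            return static_cast<int>(idx);                          // item drops
        pool.erase(pool.begin() + slot);                           // rerolled: remove, try again
    }
    return -1;  // only reachable if every item rerolled; the real rule for that case is unknown
}
```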

The fact that both systems produce different results when adding new items (adding an item with a reroll chance has a different impact on the whole system than adding one without) is, to me, enough justification for them trying to do something different with their game.

If a system has no significant weight, produces different results and they feel like they like that result better, for whichever reason, that to me is enough.

Just like with resistances. The weight of using LE’s system vs the traditional system is basically the same and the results are different.

gotcha, I misread it, thanks


I thought about this discussion in the morning while walking the dog.

  1. As @DJSamhein said, no recursion is required.
  2. But each time we loop another random number needs to be generated. Yeah, that might be cheap, but do it often enough and you start to feel it.
  3. When using the weighted table with stochastic acceptance, random numbers are also rolled until an index is accepted.
  4. I believe the fastest would be the weighted table with binary search.
  5. I would say both methods could handle new items reasonably well as we are not adding new uniques at run time. I’d say this is a moot point.
  6. LE’s method requires write access to the data structure used - even if it is not destructive - which prevents reentrancy, so each time an item is rolled a copy needs to be set up. This is not the case with a weighted table, where only read access is required.
  7. For me, LE’s approach is not particularly straightforward: after a re-roll the probabilities of all remaining items change slightly and we are looking at a Markov chain. A weighted table, however, is pretty easy to understand.

Nevertheless, I doubt they were shooting themselves in the foot by using something which does not appear to offer any advantages. So it would be really interesting to get their take on the subject.

You only ever run at most (items with reroll chance + 1) iterations. In the case of rings, that’s 7. Even if it were 50 or 100, you’d still do that in a fraction of a second.

Weighted tables only need a single random roll. Each item has a range according to the weight and that is calculated when you load the table to memory.

It is, but the difference is negligible.

Neither needs write access other than the first time. The whole table is loaded into memory, whether it’s a weighted one or the rerolled one.

That is true, but maybe that’s part of the reason? So people don’t have x rings drop and claim they should have had one?
With a weighted table you can clearly point out that item x has, let’s say, a 20% chance. When 5 rings then drop, people complain that the one they wanted didn’t appear and that things must be broken (we actually see this with 1LP slamming).
With their system you can’t really determine the exact drop chance, just the ballpark.

It also favours common unique drops, since they are the ones that get the most benefits from the Markov Chain.

So maybe those consequences are part of the reason for using this?
Again, there are no real downsides in terms of the game, or the ones that are perceived are negligible.
And since both systems do behave differently and produce slightly different results, I’d say that’s enough reason for them to use it.

Yes! But it is an O(1) operation rather than O(N) as an end result, which is a better outcome.

It’s slower, but with the data size we have it is not meaningful in outcome. A weighted table with stochastic acceptance is inherently O(1); binary search is still O(log N).
Rather than binary search you can simply implement the Walker alias method and go with that. It has O(N) prep time but O(1) execution.
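For completeness, a minimal sketch of the Walker alias method (Vose’s variant; weights are made up): the build is O(N), each draw is one uniform column pick plus one biased coin flip.

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Walker alias method (Vose's construction): O(N) table build, O(1) per draw.
// Illustrative only; assumes non-negative weights with a positive total.
struct AliasTable {
    std::vector<double> prob;   // acceptance threshold per column
    std::vector<int> alias;     // fallback item per column

    explicit AliasTable(const std::vector<double>& weights)
        : prob(weights.size()), alias(weights.size(), 0) {
        const std::size_t n = weights.size();
        double total = 0.0;
        for (double w : weights) total += w;

        std::vector<double> scaled(n);
        std::vector<int> small, large;
        for (std::size_t i = 0; i < n; ++i) {
            scaled[i] = weights[i] * n / total;                  // average column height is 1
            (scaled[i] < 1.0 ? small : large).push_back(static_cast<int>(i));
        }
        while (!small.empty() && !large.empty()) {
            const int s = small.back(); small.pop_back();
            const int l = large.back(); large.pop_back();
            prob[s] = scaled[s];                                 // column s keeps its own mass...
            alias[s] = l;                                        // ...and borrows the rest from l
            scaled[l] -= 1.0 - scaled[s];
            (scaled[l] < 1.0 ? small : large).push_back(l);
        }
        for (int i : small) prob[i] = 1.0;                       // leftovers are full columns
        for (int i : large) prob[i] = 1.0;
    }

    int draw(std::mt19937& rng) const {
        std::uniform_int_distribution<std::size_t> column(0, prob.size() - 1);
        std::uniform_real_distribution<double> unit(0.0, 1.0);
        const std::size_t c = column(rng);
        return unit(rng) < prob[c] ? static_cast<int>(c) : alias[c];   // O(1) draw
    }
};
```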

Exactly. Improves readability and also makes the runtime close to static which helps in optimization and ongoing design decisions.

Agreed :slight_smile: It would definitely be interesting.

It’s an O(N) runtime, which is something you want to avoid if you can. Go with O(1) options instead.

Which is a setup done through initialization at server start-up and never again until server shut-down.

Agreed, it’s in both cases contained in-function.

Huh? That’s not… that’s not how that works. That’s all chance-based situations, always? Is there a difference between what ‘type’ of 10% you have? :stuck_out_tongue:
How would you even know? A mystery sixth sense which is the chance-sense now? :stuck_out_tongue:

You… can?
It’s just… a mess?
Unneeded complexity? No functionality?
It’s a game and not a scientifically related multi-step execution process which demands Sigma 7 reliability?

How so?
I don’t understand that one, please explain that since I can’t follow.

That would be for the stochastic acceptance (instead of a binary search).
That part surprised me as well but gives the term stochastic acceptance a meaning. I found some code on GitHub.

And here we disagree. I had to profile and optimize other peoples code too often, because nothing is free, particularly not the seemingly simple stuff such as “new”.

How do you remove an item then?

Obfuscation sucks.

The difference is that in PoE, for example, you know you have an x% chance to get a specific item. It’s a static, fixed, easily calculable number.
In LE, it’s not as easy to calculate, so people aren’t really aware of the drop chance. It’s not a static number you can easily calculate, since that is the nature of Markov chains.

It could be just a way to obfuscate drop chances.

Each new reroll increases common uniques’ chances more, that’s all.
Simply put, you have x commons and y rares. When a rare rerolls, you get x commons and y-1 rares in the pool. So commons are more favoured with LE’s system than with a weighted table.
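A quick worked example with made-up numbers (2 commons that never reroll, 2 rares that reroll 50% of the time, all with equal base weight): folding the reroll chance into a static weight table gives weights 1, 1, 0.5, 0.5, i.e. 33.3% per common and 16.7% per rare. The reroll-and-remove loop instead works out to 17/48 ≈ 35.4% per common and 7/48 ≈ 14.6% per rare, because every removed rare hands its share mostly to the commons that can never leave the pool.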

I never said they were the same. I just said that the operation costs in both are negligible. Both are done in milliseconds, at most.

You use a clone variable, of course. You never touch the original variable, you just make a new one. Having a list that you remove item by item has negligible impact on processing. I know because we often have to create lists for thousands of people at once and the calculations themselves (it’s a debt calculation) have more impact than manipulating the variable.

Sure, but that doesn’t mean devs can’t use it.

That is what I meant. You need to copy it which adds to the cost.

Sure but customers often appreciate transparency.

And your solution is to obscure the number so only people with more knowledge can infer it out of that?
It’s not impossible, just cumbersome.

So… what happens when someone DOES provide the numbers? All crumbles to dust? We’re back at Step 1 now. Has the situation changed in reality? No?
So what was the goal in the first place?

Each new roll increases all remaining elements’ chances non-uniformly, relative to the element that was removed.
So your argument is not true. It also increases the chance for the extremely rare ones to roll. It just depends on which item was picked.

That’s… nonsense?
Resolving a Markov chain and a stochastic matrix gives you back a weight-distribution table.
The weight-distribution table is the static presentation of those concepts.
Your argument effectively says ‘the outcome of the situation is worse than the outcome’.

It’s just usually useless and a waste of resources. Exceptions apply.
And when the reality of the situation comes out, people start to feel cheated.

You know, making informed decisions is based on having information.

It does. It’s just negligible. You can still do thousands of operations per second like this in modern computers.

I don’t think that was part of the reason at all, but it could have been a factor. Personally, I think it has to do with how probabilities redistribute differently when adding new items, especially ones with reroll chance.

Yes, non-uniformly. Common items’ chances increase significantly more than the remaining rare ones’ with each reroll.

It isn’t. Rare items get removed from the list. Common ones never are. Eventually, if you reroll them all, you’re only left with the common ones.

As I said, I don’t think that was the reason for using it. I don’t even think it was actually a factor (even a minor one) though it could have been. It’s just a side effect.