The Age of Winter 555 corrupted timeline cleared to see all types of nodes there

Could be… it’s just… inverse?

What would you expect to happen when you add more rare items to the game without counter-balancing with the common uniques?
The situation is that the range of the weight-table gets larger, hence the chance for a rare item to drop gets higher compared to a common item. This means rare items become more prevalent.

If you use their method though it’s the inverse.
For every rare item you add, the chance to get one goes down overall. For every common item you add, it goes down substantially more.
Why so?
Because you have a secondary layer (the re-roll chance) on top of the base weight distribution (which is ‘1’ for each item), the outcomes get pushed towards the common uniques more and more, since every re-roll has a higher chance to end on a common than on a rare, and not in a directly proportional way.
If we translate that indirect correlation into an actual weight table, it would look like… say ‘1000’ (a nice round number to use as a base) for the common unique item, while the item with a 20% re-roll chance ends up at roughly a ~650 weight (since every further step of the chain it becomes less likely to stick).
Now if you add another item, the distribution suddenly shifts to roughly ~1100 to ~600 in comparison.
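To make that ‘effective weight’ framing concrete, here's a small sketch (my own, not EHG's code) that simulates the reroll mechanic as it's described in this thread — uniform pick, a reroll removes the item and picks again from what's left — and counts how often each item actually ends up dropping. The counts are the ‘effective weights’; the exact numbers shift as the pool grows, which is the whole point being argued here.

```python
import random

def simulate_drop(reroll_chances):
    """Uniform pick; if the picked item 'rerolls', drop it from the pool and pick
    again among the rest (the mechanic as described above -- an assumption)."""
    pool = list(range(len(reroll_chances)))
    while True:
        idx = random.choice(pool)
        if len(pool) == 1 or random.random() >= reroll_chances[idx]:
            return idx
        pool.remove(idx)

def effective_weights(reroll_chances, trials=200_000, base=1000):
    """Estimate each item's drop frequency, scaled so a 0%-reroll item ~ base."""
    counts = [0] * len(reroll_chances)
    for _ in range(trials):
        counts[simulate_drop(reroll_chances)] += 1
    reference = counts[reroll_chances.index(0.0)]
    return [round(c / reference * base) for c in counts]

# one common (no reroll) next to one item with a 20% reroll chance
print(effective_weights([0.0, 0.2]))   # roughly [1000, 667] -- same ballpark as the ~650 above
```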

Is this what is supposed to happen?
Does this seem ‘right’ to you?

Because that’s what the system actually does.

Until you can’t. And you also have thousands of people playing this game.
Reminds me of my Software Engineering professor back when Java was introduced and some deemed it inadequate due to performance: “Just wait for the next hardware generation”.

If you add a common item, it doesn’t affect the rest of the probabilities other than increasing the item count.
Which means that a red ring would be something like 1/20 (the new pool size) × 0.02 + 1/19 × 0.8 (next item in the chain) + … etc.
But if you add a rare, you not only increase that same count but also add another item to the chain, including for the common ones.

So adding a common item has less impact on the whole probability table than a rare one does.

This is exactly what I said in the previous post and you said it was the opposite.

It’s still negligible with the current hardware. Applying filter rules will have a much higher impact than rerolling a few times.

Quite the opposite! You get an inverse correlation with the drop-rate of every other item, by design.

That’s even happening in a weight-table. That’s normal.

If you drop 1 item but have 20 possible ones instead of 19 that means all existing 19 items will be seen less overall by a fixed percentage.
That’s basic math there.

So as you go down the chain, and since the decision is already weighted towards the common ones (they don’t re-roll), you automatically get an even higher chance of a common one in the second step… an even higher one in the third… an even higher one in the fourth, and so on.
The same thing happens when adding rare items; it does change the ratio between getting ‘a’ rare item versus ‘a’ common item, but the higher an item’s rarity, the more unlikely it becomes overall the more rolls there are.

This is why often multiple layered drop-tables are used.
For example… we could have one ‘big single’ drop table with everything inside, but as you said, that causes problems with how the drop percentages shift when you add more.
So instead, what’s sometimes done is to create separate drop tables for any given area of the game, with their weights adjusted accordingly.
As an example (1000 Base) we get a 1000 common drop, 700 uncommon drop, 400 rare drop, 100 very rare drop and 10 extremely rare drop. Then inside those it will be chosen which item actually drops.
That’s a static enforced distribution. It offers control for balance.
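Just to make the idea tangible, here's a minimal sketch of such a two-stage pick, using the illustrative weights from the post above and placeholder item names (none of this is LE's actual data):

```python
import random

# stage 1: fixed weights per rarity bucket (the 1000/700/400/100/10 example above)
RARITY_WEIGHTS = {"common": 1000, "uncommon": 700, "rare": 400,
                  "very rare": 100, "extremely rare": 10}

# stage 2: item pools per bucket -- names are purely illustrative
ITEM_POOLS = {
    "common":         ["Hide Gloves", "Rusty Ring"],
    "uncommon":       ["Silver Amulet"],
    "rare":           ["Red Ring"],
    "very rare":      ["Eternal Gauntlets"],
    "extremely rare": ["Ultimate Amulet of Uber-God-Slaying"],
}

def roll_drop():
    # pick the rarity bucket by weight, then pick uniformly inside that bucket
    rarity = random.choices(list(RARITY_WEIGHTS), weights=RARITY_WEIGHTS.values())[0]
    return rarity, random.choice(ITEM_POOLS[rarity])

print(roll_drop())
```

Adding an item to a bucket never changes how often the bucket itself is hit, only how the bucket's share is split internally.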

Adding a common item adds another position with an end-state. That has an outsized impact on the Markov chain: every end-state a Markov chain has relates inversely to the chance of that chain continuing, exponentially so.
So for each common unique you add, you make all the others vastly less likely to show up. The higher up into the chance-based area you go (i.e. the higher the re-roll chance), the more likely the chain is to continue, which creates a new state, and that new state has an exponentially higher chance to end up aligning with an end-state.

The filter is supposed to cut out part of the rendering though. Not to actively ‘remove’ those items, but to make them invisible to the renderer. They exist, something else can’t take up the space, but they’re simply not displayed.

This means the more you filter out the less computation should happen… up to a maximum logical degree.

Loot filters improve performance, not decrease it.

Let’s just make a simple example:
3 items, one without reroll chance, another with 50% reroll chance and another with 90% reroll chance.
That means that #3 has:
1/3 × 0.1 + (1/3 × 0.5 × 1/2 × 0.1) ≈ 0.042 or 4.2%, which is the chance it will be picked and not reroll, plus the chance that #2 will be picked, reroll, and #3 will then be picked on the second roll and not reroll.

If you add a new common item, the formula simply becomes:
1/4 × 0.1 + (1/4 × 0.5 × 1/3 × 0.1) ≈ 0.029 or 2.9%.

But if you add a rare, let’s say with 20% reroll, you now have:
1/4 × 0.1 + (1/4 × 0.5 × 1/3 × 0.1) + (1/4 × 0.2 × 1/3 × 0.1) + (1/4 × 0.5 × 1/3 × 0.2 × 1/2 × 0.1) ≈ 0.031 or 3.1%.

So adding a rare item leaves the existing rare’s chances in better shape than adding a common one does.
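For anyone who wants to check these numbers themselves, here's a small sketch that exactly enumerates every possible chain under the mechanic described above (uniform pick, a reroll removes the item and picks again from what's left). It's my own illustration, not the game's code. On the two four-item examples it lands on ≈2.9% and ≈3.2% — the second is a touch above the 3.1% shorthand because it also counts the chain where the new 20% item is picked first, rerolls into #2, and only then lands on #3.

```python
from fractions import Fraction

def drop_probability(reroll_chances, target):
    """Exact probability that `target` is the final drop: uniform pick, a reroll
    removes the picked item and the pick repeats on the remainder.
    (If the last remaining item rerolls, nothing drops in this model; that case
    never occurs in the examples below.)"""
    def walk(pool):
        p = Fraction(0)
        pick = Fraction(1, len(pool))
        for idx in pool:
            keep = 1 - Fraction(reroll_chances[idx])
            if idx == target:
                p += pick * keep
            rest = [i for i in pool if i != idx]
            if target in rest:
                p += pick * Fraction(reroll_chances[idx]) * walk(rest)
        return p
    return walk(list(range(len(reroll_chances))))

# item #3 (90% reroll) after adding a common vs. after adding a 20%-reroll item
print(float(drop_probability([0.0, 0.0, 0.5, 0.9], target=3)))  # ~0.029
print(float(drop_probability([0.0, 0.2, 0.5, 0.9], target=3)))  # ~0.032
```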

No. What it means is that for every single item that drops, it has to go through every single rule in your filter and evaluate it. So for the vast majority of items it will loop them all until they finally hide it.
It’s why filters are limited to 75 rules. Because too many impact performance too much.

Yeah, screwed up my explanation along the way.

But doesn’t that also directly showcase the issue?

A ‘darned if you do, darned if you don’t’ situation?

You can’t add a common item to the equation without utterly screwing over the drop-rate for every single item that is more rare than it, and by quite a lot, as your example has shown!
4.2% down to 2.9% is a big difference, though one which gets smaller the longer the initial list of choices becomes… at the cost of reducing the overall probability anyway.

Which is what we have with the ‘added rare’ example. Formerly, to get the ‘very rare’ item we had a ~4.2% chance… but with two… we have a ~3.1% chance. So two of those items relate to roughly a 6.2% outcome, while each individual rare drops less. The ‘uncommon’ one has a higher chance though, and so does the ‘common’ one.

But… don’t we want to have the same chance that ‘a’ common item drops versus ‘a’ rare item at least?

This is exactly what I, as a player, don’t want to see, outside of the code. ‘Yes, we’re adding more content, so you’ll see less of the ‘good’ content overall because of it’ seems kind of backwards to me.

Depends, there’s a tipping point. We don’t know how much this decision maker impacts the game compared to the rendering. It can go both ways.

It’s a small sample size, so of course the fluctuations are greater. But the same thing happens to a weighted table. If you simply add a new item, you completely change every single other chance. It’s impossible not to. Even if you pick a simple 2 items with 50/50 chance, adding another forcibly has to change them.

The point is that the current system changes odds differently if you add a common or a unique.

How can you? Even with a weighted table, odds have to shift. If you had 2 commons and 1 rare and add another common, it’s obvious the odds of getting the rare drop. This is true in both systems.
But if you add a rare, the odds of getting one of them increases. Also true in both systems.

In my example, we added an uncommon item (20% reroll isn’t rare). So obviously chances of getting a rare dropped.

You seem to have a misconception about the filter. The item is always rendered. It’s always visible. It can have a label and be interactable or not. That’s all the filter does.
Just pick an item, drop it on the ground and make a rule to hide it. You’ll still see the item. And if it’s a unique or exalted, you’ll still see the rarity glow. And if it’s a unique, it still shows on the minimap.


Yes, but they shift to the direction you ‘add into’.

If you mentally split your segments into ‘common’, ‘uncommon’ and so on… each has a specific drop range, right?
So if you add an ‘uncommon’ item, all uncommon items taken together get a wider range in the table and hence drop more often, while common and rare items drop less. That’s the opposite of the current state, where the inverse correlation to added items makes rare items drop less when you give them more space.

Which was probably what was to be avoided, which got us this… system there.
Which is understandable.

But… that’s where the point comes in that we can split the rarities into fully separate tables, supported by a main table which simply keeps the quantity of each rarity static relative to the others.

Because once more… it makes no sense code-wise, and given the outcomes it also makes no sense experience-wise.
If you want more or less of a specific rarity to drop because its table gets especially large, you have a single value to change; everything else follows automatically. You have one place to reach into, adjust swiftly, and solve any issues guaranteed to come up over time. That’s proper planning beforehand, ease of use.
With the current system nobody profits in any way. The coders have to wrangle with the drop-rate numbers individually until the respective chances come out somewhat right. Any new inclusion forces them to re-do it, probably in a multi-step process changing several values… assuming they KNOW the outcome because they have a testing environment set up for that specific piece of code.

I would argue that:

  • Neither do we want very rare items to become even less rare over time because of new inclusions of uniques.
  • The coders should have an easy access point to adjust values with a direct outcome (I see too few common uniques, so pump that number up a bit and they even out again properly)

And that’s the current state.

Imagine it with the future in mind. LE will be expanded after all, that means more entries in the core pool.
Every time the core pool gains more entries, we’ll need a related increase in item rarity or item quantity, which is normal.
But given that more rarity without more quantity simply means more unique drops overall, it also means we’ll see more… and more… and more garbage before we can realistically see a rare one drop.

It leads to trash galore, you get tons and tons of low quality uniques nobody needs before anyone sees the rare ones.
And since rare ones directly relate to the power curve it means rarity needs to rise disproportionally, which would cause even more trash to drop to get the same amount of rares as before.

That’s… bonkers plainly spoken.

You turn a static correlation of ‘more unique items in the pool means a directly proportional increase in rarity’ into ‘more uniques in the pool means an exponentially correlated increase in rarity’, which, while needed for exalted items, is not how you should get anyone to try and dismantle stuff.

It goes against the KISS principle, and the KISS principle is the basics of the basics, the thing every single person creating something has to follow if they want any direct correlation with success to begin with.
It’s a setup for making mistakes and not getting them solved in minutes but in weeks, the bigger the system gets.

This still does the same thing. The only difference is in the degrees which it adjusts the rest of the odds for existing items. In fact, if we had added another unique with 99% reroll instead to the above formula, the chance for #3 would be a little higher than with the 20% reroll.

There really isn’t much difference between how both systems work, just in how much they adjust to new entries and how they keep their relative rarity between themselves. And those differences are also pretty small between themselves for a large enough sample.

I don’t think any game uses a system like that. Nor do I think you gain much from it. In fact, it can even be worse.
Let’s say you want 60% drops to be common items. You add 10 items to it. Then you want uncommon ones to drop 30% of the time and you add 5 items to it. And you want uniques to drop 10% of the time and add 2 items to it.
This means that the rares have about a 5% chance each to drop and the commons about a 6% chance each. And if you add a few more commons, a specific rare becomes more likely to drop than a specific common.
Yes, you’ll normalize the chances of getting “a” common, but for some common items the chance to drop will be lower than for rares.
It would be a nightmare to juggle all that.
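The juggling concern is easy to see with a quick back-of-the-envelope check of those numbers (my own arithmetic, using the bucket shares from the example above):

```python
# chance of one specific item = bucket share / item count (example figures from above)
buckets = {"common": (0.60, 10), "uncommon": (0.30, 5), "rare": (0.10, 2)}
for name, (share, count) in buckets.items():
    print(name, round(share / count, 3))        # common 0.06, uncommon 0.06, rare 0.05

# after a few extra commons, a specific common dips below a specific rare's 5%
for extra in (1, 2, 3):
    print(10 + extra, "commons ->", round(0.60 / (10 + extra), 4))   # 0.0545, 0.05, 0.0462
```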

This will only happen if the uniques being added are rarer, meaning, having a higher reroll chance. Just like adding a new entry to the weight list.

They do. There’s an item database, it has the reroll chance for each item, just like it would have the weight on a weighted system. You can change the whole table in one system as easily as on the other.

Not really? Not unless they add a lot more commons without adding rares. If they maintain their ratio, the overall ratio stays similar as well. Just like a normal weighted table.
The only thing having a lot more items does is lower everything a lot, including commons.

If you have 100 items in your pool, you’ll reach the point where a specific common only has about 2% chance to drop. But that is because of the large pool and would have happened with a weight system as well.

This doesn’t happen though. The distribution still works fairly similar to a weighted table. It just varies slightly in how it adjusts the relative distribution between the previous pool when a new one gets added, making them maintain that relativity very slightly tighter than a weighted table.

You really went from a “this system does nothing different so it has no reason to exist” to “this system is so different it has no reason to exist”, huh?

Yes, but it’s not done in an intuitive way; you have to actively think about it.
If you ‘add’ something to something, it gets ‘more’, right? That’s the logical first step… but here, the more we add, the less it becomes.
Mind you, only for the ‘section’, i.e. items of a similar range, and only those with a high re-roll chance… which makes it feel fairly ‘odd’.

Umh… it’s fairly common?
You have a decision for rarity, then type, then pool of that type for example. Staggered decisions are fairly basic. Not faster… but easy to implement and easy to understand.

No? That’s… not how this works?

Table one provides the weight for rarity.
Table two provides the table for the options inside that rarity.
It’s solely picking the respective drop-table.

Let’s say we’ll get 3 items.
It rolls for rarity, 2 roll on common 1 on rare.
Now it rolls 2 times in common and picks 1 of the pool each.
The third roll is in rare.

You’ll see fewer rare items than common items. It’s not fixed how often you’ll see any specific rare or common item though, just that they’ll be of that rarity type.

It doesn’t matter if individual items have less or more chance to drop, it mostly matters that first of all you got your rarity handled, so rare stuff drops exactly as often as it should drop versus common stuff dropping as often as it should.

Modifying this loot table according to the content currently being run is what changes which things you’ll see. More rarity, for example, can put a 0.8 modifier on common and 0.9 on uncommon, so you see fewer common items and relatively more uncommon and rare ones. There are several methods, and it’s a whole new ballpark to play in should we get into that.

Why? We have no worries about rarity in LE currently outside of uniques and exalted items. More is better, you don’t care about how the drop-rates are adjusted to each other, only if your individual item will drop.

For that, a system with a weight distribution is actually very good. We can decide that out of 10000 uniques a single one of that specific type should drop, so we have a range of 10000… and this one is ‘1’. Rarest item. Everything else we adjust accordingly. If the range is too small we multiply by 10 and adjust our values until everything fits inside.
It’s a do-it-once job in that case, even without rarity categories; you’re adjusting individual drops directly.

In a rarity based system you have to simply adjust the rarity weight accordingly. Too few rare items? Reduce range of uncommon and common ones a little and you’re done.

If we want the general drop-rate to change, that’s the total number of rolls, hence quantity. But we don’t have quantity, we solely have rarity in LE currently. The only quantity changes are tied to common/uncommon/rare and unique mobs respectively.

If you add a similar one, then the other similar rare ones drop less overall, as we’ve seen.
It also skews the chance for everything in the other ranges to drop; everything’s chance relative to everything else changes.
If you have a white sword, a rare amulet and a unique belt, and you say I need 10 swords to drop an amulet or 100 swords to drop a belt, then you have a basis.
In this system, if you add a unique ring, suddenly you need 12 swords to drop an amulet, 160 for a belt and 160 for the ring.
That’s the problem.
What’s the ‘anchor value’? What is it related to?

So no, that doesn’t hold true.
That goes on until the Markov chain normalizes in the limit, where the outcomes become basically static.

You do have a staggered decision, but no ARPG I know of separates their unique drops into tiers and rolls for their tiers.
You roll for item type, then you roll for rarity, then (assuming unique), you roll for the pool.
What you suggested is rolling item type, then rarity, then (assuming unique) roll for tier, then roll for pool (for example of all commons).

I think we are now talking about different things. I was only referring to common uniques vs rare uniques, not common drops vs rare drops. That has nothing to do with what we were discussing nor do I know why you brought it up.

I’ve made this so you can see exactly how adding each type of item with one system or the other will change things, so you can see that they are very similar and what you’re saying isn’t true.
Maybe with the actual values you can see this.

PoE does.
Uniques are set up into Tier groups like currency is.
A prime example of a relevant game.

Applies the same if the uniques are handled in rarity classes. The same system applies over the whole drop mechanic in such a case since it needs a layered system. Decision 1 → decision 2 → decision 3.
In such an example it’s all weighted. A low upkeep system overall since rarity doesn’t affect the whole slew of items but only specific sub-segments of the decision chain.

Thanks a lot, mine was borked and I didn’t have the time to fix it yet.

So, with this we can see the actual values and it comes down to 2 situations which we have to look out for.
Spread of rarity inside the category.
Items available in the rarity category itself.

So the baseline of 72% common, 24% uncommon and 4% rare changes.
Adding a common changes it to 83% common, 14% uncommon and 2,4% rare.

Which, as you can see… is baaaad. A single new common unique screws over the player majorly, which shouldn’t happen, and it hinders adding ‘general variety’.

The others aren’t good either.
Adding an uncommon makes rares less likely, but it brings us nearly as many uncommons as commons: 58% commons and 39% uncommons. That’s… nearly the same, doesn’t really make them ‘uncommon’ anymore, does it?
With rares it’s also not good, since they move closer to the uncommons: instead of a 24/4 split we suddenly have 23/7.9.

This hinders the ability of the devs to add a range of items in a specific rarity category without literally ruining the whole drop system for those types of items.
You can’t just add 10 ‘extremely rare’ uniques willy-nilly to provide a variety of strong end-game items. They’ll suddenly be prevalent, which is to be avoided!
You also can’t add a baseline range of items with it, imagine them putting in 20 new common uniques to bolster the build range at the lower levels, simply to introduce more possible variances for how you build a character. You suddenly don’t see rare uniques anymore.

That’s not good in any case. The foremost thing to uphold is that item drops respect their rarity; that is a more important constant than making a ‘specific’ item drop. After all, the game has a market now, and that’s an intrinsic aspect that has to be taken into consideration, since otherwise you flood it with rare items and make everything below them ‘worthless’. That’s a huuuuuge issue for a loot-based game with an economy.

In a static one you have another issue instead, which is ‘dropping a specific item’, which you’re absolutely right about!
The status though is that this affects only CoF and needs specific counter-measures related to target-farming, it’s contained within their space solely. Which is a smaller subset of the total and hence easier to adjust, for that we have the faction system after all, don’t we?

You’re absolutely right that it has ups and downs in both cases after all. The downside of the current system is that you can’t freely implement new items, as it messes with the player economy as well as the perception of what is ‘rare’ or ‘not rare’. Compared to that, upholding the baseline and addressing target-farming (and hence CoF) with a secondary measure is just the vastly easier method.
Before 1.0 it had less ‘weight’ attached to the decision, but that’s changed.

Also, any sort of balancing becomes an automatic nightmare if you don’t have a so-called ‘anchor value’ to build around. In the current system this anchor doesn’t exist. Without an anchor you have to constantly re-evaluate the situation and adjust a multitude of aspects before getting it back to ‘right’; with a static system you only have to adjust the one respective piece to make sure it keeps functioning in the grand whole.

I added a category for rarity tiers. It seems worse to me. If you add a common, suddenly each common’s chance drops hugely. An item which had a 72% chance to drop now has a 36% chance because you added a single new common. And if you added a lot more commons than uniques, you would have to change the rarity values, otherwise you’d end up in the situation where a single common is rarer to drop than a single rare.

I added an extreme example. As you can see, both reroll and weight table adjust the relative rarity properly no matter what you add, whereas the rarity tier one doesn’t and needs to have constant changes made to the base rarity list.

Yes, absolutely!
You’re 100% right there. Which is why follow-up balancing needs to be done anyway.

If your scales are skewed in a specific direction and you don’t want that to happen you need to do a few things. In your example the following (once again for both systems):

In the ‘static table’ system:

  • You see you have a category which is underrepresented, for example… common uniques!
    - You raise the weight for that table, which however reduces the chance for all the others to drop, which is the follow-up issue.
    - So you adjust the general rarity-rate, so that overall more items of that category drop.

This way you solely increase how often common items drop: you see the same amount of rare and uncommon items as before, while each individual common drops at the same rate as the previous commons did.
This is done through the global unique drop-rate and it’s a really… piss-easy way to balance, very low effort needed. You have a single multiplier, based on the number of items, which affects the drop-rate of this category in total compared to the others, and that way everything always stays ‘the same’ relative to everything else. You can even automate that in code.

The current state:

  • You realize you got too many common uniques, which causes each to drop significantly less.
  • You can’t raise those up, you need to make every… single… item above it a different re-roll chance.
  • Then you once again adjust the global category to drop more items overall to balance it out and have every item drop roughly the same amount as before.

This is ridiculously hard to automate; you’d need a very, very in-depth algorithm checking all values against every type of newly added item.

This… is a nightmare for balance.
Can be done but will likely be manually done by someone adjusting each value in the code.
Also it’ll need to be done either before implementation or it’ll be passed over and cause issues.

Which is… once again… not a good state.

All of these points and issues can be handled with a weighted table in an easier manner, keeping the balance and maintaining functional integrity long-term.

Edit:

I messed the second up since it’s hard to wrap my head around.

I meant that too many common uniques now cause too many of them to drop compared to the others (since more items in the ‘category’ cause that category to drop more), and then by step 2 you nonetheless need to follow the same line. It’s a mess.

I don’t really follow your reasoning.

Both weighted table and reroll table behave similarly, maintaining relative rarity between items effortlessly. If you add 20 commons, every common will still have a higher chance to drop than a rare one. And yes, rares will drop less often, but that’s to be expected. If you have 1 rare in 10 you can’t expect it to drop the same amount of time as if you have a rare in 20.
However, the rarity table needs constant adjustments to the base tier table. When you add a common item you massively decrease their chance to drop without regard for the other items. So you end up in a situation where a specific common is more rare than a specific rare, which doesn’t happen with the other 2 systems.
This means that you preserve the chance of “a” common or rare dropping, but the chance of “the” specific common drops massively.

Personally, I think the other 2 systems are much easier to handle and maintain for scaling with less changes overall.
Most of the time you just add stuff without needing to change anything else and in those cases where you want to adjust the overall situation (which aren’t often) then you tweak each weight/reroll chance.
Whereas the rarity approach needs the rarity table weights constantly tweaked, or you’re forced to add items in the same ratio (x commons, y uncommons and z rares each time you add items), otherwise your ratios go all screwy.

Also in terms of efficiency, they’re not as efficient as a single weight table (though still more efficient than rerolls), although we already concluded that either method is negligible.

Honestly, seems like rarity tables are the most messy to maintain in the long term and with larger pools.

Also, I can’t find any source that says PoE uses a rarity tier for uniques. Everything I find is that it determines the rarity of the item, when it rolls unique it has a weight table that determines which item drops. They don’t even separate it by bases.

Not even remotely. I’ll give you an example.
Imagine the following:

I’ve decided that in my game I want an uncommon item to drop for roughly every 20 common items of that type, and a rare one for every 100 of the common ones.
So a 100:5:1 weight table.

I only have 1 of each type for the start, to see if it feels ‘fine’, but now I wanna populate the separate parts properly.

I get 50 types of commons, 7 types of uncommons and 2 types of rares. Now obviously the old ratio no longer holds: I won’t see each individual common drop 100 times for every 5 drops of each uncommon and every single drop of each rare anymore.
That’s a problem!

So now I implement a multiplier based on the elements in each category.
My table now has the following spread:
5000(50 commons):35(7 uncommons):2(2 rares)

They stay equivalent to each other at any time this way. I get as many commons as before, as many uncommons as before and as many rares as before, all in perfect equilibrium to each other, at any given time.

Now obviously you need to multiply the overall drop-rate of this ‘item segment’ accordingly.
This is a multiplier for how often this table gets chosen at all.
In this case we simply take the total weight divided by 100, so that a ‘common’ item keeps the value of ‘1’, which is our base value.
So that weight table above gets a value of 1.06 beforehand (1 item in each category), and it turns into a value of 50.37 once we’ve populated it.

Guaranteed same drop-rate for each singular element in the whole list, forever.
The player will see every type at the exact same rate, with a median chance of 50%.
So I can say ‘Yeah, you get the ‘ultimate amulet of uber-god-slaying’ every 250 hours of play-time, maybe a bit more, maybe a bit less.’
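A small sketch of that bookkeeping, purely as an illustration of the scheme described above (same numbers as in the post, nothing official):

```python
def category_table(base_weights, counts):
    """Category weight = per-item base weight x item count in that category,
    plus the quantity multiplier (total weight / 100, so one common = 1)."""
    weights = {k: base_weights[k] * counts[k] for k in base_weights}
    return weights, sum(weights.values()) / 100

base = {"common": 100, "uncommon": 5, "rare": 1}
before = {"common": 1, "uncommon": 1, "rare": 1}
after  = {"common": 50, "uncommon": 7, "rare": 2}

for counts in (before, after):
    weights, multiplier = category_table(base, counts)
    total = sum(weights.values())
    # per-item rate = table multiplier x category share / items in the category
    per_item = {k: round(multiplier * weights[k] / total / counts[k], 4) for k in weights}
    print(weights, multiplier, per_item)
# both runs give per-item rates of 1.0 / 0.05 / 0.01 -- the 100:5:1 anchor holds
```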

In the current system:

How do you achieve that?
You have a very complex item decision system up-front which then provides a variable outcome where every addition of an item does change how often it drops in comparison to every other item.
Today I get my ‘ultimate amulet of uber-god-slaying’ every 250 hours… but since you added 5 common items now I get it every 320 hours! Why?

Explain to me how you would balance this.
What’s the anchor?
How do you decide on the re-roll range?
How do you ensure you don’t get too many drops of any one rarity?
How do you ensure you get the overall proper amount of items to drop?

The answer is: Another extremely complex algorithm which dismantles all that backwards and starts us off at the beginning… because otherwise you can’t.

If I add a common, my spread is out of whack.
If I add an uncommon, my spread is also out of whack.
If I add a rare, my spread is also out of whack.
If I add any item, I no longer even know how often this category should drop in the first place, because I don’t have a direct correlation to ‘items needed to drop to provide an even spread’ anymore.

I have no anchor.

No, they’re piss-easy and have been established for decades.
They’ve had a reason to stay relevant for such a long time despite alternatives being presented.

Easy to maintain.
Easy to scale.
And there’s also the option to reduce the O(N) complexity to O(1) through a stochastic decision matrix, which would be the only upside of the system EHG has.

https://www.poewiki.net/wiki/Guide:Analysis_of_unique_item_tiers

That is what I said. Every time you add an item, you have to update the table values.
And it actually doesn’t really work the way you suppose it does. If you add individual chances (so you get a sum of 1 total), you’ll see that with those numbers you’ll get an overall chance of 99% common drops. I added those calculations at the bottom.

Because you have more common items that drop. That seems obvious to me. You can’t add more items and not add more time to farm some or all of them. Because if you maintain the 250h for the uber item, then your common “trashy amulet of trash” that required 2h to get now requires 4h. And how do you explain that all common items now take twice as long to get while your other items all still take the same time?
You’re not maintaining any ratio other than an abstract one. For all practical purposes, adding a new item will increase farming time for other previous items you had. You’re just saying that you will increase common item farming (thus making them rarer to get) rather than all of them equivalently.

Adding new items should increase the required time to get any specific one, for all of them. If you want to maintain common/uncommon/rare ratios as a group, just add in a similar ratio.

As far as I’m aware, only PoE does that. All other ARPGs I know simply use a weight loot table. So do most RPGs that don’t have static loot. Even games like Borderlands use a simple weight loot table. So not really established since decades.

That’s done automatically, why would YOU do it?
It’s simply a multiplier which is the length of the array inside.

Yes :slight_smile: But with the multiplier applied beforehand, which decides whether that table gets chosen in the first place, I get exactly the same amount of each individual item as before.

That’s why I added that. Sure, that individual list seen by itself makes it look like I’ll get 99% common item. So?
I still get 1 uncommon of each type for every 20 common of each type. I still get 1 rare of each type for every 100 common of each type.
I also still get the same amount of each type in the exact same timeframe.

For me… as a player… nothing has changed to acquire the one specific item I want.
What has changed is the quantity of items dropped in total.

Yes, don’t you understand the issue behind that?
If you add more common items in that case you get less rare ones.
If you add more rare ones you get less common ones.

You never want a situation where you can theoretically get more uncommon items than common ones, period.

By the category multiplier which I’ve provided.

Nope, that’s the overall quantity multiplier which is mandatory to have in such a game.

If you add a bigger variety of items, you increase the base quantity of drops to still achieve an overall spread.
Obviously that depends on the type of implementation and which specific drop-tables are chosen, which should all be separate stochastic decision matrices chosen at the right place: one outcome for one place, including area level and content type.

That’s not true, never was.

Imagine we add a new top-tier base for an item. The one everyone would want to have unless they have a specific reason not to.

A good example are ‘Gluttonous Gloves’ right now.
Imagine we don’t have them, we ‘only’ have ‘Eternal Gauntlets’.
Now you decide the new top-tier item are those gloves, you’ve implemented them freshly. You want them to be acquired at the same rate Eternals have been before, the game progressed after all, a new base, we need a new ceiling.
How do you achieve that in the current system without fucking over everything else?

In the basic table example you now have ‘Gluttonous Gloves’ as the new anchor, the rarest existing item, and we want the Eternals to drop 1.5 times as much as them.
To keep whole numbers, the gloves get a value of ‘2’. Hence the former ‘1’ value of the Eternals becomes a ‘3’ to stay in relation to it, i.e. 1.5 times as much.
All other values are multiplied accordingly. The 250 value of the cheapest base type now becomes… 750.
All related to each other 100%
This then causes the multiplier for that table to rise accordingly. Which means we’ll overall see more items to ensure that every base drops exactly as often as before.
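Sketching that rebasing step (the item names are from the post, the numbers just follow the example; this is an illustration of the described bookkeeping, not EHG data):

```python
from fractions import Fraction

def insert_new_anchor(weights, new_item, old_anchor_ratio):
    """Add a new rarest item and rescale the old weights so every old ratio
    survives as whole numbers. old_anchor_ratio = how often the old rarest
    should drop per drop of the new one (1.5 here -> new anchor 2, old anchor 3)."""
    r = Fraction(old_anchor_ratio).limit_denominator()   # 1.5 == 3/2
    rescaled = {name: w * r.numerator for name, w in weights.items()}   # everything x3
    rescaled[new_item] = r.denominator                                  # new anchor gets 2
    return rescaled

old_table = {"Eternal Gauntlets": 1, "Hide Gloves": 250}   # 1 = old anchor (rarest)
new_table = insert_new_anchor(old_table, "Gluttonous Gloves", 1.5)
print(new_table)   # {'Eternal Gauntlets': 3, 'Hide Gloves': 750, 'Gluttonous Gloves': 2}

# the table-level multiplier then rises with the total weight (251 -> 755), so every
# existing base keeps dropping at the same absolute rate, as described above
print(sum(old_table.values()), "->", sum(new_table.values()))
```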

Do you see what we’re doing? We went from the initial drop-table a step further up simply. We’re scaling here with the drop-table above.

Now you might say ‘But this also increases the drop-rate in lower content, you get more items there because the value is higher!’
No, also wrong.
If I have a place in the game which has the Eternals as the drop ceiling, I limit the choice to only the positions in the array up to the Eternals. This causes my multiplier to adjust accordingly.
Since my multiplier adjusts, that is carried straight over to the lower hierarchy and adjusts it there as well, enforcing the same spread as before.
And that moves it to the next hierarchy, once again adjusting values to each other compared to before.

With a rarity of ‘0’ I’ll hence get ‘Hide Gloves’ the same amount of time in level 1 areas as I’ll get them in level 100 areas.
The difference is that at the start of the game I drop less items and the amount of items ramps up the further I progress.

To adjust the progression you only need a single multiplier acting on that scaling, e.g. ‘every 10 levels we should see 20% more items than before’, so something like 1.2^n as a function, with n being the number of 10-level steps.
Picking higher rarity works through the same kind of function, not adjusting the number of items but the values relative to each other: 110% rarity gives common items a 10% lower weight, which shifts that 10% onto the other categories combined.
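Expressed as a tiny sketch (my reading of the two knobs described above, not known LE formulas):

```python
def quantity_multiplier(area_level, step=10, bump=0.20):
    """'20% more items every 10 levels', read as compounding per 10-level step."""
    return (1 + bump) ** (area_level // step)

def apply_rarity(weights, rarity_bonus=0.10):
    """110% rarity trims the common weight by 10%; that share shifts onto the
    other categories combined."""
    adjusted = dict(weights)
    adjusted["common"] *= (1 - rarity_bonus)
    return adjusted

print(quantity_multiplier(100))                            # ~6.19x the level-0 drop count
print(apply_rarity({"common": 1000, "uncommon": 700, "rare": 400}))
```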

Tiers are weight-based loot tables.
They’re a category of it.
A weight-table inside a weight-table. They’re nothing else.

Edit:

Page 12 starts with the math behind it.

The only way that happens is if you have so many more rares that their combined weight is larger than the common ones. Which is fine. If you have 2 common uniques and 50 rare uniques, you should see rare uniques drop more often than common ones. Because that is not a flaw in the system, it’s a flaw in your design of introducing only rares.
Rares should never outnumber uncommons which should never outnumber commons. As long as you do that, which should be a basic thing, you never get more uncommons than commons.

An overall quantity multiplier will affect all equally.
So if you had 2h for each common (50) and 250 for your uber amulet and you add a common, you either increase the overall multiplier so that your commons still only need 2h, which makes the uber require less than 250 hours (since nothing changed in its side of the table), or you keep it and your commons now require 3h.

You can’t keep farming times the same by adding new items outside your ratio. The only way is to keep the same ratio (add 1 rare, 5 uncommons, 100 commons). Otherwise some items forcibly have to change, either for more or less time required.

That’s what I mean. I don’t know any other game other than PoE that uses that system. Every other loot based game I know of uses a simple loot table for uniques and doesn’t use unique tiers. Rarer uniques simply have less weight.
So it’s not a system that’s been used for decades since only PoE uses it, that I’m aware of.

Yes, 100%!

Which is why I’m repeatedly saying anchor like a madman.

You want one value that everything hinges on; it brings its own upsides and downsides, but it gives you a fixed reference point to balance from.

Which is missing in the current system once again.

You can use the example above (which is better suited for base item drops), or you could instead enforce a rarity-based drop table, starting ‘extremely rare’ at 1 and multiplying upwards for the related weights before choosing an item inside the category.

It’s up to you whether you prefer to categorize into full sub-segments or to put everything into a fixed relation to each other. A mixture of both, though, is fairly nonsensical, since then you get neither a base value for how often items should drop overall nor for how drops split between categories.
Which is the current state.

For uniques it can happen that you add a whole slew of ‘extremely rare’ ones, which then causes each individual one to nigh never drop… but you do get ‘an’ extremely rare item.
Or you fix them in relation to each other so that each one drops at the same frequency; then you can theoretically end up with more uncommons than commons in practice, but you still get your specific item as an end result within the same timeframe.

You can’t combine them; if you do, you get what we have, which is ‘neither do you get “an” item of the rarity, nor do you have a fixed timeframe’.
So, what should the player strive for? Getting ‘one of those items’ for the market, or getting ‘the specific item’ personally?

Our current answer is ‘neither’. It feels willy nilly and hence makes no sense to present it to the player.

Yes, it does :slight_smile:
If I search for a specific base in the current case I could take 1 hour or 10 hours for it, I never know! Which is bad.
If I anchor the drop-rate to time investment directly I have that though.

I can say ‘I get this item after putting xyz effort into it… likely’ and it’ll be so… likely.
Any new addition won’t stop me from acquiring ‘this specific item’ which I need for progression. I have a goal, I have something which I know I’ll achieve.

Because in the current state we have a major problem.
What if I implement 3 times as many uniques? No matter the spread for now… just… 3 times the uniques. I won’t find any unique I search anymore, not even the common ones. I’ll get everything I don’t need.

Which enforces a quantity increase for them.

So, to return to the spread directly… each addition changes my situation entirely, unless the same number of uniques of each rarity is added, keeping the amounts in direct relation to each other.

Imagine I’m searching for one specific core-pool unique after more uniques get implemented (and the quantity of drops gets adjusted accordingly, which I don’t think is done 1-to-1 anyway, but that’s another topic). This specific item which formerly took me 100 hours could now take 60 hours… or 150 hours… how should I know? Has something changed? How much has it changed? Should I strive for it or not? I can’t know, since EHG won’t tell us; they just say ‘yeah, 98% re-roll range’… why even give us this value? It gives us nothing to interpret!

Do we really need someone to make a Stochastic matrix to provide us with the information?
What do you think will happen if someone does it and provides the results on the Last Epoch database?
‘Oh yeah, 1.2 comes around, well, they added a few uniques, so now acquiring a red-ring will take 650 hours instead of 400, you’ll also see Wraithlord Arbor every 85 hours instead of 60, values based on 300 corruption’
How do you think that’ll go with the community or any player in general?

That’s what this system does.

You know that tiering is simply a parent table before making decisions in a child table?
You use it when you want to make your game more static and not have to worry about adding new content in any specific ‘tier’.

Path of Exile has it because they don’t want to worry about screwing their whole loot table up when adding their 50 new uniques per league, not having to fiddle with each specific value individually but instead using simple multipliers behind it.

Anything a tier system does you can also do in one big table; it’s just more effort.
You tier when the tables get too big.

When you get too many tables, you create a stochastic matrix to collapse the decision back into a single table to pick from, saving computation time. (Which would otherwise ramp up in a non-negligible way when you search through 50+ vectors per decision, especially as quantity rises; why waste that computation when it’s simple not to?)
That’s your result.
That’s how it’s commonly done.
That’s how basically every game does it.
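As a hedged sketch of that ‘collapse it back into one table’ step — pre-multiply the layers into a single flat weight list, then a single drop is one lookup over the cumulative weights (O(log n) with a binary search, or O(1) with an alias table). This is the general technique with placeholder item names, not EHG’s or PoE’s actual code:

```python
import bisect, itertools, random

def flatten(rarity_weights, item_pools):
    """Pre-multiply the two decision layers into one flat (item, weight) list."""
    flat = []
    for rarity, weight in rarity_weights.items():
        pool = item_pools[rarity]
        for item in pool:
            flat.append((item, weight / len(pool)))   # uniform split inside the bucket
    return flat

def make_picker(flat):
    items = [item for item, _ in flat]
    cum = list(itertools.accumulate(weight for _, weight in flat))
    def pick():
        return items[bisect.bisect_left(cum, random.uniform(0, cum[-1]))]
    return pick

picker = make_picker(flatten({"common": 1000, "rare": 10},
                             {"common": ["Hide Gloves", "Rusty Ring"],
                              "rare":   ["Red Ring"]}))
print(picker())
```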

Alternatives have been presented for decades, at times by darn math geniuses, and nonetheless… this one has stuck for some reason. Some are more efficient… but still it stuck.

Why?

Scalability.
Readability.
Easy balancing options as you can ‘reach in’ at several spots and adjust details you otherwise can’t. Rarity multiplier, base distribution to each other, needed time, quantity multiplier.
And that for each step along the way for fine-tuning if so needed.

What can you adjust with EHG’s system?
Yeah… you can fiddle around, write algorithms on top of algorithms to keep it well balanced, with so many fringe scenarios you’d theoretically need to cover. Before long you’ve got spaghetti code.

Once again, the KISS principle. Keep it simple, stupid.
Don’t know a single situation where the principle itself has failed.