Yes, and that should be properly parsed in the background to make it efficient.
Basically the same methodology any compiler uses, which is parsing the programming language down into another one… many going through C or C++ and those ultimately being compiled down to assembly. The UI in our case should only provide the functionality, and as soon as it’s finalized it should go through a round of parsing to turn it into reasonably efficient code at the base level for displaying.
This obviously comes with the downside of changes not being applied immediately but likely taking a few seconds… but it has the major upside of allowing vastly higher complexity with a large performance increase, since everything is ultimately unified into a single representation at the end which can be optimized properly.
This can easily be applied by removing redundancies: for example, if you set a specific base to not be displayed (which is a very bottom-layer aspect of displaying), then there is no need to check whether a specific affix is on it… that base is not relevant anymore.
If a specific Affix is to always be displayed then obviously we cannot outright remove the base for display purposes, so a few more checks are needed.
In the end, while the end-user (us) can adjust and position the rules basically willy-nilly, with proper parsing into an optimized unified system at the bottom end it doesn’t matter which rule is handled first or later. You can always enforce ‘this base is displayed’ and apply the upper-level aspects along with it, even if they’re split across 3 different areas of the loot-filter rules.
So basically it first shows/hides bases, then checks for Affixes, then the rolls inside those Affixes, and then applies the relevant display functions of size, color and so on. Obviously with rarity as well and whatever else can apply.
Which is sadly wrong in this case, as the way the filter system likely works currently (and checking the performance impact with and without a filter seems to align with this) is that the filter checks each respective rule individually, in order.
So rule 1 is checked… then rule 2… then rule 3… without much care for redundancies.
I imagine that with a proper base-layer implementation of a unified decision system it could, depending on filter complexity, even provide a 500+% performance increase for that small aspect of the game.
I’m pretty sure they already do that, otherwise we wouldn’t even have 75 rules. I’m sure they don’t use XML matching.
That is simply determining the order of operations. Any modern programming language will do this for you automatically.
If you run:
If (A && B && C && D)
As soon as one of them is false, it doesn’t even check the other ones. So all the devs need to do is determine which conditions are less costly, or more likely to resolve the rule early, and order them so the rule evaluates faster.
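For instance (a minimal Python sketch with made-up Item/Affix fields, not anything from LE’s actual code), ordering the cheap and selective conditions first means the expensive affix scans rarely even run:

from dataclasses import dataclass, field

@dataclass
class Affix:
    name: str
    tier: int

@dataclass
class Item:
    base_type: str
    rarity: str
    affixes: list[Affix] = field(default_factory=list)

def rule_matches(item: Item) -> bool:
    # `and` short-circuits: once one condition is False, the later
    # (more expensive) conditions are never evaluated at all.
    return (
        item.rarity == "exalted"                       # cheapest and most selective first
        and item.base_type == "belt"
        and any(a.name == "hybrid_health" for a in item.affixes)
        and any(a.tier >= 7 for a in item.affixes)     # costliest scan last
    )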
Other than the language already doing that on its own, this would still have to happen for every single rule, which is the issue.
Your use case would only work if someone were to place a rule on top saying “Hide two-handed weapons”, in which case, as soon as the filter loop reaches that rule it will reach a success and not read any of the further ones anyway.
But this is rarely the case and you always need to prepare for worst case scenarios of evaluating all of the rules and not getting a match.
It almost certainly does, and most likely has to. I don’t think you can find a single condition that is universal in 99.999% of existing loot filters that you can remove for “redundancy”.
Basically, there are only 2 options for a filter to work:
1- You create an object out of your item drop and then the filter tries to apply each rule to it sequentially, since you only need to match one (sketched at the end of this post).
2- You convert each rule into a pseudo object that has all possible states to validate it, then you try to match your item to them.
Both have positives and negatives. In one you’ll run a lot more checks but they are all much simpler/faster, in the other you run just one check but it’s a behemoth, not to mention that any change to the filter means redoing this pseudo-object, which can take time.
Both will have performance issues due to different reasons for each and at different points for each.
What the best way to achieve this is, only the devs will know. But I’m sure that they are looking into it.
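As a rough sketch of option 1 (Python, with made-up Rule and item shapes rather than whatever LE actually uses), the first match wins and everything after it is skipped:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    matches: Callable[[dict], bool]     # predicate over the dropped item
    show: bool
    color: Optional[str] = None

DEFAULT = Rule(matches=lambda item: True, show=True)  # no match -> default drop display

def evaluate(item: dict, rules: list[Rule]) -> Rule:
    # Option 1: walk the rules top to bottom; the first match decides
    # show/hide and styling, and every later rule is skipped.
    for rule in rules:
        if rule.matches(item):
            return rule
    return DEFAULT

rules = [
    Rule(lambda item: "health" in item["affixes"], show=True, color="green"),
    Rule(lambda item: item["type"] == "amulet", show=False),
]
print(evaluate({"type": "amulet", "affixes": ["health"]}, rules).color)  # -> green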
Yes, but only if you actually provide the baseline that allows it to happen. It’s not an omniscient thing that optimizes your code to the maximum possible state for you… otherwise optimization wouldn’t be such a big topic.
If you make a system which doesn’t have this applied manually by you, then while the relevant parts of the code will indeed run in a near-optimal state, it still won’t know which parts are relevant to execute, which means it’ll run them all despite the redundancies.
No, it isn’t. Hence why I said parsing it.
Basically, you as the user see your setup… but the program uses not what you see but what it has parsed and optimized. So when you change it, it simply needs to re-parse and optimize it again.
That this is not done is easy to check as well: if you adjust a filter on the fly, the items displayed on the ground change on the fly too, which means it doesn’t parse, as there is no ‘hitch’ or ‘delay’ of any kind. And yes, I take into account that it would be less than a second of delay… it’s instant, though.
And the point is that the ‘loop’ is currently not able to stop, as you could theoretically assign something which goes counter to the formerly applied loop conditions, so it still has to run them.
Which is different from a pre-parsed ruleset, which doesn’t use a repeated looping system but one that is applied only once, and only re-parsed when another update happens.
As an example:
If you have 5 rules. (loaded in this order)
1 states ‘hide all amulets’
1 states ‘show all +health affixes’
1 states ‘show all +health in green’
1 states ‘show this sub-type of amulet’
1 states ‘show this sub-type in red’
Then it would already cause a difference.
The current state does the following:
Check if item base is amulet.
Hide if amulet.
Check if item base has health.
Show if health.
Recolor to green if health.
Check if item is sub-type.
Show if sub-type.
Recolor to red if sub-type.
It goes one after the other ignoring the former state.
My suggestion is parsing it, which changes the outcome; for the sub-type case it becomes:
Check if amulet.
Check if sub-type.
Show if sub-type.
Color red.
That would skip the green one automatically: as it’s a lower layer, it can no longer apply.
If it’s a health but not sub-type one it does the following:
Check if amulet
Check if sub-type
Check if health.
Show if health.
Color green.
The number of conditions it has to go through is different.
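As a rough Python sketch of what that parsed version could look like (the sub-type name is just a stand-in), the five rules collapse into one nested decision:

from dataclasses import dataclass

@dataclass
class Item:
    base_type: str
    sub_type: str
    affixes: list[str]

# The five rules above folded into one nested decision after parsing.
# The sub-type override sits above the health branch, so the "green"
# check is skipped entirely whenever it can no longer apply.
def display(item: Item):
    if item.base_type != "amulet":
        return None                       # these rules only concern amulets
    if item.sub_type == "gold_amulet":    # stand-in name for "this sub-type"
        return ("show", "red")
    if "health" in item.affixes:
        return ("show", "green")
    return ("hide", None)

print(display(Item("amulet", "gold_amulet", [])))           # -> ('show', 'red')
print(display(Item("amulet", "other_amulet", ["health"])))  # -> ('show', 'green')
print(display(Item("amulet", "other_amulet", [])))          # -> ('hide', None)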
This is not something that is automatically optimized; it has to be coded manually.
Your follow-up example is also explained by this, including why a vast portion of the rules is even redundant.
Option 3 is the layering.
Currently the filter goes step by step as exceptions can apply, which doesn’t allow skips.
My methodology is to parse them into layers with distinct cut-off conditions, which allows skipping the follow-up conditions entirely.
It obviously also has negatives when a myriad of conditions apply to a single circumstance at once and the follow-up check hence has to be done repeatedly… but that is less likely to happen than being able to remove the inapplicable conditions which follow.
If the check for recoloring to green sits at position 10 and the enforced recoloring of a sub-type to red at position 70, then you would need to cycle through the 59 positions in between before it applies. This way you can remove them entirely. That means a reduction in efficiency for low-complexity filters, but an increase the more complex the filter becomes, as more needless positions can be skipped.
Your example is a very theoretical best case scenario. Even switching rule 1 with rule 2 already breaks it and now you have no “base rule” to start checking.
That’s not to mention that for most filters this is even worse.
For example, my filters always follow the same workflow:
-Show rule for potential Runed Visage crafts so I can use RoA for the unique
-Show 2 rules for each item type I use that is not unique, with a list of desirable affixes and one of the rules shows with 2 matches, the other highlights with 3 matches. These start out as just this, at some point I add rarity: exalted.
-Show rule for each item type of every unique I use with the main affix I want to slam (T7 only)
-Show rules for idols my build wants
-Show rules for idols with health (always on, always helpful)
-Show rules for double weaver idols
-Show rule for unique idols
-Hide rule to hide all idols
-Show rule for any T7 affix
-Hide rule for T7 affix with item type I no longer want to see (usually because I already have 4-5 tabs full of them)
-Show rule for 2xT7
-Show rule for special affixes I want to shatter (always hybrid health and I add whatever I need for current build)
-Hide everything
From a filter like this, there isn’t a single property you can single out as your first query to immediately eliminate anything. Not even several, since many filters have commonality, but many more don’t.
In fact, the more I think about it, you can’t even do option 2. Or you could, if you generate a big list of pseudo objects for each rule, which would make matching impracticable.
I’m pretty sure rule by rule filtering is the most efficient way (even if it does need optimizing) and I’m also pretty sure that PoE does the same.
Rule by rule filtering has the advantage that it has an early state exit. As long as you match a rule, you stop evaluating all others. Which isn’t something that can happen otherwise.
It’s not less likely, though. In almost every single filter you’ll find these situations where you’ll have to repeatedly evaluate the same thing several times because there are no common properties in most of your rules.
It doesn’t matter which order it is.
It only focuses on which layer has which conditions inside and works from there.
The majority of filters are not optimally designed after all.
Also, your example provides very clear optimizations.
For example, T6 or T7 displays are heavily used, and ‘showing all’ of them doesn’t rely on the base, so that can be made into a lower-layer check rather than one happening at the end.
Hence every ‘hide’ rule is already bypassed if that check happens as a lower-layered one.
Which means those can be skipped, which is a net efficiency boost solely if that happens.
From there the T7 hiding is the follow-up condition.
It can be a ‘smart’ system after all: wide rule ranges checked first, then the exceptions applied beyond that. That always has an efficiency upside comparatively, as long as it’s pre-parsed and not adaptive on the fly.
As you’ve done a bottom-up one though I can go over it precisely as well:
‘Hide everything’ causes the parsing to then check for any conditions which undo it, since everything is gone otherwise, obviously. Since you follow up in your list with specific sub-types, the filtering can already apply easily:
Let’s imagine the T7 rule for the belt is handled already; you have a hide rule for T7 Affixes on types you no longer want to see, after all.
Ignoring here that you have a show rule which overwrites the hide rule anyway… so I’ll take them as being switched, since I imagine that was a mistake anyway.
Here is where it already applies.
The difference would once again be the following:
Non-parsing:
-Hide belt
-Check for shatter Affix
-Check for 2 T7 Affix
-Check for single T7 Affix
-Check if it’s supposed to be hidden.
-Keep hidden.
Parsing method:
Pre-parsed that no belt is visible. No checks needed
Basically what it would do in that case is to parse hidden belt → T7 exceptions apply → T7 exceptions once again hidden as sub sorting = Always hidden.
That is what the system would apply from then on, until changed.
Or better said, if we have the specific layers, we can take the following as the baseline:
Item type
Sub type
Rarity
Affix
Affix Tier
Roll Range
So basically if the parsing check for ‘exceptions’ (hence tasks which need to be fulfilled) is not applicable then it automatically works solely from the bottom layer.
The higher up in layers you go the more checks happen.
In our example, if we solely want a T7 Hybrid Health belt (of any type) to be displayed, there’s a good chunk of redundant checks happening.
Here it would only check ‘type = belt’, ignoring Sub Type as it has no bearing on any outcome anymore, and move directly on to Rarity, checking for exalted. Only if it is exalted does it check for the Affix we want; if that’s the case it checks whether it’s T7, and then whether it is within the potentially applied roll range.
To be specific, it would check for the tag ‘belt’, then for the tag ‘exalted’ (as rarity is tagged, unless EHG screwed up), and then index the 6 possible Affix positions to produce a positive/negative boolean.
The upside of this is that any other potential outcome which includes ‘belt’ can be checked immediately, rather than having to redo the whole system individually.
As an example, if we want 3 possible single-T7 exalted Affixes to be viable to show, it creates this index once rather than three times, as it’s parsed that only those 3 states are viable. It creates an index and checks with a boolean outcome whether it’s true or not, nothing else.
So if you have a belt which, for example, is already fine as ‘exalted’ and just needs the check for a second Affix having ‘hybrid health’, then the index already exists. So it goes ‘T7?’, followed by ‘is the T7 hybrid?’, followed right away by a ‘no’ for ‘is another Affix hybrid?’.
The current state is that it only saves the pre-applied states and builds up from there rather than optimizing the check route.
Worst case it only saves minuscule amounts of RAM, and best case it removes several checks entirely, depending on the situation.
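A rough Python sketch of that layered check (the tag and affix names are made up for illustration); the affix index is built once per item and any other belt rule can reuse it:

from dataclasses import dataclass

@dataclass
class Affix:
    name: str
    tier: int

@dataclass
class Item:
    base_type: str        # layer: item type
    rarity: str           # layer: rarity
    affixes: list[Affix]  # up to 6 slots

def affix_index(item: Item) -> dict[str, int]:
    # Built once per item: affix name -> highest tier present.
    # Any rule touching this item can reuse the same index.
    idx: dict[str, int] = {}
    for a in item.affixes:
        idx[a.name] = max(idx.get(a.name, 0), a.tier)
    return idx

def show_t7_hybrid_health_belt(item: Item) -> bool:
    if item.base_type != "belt":   # type layer fails -> stop, sub-type is never checked
        return False
    if item.rarity != "exalted":   # rarity layer fails -> stop
        return False
    return affix_index(item).get("hybrid_health", 0) >= 7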
No, PoE has a pre-parsed, enforced methodology of checking. That’s why it causes basically no overhead even if a mob creates 100 items at once… which is something where LE chugs hard.
Even in your ruleset there’s a myriad of redundant ones though?
Do we need to hide all idols first to then re-show them?
Can the baseline not be ‘show all’ and have a hide-condition instead?
In our case it needs to index Affixes anyway, always… as it could have health, right?
So that alone ignores the ‘hide all’ rule. You don’t apply it… yet.
You first check if the idol has the conditions applied to hide it. You don’t hide it then un-hide it.
That’s a single step saved already.
Also, with a proper index array you can check all relevant outcomes at once rather than having to re-index and re-apply a check repeatedly. Because if it’s supposed to be emphasized for your personal build, for example, then that check can take priority… we don’t need to check for health anymore. Or for double weaver… or whether it should be hidden. We solely need to apply the display state without extra checks.
If we have health on it we don’t need to hide after all. So we don’t need to apply the hide part. We also don’t need to showcase if it’s double weaver as health in your case is higher priority anyway.
If it’s a double-weaver we don’t need to hide it either.
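A rough sketch of that single-pass priority check for the idol rules (the affix names and fields are placeholders, not LE’s data):

# Hypothetical affixes the current build cares about; in practice these
# would come from the player's own idol rules.
BUILD_AFFIXES = {"minion_damage", "necrotic_damage"}

def idol_display(idol: dict):
    # One pass in priority order: the first state that applies wins,
    # so there is no "hide everything, then un-hide it" step.
    affixes = set(idol["affixes"])
    if affixes & BUILD_AFFIXES:             # build-specific idols first
        return ("show", "emphasized")
    if "health" in affixes:                 # always-on health rule
        return ("show", "normal")
    if idol.get("weaver_affixes", 0) >= 2:  # double weaver idols
        return ("show", "normal")
    if idol.get("unique"):                  # unique idols
        return ("show", "normal")
    return ("hide", None)                   # hide only as the last resort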
Yes, majorly. But not exclusively. Like I said, the top rules for non-unique gear I’m looking for don’t start searching for exalted gear. At first I just want any drops with those affixes, even if they’re T1.
I can also be searching for a 2h weapon with some affixes in this group of rules but lower down I am hiding 2h weapons entirely, regardless of the affixes it has.
Yeah, the rule is 2xT7, hide specific T7 item types, show T7.
That is not what the current method does. You reversed the order. It always reads top to bottom until it matches one rule and then it doesn’t read any other.
Again, wrong order. And you forgot that I also always have a rule for non-unique slots, of which belt can be one, and for unique slots to slam, of which belt can be one. Although this last one would fall onto the T7 one.
So there are more checks, but more importantly, there are plenty of cases where an item will fulfill more than a single rule. And you need to know which of the rules it is, so you can apply color/emphasize/beacon/sound. Which your system doesn’t do.
This isn’t exactly meaningful. Again, this is just placing the order of importance in your if statement. The language already does this. And this works for the current filter system as well.
Your global parsing example would only work if we could apply a rule at the base level that would be sure to immediately be able to mark the item as show/hide. And you can’t make one because your filter will have several broad rules that will catch a lot of different cases.
Without an early exit, running the rules one by one is still most efficient.
For example, when a runed visage drops, it will read a single rule. It’s true, so that’s the one we’re using. Every other rule is ignored, even if it would match.
I don’t know this, nor do you. There is nothing said on how PoE’s loot filter works in the engine.
Looking at the filter that is created, I’d say there isn’t much difference between both, other than one storing it in XML and the other in plain text. I doubt that PoE is parsing your filter every single time you launch the game or change which filter you’re using.
Plus, the fact that you always need to find the first true match so you know what to apply, in cases where there are multiple matches, makes your approach impracticable.
As I said, you’re reading it wrong. It’s top to bottom. It highlights/recolors idols. Only idols that aren’t matched previously are hidden.
And, like I also said, it stops at the first match. So if it were bottom to top, it wouldn’t match “Hide all” and then search for exceptions. It would match “Hide all” and never show anything.
Which is fine, isn’t it? It just increases the quantity of checks rather than checking ‘empty’ repeatedly.
It’s just about reducing the overhead to make it scale better.
Which is the opposite of what I’m saying it should do, isn’t it?
Pre-parsing means it doesn’t apply states prematurely or have to go through the whole list of conditions before coming to a resolution. It’s about applying only the relevant checks, and doing them, and only them, exactly once.
Because if your relevant position is ‘number 75’, which is ‘hide it’, that means it’s the biggest possible resource sink.
The number of items which aren’t hidden is minuscule compared to those which are. After all, no sane person even coming close to end-game will display white, blue or rare items unless a specific narrow condition (shattering or RoA, for example) applies. That means the majority of items would need to go through 75 checks worst-case.
Versus parsing it and hence ensuring items usually go through 5-10 checks at worst.
Makes a massive difference.
Fair, so that’s extremely inefficient then. It’s not even an option I considered at the start, since it wouldn’t make much sense: each individual rule needs to check something at least once, rather than going bottom-up for only the relevant items, where at most a portion of the items would need more than a single check and most would only touch a fraction of the total list.
Makes it only worse
What I’m talking about is not a method of ‘is it a runed visage?.. No? Ok, next!’ ‘Does it have the desirable Affixes? No? Next.’ ‘Is it a Unique and the specific one? No? Next.’ And so on.
I’m talking about a unified pre-parsed method that kicks in the second you close the window or press ‘apply’, or that simply causes a second-long stutter whenever a rule is implemented or changed.
A system which first checks whether the dropped item type is even relevant for any specific rule beyond ‘display or hide?’ and moves on from there.
This would substantially reduce the overhead.
Obviously it’s not implemented… that’s why LE’s system had so many loot-related stutter issues when the loot filter was released, after all.
It isn’t. Because with pre-parsing the way you’re suggesting, you’d have to do your checks to know whether to show or hide, and then you’d have to do your checks again to find out which exact rule is being matched.
Yes, and like I said repeatedly, there isn’t a single check you can reduce to. A belt with hybrid health could match the “non-uniques slot I’m searching (t1+ with 4-6 specific affixes, match 2 or 3)”, “T7 affix (any, except about a dozen)”, “Shatter (hybrid health+class specific)”.
So could boots or gloves.
My filter has cases for every single item type, so you can’t rule them in or out based on that. At most you could find some affixes that do the trick, but little else.
Filters vary too much for you to be able to make something like this work automatically. You could maybe analyze them manually and figure a way to save a couple of checks in some cases, but you can’t create a universal rule that will both speed up the show/hide part and also the identify the exact rule so you know what color/emphasis/beacon/sound to apply.
That will depend on how you make the filter. Experienced people will run tight filters and this will be true. Most people (as we’ve seen them complaining on this forum) will run looser filters and will still have lots of drops.
You can’t assume that everyone is running an optimal filter. Hell, when I start a new character, I will often just copy the bottom general purpose rules (T7, 2xT7, shatter, etc) and make a general purpose catch all for a few affixes I might want (like minion affixes) and then I don’t use a “hide all” rule, so I can inspect stuff leftover and see if it’s an improvement.
Except the way filters work, and the way several people make them, you can’t really reduce them that way.
That… actually makes no difference. Bottom up or top down requires the same checks. Stopping when you get the first hit is an early state exit that only increases efficiency.
I know what you’re talking about. And I’m telling you that there isn’t an easy way to do this. Any item type that drops will immediately match several rules. Most of them won’t even specify item type in the rule.
Like I said, when I’m leveling I often create a single rule where I add about 15 affixes that are general-purpose good for my build and I will match any item that has at least 1 or 2 of those. I don’t specify rarity, I don’t specify item type, I don’t specify anything other than those affixes.
So every item that drops, based on item type and rarity alone will always potentially match at least 2-3 rules, probably half a dozen or more. So those checks aren’t actually filtering anything for us.
You’re thinking of a best case scenario where everyone runs very tight and intelligent filters, but that is not the norm.
Also, like I said, I’m pretty sure that PoE does the exact same thing for a single reason:
If they pre-parsed the filter into those conditions, then the filter file they save would store that pre-parsing.
There’s no way they would be so inefficient as to calculate the pre-parsing (which isn’t an easy task) every single time you launch the game or every single time you change which filter you’re using. They would store it in the file instead.
So the way the file is presented is the way it is used.
The only way to even feasibly do what you want is to parse each individual rule with the relevant properties (like rule 1: item type: helmet, sub-type: runed visage, rarity: any, level: any, etc; rule 2: item type: any, sub-type: any, rarity: any, level: any, affixes: a, b, c (count 2), etc) and then apply that to all of them and return only the rules that fit item type helmet. Do we get more than one? Then item sub-type. Do we get more than one? All the way down to affixes, which we still always search as long as we have a match (and with most filters, you still will).
The only early exit state in this method is if you have no applicable rule, in which case you show the default drop state (for when you don’t have a “hide all” rule).
But that would also cause us to have to search every single rule, we just search for a specific property instead. We search all 75 rules to filter out the ones that don’t match item type helmet. Then you search 60 rules to filter out the ones that don’t match the item subtype, then the same for rarity, level, faction, etc.
And, like I said, as long as you have a “hide all” (which, in my case, is hide all exalted and below, so no hiding uniques or legendaries) you’ll be running every single check and will still likely end up with more than one matching rule.
So you end up effectively checking a sum of 150+ rules (several of them being checked several times on different properties) with no exit state.
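As a very rough sketch of the rule-object filtering I mean (with only a few made-up properties; real rules carry far more), each pass narrows the surviving rules and the topmost survivor is applied:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    item_type: Optional[str] = None   # None means "any"
    rarity: Optional[str] = None      # None means "any"
    min_tier: Optional[int] = None    # None means "no tier requirement"

def best_match(item: dict, rules: list[Rule]) -> Optional[Rule]:
    # Filter the rule list down property by property; original order is
    # preserved, so the topmost surviving rule is the one that gets applied.
    survivors = [r for r in rules if r.item_type in (None, item["type"])]
    survivors = [r for r in survivors if r.rarity in (None, item["rarity"])]
    survivors = [r for r in survivors
                 if r.min_tier is None or item["max_tier"] >= r.min_tier]
    return survivors[0] if survivors else None   # None -> default drop state

rules = [Rule(item_type="helmet"), Rule(min_tier=7), Rule()]
print(best_match({"type": "belt", "rarity": "exalted", "max_tier": 7}, rules))
# -> Rule(item_type=None, rarity=None, min_tier=7)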
All this is not to say that you can’t improve the filter. You obviously can and they’re obviously working on it. It’s just not as simple as you make it out to be.
The most efficient solution is most likely the one I placed above, though. Not creating a general purpose pre-parsed global solution, which is a nightmare to even identify, but simply creating an object list with their properties properly filled out which you then filter out one by one.
It has less exit states but will also run less checks (although still a lot of them, there’s lots of stuff the filter lets you check these days).
For all I know, it’s even what they’re actually using.
First off, there needs to be a check whether a filter applies at all, so that one is a given. What I suggest is not decision-making up front, but doing it with proper coding.
Hence checking whether the base type even has any filter alterations applicable to it; if so, move to the next step along the line to decide which steps apply.
That’s vastly more efficient than potentially checking 75 rules for the majority of items, as those checks generally tend to come up ‘empty’.
Nobody said that?
You can reduce the amount of checks though… substantially?
There’s a difference between the system checking ‘does rule 1 apply? Yes/no’, then ‘does rule 2 apply? Yes/no’, and so on… and it checking ‘Is this type affected? Yes/no’, ‘How so?’, ‘Is it applicable? Yes/no’.
Obviously the second and third steps are multi-steps themselves, but I expect that to be understood as a given.
It’s a vast difference if I solely need to check the relevant possible states of change rather than re-checking whether each specific filter rule applies individually.
Unless you’re relatively early in the campaign or you’ve got an idiot playing, that holds up.
No, since if you create a bottom-up methodology you would at least parse it in a simplistic way which would reduce overhead.
But not even that is done seemingly! And I already gave that leeway.
And that has absolutely no substantial bearing on the overhead created.
I mean… I’m not a great coder… but heck… even I know how to systematically set up a system which optimizes that stuff; it just takes a while to make the back-end happen after updates.
First you create a system which determines the pure chance of something happening; this is done once for every item, sub-base, Affix and so on, which takes roughly 1-2 weeks to create the dataset.
Then you apply priorities according to said chance.
Then you create the base system which the filter translates into, which is set up in layers to reduce check-count.
Then depending on the rules set by the player you parse that into the pre-created blocks which are pre-sorted by priority.
Each check applied to the parsed ruleset hence acts on the highest possible chance of something happening first, before moving down the line as things get more detailed and less likely to occur.
Each specific check has the relevant early exit applied which is the decider on how exactly the item is displayed. This includes the beam, sound, hidden/displayed and the tag.
Also, it should have a ‘default’ state, which reduces even this overhead substantially, as no decision has to be made and the basic one designed by EHG is used.
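As a rough Python sketch of those steps (the drop chances and block contents are placeholders), the compile happens once when you hit ‘apply’ and the per-item work is just a priority-ordered walk with an early exit:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Block:
    drop_chance: float                        # from the pre-built probability dataset
    decide: Callable[[dict], Optional[str]]   # returns a display decision or None

def compile_filter(blocks: list[Block]) -> Callable[[dict], str]:
    # Done once, when the player hits "apply": sort the parsed blocks so the
    # most probable outcomes are tested first, then freeze that order.
    ordered = sorted(blocks, key=lambda b: b.drop_chance, reverse=True)
    def run(item: dict) -> str:
        for block in ordered:
            decision = block.decide(item)
            if decision is not None:          # early exit on the first decision
                return decision
        return "default"                      # the default display state designed by EHG
    return run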
I mean… it’s really not rocket science there… that’s not even any sort of advanced methodology there… that’s basic shit for optimization of such systems as it has a multi-pronged effect anyway.
The priority creation system is something which is supposed to exist anyway as it allows easier balancing and even allows that to be done without any testers.
The parser is a relatively easy constructor which simply creates a file that has the code inside, and it should exist anyway as it’s an important part of optimization.
The upkeep is minuscule, as it’s a back-end system anyway which creates the massive overhead and resource investment once, but solely on systems which have no effect on the live systems.
So I really don’t know what you’re arguing about, as your examples always spiral back to the same stuff, saying you need to over-check repeatedly, which is utterly baffling to even imagine.
Avoiding that is coding 101… it’s good practice… so unless you’re either really starved for time or writing code for stuff with no efficiency needs, using this one way or another should be a given.
And this clearly is a resource-based issue here, as we’re limited to 75 rules and not ‘however many you want’, which is how PoE’s filter system handles things. Harder to use, as everything is presented up front, but also powerful beyond comparison when we look at both systems.
You say this like this isn’t a common occurrence, though. You always have to design your systems keeping idiots in mind.
So you have to design a system that will allow for this. Especially because the idiots are the most likely to complain. Which means you need to take into account a filter that will highlight/recolor/beacon/sound half the items and not even hide the other half.
Because, not being a programmer, you’re not really picturing the huge bloat that that would create. The sheer number of possible combinations would require you to create either many many “boxes” to match your item to, or it would create a smaller number of much more inflated ones.
The most efficient way really would be what I described earlier of transforming your rules into objects with the relevant properties, like “type: helmet, sub-type: any, rarity: rare;exalted, etc”. And then run your checks filtering out the rules that don’t match (probably starting with rarity, since that is the one that tends to remove most conditions, then item type, etc), until you’re either left with no rules (early exit state), in which case you apply the default (which LE does have, just don’t place a hide rule), or you reach the end of the checks and you’re left with one or more rules, in which case you apply the first.
That is because they 1- have their own engine and 2- have already optimized the heck out of it. But I seriously doubt they have a global pre-parsed solution, for the simple reason I’ve already said several times and which you keep ignoring: they don’t store it anywhere, since the filter file remains clear text with the sequential rule definitions.
Are they so incompetent that they will convert the filter to a pre-parsed global solution every single time you start the game or change a filter?
The most likely method being used is the one I described above where you filter down the rules. And that doesn’t require any pre-parsing. It just converts the rules to an object list.
I don’t know if that is how LE is doing it, but I doubt that isn’t what PoE is doing.
Which is why that’s handled by a non-live system on the back end, as mentioned. Pre-preparation for the most prevalent things, and if that’s too much, bundling them into tiers.
As said… not rocket science.
And what you follow up by describing is literally what I said.
They use Unity; it’s not a self-created engine.
I was obviously talking about PoE, not LE, on why you can have as many rules as you want there.
EDIT: I would have expected that by this point you’d be aware that I know LE uses Unity, figure out that I wasn’t saying what you thought I was saying, and evaluate what I said better.