Everything posted by Crimean Archivist

  1. This is actually my favorite detail in the game because it reinforces the idea that the Awakening trio truly have no fixed ties to either nation. As for the actual topic, Subaki seems very Knight to me. Just...that man's growths. I can also see the Butler someone else mentioned.
  2. The priority order is primary > secondary > primary parallel > secondary parallel. However, you can't get parallel classes by Friendship/Partner Seals, and you only get secondary classes if the primary class is a "special" class, like Emblem Blade said. The wikia page for Takumi shows that Oboro grants him nothing, same for Leo and Felicia. You'll also find that Corrin doesn't show up in anyone's Friendship Set box -- because Corrin can't share classes via A+ rank. Corrin only shares their secondary class (because Nohr Prince(ss) is a special class) and only with their spouse, because units can S-rank Corrin but cannot A+ them.
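To put the rule in one place, here is a rough sketch of it as I read it; this is my paraphrase, not decompiled game logic, and the only class the post above confirms as "special" is Nohr Prince(ss).

    # My paraphrase of the sharing rule above, not datamined logic.
    SPECIAL_CLASSES = {"Nohr Prince", "Nohr Princess"}  # anything else here would be a guess

    def granted_class(primary, secondary):
        """Which single class a unit passes along via an S-rank (Partner Seal) or
        A+ (Friendship Seal) support. Parallel classes are never granted, and the
        secondary is only granted when the primary is a "special" class."""
        if primary in SPECIAL_CLASSES:
            return secondary
        return primary

    # e.g. Corrin only ever shares their secondary (the chosen talent), and only with
    # a spouse, since nobody can A+ Corrin. "Samurai" is just a hypothetical talent.
    print(granted_class("Nohr Princess", "Samurai"))   # -> Samurai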
  3. You can't even take a picture of your screen with your phone after using a class change item? I just finished some research on how classes are shared between characters, but this is outside of my results. Basically, I just want to see what the options are when the item is used. An exact list is greatly preferred over what you can remember.
  4. Just a general update: we've hit 23000 points, the (3A+B)/4 model has resolved all of its boundary issues except for one, which is acceptable, and continues to be the best predictor for hit rate. I still haven't found an acceptable model that is linear on the low end and parabolic on the high end.
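For anyone who wants to poke at the candidates themselves, this is the kind of brute-force check I'm running. It assumes the same structure as the classic 2-RN system (two RNs 0-99, compare a weighted average against the displayed Hit); the weighting is the only thing that changes between candidates, and whether the comparison is strict is itself an assumption.

    from itertools import product

    def true_hit(displayed, weight):
        """Predicted real hit rate at a displayed Hit value: the fraction of RN
        pairs (A, B) in 0..99 whose weighted average falls below it."""
        hits = sum(1 for a, b in product(range(100), repeat=2) if weight(a, b) < displayed)
        return hits / 10000.0

    candidates = {
        "(3A+B)/4": lambda a, b: (3 * a + b) / 4,
        "(4A+B)/5": lambda a, b: (4 * a + b) / 5,
        "unweighted 2-RN": lambda a, b: (a + b) / 2,
    }
    for z in (50, 60, 75, 80, 90):
        print(z, {name: round(true_hit(z, w), 4) for name, w in candidates.items()})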
  5. So all that's left to affirm is Sol, Luna, and Ignis, which are all Skill% chance. I can do Sol/Luna but I don't have a Robin amiibo.
  6. Done, except for case 8. That was quicker than I expected.
  7. I was chatting with a friend on tumblr about the possibility that weird things could happen when a character obtains a class set that overlaps with a class set they already have, and it turns out, it changes the way skills are obtained. Normally, when a character class changes into a promoted class, they obtain the skills of the base class in their set first, then the promoted class's skills. However, there are cases where an overlap confounds this a little bit.

I experimented on Selena!Percy first, as he has two paths to Hero: Fighter from Arthur and Merc from Selena. I removed all skills from the parents before starting his paralogue to ensure no confounding factors, and this is the obtained skill order while leveling him: Hero!Selena!Percy

So some things change when there's an overlap, for sure. This same friend said that when Heart Sealing Hinoka into Basara while married to Kaden, they found that Hinoka obtained both Spear Fighter (Hinoka's secondary class) and Diviner (Kaden's secondary class) skills. This is significant because Diviner and its skills shouldn't be accessible to her without a Partner Seal. I have yet to confirm this, but I have no reason to distrust them.

At any rate, I singled out all the cases I could think of that need to be tested in order to determine how this hybridization works:

1. Child character inheriting two different base classes with a shared promotion (x2, to determine which classes take priority, e.g., father's class over mother's, secondary class over tertiary, etc.) -- tested once, with Selena!Percy
2. Character class-changing into a promoted class shared with spouse's base/parallel class (whichever is obtained) via Heart Seal (completed by tumblr user acupfullofsynthen, yet to verify)
3. Character class-changing into a promoted class shared with spouse's base/parallel class (whichever is obtained) via Partner Seal
4. #2, but for best friend with Heart Seal
5. #3, but for best friend with Buddy Seal
6. Character class-changing into a promoted class shared by spouse's base/parallel class and best friend's base/parallel class, but not in own class set, with Partner Seal
7. Same as above, but with Buddy Seal
8. Corrin with two A-rank same-sex supports (Niles/Rhajat excluded) that share a promoted class but not a base class, Buddy Seals into that class
9. Additional combinations of inheritance, marriage, and best friend to hunt for exceptions to precedents established by the first 7 cases.

I have a lot of unused characters in my current Revelation file and figured I would simply run through at least one of all of the cases.

Case 1(2): Child of Azura with access to Kinshi Knight via Archer. Update: not a possible combination without Archer!Corrin -- even though Azura!Kiragi will have Archer in his set, he will start out in it, meaning all of the Archer skills are obtained immediately, while Shigure does the same for Sky Knight.

Cases 2 and 3: Hinoka x Kaden. Heart Seal into Basara, reset, Partner Seal into Basara, see what is different.
Case 2/3 Results:

Cases 4 and 5: Keaton x Laslow. Heart Seal into Hero, reset, Buddy Seal into Hero, see what is different.
Case 4/5 Results:

Cases 6 and 7: Kaze x Effie + Silas. Buddy Seal into GK, reset, Partner Seal into GK, see what is different.
Case 6/7 Results:

Case 8: Corrin with everybody. Needs to be done with both male and female Corrin. Only some combinations result in overlaps, so there is no need to test everybody against everybody. Ideal testing method: Revelation file with all characters available (sans Kana) and all characters B-ranked to Corrin with A-rank available. Save, A-rank two characters with an overlap, play a challenge battle, and gain at least 1 level. Record priority. Works for all characters except Gunter.
Case 8 Results:

Case 9: I'm going to take the same Percy and see what skills I can get out of a friend/spouse pair, much like Cases 6 and 7.

Results Summary: It seems that the game immediately adds a spouse's base class to a character's class set upon reaching S rank, and immediately adds a friend's base class upon reaching A+ rank. Even though these classes are only accessible through Partner/Friendship Seals, they are still present, and their base skills can be obtained by Heart Sealing into an overlapping class. Likewise, the skills from the character's own secondary class can be obtained even when the class change occurred via a Partner/Friendship Seal. When obtaining skills, "leftmost" priority applies, as follows: if the character has a maximum of five classes numbered from left to right, the spouse's class always takes slot 4, and the friend's class always takes slot 5. The order of the unit's own classes is irrelevant, as no unit seems to have a class set that overlaps with itself, although child units' slot 3 is always filled by the variable parent.
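To make the slot rule at the end concrete, here is a toy version of how I picture it; the class names are placeholders and the overlap handling is only my reading of the results above, not anything datamined.

    def class_slots(own_classes, variable_parent_class=None, spouse_class=None, friend_class=None):
        """Build the (at most) five class slots, left to right: the unit's own
        classes first, then the variable parent's class for children (slot 3),
        the spouse's base class (slot 4, added at S rank), and the friend's base
        class (slot 5, added at A+ rank)."""
        slots = list(own_classes)
        for extra in (variable_parent_class, spouse_class, friend_class):
            if extra is not None:
                slots.append(extra)
        return slots

    def base_skill_order(slots, promoted_class, promotes_from):
        """After Sealing into promoted_class, every slot whose class promotes into
        it contributes its base skills, leftmost slot first ("leftmost priority")."""
        return [c for c in slots if c in promotes_from[promoted_class]]

    # Cases 2/3: Heart Sealing Hinoka (Sky Knight / Spear Fighter) into Basara while
    # married to Kaden picked up both Spear Fighter and Diviner skills; her own
    # class sits to the left of the spouse slot, so it comes first.
    promotes_from = {"Basara": {"Spear Fighter", "Diviner"}}
    slots = class_slots(["Sky Knight", "Spear Fighter"], spouse_class="Diviner")
    print(base_skill_order(slots, "Basara", promotes_from))   # ['Spear Fighter', 'Diviner']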
  8. Personally, I'll feel more comfortable about it when it is known rather than postulated. I know there's not much cause to think that it has changed, but even so, it never hurts to affirm.
  9. This is one of those things I intend to test once the hit rate regression is finished. It's commonly thought that highest priority goes to lowest activation rate, but that still leaves matched activation rates for testing. The easiest way to try to test this is to take maxed-stat units with multiple procs, calculate the activation rate of each skill if one outprioritizes the other, and then go to someone's My Castle (or check defenses of your own) with awful weapons that will do 0 damage and keep chipping away until you get enough data points to make some kind of activation ratio.

For example: Butler has 33 max Skill and low damage output. Let's say we have Butler!Corrin (no Asset/Flaw) with Dragon Fang and Luna.
Probability of Dragon Fang, if it outprioritizes Luna: 0.24
Probability of Luna in the same scenario [P(!DF) * P(L)]: 0.2508
Probability of Luna, if it outprioritizes Dragon Fang: 0.33
Probability of Dragon Fang in the same scenario [P(!L) * P(DF)]: 0.1608

So, if DF and Luna have about the same frequency, Dragon Fang has higher priority, and if Luna occurs about twice as often, Luna has higher priority. For what it's worth, I already know that Nohr Noble +Spd/-Res Corrin at max Skl gets about equal activation out of Dragon Fang and Sol, which is consistent with Dragon Fang outprioritizing Sol.
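The arithmetic generalizes easily; this is the little helper I use to get the expected activation split for a given priority order. It assumes only the highest-priority successful roll takes effect on a given attack, which is exactly the premise being tested here.

    def activation_split(rates):
        """rates: per-proc activation chances in priority order, highest first.
        Returns each proc's effective per-attack chance when only the first
        successful roll counts."""
        effective, p_no_earlier_proc = [], 1.0
        for r in rates:
            effective.append(p_no_earlier_proc * r)
            p_no_earlier_proc *= (1 - r)
        return effective

    # Butler!Corrin at 33 Skl: Dragon Fang ~0.24, Luna 0.33
    print(activation_split([0.24, 0.33]))   # Dragon Fang first -> 0.24 and ~0.2508
    print(activation_split([0.33, 0.24]))   # Luna first        -> 0.33 and ~0.1608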
  10. Yeah, this is the primary reason Dual Guards aren't being counted. But we can at least say that in the limit, Dual Guard affects both hits and misses equally, so the exclusion shouldn't matter. I'm doing a test import of my equations right now, and it looks like it will work. Does anyone besides me want to see confidence interval data, odds ratios, etc.? I also consolidated the data collection and output stages into a single document on two sheets so that everything will change in real time instead of needing new imports with each contributor's data. I've also separated data collection into columns so that there are no race conditions in filling data points. All told, the version I just finished will be able to handle 3 people recording data simultaneously, with plenty of room for more. I also went ahead and imported all of my collected points so far, which includes everyone that sent anything to me or posted here. https://docs.google.com/spreadsheets/d/1UaKECEGX7Dyb_JhduG6j_SPDkphsKELdVZvr2mAv5aU/edit?usp=sharing (I had to work from a copy because the original was view-only.) So yeah, anyway, that's up and running, and I (or anyone else) can add whatever additional information they think may be necessary -- just don't break the data-reading functions in columns B and C on the Results sheet. Edit privilege is active for anyone with the link, so just add stuff. I'm going to continue to store data locally in addition to on Google because I'm pretty sure Excel's statistics package is more rigorous. I'll also put the link in the front post.
  11. Good news! The data is nearing parabolic convergence for the upper range -- the current R^2 value is sitting at 0.969. I'm basically waiting for something I can adjust to better fit a floor of 50 - 50.5 and a ceiling of 99.99. The best-fit polynomial as of now is

P = -0.0168*Z^2 + 3.5614*Z - 87.635
dP/dZ = -0.0336*Z + 3.5614
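If anyone wants to eyeball the current fit against their own numbers, it is just the snippet below; keep in mind the coefficients are only the present regression output and will keep moving as points come in.

    def fitted_p(z):
        """Current best-fit parabola for the upper range (displayed Hit roughly 50-99)."""
        return -0.0168 * z**2 + 3.5614 * z - 87.635

    def fitted_slope(z):
        """Derivative of the fit, dP/dZ."""
        return -0.0336 * z + 3.5614

    for z in range(50, 100, 5):
        print(z, round(fitted_p(z), 2), round(fitted_slope(z), 3))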
  12. Oh yeah, go for it. The only problem with spreadsheets is that you can't attach them to posts or messages, so you'll have to save it into another text file. Because of the way I have my analysis algorithm set up, it's best if it's just a string of values and outcomes I can import:

[Hit Chance], [Outcome]
[Hit Chance], [Outcome]
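For reference, the import step on my end is nothing fancier than this; I'm assuming the outcome column is recorded as 1 for a hit and 0 for a miss (if you use something else, say so and I'll map it).

    def load_points(path):
        """Read lines of '[Hit Chance], [Outcome]' into (hit, outcome) integer pairs."""
        points = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                hit, outcome = (part.strip() for part in line.split(","))
                points.append((int(hit), int(outcome)))
        return points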
  13. As in, your own, independent data-collection spreadsheet? At risk of sounding conceited, I'd say that I have a clear head start on you there (20000+ points, counting your data) and that keeping all or at least most of the information in one place would be to our advantage. I've recently started including the raw hit/miss counts for each point in updates to the initial post in this thread, and I am going to insert the most recent graph of the data in future updates for others to analyze. That said, you can certainly take the raw data and perform your own analysis on it to see if you come up with any leads.
  14. With additional data, 3 points for (3A+B)/4 have fallen outside of their respective confidence intervals, close behind their unweighted brethren with 4 points. I set up the calculations necessary to keep track of a logistic model and the respective odds ratios for each pair of points. The idea there is that the odds ratio between adjacent points, given by

Odds(x) = P(H) / [ 1 - P(H) ]
Odds Ratio = Odds(x+1) / Odds(x) = e^beta1

is governed by beta1, a constant which should, as confidence increases, rest in the limit at a specific value. The data will then fit a model of the equation

P(x) = 1 / [ 1 + e^-(beta0 + beta1 * x) ]

The center for this model is chosen for us at 50, so beta0 is simply -50*beta1. At present, this model is the least accurate using our measured data, because each data point is some unpredictable difference E from its actual value and this can't be eliminated from the beta1 term, but it's another method of data validation. There's also the method of treating each point as the CDF of a normally-distributed probability function, where dP represents the change in probability from point to point. This method would require us to have a sequence of about 10 very high-confidence values so that we could adequately bound their mins, maxes, and the intermediate slopes.
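In code form the logistic bookkeeping is just the below; beta1 is the only free parameter, since the center is pinned at 50.

    import math

    def logistic_p(x, beta1):
        """Logistic candidate centered at 50: P(x) = 1 / (1 + e^-(beta0 + beta1*x)),
        with beta0 fixed at -50*beta1 (P here is a probability, 0 to 1)."""
        beta0 = -50 * beta1
        return 1 / (1 + math.exp(-(beta0 + beta1 * x)))

    def odds_ratio(p_next, p_curr):
        """Odds(x+1) / Odds(x); under the logistic model this settles at e^beta1."""
        odds = lambda p: p / (1 - p)
        return odds(p_next) / odds(p_curr)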
  15. 17600 points. No new evidence, but the error margins for (3A+B)/4 are decreasing, making it the best model so far for the upper half, hands down. In comparison, the sum of the squares of the errors from Hit = 50 to 99:

(3A+B)/4 -> 579.73
(4A+B)/5 -> 743.24
Unweighted 2-RN -> 748.21

I also made a post about the system in a stats group I'm in on LinkedIn, so they may be able to provide some insight on how to narrow down possibilities more effectively than simply collecting more data. I've played a little bit with prediction intervals to no avail. That's all I've got for now. I tested Hit = 4 about 350 times and the 1-RN model holds for that value as well, so status quo today.
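The comparison itself is nothing more than a sum of squared errors over the upper range, along these lines (observed rates and each model's predictions as dicts keyed by displayed Hit; the names are just illustrative).

    def sse(observed, predicted, lo=50, hi=99):
        """Sum of squared errors between observed hit rates and a model's
        predictions over displayed Hit = lo..hi, skipping values with no data."""
        return sum((observed[z] - predicted[z]) ** 2
                   for z in range(lo, hi + 1)
                   if z in observed and z in predicted)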
  16. All right, one more thing I'm going to throw out there to pursue, once again from Verile's suggestion. A split system (as in, a hard split system, with two different formulas) is ugly. Very ugly. And more code to implement than a single formula. We know that the data on one side is roughly P = Z, while on the other, it follows some kind of transform. However, it is possible that there is a linear term and a nonlinear term that each diminish near their appropriate extremes. This would be something like

P = (2*Z)/Z^2 + Z^2/(100 + Z)

This is an extremely rough estimate not based on any particular trends in the model, but the point is that the maximum of the first term is near the low end of the spectrum, and the maximum of the second term is on the high end. To get one that might be valid, we just have to follow the same constraints as any previous model:

- Uniformly increasing from 0 to 100
- Value at Hit = 0 is 0 (all terms cancel at 0, likely all terms multiplied by Z)
- Value at Hit = 100 is 100 (or very, very close, if we want to try to explain the reported miss at 100)

After we find a good baseline, we can tweak it to fit our data as best as possible. As of right now, a good target would be a slope very close to 1 in the lower half and very close to 4/3 in the first segment (50-75) of the upper half.
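A candidate term pair is easy to sanity-check against those three constraints before bothering to fit it, something like:

    def satisfies_constraints(f, tol=0.5):
        """Check a candidate P(Z) against the constraints above: monotonically
        increasing on 0..100, P(0) = 0, and P(100) = 100 (both within tol)."""
        values = [f(z) for z in range(101)]
        increasing = all(b >= a for a, b in zip(values, values[1:]))
        return increasing and abs(values[0]) <= tol and abs(values[100] - 100) <= tol

    # The rough two-term example above fails the Z = 0 constraint as written,
    # since (2*Z)/Z^2 is just 2/Z and blows up near zero.
    candidate = lambda z: (2 * z) / z**2 + z**2 / (100 + z) if z else float("inf")
    print(satisfies_constraints(candidate))   # False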
  17. If I see an image of this, I'll take it seriously. I'm really averse to taking something from a GameFAQs thread as fact when GameFAQs is so often wrong about things like this -- and the comments in the thread are evidence that these people don't quite understand how the game handles number values. Every past FE has rounded prior to use in calculations, so all values in Hit calculations should be integers. If they're not, I'd have a hard time believing that was intentional and would expect it to be patched out (actually, I would expect it patched before international release). A glitch in the displayed value is, I think, more likely than a rounding error, and I don't think even that is terribly likely.

Edit: As far as I can tell, a rounding issue at Hit = 100 is actually impossible. The simplest approximation of an unweighted 2-RN system (pseudocode) is:

R1 = rand() % 100;
R2 = rand() % 100;
if (R1 + R2 < 2*Hit) return success;

If Hit = 100, it doesn't matter what the combination of RNs is -- even if they were both 99.9999999, the sum would still be less than 200. If you just store a raw random number value as an int, you're going to kill everything after the decimal point. IntSys would have to have switched from using floor/truncation pretty much exclusively to using a ceiling or a more general round function to make that happen. If it's really possible to miss at 100, it will happen to one of us soon enough, and hopefully we'll be able to get screenshots.
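Just to show what I mean about Hit = 100 under that approximation, you can exhaust every RN pair in a couple of lines (this is the naive integer 2-RN comparison from the pseudocode above, not a claim about the actual Fates code).

    # With integer RNs 0-99, a displayed Hit of 100 can never miss under this
    # comparison: R1 + R2 tops out at 198, which is always below 2 * 100.
    misses_at_100 = sum(1 for r1 in range(100) for r2 in range(100)
                        if not (r1 + r2 < 2 * 100))
    print(misses_at_100)   # 0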
  18. All right, I have a model for dynamic hit rates, and I know it fits because I've checked it mathematically. So the next step for testing dynamic hit rates is to go back into My Castle and repeatedly test the same values. There's a bit of variance depending on the coefficient, but basically, if dynamic hit rates are correct, it should be impossible to miss a certain number of times consecutively. For this segment I request that everyone record long-form data of Hit, Outcome, in the order measured. We can't test the positive condition, we can only check against the negative condition. If the step size is 10% of the value (rounded down), then it should be absolutely impossible to miss 10 times more than you hit in any string of trials (whether of length 10, 20, or 200) at a Hit value of 50. At other values:

@ 50: M <= H + 10
@ 60: M <= H + 7
@ 75: M <= H + 4
@ 80: M <= H + 3

I suggest testing at 75-80. The probability that you could surpass the maximum if dynamic is false is pretty high, relatively speaking. We can't lean on each other's data for this, though. The success probability of the function has a limit, which can be taken as its EV, so I'm going to model some variants with static increments and ones of different proportions and see which fit the current data the best. The EV for a 10% increment at 75 is -5 relative to our measured value, for what that's worth, but I haven't checked any others yet.

Edit: Just from preliminary checks against the EVs of all of the data so far, this doesn't look like it fits, even at high increments. The change in probability diminishes much faster at high hit rates, to the point where the impact is insubstantial. I'm going to start testing it against low hit rates to see what happens.

Edit 2: I've now tested the floor and ceiling of all possible increment types for different hit rates. While there is a marked increase at some values (the hit rate effectively doubles in the limit for some low values), the absolute ceiling for high values is below our measured values. It's not even a question of confidence values; the dynamic model does not allow there to be an 82% success rate at a value of 75 -- it caps at 80%. Other values, especially high values, are similarly over their dynamic-model predicted limits. In the process, though, I did develop a formula for iterating from trial to trial, which could be useful if applied to dynamic growth rates. I'd just need someone to help me build it into a web applet or something.
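This is the sort of check I'm running; to be clear, the "a miss bumps the effective rate by 10% of the displayed value, a hit resets it" rule (and the cap at 100) is only the hypothesis under test, not an established mechanic.

    import random

    def simulate_dynamic(displayed, trials, step_fraction=0.10):
        """Simulate the hypothesised dynamic hit rate: every miss raises the
        effective rate by floor(step_fraction * displayed), every hit resets it
        to the displayed value. Returns a list of outcomes (True = hit)."""
        step = int(step_fraction * displayed)
        effective = displayed
        outcomes = []
        for _ in range(trials):
            hit = random.randrange(100) < effective
            outcomes.append(hit)
            effective = displayed if hit else min(effective + step, 100)  # cap is an assumption
        return outcomes

    def max_miss_excess(outcomes):
        """Largest value of (misses - hits) over any contiguous run of trials --
        the quantity the M <= H + k bounds above are about."""
        cum, low, worst = 0, 0, 0
        for hit in outcomes:
            cum += -1 if hit else 1
            worst = max(worst, cum - low)
            low = min(low, cum)
        return worst

    outcomes = simulate_dynamic(75, 10000)
    print(sum(outcomes) / len(outcomes), max_miss_excess(outcomes))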
  19. I'm going to work this out: say I have a growth rate of 30 in a stat. There is a 30% chance that the stat will grow, and a 70% chance that it will not. In the event of a success, these values remain the same. In the event of a failure, the chances change to 33% and 67%. Under normal circumstances (no dynamic growths), the EV is n*p, which for 15 levels is 4.5. Under these adjusted probabilities, the first trial is 0.3, the second is 0.321 (0.7*0.33 + 0.3*0.3), and the third is 0.33444 (sum of all paths to success, HHH, HMH, MHH, MMH). The fourth is 0.3478989, and so on. In four trials, the base EV is 1.2, and the dynamic EV is 1.3033389, a +8.6% increase. All right, I'm convinced. I'll look into it.
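Here's the same arithmetic as a quick script, so others can try different increments. I'm reading the rule as "a failed growth bumps the rate by 10% of the base value, a successful growth resets it to base", which reproduces the 0.3 and 0.321 above; later levels depend on exactly how the bump accumulates, so treat that part as an assumption.

    def dynamic_growth_probs(base, levels, bump=None):
        """Per-level success probabilities under the proposed dynamic growths:
        a miss raises the rate by `bump` (default 10% of base), a success resets
        it. Tracks the exact distribution over consecutive misses so far."""
        bump = base * 0.10 if bump is None else bump
        dist = {0: 1.0}                      # consecutive misses -> probability
        per_level = []
        for _ in range(levels):
            p_success = sum(prob * min(base + k * bump, 1.0) for k, prob in dist.items())
            per_level.append(p_success)
            new_dist = {0: p_success}
            for k, prob in dist.items():
                rate = min(base + k * bump, 1.0)
                new_dist[k + 1] = new_dist.get(k + 1, 0.0) + prob * (1 - rate)
            dist = new_dist
        return per_level

    probs = dynamic_growth_probs(0.30, 4)
    print(probs)                    # starts 0.3, 0.321, ... as in the walkthrough above
    print(sum(probs), 0.30 * 4)     # dynamic EV over 4 levels vs. the static n*p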
  20. We have a point at 9, which is reasonably close to 6 or 7, so if you want to brute force another value, try to target somewhere between 14 and 18. That would give us at least one value with a sample size > 200 every 7 Hit values for the lower region, which I can do a linear regression on and see how close to slope = 1 we get. The evidence is definitely in favor of 1-RN in the low range.
  21. Verile's idea got me thinking, since we have seen multi-term equations from IS before, namely FE9's Forge calculations:

Cost(stat) = [10(base stat)abs(stat increase) + (stat increase)^2] / 2(stat)^2

At any rate, I added a polynomial fit (up to n = 6) and suffice it to say we can pretty safely rule out anything on the order of 4-6. 3 is possible, but unlikely. I wish Excel would let me do a logistic fit. True Hit as we know it, for example, can be pretty closely approximated by

Z = 100/(1 + exp(-0.188*(x - 50)))

which is just a logistic function centered at 50 with bounds 0 and 100. If you vary the k (0.188) a little bit you can get a more or less perfect fit. If we had a true logistic fit, though, the slope at 50 would tell us everything we need to know.
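For what it's worth, the approximation stacks up against the usual averaged 2-RN True Hit like this; k = 0.188 is straight from above, and the "exact" values are just brute-forced from the two-RN average.

    import math

    def logistic_true_hit(x, k=0.188):
        """Logistic approximation of 2-RN True Hit, centered at 50 with bounds 0 and 100."""
        return 100 / (1 + math.exp(-k * (x - 50)))

    def exact_true_hit(x):
        """Brute-force True Hit under the classic averaged 2-RN system (RNs 0-99)."""
        hits = sum(1 for a in range(100) for b in range(100) if (a + b) / 2 < x)
        return 100 * hits / 10000

    worst_gap = max(abs(logistic_true_hit(x) - exact_true_hit(x)) for x in range(101))
    print(worst_gap)   # biggest disagreement between the approximation and the brute-forced curve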
  22. I've been going at it from a couple of different angles. I'm trying to figure out how to give myself a min/max slope for different regions so that I can bound the data and their errors above and below by values derived from the neighboring points. Right now I have the error bars set to cap at the highest of all low bounds to the left and the lowest of all high bounds to the right, and I'd like to narrow that further. I have a hunch that the function has a maximum slope at 50 and a minimum at 0 and 100, like unweighted 2-RN before it. If that's true, then the slope at 4 should be equal to or greater than the slope at 3. The difficulty is implementing this when the error bounds for adjacent points can be wildly off from each other. When looking at the graph, you can ignore most of the values < 30 -- a lot of those still have sample sizes of 20 or less.
  23. The return value of SLOPE keeps oscillating between 1.27 and 1.30. This isn't helpful at all. I also can't shake the thought that something is missing, like we're trying too hard to fit this to a model that is easy to conceive. When I get back at it tomorrow I'm going to try to set up a Gauss-Newton analysis. That will be able to maintain a number of variables and minimize the residual error -- we can go for a simplified approximation of that function after we find it.

Edit: MATLAB doesn't like this system and won't optimize any further than my initial estimate. Bother.