
French_Kiss

Verified Tanker [NA]
  • Content Count

    20
  • Joined

  • Last visited

About French_Kiss

  • Rank
    Post Derp Rage Poster

Profile Information

  • Server
    NA

Recent Profile Visitors

1,092 profile views
  1. Suppose you did 123 damage, 45 track assist, and 67 radio assist in the last battle. Then your total damage+assist for this battle will be calculated as 123+67=190. The 45 track assist won't be counted, because they take either track assist or radio assist, whichever is higher. This is what I meant by using the "max" function. Now, suppose your moving damage was 60. After the battle your new moving damage will be 60*(1-2/101)+190*(2/101) ≈ 63. I guess they use a one-day distribution, averaged over the last 14 days.
  2. In this topic I'll try to give an idea of how WG converts your damage into the MoE rating, measured in percent. After each battle your "moving average damage" is recalculated as Dnew = (1 - 2/101) * Dold + (2/101) * (damage_battle + max(assist_track_battle, assist_radio_battle)). When you buy a new tank, your initial moving damage is set to zero. After hundreds of battles it approaches your plain average damage+assist (if you play consistently well). Your current value of moving damage can be found in the battle_results cache files, if you use the Phalynx converter. Writing down moving da…
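In code, that update looks roughly like this (a minimal Python sketch of the formula above, not WG's actual implementation; the example numbers are the ones from the first post):

```python
def update_moving_damage(d_old, damage, assist_track, assist_radio, alpha=2 / 101):
    """One-battle update of the MoE moving average damage (a sketch of the rule above)."""
    contribution = damage + max(assist_track, assist_radio)  # only the larger assist counts
    return (1 - alpha) * d_old + alpha * contribution

# Numbers from the first post: 123 damage, 45 track / 67 radio assist, old moving damage 60
print(update_moving_damage(60, 123, 45, 67))  # ~62.6, i.e. about 63
```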
  3. I think you just need about 30-50 more battles to get the 3rd mark. They use an exponential moving average with constant 2/101, which means that after 101 battles your moving damage has reached roughly 87% of your simple average (starting from zero). You can see your current moving damage in battle_results converted to JSON format with the Phalynx converter.
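A quick check of that convergence figure, assuming the moving average starts at zero and every battle produces the same damage+assist:

```python
alpha = 2 / 101
fraction_reached = 1 - (1 - alpha) ** 101  # share of the true average reached after 101 battles
print(round(fraction_reached, 3))  # 0.867, i.e. roughly 87%
```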
  4. Well, replay data proved to be REALLY biased. Here are some facts about it. I collected 40k replays. The uploader's average winrate is 52%, while for other players it is 49%; the uploader is always in the "green" team, and the green team wins 85% of battles. Any ideas how to work around these things?
  5. I collected ~40k replays and the statistics of the participants. In 85% (!) of the replays the green team wins. If I take the sum of total winrates as a predictor, I can guess 60.7% of battles; if I take the per-tank winrate, I obtain 72%, just like in tsuker's data. However, if I correct the per-tank winrate by subtracting the result of the battle, I can guess... 57% of battles. Correction means that I decrease the win count of each player on the winning team, decrease the battle count of all players, and recalculate their winrates. When we discussed biases in the data, I wanted to mention, as a joke, that knowing the result from repl…
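The correction itself is easy to write down; a minimal sketch, assuming each participant comes as a (wins, battles) pair recorded after the replayed battle (the field names are made up for the example):

```python
def corrected_winrate(wins, battles, was_on_winning_team):
    """Recalculate a player's winrate with the replayed battle itself removed,
    so the predictor does not already "know" the result of that battle."""
    if was_on_winning_team:
        wins -= 1        # the replayed victory is already in the win count
    battles -= 1         # the replayed battle is already in the battle count
    return wins / battles if battles > 0 else 0.0

# A player with 5,200 wins in 10,000 battles who just won the replayed battle:
print(corrected_winrate(5200, 10000, True))  # ~0.51995 instead of the raw 0.52
```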
  6. Well, if someone wants very fresh data, one could monitor wotreplays and download statistics as soon as a new replay appears in the database. Then tanks/stats is not a problem at all, I guess. Could you explain how to use this? Do you mean that there will actually be 3-4 times fewer distinct players, because they appear in several replays?
  7. tsuker, how did you collect WN8 data for your analysis? Did you do the same trick as RichardNixon, or did you use some side API server? I'm currently collecting replay information; I will have about 100k replays in two days (result + players + their tanks). Initially I was going to collect the full per-tank statistics of each player, but I just realized that it would take a REALLY long time. If I collect stats for only the one tank that was actually in the battle, it will take me about three weeks.
  8. There's a way to calculate the moving average damage needed for your next mark, if you are close to it. Here's an example plot for the E50: they use linear interpolation between damage points at the percent anchors 0, 20, 40, 65, 75, 85, 95, 100. The damage (moving average damage + max(assistTrack, assistRadio)) at these points is always a multiple of 50. Every day these values can change slightly, but only by ±50, ±100 and so on. If you are on a segment leading to the next mark (for example, between 75 and 85%), you only need to write down your moving damage and the corresponding percent before and after one battle. Additionall…
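A small sketch of that two-measurement trick, assuming both points lie on the same segment (here 75-85%) and the daily table didn't shift between the two battles; the numbers are invented:

```python
def damage_for_percent(d1, p1, d2, p2, target_percent):
    """Extrapolate along the straight segment through two (moving damage, percent)
    measurements to the moving damage required at target_percent (e.g. 85 for the 3rd mark)."""
    slope = (p2 - p1) / (d2 - d1)          # percent gained per point of moving damage
    return d1 + (target_percent - p1) / slope

# Hypothetical readings: 83.2% at 2410 moving damage, then 83.5% at 2425 after one battle
print(damage_for_percent(2410, 83.2, 2425, 83.5, 85))  # 2500.0, a multiple of 50 as expected
```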
  9. Battle_results.dat converted to JSON contains both the nicknames and the user_IDs of all 30 participants, so what is the problem?
  10. I use his scripts; I know how they work. Currently I'm trying to find unbiased data. It seems the ideal data for analysis would be recent cache files from the battle results folder that Phalynx collects on vbaddict.
  11. I wonder if it's possible to reach 80% correct guessing. Maybe I'll try only 10k replays + tank stats. People usually share epic battles on wotreplays, which could affect the results :(
  12. In other words, the random errors of damage and winrate are not independent, and naive fitting using least squares might give totally wrong results. However, there is another effect causing an error in the slope, and this effect has the opposite sign, so the two errors partially compensate each other. I also did some simulations and finally came to the conclusion that 100 battles is a reasonable compromise. Of course, a higher number would reduce the error, but almost nobody plays even 100 battles on low-tier shitty tanks, especially if you want to use recent data, and not <0.8.0 patches. This is where pu…
  13. A few months ago I played with statistics, trying to invent my own rating system, and ended up with a linear damage-to-solo-winrate conversion per tank, so I could show here the zero-damage winrate value for any tank. But I still don't understand how neglecting bad players can improve prediction success. By the way, adding per-tank winrates instead of total ones improves prediction from 63% to 73%!
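The "zero-damage winrate" from such a linear conversion is just the intercept of the fitted line; a rough sketch, assuming a plain least-squares fit (the per-tank sample numbers below are invented):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented per-player (average damage, solo winrate) pairs for one tank:
damage = [900, 1100, 1300, 1500, 1700]
winrate = [0.46, 0.48, 0.50, 0.52, 0.54]
intercept, slope = linear_fit(damage, winrate)
print(intercept)  # "zero-damage winrate" for this made-up tank, 0.37
```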
  14. Is there any experimental evidence that the zero-contribution point really exists?