Verified Tanker [EU]
About DeinFreund

  1. @Buckyball Thanks for your positivity, I hope you enjoyed reading the paper! I've already tried different weighting systems for Teams in Zero-K, even though there are no tiers (more weight to newbies and the like). It should be easy to adapt this for tiers. From my experience, though, it'd probably be best to completely ignore tiers and tanks in the rating. This might sound weird, but in many games those things only make a minor impact: the enemy might have 50% more HP, but skill and teamwork matter far more. Too bad WOT is a closed-source, for-profit game. There's also an easy and effective way to compare different iterations of a rating system, or completely different systems: take the probability p with which the system predicted the actual outcome, then score the system by `1 + log2(p)`. This gives a score of 0 for a 0.5 (50:50) prediction, 1 for a perfect prediction, and negative infinity if the system was completely wrong. Average this over all games and you get a fairly good score by which to compare different systems even on smallish sample sets (much better than counting correct predictions). If you think you could use this rating somewhere, just let me know. Of course, you're also free to copy my Java and C# implementations.
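The scoring rule described above can be sketched like this (a minimal standalone example; the class and method names are mine, not taken from any existing implementation):

```java
// Score a rating system's prediction: 1 + log2(p), where p is the
// probability the system assigned to the outcome that actually happened.
public class LogScore {
    // p must be in (0, 1]; p -> 0 drives the score toward negative infinity.
    static double score(double p) {
        return 1.0 + Math.log(p) / Math.log(2.0); // log2(p)
    }

    public static void main(String[] args) {
        System.out.println(score(0.5));  // 0.0  - no better than a coin flip
        System.out.println(score(1.0));  // 1.0  - perfect prediction
        System.out.println(score(0.25)); // -1.0 - confident and wrong
    }
}
```

Averaging `score(p)` over a set of games gives the comparison metric from the post.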
  2. No, Bayesian rating systems are completely independent of the matchmaking; they are designed to account for any kind of match. For example, with Elo: if you are put in matches that you will lose nine out of ten times, and you do lose nine out of ten times, your rating will stay roughly constant. Chess tournaments aren't made to put people of similar Elo against each other. That may be the case in the first round of some tournaments, but after that it's completely open. I've implemented WHR for Zero-K without even needing a matchmaker. Of course, most systems will work better with good matchmaking: it's easier to learn a player's strength when he is put against equal opponents, and impossible to determine it if he wins or loses every match. The random MM we have in WOT is already more than adequate for this. I've noticed this; there would be no point trying to make this a rating for the general WOT player, which is why I suggested using it for competitive/esports only. But I doubt there actually is any interest in having a rating for WOT tournaments. It seems this would be of much more use for games like CS:GO or Dota.
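The "lose nine out of ten and stay constant" claim can be checked numerically with the standard Elo formulas (K = 32 is just a common choice, not anything WOT-specific):

```java
// A player 400 Elo points below the opposition has an expected score of
// 1/11 (about 0.091), so winning roughly 1 game in 10 is almost exactly
// break-even: the net rating change over those ten games is tiny.
public class EloEquilibrium {
    static final double K = 32.0; // update step size

    // Expected score (win probability) of a player rated rA against rB.
    static double expected(double rA, double rB) {
        return 1.0 / (1.0 + Math.pow(10.0, (rB - rA) / 400.0));
    }

    public static void main(String[] args) {
        double me = 1400, them = 1800;
        double p = expected(me, them); // 1/11, about 0.091
        // Net change over 1 win and 9 losses: K*(1-p) + 9*K*(0-p)
        double net = K * (1.0 - p) + 9 * K * (0.0 - p);
        // net is about +2.9, i.e. well under a tenth of a single K-swing;
        // winning exactly 1 game in 11 would leave the rating unchanged.
        System.out.printf("p = %.3f, net change = %.2f%n", p, net);
    }
}
```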
  3. Apparently I got that wrong. Just found Is there an API for these, or would I have to write a scraper? I'm not sure it's really worth doing, though; is there any interest in tournament ratings? I'm not quite sure what you're trying to say, but the idea of a good rating system is to require only a small number of battles to get good results, so you can track a player's skill development and estimate his current strength. The currently used ratings mostly seem to concentrate on getting a good estimate of a player's average skill. For a player with four years of history, that average represents how he played two years ago rather than how he plays now. You can't just use recent winrate because it is too random; this is where rating systems come in. @Private_Miros thanks
  4. It seems even the dossier doesn't contain any information about the other players you played with. To establish such a rating, I'd need to know about individual battles: who participated in them and who won. Apparently this is only available in replays, so WOT would need some kind of battle database that I could use. The rating would be well suited for esports, for both teams and individual players, so maybe I could use it with the esports section of wotreplays. I'm not sure how much the site owners like scrapers, though. Also, WOT seems to be rather new to esports; I'm not sure whether enough tournaments are available for a rating to be sensible.
  5. @Haswell Thanks, it seems individual battles aren't available in the API, so WHR wouldn't work. If a moderator could move this into the right section, that'd be nice.
  6. There are definitely a lot of drawbacks caused by how different tanks and tank compositions affect the outcome. I can't really say whether, and for which use cases, this would work at all. WNR is probably much more precise, but also much more exploitable. However, I might as well try it.
  7. It is only based on winning, just like winrate. But winrate is pretty much the most jittery rating you can get, needing thousands of games to average out its errors. It's also susceptible to platooning, or to WG manipulating the matchmaker to not be random. Elo is the basic step up: instead of only considering whether you won, it also checks your enemies and allies. If you play with a strong team (possibly caused by platooning), you only earn a little for winning. If you play against a strong team, you only lose a little on a defeat and gain a lot on a victory. Everybody starts at the same rating, and over time it converges to a good approximation of each player's influence on winning games. TrueSkill, Glicko and WHR are all improvements based on the same idea. I wouldn't worry too much about bottom/center/top tier; I've seen similar things have little impact in other games, and especially with the new MM it shouldn't be much of an issue. Not incorporating any in-game stats is part of the foundation of all these ratings: once you start rewarding things other than winning, there are always loopholes and uncovered cases. So in short, these systems would be a possible replacement for winrate that reduces the impact of platooning and bad MM. It could be applied per tank, per tier or per player. I am doubtful whether WG actually provides the API required for such ratings, though; the rating could still be used on sites where you upload your battle history. I think I've seen some similar topics before, so I expected this to be a familiar sight. Because I already have the code to test it, I thought I'd just give it a go if you have some data sets available.
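One hedged sketch of how the Elo idea above could extend to 15-player teams: rate each team by the average of its members' ratings and share the usual update among them (averaging is a common convention, my assumption here, not necessarily what WOT or any stat site actually does):

```java
import java.util.Arrays;

// Team Elo sketch: team strength = average member rating, and every member
// receives the same rating change after the battle.
public class TeamElo {
    static final double K = 32.0;

    static double expected(double rA, double rB) {
        return 1.0 / (1.0 + Math.pow(10.0, (rB - rA) / 400.0));
    }

    // resultA = 1.0 if team A won, 0.0 if it lost, 0.5 for a draw.
    static double[] update(double[] teamA, double[] teamB, double resultA) {
        double rA = Arrays.stream(teamA).average().orElse(0);
        double rB = Arrays.stream(teamB).average().orElse(0);
        double delta = K * (resultA - expected(rA, rB));
        double[] updated = new double[teamA.length];
        for (int i = 0; i < teamA.length; i++) updated[i] = teamA[i] + delta;
        return updated;
    }

    public static void main(String[] args) {
        double[] strong = {1800, 1750, 1850};
        double[] weak = {1400, 1450, 1350};
        // Strong team wins as expected: small gain per player.
        System.out.println(Arrays.toString(update(strong, weak, 1.0)));
        // Weak team wins as an upset: large gain per player.
        System.out.println(Arrays.toString(update(weak, strong, 1.0)));
    }
}
```

This reproduces the asymmetry from the post: a favourite gains little from an expected win, an underdog gains a lot from an upset.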
  8. So this is the forum to go to when you're tired of the WG forums? I'll keep this short in case I'm posting in the wrong place. Maybe I haven't done my research properly, but I somehow failed to find out how to access the battle API of WOT. I've seen there are a lot of stat-based metrics for WOT, and am disappointed at how much winning games is neglected. You have probably tried Elo rating before, and I wouldn't be astonished if the large teams made it a pain to work with. As I have just implemented Whole History Rating (publication) for my favourite open-source game, I thought I might as well see how it does on WOT. It generally converges much faster than Elo and might thus have a chance of coping with WOT's big teams, although I can't make any guarantees. WHR is a Bayesian rating system like Elo. It is time dependent, meaning your rating function is optimized for each point in time, not just your most recent or average game. It also attaches an uncertainty to each point in time, similarly to Glicko or TrueSkill (which only store one data point). The only thing that improves your rating is winning games, so it's much harder to exploit than ratings like WNR and winrate. I will explain this in more detail later; you can also read this thread if you're interested. If you're wondering who else is using this system, take a look at Go ratings. How does one access the WG API? Are there any battle data sets (containing a list of battles, with the usernames in each battle and who won) that I could test this on? Let me know if you need more information.
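For the curious, the probability model underneath both WHR and Elo is Bradley-Terry. Here is a minimal sketch, using the Elo-scale conversion (rating = 400·log10(γ)) from Coulom's WHR paper; WHR's addition is that γ becomes a function of time, estimated over the player's whole game history:

```java
// Bradley-Terry model: each player has a strength gamma > 0, and
// P(A beats B) = gamma_A / (gamma_A + gamma_B).
public class BradleyTerry {
    // Convert an Elo-scale rating to a Bradley-Terry strength.
    static double gamma(double elo) {
        return Math.pow(10.0, elo / 400.0);
    }

    static double winProb(double eloA, double eloB) {
        double gA = gamma(eloA), gB = gamma(eloB);
        return gA / (gA + gB);
    }

    public static void main(String[] args) {
        System.out.printf("%.3f%n", winProb(1500, 1500)); // 0.500 - equal players
        System.out.printf("%.3f%n", winProb(1900, 1500)); // 0.909 - 400 points stronger
    }
}
```

This is algebraically identical to the usual Elo expected-score formula, which is why the two systems are directly comparable.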