I understand that these four-at-bat sample sizes even out over the course of a 162-game season, but as you've stated, OPS is outdated. So even if it evens out with a larger sample size, we've established that a 2–5 point deviation between wRC+ and OPS+ can happen, and the former is preferred. So if you have, say, two players on the same team with identical park factors and an equal OPS+, while one of the players has a 2-point edge in wRC+, what accounts for that difference?
Is it that one formula correlates more strongly with player success? Like in the NFL, for example, where passer rating is the most popular quarterback efficiency statistic while adjusted net yards per attempt (ANY/A) correlates more strongly with wins and losses. A QB who goes 6/9 for 100 yards and a TD pass, with one sack for a 6-yard loss, has a higher passer rating than a guy who goes 6/10 for 100 yards and a TD pass with no sacks, even though the second QB has the higher ANY/A, if you get my gist.
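To make that NFL comparison concrete, here's a quick sketch using the standard NFL passer rating formula and the usual ANY/A definition, (pass yards + 20×TD − 45×INT − sack yards) / (attempts + sacks). The stat lines are the two hypothetical QBs from above; the function names are just for illustration:

```python
def passer_rating(comp, att, yards, td, ints):
    # Standard NFL passer rating: four components, each clamped to [0, 2.375]
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((comp / att - 0.3) * 5)
    b = clamp((yards / att - 3) * 0.25)
    c = clamp(td / att * 20)
    d = clamp(2.375 - ints / att * 25)
    return (a + b + c + d) / 6 * 100

def any_a(yards, td, ints, sacks, sack_yards, att):
    # Adjusted Net Yards per Attempt penalizes sacks; passer rating ignores them
    return (yards + 20 * td - 45 * ints - sack_yards) / (att + sacks)

# QB A: 6/9, 100 yards, 1 TD, sacked once for a 6-yard loss
qb_a_rating = passer_rating(6, 9, 100, 1, 0)   # ~141.0
qb_a_anya = any_a(100, 1, 0, 1, 6, 9)          # 11.4

# QB B: 6/10, 100 yards, 1 TD, no sacks
qb_b_rating = passer_rating(6, 10, 100, 1, 0)  # ~127.1
qb_b_anya = any_a(100, 1, 0, 0, 0, 10)         # 12.0
```

QB A wins on passer rating (the sack and its lost yardage never enter the formula, and his completion percentage is higher), while QB B wins on ANY/A, which is exactly the divergence described above.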