You sit down with a well-mixed deck containing A cards marked "+" and B cards marked "-". You may draw cards from this deck as long as you want, i.e., you can stop playing at any point. Each time you draw a + card you are given $1, and each time you draw a - card you have to pay $1. Cards are not replaced after having been drawn.
What would be a fair amount to pay for the right to play (i.e., what is the expected payoff), and under what circumstances should a player cease drawing?
I think the first question to resolve is really the second part of the task FrankM defines, i.e. "under what circumstances should a player cease drawing." How that is answered will affect what (if anything) needs to be calculated as the "fair amount" to get the "right to play." It seems to me that the right strategy for play can be defined independently of the "fair amount", but the decision on whether to play at all would THEN require calculating the suitable "pay to play" for the given #A and #B. We assume the carney would set the entry fee high enough that the odds would be against the player ("Never give a sucker an even break"), but then no rational person would ever play! Assuming both sides agree on the calculation of the odds, the game would never be played (I suppose if the entry fee were exactly the expected payoff, a player might still enter -- to kill time if nothing else). FM hasn't replied to my query, but I think we are all assuming the game has a one-time upfront fee, not a fee for each draw.
My original quick guess was that a player should enter only if there were more A cards than B cards; FM showed this was not the best strategy by considering A=B=1. I will leave the decision WHETHER to play at all -- which involves the "fee to play" -- to others. Once one has decided to play, i.e. one will draw at least one card, the question is when one should stop.
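(To spell out that A=B=1 case as I understand it: the player draws once; with probability 1/2 it is the +, he pockets $1 and stops; with probability 1/2 it is the -, he is down $1, but drawing the one remaining + card brings him back to even. He never finishes behind, and the expected payoff is (1/2)(1) + (1/2)(0) = 1/2 > 0, even though A is not greater than B.)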
One should stop when the expectation for continuing is lower than the expectation for stopping (which, measured from the current position, is zero). This turns on the asymmetry that the player may stop at any point, while the carney is obliged to continue so long as the player wishes (this is why the A=B case is the anomaly).
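To make "the expectation for continuing" concrete, here is a minimal sketch of how one might compute it, writing W(a, b) for the expected value of optimal play with a plus cards and b minus cards remaining (the name W, the recursion, and the choice of Python are mine, not anything FM has specified):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def W(a, b):
    """Expected value of optimal play with a '+' cards and b '-' cards left.
    Stopping is always available and is worth 0 from the current position."""
    if a == 0:
        return 0.0            # only '-' cards remain: stop immediately
    if b == 0:
        return float(a)       # only '+' cards remain: draw them all
    total = a + b
    # Expected value of drawing one more card and then continuing optimally.
    draw = (a / total) * (1 + W(a - 1, b)) + (b / total) * (W(a, b - 1) - 1)
    return max(0.0, draw)     # stop once drawing has negative expectation
```

The stopping rule is the max(0, draw) in the last line, and the "fair amount to pay" in the first question would presumably just be W(A, B) evaluated at the starting counts.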
At any point, let rA and rB be the remaining numbers of A and B cards (if one starts from the initial #A and #B and notes the result of each draw, these are known at every stage of play). My simple rule would be to stop drawing when rB EXCEEDS rA (at first I thought one should stop whenever rA was not greater than rB). If rA = rB, the odds favor continuing: if the player draws an A he is up a dollar and can then decide what to do next; even if he draws a B, he can always just keep drawing until the deck is exhausted (assuming the wording does not preclude "drawing" when only one card is left), and since the carney is obliged to continue play, that continuation leaves him exactly even.
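As a quick sanity check on the rA = rB claim, one could evaluate the sketch above at a few equal counts (these particular values are just my own small test):

```python
for n in range(1, 6):
    # W(n, n) is the value of optimal play with n '+' and n '-' cards left.
    print(n, W(n, n))
```

For n = 1 this gives 1/2, matching the A=B=1 example.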
I am not sure from reading previous posts whether this is agreed; some seem to say (or are read by others as saying) that there are cases where the player should continue drawing even when rB > rA. Once in the game, the player should optimize regardless of his initial cost to play, and that means ending the draws as soon as the odds turn against continuing.
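If anyone wants to settle that point, the same sketch makes it easy to test directly; this loop (again only an illustrative check of mine) lists any small states with rB > rA in which drawing one more card still has positive expectation under optimal continuation, if such states exist:

```python
for a in range(1, 6):
    for b in range(a + 1, 7):   # only states where rB exceeds rA
        total = a + b
        # Expected value of drawing once more and then continuing optimally.
        draw = (a / total) * (1 + W(a - 1, b)) + (b / total) * (W(a, b - 1) - 1)
        if draw > 0:
            print(f"rA={a}, rB={b}: expected value of drawing = {draw:.3f}")
```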
I grant that devising a formula for W(A,B) is the real mathematical challenge here (but I'll leave that to others). If my logic regarding the winning strategy (once started) is faulty, though, where have I gone off the rails?