A Problem with Nash Equilibrium
The idea of Nash equilibrium is that each player of a multi-player game observes the strategies all other players are following and then chooses the strategy that is best for him given what they are doing. In other words, he freezes their behavior, assuming they will do the same thing whatever he does.
The problem is that my choices affect what choices are available to you. It is not, in general, possible for all other players to keep doing the same thing whatever I do--some of the things I might do would make things they might be doing impossible. Hence in defining Nash equilibrium we must implicitly assume, not that other players don't react, but that they react in some specified way, something we can describe as following the same strategy in the differing conditions corresponding to different choices I might make. There is no theoretical basis for deciding what that specified way is, hence Nash equilibrium is not clearly defined.
Consider an oligopoly. Each firm is producing a quantity and selling it at a price--all at the same price if the goods are perfect substitutes. If one firm changes the quantity it produces and sells, it is no longer possible for all the other firms to keep selling the same quantity as before at the same price as before.
We might define a strategy as a price and assume that when I change my price everyone else keeps the price he is charging the same. The result is Bertrand competition. As long as price is above cost, it pays a firm to charge a penny less than everyone else so as to capture the whole market, or at least as much of it as it can produce--for simplicity assume constant costs. So the equilibrium is price equal to cost.
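The undercutting logic can be sketched numerically. This is a minimal illustration, not part of the original argument: two firms with an assumed constant marginal cost, each best-responding by charging a penny less than its rival but never below cost. The cost and starting prices are arbitrary illustrative numbers.

```python
# Bertrand price competition, sketched as iterated best responses.
# Assumptions (all illustrative): two firms, identical constant
# marginal cost, the cheaper firm takes the whole market, so the
# best response to any price above cost is to undercut by a penny.

COST = 10.00    # constant marginal cost per unit (assumed)
PENNY = 0.01

def best_response(rival_price, cost=COST):
    """Charge a penny less than the rival, but never price below cost."""
    return max(cost, rival_price - PENNY)

p1, p2 = 20.00, 20.00            # arbitrary starting prices
for _ in range(10_000):          # iterate best responses to convergence
    p1 = best_response(p2)
    p2 = best_response(p1)
    if p1 == COST and p2 == COST:
        break

print(p1, p2)   # → 10.0 10.0 : both prices driven down to marginal cost
```

The fixed point of the undercutting process is price equal to marginal cost, matching the verbal argument above.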
Alternatively, we might define a strategy as a quantity and assume that when I change the quantity I produce everyone else keeps his production constant; price then adjusts until total quantity demanded equals our summed production. That is Cournot competition; the analysis is more complicated and yields a different result.
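The quantity-setting case can also be sketched numerically, under an assumed linear demand curve P = a - b*Q and the same constant marginal cost; the parameter values and the helper function here are illustrative assumptions, not anything from the text. With linear demand, a firm's profit-maximizing quantity given its rival's output has a simple closed form, and iterating those best responses converges to a price above cost.

```python
# Cournot quantity competition under assumed linear demand P = a - b*Q,
# two firms, constant marginal cost c. Parameter values are illustrative.
# Holding the rival's quantity q_j fixed, firm i maximizes (P - c)*q_i,
# which gives the best response q_i = (a - c - b*q_j) / (2*b).

A, B, C = 30.0, 1.0, 10.0        # demand intercept, slope, marginal cost

def best_response(rival_q):
    """Profit-maximizing quantity given the rival's quantity."""
    return max(0.0, (A - C - B * rival_q) / (2 * B))

q1 = q2 = 0.0
for _ in range(1000):            # iterate best responses to the fixed point
    q1 = best_response(q2)
    q2 = best_response(q1)

price = A - B * (q1 + q2)        # market-clearing price at total output
print(round(q1, 4), round(q2, 4), round(price, 4))  # → 6.6667 6.6667 16.6667
```

Each firm produces (a - c)/(3b) and the resulting price, 16.67 in this example, sits above the marginal cost of 10: the same firms, under the quantity definition of a strategy, reach a different equilibrium than under the price definition.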
This is not a matter of having multiple Nash solutions, which is also a possibility. It's a matter of not knowing what the Nash solution is until you make an essentially arbitrary definition of a strategy.
I wouldn't be surprised if all this is familiar to people who spend more of their time than I do on game theory, but it isn't mentioned in the text I use and, offhand, I don't remember seeing it in other texts.