The formal representation of solvability may include only unique and salient game states. Salience, however, is subject to design, so the model is only as useful as its implementation allows.
I think these two are in fact related, since salient decisions would be those that could be taken (or maybe considered) by players making intelligent decisions. Otherwise any complex game would have a very low and irrelevant solvability (as my chess example shows).
Hmm... It doesn't have to do with player intelligence, because the solvability is generated into the challenge. The player's actual intelligence is not factored into generation (though I don't think this is what you're saying). We don't care what actions the player is making or will make; we just want to be certain that there exists a desirable number of decision paths the player can take to solve the challenge. Moves of low heuristic value (dumb moves) are not salient because we assume they are failed paths. The player's decisions don't really have anything to do with quantifying how solvability affects the challenge. Solvability should try to include some unintelligent decisions after all, though.
Now, if we are using heuristic pruning of some kind to simplify the problem of solvability, then we need a measure of which moves are important and which are not. I'm not sure that measure is the player's intelligence; the player doesn't have anything to do with how the challenge is generated.
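Here's a minimal sketch of what that pruning could look like. The state interface and the toy "reach a target number" puzzle are invented for illustration; the only point is that moves below the salience cutoff are treated as failed paths and never expanded.

```python
def count_salient_solutions(state, moves, apply_move, is_solved, heuristic,
                            depth_limit, top_k=2):
    """Count solution paths reachable through salient (high-heuristic) moves only."""
    if is_solved(state):
        return 1
    if depth_limit == 0:
        return 0
    # Rank legal moves by the designer-supplied heuristic and keep only the best few.
    salient = sorted(moves(state), key=lambda m: heuristic(state, m), reverse=True)[:top_k]
    return sum(
        count_salient_solutions(apply_move(state, m), moves, apply_move, is_solved,
                                heuristic, depth_limit - 1, top_k)
        for m in salient
    )

# Toy challenge: start at 0, add 1-3 per move, "solved" means landing exactly on 7.
paths = count_salient_solutions(
    state=0,
    moves=lambda s: [1, 2, 3],
    apply_move=lambda s, m: s + m,
    is_solved=lambda s: s == 7,
    heuristic=lambda s, m: -abs(7 - (s + m)),   # prefer moves that close the gap
    depth_limit=7,
)
print(paths)
```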
Also, I think some kind of probabilistic calculation makes more sense than simply counting the paths. Otherwise you would have to weigh paths that occur in very rare situations on the same level as paths that occur frequently. This also automatically prunes meaningless decisions (like my fire/ice example).
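Something like this sketch, maybe, where the weighted challenge graph is hypothetical: each branch carries a probability, a path's weight is the product along it, and branches whose weight drops below a cutoff are discarded automatically.

```python
def solvability(graph, state, solved, path_prob=1.0, prune_below=1e-4):
    """Return the total probability mass of paths that end in a solved state."""
    if state in solved:
        return path_prob
    if path_prob < prune_below or state not in graph:
        return 0.0                      # too unlikely, or a dead end: a failed path
    return sum(
        solvability(graph, nxt, solved, path_prob * p, prune_below)
        for p, nxt in graph[state]
    )

# Hypothetical encounter: the fight usually goes well, the sneaky branch is a coin flip.
graph = {
    "start": [(0.7, "fight"), (0.3, "sneak")],
    "fight": [(0.9, "win"), (0.1, "lose")],
    "sneak": [(0.5, "win"), (0.5, "lose")],
}
print(round(solvability(graph, "start", solved={"win"}), 3))   # 0.7*0.9 + 0.3*0.5 = 0.78
```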
I think I've been wrong in thinking about solvability as a tree (which could work). It should be a directed graph.
If that were the case, we could generate solutions as salient actions outward from the goal state, with some branching factor and desired depth, until we decide to converge on the initial state. Then we can explicitly generate the paths containing only salient events. Any divergence from these paths may or may not result in failure (which is a problem). The greater the branching factor from the goal state onward, the more solvable the challenge is; that is, solvability describes the breadth of solutions. Complexity, then, is how many required steps there are from the goal state to the initial state, or the depth and variety of actions. Excuse the redundancy, I'm still thinking this through.
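As a rough sketch of that reversed, graph-based generation (all names hypothetical): grow salient precursor states outward from the goal with a chosen branching factor, then force the last layer to converge on the initial state. Branching stands in for solvability (breadth of solutions) and depth for complexity (required steps).

```python
def generate_challenge(goal, initial, branching, depth, invent_precursor):
    """Map each state to the salient precursor states from which it is reached."""
    graph = {}
    frontier = [goal]
    for level in range(depth):
        next_frontier = []
        for state in frontier:
            if level == depth - 1:
                precursors = [initial]                 # converge on the start
            else:
                precursors = [invent_precursor(state, i) for i in range(branching)]
            graph[state] = precursors
            next_frontier.extend(p for p in precursors if p != initial)
        frontier = next_frontier
    return graph

# Hypothetical usage: label each invented state by its parent and branch index.
challenge = generate_challenge(
    goal="hydra slain", initial="enter lair", branching=2, depth=3,
    invent_precursor=lambda state, i: f"{state} <- step {i}",
)
for state, precursors in challenge.items():
    print(state, "is reached from", precursors)
```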
Hmm... Suppose a Hydra is generated that the player can't solve with their current equipment. We could also generate a single weapon that makes the Hydra solvable in some number of steps, or a whole slew of weapons and items that all need to be used in a particular order to solve it. Even a two-headed hydra can be turned into an incredibly complex problem depending on what weapons and tools are available. Runes can be used to auto-solve these problems or jump between salient events, but if we exclude them when deciding which hydras and weapons to generate, we can generate challenges whose difficulty is very measurable.
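Something like this hypothetical sketch: generate the solution alongside the Hydra, so the number of required ordered steps is known by construction and the difficulty is measurable. The item names and the one-tool-per-head model are invented, and runes are deliberately left out of the generation step.

```python
import random

TOOLKIT = ["silver blade", "frost oil", "binding net", "barbed spear"]

def generate_hydra(heads, rng=random):
    """Return (hydra, solution): the ordered item uses that solve the hydra."""
    solution = [rng.choice(TOOLKIT) for _ in range(heads)]
    solution.append("cauterizing torch")        # final step: stop the heads regrowing
    hydra = {"heads": heads, "solved_by": tuple(solution)}
    return hydra, solution

hydra, solution = generate_hydra(heads=2)
print(f"{hydra['heads']}-headed hydra, solvable in {len(solution)} ordered steps:")
print(" -> ".join(solution))
```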
Combining the two, we get that solvability would be the probability that a player wins the game (except that now this is no longer a perfect player, simply an intelligent one).
I'm reluctant to use probability to describe whether or not a player wins a game, but I think we can set the probability based upon how we address solvability. If we're analyzing a game's solvability... then yes, the digested information we get is the probability. It's probably more important to look at distinct challenges, though, instead of the entire game.
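As a tiny worked example of that, assuming (simplistically) that a run is a sequence of independent challenges with invented solvability figures: the whole-game number collapses into a single probability, while the per-challenge numbers stay useful for tuning individual encounters.

```python
# Invented per-challenge solvability figures.
challenge_solvability = {"goblin camp": 0.95, "hydra": 0.60, "dragon": 0.40}

game_solvability = 1.0
for name, p in challenge_solvability.items():
    game_solvability *= p

print(challenge_solvability)           # per-challenge: shows where to tune difficulty
print(round(game_solvability, 3))      # whole game: 0.228, far less informative
```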
This would actually make the game more complex, because we aren't counting highly equivalent actions as a contributing factor to variance when it comes to content generation. If we're in the land of 'fire and ice do the same thing,' we don't want to think of them as separate actions when generating complexity. This is just a side benefit of applying salience.
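In code, that collapsing might look like this sketch, assuming a hypothetical effect() function that maps an action to the outcome the challenge actually cares about:

```python
def salient_actions(actions, effect):
    """Keep one representative action per distinct effect."""
    representatives = {}
    for action in actions:
        representatives.setdefault(effect(action), action)
    return list(representatives.values())

# Hypothetical usage: fire and ice resolve the challenge the same way, so only
# one of them counts toward the variety of actions when measuring complexity.
actions = ["fire bolt", "ice bolt", "shield bash", "retreat"]
effect = lambda a: "ranged burst" if a.endswith("bolt") else a
print(salient_actions(actions, effect))   # ['fire bolt', 'shield bash', 'retreat']
```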
But we should also apply this to your earlier example: there are many paths when killing goblins, but not so many when killing dragons.
I think it still works out: if salience only cares about the moves with the highest heuristic values, goblins will be more solvable because more of those moves are likely to lie on solution paths than in the case of the dragon. Unless, of course, that's not how the game is designed.
Calculate all possible board states and decision paths of chess: what is the ratio between wins, losses, and draws? We obviously can't do that, but we're not trying to.
It should be possible to calculate this in very simple situations (like king vs king + 2 rooks and some simplifications of the draw rules).
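As a sketch of that kind of exhaustive labelling, here's a toy stand-in game (take 1 or 2 tokens, taking the last one wins) with every state labelled as a win or loss for the player to move; a king vs. king + 2 rooks table would enumerate piece placements in the same spirit, just with many more states and with draw rules to handle.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(tokens):
    """+1 if the player to move wins with perfect play, -1 if they lose."""
    if tokens == 0:
        return -1             # no move available: the previous player already won
    return max(-value(tokens - take) for take in (1, 2) if take <= tokens)

states = range(30)
wins = sum(1 for t in states if value(t) == +1)
print(f"{wins}/{len(states)} states are wins for the player to move")
```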
Yeah, I just mean we can't generate the entire game-state tree of chess; it's one of those end-of-the-universe problems. Of course, solvability as a model might hypothetically care about all of that; we just have to implement it in a salient way for it to matter. For really good players, chess doesn't really begin until after the 20th move or so: both players have the same knowledge of the game from the initial state, so they can silently agree on how the game will progress (because the best first 20 moves or so have more or less been solved). I kind of hate chess for that. Knowledge becomes more important than a person's ability to solve problems.
I'm really just trying to rationalize the most complete and useful way to regulate difficulty in a game so that it remains meaningful and challenging. Solvability and complexity, and their sub-properties, expressed in terms of some organization of actions, seem to be close, but there are a lot of gaps that need to be worked out.