Blindly retrying can be a major source of slowdowns in the generator.
Theoretically. In the worst case at least. But that sounds like premature optimization to me.
Not for me at least -- of course it depends on what you use the randomness for. An example scenario: try to place 100 rectangular rooms (disconnected for now):
rooms_placed = 0
target_room_number = 100
while rooms_placed < target_room_number:
    room_corner = generate_random_point()
    ok = place_room(room_corner)  # fails if the room would overlap an existing one
    if ok:
        rooms_placed += 1
Above, the nearer you get to the target room count, the more likely it is that a random point will land inside or right next to an existing room, so failed attempts pile up. By contrast, if you generate the candidate points from a better distribution up front, you would have fewer failures. How much that affects performance is of course variable and would have to be measured (indeed, if it's not a problem, no need for fancy stuff).
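A minimal sketch of that idea (the map dimensions, the cell size, and the place_room helper from above are all assumptions here): jitter one candidate point per grid cell, so the candidates start out spread across the map instead of clustering.

import random

def jittered_grid_points(width, height, cell):
    # One random point per grid cell. Assumes width and height are
    # multiples of cell; candidates come out roughly evenly spaced,
    # so late placements collide with existing rooms less often.
    points = []
    for cx in range(0, width, cell):
        for cy in range(0, height, cell):
            points.append((cx + random.randrange(cell),
                           cy + random.randrange(cell)))
    random.shuffle(points)  # don't fill the map in scan order
    return points

Feeding those points to place_room in order should fail far less often near the target count than fully independent samples do.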
If I had to choose between something that is pretty good at making valid dungeons and easy to retry vs. something that is mathematically proven to always be valid, I know which I would choose: the good-enough way. I've found that a good-enough generation algorithm and a rock-solid validation algorithm are far easier for me to program, far more adaptable, and require far fewer tweaks as new monsters, terrain, puzzles, abilities, etc., are added.
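A minimal sketch of that split, assuming hypothetical generate_level and is_valid functions and an arbitrary retry cap:

import random

MAX_ATTEMPTS = 50  # hypothetical cap: bail out instead of looping forever

def make_level(seed):
    rng = random.Random(seed)
    for _ in range(MAX_ATTEMPTS):
        level = generate_level(rng)  # "good enough" generator
        if is_valid(level):          # rock-solid validator
            return level
    raise RuntimeError("too many invalid levels; check the generator")

The validator stays the single source of truth, so adding a new monster or terrain type usually means touching the validator once rather than re-proving the generator correct.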
Imho that's the right mindset if one actually wants to release something.
Besides, with most algorithms I'm guessing at least a hundred or a thousand levels or rooms can be generated per second anyway.
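That's easy to check for your own generator; a rough sketch, reusing the hypothetical make_level from above:

import time

def levels_per_second(n=1000):
    # Crude benchmark: wall-clock levels generated per second.
    start = time.perf_counter()
    for seed in range(n):
        make_level(seed)
    return n / (time.perf_counter() - start)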
It pays to be fast in the long run, though. If you make the basic generator logic fast, then you can scale up either in terms of dungeon size (100x bigger levels) or in terms of running many generators for different aspects (one for level layout, one for monsters, one for torches, one for chests, one for traps, and so on).
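One possible shape for that, with each pass as a placeholder function and Level as a stand-in for whatever level representation you use (the pass names are just the examples above):

PASSES = [
    place_rooms,    # level layout
    place_monsters,
    place_torches,
    place_chests,
    place_traps,
]

def generate_dungeon(rng):
    level = Level()  # placeholder level representation
    for run_pass in PASSES:
        run_pass(level, rng)  # each pass only touches its own aspect
    return level

If each pass is cheap, the whole pipeline stays cheap even as you keep bolting on new aspects.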