You get your memory leaks from not releasing allocated memory.
What does this mean? I'm glad you asked!
Let's say you have some arbitrary collection (maybe an array, maybe a hash table, whatever) that indexes a bunch of stuff in the dungeon by x,y coordinate.
You create a monster. Let's say it's a Dire Leaf. And you put it somewhere in the dungeon.
(pseudocode)
cellsCollection.put( x, y, new DireLeaf() );
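To make that concrete, here's roughly how the same thing might look in real C++. None of these names come from anywhere; they're invented for illustration, so the cleanup code later has something to refer to.

(C++ sketch)

#include <map>
#include <utility>

struct Monster {
    virtual ~Monster() {}  // virtual destructor, so delete works through a Monster*
};
struct DireLeaf : Monster {};

// The dungeon's cells, indexed by (x, y). Each entry is a raw pointer
// to a monster that was allocated on the heap with 'new'.
std::map<std::pair<int, int>, Monster*> cellsCollection;

void placeDireLeaf(int x, int y) {
    // 'new' grabs heap storage and hands back a pointer; we now own it.
    cellsCollection[std::make_pair(x, y)] = new DireLeaf();
}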
The Dire Leaf wanders about the dungeon for a while, scratching random bits of pseudocode into the walls. Then a Corremn comes along and kills the Dire Leaf. You must remove the dead Dire Leaf from your object graph. Let's say the DireLeaf is at x,y.
cellsCollection.remove( x, y );
And you go on about your day.
WAIT STOP! You have a memory leak! Why?!?
When you instantiated the new DireLeaf earlier, storage for the object was allocated on the heap, and you were handed a pointer to it. In languages like C and C++, you MUST explicitly release that storage when you're done with it, or it will never be reclaimed (at least not until the program exits or crashes). Storage that's no longer needed but never reclaimed is what we call garbage. The way you're supposed to release it varies between C and C++.
(assuming a pointer to an object allocated with new in C++): delete deadDireLeaf;
(assuming a pointer to a struct in C): free( deadDireLeaf );
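Tying that back to the C++ sketch from before (still with invented names): removing the map entry and freeing the object are two separate steps, and the leak comes from doing the first without the second.

(C++ sketch, continuing from the one above)

void killDireLeaf(int x, int y) {
    auto it = cellsCollection.find(std::make_pair(x, y));
    if (it == cellsCollection.end()) return;  // nothing at that cell

    Monster* deadDireLeaf = it->second;
    cellsCollection.erase(it);  // the collection forgets the pointer...
    delete deadDireLeaf;        // ...so we must free the storage ourselves,
                                // or it leaks
}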
Well that's easy, it seems. I suppose it is.
But as the code becomes more complex, it stops being easy, and the memory leaks start to creep in. Memory leaks are terribly difficult to debug, even with tools specially designed for the job. I'd say they're even more difficult than threading bugs like race conditions and deadlocks. What makes them so difficult is that your code just blithely goes about its day, running fine, until you have so much unfreed garbage that you can't allocate any more memory, at which point the program crashes. And the stack trace you get points at whatever allocation happened to fail, which usually has nothing to do with the actual source of the leak.
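If you want to see just how quietly this happens, here's a toy program (entirely contrived) that leaks a kilobyte every turn. Nothing complains; it just eats memory until an allocation finally fails.

(C++ sketch)

#include <cstddef>

struct DireLeaf { char scratchings[1024]; };

int main() {
    for (std::size_t turn = 0; ; ++turn) {
        DireLeaf* leaf = new DireLeaf();  // 1 KB per turn, never freed
        (void)leaf;  // the pointer is dropped on the floor right here
        // In a real program, the eventual out-of-memory crash usually
        // happens at some unrelated allocation, far from the missing delete.
    }
}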
More modern languages like C#, Python, and Java do their garbage collection automatically. Remember the "cellsCollection.remove( x, y )" from before? That's all you have to do. Assuming nothing else in the program is still holding a reference to the dead Leaf, the runtime will notice that its memory is no longer reachable and free it for you automagically. This means you get to spend more time actually implementing things and less time trying to figure out why your program is suddenly using 4 gigs of RAM after you've been playing for 5 hours.
What I think many people don't realize is that automatic garbage collection can actually be /faster/ in practice than explicit deallocation. Sure, it consumes more CPU time overall, while the collector walks your object graph marking and sweeping. But collection typically runs at a lower priority than your program, so it tends to happen when your program would otherwise be idle (sitting there waiting for you to press a key), rather than in the middle of that tight loop where you'd be deleting a bunch of objects.
Don't get me wrong. I think explicit deallocation has its place. But I believe that these days its place is firmly in the realm of bare-metal stuff, rather than high-level application programs. That's also not to say that you can't still have memory leaks in languages that collect garbage automatically (all it takes is one forgotten reference that keeps a dead object reachable from the top of the object graph), but such leaks are much, much rarer and easier to find. Often the bug that causes a leak in an auto-GC'd language causes other, more visible problems as well, and the leak will magically disappear once you find and fix those.