Why Not To Use Pointers

So if pointers are so great, why not use them for everything? For two reasons, both of which follow from the fact that pointers are a form of indirection. Indirection, at heart, adds an extra step to the process of getting at the information you need, and that extra step makes the process more complicated.
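To make that extra step concrete, here is a minimal C sketch (variable names are made up purely for illustration): reading a value directly is one operation, while reading it through a pointer means holding an address and then following it.

    #include <stdio.h>

    int main(void)
    {
        int value = 42;        /* direct: the name gives you the data         */
        int *ptr  = &value;    /* indirect: the name gives you an address     */

        printf("%d\n", value); /* one step: read the variable                 */
        printf("%d\n", *ptr);  /* two steps: read the pointer, then follow it */
        return 0;
    }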

With that complexity comes, firstly, increased risk: you have to manage memory yourself, which is fraught with peril and requires both experience and discipline. The object has to exist somewhere for a pointer to have something to point to, so you now have three things to manage: the pointer, the pointee and the relationship between them.
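Here is a hedged C sketch, purely illustrative, of how the pointer, the pointee and the relationship between them can get out of step once you are managing the memory yourself:

    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(sizeof *p);   /* the pointee now exists on the heap      */
        if (p == NULL)
            return 1;
        *p = 42;                      /* pointer, pointee and link all consistent */

        free(p);                      /* the pointee is gone...                   */
        /* ...but the pointer still holds the old address: using *p here is
           undefined behaviour (a dangling pointer), and forgetting the free
           entirely would have leaked the memory instead.                        */
        p = NULL;                     /* discipline: sever the stale relationship */

        return 0;
    }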

Secondly, the added complexity of indirection brings inefficiency, whether in CPU cycles or in memory usage. Pointers are often unnecessary overhead: for(i=0; i<10; i++), for example, would be overly verbose if written in pointer notation. If you’re just dealing locally with simple built-in types that don’t take up much space, there’s often little to be gained by using pointers. It’s only when you start passing things around, particularly large things, that the overhead of indirection becomes smaller than the cycles or space wasted by not using it.
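A sketch of that trade-off, with a made-up struct size and function names used only for illustration: the index loop needs no pointer machinery at all, while passing a large object around is where indirection starts to pay for itself.

    #include <stdio.h>
    #include <string.h>

    #define N 10

    /* A deliberately large type: passing it by value copies ~4 KB per call,
       passing a pointer copies the size of one address. (Hypothetical sizes.) */
    struct Big {
        char data[4096];
    };

    static size_t length_by_value(struct Big b)          { return strlen(b.data);  }
    static size_t length_by_pointer(const struct Big *b) { return strlen(b->data); }

    int main(void)
    {
        int a[N];

        /* The plain index form: simple and clear for a small local loop.   */
        for (int i = 0; i < N; i++)
            a[i] = i;

        /* The same loop in pointer notation: more machinery for no gain.   */
        for (int *p = a; p < a + N; p++)
            *p = (int)(p - a);

        struct Big big;
        strcpy(big.data, "hello");

        printf("%zu %zu %d\n",
               length_by_value(big),      /* copies all 4096 bytes           */
               length_by_pointer(&big),   /* copies one pointer              */
               a[N - 1]);
        return 0;
    }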