- The most dangerous feature may be naked pointers. Unfortunately, their use tends to be encouraged by novices leading the way for other novices, by users of average knowledge, and even by teachers. This, in my opinion, is tragic, for several reasons:
- Naked pointers are a conceptual requirement to understand smart pointers; however, novices should avoid them at all costs until they have become familiar with at least the three most important classes of smart pointers:
- unique pointers,
- reference counted,
- and intrusive.
- That it is easier to teach only up to naked pointers than to go all the way into smart pointers is also reflected in the deceptively simple notation for naked pointers versus the tedious and repetitive notation for smart pointers (a sketch contrasting the notations follows this list). Naked pointers have great economy of syntax despite being:
- extremely hard to use productively,
- of only very narrow applicability,
- and extremely easy to misuse catastrophically.
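To make the contrast in notation concrete, here is a minimal sketch (the `Widget` type is hypothetical) of the same object held through a naked pointer, a unique pointer and a reference-counted pointer; the intrusive case is sketched further below:

```cpp
#include <memory>

struct Widget { void poke() {} };

void notationContrast() {
    Widget *raw = new Widget;                   // deceptively terse; nothing says who deletes it
    auto owned = std::make_unique<Widget>();    // unique ownership; more to type, but intent is explicit
    auto shared = std::make_shared<Widget>();   // reference counted, shared ownership
    raw->poke(); owned->poke(); shared->poke(); // identical use once created
    delete raw;                                 // manual cleanup; skipped if anything above throws
}
```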
- The greatest feature in all of this language, the implicit destructors that make RAII (Resource Acquisition Is Initialization: returning or freeing a resource is its destruction) possible, is not applicable to naked pointers. That is, naked pointers lead away from one of the greatest things in C++.
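A minimal sketch of what those implicit destructors buy you, assuming a hypothetical `Resource` type whose constructor acquires something and whose destructor releases it:

```cpp
#include <cstdio>
#include <memory>
#include <stdexcept>

struct Resource {
    Resource() { std::puts("acquired"); }
    ~Resource() { std::puts("released"); }
};

void withNakedPointer() {
    Resource *r = new Resource;
    throw std::runtime_error("oops"); // "released" never prints: the resource leaks
    delete r;                         // unreachable
}

void withRAII() {
    auto r = std::make_unique<Resource>();
    throw std::runtime_error("oops"); // ~Resource still runs during stack unwinding
}
```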
- Smart pointers, seen from the point of view of standardization, are unacceptably recent additions:
- unique_ptr, which can be a zero-performance-overhead smart pointer, was only added in the standard of '11, because it required move semantics to be added to the language itself, and move semantics is an especially subtle concept.
- shared pointers (that is, reference counted) were added in TR1. Although the Boost libraries have long had a very good implementation of shared_ptr, using it in practice used to have drawbacks: shared_ptr requires a lot of compiler template juggling, which used to make compilation much slower and, when the programmer made a mistake, produced hard-to-decipher template errors, etc.
- With regard to intrusive pointers (pointers to objects that themselves contain the reference counter), unbelievably, there is no standard intrusive pointer (a minimal sketch of one follows this list);
- auto_ptr was never useful.
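Since there is no standard intrusive pointer, here is a minimal sketch of what one could look like; the names `RefCounted` and `IntrusivePtr` are mine, for illustration only, and a production version (such as Boost's intrusive_ptr) needs more care, for example with memory ordering:

```cpp
#include <atomic>
#include <utility>

struct RefCounted {                      // the object itself carries the counter
    std::atomic<long> referenceCount{0};
};

template<typename T>
class IntrusivePtr {                     // the pointer only manipulates the embedded counter
    T *pointer = nullptr;
    void acquire() { if(pointer) { ++pointer->referenceCount; } }
    void release() { if(pointer && --pointer->referenceCount == 0) { delete pointer; } }
public:
    IntrusivePtr() = default;
    explicit IntrusivePtr(T *p): pointer(p) { acquire(); }
    IntrusivePtr(const IntrusivePtr &other): pointer(other.pointer) { acquire(); }
    IntrusivePtr(IntrusivePtr &&other) noexcept: pointer(std::exchange(other.pointer, nullptr)) {}
    IntrusivePtr &operator=(IntrusivePtr other) noexcept { std::swap(pointer, other.pointer); return *this; }
    ~IntrusivePtr() { release(); }
    T *operator->() const { return pointer; }
    T &operator*() const { return *pointer; }
};
```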
- Almost everything is mutable by default, except compile-time constructs. I think there shouldn't be a "const" keyword, I mean it should be implicit; what should be made explicit is the mutability of named things. Not only is changing the contents of memory just about the most expensive processor operation in terms of performance, it is also orders of magnitude harder to reason about mutable things than about immutable ones. That is, the language should give ways to express different degrees of immutability. There has been progress in this area, for example "constexpr", but much remains to be done. This is a multifaceted topic that I'll probably cover over many articles; below you'll see two aspects in high detail, aliasing and false sharing, which deal with mutability and the impossibility of expressing that things don't change. There are languages in which mutability is avoided in extreme ways, those of the functional programming paradigm. This is relevant to C++ in two ways:
- Metaprogramming is, amazingly, very much functional programming in concepts and in practice, especially including the feature of immutability.
- This language supports almost any sane approach to programming (except introspection, commonly referred to by the misnomer "reflection", because that requires substantial effort from the programmer to emulate), so the concepts can be applied. This is a digression, but there is a very interesting presentation by Sean Parent, "Inheritance Is The Base Class of Evil", on how to apply functional programming concepts and techniques in normal C++ that I hope to treat at length some time.
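As a small illustration of how those functional concepts can be applied in everyday C++ (a toy example of my own, not taken from the presentation): prefer pure functions that produce new values over functions that mutate their inputs.

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// A pure transformation: it reads its input, returns a new value,
// and mutates nothing that it did not create itself.
std::vector<int> doubled(const std::vector<int> &input) {
    std::vector<int> result;
    result.reserve(input.size());
    std::transform(input.begin(), input.end(), std::back_inserter(result),
                   [](int x) { return 2 * x; });
    return result;
}

int main() {
    const std::vector<int> original{1, 2, 3}; // never changes after initialization
    const auto twice = doubled(original);     // a new value; "original" is untouched
    (void) twice;
}
```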
- On the other hand, and very paradoxically, compile-time constructs are immutable. I am not advocating that compile-time constructs should be any different, just pointing out that metaprogramming in C++ does not resemble normal programming at all: the building blocks are fundamentally different. This makes it hard even for experts to do metaprogramming; that is, even though it is possible to metaprogram in C++, which proves conclusively the richness and expressiveness of this language, it requires an altogether different set of skills.
- To further complicate things, metaprogramming in C++ is based on a form of pattern matching that relies on the extremely subtle template specialization and especially confusing overload resolution mechanisms of C++, which effectively and thoroughly obfuscate metaprogramming code. Again, I don't want to criticize unfairly: if I said that C++'s current framework to express smart pointers is nothing short of a major accomplishment of human civilization, I am convinced beyond doubt that C++'s metaprogramming capabilities are one too. C++ is so expressive that metaprogramming support, far from having been a design objective, was actually discovered in the language, then invented and developed over 15 years.
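A small taste of that pattern-matching flavor, using nothing beyond standard templates: a trait that "matches" pointer types through partial specialization, computed entirely at compile time and entirely immutably (this mirrors the standard std::is_pointer; the name IsPointer is mine):

```cpp
// Primary template: the general "pattern" that matches any type T.
template<typename T>
struct IsPointer { static constexpr bool value = false; };

// Partial specialization: the more specific "pattern" that matches any T*.
template<typename T>
struct IsPointer<T *> { static constexpr bool value = true; };

static_assert(!IsPointer<int>::value, "int does not match the pointer pattern");
static_assert(IsPointer<int *>::value, "int * matches the pointer pattern");
```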
- Talking of template specializations and overload resolution: while they are extremely powerful features conceptually, their pathetic syntax leads to truly worrisome subtleties. Determining which specialization, or which overload, is picked by the compiler is probably the hardest thing in programming C++. It is not just hopelessly hard for automated tools, it is hard even for very experienced programmers! I wonder why this is so hard and yet the compilers give you no help; I mean that, for example, compilers give you a preprocessing option that shows you the macros expanded, so why wouldn't the compiler give you the equivalent code with the specializations and overloads already selected? We will talk more about this in the future. See "noexcept" below.
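To give a taste of the subtlety, here is the classic example popularized by Peter Dimov and Dave Abrahams (and discussed in Herb Sutter's "Why Not Specialize Function Templates?"): explicit specializations do not participate in overload resolution, so merely moving a declaration changes which function runs.

```cpp
#include <cstdio>

template<typename T> void f(T)    { std::puts("base template f(T)"); }      // (1)
template<>           void f(int*) { std::puts("explicit specialization"); } // (2) specializes (1)
template<typename T> void f(T*)   { std::puts("base template f(T*)"); }     // (3) overloads (1)

int main() {
    int *p = nullptr;
    f(p); // prints "base template f(T*)": overload resolution only considers (1) and (3) and
          // picks (3); (2) is never consulted because it specializes (1), which lost.
          // Had (2) been declared after (3), it would specialize (3) instead and this
          // call would print "explicit specialization".
}
```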
- It bears mentioning how hopeless it is to make C++ tools. A code editor that will autocomplete a function call? All right: to begin with, is it a global or a member function? Is there a template involved? Are there specializations for that template? To do it right, the tool must know the types of the arguments involved. The problem is that to be able to do that, the tool must essentially parse 100% of the features of the language. In due course I will collect apparently inoffensive code fragments in which it is hard to know what the things involved are.
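One classic illustration of the kind of fragment involved is the dependent-name problem: a tool cannot even parse the marked line below without knowing whether T::type names a type, which it can only know by doing the compiler's work.

```cpp
struct Thing { using type = int; };

template<typename T>
void g() {
    typename T::type *y = nullptr; // "typename" is mandatory here: without it, the compiler (and
                                   // any tool) must parse T::type as a value, which would turn
                                   // this line into a multiplication by an undeclared variable y
    (void) y;
}

int main() { g<Thing>(); }
```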
- "noexcept" and "constexpr" should be the default. The "hottest" thing in C++ today I would think is "value semantics". That you can express value semantics at all is proof of how expressive the language is, however, it gets tedious really quickly annotating yourself that things are const or constexpr and noexcept. At least noexcept is a compile time operator that will tell you whether an expression can throw...
- Passing by value, by pointer, by const pointer, by reference, by const reference, and by "rvalue reference" all have their subtleties, and in general the set is not useful.
- In my opinion, the only justification for naked pointer arguments is to indicate optionality: the argument address may be null; in any case, the callee does not assume any ownership.
- Without optionality, only pass by value or by some form of reference.
- Returning naked pointers only makes sense for optional things not owned by the caller.
- For return values or arguments to a function, use smart pointers only to indicate ownership. While the current set of smart pointers allows indicating the intended ownership, unfortunately it is not possible to dis-indicate optionality. That is one reason why I think the standard library should have intrusive pointers: they allow converting a value into a smart pointer (a sketch of these conventions in signatures follows).
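A sketch of what these conventions look like in signatures; the names are hypothetical, and the intrusive case would use something like the IntrusivePtr sketched earlier:

```cpp
#include <memory>

struct Document {};

void print(const Document &doc);                  // mandatory input, no ownership taken
void maybeAnnotate(Document *doc);                // optional input (may be null), no ownership taken
void archive(std::unique_ptr<Document> doc);      // ownership transferred to the callee
void index(std::shared_ptr<Document> doc);        // ownership shared with the callee
Document *currentSelection();                     // optional result, not owned by the caller
std::unique_ptr<Document> load(const char *path); // the caller receives ownership
```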
```cpp
void incrementAll(std::vector<BigThing> &bt, const BigThing &value) {
    for(auto &v: bt) { v += value; }
}
```

By "BigThing" is meant some data type that is expensive to copy around. For example, imagine an implementation of matrices as vectors of vectors. It is easy to see that passing as parameter a const reference to a big thing is the right thing to do, or is it not?
The issue is that, in the simple code given, there is no guarantee that the value referenced through "value" does not lie within the vector: that "value" is not an alias of an element of the vector, or even of parts of up to two elements of the vector.
Unless the function is inlined (in this case it is not, though it probably should be) and the compiler also has full information about the arguments "bt" and "value" at the call site, it will not be able to prove that the location of "value" is not within the memory buffer of the vector "bt". Unless the compiler optimizes aggressively, it will not even be able to realize that the iteration over elements happens over a sequential buffer; hence it could conceivably insert code that checks the bounds of the buffer, determines whether the location of "value" may fall within the bounds of the vector's buffer, and performs case switching, that is, automatically transforms the code into this:
```cpp
template< typename V >
void vectorizeIncrement(V *buffer, const V *limit, const V *val);
// do in parallel *ptr += *val
// for ptr in the range [buffer, limit - sizeof(V)]
// and assuming that *val does not change

void incrementAll(std::vector<BigThing> &bt, const BigThing &value) {
    auto vectorBuffer = bt.data();
    auto end = vectorBuffer + bt.size();
    auto valueLocation = &value;
    if(valueLocation < vectorBuffer or end < valueLocation) {
        vectorizeIncrement(vectorBuffer, end, valueLocation);
        // vectorizeIncrement would be something reasonable that attempts
        // to apply the increment to many elements in parallel
    } else {
        // "value" is going to be changed despite being "const"!
        // note that a "BigThing" at "valueLocation" may legitimately straddle
        // two "BigThing" elements in the vector...
        intptr_t
            valueLocationAsInteger = intptr_t(valueLocation),
            bufferAsInteger = intptr_t(vectorBuffer),
            diff = valueLocationAsInteger - bufferAsInteger,
            size = sizeof(BigThing),
            straddleBeginningIndex = diff / size,
            straddleEndingIndex = (diff + size - 1) / size; // integer ceiling
        vectorizeIncrement(
            vectorBuffer, vectorBuffer + straddleBeginningIndex, valueLocation
        );
        vectorBuffer[straddleBeginningIndex] += *valueLocation;
        if(straddleBeginningIndex != straddleEndingIndex) {
            vectorBuffer[straddleEndingIndex] += *valueLocation;
        }
        vectorizeIncrement(
            vectorBuffer + straddleEndingIndex + 1, end, valueLocation
        );
    }
}
```
The problem is that it is fundamentally impossible to know at compilation time whether the extra work of doing the case analysis, for whether the location of "value" lies in the vector's buffer, pays for itself, and this is still assuming that the compiler knows that BigThing &operator+=(const BigThing &) does not have a side effect on either the vector or the value!!
Profile-guided optimization can certainly help, but even then, what's the guarantee that the profile cases will be representative of production use long after the code has been released?
Something apparently as simple as telling the compiler "I am giving you the address only to prevent copying, but trust me, the contents do not change" is inexpressible in current C++. "const" is only a restriction on your own code: it means you prohibit yourself from explicitly changing the values declared const, but that says nothing about who else can change the values, such as a different thread, nor about whether they can change implicitly, as a side effect of a function call.
Solving this goes way beyond incorporating "restrict", because unless a call is inlined, any function call can have any arbitrary side effect; on the other hand, the only guaranteed solution to aliasing expressible within the current language, which is to pass by value and use local copies of values until the changes are written back at the end, is both a heavy-handed semantic change and a performance pit.
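For completeness, a sketch of that heavy-handed workaround applied to the incrementAll example: take a local copy of the potentially aliasing input and only ever read from the copy. The copy provably cannot alias an element of the vector, at the price of one extra copy of a BigThing.

```cpp
#include <vector>

struct BigThing {
    // stand-in for something expensive to copy
    BigThing &operator+=(const BigThing &) { return *this; }
};

void incrementAll(std::vector<BigThing> &bt, const BigThing &value) {
    const BigThing localCopy = value; // snapshot taken before any element changes
    for(auto &v : bt) {
        v += localCopy;               // the right-hand side is now provably stable
    }
}
```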
The final remarks about these complications are:
- Don't manage naked pointers; instead, link your pointers to things that will delete them in their destructors, for example, smart pointers.
- Take the trouble to constify and constexprify and noexceptify
- Endure the pain of manually making your code less and less mutable, it pays off
- When in doubt, err on the side of passing by value
- Even if you are not a sophisticate, learn enough metaprogramming so that the compile-time immutability tools help you accomplish guarantees of immutability in your practical code.