
A discussion of the motivation behind the revision from Managed Extensions for C++ to Visual C++ 2005.

Probably the most conspicuous and eyebrow-lifting change between Managed Extensions and the new syntax is the change in the declaration of a managed reference type:

// Managed Extensions
Object * obj = 0;

// new syntax
Object ^ obj = nullptr;

There are two primary questions people ask when they see this. Why the hat (as the caret (^) is affectionately called along the corridors here within Microsoft)? And, more fundamentally, why any new syntax at all? Why couldn't Managed Extensions have been cleaned up less invasively, rather than replaced with the admittedly in-your-face strangeness of the new syntax?

C++ is built upon a machine-oriented systems view. Although it supports a high-level type system, there is always an escape mechanism, and those mechanisms always lead down into the machine. When push comes to shove, and the user is hard-pressed to pull a rabbit out of the hat, she tunnels under the program abstractions, picking apart types into addresses and offsets.

The CLR is a software abstraction layer that runs between the OS and our application. When push comes to shove, the user reflects upon the execution environment, querying, coding, and creating objects literally out of thin air. Instead of tunneling, one jumps over, but the experience can be unsettling to those used to having both feet on the ground.

For example, what does it mean when we write the following?

T t;

Well, in ISO-C++, regardless of the nature of T, we are certain of the following characteristics: (a) there is a compile-time memory commitment of bytes associated with t equal to sizeof(T), (b) this memory associated with t is independent of all other objects within the program during the extent of t, (c) the memory directly holds the state/values associated with t, and (d) this memory and state persists for the extent of t.
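
As a minimal sketch, assuming a concrete value type T is substituted purely for illustration, those four guarantees look like this in code:

struct T { int x; };       // a concrete stand-in for T, assumed here for illustration

T t;                       // (a) sizeof(T) bytes committed at compile time
T u = t;                   // (b), (c) u directly holds its own, independent copy of t's state
u.x = 10;                  // modifying u leaves t untouched
                           // (d) t's memory and state persist until t goes out of scope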

What are some of the consequences of these characteristics?

Item (a) tells us that t cannot be polymorphic. That is, it cannot represent a family of types related through an inheritance hierarchy. Put another way, a polymorphic type cannot have a compile-time memory commitment except in the trivial case in which derived instances impose no additional memory requirements. This is true regardless of whether T is a primitive type or serves as a base class to a complex hierarchy.

A polymorphic type in C++ is possible only when the type is qualified as either a pointer (T*) or as a reference (T&) – that is, if the declaration only indirectly refers to an object of T. If I write

Base b = *new Derived;

b does not address a Derived object stored on the native heap. b has no connection to the Derived object allocated through the new expression. Rather, the Base portion of the Derived object is sliced off and memberwise-copied into the independent stack-based instance of b. There is really no vocabulary to describe this within the CLR Object Model.

To delay resource commitment until run-time, two forms of indirection are explicitly supported in C++:

Pointers conform to the C++ Object Model. In

T *pt = 0; 

pt directly holds an address value that is of fixed size and extent. Lexical cues are used to toggle between the direct use of the pointer and the indirect use of the pointed-to object. It can be famously unclear at times which mode applies to what or when or how: *pt++;
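
For instance, a quick illustration of how that particular expression parses (the array and variable names here are invented for the example):

int a[] = { 1, 2, 3 };
int *pt = a;

int v = *pt++;   // parses as *(pt++): the pointer itself is bumped,
                 // yet the dereference yields the element pt addressed before the bump
                 // so v == 1, and pt now addresses a[1]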

References provide a syntactic relief from the seeming lexical complexity of pointers while retaining their efficiency:

Matrix operator+( const Matrix&, const Matrix& ); 
Matrix m3 = m1 + m2;

References do not toggle between a direct and an indirect mode; rather they phase-shift between the two: (a) at initialization, they are directly manipulated, but (b) on all subsequent uses, they are transparent.

In a sense, a reference represents a quantum anomaly in the physics of the C++ Object Model: (a) they take up space but, except for temporary objects, they are immaterial, (b) they exhibit deep copy on assignment and shallow copy on initialization, and (c) unlike const objects, they really are immutable. While they are not all that useful within ISO-C++ except as function parameters, they turn out to be an inspirational pivot upon which the language revision pirouettes.
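
A small sketch of that phase-shift and of point (b), with illustrative variable names:

int x = 0, y = 42;

int &r = x;      // initialization: r is bound directly to x (shallow -- only the binding is established)
r = y;           // assignment: y's value is copied through r into x (deep); r is not rebound
                 // afterwards x == 42, and r still refers to x for the rest of its extent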

The C++.NET Design Challenge

Literally, for every aspect of the C++ extensions to support CLR programming, the question always reduces to "How do we integrate this (or that) aspect of the Common Language Infrastructure into C++ so that it (a) feels natural to the C++ programmer, and (b) feels like a first-class feature of CLR programming itself?" By all accounts, this balance was not achieved with Managed Extensions.

The Reader Language Design Challenge

So, to give you a flavor of the process, here is the challenge: How should we declare and use a CLR reference type? It differs significantly from the C++ Object Model: different memory model (garbage collected), different copy semantics (shallow copy), different inheritance models (monolithic, rooted to Object, supporting single inheritance only with additional support for interfaces).

The Managed Extensions for C++ Design

The fundamental design choice in supporting the CLR reference type within C++ is to decide whether to remain within the existing language, or to extend the language, thereby breaking with the existing standard.

How do you make that decision? Either choice is going to be criticized. The criterion boils down to whether one believes the additional language support represents either a domain abstraction (think of concurrency and threads) or a paradigm shift (think of object-oriented type-subtype relationships and generics).

If you believe the additional language support simply represents yet another domain abstraction, you will choose to remain within the existing language. If you see the additional language support as representing a shift in programming paradigm, you will extend the language.

In a nutshell, the Managed Extensions design saw the additional language support as simply a domain abstraction – which was awkwardly referred to as the managed extensions – and so the design choice followed logically to remain within the existing language.

Once we had committed ourselves to remaining within the existing language, only three alternative approaches were really feasible -- remember, I've constrained our discussion simply to how to represent a CLR reference type:

  • Have the language support be transparent. The compiler will figure out the semantics contextually. Ambiguity results in an error, and the user will disambiguate the context through some special syntax (as an analogy, think of overload function resolution, with its hierarchy of precedence).

  • Add support for the domain abstraction as a library (think of the standard template library as a possible model).

  • Reuse some existing language element(s), qualifying the permissible usages and behavior based on the context of its use outlined in an accompanying specification (think of the initialization and downcast semantics of virtual base classes, or the multiple uses of the static keyword within a function, at file scope, and within a class declaration).

Everyone's first choice is #1. "It's just like anything else in the language, only different. Just let the compiler figure this out." The big win here is that everything is transparent to users in terms of existing code. You haul your existing application out, add an Object or two, compile it, and, ta-dah, it's done. No muss, no fuss. Complete interoperability both in terms of types and source code. No one disputes that scenario as the ideal, much as no one disputes the ideal of a perpetual motion machine. In physics, the obstacle is the second law of thermodynamics and the existence of entropy. In a multi-paradigm programming language, the laws are considerably different, but the disintegration of the system can be equally pronounced.

In a multi-paradigm language, things work reasonably well within each paradigm, but tend to fall apart when paradigms are incorrectly mixed, leading to either the program blowing up or, even worse, completing but generating incorrect results. We run into this most commonly between support for independent object-based and polymorphic object-oriented class programming. Slicing drives every newbie C++ programmer nuts:

DerivedClass dc;    // an object
BaseClass &bc = dc; // ok: bc is really a dc
BaseClass bc2 = dc; // ok: but dc has been sliced to fit into bc2

So, the second law of language design, so to speak, is to make things that behave differently look different enough that the user will be reminded of it when he or she programs in order to avoid ... well, screwing up. It used to take half an hour of a two-hour presentation to make any dent in the C programmer's understanding of the difference between a pointer and a reference, and a great many C++ programmers still cannot clearly articulate when to use a reference declaration and when a pointer, and why.

These confusions admittedly make programming more difficult, and there is always a significant trade-off between the simplicity of simply throwing them out and the real-world power that their support provides. The difference between a usable feature and an unusable one lies in the clarity of the design, and that clarity is usually achieved through analogy. When pointers to class members were introduced into the language, the member selection operators were extended (-> to ->*, for example), and the pointer-to-function syntax was similarly extended (int (*pf)() to int (X::*pf)()). The same held true with the initialization of static class data members, and so on.
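
A brief sketch of that analogy at work (the class X and its member f are assumed here purely for illustration):

struct X { int f() { return 1; } };

int (*pf)()     = 0;        // ordinary pointer to function
int (X::*pmf)() = &X::f;    // pointer to member function: the analogous extension

X  x;
X *px = &x;
int r1 = (x.*pmf)();        // .  extended to .*
int r2 = (px->*pmf)();      // -> extended to ->*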

References were necessary for the support of operator overloading. You could get the intuitive syntax of

Matrix c = a + b;  // Matrix operator+( Matrix lhs, Matrix rhs ); 
c = a + b + c;

but that is hardly an efficient implementation. The C-language pointer alternative, while providing efficiency, broke apart with its non-intuitive syntax:

// Matrix operator+( const Matrix* lhs, const Matrix* rhs );
Matrix c = &a + &b;
c = &( &a + &b ) + &c;

The introduction of a reference provided the efficiency of a pointer, but the lexical simplicity of a directly accessible value type. Its declaration is analogous to the pointer, and that was easy to internalize:

// Matrix operator+( const Matrix& lhs, const Matrix& rhs );
Matrix c = a + b;

but its semantic behavior proved confusing to those habituated to the pointer.

So, the question then is, how easily will the C++ programmer, habituated to the static behavior of C++ objects, understand and correctly use the managed reference type? And, of course, what is the best design possible to aid the programmer in that effort?

We felt that the differences between the two types were significant enough to warrant special handling, and therefore we eliminated choice #1. We stand by that choice, even in the new syntax. Those that argue for it, and that includes most of us at one time or another, simply haven't sat down and worked through the problems sufficiently. It's not an accusation; it's just how things are. So, if you took the earlier design challenge and came up with a transparent design, I am going to assert that it is not in our experience a workable solution, and press on.

The second and third choices, that of resorting to either a library design or reusing existing language elements, are both viable, and each has its strong proponents. The library solution became something of a litany within Bell Laboratories due to the easy accessibility of Stroustrup's cfront source. It was a case of Here Comes Everybody (HCE) at one point. This person hacked on cfront to add concurrency, others hacked on cfront to add their pet domain extension, and each paraded their new Adjective-C++ language; Stroustrup's correct response was, no, that is best handled by a library.

So, why didn't we choose a library solution? Well, in part, it is just a feeling. Just as we felt that the differences between the two types were significant enough to warrant special handling, we felt that the similarities between the two types were significant enough to warrant analogous treatment. A library type behaves in many ways as if it were a type built into the language, but it is not, really. It is not a first class citizen of the language. We felt, as best as we could, we had to make the reference type a first class citizen of the language, and therefore, we chose not to employ a library solution. This remains controversial.

So, having discarded the transparent solution because of a feeling that the reference type and the existing type object model are too different, and having discarded the library solution because of a feeling that the reference type and the existing type object model need to be peers within the language, we are left with the problem of how to integrate the reference type into the existing language.

If we were starting from scratch, of course, we could do anything we wished to provide a unified type system, and -- at least until we made changes to that type system -- anything we did would have the shine of a spanking brand-new widget. This is what we do in manufacturing and technology in general. We are constrained, however, and that is both a blessing and a curse. We can't throw out the existing C++ Object Model, so anything we do must fit into it. In Managed Extensions, we further constrained ourselves not to introduce any new tokens; therefore, we must make use of those we already have. This doesn't give us a lot of wiggle-room.

So, to cut to the chase, in Managed Extensions, given the constraints just enumerated (hopefully without too much confusion), the language designers felt that the only viable representation of the managed reference type was to reuse the existing pointer syntax – references were not flexible enough, since they cannot be reassigned and they are unable to refer to no object:

// the mother of all objects allocated on the managed heap...
Object * pobj = new Object;

// the standard string class allocated on the native heap...
string * pstr = new string; 

These pointers are significantly different, of course. For example, when the Object entity addressed by pobj is moved through a compaction sweep of the managed heap, pobj is transparently updated. No such notion of object tracking exists for the relationship between pstr and the entity it addresses. The entire C++ notion of a pointer as a toggle between a machine address and an indirect object reference doesn't exist. A handle to a reference type encapsulates the actual virtual address of the object in order to facilitate the runtime garbage collector, much as a private data member encapsulates the implementation of a class in order to facilitate extensibility and localization, except that the consequences of violating that encapsulation in a garbage-collected environment are considerably more severe.

So, while pobj looks like a pointer, many common pointerish things are prohibited, such as pointer arithmetic and casts that step outside the type system. We can make the distinction more explicit if we use the fully qualified syntax for declaring and allocating a managed reference type:

// ok, now these look different ...
Object __gc * pobj = __gc new Object;
string * pstr = new string;
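
As a hedged sketch of the restrictions just mentioned, these are the kinds of operations one expects the Managed Extensions compiler to reject for pobj while accepting them for pstr:

pstr++;                                        // ok for the native pointer: ordinary pointer arithmetic
// pobj++;                                     // error: pointer arithmetic on a __gc pointer
// char *pc = reinterpret_cast<char*>(pobj);   // error: a cast that steps outside the type system
pobj = 0;                                      // ok: a __gc pointer may refer to no object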

At first blush, the pointer solution seemed reasonable. After all, it seems the natural target of a new expression, and both support shallow copy. One problem is that a pointer is not a type abstraction, but a machine representation (with a tag type recommendation as to how to interpret the extent and internal organization of the memory following the address of the first byte), and this falls short of the abstraction the software runtime imposes on memory and the automation and security one can extrapolate from that. This is a historical problem between object models that represent different paradigms.

A second problem is the (metaphor alert -- a strained metaphor is about to be attempted – all weak-stomached readers are advised to hold on or jump to the next paragraph) necessary entropy of a closed language design which is constrained to reuse constructs that are both too similar and significantly different and result in a dissipation of the programmer's energy in the heat of a desert mirage. (metaphor alert end).

Reusing the pointer syntax turned out to be a source of cognitive noise for the programmer: you have to make too many distinctions between the native and managed pointers, and this interferes with the flow of coding, which is best managed at a higher level of abstraction. That is, there are times when we need to, as system programmers, go down a notch to squeeze some necessary performance, but we don't want to dwell at that level.

The success of Managed Extensions is that it supported the unmodified recompilation of existing C++ programs, and provided support for the Wrapper pattern of publishing an existing interface into the new managed environment with a trivial amount of work. This could then add additional functionality in the managed environment, and, as time and experience dictated, one could port this or that portion of the existing application directly into the managed environment. This is a magnificent achievement for C++ programmers with an existing code base and an existing base of expertise. There is nothing that we need to be ashamed of in this.

However, there are significant weaknesses in the actual syntax and vision of Managed Extensions. This is not due to inadequacies of the designers, but to the conservative nature of their fundamental design choice to remain within the existing language. That choice, in turn, resulted from the misapprehension that the managed support was merely a domain abstraction, when in fact it represents an evolutionary programming paradigm that requires a language extension similar to that introduced by Stroustrup to support Object-Oriented and generic programming. This is what the new syntax represents, and why it is both necessary and reasonable despite some of the grief it engenders for those who committed themselves to Managed Extensions. This is the motivation behind both this guide and the translation tool.

The New Syntax Design

Once it became clear that support for the Common Language Infrastructure within C++ represented a distinct programming paradigm, it followed that the language needed to be extended to provide both a first-class coding experience for the user and an elegant design integration with the ISO-C++ standard, in order to respect the sensibility of the larger C++ community and engage their commitment and assistance. It also followed that the diminutive name, Managed Extensions for C++, had to be replaced.

The flagship feature of the new design is the reference type, and its integration within the existing C++ language represented a proof of concept. What were the general criteria? We needed a way to represent the managed reference type that both set it apart and yet felt analogous to the existing type system. This would allow people to recognize the general category of form as familiar while also noting its unique features. The analogy is the introduction of the reference type by Stroustrup in the original invention of C++. So the general form becomes

Type TypeModToken Id [ = init ];

where TypeModToken would be one of the recognized tokens of the language reused in a new context (again, similar to the introduction of the reference).
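
Instantiated with the hat as the TypeModToken, the general form reads, for example (assuming the System namespace for String):

String ^ str = gcnew String( "hello" );   // a tracking handle to a CLR reference type
Object ^ obj = nullptr;                   // may begin life referring to no object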

This was surprisingly controversial at first, and it still remains a sore point with some users. The two most common initial responses I recall are (a) I can handle that with a typedef, wink, wink, and (b) it’s really not so bad. (The latter reminds me of my response to the use of the left and right shift operators for input and output in the iostream library.)

The necessary behavioral characteristics are that it exhibit object semantics when operators are applied to it, something Managed Extensions was unable to support. I liked to call it a flexible reference, thinking in terms of its differences with the existing C++ reference (yes, the double use of the reference here – one referring to the managed reference type and the other referring to the “it’s not a pointer, wink, wink” native C++ type – is unfortunate, much like the reuse of template in the Gang of Four Patterns book for one of my favorite design strategies):

  • It would have to be able to refer to no object. The native reference, of course, cannot do that directly although people are always showing me a reference being initialized to a reinterpret-cast of a 0. (The conventional way to have a reference refer to no-object is to provide an explicit singleton representing by convention a null object which often serves as a default argument to a function parameter.)

  • It would not require an initial value, but could begin life as referring to no object.

  • It would be able to be reassigned to refer to another object.

  • The assignment or initialization of one instance with another would exhibit shallow copy by default. (These behaviors are sketched in code below.)
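
A minimal C++/CLI sketch of those four behaviors, using an invented ref class R purely for illustration:

ref class R { public: int val; };

int main()
{
    R^ h;               // no initial value required; h begins life referring to no object
    h = gcnew R;        // can be assigned to refer to an object
    R^ h2 = h;          // initialization copies only the handle: a shallow copy
    h2->val = 10;       // visible through h as well, since both refer to the same object
    h = nullptr;        // and can be reset to refer to no object again
    return 0;
}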

As a number of folks made clear to me, I was thinking of this puppy backwards. That is, I was referring to it by the qualities that distinguished it from the native reference, not by the qualities that distinguished it as a handle to a managed reference type.

We want to call the type a handle rather than a pointer or reference because both of these terms carry baggage from the native side. A handle is the preferred name because it is a pattern of encapsulation – someone named John Carolan first introduced me to this design under the lovely name of the Cheshire Cat since the substance of the object being manipulated can disappear out from under you without your knowledge.

In this case, the disappearing act results from the potential relocation of reference types during a sweep of the garbage collector. What happens is that this relocation is transparently tracked by the runtime, and the handle is updated to correctly point to the new location. This is why it is called a tracking handle.

So, the final item I wish to mention about the new tracking reference syntax is the member selection operator. To me, it seemed like a no-brainer to use the object syntax (.). Others felt the pointer syntax (->) was equally obvious, and we argued our position from different facets of a tracking reference’s usage:

// the pointer no-brainer
T^ p = gcnew T;

// the object no-brainer
T^ c = a + b;

So, as with light in physics, a tracking reference behaves in certain program contexts like an object and in other situations like a pointer. The member selection operator that is used is that of the arrow, as in Managed Extensions.
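
So, sticking with the T of the fragment above (assumed here to be a ref class with a member f), member selection through a handle looks like this:

T^ p = gcnew T;
p->f();          // member selection through a tracking handle uses the arrow,
                 // just as it did under Managed Extensions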

A Summary Digression on Keywords

Finally, an interesting question to ask is, why did Stroustrup add class to the C++ language design? There is no real necessity for its introduction, since the C-language struct is extended within C++ to support everything that it is possible to do with a class. I have never asked Bjarne about this, so I have no special insight, but it is an interesting question and seems somewhat relevant given the number of keywords added to the new version of C++.

One possible answer – I call it the foot soldier shuffle – is to argue that, no, the introduction of class was absolutely necessary. After all, not only is the default member access different between the two keywords, but so is the access level of the derivation relationship as well. So of course how could we not have both?

But back then, of course, introducing a new keyword that is not only incompatible with the existing language but imported from a different branch of the language tree (Simula 67) risked offending the C-language community. Was the difference in implicit default access rules really the motivation? I can’t convince myself of that.

For one thing, the language neither prevents nor warns if the designer using the class keyword makes the entire implementation public. There is no policy in the language itself with regard to public and private access, and so it hardly seems reasonable to suggest that the default access level of unlabeled members is considered an important property – that is, important enough to outweigh the cost of introducing an incompatibility.

Similarly, the wisdom of defaulting an unlabeled base class to private inheritance seems questionable as a design practice. It is both a more complex and less understood form of inheritance since it does not exhibit type/subtype behavior and thus violates the rules of substitutability. It represents a reuse not of interface but of implementation, and having private inheritance be the default is, I believe, mistaken.
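
For reference, the two defaults in question look like this (the base class B is invented for the example):

struct B { };

struct S : B { int i; };   // members and the base are public by default
class  C : B { int i; };   // members and the base are private by default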

Of course, I couldn’t say that in public because in the language marketplace, one should never admit one iota of imperfection in the product, since that is providing fodder to the enemy who will be swift to seize on any competitive advantage to gain market share. Ridicule is particularly popular in the intellectual niche. Or, rather, one doesn’t admit imperfection until the new, improved product is ready to be rolled out.

What other reason could there be for the introduction of the class incompatibility? The C-language conception of a struct is that of an abstract data type. The C++ conception of a class (well, of course, it did not originate with C++) is that of a Data Abstraction, with its accompanying ideas of encapsulation and interface contract. An abstract data type is just a contiguous chunk of data associated with an address – point to it, cast it about, pick it apart, and move on swiftly. A data abstraction is an entity with lifetime and behavior. It’s of pedagogical significance, because words make a world of difference – at least within a language. This is another lesson the Visual C++ 2005 syntax design takes to heart.

Why didn’t C++ just drop struct altogether? It is inelegant to retain the one and introduce the other, and then literally minimize the difference between them. But what other choice was there? The struct keyword had to be retained, because C++ had to be as closely backward compatible with C as possible; otherwise, not only would it have been less popular with the existing programmer base, but it probably would not have been allowed out the door. (But that’s another story for another time and place.)

Why is a struct by default public? Because otherwise existing C programs would not compile. That would be a disaster in practice, although one would certainly never hear that mentioned in Advanced Principles of Language Design. There could have been an imposition within the language to impose a policy such that the use of struct guarantees a public implementation whereas the use of class guarantees a private implementation and public interface, but that would serve no practical purpose and would therefore be a bit too precious.

In fact, during testing of the release of the cfront 1.0 language compiler from Bell Laboratories, there was a minor debate within a small circle of language lawyers as to whether a forward declaration and subsequent definition (or any such combination) had to consistently use one or the other keyword, or whether they should be allowed to be used interchangeably. If struct had any real significance, of course, that would not have been allowed.
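
As it turned out, mixing the keywords was allowed, and standard C++ still permits it (though some compilers warn); a tiny hedged example:

class X;                 // forward declaration using class
struct X { int i; };     // definition using struct: accepted, the class-key may differ

X *px = 0;               // both declarations refer to the same type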


