What is IDL in a distributed system?

Investigation of the possibilities of distributed application development with DCOM and Corba

Table of Contents

1 Distributed Applications
1.1 The client / server model
1.2 Middleware

2 Component-based distributed applications
2.1 Areas of application
2.2 Basics
2.2.1 Interfaces
2.2.2 Server types and lifetime management

3 DCOM
3.1 History
3.2 From COM to DCOM
3.3 Interfaces in (D) COM

4 Corba
4.1 History
4.2 Differentiation from DCOM
4.3 The architecture of Corba

5 Comparison of Technologies
5.1 Interoperability
5.2 Reliability
5.3 Performance
5.4 Future viability
5.5 Summary of the comparison

6 Outlook
6.1 DCOM
6.2 Corba


Corba and DCOM are the most advanced technologies supporting distributed application development in companies. The aim of this paper is to present the theoretical foundations of a component-based distributed environment and then to compare the two products with each other.

1 Distributed Applications

One speaks of distributed applications as soon as a number of processes communicate with one another across computer boundaries. The following examples show how widespread distributed applications are, and how easily they go unrecognized as such [Ö97]:

- network operating systems,
- remote database access,
- RPCs and
- distributed object systems.

As different as these systems are, what they all have in common are the advantages of distributing resources. Access to centrally managed data avoids redundancies and inconsistencies. Scarce resources such as printers or computing capacity can be used more efficiently. Data exchange and communication processes are made more efficient. All of these models use technologies that help to cope with the complexity that results from the distribution of processes. [Fer99]

1.1 The client / server model

All distribution techniques presented here are based on the client/server model. Here, one party provides a service that another can use. If a client requests the service from the server, it must contact the server via previously defined interfaces. After the service has been provided, the server returns the result to the client. This behavior can be continued over several layers: one server in turn uses the service of another server and thus itself becomes a client. Such an architecture is also referred to as an n-tier architecture and is described in more detail in Section 2.1. The client/server model is widespread in information technology and is used in both hardware and software development. [Ö97]

1.2 Middleware

The client/server model describes how, from the software developer's point of view, applications can cooperate with one another across computer boundaries. It says nothing about how the necessary communication between client and server is implemented. Calls across process or even computer boundaries are by no means trivial: to guarantee system security, no process may access the process space of another application, so interprocess communication can only be realized with workarounds and detours. An infrastructure is therefore required, the so-called middleware, that enables data exchange between processes. Middleware is typical of distributed environments, and DCOM and Corba are both middleware.

The middleware basically has the task of supporting the distribution of software. This remains invisible to the user; however, in order to use its mechanisms effectively, the software developer must have a deep understanding of how they work. Over time, a number of techniques have been established that support distributed application development. These range from the simple exchange of messages (message passing) via the calling of remote functions (remote procedure calls) to distributed components (DCOM and Corba).

The quality of a middleware depends on a number of partly competing properties. A high degree of flexibility often comes at the expense of ease of use and leaves many tasks to the developer. In principle, however, the following requirements can be placed on every middleware. [Ö97]

Independence from the transport system The exchange of data should be made possible independently of underlying technologies. Routine tasks such as connection establishment or synchronization should be taken over by the middleware.

Hide heterogeneity The middleware must ensure that the applications work together in a heterogeneous environment. This also means that operating system or hardware specifics on the client or server side are taken into account, e.g. to keep data types compatible with one another.

Ensure transparency Internal structures should remain invisible to the observer, who does not need to know them in order to perform his tasks. This reduces development effort, improves clarity and makes maintenance easier. The following transparency functions are summarized under the generic term distribution transparency:

Location transparency The location of a service should remain hidden from the user.

Access transparency Access to local and remote services is treated equally.

Migration transparency Services can be relocated without affecting the user.

Replication transparency Services can be duplicated, whereby the consistency of the data remains assured.

Transaction transparency Coordination mechanisms for transactions remain invisible.

Error transparency As far as possible, error correction measures should be carried out unnoticed by the user.

Autonomy In order to fulfill a common purpose, autonomous units cooperate with one another in a client/server system. These units can correspond to corporate structures and are granted a certain degree of independent decision-making freedom. The degree of autonomy is determined by the need for self-control; for this purpose, global and local interests are weighed against one another.

Scalability Scalability is the ability of the system to adapt to a constantly changing environment. Over time, the number of users changes, the number of computer nodes in the network or new hardware and software are added. Middleware must be able to adapt to these permanent changes. For example, it should be possible to integrate new users into the system, relocate applications, remove components or replace them with new ones, without there being any restrictions in the operation of the system.

2 component-based distributed applications

Modern software development attempts to break large, complex, monolithic applications down into easily developed, easily understood and exchangeable parts, so-called components. These can be developed in different languages, provided there is an environment that ensures the interaction of the components. In addition to the software-engineering advantages (easy maintenance, reuse, etc.), there is the option of distributing these components in networks. If the software is cleverly designed, the individual components of the application work largely autonomously. If this is the case, it is possible to split the software apart and distribute it without impairing its functionality. Thus, in principle, every application that was developed on a component basis can also be distributed. [Koc99] The resulting advantages are:

Maintainability Functionality shared by many applications can be managed centrally and is therefore easy to maintain. The division of the applications into independent components makes it possible to exchange individual modules without endangering the functionality of the overall application.

Encapsulation The concept of information hiding is implemented by defining interfaces. This means that the developer only needs to know the interfaces offered. He does not need to worry about the underlying implementation.

Expandability Interfaces are an integral part of component-based applications. With them it is possible for the developer to connect new modules to existing implementations.

Flexibility The modular alignment makes it easy to adapt individual components for the respective application. Problem-free integration and short development cycles are the result.

It does not make sense to distribute every component. It must be taken into account that communication over a network creates an overhead of communication and coordination tasks, which can lead to a loss of performance. The runtime behavior of a distributed application is significantly influenced by this communication effort: a component executed on another computer should be manipulable with a low data volume, and the time overhead created by the network should not be too high compared to the time required to process the request. [Tha99]

2.1 Areas of application

Figure not included in this excerpt

Figure 1: three-tier architecture

Wherever client/server-based applications are used, component-based distributed applications are, in principle, also conceivable. This particularly applies to the business layer of the three-tier architecture model shown in Figure 1. This software architecture model describes the structure of an application that enables many clients to work on a common database. It is a popular model for business applications and represents an extension of the client/server model. The clients of the first layer are responsible for entering the data and for validating it. This data is used in the business layer, e.g. to create an offer for a customer. To do this, this layer uses data from the database layer. The advantage of this approach is that the frequently changing parts of such an application, the business rules, exist only centrally. Components, as state-oriented modules, are particularly suitable for mapping process-controlled workflows. In addition, this layer already communicates with a remote database anyway, so that hardly any performance losses are to be expected from the additional layer. [Cha98]

2.2 Basics

Both technologies presented here have strong similarities in terms of the underlying techniques. For this reason, an insight into the functioning of a component-based distribution environment is given at this point and the necessary additions to the individual technologies are then provided.

Due to the requirement for transparency, the developer should not notice any difference between a local component, executed in the process space of his application, and a remote component. The central tasks of a component-distributing middleware can be derived from this requirement. First of all, the client must be given the opportunity to become familiar with the component, to get to know its published methods, its interface. As soon as a client tries to access a component via the interface, it must be localized and initialized on the remote computer. In addition, method calls from the client must be intercepted and passed through to the server, executed there, and the return values returned in the opposite direction. These tasks are solved with the help of so-called stub/skeleton technology, which is already used by RPCs.

Figure not included in this excerpt

Figure 2: Method invocation in a distribution environment

2.2.1 Interfaces

As you can see in Figure 2, only a facade, the so-called stub, is implemented in the client instead of the component code. It intercepts method calls and passes them on to the middleware, which is responsible for executing them on the remote computer. For this mechanism to work, the client must have a clear picture of the server component.

The interface definition language In the interface definition language (IDL), the public methods of the component are declared in a programming-language-independent form. Each component can define any number of different interfaces. Both the server and the client derive their picture of the component from this source. In addition, the code for the stub is generated from it and linked to the client application. The server component also receives generated code, the skeleton, which accepts the method request and calls the corresponding method in the server. It is then the programmer's job to bring these methods to life. [Ros98]
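The stub/skeleton mechanism described above can be illustrated with a minimal, purely local sketch. All names (Stub, Skeleton, Calculator) are illustrative, not output of any real IDL compiler, and the "network" is replaced by a direct call:

```python
import json


class Skeleton:
    """Server side: unpacks a request and calls the real implementation."""

    def __init__(self, implementation):
        self.impl = implementation

    def dispatch(self, request):
        msg = json.loads(request)                      # unmarshal the call
        result = getattr(self.impl, msg["method"])(*msg["args"])
        return json.dumps({"result": result})          # marshal the reply


class Stub:
    """Client side: looks like the component, but only forwards calls."""

    def __init__(self, transport):
        self.transport = transport                     # here: the skeleton itself

    def __getattr__(self, method):
        def remote_call(*args):
            request = json.dumps({"method": method, "args": list(args)})
            reply = self.transport.dispatch(request)   # would cross the network
            return json.loads(reply)["result"]
        return remote_call


class Calculator:                                      # the server component
    def add(self, a, b):
        return a + b


stub = Stub(Skeleton(Calculator()))
print(stub.add(2, 3))  # the client sees an ordinary method call
```

The client code never touches Calculator directly; it only ever talks to the stub, which is exactly the transparency requirement described above.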

An interface definition differs in some points from a normal object definition because it has to meet the requirements of a distributed environment.

interface a [uuid (0000-0000-0000-0000)]
{
    attribute UShort value;

    void set_value (in UShort new_value);
    void get_value (out UShort output_value);
    void add_to_value (inout UShort value_to_be_added);

    oneway void performthisoperation ();
};
As you can see in the example above, which follows the Corba IDL specification, only the public methods of the object are described in the interface. In addition, each argument of a function is given an additional parameter that specifies in which direction the argument is sent when the method is called. This has the advantage that no unnecessary data is sent over the network. You can also specify whether the client should wait for the operation to be carried out (synchronous call, the default) or whether it should continue immediately after the command has been sent (asynchronous call). A brief summary of the IDL keywords can be found in Table 1. The complete specification of the Corba IDL can be found in [Obj], that of the DCOM IDL in [Mic].

Figure not included in this excerpt

Table 1: Key attributes in IDL

In addition to their identifier, interfaces are assigned a globally unique key (GUID). This is automatically generated when the interface is created, based on the system time and machine specifics, and thus ensures to a large extent that no two different interfaces with the same GUID exist anywhere in the world. With the help of this key, the interface and its associated components are managed and referenced internally.
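Python's standard library offers a comparable scheme: uuid1() derives an identifier from the host's MAC address and a timestamp, much like the GUID generation described above. A small sketch (the registry dictionary is purely illustrative):

```python
import uuid

# uuid1() combines host specifics (MAC address) with the system time,
# mirroring the GUID scheme described above; uuid4() would be purely random.
guid = uuid.uuid1()
print(guid)

# Internally, interfaces and components can then be managed by their GUID:
registry = {guid: "interface a"}
print(registry[guid])
```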

Dynamic binding The method of static binding described so far has a decisive disadvantage: the interface must already be known to the client at compile time. However, this is not always possible. That is why there is also the option of dynamic binding. This makes it possible to request the interface during runtime and to access server objects without an explicit client stub. The client then assembles and sends each method call according to its definition. However, this technique has the disadvantages that performance losses are to be expected compared to static binding and that no type checking of the arguments takes place. Dynamic binding is required, for example, if you want to provide scripting options.
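The difference between the two binding styles can be sketched in a few lines of Python (Component and its method are illustrative names). With dynamic binding, the method name is just a string known only at runtime, and nothing checks the arguments beforehand:

```python
class Component:
    def get_value(self):
        return 42


c = Component()

# Static binding: the call is fixed in the source code.
print(c.get_value())

# Dynamic binding: the method name arrives at runtime, e.g. from a script.
# The call is assembled on the fly; no type checking of arguments occurs.
method_name = "get_value"
call = getattr(c, method_name)
print(call())
```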

The advantages of interface technology The interface technology makes it possible to develop components and access them beyond process limits. By separating interface and implementation, language independence is achieved and the ability to access common components in heterogeneous environments is achieved. Changes to the implementation are also possible without affecting the client.

2.2.2 Server types and lifetime management

Since components are not independent programs, they need an application in whose process space they can run. This means that as soon as a client requests a component, the associated server application must be started, in order to then instantiate the component in its address space. An application that is started automatically must also terminate itself as soon as it is no longer needed. It is therefore necessary to manage not only the lifetime of a component but also that of the surrounding server application. It must also be taken into account that, in the event of a network error or a client crash, the connection to the server can suddenly be lost without the server noticing. In such a case, it is the responsibility of the middleware to notify the server of the incident. Both Corba and DCOM support this task. Figure 3 shows some of the many ways in which clients can access components.

Figure not included in this excerpt

Figure 3: Clients access components

As you can see, several components can reside in one and the same server. Several clients can use the same instance of a component, or a separate instance is created for each client. Here, too, there are various lifetime-management problems, which can usually be solved using reference counters. A complete list of server types can be found in [Sch97] and in [Tha99, page 91ff].
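Reference counting for lifetime management can be sketched as follows (the class and method names echo COM's AddRef/Release convention but are illustrative only): each client that obtains a reference increments the counter, and when the last reference is released the component destroys itself, allowing the server to shut down.

```python
class ServerComponent:
    """Lifetime managed by a reference counter, as in COM's AddRef/Release."""

    def __init__(self):
        self.ref_count = 0
        self.destroyed = False

    def add_ref(self):            # a client obtained a reference
        self.ref_count += 1
        return self.ref_count

    def release(self):            # a client dropped its reference
        self.ref_count -= 1
        if self.ref_count == 0:
            self.destroy()        # last client gone: free the component
        return self.ref_count

    def destroy(self):
        self.destroyed = True
        print("component destroyed, server may terminate")


comp = ServerComponent()
comp.add_ref()       # client A connects
comp.add_ref()       # client B shares the same instance
comp.release()       # client A is done; component stays alive
comp.release()       # client B is done -> component destroyed
```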


3 DCOM

3.1 History

The Distributed Component Object Model (DCOM) is a technology introduced by Microsoft in 1996. It was first supplied with Windows NT 4.0 and has also been available for Windows 95 since 1997. There are also some ports, under the name EntireX, for various Unix derivatives (e.g. Sun Solaris, Digital Unix, HP-UX, AIX, Linux, OS/390) from Software AG. DCOM is based on the Component Object Model (COM) and extends it with the possibility of distribution.

3.2 From COM to DCOM

The extensions compared to the COM standard are comparatively small. COM offers a process the possibility to access components residing in another process. It represents the basis for many technologies used to link applications. The aim is to be able to provide software modules and functionalities developed in any programming language for other applications. The transfer of the data required for this and the forwarding of function calls are handled by the COM runtime system. In order to be able to localize a component in the server, COM requires a fixed structure of the binary code of the application. This means that COM objects can only be created with appropriate compilers. It is also not possible to derive from COM objects, as this would require a more precise picture of the component used. Therefore COM is more an object-based than an object-oriented technology. Figure 4 shows

Figure not included in this excerpt

Figure 4: Interprocess communication in COM and DCOM

the differences between COM and DCOM. While COM uses LPCs for communication, DCOM replaces these with DCE RPCs and implements additional security mechanisms. Otherwise, both technologies are absolutely identical, which makes the switch to DCOM much easier for programmers who have already used COM. [Koc99]

DCOM alone only provides the basic functionality that is required for distributing objects. With the BackOffice family, Microsoft offers a range of complementary products designed to increase the utility of DCOM. They roughly correspond to the services from Corba, which are discussed in more detail in Section 4.3. In contrast to these, the BackOffice products are not part of DCOM but independent applications, which are mainly used in the second and third layers of the three-tier architecture model.

3.3 Interfaces in (D) COM

Interfaces ensure that object representation and implementation are separated from one another. However, COM interfaces require a special structuring of the binary code of a component, so this separation is partially undermined. In addition to this drawback, there are a few other disadvantages related to the use of interfaces.

Lack of structuring of the interfaces One disadvantage is that a client only sees a compilation of related interfaces, but does not receive any information on the hierarchy of embedded components. For example, Word publishes a component called Application. This has, among other things, the property ActiveDocument, which refers to a Document component. In the interface declaration, however, Application and Document appear on the same level, although direct access to the Document is not possible. In addition, inheritance structures are lost in this flat structure. Although interface declarations in the IDL support multiple inheritance, this information is lost when compiling into the .tlb file. That is why you can see all published methods in each interface, but not from which interfaces they are derived.

Limitations in the definition In addition, MIDL only allows method declarations but not the definition of properties. This can be circumvented with automation technology, but at the cost of performance losses. [Sch98]

Administration of the interface definition Furthermore, there is no central point of contact from which both the client and the server can obtain interface information. Interfaces defined with the MIDL are either linked to the server application or saved in a .tlb file, which can then be passed on and used in the respective client application. This information is stored locally in the registry on both the client and the server. This means a duplication of data and thus more difficult administration. The solution here is the Active Directory service, which is delivered together with NT 5.0 and will serve as a central database for the user data and service information of a domain [Mic96].

4 Corba

4.1 History

In December 1991, the OMG published the Corba standard, which is intended to enable the cross-platform use of distributed objects. Corba is a specification that describes the basic properties of a distribution architecture. The primary goal is independence from platform and programming language. This enables Corba to be integrated into an existing system environment, which is a decisive advantage when used in companies. From version 2.0 onwards, the Corba standard also regulates how several Corba implementations developed by different providers interact. As a result, a large number of compatible Corba products, such as the ObjectBroker from BEA Systems or the VisiBroker from Inprise, are available. [Ros98]

4.2 Differentiation from DCOM

Basically, Corba and DCOM are very similar. A major difference is that Corba is only a specification, while DCOM, as an integral part of Windows operating systems, is specially tailored to these. The resulting advantage is the easier portability of Corba. In addition, Corba was specially designed for the needs of a component-based distribution platform, while DCOM is more to be understood as an extension of COM. With DCOM, Microsoft acts on the principle of “first implement and then design” [MET98, page 19], while Corba takes the exact opposite approach. In this way, Corba offers broader support for the problems to be expected in the context of component development. With DCOM, for example, the client is required to specify a server computer on which it suspects the corresponding component. Corba, on the other hand, uses a smart agent to find the relevant object itself. [Cha98]

4.3 The architecture of Corba

Figure 5 shows the structure of a Corba ORB. The ORB is the counterpart to the DCOM runtime environment. It is the heart of every Corba implementation and is responsible for redirecting client requests to a corresponding server. The individual components of the ORB are explained in more detail below. A complete description can be found in [Obj95].

Figure not included in this excerpt

Figure 5: Corba ORB architecture

Interface Repository The Interface Repository is a distributed runtime database that contains machine-readable versions of the IDL-defined interfaces, the metadata. It is the central point of contact for finding information about the interfaces. An API allows the metadata to be read, saved and changed.

Dynamic Invocation Interface (DII) With the help of this service, dynamic binding is implemented on the client side.

Object Implementation The object implementations implement the interface descriptions specified in the IDL interfaces. They can be implemented in a wide variety of languages supported by the ORB, including C, C++, Delphi, Java, Smalltalk and Ada.

Object adapter The object adapter accepts service requests from clients and activates the corresponding server objects. It forms the runtime environment for the server objects, transfers the requests to them and assigns object references to them. The object adapter registers the classes and objects it supports in the implementation repository.

Implementation Repository The implementation repository is a runtime database that contains information about the classes and objects supported by a server. Further information related to the implementation of ORBs, e.g. trace information, audit logs, security and other administrative data, can also be found there.

Dynamic Skeleton Interface (DSI) The Dynamic Skeleton Interface is the server-side counterpart to the DII. It allows incoming method calls from a client for which no IDL-compiled skeletons exist to be processed dynamically. Its task is to analyze incoming requests from a client, to find the object intended for them and to issue the method call. [Cha98]

In addition, Corba offers a range of services that expand the functionality of the ORB. They support such central tasks as the lifetime management of a component, security checks and the synchronization of objects. The services in detail are:

Life Cycle Service Regulates the creation and destruction of objects.

Persistence Service Enables the storage of object states and references.

Naming Service Used to find objects in the network using their names.

Event Service Enables the server to notify clients about the occurrence of events.

Concurrency Control Service Serves the synchronization of objects and thus prevents deadlocks from occurring.

Transaction Service Used to combine several related operations into one transaction. See also section 5.2.

Relationship Service Used to manage relationships between objects. If, for example, an object is deleted, all embedded objects are also removed.

Externalization Service Enables the state of objects to be converted into a data stream. This means that an object can be sent beyond computer limits.

Query Service Defines operations on groups of objects with the same predicates, attributes or properties.

Properties Service Allows properties to be assigned to an object only at runtime.

Time Service Synchronizes the system clocks.

Security Service Protects against unauthorized access.

Further services are the Licensing Service, the Trader Service and the Collection Service. A more detailed description of the services can be found in [Höl99, page 11ff], the full specification in [Obj96].

Another collection of services is brought together in the Corba facilities. Here you will find complex modules that provide a range of standard functions. The common facilities under development include, for example, printing, e-mail, data exchange, frameworks for business objects and internationalization. However, these are not part of the Corba specification. More on this can be found in [Ede97].

5 Comparison of Technologies

Even if the goals that both technologies pursue are very similar, their approaches are very different. In the following chapter, an attempt is made to compare DCOM and Corba, which, however, is a bold undertaking given such complex environments and the multitude of possible uses.

The fact alone that Corba, as a specification, may be realized by many different implementations makes the comparison questionable. Nevertheless, advantages and disadvantages of the individual techniques can be worked out with regard to certain properties. Which technology is then given preference depends largely on the requirements of the respective task. The following list is based on "Corba vs. DCOM - Solutions for the Enterprise" from META Group Consulting [MET98]. With regard to distributed application development in the business area of a company, this paper compares the requirements for a distribution platform and their implementation in Corba and DCOM. It is assumed that there is an existing system that needs to be expanded and that the software must be constantly adapted to the changing infrastructure in the company.

5.1 Interoperability

The task of middleware is to connect software systems to one another. In order to be able to guarantee this in a heterogeneous environment, it must be able to allow interaction between the systems as independently of the underlying technologies as possible. Interoperability here means above all the ability of the middleware to be able to serve as broad a spectrum of different environments as possible.

Thanks to its broader platform support, Corba scores a clear victory here, and Corba is also ahead in terms of support from complementary modules.

Figure not included in this excerpt

5.2 Reliability

Complex processes that are not visible to the outside world are carried out in a middleware. Errors can occur within these processes, which then have far-reaching consequences. Middleware must therefore independently ensure that the damage is limited in such cases.

In addition, the open architecture offers points of attack for attacks from outside, so that additional security mechanisms are required.

In order to be able to use middleware sensibly, it must bring a number of services with it that take on these tasks or support them in their implementation.

Transaction A transaction is understood to be an operation or a group of operations that must be carried out either completely or not at all. This means that either all objects involved in the transaction carry it out successfully (update their own data records), or all objects involved must abort the transaction (return to the state before the transaction was initiated). [Ros98] Corba offers the Transaction Service, which is responsible for this task. With the Microsoft Transaction Server (MTS), Microsoft has a comparable solution to offer, but the META Group prefers the Corba approach, as the Transaction Service is integrated into the specification.
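The all-or-nothing semantics can be sketched with a classic bank-transfer example (Account and transfer are illustrative names, and real transaction services are far more elaborate): on failure, every participant rolls back to its state before the transaction started.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance


def transfer(source, target, amount):
    """Either both updates succeed (commit) or neither does (rollback)."""
    old_source, old_target = source.balance, target.balance
    try:
        source.balance -= amount
        if source.balance < 0:
            raise ValueError("insufficient funds")
        target.balance += amount
    except ValueError:
        # abort: restore the state before the transaction was initiated
        source.balance, target.balance = old_source, old_target
        return False
    return True


a, b = Account(100), Account(0)
print(transfer(a, b, 60))    # succeeds: a=40, b=60
print(transfer(a, b, 100))   # aborts: balances unchanged
```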

Messaging Messaging is understood to mean the secure sending and receiving of messages. The aim is to ensure that a message, once sent, reaches the recipient in any case, even if the recipient is temporarily unavailable. This is made possible by the middleware temporarily storing messages and sending them as soon as a connection is established again. The client should not be blocked during this waiting time (asynchronous sending). Some Corba implementations offer messaging support by integrating an independent message-oriented middleware (MOM). By default, DCOM does not offer any messaging. However, the Microsoft Message Queue Server (MSMQ) is a product that promises to close this gap.
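The store-and-forward principle behind such message-oriented middleware can be sketched as follows (MessageQueue and its methods are illustrative names, not any real MOM or MSMQ API): the sender returns immediately, and messages for an unreachable recipient are stored and delivered in order once the connection is re-established.

```python
from collections import deque


class MessageQueue:
    """Store-and-forward: messages survive until the recipient is reachable."""

    def __init__(self):
        self.pending = deque()
        self.recipient_online = False
        self.delivered = []

    def send(self, message):
        # The sender is not blocked (asynchronous send); if the recipient
        # is unavailable, the message is stored for later delivery.
        if self.recipient_online:
            self.deliver(message)
        else:
            self.pending.append(message)

    def connect_recipient(self):
        self.recipient_online = True
        while self.pending:                  # flush stored messages in order
            self.deliver(self.pending.popleft())

    def deliver(self, message):
        self.delivered.append(message)


mq = MessageQueue()
mq.send("order #1")        # recipient offline: message is stored
mq.send("order #2")
mq.connect_recipient()     # connection restored: both arrive, in order
print(mq.delivered)
```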

Security A central aspect with which the acceptance of middleware stands or falls is the question of the security mechanisms of the offered solution. DCOM uses the Windows NT security framework. The authenticity of the client is ensured via user names and user groups and appropriate access rights are assigned. With every attempt to access a component, a check is made to determine whether the client is authorized for use. Security specifications can be assigned individually for each interface. This means that there is no need to program security-specific code on the client and server side. For other platforms, a Windows NT compatible security provider is used in order to be able to use this security mechanism.

In addition to authentication, data encryption also plays a role. On the Internet in particular, it is important to ensure that the connection between client and server cannot be eavesdropped on. Here, SSL and other authentication mechanisms (e.g. public-key procedures or certificates) come into use.

The Corba security service is one of the strictest of all. It includes authentication, access control and confidentiality. It offers a 3-stage security concept and thus takes into account the different security requirements in the company. Unfortunately, most Corba ORBs implement the standard inadequately or not at all. Only Visigenic can offer an ORB that corresponds to the highest security level.

Fault tolerance The catchphrase fault tolerance conceals a number of requirements that reflect the wish that middleware should be able to repair itself as widely as possible. This means that in the event of a failure, alternative servers are searched for or work is temporarily stored.

Corba does not offer such a service within the scope of its specification, although some manufacturers deliver fault-tolerant systems. DCOM implements a simple ping mechanism: keep-alive messages are sent to the server at regular intervals. If they fail to arrive for a certain period, the connection is assumed to be broken and the reference counter is decremented. However, the period after which a client is declared dead cannot be configured, nor can it be determined whether the server is merely hanging in an infinite loop. In addition, this mechanism increases the network load. More on this can be found in the white papers on DCOM under [Mic96].

Figure not included in this excerpt

5.3 Performance

Of course, the result of a performance evaluation of an environment as complex as middleware must be treated with appropriate caution. The META Group therefore does not examine the response speed of one installation or another, but rather its ability to adapt to different environments and requirements. It has to rely on statements from users of Corba or DCOM, which is not exactly conducive to objectivity. In addition, the degree of support for creating multithreaded components and the tuning options for components are used as evaluation factors.

According to the specification, Corba neither supports the creation of multithreaded applications nor allows components to be tuned. Nevertheless, many Corba manufacturers deliver their software with corresponding tools. Like Corba, DCOM has no inherent multithreading capabilities; however, since it is an integral part of the Windows operating systems, it can easily fall back on their mechanisms.

Figure not included in this excerpt

5.4 Future security

Given that applications often remain in use for several decades despite a constantly changing corporate environment, it is particularly important to be able to rely on a middleware technology that enables the reuse and integration of software modules in the long term. Companies should therefore build on a technology whose survival is secured in the long run.

Maturity The fact that many of the services mentioned above (messaging, transactions) are fully available in neither Corba nor DCOM shows how much development both technologies still need. Nevertheless, strong efforts are being made on both sides to close these gaps.

In addition, the popularity Corba currently enjoys shows that even demanding companies, e.g. in the telecommunications or aviation industry, rely on this technology. Microsoft is trying to catch up with MSMQ, MTS and Active Directory; however, these products are clearly restricted to running in a pure Windows NT environment.

Manufacturer support It is important for every company to rely on products that can be expected to remain supported for many years to come.

As a specification supported by more than 700 companies, Corba has the best prospects of surviving in the long term. The OMG's standardization efforts also ensure compatibility between individual implementations. In addition, well-known software manufacturers such as IBM, BEA and IONA (maker of Orbix) are involved in Corba, so its continued development should be assured for years to come. Due to its large number of decision-makers, however, Corba has the disadvantage of evolving more slowly than, for example, DCOM, which is largely backed by a single company.

DCOM, on the other hand, has some restrictions in terms of platform independence and technical maturity. Since it is a key technology in the Microsoft BackOffice family, Microsoft can be expected to push this technology, and with it Windows NT, into the server market.

Scalability In order to survive in a changing environment, middleware must cope with growing requirements. An increasing number of components must not lead to performance drops caused by the more difficult management, and administrators must have options for intervening to remove bottlenecks.

Both technologies provide insufficient support here: Corba does not specify a corresponding service that provides the desired functions, and DCOM has nothing comparable to offer either. For Corba, however, good experience with large-scale projects suggests that it will not reach its limits as quickly.

Figure not included in this excerpt

5.5 Summary of the comparison

From a technological point of view, there are no significant differences between DCOM and Corba; leaving platform independence aside, both are almost equivalent. In a pure Windows NT environment, DCOM is certainly the better choice, as it fits seamlessly into the existing environment and can be used without an extra license, though this choice locks you into the platform. This disadvantage does not exist with Corba: the OMG's goal has always been platform independence and the best possible integration into existing environments. In addition, accompanying services are part of the Corba specification, which ensures interoperability and flexibility and increases the stability of Corba implementations. However, due to its size, the OMG is often slow in its decisions, and so Corba implementations are often delivered with non-standardized extensions. [Owe98]

6 Outlook

6.1 DCOM

Microsoft has now left the further development of DCOM to the Active Group17. In the future, more and more technologies will be grouped under the keyword COM+, the further development of COM. On the distribution side, this includes a publish and subscribe service that allows servers to send event messages to multiple clients. In addition, COM+ components will be extended with messaging18 capabilities (queued components), dynamic load balancing will automatically distribute client requests across several servers, and the MTS will become an integral part of COM+. These efforts clearly show that Microsoft wants to offer an alternative to the open Corba standard without, however, granting platform independence. [Cle]
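The publish and subscribe idea mentioned here can be sketched in a few lines. This is only a model of the pattern, not the COM+ event service API; class and event names are invented:

```python
class EventPublisher:
    """Minimal publish/subscribe sketch: a server publishes events
    without knowing its subscribers, and every registered client
    receives the event message."""

    def __init__(self):
        self._subscribers = {}   # event name -> list of callbacks

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        # Fan the message out to all subscribers of this event.
        for cb in self._subscribers.get(event, []):
            cb(payload)

received = []
bus = EventPublisher()
bus.subscribe("stock-update", lambda p: received.append(("client1", p)))
bus.subscribe("stock-update", lambda p: received.append(("client2", p)))
bus.publish("stock-update", 42)   # both clients receive the event
```

The decoupling is the point: the publisher's code does not change when clients come and go, which is what makes the pattern attractive for distributing event messages to many clients.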

6.2 Corba

Corba version 3.0 has been approved since August 1999. It is still in pre-production status and will move to available status in mid-2000 with the appearance of the first implementations. The new specification aims to make Corba easier to use and to meet new requirements: the Internet is to be integrated better, there will be support for firewalls and a revised name service, and the existing services are to be extended or improved. [Sie99]

It remains to be noted that both DCOM and Corba have a clear focus on Internet technologies. Corba is making many efforts to extend its standard in this direction, with particular hope placed on support from the Java community, since that language's platform independence makes it especially well suited to collaboration with Corba. Microsoft, on the other hand, pursues its own standard with its ActiveX technology.

Both technologies will probably find their place on the market. A clear statement for or against one or the other cannot be given; each company must decide for itself which middleware to use. There are also efforts to combine Corba and DCOM in order to exploit the advantages of both technologies.

List of abbreviations

Figure not included in this excerpt


[Cha98] Tin-Ho Chan. Seminar lecture: Corba, IDL, Java-ORB. University of Applied Sciences Munich, 1998.

http://www.informatik.fh-muenchen.de/schieder/seminar-java-ss98/corba/ausbildung/index.html

[Cle] James Cleverley. Com + Watch. Wrox.


[Def] Webopedia, online encyclopedia.


[Ede97] Falk Edelmann. Process scenarios for client-server applications with CORBA 2.0. Chemnitz University of Technology, July 1997.


[Fer99] Ao. Univ. Prof. Dr. Alois Ferscha. Distributed software development. University of Vienna, Institute for Applied Computer Science, 1999.

http://www.ani.univie.ac.at/ferscha/lehre/SS99

[Höl99] Thomas Höller. Design and implementation of a CORBA / SNMP gateway. Technical University of Munich, June 1999.

http://www.nm.informatik.uni-muenchen.de/common/Literatur/MNMPub/Diplomarbeiten/hoel96/HTML-Version/main.html

[Koc99] Carsten Kocherscheidt. Elaboration: Distributed Objects - DCOM. Ruhr University Bochum, 1999.


[MET98] META Group Consulting. CORBA vs. DCOM: Solutions for the Enterprise, March 1998.

http://www.sun.com/swdevelopment/whitepapers/CORBA-vs-DCOM.pdf

[Mic] Microsoft. DCOM homepage.


[Mic96] Microsoft. White Paper: DCOM Architecture, 1996.


[Ö97] Özer, Rezic, Sahin, Schneider. Elaboration: Open distributed systems. TU Berlin, July 1997.

http://www.de.freebsd.org/wosch/lv/ovs/elaboration/

[Obj] Object Management Group OMG. Homepage.


[Obj95] Object Management Group OMG. CORBA 2.0 / Interoperability - Universal Networked Objects, March 1995. OMG TC Document 95.3.xx.

[Obj96] Object Management Group OMG. CORBAservices: Common Object Services Specification, March 1996.

[Owe98] Owen Tallman, J. Bradford Kain. COM versus CORBA: A Decision Framework. Quoin Inc., 9-12 1998.

http://www.quoininc.com/quoininc/COM CORBA.html.

[Ros98] Jeremy Rosenberger. CORBA in 14 days. Markt u. Technik, 1998. online version at

http://www.datanetworks.ch/steger/pages/dom/corba/inhalt.htm

[Sch97] Martin Schwarz. Wait a minute, I'll combine ... - COM, SOM and CORBA - or the search for software Esperanto. c’t Magazin, 3: 256, 1997.

[Sch98] Arne Schäpers. Short leash - local COM servers and clients. c’t Magazin, 3: 174, 1998.

[Sie99] Jon Siegel. What's Coming in CORBA 3. Object Management Group OMG, 1999.


[Tha99] Thuan L. Thai. Learning DCOM. O'Reilly, April 1999.


1 Definition of middleware from [Def]: “The term middleware is used to describe separate products that serve as the glue between two applications. It is, therefore, distinct from import and export features that may be built into one of the applications. Middleware is sometimes called plumbing because it connects two sides of an application and passes data between them.” Alternatively, the terms distribution platform or distribution environment can also be found in the literature.

2 The term “component” used here refers to objects as used in modern object-oriented programming languages. Components are independent software modules, which can also consist of a number of embedded objects.

3 RPC: remote procedure call

4 stub, English: stump, stub

5 skeleton, English: framework, frame

6 GUID: globally unique identifier

7 the so-called remote activation

8 LPC: lightweight procedure call

9 DCE-RPC: An RPC implementation of the distributed computing environment

10 Back office family: A number of Windows NT Server programs such as the SQL Server or the Internet Information Server (a web server)

11 Automation: Synonym for dynamic binding, see also 2.2.1

12 MIDL: Microsoft Interface Definition Language

13 OMG: Object Management Group, a consortium of over 700 companies

14 ORB: object request broker

15 API: Application Programming Interface

16 SSL: Secure Sockets Layer

17 The Active Group is made up of Microsoft, Adobe, DEC, HP, SAP and other companies

18 see also section 5.2