
With The Next Generation, (Almost) Everything Changes

A white paper on the state of the art of 3D CAD in an Internet / intranet-based world

Ulrich Sendler
CAD/CAM Industry Analyst and Manager of CADcircle

Germany
October, 1998

Foreword

There have already been many "new CAD/CAM/CAE generations". The claim has heralded numerous changes from one major version to the next - and not just in the mouths of marketing staff and sales consultants of the software providers. Consultants and analysts, too, have always allowed themselves to be led by the hope that improvements, extensions and especially a thorough redesign of software for engineers would put to rest a few of the chief limitations of this type of computer application.

My book dealing with "3D CAD – Productivity of the new system generation", published in 1994, is no exception – the title says it all. Caution is therefore well advised when words of portent are spoken.

It is also true that C technology (CAD/CAM/CAE) has already passed through several generations – from the individual software of separate large companies to turnkey standard products, from a simple replacement for the drawing board to the modeling of complete entities, from a definition of geometry to a comprehensive tool for automating product development.

One thing has always stayed the same, however: C technology consists of specialized systems for engineers, and beyond the engineer, no one could use them. Their effectiveness, stability and quality increased with every leap in development, but remained forever limited to the corporate domain of research and development.

Is anything different today? What are the differences between the systems now gradually coming onto the market and those already introduced?  Just how much justification is there this time for speaking of a ‘new generation’ – or, as in the case of CATIA, of ‘CATIA’s Next Generation’?

The first big difference is that we are not dealing merely with a new generation of C technology. This is a paradigm shift that encompasses the whole field of computing and the entire manufacturing and consumer industry; no area will remain unaffected.

The second phenomenon: the technology in question - Java, CORBA and the Web, which I would like to describe for you in some detail in this book - was originally designed and developed for uses other than C technology. This fact has fueled a rather skeptical attitude among many specialists regarding its adoption in this area. It thus appears all the more important to me to understand what this environment really means for the engineering disciplines.

The third innovation: for the first time it is not only interesting but extremely relevant for users and for industry management to concern themselves with the technology that stands behind the software. For the changes that can be derived from it concern not only the development tools available to the engineer. They will have significantly greater effects on the entire company, for example through better support of sales, marketing and customer service, or through a fundamental change in business-to-business communication.

Finally, it is so immensely important to deal with this question because the new technology has already arrived, because its effect is already unfolding without any great external signs, because it works alongside and with installed systems, because it brings a speed to software development never before seen, and because anyone who permits himself the luxury of underestimating it will be punished. For the competitor will be working with it and gaining a greater lead with every passing day.

This is reason enough to view these products with different eyes than the familiar systems. Reason enough also to question the attitude that has until now driven the selection, installation and usage of technical software.

The year 2000 is at the door. By coincidence, computing has reached a level of maturity whose global consequences will resemble those of the three zeros that will bring so many computers to their knees in the near future - if users look into the problem too late or fail to understand its full implications.

My advice is to regard this momentous paradigm shift with the same seriousness that solving the year 2000 problem requires of you. Even if the ‘next generation’ is not a generation that will hold sway for a thousand years, it is safe to say that the next ten years will carry its mark.

Before we begin with a discussion of the actual material, I would like to express my thanks to IBM and Dassault Systèmes. Access to their labs and discussions with their specialists provided me with support that certainly goes beyond expectations. It is always a gray area whether making such information public is essential and useful, or whether its main effect will be to allow the competition to gain an advantage more rapidly.

Incidentally, this may also be a sign of the basically new character of the technology. Open systems are the basis for a new world where communication between areas of a company and with customers and suppliers will no longer be a slogan, but a day-to-day experience.

Introduction

Although I am of the opinion that even industry management at this time should familiarize themselves with this new technology, it's not my intention to bore the readers of this book with a lot of details that would only interest software developers or system administrators.

This book is not designed to provide instructions for programming or installation. Rather, its purpose is to make the background situation more understandable - as well as the generational leap from traditional ways of working with computers to the Java era.

This necessarily requires an examination of how things used to be, and that is the focus of the first part. It will discuss, on the one hand, the development of the company at the end of the twentieth century into a globally active, virtual or extended enterprise, along with its various requirements; and on the other, the expansion of C technology into a network-supported, unbounded means of process integration.

The second part describes from a technological point of view what is happening with standard software, using IBM and Dassault Systèmes as an example. This is an obvious choice, since CATIA’s Next Generation appears to be one step ahead of the rest of the sector in more than one respect - not just with the products that are available, but especially with a product strategy borne by forward-looking visions, one which not only takes modern technologies into account but already uses them intensively.

In addition, this section will explain a few areas that have been assigned to, or have become part of, development processes. What, for example, is happening in office applications and project management? Which tools can be used for integrating engineering with company-wide information technology?

Part Three is designed to help with the impending decisions. It provides answers to a number of questions that must necessarily be posed once the role of the present-day upheaval has been defined. What do I have to watch out for? What is going to be important, and what less so? What do I have to be prepared for? Which priorities will I have to work with when determining a specific budget?

Because the future is foreseeable only in part - in fact, it can only be imagined - I have taken the liberty of indulging in a little science fiction at the end of this chapter, for the simple purpose of giving the reader an idea of what companies, product development and the utilization of product data may look like in the future.

Since not every reader will want to examine this subject to the same extent, the basic principles of this technology are described in greater detail in the appendix. Java as a programming platform, CORBA as an object infrastructure, and the methods of multistage computing should be made understandable to anybody – at least on a basic level. And a glance at the appendix will also bring the necessary clarity to many of the terms appearing in the preceding sections that not every reader may be familiar with.

But that's enough of an introduction. Let's have a quick look into the past in order to make more sense of our view of the future.

1 The industry and C technology

1.1 The company at the end of the twentieth century, its tools and methods

The change in work methods and organization that began with the industrial revolution has led, in the course of this century, to a level of sophistication that is now defined primarily by the terms "globalization" and "virtual company".

The transition from the manufactory to industrial production with its division of labor was only the first step. Taylorism, the automation of work cycles, and the constant introduction of new machinery and equipment to aid or replace manual labor led to new phases, which themselves raised new questions and demanded new solutions.

Following the general electrification of our society, the availability of the computer, especially in the last 30 years, has been decisive for the final steps in this overall process - at least for the time being.

Automation

The first phase of the industry's re-orientation towards modern methods involved only the improvement of individual procedures. The attempt was made to automate individual tasks, virtually in isolation from one another, in order to reduce costs, increase productivity and improve quality. This was accomplished first of all by concentrating on machining and assembly. Machining centers and NC machinery of all types gained a foothold and began to replace older, long-serving machinery in the 1970s and 1980s.

At this point, emphasis began to be placed on design itself. CAD began to take over the engineering offices. Workstations with computer monitors appeared in place of traditional drafting boards. Today, this process is viewed as virtually complete: this once-common way of creating technical drawings can now hardly be found anywhere of any significance.

The means provided by the computer industry also reflected these endeavors, and were geared towards supporting certain specific tasks based on the use of highly specialized software systems.

Let's take design as an example. Regardless of the differing significance of standard technical drawings in a given region, the first approach for designing engineers everywhere in implementing computer-supported technologies was to accelerate the process of creating these documents.

As the years passed, the euphoria of some and the fears of others faded - both of those who had believed that CAD would make it possible to work many times faster, and above all more economically, with a dramatically reduced staff. Its actual advantage proved instead to lie on a completely different level: CAD drawings could be modified more simply, variations could be generated more easily, and documents became more exact, more reliable and of better quality.

Questions on technology played practically no role at all for the user during this phase. For the user, a system stood and fell with the available functionality.  Decisions regarding selection were frequently based on well-polished lists of criteria, which were used as checklists to review how well each function had been implemented.

The questions were: can the system handle the processing of ‘real ellipses’, or simply arcs that approximated ellipses? What does the software offer for manipulating splines? How complicated is the calculation of geometries? Or: what type of support does the program provide during the installation of in-house drawing title blocks?

The question was not "is the program written in FORTRAN 77 or in Pascal?" And at first, it wasn't even "can I choose from different hardware platforms?"

The second question was rarely posed during the first ten years anyway. Software and hardware were generally sold as a complete package. And the type of programming? For heaven's sake! That was a matter for the developer of the system. The companies that did take software development into their own hands during those first years eventually gave up, because it was not among their so-called core competencies.

To the extent that CAD gained acceptance as a standard, its deficiencies also started to become apparent. The most important of these can be summarized in two points: first, further utilization of the data, once created, was very complicated and possible only at great additional expense; and second, even the use of computers did nothing to alter the fact that the final changes generally occurred after the tools had been constructed and the prototypes built, and practically never found their way back into the original design.

This applied in any case to the use of two-dimensional drawings, or to put it differently, for the simple substitution of the drawing board with the computer.

The situation is quite different within the realm of 3-D. The attempt was made quite early on to create either surface models or - more and more often since the late 1980s - volume models of components and increasingly larger assemblies, models which no longer served strictly to define the geometry. For the most part they were used for other purposes: for NC programming or rapid prototyping, for presentations or for various kinds of simulations.

Mostly, however, the 3-D model remained limited to these types of special tasks, and the technical drawing continued to be definitive - whether it was derived from the model or not.

The phase of re-engineering

But then the next phase of the industry's re-orientation began. It was concentrated less on detailed operative or functional improvements; rather, the procedures themselves, the organization and combination of commercial processes within the company moved into the center of interest.

Gradually it became clear that the organizational structures in the industry had to change in order to meet quickly changing market demands. To a certain extent the development of C technologies and especially the associated deficiencies provided additional fuel to this trend.

The reason: the better the individual procedures function and the less there is to improve, the more troublesome basic, organizational weak points become.

Let's continue to use the example of the drawing. Whether or not it was created on the computer - all the other departments or groups and engineers involved in a development project must wait until the drawing has been finished or changed before they can begin their own work.

And something else became clear: as the body of data produced grows, the question of accessibility becomes that much more serious. Attempts to get a grip on this task by means of PDM (Product Data Management) or EDM (Electronic Data Management) systems - in other words, by means of electronic data and file management - showed only limited success.

A precondition for success is that every person involved - everyone - must use this type of system and not try to get around it: they must start their special applications via the management application and also save the data they produce there. But this means an additional system, another user interface, and greater expense instead of less. A little later, we will see that under other conditions this approach can still lead to a satisfactory solution.

"Paperless production" - you must be joking! Despite all their workstations and personal computers, companies and their product development departments did and generally still do rely on traditional means of communication and exchanging information.

Concurrent or simultaneous engineering and process orientation were and still are the watchwords here. The parallelism of these processes and a gearing of individual procedures towards the specific overall process should cause the company and its members to function more like a living organism than like a collection of machines that are connected in a row, one after the other.

This had a clear outcome for C technology: only a complete, three-dimensional model that is truly able to describe the entire product can be the basis for all disciplines to function in parallel. It must be flexible enough to permit quick changes and still be able to supply relevant data even in the design phase: for calculation, planning, or tool construction.

The current massive trend towards 3-D can be traced for the most part to this and less to design-related requirements. In fact, I believe that it is not possible to separate these two aspects: re-engineering in the direction of process-oriented work, and the utilization of the volume model as a medium of communication within the project team.

General solutions instead of individual systems

Accordingly, the task of the software industry was also two-fold. First, it had to develop applications based on 3-D geometry that were simple, powerful and quick. Second, it had to ensure that every task in product development was solvable on the basis of this system.

‘Design in context’, as the Americans describe this task, places a model at the center of the design process that can be used by all the different engineering disciplines. It should be possible to associate all the different procedures with the single 3-D model and/or its logic. Changes to the model can then be propagated automatically to any and all related views and representations.

The continuity of the system based on the 3-D model is also offered by today's leading high-end applications.

They are distinguished essentially by their functionality, their manner of operation, their degree of specialization for certain sectors or industries, and sometimes by how extensively every aspect of engineering has been integrated.

And in this - let's call it "second generation" - of C technology software, it was rarely important to the end customer which specific technology the individual system vendor employed. The task of process integration and automation on the basis of the 3-D representation continued to dominate the discussion and the newly revised requirement catalogs.

Is the structure of the program object-oriented? Does the software development use the newest methods and languages? These questions interested only a very few. More important was the software partner's expected longevity - but this, too, was not a technology criterion in this phase.

The question of freedom in the selection of hardware also began to gain in importance. Not only with respect to better value for each workstation, but also with respect to the goal of process integration and the improved utilization of design data, the freedom to choose different hardware had to climb the list of priorities.

The current success of Windows NT (and we'll come back to this) is one of the side effects of this development. As long as the integration of different platforms remained so expensive, the integration of different software on a single platform naturally held a great deal of attraction.

But this, too, was not necessarily a question of technology. In the end, there are many ways to provide a system on different platforms.

Digital product development, virtual company

Recently the efforts of the industry and software manufacturers alike have been concentrated on capturing complete products, with their extensive and highly complex modular structures, as digital models. The use of digital mock-up and the generation of virtual prototypes is spreading not only within the automotive and aviation industries, but far beyond them - in the mechanical engineering sector, for example; many development projects are already making good use of these technologies.

The reduction of costs and time required for the traditional prototype series stands as the most important goal. The elimination of many, many repeated entries in order to define the same geometry is also not insignificant.

The product should be fully defined as early as possible, and economically as well - what is known as "front-end loading" in the United States. And that is possible only if the actual costs are determined before, rather than after, the final prototype series is completed.

Over the long term, however, the 3-D models will also provide assistance in the maintenance, assembly and repair of finished products; they are also one of the core elements in the efforts to create the most intensive and effective integration of the engineering sectors within the overall company structure. The spatial model in particular provides marketing and sales with a medium that makes them into informed negotiating partners within the company, and that helps them to realize previously unknown presentation opportunities beyond the company's walls.

However, there is additional pressure in this direction, and it has to do with the further development of the industry itself.

It was not only the borders and political boundaries of the Cold War that fell at the end of the 1980s; naturally other barriers as well have ceased to exist or exert their effects on society.

In general, this is viewed positively: it is now possible for products to find a market worldwide, and the previously unknown numbers of resources are now available to the industry and its customers worldwide. That is truly a positive innovation.

But isolation had its favorable aspects, too, for many industrial operations. There was less competition, one could concentrate on smaller markets, and of course everything moved a little more slowly than it does today. One simply had more time, and changes, once made, remained valid for a certain period.

This is now all a thing of the past. While new markets are opening worldwide, this development is also bringing international competition right to our own front doorstep.

Every limitation has been swept away at such a high rate of speed that even the recognized rules for proper methods and structures no longer exist. Right along with the barriers, every aspect of reliability and clarity has also disappeared. Re-organization that is carried out today never has anything permanent about it; it hardly even guarantees a preliminary result. Instead, it has become the beginning of an unknown and rapid sequence of other re-structuring measures. 

Over the course of this most recent development, the borders of the individual company have also become harder to draw. The department of today can be an independent profit center or an external business partner tomorrow. The company that was an uninteresting competitor for the same market segment on distant continents yesterday is often a direct competitor today.  And today's competitor may be your most important business partner tomorrow.

When, in this situation, access to resources or the offering of services does not draw on the newest methods, but instead relies exclusively on the postal system, the telephone and personal discussions, the consequences can be fatal.

This trend came to light most distinctly in the automobile and aviation industries. These manufacturers and their huge networks of vendors and system suppliers have taken on a new look, and their cooperation now has a completely different character.

The ‘virtual company’ is taking on ever clearer outlines. Structures that greatly resemble those of a company exist for the duration of a project, but never lead to the organizational integration into a conventional company. In some cases, rather, these structures are dissolved as soon as the project has been concluded.

This process is by no means finished. The outsourcing of many company processes may be reversed, and many points of emphasis may shift. Only one thing is final: the situation will never again be what it once was. And this is true not only for these sectors but for every sector in general.

Globalization has meant that a series of long recognized demands has been given a new and unfamiliar urgency. Fulfilling these demands is no longer a question of ‘going along with the times’ - it has come to mean a direct question of survival:

  • Reduction of processing time to the point of market maturity (time-to-market), along with a shorter product life span

  • Reduction of costs

  • Increase in product quality

  • Greater customer service

  • Increased flexibility for the entire organization

Wanted: the right software

To attempt to stop this development would be just as absurd as the irrational opposition to machinery in the nineteenth century. Naturally, the development places new demands on the software systems used by the industry.

It is no longer just a matter of making design and manufacturing quicker and more economical, or of eliminating idle periods and sources of error such as redundant data entry. It has now become mandatory to use systems that permit the integration of internationally distributed, internal and external development teams and production sites.

And that puts us right back where we started again. Only very few software products were able to fulfill a demand of this type and only to a limited extent. Not because they were ineffective, but because they did not permit the use of the available technology.

For the time being, and despite the full integration of components and modules, the effectiveness of C technology remained quite restricted - limited, namely, to the immediate circle of developers and engineers.

When the purchasing department wants to know what material is needed for the next serial production and in what quantity, it is generally impossible to avoid discussing it with the design and planning departments - even though the data have been stored somewhere for a long period of time and are (theoretically) accessible.

When the employee from marketing wants to create a realistic photographic image of the planned tool machine, a design engineer will have to assist him to print out the correct view of the correct model in the correct format - and that requires a great deal of specialized knowledge.

As before - or rather, more than ever - the installed applications are monolithic systems with their own data structures, even when they are now used not to accomplish a single objective, but to handle the entire operative field of the various engineering disciplines.

And that means greater expense for exchanging data with other software, as well as for training, operation and achieving the most efficient degree of utilization.

System management, maintenance and user support, and the installation of new versions are all associated with a relatively high level of expense. The same applies to the still-necessary adaptation of the software to specific operational conditions, and to its supplementation through in-house development.

Thank goodness: a general problem

From today's point of view, the computer industry has to take on a number of demands from the engineer as both user and beneficiary. Here's a brief summary:

  • Even better integration of all engineering disciplines (continuity, 3-D models)

  • Integration of production, marketing, sales, administration

  • Greater utilization of development data, especially beyond the engineering sector and actual design process, for the entire life cycle of the product

  • Easier access to development data, from other areas of the company and externally as well, and an improved flow of information

  • Improved cooperation between engineering and other software applications (interoperability) 

  • Reduction of expense for the exchange of data

  • Simple, reliable, effective data management

  • Reduction in expenses for service, maintenance, installation, system management and additional development

  • Better scalability of the installed solutions in accordance with current objectives

This is a complex array of requirements that can only be covered satisfactorily in part by existing systems. They concern the organization of processes, the operative process itself, the flow of information and the management of the software usage.

A solution is possible only when the company as a whole - including its partner companies and external resources - is taken as the point of reference for these endeavors.

It just so happens - something that has to be counted as an exceptionally positive development - that the list of demands for engineering software corresponds with that of the entire industry, indeed of the entire community of computer users. Precisely because all professional computer users were, and had to be, thinking along the same lines, software developers throughout the world have been encouraged over the past few years to concentrate on solving this general problem.

The Internet, the new programming platform Java and the generally recognized object infrastructure CORBA are all the result of this common concentration. And almost everyone is also a beneficiary: the software developers as well as engineers, bankers and architects, system administrators and customer service representatives.

And suddenly it matters less which surface functionality or which link to parts lists and NC programming a piece of C technology software offers. With any serious provider, one can now assume that such sophisticated functions have been implemented.

Suddenly, questions are being posed about the technology, and just as often about the programming as well. The Web plays a significant role in the next step towards process integration - a role that conventional means can fill only poorly, or at best moderately well - especially since, until now, the majority of the information could be used only within its specific area. With the technologies now available, this information will actually become one of the company's most important resources.

Let's have a look at what this new technology is all about before we deal with its effects on C technology.

1.2 The Web: backbone of product development

It has taken over twenty years since the installation of the first ready-to-use CAD systems to reach the point where the traditional drawing board has disappeared from virtually every designing office, or is simply used for holding and reviewing computer print-outs and plots. 

During this time, there has been more than one change in the system architectures. Now we aren't dealing with just another phase of this technical advancement. On the threshold to the new millennium, we will have to deal with a real change in paradigms.

After we learned to communicate with the computer and to use it as an aid for certain tasks, an era has now begun in which networked computers of all types are the means of communicating with everyone, both within and beyond the company - from task orientation to comprehensive process orientation. It doesn't sound very spectacular, but it is a real revolution.

What is the best indicator of the revolutionary aspect of the current development? Perhaps simply the fact that this time the industry and the users themselves are the pacemakers.

Almost immediately after Java appeared on the scene and became accessible via the World Wide Web, it wasn't just the experienced Internet specialists who pounced on the new programming language. Within a short period of time, vendors in the Internet sphere were confronted with two variations of the Web, which in turn promised even greater advantages for industry in the initial stages:

Intranet has been the name so far for the networking of the internal computer world within a single company. In the same way as with the global network and using the same means, all types of information are made accessible and readable. The single difference: so-called ‘firewalls’ protect the operational networks from undesirable onlookers.

The second option: Extranet. In order to make the best use of the unhindered flow of information - for sales, customer service and for communication with contract partners - the protective barrier around the internal area is opened for certain authorized users and for dedicated subjects, but continues to be protected by reliable access mechanisms.

In the meantime, it is rare to find a major company that doesn't have its own homepage on the Internet listing (at the very least) the most important information about the company and its products. In the USA, where innovations are very often embraced with great ease and enthusiasm, this development has (one is tempted to say ‘naturally’) progressed much further than in Europe; within Europe, it is more advanced in Germany, France and Great Britain than in the south.

But the revolutionary aspect of the new technology is found first and foremost in the platform independence and in the resulting possibility of the comprehensive platform integration of various applications. It lies in the never-before-seen fact that any type of computer - from the PC to the workstation to the host computer, all of which are connected to the same network - can exchange information on this basis.

Indeed, this will mean a radical change - not least for those areas which, like the various engineering disciplines, have until now existed in rather complete isolation from the rest of the company because of their highly complex systems and specialized computers.

Entering the new century with new systems

What we can now expect is a comprehensively arranged, process-oriented system landscape whose basic idea is really quite simple: total linkage of everything to everyone, without consideration of the type of computer being used, the transfer media or the network itself.

In concrete terms, this means, for example: a 3-D design created on a UNIX computer can be displayed and reviewed on even the smallest machine - no matter whether it is a notebook or a network computer. In the same way, company information regarding delivery deadlines or inventory, which may be managed on a host computer, can also be made available on a UNIX workstation. One no longer needs highly complicated individual systems that are difficult to maintain - just small applets and browsers that are user-friendly for everyone.

With total linkage, the saying about the information society finally acquires its true sense.

Information flow, information technology, information management, information organization - how often have engineers and managers, particularly in large companies, been both amused and irritated by the fact that ever more data is produced while its accessibility remains out of the question. In light of the most recent developments, a turning point on this most critical aspect is in sight.

The World Wide Web will be the backbone that joins the various users like the organs of a living being. And the Web Browser will become the standard user interface for gaining access to information – in engineering and other sectors.

Engineering teams will create their own Web pages for specific projects and set up project Extranets. These will link up individual team members as well as other teams, who together form the virtual, extended company within the scope of a single project.

The tendency is even to involve the product's end customers so that they will then be in the position to order spare parts or additional options via the manufacturer's homepage.

The Web therefore creates the conditions (without great expense) to be able to connect everyone who is working on a specific task or is pursuing a common goal to a single network. This is done without needing to consider the different locations, or the type of link to the network, and especially without any consideration of the type of hardware.

It could be a construction project for a new plant, or it could involve a sales organization or a transport system.

Every task that is carried out these days using paper, personal discussion, telephone conversations (and is in part supplemented by the exchange of data or files, for example on diskette, tape or CD-ROM) is given a new dimension.

From one minute to the next, it is now possible to send all kinds of current information - including design models and their associated modular structures - over telephone lines, without requiring an additional medium (assuming, of course, that the telephone line and the installed modem are designed for that kind of transfer).

But there's even more: even programs can be activated via this network. Whether they are stored temporarily and started by ‘downloading’ them onto your own workstation, or whether one controls the program remotely via the network link - the World Wide Web permits a way of working interactively that has thus far been possible only at individual workstations or within limited, local networks.
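
To make this concrete, here is a minimal sketch of such a downloadable program: a small Java applet. The class name and text are invented for illustration; the essential point is that the browser fetches the compiled class over the network and runs it locally in its Java Virtual Machine.

    // HelloWeb.java - a hypothetical, minimal applet.
    // The browser downloads this compiled class over the network
    // and executes it locally, inside its Java Virtual Machine.
    import java.applet.Applet;
    import java.awt.Graphics;

    public class HelloWeb extends Applet {
        // Called by the browser whenever the applet area must be redrawn.
        public void paint(Graphics g) {
            g.drawString("Loaded over the network, running locally.", 20, 20);
        }
    }

A Web page embeds it with a single tag, for example <applet code="HelloWeb.class" width="300" height="60">, and any Java-capable browser on any platform can run it.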

Naturally, the possibilities described above are primarily of a theoretical nature. Of greater interest to the reader may be the question of how the different tools can be put to actual, practical use. And as you will see, there are already a number of innovations.

2 What’s happening in application software

2.1 CATIA and the Web

CATIA, the 3D software from Dassault Systèmes in Paris, marketed and distributed worldwide by IBM, has a noteworthy history behind it; it currently stands at Version 4.

This history includes the integration of CADAM 2D functionality as well as the development of the entire system into a modern volume modeling system that ranks among the leading software products everywhere - especially in the automobile industry, but increasingly in small and mid-sized businesses, and certainly not just among suppliers.

CATIA V4 is one of the most extensive complete solutions in application areas such as mechanical engineering, automobile and aircraft manufacturing, and plant and naval engineering; it includes nearly all engineering disciplines as integral components.

Gigantic installations such as those at Boeing or Chrysler comprise several thousand CATIA workstations on a slowly growing range of hardware platforms. In general, they are connected to additional CAD, CAM or CAE systems, either internally or in a virtual team that includes external partners.

Projects developed with this software are often very large and extend over extremely long periods of time, as in aircraft and naval engineering and in space travel. The data generated during such a project must have a much longer shelf life than the version cycles of the software being used. Product life spans of 15 or 20 years have been and remain common in many of these areas, and given the complexity of the products, not much is likely to change in this regard.

Good reasons for new directions

Everything comes together here, where the limits of previous C technology show all too painfully: unimaginably large development teams distributed worldwide on the one hand, and complex, very large product structures with long life cycles on the other. No wonder that Dassault Systèmes was among the first software manufacturers to recognize the potential of Java, CORBA and the Internet, and began using the new technology for its own software development.

The same applies to IBM. In contrast to the CAD system, development of the product data management system ProductManager was in IBM's hands. And just as IBM has long maintained one of the largest Java development efforts in other areas, the new programming platform quickly came into IBM's sights here as well.

This then serves to characterize current development briefly, albeit in somewhat simplified form. Data management and C technology are growing together, and the common architecture will be designed on the basis of the Web.

Just as CATIA has been astonishingly and overwhelmingly successful in ensuring continuity for its users while implementing every evolutionary step of the computer industry, so too the way to the next software generation will be taken not only with new products, but especially by further developing existing ones and integrating new methods into existing systems.

The product is new nonetheless, and will likely play an increasingly important role in the future. It might even develop into a decisive connecting link between different applications, and IBM will initiate a new product line with it in the near future. The first version was released at the end of 1997 and the second followed in spring 1998. The name is CATweb Navigator.

We will examine it in more detail before turning to general software development and the next generation of CAD and data management from IBM and Dassault Systèmes.

CATweb Navigator

The goal is to make the engineering data available to the entire company and vice-versa, to enable the engineers to access company data directly from their workstations.

Arnaud Ribadeau Dumas, a Java specialist at Dassault Systèmes in Paris, explained in a conversation at the end of March 1998 the reasons for basing this task on Java and CORBA.

"We’re convinced that some day in the near future, user will be working interactively over the Web with complex applications. We want to make tools available for this, and no technology is as good for this as the combination of Java and CORBA. 

As a development environment, Java is not just more than twice as fast and much more secure than traditional languages. It also offers, as integral components, a series of features that otherwise required additional development effort on our part.

Thus, in Java there is no need to relink an entire class library if you modify a single object in it. The new object is inserted, and the rest of the library remains unaffected.

Take another example, even more important in our situation: dynamically loading and activating objects. With C++ we had to program this functionality separately; with Java and CORBA, it is a core element of the platform.

Of course, the fact that almost all our customers already have intranets or extranets of their own, and expect corresponding connections on our side, also plays a role.

One of the main goals in developing the CATweb Navigator was to minimize the flow of data. It is not just a matter of simply enabling access to CAD data. For the models that are daily bread for our customers, the essential thing is to be able to view the immense amounts of data behind them quickly and easily. This is where CATweb Navigator opens the door to completely new dimensions."
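
What dynamic loading means in practice can be sketched in a few lines of Java. The class, method and file names below are invented for illustration; the point is that a class is located, loaded and instantiated by name at run time, without the running program ever having been linked against it.

    // DynamicLoadDemo.java - hypothetical sketch of run-time loading.
    public class DynamicLoadDemo {
        public static void main(String[] args) throws Exception {
            // Load a class by name at run time; in a distributed setting
            // the bytecode could just as well arrive over the network.
            Class viewerClass = Class.forName("com.example.ModelViewer");

            // Create an instance without compile-time knowledge of the type.
            Object viewer = viewerClass.newInstance();

            // Invoke a method reflectively (the method name is invented).
            viewerClass.getMethod("open", new Class[] { String.class })
                       .invoke(viewer, new Object[] { "bracket.model" });
        }
    }

Replacing com.example.ModelViewer with a newer version requires no relinking of the application - exactly the property described in the quote above.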

Essentially, the new product is a CAD model browser. It takes practically no time to start working with the program. CATIA models are displayed in the original. No intermediate format is required.

Even in the first version, it was possible to form links via CATweb Navigator between model geometry and other Web pages. In January of this year, a demonstration was given in which an office application was started by clicking on the surface of a CATIA component, and the model was then used within a text application for illustration purposes.

As Web servers, the hardware platforms currently supported are IBM, HP, SGI and Sun. Almost any computer can be used as a client - and soon the JavaStation from Sun and the network computer from IBM as well.

Standard protocols and mechanisms protect sensitive data from unauthorized access. CATweb Navigator is based on the CORBA IIOP standard and thus runs inside a firewall.

And by using a CORBA ORB, nothing stands in the way of accessing server applications written in C++.
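
A rough sketch shows what this looks like from the Java side. The interface ModelServer, its helper class (which an IDL compiler would generate from the interface definition) and the naming entry are all invented for illustration; the essential point is that the client speaks IIOP through the ORB and neither knows nor cares that the server object may be implemented in C++.

    // ModelClient.java - hypothetical CORBA client sketch.
    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class ModelClient {
        public static void main(String[] args) throws Exception {
            // Initialize the ORB; it speaks IIOP on the wire, so the
            // implementation language of the server is irrelevant.
            ORB orb = ORB.init(args, null);

            // Look up the server object in the CORBA naming service.
            NamingContext naming = NamingContextHelper.narrow(
                    orb.resolve_initial_references("NameService"));
            NameComponent[] name = { new NameComponent("ModelServer", "") };
            ModelServer server = ModelServerHelper.narrow(naming.resolve(name));

            // Remote call: the model stays on the server; only the
            // result crosses the network.
            System.out.println(server.describe("bracket.model"));
        }
    }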

With version 2, which was presented publicly for the first time at MICAD 1998 in Paris, CATweb Navigator has received a new expanded architecture and a series of new functionalities.

Java components of the system are called ‘CATlets’ at Dassault Systèmes. They are compatible with JavaBeans, and they now allow the user to adapt the so-called CATweb desktop to specific requirements with the help of the CATweb Development Toolkit.

Along with simply displaying and manipulating existing models, the new version also makes it possible to assemble components, to initiate collision checks, to query original dimensions, and to section and traverse the model online.

The performance the French developers have achieved is mind-boggling. Even models with a data volume of 60 megabytes can be manipulated on the monitor as if they were small components.

In the client/server architecture of CATweb Navigator, the complete model is never transferred to the individual desktop. The original remains on the server, and all important computation is performed there; the client merely receives pixel information for its screen. This reduces the amount of information crossing the network from some 20 megabytes to about 300 kilobytes.
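
An illustrative sketch of this thin-client principle - not Dassault's actual protocol, and with an invented server URL and parameters - might look like this in Java: the client asks the server for a rendered view and merely displays the pixels it gets back.

    // ViewClient.java - hypothetical thin-client sketch.
    import java.awt.Frame;
    import java.awt.Graphics;
    import java.awt.Image;
    import java.awt.Toolkit;
    import java.net.URL;

    public class ViewClient extends Frame {
        private final Image view;

        public ViewClient(Image view) {
            this.view = view;
            setSize(640, 480);
        }

        // Only pixel data is drawn locally; no geometry ever arrives.
        public void paint(Graphics g) {
            g.drawImage(view, 0, 0, this);
        }

        public static void main(String[] args) throws Exception {
            // Ask the server to render a view of a large model; the reply
            // is an image of a few hundred kilobytes, not the model itself.
            URL url = new URL("http://server.example.com/render?model=4711&view=iso");
            Image img = Toolkit.getDefaultToolkit().getImage(url);
            new ViewClient(img).setVisible(true);
        }
    }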

Multiple CATlets can be opened simultaneously in the new user interface, and as is common in other interfaces, they can also be reduced to an icon when they are not active. The layout of one session is saved until the next one. The visible menus are entirely context-sensitive - they minimize themselves automatically to the command buttons required for a particular operation.

A file manager is available, integrated into CATweb Navigator. It has the functionality of the Explorer familiar from the Windows environment, but goes beyond it: it is also possible to select multiple files from different folders simultaneously.

CATweb Publish is responsible for publishing models via CATweb Navigator. Each client has options that include plotting and printing, saving views, web publishing of models via HTML, and appending notes or additional pieces of information.

All in all - and I have by no means described every detail of the range of functionality - this is a most promising start into the age of the Web, and fertile soil from which additional applications and applets will spring. But it by no means exhausts the range of applications of Java and CORBA for IBM and Dassault Systèmes.

CATIA’s next generation is Version 5

Recently - to the surprise of many, and likewise at MICAD in Paris - the pre-announcement was made of ‘CNEXT’, or ‘CATIA’s Next Generation’. It has been grist for the rumor mills for some time, and will be released this fall under the somewhat prosaic name of CATIA Version 5.

Even though this is in fact a new generation of application software, there is something to the name: in particular, it brings out the fact that compatibility with the data and models of earlier versions is ensured, and that the investment these represent remains secure.

Where is the connecting link between the old and the new, what basic advantages does Version 5 offer the user, and what role do Java and CORBA play in the new development?

The connecting link may be described thus: numerous elements and components of the new software are already familiar to the user from previous versions. They will form the actual core of the next release.

What has been introduced since CATIA V4.1.3 - to a certain extent as a ‘plug-in’ - as a general scheme for user guidance and as several new technologies in the direction of digital mock-up now forms the basis of the future software.

This includes conferencing, mock-up inspection, the digital product structure and assembly design, but also dynamic sketching, the part structure editor and photo-realistic rendering.

The essential advantages of the new CATIA version go far beyond a ‘unified solution’, however. They concern hardware, the application philosophy and finally the architecture of the system itself. 

Hardware: for a long time, IBM computers were the only machines on which CATIA ran. In the last few years, workstations from Hewlett-Packard, Silicon Graphics and Sun Microsystems have been added. These were and remain essential ports - but the system currently most popular for new installations, Windows NT, was not supported at all.

CATIA now arrives on the market as practically the only platform-independent solution. This is not a Unix version ported to NT. Rather, a largely neutral version has been developed that is a native NT application, and which generalizes the popular ‘look and feel’ of Windows user interfaces at the same time.

To put it somewhat differently: CATIA V5 continues to run on Unix workstations, but it now looks and works identically everywhere. It is an object- and component-oriented version that supports Microsoft's OLE/COM concept under NT, and CORBA overall.

Using this principle, objects from different applications can be combined with one another in the same manner, no matter what the platform - something that was not possible even within Microsoft Office applications. The user is thus no longer tied to a single solution.

The new version gives the impression that the ease of a relatively new Windows mechanical design package such as SolidWorks or Solid Edge has been combined with the depth and performance range of CATIA, without giving up any of the simplicity of the user interface.

Thus, for example, 3D models created with CATIA V4 can be inserted into a V5 component using cut and paste, and models created in the new version can be used with the functions of a V4 installation. The design of components is not limited to one or the other of the two software generations.

We thus come to the last of the questions asked above, namely the role of Java and CORBA in connection with CATIA Version 5. Although details of ongoing research and development are protected by non-disclosure, we may say this much: it is considerable.

Arnaud Ribadeau Dumas formulates this role as follows: "For every line of program code we develop today, for every object and for every component, we consider first whether it can be implemented in Java. We turn to other possibilities in exceptional cases, for example when standardization has not yet advanced far enough.

If you look at the platform-independence of CATIA with Version 5, you will see that we wouldn’t be able to implement things like this if we weren’t making extensive use of the potential of Java and CORBA."

This also means, however, that every add-on to Version 4 and every modification to existing programs can be written in pure Java, just as new modules can. The coexistence of C++ and Java, treated as a theoretical possibility and an opportunity in the appendix to this book, becomes here a matter of practice.
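
One common mechanism for such coexistence - not necessarily the one Dassault uses, and with invented library and method names - is the Java Native Interface (JNI): new code is written in Java, while a proven C++ routine is kept and called as a native method.

    // LegacyGeometry.java - hedged sketch of C++/Java coexistence via JNI.
    public class LegacyGeometry {
        static {
            // Load the existing C++ code, compiled into a shared library
            // (e.g. libgeomkernel.so or geomkernel.dll).
            System.loadLibrary("geomkernel");
        }

        // Declared in Java, implemented in C++; the header generated by
        // javah defines the corresponding C++ function signature.
        public native double volumeOf(String modelName);

        public static void main(String[] args) {
            double v = new LegacyGeometry().volumeOf("bracket.model");
            System.out.println("Volume: " + v);
        }
    }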

ENOVIA and the VPM product line

We would now like to take a closer look at the project for which IBM and Dassault Systèmes created the new company ENOVIA. The company was founded in February as a division of Dassault Systèmes and employed approximately 150 people as of May 1998. The project is called PDM II.

Two product lines were merged into one: IBM's ProductManager and Dassault Systèmes' most recent developments for managing product development data.

While ProductManager concentrates mainly on managing and organizing data for products that already exist, Dassault Systèmes targeted mainly the organization of the development process. Given current demands, such a division is no longer useful. The company-wide flow of information demands new answers, and ENOVIA is the answer for businesses using CATIA.

By the time you read this, it is very likely that Version 2 of PDM II is just about to be released. And Java and CORBA are literally everywhere.

The software was written in Java and uses the CORBA ORB Orbix from Iona. It is a very modern, multi-tier client/server application that runs on all common hardware platforms.

The user interface is identical on all systems, regardless of whether you use Windows or Motif. It looks like a familiar data management program, but the technology at its foundation is the browser. Drag and drop is just as standard as Web publishing.

An absolutely transparent product structure allows the engineer to select a model, one of the components or an embedded part within a component and to utilize this part for any desired step within the system.

If CATIA is installed on the same computer, the component can be edited directly from PDM II. If not, the model data can still be accessed with CATweb Navigator or another Web-based tool.

PDM II can be installed as a standalone application or function as a pure client applet. Of course, it also provides access to all other Web-based data sources within the company.

If you survey the entire new range of software from Dassault Systèmes, you will recognize CATweb Navigator as an essential element of the engineering environment, combining and correlating information from all areas of the business.

As mentioned earlier, Web technology symbolizes the backbone of product development and engineering.

This example is still at an early stage, but the direction in which it is heading - and the elements that determine that direction - can already be recognized. The speed at which the new applications were and are being developed is impressive, and makes it all the more exciting to see what comes next.

2.2 An open office world

An office without Microsoft? No office suite from Bill Gates installed? The PC is not the only hardware?

Today most people probably use Windows or Windows NT with accompanying products from Microsoft and its partners to perform professional office  tasks. Then they alternate between updating hardware and updating software.

The number of companies that have gone out of business competing with Microsoft, or have been bought by Microsoft, is so large that any change in the near future appears unlikely - especially since the attempts of such companies to compete with products of their own development did not, in most cases, look very promising.

A second look, however, reveals that technologies offered with Java, CORBA and the Web could make such a change possible. There are already a number of applications on the market that are establishing themselves. They are even having success in offices, the first domain conquered by Microsoft.

At a ‘Lotus press club’ held by Lotus Development on May 5, 1998 in Munich, several consultants involved in the development of Java office suites stated that the future of office suites in general will be based on Java technology.

Rüdiger Spiess, Senior Consultant at META Group Deutschland (Germany), quoted a current US study stating:

Within the next three to five years, thirteen percent of all Global 2000 companies (G2000) will use only network computers. The remainder will use some combination of PCs and NCs.

Even today, about 50 percent of all G2000 companies are using Java or are testing this programming platform.

The META Group expects the "DCOM/CORBA war" to intensify. As for Microsoft, it expects the company's Java implementation to improve once Java becomes available as a programming language for Windows, and predicts that Microsoft will develop into one of the largest Java providers within the next couple of years.

There are no clearly drawn lines yet, and no firm estimates of when these Java office packages will be ready to face the current market leader, Microsoft. But the statements are bold: Java and CORBA will enter the market, and the impact of the technology will be accepted by the industry and by a majority of users.

The fact is that there is a new market, there are new products with new features, and finally there are options. There is an opening between the existing solutions.

Applix Anyware Office, Corel jBridge and Open-J, Sun HotJava Views, Cooper & Peters EyeOpener, StarOffice and Lotus e-Suite - these are the names of products that are either in development or already on the market.

Let’s take another example.

Lotus e-Suite

Lotus e-Suite was introduced to the market in spring 1998. The product consists of two basic components. e-Suite WorkPlace provides the actual work environment with the typical office functions: a Web browser (the system uses the integrated HotJava browser from Sun Microsystems), word processing, presentation, and the file manager (work files). The second component, e-Suite DevPack, provides an integrated development tool for creating Java applets.

WorkPlace is task-oriented, unlike current work environments, which are all application-oriented. When a task is started, e-Suite offers, in clearly identified menus, exactly those functions that apply to the task at hand and that help to complete the next step.

The tasks displayed on the WorkPlace desktop depend on the individual workstation and can be customized or expanded. All active tasks are displayed in a separate toolbar; as soon as a new task is started, it is automatically added there.

At the end of a session, e-Suite saves the current state, including all active tasks. If the user logs on again later, she finds the client in exactly the same condition as when she logged off.

The package is basically a reduced version of Lotus Smart Suite, focused on limited functionality: it contains all the basics and the most common features, but not everything one might like to have. In numerical terms, it covers about 20 percent of the features and functions of the desktop solution.

All functions are developed as small, flexible Java components. Companies that require more functionality than the standard software provides can add additional components – also based on Java – or can interface with remote applications through Java.

The native format of the text processing software is HTML. For future versions, Lotus promises formats compatible with Microsoft Office and its own Smart Suite.

In general, all storage takes place on the Internet, and data and tasks can also be restored on mobile devices. e-Suite can run on every client equipped with a Java Virtual Machine; this applies to network computers as well as to regular PCs.

e-Suite DevPack opens the entire bandwidth of the Java world to the user. Whether it is programming access to a mainframe application with presentation on a network computer, or a new application to improve internal business communication that needs to be integrated into WorkPlace – there are almost no limits to what is possible for the systems administrator.

The core element of the DevPack is the Lotus InfoBus, which enables the interconnection of Java components. Through it, files and data can be dynamically exchanged and shared between components, without scripts.

With JavaBeans and the InfoBus, constructing applications becomes interactive and can be done without conventional programming. This makes it much easier to create business-specific applications.

Sun Microsystems plans to integrate the InfoBus technology into the Java Developer's Kit (JDK), which would make it an industry standard.
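
What such a Java component looks like in code can be sketched in a few lines. The following miniature follows the general JavaBeans conventions (a no-argument constructor, get/set methods for properties, change notification through the standard java.beans classes); the class and property names are invented for illustration and have nothing to do with Lotus's actual source code.

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;
    import java.io.Serializable;

    // A minimal, hypothetical office component following the JavaBeans
    // conventions: no-argument constructor, get/set accessors, and
    // property change notification, so that other components (or a
    // visual builder tool) can react to state changes without scripts.
    public class DocumentTitleBean implements Serializable {
        private String title = "";
        private final PropertyChangeSupport changes =
            new PropertyChangeSupport(this);

        public DocumentTitleBean() { }   // required no-argument constructor

        public String getTitle() { return title; }

        public void setTitle(String newTitle) {
            String oldTitle = this.title;
            this.title = newTitle;
            // Notify all registered listeners of the change.
            changes.firePropertyChange("title", oldTitle, newTitle);
        }

        public void addPropertyChangeListener(PropertyChangeListener l) {
            changes.addPropertyChangeListener(l);
        }

        public void removePropertyChangeListener(PropertyChangeListener l) {
            changes.removePropertyChangeListener(l);
        }
    }

A builder tool can inspect such a class through its method names alone and wire its "title" property to other components – exactly the kind of interactive construction described above.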

2.3 Project management with Java

A very good example of a non-technical, or less technical, computer application that many employees in large corporations are confronted with is project management. Even when no other texts have to be written or tables calculated, access to an appropriate system should always be available for planning projects. And until recently, everything in this field was headed in the same general direction: Microsoft.

The old methods with paper reports and post-its that were once used to coordinate and organize all the appointments and constraints of complex development projects have long since been done away with. Or rather, they should have been: believe it or not, even today, and in large corporations, the same old-fashioned conditions still exist – the PC is just used on the side.

In the field of time and appointment scheduling for projects, Microsoft Project has become just as much a standard as Microsoft Office became for all general office applications.

Netplan limits

The program is based on the well-known and familiar method of net plan (network planning) technology, in which individual project sections are linked together. As soon as something changes within one part of the project and the phase in question deviates from its original state (becoming either longer or shorter), all following sections of the project are automatically shifted.

This is one of the problems with net plan technology: a static approach, and automation that does not allow the individual planning phases to develop on their own, instead automatically dictating an effect that may not always be applicable.

This becomes very obvious as soon as several projects are linked together, so that each influences all the others, with external conditions influencing them as well. This may occur in larger product development or systems development just as in the construction of a hospital.

Microsoft Project and other commonly available project planners do allow you to link projects, but the static thought process creates disastrous consequences. How is a project team supposed to deal with the delay of a single external vendor, if all following contractors have to be informed that the schedule for the entire project now has to be extended because of this one delay?

This should not happen, of course. But no other reactions to unexpected events are possible within a net plan. The result is that systems that may be properly installed are either not utilized properly, or work the same way as all the paperwork we tried to get rid of: a current version of the plan is printed out for the discussion with the project leader, laid on the table, and at that point ceases to be an organizing element for the entire project.

So now we have identified the second problem, which Microsoft Project has in common with almost all currently available project planning systems: interdisciplinary or inter-project links are not really achievable. The consequence is that such a system may be installed, and even used, in hundreds of departments without any valid relationship between them.

Also missing is the possibility of effective communication between the individual areas and installation levels, which would at least allow the separate operation of partial plans on the basis of shared information. This would include automatic notification of a problem in one specific area that may affect other areas or users as well.

Microsoft Project and RPlan

Those two problems describe the major difficulties of project planning that motivated an engineering office in Munich to write its own software, which essentially resolves them. The company is called RCOM (Organisationsentwicklung und Informationssysteme – Organization Development and Information Systems); it currently employs 20 engineers, has been in business for eight and a half years, and supplies software solutions that make life easier for engineers.

The most important product is called RPlan. It is an appointment and activity management system, based on Microsoft Project but improved through methodical expansion to meet the requirements of multi-project management.

RPlan stores all related project plans in a relational database that manages the project information centrally. The RPlan Navigator creates a structured, tree-shaped overview of projects and organizations, and permits user-defined planning views.

With RPlan, in contrast to net plan technology, no automated consequential processes are generated that could lead to undesired results. Instead, automation is limited to the one level where it makes sense: the level of coordination and communication.

Errors and unexpected events within a project section are displayed to all affected users within their own project plans, with specific reference to the concrete background information, i.e., form, reason, and effects. Drawing conclusions is then left to each user individually; each one can use this current information to make the appropriate decision.

RPlan Java

And now for the item of interest in this context: the most recent product is called RPlan Java and offers users on the front end either Microsoft Project or another user interface that looks and runs almost exactly the same. This one, however, is written completely in Java, just like the entire client/server architecture of the system. You might say this kills all the birds with one stone, especially those flying around in project management.

The front end can be any kind of computer, from a JavaStation to a PC or even a Unix workstation. Project teams of any size can be linked through a unified system that is more than flexible, and actually works. The central management of the entire installation covers access rights and data security, configuration and updates, while giving the user complete freedom to access and view any desired data.

All existing plans created in Microsoft Project can be transferred without conversion into RPlan Java, then processed and used within the expanded solution with database support. Finally, all existing installations can be connected to the overall system and logged on as clients within a modern architecture.

BMW has recently decided to introduce this new product throughout the entire company. The development of the new-generation 3 Series and engine development are already performed in this environment, and BMW is planning to convert all remaining project planning stations. More than 1,500 workstations are already in operation and extremely productive. Of course, RPlan Java uses the Web as its medium, through which all relevant factors that could affect project processing are transmitted immediately to all participants.

The speed of displaying and updating plans with this browser technology (besides the familiar look and handling) is noticeably higher than with existing systems. Instead of complicated and at the same time limited algorithms that try to derive the correct consequences from individual events before creating a revised image on the screen, intelligent Java-based objects are linked together. These objects alter their appearance immediately when their status changes.
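
How such status-driven objects might be wired together can be sketched with standard Java means. The following miniature uses the Observer mechanism from java.util; the class names are invented for illustration and have nothing to do with RPlan's actual source code.

    import java.util.Observable;
    import java.util.Observer;

    // A project task whose status changes are pushed to all
    // interested parties the moment they occur.
    class Task extends Observable {
        void setStatus(String newStatus) {
            setChanged();               // mark this Observable as changed
            notifyObservers(newStatus); // push the new status to all observers
        }
    }

    // A plan view that updates itself (here: prints a line) when a
    // task it displays reports a new status.
    class PlanView implements Observer {
        private final String owner;

        PlanView(String owner) { this.owner = owner; }

        public void update(Observable task, Object newStatus) {
            System.out.println(owner + "'s plan now shows: " + newStatus);
        }
    }

    public class StatusDemo {
        public static void main(String[] args) {
            Task toolDesign = new Task();
            toolDesign.addObserver(new PlanView("Engine development"));
            toolDesign.addObserver(new PlanView("Door assembly team"));
            toolDesign.setStatus("delayed by three days");
        }
    }

No algorithm recomputes the whole net plan; each affected view simply redraws itself the moment it is notified.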

3 The way to the age of Java

3.1 What is changing, what must change?

You now know in general terms, and from a few concrete examples, what the technology has to offer and what is technologically possible. But what effect does all this have on organizing information technology in your company? What measures should you take if you want to make the most of the new options?

The answer is multi-faceted, and must be multi-faceted. But there are a few basics that are surely helpful in planning the next steps.

Connecting to the Web

First, you must get rid of the notion that the Internet is a playground for freaks, and especially of the notion that it will have nothing to offer product development in the foreseeable future.

The user interface of the future for information systems will be the browser. Everyone can use it, and in the near future almost everything will be able to be taken care of with it. With a simple mouse click, this interface between man and computer already facilitates access to, and combination of, information of all types, from all sources, on every platform.

This interface will soon become the most significant one connecting your own house to the outside world: to partners, to employees, to customers.

And the Web server will become the elixir of life of the in-house IT landscape. There is therefore no reason to wait for the availability of a complete Java environment – if that ever comes. What is needed instead is an analysis of requirements and an aggressive but appropriate step-by-step plan for conversion to a suitable, multi-tiered network architecture that meets those requirements.

This should always include an internal and, if needed, an external Web site that in some way represents the pivot point of all information flow.

Product development should be given an internal Web server for specialized engineering information, with a bidirectional link to the company's home page and the most flexible access options possible.

In the future, new development projects may start with a new Web page, linked to prior and relevant projects, whether still running or already closed. All project-specific files must be accessible through this page.

Good and useful Web pages demand continuous service; only current pages make sense. As a result, someone with an appropriate degree of expertise should be assigned to this task.

Such Webmasters should be technically oriented but also creative people. And even where this opens a new field of activity for experienced systems administrators, you should not underestimate the need for training. You will need specialists for a sector in which there are at present practically no experts.

Make use of Coexistence

The appendix drives home the fact that one of the positive sides of the new technology is how it functions together with installed IT systems, requiring no tabula rasa.

Once the necessary infrastructure is installed, one can immediately imagine implementing programs that allow easier access to engineering files, and thus better coordination of processes, using the available C technology and commercial data processing.

Moreover, additional Web technology modules offered for your installed programs (like the CATweb Navigator from Dassault Systemes, for example) could make themselves useful immediately and should not be relegated to the long list of "nice to have".

We are not dealing here with things that interest one person but not another. These components have strategic importance.

As IT conversion of companies sets the pace, more and more sources of information, internal as well as external, will be accessible primarily or exclusively over the Web.

Related Web pages will contain programs that allow access to and processing of files, and these programs will as a rule be written in Java.

This was indicated by a study published by ZONA Research in the summer of 1997, among others. 

The study questioned 279 IT specialists from companies in which more than 250 employees work with computers. While at that point fewer than half reported being productively engaged with Java, nearly all expressed the expectation that they would take this step within the following twelve months. A good number of arguments were cited for choosing Java, not least the fact that it is the preferred language of programmers.

Many saw Java as the strategic direction for their company's IT: to unify applications, to simplify maintenance and to reduce its cost.

It makes sense, then, to assume that future communication based on Java applications will bring considerable advantages, both within the company and with business partners and customers. And the speed at which information can be made available and retrieved will become a common competitive criterion in the coming years, even more than before.

All modules that software suppliers integrate into their applications on the basis of Java are of great value to overall company communication and process improvement, even if, taken individually, they offer no functional expansion over the installed version. And naturally, these base modules are the prerequisite for being able to use the expansions built on them effectively.

Making engineering resources available

Compartmentalizing specialized engineering information from the rest of the company is no longer required. Given the platform independence of the Web, and moreover the simple accessibility of desired information through browser technology, there are no longer any technical reasons why construction models can be loaded, used and output only by experts.

To utilize this possibility effectively, a shift of competence is needed, as well as new rules for access rights and a reorganization of the data flow. None of this had to be considered before.

Furthermore, employees who until now have been responsible for product development data have, in many places over the last 20 years, developed the feeling that this data belongs to them and that the rest of the company cannot begin to understand it.

The new conditions amount, in some ways, to a challenge to this ownership of data and information. Many technical and expert arguments against introducing modern methods will be revealed, at their core, as attempts to protect the familiar situation.

Yet acceptance of their introduction is more a necessity than an option, because information must first be made available before it can flow.

Only if the required tool is made available in-house to the marketing person, and only if he is informed of the existence of the corresponding data, can he use a product model still under development for a presentation to sales partners, for example.

For these reasons, employees should be incorporated early and as actively as possible in the conversion process. The benefits this process can unfold depend on their understanding and use of it. The existence of the technology and a connection to the Web are by no means sufficient.

I have frequently seen wide-ranging internal intranet implementations whose only connection to product development consisted of engineers having access to company-wide data. The opposite path, which is at least as important, is often recognized only at a much later date, and sometimes the connection is not seen at all.

Thus one can take a step in the right direction and still, despite considerable potential, squander the whole thing instead of using it.

The digital product model

Within product development itself as well – to the extent that this has not already happened over the last few years – the path from the construction model to the digital product model should be taken.

On the one hand, this means that other departments, not just the development department, should be involved in deciding which systems to install in design and construction.

On the other hand, it means that once the technical barriers between the engineering office and the other company areas are removed, there is no longer any reason why the construction model should not remain valid for the entire life of the product. What should emerge is an electronic product object that can expand its features and remains flexible – far beyond the point of production.

The necessary management systems must also be prepared: systems that are not useful only in-house up to release deadlines or parts list generation, but are also in a position to guide the product data through its entire life cycle.

Naturally, such systems must be designed around the company-wide need for data access. The question of software support for Java/CORBA technology plays a major role here as well.

Budget priorities

Will hardware soon no longer play a role, thanks to pure platform independence? Or will more be achievable with considerably smaller blocks of cost?

No general answer can be given. But one should fundamentally reconsider the distribution of costs with respect to the hardware involved. The reason is obvious: the future heart of a company's IT no longer beats in the individual networked workstations and the applications surrounding them, but in the network itself.

What it makes sense to connect to this network must be decided case by case, depending on the requirements of each individual workstation.

The network computer, without hard disk or CD-ROM and with limited memory, may be altogether sufficient for many tasks – and it will certainly come at a lower price level than today's PCs.

For the time being, the workstation will remain indispensable for 3D construction, calculation and simulation. Nor will the continuously growing need for memory diminish: the larger the electronic models become and the more their field of application expands, the more important it becomes to ensure satisfactory performance.

For the foreseeable future, workstation reality will continue to run on Windows NT and UNIX. And the more the new technologies prevail, the less reason there is to change anything about this reality.

Servers, on the whole, must be considerably more powerful than is the rule today in order to make the flow of data not just possible but useful. The bandwidth with which data and Java pages are transferred has more influence on the speed of applications than any other factor.

The hardware budget must reflect this importance of the network architecture. Unwarranted savings in this area would directly counteract all efforts to modernize the IT infrastructure.

In a few large companies, high-performance network servers are already the most important guarantee of the overall functioning of information technology. Unlike the equipment of a desktop or a high-performance graphics workstation, the performance of the server affects not one particular task or one particular class of worker, but all of them simultaneously and more or less dramatically – positively as well as negatively.

Beyond that, it can be assumed that the price/performance ratio in this future central area will improve relatively quickly. It can equally be assumed, however, that new products will enter the market in fast cycles, requiring renewed investment at shorter intervals.

The new technology should have tangibly positive effects on those sections of the budget that have had to be reserved for service and maintenance of computer installations. In this respect, the good times of the host computer return: maintenance of the network takes place mainly at a central location, and considerably less downtime can arise.

3.2 A view of product development in the future

We have come to the point at which I leave presently existing products and reliable facts behind. I would like to describe product development in an application environment that does not yet exist.

For only a minute beginning has been made in the development of new products based on the new technology, and practically nowhere has a beginning been made in the effective use of this technology in engineering.

A little science fiction, then. In this chapter, it is assumed that the products already exist and that all components of the new infrastructure are in place.

But this is not really science fiction, for I am describing things that are technically altogether realistic and already implemented in some places.

In our fictitious example, we are dealing with the production of a mechanical component. We will take a look at the manufacturing industry, say, in the year 2003. Nearly everywhere, international telecommunications equipment has transfer speeds and bandwidths many times higher than in 1998.

Virtual companies have developed further. Even small companies with fewer than 250 workers are working closely with changing partners on individual projects, and the smallest companies have achieved important worldwide positions as suppliers. They handle a majority of the steps over the Internet, from the offer up to order processing.

The order

A letter icon illuminates the screen. The mail server automatically initializes the modem because there is a message to be retrieved. The server transfers a short summary: the title, sender, size, type, and time the message was sent. Then the modem deactivates again.

The development leader of the operation, which specializes in the design and tooling of truck components, clicks on the letter. The message comes from an automobile systems supplier. Besides text, it contains four illustrations. It has to do with the development of an exterior mirror.

One click on the title reinitializes the modem and opens the letter, which comes to the point after a short introduction.

"The design of the doors we’ve developed for the new X-class car is already largely completed. Attached, please find the complete model of the front doors, the portion of the cable harness you are interested in as well  as the model sketch with the design concepts from the manufacturer, dimensions and positioning of the mirror. When can we work on the model of the mirror so we  can introduce quality control?"

The development boss clicks on the first picture icon, which shows the left front door in miniature format. A new window opens, the 3D model of the door appears, and next to it are a few command buttons with which it can be manipulated. He looks at the door from all sides with the help of the available commands, zooms in on the interesting area around the mirror, minimizes the model and places it at the edge of the screen. He does the same with the right door.

Then he opens the model of the cable harness and looks at the most important details, above all the position of the connectors for the electric motors, the freedom of motion of the cable ends and the size of the compartment, which naturally is displayed together with the cable harness.

He opens the last picture, with the manufacturer's concept for future outside mirrors. This too is a space model. He peruses the geometry with the cursor and turns the gray-blue outer surface of the model red. He clicks on the red surface and another window comes up. A small table lists the relevant dimensions that must be maintained in any case for the new product. Beneath it is additional textual information clarifying what the manufacturer's concerns are and where there is room to play.

Once more he enlarges the door model. He looks in vain for the dimensions for the exact positioning of the mirror. He clicks on the project management button, and shortly the current state of the ongoing projects appears. After an overview, he chooses the construction plans for which he is responsible. He is mainly involved with the improvement of an internal 3D component library, which can wait.

After a short discussion with his co-workers, he answers the electronic letter from the systems supplier. Referring to the door model and the missing dimensions, he reports that the order can be placed immediately and indicates a possible release date.

The project Web page

Then he launches a new project Web page for the exterior mirror, in which he embeds the short exchange of letters together with the existing model data. He adds the envisioned deadline and responsibilities to the integrated project plan and sends his co-workers a short note informing them of the new Web page.

All applications used to initiate the project are based on Java. His screen is attached to a network computer that runs on a Java chip; neither a hard disk, a CD-ROM nor a diskette drive is needed. None of the applications is installed at his workstation.

Construction

The construction specialist first looks – on the monitor of her Unix workstation – at the existing models, which in the meantime have been updated with the previously missing data. Then she closes all windows except the design model of the mirror, which now takes up the entire screen.

Using a browser, she looks through the truck mirrors developed to date in the graphic component library, and selects the exterior mirrors for limousines. She scrolls slowly through the miniature pictures of the mirrors. When she sees a type of interest for the new product, she moves the cursor to the picture. Next to the icon, a small table appears with textual information as well as a graphic chart of the components of the overall assembly.

After she has chosen an appropriate version, she starts – only now – the client of the CAD system installed on the engineering server, by double-clicking on the displayed components. A new, transparent window opens over the design model and is enlarged to normal size.

After deleting the compartment, she clicks on a menu button for arranging the windows. A new menu gives her – again using graphic symbols – the choice of a two-window split. In one window she sees the design model of the automobile manufacturer, in the other the components from the library.

Both were produced with different systems. Using drag and drop, she pulls individual components and assemblies, the electric motor and the mechanical parts, from one window and places them with references onto the inner surface of the design model. The motor is somewhat too big. Without leaving the system, she checks the limits set by the manufacturer; an appropriate change in the compartment geometry is out of the question.

With the right mouse button, she clicks on the motor and opens a browser giving her access to the preferred motor types. For the motor in question, she calls up the stock data, delivery terms and the most favorable supplier. The data are attached to the motor.

Again using drag and drop, she pulls the desired motor into the working window and releases it above the existing one. A pop-up menu asks whether she wants to exchange the components, and she confirms. And look at that – the motor fits.

After treating all required components likewise, she makes a shell from the completed space model and begins constructing the inner workings of the mirror: the fastenings of the mechanical and electronic components, the stiffening ribs and the taper needed for demolding.

The remaining task steps are not significantly different: the connections to the cable harness, the adjustment of the compartment to the outer skin of the door model, the fasteners for the mirror.

Virtual engineering model

By means of URLs, she couples the functional data of the electromechanical components to the corresponding model parts. Then she stores this first version of the entire assembly and sets a flag reporting to all members that the planning phase is finished and the model is ready for further steps.

An external, specialized rapid prototyping company gets the job by e-mail to create a stereolithographic model of the compartment. An STL file is attached to the e-mail for this purpose.

Based on the model, tool construction begins with the layout of the casting mold. In the meantime, the pricing specialist has taken a look at the model. He does not have a CAD system, but has installed a few special modules for the electronic simulation of 3D models.

He first chooses the parts that interest him for the first job, and stores the reduced model – compartment interior, electromechanics and the sections of the cable harness in question – on his computer.

With a few mouse clicks, the assembly is exploded into its individual parts. Using a hyperlink, he then opens – again with the mouse – an office package on the server and a text file, illustrated with component views, which describes the individual assembly steps. It was created in the meantime by a co-worker from technical documentation, using the 3D models, on a normal office system.

He puts the headband with the third eye in place and puts on the glove connected to the computer.

The assembly is now virtually accessible to him, and he starts to put together the first mirror using the transparent instructions above the model. He can feel not only the small parts but also the compartment wall. The assembly is no problem until he reaches a point where he can hardly attach a screw without colliding with a stiffening rib of the mirror.

The path to the parts, the motion of the hands and the collision with the compartment during installation are recorded as if on video. He attaches this object, with a short annotation, as a URL to the rib in question and stores the assembly state reached.

Then he connects – virtually – the cable ends to the corresponding connectors. The assembly is finished. He takes off the third eye and puts the glove away.

From a menu, he selects function simulation for electrical mirrors, and it is started by the server as a ready-made component. A small control element appears on his screen containing all necessary functions for the movement of the mirror.

As he moves the mirror vertically and horizontally from one end position to the other, one after the other, he can follow the results of his actions on the model online. No component collision occurs, and the motions are carried out at a satisfactory speed. With the exception of the assembly problem, there are no objections to the construction on his part.

In a second intermediate file, the functional test is captured as if on video. Both simulation results are reported as available by a flag on the project page.

Within eight hours of the order, the first stereolithographic model is on the desk of the construction specialist. Together with the screen model and the simulation films, it serves as the basis for a short discussion between the project team and the development leader.

Product life cycle

The mirror has run through a series of further tests and experienced a few changes; all conceivable errors have been simulated, and as part of the manufacturer's new vehicle model it has gone through overall simulations in the wind tunnel and in crash tests. It is now part of the vehicle production line, and the model is on the street.

A customer drives the car to his auto shop. The mirror has been torn off in a freak but harmless collision, yet seems to be still intact except for the flexible fasteners that attach it to the receptacle on the door. Naturally, the question for the owner is whether the defective parts can be replaced individually.

The specialist goes to the network computer, which replaced the microfiche device a little while ago, and starts a browser. All vehicle brands appear in pictures, arranged in a circle like a tire. The manufacturer's home page is called up with the mouse, the class and model year are chosen, and a 3D model of the vehicle the customer drives appears on the screen.

The specialist turns the model with the mouse so that the mirror can be selected and enlarged. A double-click shows the assembly in a larger window; another double-click leads to an exploded view in which he can search for the individual parts required.

Beside each part on which the cursor rests for longer than three seconds, a small text window appears containing the order number, the currently nearest supplier and its location, the options for purchase as an individual part, and other information.

During access, the necessary information is automatically gathered and assembled from many separate servers.

As expected, more than just the damaged parts must be replaced. The customer nods; the specialist clicks on the order number, enters the customer file and releases the order. He takes one more look at the information window: the part will arrive the same day, and the repair will take place the next day.

And so, enough science fiction. It would be nice if all this were really possible soon. That depends a little on all of us – and on you. For only if the available technologies are actually in demand will the products come about, and only if these products are set up globally can the utopia illustrated here become a reality.

Virtual reality, of course. But in engineering, a large portion of reality will presumably become virtual in the next few years. What does that mean? "Be realistic! Demand the impossible!" In this sense: onward into the future!

Appendix: The principles of the underlying technology

This rather extensive appendix is concerned with the details of the most important components of Web technology. At the same time, it should make the reader acquainted with the central concepts that will belong to the standard vocabulary of computer users in the coming years.

Reading it is thus warmly encouraged, with a gentle warning: although this section is more theoretical, no attempt is made here either to turn the reader into a programming specialist. But a certain amount of background information is a prerequisite for being able to discuss intelligently questions that are important for every manufacturing company.

A.1 The Java programming platform

Perhaps we should begin by explaining what Java is not. As I was collecting material for this book, I came across numerous half-truths and jokes that had little originality but much staying power.

"Java? that’s a kind of coffee!" or: "Isn’t that an island?" These were questions that were returned on occasion for my own inquiries into what strategies a software company was pursuing in terms of Java – often asked, incidentally, by employees who were truly unaware that their own research and development department was already working at full steam on programming Java-based applications, or at least on researching its suitability for certain projects. 

Another stereotype was only slightly more serious: "Java? That's just a new programming language for the Internet. What does that have to do with us?"

This question is meant less humorously, but generally rests on a basic misconception, and one that is still very widespread. It is true that Java has something to do with the Internet, and the Internet's general industrial and business breakthrough coincided with the availability of Java for good reason. But the reverse conclusion, that Java has no special significance beyond that domain, is rather far removed from reality.

Short story, long preamble

In the beginning, Java was not called Java, but Oak. Before James Gosling and other employees at Sun Microsystems released the technology they had developed as Java in the spring of 1995, the goal of their work had been something else entirely. The gurus at Sun wanted to create a language specially adapted to allowing all possible types of devices to communicate with each other, for example via set-top boxes.

It’s not hard to see where the vision for this came from. If almost every toaster, every coffee machine and washing machine, every car wash and every heating and air conditioning system is equipped with chips, why should it not be possible to control them through a single remote control system, maybe even via cellular phone?

The dream of a traveling service representative who sets the heater at home to the desired temperature from his car, or of a couple vacationing on some Pacific island who dial home by modem to see whether they forgot to turn the washing machine off or the VCR on, seemed within foreseeable reach. There was only one barrier, but it was a big one: thousands of devices were controlled by almost as many proprietary command languages, and a standard was absolutely nowhere in sight.

At the beginning of the 1990s, no one was interested in the standard into which Oak was to be developed; the greater part of the industry concerned showed no interest. It was a good idea – that was confirmed from all sides – but it was clearly the wrong time, and the expected benefits were judged not significant enough. The project did not get going until 1994.

The Internet community clearly saw greater benefits. For when Sun Microsystems announced its decision to make Java available to everyone on the Web – and at no cost, to boot – there was an unexpected run on it. At lightning speed, software developers discovered the opportunity to make Web pages more attractive with Java, introducing interaction and graphics. Some companies owe their meteoric rise of the last two or three years not least to the availability of Java.

It quickly became apparent that Java is useful for much more than just formatting Web pages. But what is the primary innovation that distinguishes Java from previous technologies? To answer this question, we will first take a look at the project as a whole, and then at the language itself.

The brand name "Write Once, Run Anywhere" or a virtual machine for all

What was said of the consumer goods industry and devices programmed in machine languages applies, on closer inspection, to the entire computer world. Every hardware platform has its own operating system. Every program, whether written in Fortran, Pascal or C++, must be translated into the machine language of the system in question by means of a compiler built exclusively for that hardware; otherwise it will not run. Communication between different computers has therefore always been somewhat wanting, to say the least.

Little has changed in this regard over the years, even if it seemed otherwise to the end user, because the programs in use functioned largely identically on more and more hardware devices.

To offer the user a (relatively) free choice among a given number of computer vendors, a program must be maintained in just that many different versions. Each new release involves adaptation to all supported platforms, and every innovation in hardware or operating system has to be accommodated by the manufacturer of the application.

This is a tremendous expenditure of energy that on first glance offers no advantages to anyone. But the alternative – one operating system to which all computers would be standardized – is no more attractive. For the many peculiarities are well justified upon closer examination.

One manufacturer has specialized in high-performance graphics, another more on office applications. One offers a computer that can stand up to use in the changing temperatures and dirty environment of a workshop, another produces machines that can reliably manage vast quantities of data. One builds personal computers for your desk at home and another delivers reliable network computers for purely professional application.

Should all of these different devices turn their backs on their specific strong points in favor of a standard operating system? That has been attempted, several times in fact. The result has been failure.

But what if programs were written so they could be understood and executed by all hardware platforms without being translated or adapted? That is precisely the solution that Java technology brought to bear on this basic problem.

Of course, this required more than just inventing another programming language. And more hardware manufacturers would have to be interested in such a solution than just Sun Microsystems.

The solution is called the Java Virtual Machine (JVM), and within a very short period it sparked interest among virtually every hardware and operating system manufacturer, as well as numerous software providers, to whom the newly founded subsidiary of Sun Microsystems, JavaSoft, offered licensing agreements.

Anyone can still download the development environment for Java software, the Java Developer's Kit (JDK), free from the Web. Incidentally, this makes it possible to calculate relatively accurate figures on the scope of Java-based development. In April 1998, the then-current number was made public: up to that time, the Java Developer's Kit had been downloaded more than 2.5 million times. From this figure, Sun estimates that there are about 700,000 Java developers worldwide.

Digital Equipment, IBM, Apple, Silicon Graphics, Sun Microsystems and – standing for the unimaginable number of manufacturers of Windows and Windows NT machines – Microsoft itself: well over one hundred companies have acquired licenses for the Java Virtual Machine and have contractually agreed to ensure that every program written in Java will run on their computers without any further adaptation.

The further development of Java, though, has long since become a matter extending beyond Sun. Proposals for new specifications, extensions and improvements have come from the entire computer industry and have become part of the programming platform.

To avoid dilution and the formation of dialects, which would contradict the highest goal of platform independence, Sun has applied for ISO certification of Java. And in what may be the only case of its kind thus far, the international standards committee has agreed that, in the interest of the smooth and swift completion of Java, a single manufacturer rather than a consortium should bear the main responsibility. This provides clear rules of procedure in contested cases.

Middleware

Back to the Java Virtual Machine. Essentially, this is a more abstract form of an operating system: a runtime system lying between the actual operating system and the Java program started on it.

The term ‘Middleware’ has been coined to describe this level on which Java (and CORBA as well) is established. 

The software is executed by the virtual machine, which is also responsible for translating Java commands into the appropriate machine code.
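
To make this division of labor concrete: the following trivial program is compiled once (with javac) into a class file containing bytecode, and that identical class file can then be executed by the virtual machine of any licensed platform. The class name is, of course, only an example.

    // HelloPlatform.java - compile once with "javac HelloPlatform.java";
    // the resulting HelloPlatform.class runs unchanged wherever a
    // Java virtual machine is installed ("Write Once, Run Anywhere").
    public class HelloPlatform {
        public static void main(String[] args) {
            // The virtual machine, not the native operating system,
            // executes these instructions.
            System.out.println("Running on: " + System.getProperty("os.name"));
        }
    }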

Java thus initiates an entirely new type of client/server interaction via the Web. It makes it possible to write small software components, referred to as applets, which can be loaded by a Java-compatible browser. Applets make it possible to share executable programs and data over the Web. Reduced to the essentials, the following steps take place, represented schematically in the illustration (a minimal applet sketch follows the list):

  1. An applet is requested by a Web browser.

  2. The browser opens a new window for the applet (or starts it in the same window), which is handled like any other HTML object. (HTML, or Hypertext Markup Language, is the format used to communicate over the Web.)

  3. The browser loads the applet into the Java virtual machine, which runs in the main memory of the client computer and is responsible for execution.

  4. After termination, the Java virtual machine removes the program from memory again.
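
Such an applet need not be large. A minimal sketch (the class name and the embedding HTML attributes are invented for illustration):

    import java.applet.Applet;
    import java.awt.Graphics;

    // A minimal applet: the browser's virtual machine loads this class
    // over the Web, creates an instance and calls its lifecycle methods
    // (init, start, paint, stop, destroy). Nothing is installed on the
    // client; after termination the code is simply dropped from memory.
    public class HelloApplet extends Applet {
        public void paint(Graphics g) {
            g.drawString("Loaded over the Web, executed locally.", 20, 20);
        }
    }

In the HTML page, a tag such as <applet code="HelloApplet.class" width="300" height="50"> triggers step 1 above.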

A Java applet is thus basically a piece of portable code: Java source that has been converted into bytecode, which the Java virtual machine then executes. This conversion produces instructions on the lowest possible level without making the program machine-dependent. Java programs therefore look the same on every hardware platform, and there is no incompatibility between hardware and software architecture. (I write this fully aware that various individual deviations still arise at present. But in my opinion, the great interest of the entire computer industry indicates that these already minor inconsistencies will have become even less significant by the time of publication.)

Bytecode makes Java a partially compiled language. The conversion to bytecode covers roughly 80 percent of the entire applet; the remaining 20 percent are interpreted by the virtual machine at runtime.

This approach ensures complete platform independence. The disadvantage, familiar from every interpreter, is reduced speed: the closer a program is to the actual machine code, the faster its commands can be executed. Interpreted Java code executes about 15 times slower than compiled programs.

This disadvantage may not weigh heavily for smaller applets, especially in view of constantly increasing performance of computers. But for complex programs such as technical applications, or even for office applications, performance is one of the most important criteria for success.

Java - compiled

Thus, there are now regular Java compilers for almost all hardware platforms. Numerous just-in-time compilers are also available today, performing the conversion right at runtime. In terms of execution speed, a compiled Java program is comparable with a program written in C++.

Industry today is using extremely intelligent solutions in the form of just-in-time compilers, which achieve significantly better results than traditional compilers. Here the translation is supplemented by step-by-step optimization.

When a compiler of this type determines that of ten variables occurring, only five actually take on varying values, and that the other five could be replaced by constants, it turns those theoretical variables into constants. Recursively, this may lead to further variables becoming constants, and so forth.
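
Conceptually, and greatly simplified, such an optimization looks like this:

    public class FoldingDemo {
        public static void main(String[] args) {
            int steps = 10;         // declared as a variable, never reassigned
            double range = 500.0;   // likewise only ever holds one value
            // An optimizing just-in-time compiler can prove that both
            // "variables" are effectively constants, replace them, and
            // then fold the division below into the constant 50.0 -
            // a discovery that can cascade to further variables.
            double stepWidth = range / steps;
            System.out.println(stepWidth);
        }
    }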

According to claims of a Sun specialist, test results with compilers for electronics CAD systems have shown increased speeds by a factor exceeding 1000.

The argument that Java is too slow for C technology and similar extensive tasks is thus based more on ignorance than on concern for customer satisfaction.

To understand the success of Java, we must examine some of the particular features of the Java programming language in more detail. For it is no coincidence that not until this language was developed could a project such as the Java virtual machine be implemented worldwide and in all application areas.

All software roads lead to the object

Producing computer software is a strenuous, exciting, and often a rather tiring task. At its heart is the attempt to think out in advance on a very high abstract level what the hardware and connected peripheral devices should do when the user gives a certain command.

Not much has changed in this respect. Commands can now be given by clicking a mouse, speaking into a microphone or remotely via a modem connection, but things have actually become even more difficult.

The more complex the task of a given program, the greater the probability that the developer will fail to anticipate some path of execution. And the greater the range of software components interacting with each other, the less likely it is that all possibilities will be thought of.

For some time now, the magical incantation against these difficulties has been object-oriented programming (OOP). There have already been several generations of object-oriented programming languages. In the last ten years, C++ has more or less replaced all other alternatives in this regard.

In the area of CAD as well, we can probably say that the majority of systems on the market today have largely been developed in this language or have recently been rewritten in it.

This is not the place to discuss the details of object-oriented technology or the advantages and disadvantages of this or that programming language. However, a few words on the main problems and significant innovations, in comparison to earlier software development methods, are necessary for further understanding.

An object and not an object

The purpose of objects is to make programming simpler and to facilitate the maintenance and further development of existing source code. Everything a piece of software is capable of is encapsulated into small definitions that form these objects. A schematic definition as shown in the illustration is often used to explain this.

An object thus consists essentially of a series of variables known as "instance variables". These serve to define the status, or state, the object is in.

Surrounding this core like a membrane, and to a certain extent acting as a protective layer against undesired access to the object's state, are the methods. They define the behavior of the object, and they perform manipulations by modifying the instance variables and assigning values to them.

This characteristic of objects is referred to as encapsulation. It is the first of four minimal requirements belonging to an object-oriented language.

Only through the methods is it generally possible to access objects externally or to modify their state. To do this, one object sends a message to another. Depending on how the object is designed, it will do something with the message or not, and its state will be modified or not.

Let us consider an example. An object in the form of an icon that visually represents the printer appears in the menu bar of a system. If the user clicks on the icon, she will be informed, for example, that no driver has been loaded for the connected printer, or that no more paper is available. Otherwise, the printer will automatically print out the document currently open on the screen.

The object responsible for printing was changed from the ‘Ready’ state to the ‘Printing’ state by the event ‘click’ of the object ‘mouse’.

Different objects react to the same message with very specific types of behavior, just like objects in the real world, where not every bicycle brake responds with the same force when the brake lever is pressed. In one case the rider flies over the handlebars, in another the bike reduces its speed just as desired.

To return to the example: the same click with the same button of the same mouse causes the object ‘Save’ not to print, but to save the open file. Similarly, an object ‘Printer’ that is connected to a color printer responds to the command ‘Print’ with a different series of determinations, governing for example the choice of paper or the quality of the color printing.

This characteristic of objects is the second of the four criteria mentioned above, and it is described by the somewhat impressive sounding term polymorphism.

Objects are formed and defined as elements of classes, and classes can pass their features on. The goal is simple: if certain characteristics and methods are already defined, the developer can use this definition to create a new object that perhaps has additional characteristics, or lacks certain others.

This brings us to the third point of object technology, Inheritance.

Since the world is a big place and the number of objects is inconceivably large, it would be inconvenient, when creating programs, to be tied to the objects already loaded on the computer at hand. If an object has found its way across the Internet to some computer, it should be dynamic enough to interact with the objects there and exchange messages.

This is possible because of the fourth criterion of objects, dynamic binding.
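
All four criteria can be seen in a few lines of Java. The printer example from above, reduced to a sketch (the class and method names are invented for illustration):

    // Encapsulation: the state (the instance variables) is private and
    // can be reached only through methods.
    abstract class Device {
        private String status = "ready";

        protected void setStatus(String s) { status = s; }
        public String getStatus() { return status; }

        // Every device must define its own reaction to the same message.
        public abstract void onClick();
    }

    // Inheritance: Printer takes over the state handling of Device ...
    class Printer extends Device {
        public void onClick() { setStatus("printing"); }
    }

    // ... and so does Saver, but it behaves differently (polymorphism).
    class Saver extends Device {
        public void onClick() { setStatus("saving"); }
    }

    public class DeviceDemo {
        public static void main(String[] args) {
            Device[] icons = { new Printer(), new Saver() };
            for (int i = 0; i < icons.length; i++) {
                // Dynamic binding: which onClick() runs is decided at
                // runtime from the actual object, not the declared type.
                icons[i].onClick();
                System.out.println(icons[i].getStatus());
            }
        }
    }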

Less is more

So far, so good. These criteria are respected by C++ and other languages, including Java. The problem is rather that in most previous OOP languages, including C++, the dividing line between procedural or functional programming techniques and object technology has not been drawn consistently enough.

These artifacts from the early years of software development often fulfill their special purpose very well, and quickly too. At the same time, however, even when objects and clear class definitions are used, they often make it so that one experienced software professional can barely understand another's code. The developer himself will often have difficulties after some time when errors must be found or specific changes made in a program written in-house.

Things then remain just as they have always been in procedural languages: complex, complicated, extensive and riddled with source errors – in stark contrast to the goals, stated above, that were associated with the advent of object-oriented programming.

I use this point as an example of a whole series of "additional functionalities" that C++ has and Java does not. Bill Joy, the co-founder and vice-president for research and development at Sun Microsystems, calls Java "C plus plus minus minus".

A white paper entitled "The Java Language Environment", which appeared when the language was published in 1995, put it this way: "Everything you can do with a function, you can do just as well by defining a class and creating specific methods of this class."

The paper continues: "After all the ballast is discarded, Java is noticeably context-free. Programmers can read source code significantly faster and more easily, and more importantly, modify and extend it."

Collect the trash and take it out instead of managing it!

A second important advantage of Java lies in the management of the memory a program requires at runtime. Memory management has long been an area of concern for software developers.

An object requires a specific area in memory, and that area is assigned to it. When does the object no longer require the space? Is the object even active any more? Or is the memory area being tied up unnecessarily?

Untold sums of money and man-years have been invested in the past in answering questions such as these. Traditionally, the programmer is responsible for them, even though they have nothing to do with the actual purpose of the application. Every software house has its own tale to tell about this, and many will confirm that the problem simply cannot be solved in this manner.

But is this really still an obstacle, with memory becoming ever less expensive and PCs leaving the store these days with more than 100 MB of it? On the one hand, memory still has its limits, especially when complex, extensive applications are being executed. On the other hand, it is only possible to assign the appropriate amount of memory correctly if it can be determined at any time what is currently occupied.

Even a lay person can easily imagine that this obstacle becomes more of a problem as more applications run on the computer at the same time, from different manufacturers and for the most diverse purposes.

On this point, Java developers are free of all cares. A garbage collector takes care of the task automatically. Let’s listen to the words of the inventor of Java in the basic paper already cited:

"Automatic ‘garbage collection’ is an integral part of Java and its runtime system. While Java has a new operator for assigning memory for objects, there is no more function for freeing up memory. After an object has been assigned, the runtime system follows its status and automatically gives the memory back for other uses when the object is no longer active. 

Java memory management is based on objects and references to objects. (...) The Java memory manager follows references to objects, and when an object has no more references, that object is a candidate for ‘garbage collection’.

Java’s model for memory management and automatic ‘garbage collection’ makes programming simpler, eliminates whole classes of errors and in general offers better performance than can be achieved through explicit memory management." 

But the same thing applies here: the advantage described above is naturally most important in situations where available memory and tidy management of it represent more or less the elixir of life, for example in 3D graphic applications.

What the user can expect is fewer crashes and higher system performance. The advantage for the software developer is the chance to concentrate on the actual task at hand by jettisoning unnecessary ballast, ensuring faster releases of new versions and components.
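In Java source code, the difference looks roughly like the following minimal sketch. Memory is assigned with new; there is no counterpart for freeing it, and dropping the last reference is all the programmer does:

    public class GcDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[1024 * 1024]; // memory assigned with new
            buffer = null;                         // last reference dropped: the
                                                   // object is now a candidate
                                                   // for garbage collection
            // There is no free() or delete. The runtime system follows the
            // references and returns unused memory automatically.
        }
    }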

Another significant aid in Java is ‘exception handling’, likewise an integral component of the programming language. This means that for every defined method, Java programs must also contain a complete description of the exceptions under which the method will not work.

The result is clear error messages instead of cryptic references requiring interpretation, or even crashes out of the clear blue sky at runtime.
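A minimal sketch of this mechanism in Java, with an invented file-reading method for illustration: the throws clause describes the exceptions under which the method will not work, and the caller is forced to handle them:

    import java.io.FileReader;
    import java.io.IOException;

    public class ExceptionDemo {
        // the throws clause documents the failure modes of the method
        static char firstChar(String path) throws IOException {
            FileReader reader = new FileReader(path);
            try {
                return (char) reader.read();
            } finally {
                reader.close();               // release the file in every case
            }
        }

        public static void main(String[] args) {
            try {
                System.out.println(firstChar("example.txt"));
            } catch (IOException e) {
                System.err.println("Clear error message: " + e.getMessage());
            }
        }
    }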

There are other positive features of Java as well, for example the absence of pointers and the automatic checking of array bounds. Knowledge of them, however, matters more to programmers than to a general understanding of Java as a whole.

For the moment, we may summarize by stating that a purer treatment of objects and the absence of the error-prone constructs found in other programming languages have contributed decisively to the success of Java.

Together with the Java virtual machine, the advantages of the programming language form the Java platform and at the same time comprise the cornerstone of Web technology. 

No longer are the advantages of objects confined to a single computer or a limited network – instead they can be used for communication that extends across platforms in a way never possible before.

Java Beans and Java Foundation Classes

A few remarks on two concepts that often come up in this context are in order: Java Beans and Java Foundation Classes (JFC).

These contain nothing other than a set of basic object classes and basic components. On the one hand, they can simplify work for the developer (who then need not define everything himself, but can rely on predefined components available on every platform). On the other hand, they are essential for actually being able to create a living, platform-independent world of objects, components and applications.

Java Beans were the first specifications that allowed for interaction and communication between Java objects. They offer the developer an architecture comparable to COM/OLE in the Microsoft world. 

Basically, they represent a series of programming interfaces which, when adhered to, enable a series of functions: shared use of menus and software tools within different applications, saving the status of objects within a running application, dynamic recognition of the methods, interfaces and properties offered by objects, and visual tools for combining and modifying Beans for a specific purpose.

A Java Bean might be a button that simply reports whether or not it is "pressed" and passes this state on to some method. It may also involve complex functionality, for example generating a table.
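Such a button Bean might look like the following minimal sketch, which assumes nothing beyond the standard Beans naming conventions (get/set accessor pairs and serializability); the class name is invented for illustration:

    import java.io.Serializable;

    public class PressButton implements Serializable {
        private boolean pressed;                  // the Bean's state

        public boolean isPressed() {              // property getter: other objects
            return pressed;                       // and visual tools can query it
        }

        public void setPressed(boolean pressed) { // property setter
            this.pressed = pressed;
        }
    }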

Java Foundation Classes make everything already familiar within a single application available on a platform-independent level. One example is drag & drop of objects, independent of storage location, platform of origin or generating application: clicking on an object and dragging it from one application (for example a 3D component from a CAD system) into another (for example into a text document as an illustration).

Java Beans and Java Foundation Classes – like Java itself – are the result of the combined efforts of the entire computer industry. They also tend to confirm that a powerful programming platform has come into existence, one that will affect every area that deals with computing.

These are only two of many such items already in existence, and certainly many more will be added in the next few years.

These are signs of a lively new technology with a promising future. They also symbolize the great interest the world of software developers has found in this technology.

A.2 Java 3D

Communication on the Internet, as for most software applications, can initially be described while limited to the representation of two-dimensional objects. Text, graphics and user interaction generally succeed without the third dimension, although it naturally adds zip to the matter and facilitates usage.

Even in the area of the engineering workplace, only certain tasks demand 3D. They demand it so insistently, however, that a programming language must always be measured for this circle of users by how well it supports the creation of programs for generating, displaying, modifying, rendering and interactively manipulating models in space.

Since the beginning of April 1998, developers of systems for industrial product development have had yet another reason to turn to Java for programming. Sun Microsystems, in close partnership with Intel, SGI and Apple, has developed the interface in question (99% of it in Java, by the way), now released in a first version.

Java 3D API

The Java 3D API (Application Programming Interface) is an interface for programming 3D applications in Java. The concept is directed towards a whole series of application manufacturers. The potential products are listed in the white paper of April 1997, ‘The Java 3D API’: 3D navigation within browsers, systems for virtual reality, 3D games, CAD systems, 3D logos, Web pages, graphic design, VRML implementations and much more.

The starting point is Web technology itself. Until now, whenever it was necessary to access a 3D graphic through a browser, either an external application or a so-called ‘plug-in’ had to be available on the client, the workstation computer. The browser itself was not able to interpret and display this data independently.

For example, navigation through a 3D model stored in VRML (Virtual Reality Modeling Language) requires the installation of a separate VRML plug-in on the computer in question.

Plug-ins have the same old disadvantage for the developer: they are separately compiled, platform-specific applications. Moreover, the user senses that they are not truly integrated into the Web technology, but run in a separate window.

One of the goals of the Java 3D API was to remedy this situation: to extend browser technology, until now limited to flat display, into 3D space without an add-on and without a separate window. The other goal was to create a purely object-oriented, platform-independent programming environment that would eliminate the weaknesses of previously available 3D graphics interfaces, with drastically improved performance.

We are thus dealing with questions very similar to those of Java technology itself. Until now, programmers of graphics applications have not only had to port their entire source code to each hardware platform and operating system on which the application was to run. Software manufacturers have also had to employ experts familiar with the different graphics platforms who understood how to make optimal use of them.

The most important of these platforms are called OpenGL (SGI), Direct3D (Microsoft) and QuickDraw 3D (Apple).

A considerable amount of effort is required for special optimization of the software to adapt to the graphics system in question, in particular to actually attain real-time behavior while viewing complex 3D models. It often turns out that programmers are more preoccupied with fulfilling this task than with shaping the actual functionality of their programs. 

The industry is currently tending towards a new type of platform, called ‘High-level Scene Graph APIs’. Java 3D belongs to this category.

The programmer will no longer need to be concerned with the ‘nitty-gritty’ of rendering models in detail out of vertices and triangles to achieve the best display results. Instead, a new, essentially abstract level will be available for describing objects and composing the virtual scene itself. The graphics system will take over the details on the level of fundamental geometry.
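The classic first program for the Java 3D API shows this level of abstraction; the sketch below assumes the SimpleUniverse and ColorCube utility classes shipped with the API. The programmer describes a scene graph, and the rendering details are left to the graphics system:

    import com.sun.j3d.utils.geometry.ColorCube;
    import com.sun.j3d.utils.universe.SimpleUniverse;
    import javax.media.j3d.BranchGroup;

    public class HelloUniverse {
        public static void main(String[] args) {
            BranchGroup scene = new BranchGroup();          // scene graph root
            scene.addChild(new ColorCube(0.3));             // an object, not triangles
            SimpleUniverse universe = new SimpleUniverse(); // the viewing side
            universe.getViewingPlatform().setNominalViewingTransform();
            universe.addBranchGraph(scene);                 // the renderer does the rest
        }
    }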

With the Java 3D API, the programmer can limit herself to a single code base, since the new programming interfaces ensure the program is capable of running on different hardware platforms.

If the Java virtual machine lies between the operating system and the executable application, then Java 3D as a runtime graphics system inserts itself between the application and the appropriate ‘low level’ graphics API, which in turn is tied to the specific hardware. Java 3D API supports all currently valid standards.

There are additional developments that tend in the same direction in regard to functionality, but a certain amount of confusion reigns here at the present time.

While Microsoft was working together with HP on a project called Direct Model, SGI had concentrated in the last few years on the OpenGL Optimizer based on its own OpenGL Scene Graph technology.

In mid-December 1997, Microsoft and SGI announced their intention to bundle their previously separate efforts into a combined project with the name Fahrenheit.

This is to be a set of three new APIs. One will presumably replace Direct3D and OpenGL on the same level. A second will be based on OpenGL Scene Graph technology, and the third will be a tool for viewing large models (Large Model Visualization, LMV).

It is not yet clear to what extent Fahrenheit will be a proprietary platform, available only on dedicated computers and special operating systems.

Independently of this, the Java 3D API development community has already announced that if the new platform establishes itself successfully, it too will be supported in the future.

The 3D Universe

Henry Sowizral, the lead architect of Java 3D at Sun Microsystems, explained in an interview at the beginning of December 1997 the advantages of programming with the new interface.

He comes from Boeing, where CATIA V4 is at the center of the product development tools and complete aircraft are currently being modeled in 3D. With this background, he knows all too well the need in industry for high-performance aids for viewing virtual prototypes – and the obstacles that have previously stood in the way of a meaningful digital mock-up: lack of speed, excessive memory requirements, and bandwidth always too narrow for transferring image data.

"The performance already achieved in the first version of Java 3D is striking. The main reason for this is the way the geometry is compressed. Compared to previous technology, one-tenth the memory is sufficient, and one third of the bandwidth required up until now.

Of course, the question of platform independence is also pushing the industry in this area. You see, when someone in air travel wants to work with the 3D model for the maintenance of a machine, he can’t take the high-end graphics workstation along to the airfield or to the hangar. He must be able to call up what he wants to see on a notebook or laptop.

The applications in question can now rely completely on Java 3D for displaying models and components, allowing complete freedom in the choice of peripheral device."

The Java 3D API white paper explains the slogans that guided the development of Java 3D:

"The design of Java 3D API is based on the broad expertise of the companies involved in the project in existing graphics interfaces and in new technologies. The low-level graphics constructs of Java 3D API combine the best ideas of such low-level APIs such as Direct3D, OpenGL, Quickdraw 3D and XGL. In a similar manner, the higher-level constructs of Java 3D API are based on outstanding ideas encountered in various modern Scene Graph-based systems." 

3D and other Java media

Java 3D API is a component of Java Media. This includes a series of additional tools related to integrating a wide variety of multimedia technologies into the Web environment, for example Java Speech for speech recognition and text-to-speech conversion, or Java Animation for moving 2D objects. 

The Java 3D API is intended for creating both stand-alone applications and Web-based 3D applets. It contains constructs for forming and manipulating 3D geometry and tools for defining the structures required for rendering that geometry.

Incidentally, Java 3D also includes sound objects. The reason for this is as obvious as it is enlightening: just like the eye, hearing works in space. Just as we assign a position in space to a viewed object, we do the same with sound. And the acoustic effect of a sound depends strongly on its spatial environment. It is quite possible to hear the difference between a chair that falls on the cement floor of a fully loaded garage and the same chair falling on the parquet floor of a dance hall. 

To display a 3D object as realistically as possible, the sounds must not only behave stereophonically, they must be in direct relation to the visible 3D world.

The target range of hardware and software platforms Java 3D addresses extends from low-cost PC games and software renderers at the lower end, through mid-range workstations, to highly specialized high-performance 3D image generators.

In contrast to previous graphics interfaces, Java 3D API is based on a new ‘view model’, not on a ‘camera model’. A clear line is drawn dividing the physical world from the virtual world on the screen.

Write Once, View Anywhere

Here again, the goal is to avoid any dependence on hardware. In this case "Write Once, View Anywhere" is an adaptation of the well-known Java brand name. 

Whether the viewer is using 3D glasses; whether her current viewpoint is tracked by the computer by means of a helmet or a ‘third eye’ on the forehead; whether the 3D world is represented on a flat screen or in a 3D box, such as the automobile industry increasingly uses for its virtual prototypes – the 3D model should suffer as little impairment as the viewer does.

Henry Sowizral comments: "This last problem is absolutely not to be taken lightly. If the helmet is working with an aperture angle of 30 degrees, for example, but the programmed camera perspective and the represented angle of view on the screen are 90 degrees, the result is a certain amount of uneasiness and nausea.

We have separated these two levels from each other completely, since in the future an application will be developed less and less for a specific device – generally the programmer will not even know where the Web might carry his program, or what auxiliary devices the viewer might be using."

Java 3D therefore clearly distinguishes between the positioning, orienting and scaling of objects and spaces by an application developer on a ‘ViewPlatform’ on the one hand, and how these objects are finally displayed by the Java 3D Renderer on a specific device on the other.

So as not to put installed software in industry – and the investments it represents – at risk, the Java 3D API is provided with so-called ‘loaders’ supporting previously common graphics platforms and graphics file formats. Various CAD formats will also be supported, such as STL (stereolithography format), Wavefront (Alias) and VRML.

A.3 The CORBA object infrastructure

Up to now we have seen how Java and Java 3D provide the programmer with new ways for developing object-oriented programs capable of running not on a specific platform, but rather on practically all platforms in an unlimited worldwide network. We have seen that these programs can be developed more quickly and more securely than in previous programming environments, and that they promise the user a whole series of new possibilities for supporting his work.

One of the core issues was the fact that this is a technology that makes it possible to create intelligent objects. The question is whether Java alone is adequate to allow the object, once it is created, to respond intelligently in combination with other objects.

An additional cardinal problem is that a large number of other systems already exist. Some of them are more or less object-oriented, while others are not at all. Their source code consists of millions of lines written in the most varied programming languages and translated with the most diverse compilers, and of course into a large number of machine languages.

Must all programs now be rewritten in Java? Must the end user buy everything new and throw all existing applications in the trash to enjoy the benefits described? Did she get into the field of computers too early? What is the actual point of the objects, components and systems now possible if they cannot communicate with existing ones?

No rules, no object interaction

We should be clear from the start: without an infrastructure that makes it possible for the widest variety of objects to interact with each other, without clear rules for a platform-independent computer world accepted by all involved, the finest programming languages and the best objects are of little value. The ambitious goal hiding behind the abbreviation CORBA is to create just such an infrastructure.

In explaining the CORBA project in the following pages, I rely essentially on a book by Robert Orfali and Dan Harkey, published in 1997 by John Wiley & Sons, Inc. under the title ‘Client/Server Programming with Java and CORBA’. Let us see how they describe what CORBA is all about.

"The Common Object Request Broker Architecture (CORBA) is the most significant (and most ambitious)  Middleware project our industry has ever taken on.

It is the project of a consortium with the name Object Management Group (OMG) that includes more than 700 companies representing the entire spectrum of the computer industry. The exception worthy of mention is Microsoft, which has its own competing object broker, the Distributed Component Object Model (DCOM).

For the rest of our industry, CORBA is the next generation of middleware. The CORBA object bus defines the form of the components that live in it and how they interoperate. By choosing an open object bus, the industry has also chosen to create an open playing field for components."

In the fall of 1990, OMG published the Object Management Architecture Guide (OMA Guide). The four main elements of this architecture are:

  1. The Object Request Broker (ORB) defines the bus.

  2. CORBAservices describe contextual conditions for objects that supplement the bus on the system level.

  3. CORBAfacilities specify the horizontal and vertical application frameworks that are used directly by so-called business objects.

  4. The Business Objects themselves and other applications are application objects, so to speak the end users of the entire infrastructure.

This architecture and all its components – and herein may lie one of the mysteries of CORBA’s great and rapid success – are not program code at all, but consist entirely of specifications and interfaces, based on proven examples from the companies that are members of OMG.

The specifications themselves are formulated in a neutral Interface Definition Language (IDL). They delineate the basic requirements of a component. One could also say that they represent the contract each client must enter into upon initiating a relationship with a component.

CORBA allows intelligent components to discover each other and to communicate with each other via the object bus. Along with these services, CORBA also offers a rich set of instruments for creating and deleting objects and for accessing them by name. Permanent storage and the definition of relationships among components are also possible, and much more.

CORBA objects can be located anywhere on the network. They have the form of binary components the client can access by calling methods. The language and the compiler used to generate the object are completely transparent for the client. He does not need to know where the CORBA object is located or the operating system on which it is executed. It may reside on the same computer as the client or in the same local network, but also anywhere else in the world, being connected to the client only by a telephone line.

One particularly valuable aspect of the project is that the object can just as well consist of C++ object classes or of thousands of program lines in FORTRAN 77 or COBOL. There is no difference for the client. The only thing the client must know is the interface of the component, the server object.
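A small sketch may make this transparency tangible. Assume a hypothetical IDL interface Calculator, compiled to Java stubs by an IDL-to-Java compiler; the client below neither knows nor cares in what language or on what machine the server object is implemented:

    // assumed IDL: interface Calculator { double add(in double a, in double b); };
    import org.omg.CORBA.ORB;

    public class CalculatorClient {
        public static void main(String[] args) {
            ORB orb = ORB.init(args, null);                 // join the object bus
            org.omg.CORBA.Object obj =
                    orb.string_to_object(args[0]);          // object reference
            Calculator calc = CalculatorHelper.narrow(obj); // narrow to the IDL type
            System.out.println(calc.add(2.0, 3.0));         // remote method call
        }
    }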

The required architecture of CORBA, and in particular the hardware-specific ORB kernel, is currently available for nearly all computers. It is the task of the computer provider to create and maintain this kernel.

Non-platform-specific interoperability became possible with version 2.0 of CORBA, by means of the generally binding Internet Inter-ORB Protocol (IIOP). Essentially this is nothing more than TCP/IP extended by a few CORBA definitions regarding information exchange between objects.

CORBA also supports other protocols for specific tasks and environments that will not be discussed in any more detail here. The CORBA/Java book named above is heartily recommended to any reader interested in a more comprehensive study.

The Java/CORBA object Web

The reader should by now be able to see why the authors of the book cited above state in one place: "We hope to have convinced you that CORBA and Java are ‘made for each other.’"

While Java provides for neat, platform-independent programming of objects and portable program components, CORBA offers the infrastructure with which the objects can communicate and work with each other across all hardware and software platforms. One without the other is only half of heaven.

It is thus little surprise that optimal integration of these two projects is being worked on at full throttle in all the nooks and corners of the computer industry, in the form of development tools and environments that comply fully with the CORBA specifications and are also completely Java-compatible.

The result is CORBA/Java ORBs. They are special IIOP ORBs written completely in Java to ensure their platform-independence. Their IDL compilers generate exclusively pure Java code, which can thus be loaded onto any hardware platform that has a Java runtime system.

A Java ORB of this type allows any Java applet to call methods of CORBA objects directly over the Internet via the IIOP protocol, without any detours. This circumvents the normal Java route via HTTP. Client and server can communicate with each other directly. The disruptive bottleneck of normal Internet access is avoided and the performance of interoperability is improved.

CORBA/Java ORBs are coming into existence at a tremendous rate. Some providers are not even waiting for the official OMG specifications to be released.

Without wishing to provide a complete listing of available Java ORBs, the most important should be mentioned briefly. There are two, produced by Iona and by Visigenic/Netscape; in addition, Sun Microsystems has undertaken the development of its own Java ORB, named Joe.

Iona’s OrbixWeb

Iona is currently considered the leading provider of CORBA technologies. Its ORB is named Orbix. It supports client/server objects in C++ on more than 20 platforms, including twelve Unix variants alone, OS/2, NT, Windows 95, OpenVMS and MVS.

The Java ORB available from Iona since 1997 is called OrbixWeb and is a Java implementation of Orbix based on the IIOP protocol. At first, the approach for this ORB was Java on the client and C++ on the server. Currently, full Java support is offered on both the client and the server side.

Visigenic’s Visibroker

The second in the group is Visigenic, like Iona still a young company, and one based on new technology since its inception. Its ORB is called Visibroker for Java, is written completely in Java, and was the first to support Java objects on the client and on the server as well. Here too, IIOP serves as the basis of communication. In the future, Visibroker will be integrated into every Netscape browser.

Two things about the CORBA/Java story are noteworthy:

  1. It proves how serious the interest of (almost) the entire computer industry is in an open standard for object-oriented systems and how far this standardization has already progressed.

  2. It illustrates that the projected openness of the computer world is anything but a hindrance to lively competition. On the contrary, it promotes and requires a myriad of developments. We will shortly learn that this applies most specifically to the very applications that will be developed on this basis in the next few years.

A.4 Microsoft, the exception that proves the rule

After so much positive material on the shared efforts in the direction of platform independence, it is gradually becoming necessary to deal with the exception. Microsoft does not belong to OMG; its component world is (initially) limited to a single platform, namely Microsoft’s own operating systems, Windows and Windows NT.

Of course, a proprietary arrangement and dependence on hardware and software, as they continue to exist here, make good sense from the point of view of a world monopoly in PC matters.

The marketing strategy followed in the past can only be continued successfully as long as software on the PC remains directly tied to the operating system. The strategy is this: with every upgrade of the operating system, the software that runs on it is due for an upgrade as well. Only by owning the newest version of both can someone stay current and participate meaningfully in functional innovations.

To that extent, the widespread disinterest in CORBA is not surprising. It is also hardly irritating that Microsoft is attempting, with Visual J++, to establish a variant of Java that runs only on its own platform, and that in no way allows every Java applet to run without restrictions.

Politics of delay

Of course, the observer cannot help suspecting that this is a defensive strategy for a limited time. For what can Microsoft change about the fact that the World Wide Web supports CORBA/Java objects and applications on the PC as well? The Java bytecode of Visibroker for Java, for example, comprises an insubstantial total of 100 Kbytes, which can simply be downloaded over the Internet.

The way it looks, the computer giant will not even be able to prevent its own platform from being swept into the wave of openness. As a side note, if Bill Gates has the right advisors, sooner or later he will be swimming on this wave less stubbornly and more actively – secure in the hope of being able to leave his mark on the next computing generation as well.

In reality, any other course would only weaken his position. At least the growing public debate, now worldwide, about his use of a monopoly on operating systems and the products tied to them would indicate as much.

What’s good about DCOM

The good side of the proprietary DCOM project is that just because it does not need to consider other platforms, it has gone well beyond CORBA in one area. This area is interoperability.

For years, the most diverse applications have not only been able to exchange data among themselves using OLE/COM, but also even use the same object.

With the extension OLE for Design & Modeling Applications – originally defined by Intergraph and subsequently further developed by the consortium Design & Modeling Applications Council (DMAC) – this also includes 3D objects and transparency.

An office application, for example Word, can use an Excel table to create a form letter; the graphic result of a finite element calculation can be inserted into technical documentation by drag & drop. And by double-clicking on an integrated object, the user enters, almost without noticing it, the programming environment from which the object was generated.

This is exactly the type of interoperability that is now sweeping the rest of the computer industry with CORBA and Java. While the rest of the industry had to concentrate first on making communication possible across platform boundaries, however, object interoperability of this kind comes only in the second step there – the reverse of the DMAC route.

In my opinion, this positive aspect of Microsoft’s solitary course has also contributed significantly to the present NT wave among users of engineering software. At last it has become possible to make data and models from the product development area accessible to other levels of the company. At last no computer other than the CAD workstation is required to draft a report or to access a project plan – as long as everything runs on the same Wintel platform.

This trend will presumably diminish to the extent that reaching this goal no longer requires putting everything on a single hardware platform.

That is everything essential to say on this point. Microsoft and the success of NT and Windows do not refute the general development towards open systems; rather they confirm it.

After this brief look back on the history of proprietary computer culture, we will now turn to the future again to see what new type of client/server computing is opening up before us through CORBA, Java and the Web.

A.5 Multi-level computing

Up until now, I have often spoken of ‘client’ and ‘server’ and done so as if these were familiar terms requiring no further explanation.

In reality, however, some explanation is required here to avoid misunderstanding. For ‘client’ and ‘server’ are frequently understood today – the reader may already have picked this up at various places in the text – somewhat differently than they were over the last fifteen years.

In particular, these terms no longer stand for hardware components. Instead, they have taken up residence in the software. The form of computing known as ‘multi-level computing’, still in its infancy, has much to do with this.

Originally, client/server computing stood for a network environment in which a relatively large number of workstations (and later also PCs) were connected to one or more servers. Client here meant nothing more than workstation computer. Ethernet was the medium connecting the network components together.

The advantage compared to the single workstation is obvious: not only could data be exchanged among individual stations within the network, but the network also offered the possibility of storing shared data in a central location, on the file server or the data server. This was a step in the direction of ordered electronic data management and greater data security.

In the next step, which currently characterizes all installations, a backbone was added to the network in the form of a database, and the network was increasingly strengthened by a data management system. Oracle rose to become one of the leading software providers. Along with access to data and files, access to applications over the network also became possible. To the extent the implementing software supported it, a whole host of users – even if a finite number – could be serviced by one network installation, and the application no longer had to be installed on each individual workstation.

With the availability of the World Wide Web and especially with the advent of object technology, as was described in the previous chapter, it is little wonder that hardware plays a more subordinate role.

What do client and server mean today?

When we speak of clients and servers today, we frequently mean specific components within the world of objects. In other words, these terms describe the role an object fulfills in an interaction.

An object is a client whenever it calls a method of another object, whenever it loads components, whenever it starts an application. On the other hand, an object is a server whenever it makes its methods available, whenever it offers a service.

The careful wording with ‘whenever’ is simply due to the fact that many, if not most, objects can constantly change back and forth between the roles of client and server. The flexibility of modern object technology makes this possible.

In a certain sense, this corresponds to the properties and behavior of objects in the real world. The same printer serves the object paper in one instance by placing black and white text on it, while in another instance the object paper serves the printer by being loaded into the paper feed slot.
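In Java, these changing roles can be sketched in a few lines; the classes are invented for illustration. Printer is a server to its own callers and at the same time a client of the Paper object it is handed:

    class Paper {
        void accept(String text) {             // server role: offers a service
            System.out.println("printed: " + text);
        }
    }

    class Printer {
        void print(Paper paper, String text) { // server to its own callers...
            paper.accept(text);                // ...client of Paper here
        }
    }

    public class RoleDemo {
        public static void main(String[] args) {
            new Printer().print(new Paper(), "hello");
        }
    }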

Wrappers

This too contributes to the common good: an object need no longer be a small individual entity; today an entire application can be viewed and handled as an object.

One of the immeasurable advantages of the Java/CORBA object world is that these technologies make it possible to provide older applications – still needed and still running – with so-called ‘wrappers’ that let the rest of the object world see them as modern objects.

Applications such as these may be of enormous size, written in COBOL and may also reside on a traditional host computer. This in no way prevents a small Java applet run over a standard browser from picking up the calculation results of this ‘legacy’ application and forwarding them to the front end on the screen.
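A wrapper of this kind can be sketched in a few lines of Java; the legacy program name "legacycalc" is invented for illustration. The old application is started as an external process, and its result is presented to the object world as an ordinary method call:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class LegacyCalcWrapper {
        // present one result of the legacy program as a method call
        public String calculate(String input) throws IOException {
            Process p = Runtime.getRuntime().exec(
                    new String[] { "legacycalc", input }); // start the legacy program
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            return out.readLine();                         // forward its result
        }
    }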

The fascinating thing is that this most recent step in the direction of object technology permits very specific, targeted use of existing hardware and software without giving up the attributes of the newest development.

Thick and thin clients, fat servers

To complete the confusion, the terms client and server also continue to be used as before to describe physical objects such as computers or peripheral devices within a network environment. Now, however, they need no longer be limited to a few servers at a specific location; through the Internet and global networking, they can encompass the entire computer world.

As the reader may already have feared, here too the boundaries have become fluid. What functions in one case as a server may be the client in the next instant, for another operation.

Something new also comes into play: along with workstations and PCs, computers with very low capacity can now also be used as clients, requiring neither hard disk nor floppy drive nor CD-ROM drive.

Network Computers from IBM and JavaStations from Sun are the first on a market that has not yet begun to see large sales figures.

For obvious reasons, these particularly meagerly outfitted devices are called thin clients. They work exclusively with applications based purely on Java; the current version of the application, along with the current data, is delivered to them at runtime.

What we are still lacking is an explanation of what is meant today by multi-level computing.

Compared to traditional networks and the monolithic applications that run on them, software on future networks will be designed and will work much differently.

Instead of gigantic, self-contained packages with horrendous numbers of program lines, applications will be broken down into small components and objects that, taken together and because of their interoperability, achieve the required purpose not worse, but even better.

The portability of the software also makes it possible to centralize the greater part of today’s maintenance, upgrade and installation activities. At the same time the flexibility of the individual user is not diminished, but is actually increased.

Three levels may generally be distinguished in this new type of network, even if they may trade roles in concrete cases:

  1. The level of the data servers, domains and databases.

  2. The level of the application servers and

  3. The level of the front end or client, where data is imported and exported.

Table XXX shows an attempt to put this admittedly highly abstract material into the proper light through a comparison of the most important aspects of traditional and modern networks.

This ends the appendix. If the book and the theoretical background information have made you curious, there are hundreds of books you can consult to broaden and deepen your knowledge of Java and CORBA.

For the restructuring and the rethinking you will face in the next few years, however, I hope to have armed you adequately.

