Avoiding Vendor Lock-in

The goal of object-oriented tool development is to avoid vendor lock-in. Lock-in is not determined by the cost of the application but by the method of its implementation. The level of integrated customization determines the level of vendor lock-in.

One-of-a-kind software (fully custom or “one-off” software) is both desirable and necessary when no other solution is available. Unique software is custom-built for one specific purpose; it cannot be used by multiple customers and cannot be rented.

Customizable software (tightly integrated software) was a natural development of productized custom software. Customizable software was an attempt by vendors to create generic applications that could be set up according to individual business requirements, specifications, and needs. In a very real sense, customized software is harder to upgrade, expand, or replace than completely unique software: customized applications have as much single-purpose code as they have COTS application software, but the single-purpose software is tied to both the particular COTS application as well as the particular COTS application version. This means that all of the customizations must be upgraded whenever the application upgrades, if it can be upgraded at all.

Open standards software development (API tool development) is a direct response to customizable software development over the last 20 years. API tool development focuses on isolating the impact of application-specific code. For example, rather than implementing customizations entirely inside a particular vendor’s customization language, API developers create the small custom tools required to implement specific business requirements. They isolate required functionality in small custom code components and then build an API layer to communicate with a specific vendor’s application. This way, when the application requires an upgrade, developers only need to upgrade the API layer: the customization layer does not significantly retard the ability to upgrade or replace any particular vendor product at any time.
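The API-layer idea can be sketched in a few lines. The names below (`RepositoryAdapter`, `check_out`, `publish`) are hypothetical, not from any vendor’s actual API; the point is only that the business logic talks to an interface, and a small adapter is the only code that knows about a particular product or version.

```python
from abc import ABC, abstractmethod

class RepositoryAdapter(ABC):
    """The API layer: the only code allowed to talk to a vendor product."""
    @abstractmethod
    def check_out(self, doc_id: str) -> str: ...
    @abstractmethod
    def check_in(self, doc_id: str, content: str) -> None: ...

class InMemoryAdapter(RepositoryAdapter):
    """Stand-in for a vendor-specific adapter (one per product, or per release)."""
    def __init__(self):
        self._store = {}
    def check_out(self, doc_id: str) -> str:
        return self._store.get(doc_id, "")
    def check_in(self, doc_id: str, content: str) -> None:
        self._store[doc_id] = content

def publish(repo: RepositoryAdapter, doc_id: str) -> str:
    # Business logic depends only on the adapter interface,
    # never on a particular vendor's API or version.
    return repo.check_out(doc_id).upper()

repo = InMemoryAdapter()
repo.check_in("DM-001", "hello")
print(publish(repo, "DM-001"))  # -> HELLO
```

When the vendor ships a new version, only the adapter class is rewritten; `publish` and everything above it is untouched.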

NGES-MS is facing this situation with Vulcan today. Vulcan tightly integrates the functionality of Epic with Visual SourceSafe. As a result, NGES-MS is tied to Epic 4.3.1. Although Arbortext charges an annual maintenance fee, we cannot upgrade Epic to take advantage of new features. Because Vulcan is customized COTS software, upgrading Epic cannot occur until Vulcan upgrades all of its 4.3.1-specific code.

And Vulcan is all custom code that attempts to bridge the gap between two commercially available products.

That’s vendor lock-in.

In addition, NGES-MS is facing a deadline: it is highly probable that Interleaf will stop functioning with the next release of Windows (Windows Vista). Interleaf has been in use at NGES-MS since 1995. Interleaf might still have retained long-term prospects if other factors were not at work to place the documents and data it supports at extreme risk. NGES-MS discovered that with the release of Windows XP, RDM became inoperable. Windows XP is scheduled to reach end of life in 2006. Extended support will end 5 years after mainstream support ends, in 2011. Documents still in Interleaf after 2011 will likely be unrecoverable.

Today, 78 percent of NGES-MS Launcher documentation is not supported in the Vulcan/CAE data model. Given the laundry list of features currently facing the Vulcan development team, it’s unlikely that Vulcan will be able to support the additional document types in time, and we would simply be exchanging one highly customized, proprietary system for another.

On the other hand, S1000D is written with interoperability in mind, without limiting itself to the lowest common technical denominator. Although it may appear rigid, or preoccupied with tracking irrelevancies, the specification defines application behaviors; it requires vendors to provide data interoperability (a “way out”). If a product claims S1000D conformance, developers supporting this format will never find themselves held captive. If a vendor is compliant, support staff will always have options, at significantly less expense, than if they were dealing with a proprietary or highly customized environment.

S1000D is not a tool. By its own definition,

S1000D is an international specification for the procurement and production of technical publications … for [use in] the support of any type of equipment, including both military and civil products. (Chap 1.1, Page 1)

For the first time, interoperability is considered important from more than the technical (software engineering) perspective: a usability perspective is also involved. The uniform presentation of content increases both author and user efficiency. If every manual is created in accordance with the same specification, and content is always presented in the same format, the learning curve for new personnel can be drastically reduced. In fact, a novice user becomes productive much more quickly because the manual no longer gets in the way of the job at hand. The majority of the S1000D specification addresses this very issue. By thoroughly defining how manuals should be constructed, the specification improves overall quality for every single person who uses the manual.

Besides defining the schema and enforcing interoperability, the real revolution S1000D offers is that it is completely modular. A data module is a data module is a data module, and output is done via assembly. S1000D will force users to change both the way they work and the way they “have always done things.” New policy decisions will be required. S1000D requires changes because it demands following best practices. The end results, the benefits, really do outweigh the costs.
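The “output is done via assembly” idea can be sketched minimally. The module codes below are made-up placeholders written in the style of S1000D data module codes, not real DMCs, and the content is deliberately trivial; the only point is that each data module stands alone and a publication is just an ordered assembly of references to modules.

```python
# Hypothetical data modules, keyed by placeholder codes in DMC style.
# Each module is self-contained and can be reused by any publication.
data_modules = {
    "DMC-EX-A-00-00-00-00A-040A-A": "<para>Description of the system.</para>",
    "DMC-EX-A-00-00-00-00A-520A-A": "<para>Removal procedure.</para>",
}

# A publication is an ordered list of module references,
# loosely analogous to an S1000D publication module's content list.
publication_order = [
    "DMC-EX-A-00-00-00-00A-040A-A",
    "DMC-EX-A-00-00-00-00A-520A-A",
]

def assemble(order, modules):
    # Output is assembly: look up each referenced module and concatenate.
    return "\n".join(modules[code] for code in order)

print(assemble(publication_order, data_modules))
```

Reordering the list, or referencing the same module from two publications, requires no change to the modules themselves; that is the modularity the specification enforces.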

Can projects using S1000D be tailored, expanded, and customized? Yes.

Do you have to use their process? Yes.

Do you have to change your processes? Perhaps.

The best part of the S1000D specification is that, once agreement occurs between user, customer, supplier, and other internal groups, the implementation can still be tailored to business requirements.

The “known standard” benefits are self-evident.

As with other documentation formats, S1000D reduces support costs, facilitates modularity and reuse, and allows users to view electronic documentation via a common Web browser or other interactive electronic technical manual (IETM) viewer.

S1000D presents itself as a flexible system. In contrast, most of the commercial world is adopting IBM’s Darwin Information Typing Architecture (DITA) format. Many early DITA users have struggled with its overly generic character.

This was a conscious choice by the DITA architects: DITA legislates only the framework (topics, tasks, references, and so on), but requires users to flesh out the details through “specialization.” Those who have used DITA out of the box have customized it and are now looking for something better.

S1000D is better. While it has many similarities with DITA, it goes even further. S1000D represents an emerging technology. This means that the tools are not fully developed and that vendors have not yet fully embraced it. However, because many organizations are adopting it, vendor support will grow. Working with this standard, we can leverage the same benefits that we would get by using any other.

S1000D is specific to air, land, and sea systems. S1000D captures more information (for documents supporting these products) than any documentation format to date. The specification is essentially complete; S1000D is ready to use today. It offers more leverage for authored information, and for tools that serve documents, at a smaller-than-document level. In addition, S1000D has significant COTS vendor support and a wide community of practice to draw upon.

Because the DOD has specified that new documents be S1000D-compatible, and because migration is inevitable, going to the new system now is most cost effective. S1000D is XML, and XML transformation tools are immediately available. S1000D is the full-blown schema; all that remains are refinements of the data that the S1000D format contains.

We see several valuable side effects of this implementation in Aerospace and Defense. First, we believe that S1000D provides a neutral response to the DOD. This gives the entire SP community adequate opportunity to say: “S1000D is being investigated.” Second, the trial implementation provides a genuinely independent evaluation.

Third, because the trial team will be supporting Vulcan output as well as IETM and PDF, as a bonus they will be able to provide the roadmap for Vulcan-S1000D transformations. If S1000D/Vulcan interoperability ever becomes desirable, the SP community will know the requirements, how much work will be involved, and a starting point (the trial team’s transformation code). This information becomes available without derailing current deliverables or distracting the Vulcan implementation team from their primary focus.

Tight-vs-Loose Integration

Updated: 2017

I started working with XML and single-sourcing systems in 1999. Although both SGML and XML had been around a long time, vendors were still figuring out a lot about how the overall process should work because every implementation was different: DocBook did both too much and too little and was frequently customized. DITA was just coming into being. Content management systems were fairly new and not yet sophisticated. They, too, were dealing with specific, specialized problems (translation vs supply chain vs software configuration management).

And in 2005, I got caught working on a migration project from Interleaf and RDM (the Interleaf CMS) to SGML and LiveLink.  Interleaf and RDM ran only on Windows NT, which had reached the end of support from Microsoft (the true EOL). They had to get content out of RDM, out of Interleaf, and to something that wouldn’t take out all of their business assets if the system crashed.

End of life for software is serious. In some ways, there’s no way around it. As an industry, software vendors haven’t solved this problem, but they’re also not motivated to do so (the money is in the upgrades). We can’t fault them; that’s how it is.

Smart purchasers take this into account and think about the issues of tight vs loose integration and vendor lock-in when they’re looking at investing in an enterprise tool.

Here’s what I was up against (from my journal in 2005):

IT has just started migrating all the documents out of RDM and into Livelink, but everyone who’s ever used RDM + Interleaf is having a horrible time understanding what RDM did and what its purpose was. It’s all due to how tightly they were integrated by Interleaf to start with. Interleaf integrated their authoring tool and their content management tool tightly, because they knew how both sides were implemented (and were doing both). They took shortcuts and hid things from the users. Now, the users can’t make the shift because they never saw the lines between the products.

You need to know where products start and end. So do your users.

If you have a CMS from one vendor, a publishing tool from another, and an authoring tool from a third, you absolutely must know where each vendor draws the lines for their responsibility. When push comes to shove, and you’ve got a deadline and money at risk if you don’t meet that deadline, you need to know which vendor to call based on where the problem is and where they believe their responsibility starts and ends.

I can’t tell you how often I’ve heard frustration from people who have mixed systems and can’t get a problem fixed because all the vendors are pointing fingers at each other.

It’s impossible to diagnose a problem if you don’t understand where your vendors draw their lines in the sand.

At one company, I supported a team of 25-30 writers using Arbortext Editor on a flat file system. On any given day a writer would come to me and say Arbortext wasn’t working. And every single time it wasn’t working because the writer hadn’t mapped the server drive to their Windows machine before getting started. Arbortext was working fine, but because the writer saw it all as one system (one process from their point of view), they didn’t notice that the problem wasn’t with Arbortext at all: it was an OS issue. Arbortext doesn’t map drives; Windows maps drives.

The experience of end users is better when they are not faced with complicated manual procedures to get their jobs done.

Avoiding vendor lock-in does not mean mix-and-match and building glue code. Building glue code actually ties you tighter to both sides: Glue code is 100% application specific.

In 2005, when things were new, you didn’t really have a choice. In 2017, vendors have learned a lot. Products are much more mature. Users are savvier. We don’t have to do today what we did then.

(In fact, if you ask any of us who have been around a long time and who have been on the purchasing end of big IT systems, we’ll tell you that it’s totally worth doing, just NOT the way we did it.)

When you mix-and-match application vendors, or choose a tool that’s more toolkit than professional, it’s instinctive to start down the road to the glue code that bridges systems together.

Unfortunately, many implementers grow their consultancy by building tools (typically over and over again, by hand each time).

I was guilty of this myself, back in 2005 when the field was new and applications were still figuring things out:

I like being able to develop custom tools that automate the systems as much as the systems automate process, without being overly toolkit-y. The one thing we all know is that no vendor lasts forever, so I try to avoid vendor lock-in and tight vendor integration as much as possible. There’s all kinds of space when products aren’t tied together.

Today, things are so very different than they were and my view of vendor lock-in is not so pedestrian.

Open standards matter most where there is stability and asset control.

I still have old 3.5″ floppy disks with Word 6 documents on them. I’ll never be able to get at that content. First, who has a 3.5″ disk drive anymore? And, second, Word 6?

Word 2017 has the ability to output both HTML and XML, two formats that are great for archival purposes. XML and HTML are plain ASCII text. We’ll always be able to get at that.

Publishing? Well, that’s a bit trickier. There are open standards for publishing XML content. However, publishing is a transformative process. It’s not an archival process. It takes one structure (XML) and turns it into something viewable by some technology. Over the last 10 years we’ve seen brand new display technology (mobile phones, tablets).

For new technology what matters is the Beginning Format (get to the content asset) and the End Result (display format). The path to get from one to the other isn’t what’s important. As long as you can get from one to the other and provide your asset in the format your customer needs, that’s what matters.

Products that support access to their application through an open-standard, common programming language and a supported, professional API can only help you along the way. You can always replace the transformer path (and you will, because none of us know the future).
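A disposable transformer path might look like this minimal sketch: the content asset lives in neutral XML (the Beginning Format), the display target is HTML (the End Result), and the function between them is exactly the part you expect to throw away and rewrite when display technology changes. The element names here are illustrative, not from any real schema.

```python
import xml.etree.ElementTree as ET

# Beginning Format: the content asset, held in neutral XML.
source = (
    "<topic>"
    "<title>Mapping a Drive</title>"
    "<body>Map the server drive before starting the editor.</body>"
    "</topic>"
)

def to_html(xml_text: str) -> str:
    # The transformer is disposable: when a new display technology
    # arrives, replace this function; the XML asset is untouched.
    root = ET.fromstring(xml_text)
    title = root.findtext("title")
    body = root.findtext("body")
    return f"<html><body><h1>{title}</h1><p>{body}</p></body></html>"

print(to_html(source))
```

Tomorrow’s `to_mobile` or `to_epub` reads the same `source`; only the End Result side changes.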

Problems with tight integration come when you can’t get your asset out of the starting format (Word 6). So you’re stuck. You can’t get at what you need in the first place. Forget about taking it anywhere…

When you sit down to decide your enterprise implementation make sure you’re looking at the right things when you’re evaluating vendor lock-in, tight integration, and longevity.


Single-Sourcing is a Software Project (not a Documentation Project)

Understanding the staffing requirements for a production quality single-sourcing system depends on two things:

  1. understanding the difference between IT and engineering
  2. understanding why a group, which has not (at least in recent history) required specialized skills to operate, suddenly does

Think of IT as a mall and TechPubs as a store in the mall.

Both the mall and the store have specialized staff who help run their business. Both groups have staff members who are responsible for the operation and the success or failure of their particular business. The store depends on the mall to provide a physical location in which the store can do business. The mall depends on the store to provide a source of income. The mall depends on the stores it houses to provide attractions that bring consumers to the property. The stores depend on the mall to provide a safe and secure location for customers to do business. Both groups have responsibilities that center around the same customers; both groups work together to achieve the same goal.

But they don’t use the same staff to operate both businesses. While the two groups would have several staff members with overlapping skills, these same staff members have completely different focuses. For example, both groups have sales associates. The sales associate for the mall would never be the same sales associate for the store. If the staff for both groups were the same, the mall would be responsible not only for the operation and success or failure of the mall itself, but the success or failure of the stores operating within it as well. This would mean that the mall’s sales associates, who would normally be selling space in the mall, would not be doing that because they would be spending their time selling products at every store in the mall.

The focus for each group is fundamentally different. For the mall sales associates, every sale is a single sale: a one-off deal. For the store sales associates, every sale is a repeat sale: every sale is tied to the dynamically changing nature of the rest of the store.

The store can afford to try different approaches, to move products around. If one display works, then everything’s great; if it doesn’t, then the sales associates change the display and try something else. One failure isn’t fundamental to successful operation. More importantly, because of the frequently changing nature of the store, one bad display, particularly one that isn’t there very long, isn’t visible to customers.

It’s different for the mall sales associates. Stores contract locations for long periods of time. Malls cannot afford to have individual store spots empty for long periods of time. One bad contract can mean success or failure: and that success or failure is glaringly obvious to everyone who comes into contact with that mall.

The same things are true of engineering and IT. Engineering groups can try different things out; they can change things on a daily basis in order to improve the way they operate.

IT doesn’t have that luxury. IT focuses on projects that support the entire business structure. IT’s failures are failures that everyone sees. A failure for IT is a big problem: it’s lost money. As a result, IT cannot afford to have people tied up supporting engineering groups on a daily basis. Successful IT organizations bank skills in order to deploy teams to implement and manage the large parts of one-off projects and then move them off to the next one.

TechPubs is one store in the mall. TechPubs has products and customers, and requires a certain amount of infrastructure support from IT in order to operate effectively. However, in recent history, TechPubs has not required specialized skills that overlap with skills traditionally found in engineering or IT organizations. With a single-sourcing system, that’s no longer true.

Implementing a single-sourcing system is not a trivial replacement of applications. Since the mid-80s, technical publications departments have been able to trade authoring tools with very little assistance: Let’s use Interleaf! Now, let’s use FrameMaker! Look, PageMaker! Microsoft Word!

These applications could easily substitute for one another on the author’s, editor’s, and text processor’s desktop computers.

With an XML authoring and production system, this is no longer true.

For example, authoring consistent and publishable documents in Microsoft Word requires adherence to a Word template (.DOT) that governs the styles, spacing, and pagination of a document authored in Word. IT would maintain the Word application on each author’s desktop, but it would not normally develop the .DOT template. Word user specialists would create, maintain, and deploy the .DOT template to the authors using the application. Just as IT maintains the authoring and publishing applications but not the templates required for publishing from a desktop publishing application, IT would not do that development for an XML publishing system either.

Look at a typical PeopleSoft implementation. IT would maintain the application, machines, and application specialists. They would typically develop and maintain the workflows that are required for efficient processing. The HR department would also have a PeopleSoft specialist, who would administer the application, train new users, and provide or restrict access to certain records in the database. In an XML authoring environment, IT would maintain the database and repository from the application perspective, but someone from TechPubs would administer the repository, train users, and provide or restrict access to the documents in it.

In a very general sense, every IT project is a “one-off,” a singular project that supports part of the enterprise. IT costs are overhead costs. As a result, if an IT project doesn’t work, it’s a big problem: the company has lost money on it.

In contrast, functional tools teams can support proactive projects: try something; if it works, good; if it doesn’t, it’s not a problem. Engineering organizations are expected to try new approaches and apply new technology to existing problems. Likewise, functional tools teams who support engineering products can also do proactive software development.

For a single-sourcing project to be successful, the way you define the line between the IT team and the functional team (the “tools” team) becomes important. In many ways, this depends on the charter IT has in your enterprise. At Juniper, IT had a very specific job: whenever the company purchased a new application, IT would dive in, learn everything there was to know about it, and support it. But they rarely went beyond that point. If an organization needed daily support, it added a functional tools team that took things from there.

IT builds the roads, the functional group buys the cars; the functional tools team customizes and maintains the cars, all while following IT’s rules for driving.