The value of a style guide is well-established across many writing domains. A style guide standardizes presentation across content and contributors in an organization. Creating a style guide is an important step in the maturation of a technical writing team. The bigger the group, the greater the need, but even a sole contributor should create and maintain at least a style sheet, both as a reminder and to blaze a trail for others. I can point you to an excellent master list of style guides in many fields.
But you know all that. I’m interested in what goes into a technical documentation style guide and how it gets selected for inclusion.
If you survey the field, you’ll see that style guides typically cover trademarks, vocabulary, formatting, and grammar and punctuation rules. Now, that’s an interesting mix of subject areas:
Product and trademark names unique to your organization. These are important intellectual property and your organization is the sole authority on their usage. To retain legal possession, you must adhere to usage rules; if you don’t, no one else will. This is an obvious subject for a style guide. Your list is volatile and needs periodic revision as new products are created.
Technical vocabulary. Every product domain has its own vocabulary. It’s important to use technical terms and acronyms accurately, and a master glossary is useful to new members of the organization, so this is also an obvious subject. Because both you and your competitors are innovating, this list is even more volatile and needs periodic revision.
Formatting. This subject is less obvious. You’d think that the first practitioner in an organization would establish formatting styles and that, after a few thousand pages were published, the format would become the organizational standard. Why list formatting rules when they’re visible by inspection? Wouldn’t everyone just go with the flow rather than rework existing documents or needlessly introduce differences?
Grammar and punctuation rules. Okay, what’s going on here? Grammar and punctuation should be uncontroversial and universal, right? Agreed-upon language authorities (in the US, perhaps the American Heritage Dictionary and Garner’s Modern American Usage) ought to suffice. The rules of grammar and punctuation aren’t volatile, at least not within the lifetime of an organization. Why include any of these rules, and why a specific subset?
In addition to the good and sufficient reasons I’ve listed to create and use style guides, there are also unstated reasons. Those reasons are what I’ll delve into in this post.
Style Guide as Armistice Agreement
As soon as there are two content creators, disputes—excuse me, differences of opinion—can arise for which there are no “right” answers. For example, there’s no fundamental, grammar-based reason why a bulleted list, a construction not discussed in any grammar text but universal in technical writing, should use one dingbat character rather than another. Consistency of presentation, not grammar, is the overriding factor. If the group has a manager, choices can be made “because I say so,” but that solution is rarely satisfying or lasting.
Creating and ratifying a style guide doesn’t end disputes. Newcomers naturally want to do things differently, perhaps through experience at another organization with a different style, perhaps to make their mark in an attempt to “shake things up,” perhaps due to lack of attention to detail. As a publications group grows, the stasis will be disturbed, at which point everyone realizes it’s a style guide, not a style rulebook. At my first tech-writing job, I watched in shock as a veteran writer stormed out of a group meeting at the merest suggestion of “legislating style.” I have come into groups and advocated the use of semicolons, which I like, only to discover that the group forbade their use. You would be amazed at how disputes over such things can fester. When my company was bought by another, we asked on a conference call with their writing team why a particular presentation was used. The raw anger that radiated back from the speakerphone made us never question it again. It turned out that people on that team had quit over this issue. Alrighty then…!
When you encounter items in an established style guide that seem out of place, I put it to you that they represent settled disputes between writers who might no longer be with the organization. In important ways, a style guide is an armistice agreement among disputing parties. The peace can last for years, but it is always vulnerable to well-intentioned violations.
Collegial agreement on matters of style doesn’t end the problem, either. Writers are constantly buffeted by external source material written by people unaware of the house style or used to a previous employer’s style. The refrain, “Why don’t you do it this way? It’s better,” is ceaseless. Sometimes the call is coming from inside the house, from writers who simply don’t remember the precedent. The bottom line is that maintaining the status quo is as challenging as changing it.
Style Guide as Brand Protection
“Ipads are magical.” True or not, you will never see this sentence in any Apple document because it abuses an Apple registered trademark. An organization’s trademarks are invaluable intellectual property, and an important part of any style guide is how to treat them. Marketing will decide how trademarks are handled, but the rules must be top of mind for all content creators. Developers, in particular, unknowingly mistreat trademarked terms, and it helps to have the rules written down in a style guide to show them.
Style Guide as Controlled Vocabulary
Some writers approach technical writing like creative writing. In technical writing, employing a rich vocabulary is a drawback; consistency is important, even if it’s boring. The better approach is to regard technical writing as a subset of writing. Your writing needs to use a controlled vocabulary, whether to implement Simplified English if you use it (and you should), to facilitate translation (by which I mean save money on it), or simply to ensure consistency. Everywhere I worked, we fought over, and agreed on, verbs for manipulating the user interface: typing, not entering, keystrokes; pressing, not hitting, keys; selecting, not arrowing to, menu items; and clicking (and now tapping) buttons.
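To make the idea concrete, here is a minimal sketch of the kind of check a team could automate, assuming a hypothetical house word list; real tools (and real style guides) are far more complete.

import re

# Hypothetical house-style list: discouraged term -> preferred term
CONTROLLED_VOCABULARY = {
    "hit": "press",
    "enter": "type",
    "arrow to": "select",
}

def check_vocabulary(draft):
    """Report each discouraged term found in a draft, with the preferred replacement."""
    findings = []
    for avoid, prefer in CONTROLLED_VOCABULARY.items():
        for match in re.finditer(r"\b" + re.escape(avoid) + r"\b", draft, re.IGNORECASE):
            findings.append((match.start(), avoid, prefer))
    return sorted(findings)

sentence = "Enter your password, hit OK, then arrow to Settings."
for offset, avoid, prefer in check_vocabulary(sentence):
    print(f"offset {offset}: use '{prefer}', not '{avoid}'")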
Technical terms proliferate and evolve more rapidly than words in general usage. Companies competing for mindshare have the incentive to create and promulgate new terms. But inventing a word because you don’t know the right one is still bad practice. Warning: it can be hard to tell the difference between clever marketing and lazy writing.
I regard the need to stick to a simple, controlled vocabulary as one of the greatest frustrations for new technical writers. The right word is appreciated, but synonyms are not. (Even in my post-technical writing, I’m discovering that a florid style, which I looked forward to unleashing, is discouraged by manuscript editors. Apparently, Faulkner is out, but Hemingway is still in.)
Style Guide as Seawall
The enemies of consistency work from within and without. For every new writer who comes in and says, “I know a better way,” there are ten managers who say the same thing, and twenty engineers with relentlessly poorly written specifications that you’re pressed to copy in whole.
Remaining consistent requires shoveling against the tide. Having a style guide to fall back on is welcome reinforcement. An established style guide serves as a seawall against arbitrary change. Saying, “It’s in the style guide this way,” always worked for me with outsiders, even when I was the one who wrote the style guide. It’s less effective against new team members. For many years, I came into new jobs as an agitator. I’d like to think my passion for disruptive improvements cooled as I better understood group dynamics, but it was more that I came to embody the status quo and didn’t want newcomers disrupting me. In any case, over time, you need a mechanism to update the style guide while being mindful of the ever-growing library of existing documents.
Style Guide as Editor’s Festivus
Finally, at first glance it’s odd to see spelling and grammatical rules in a style guide. Out of all the rules of grammar, why are these few included? Did writers ever argue against subject-verb agreement?
The reality is that such entries represent the cri de coeur of some previous editor or group of writers who wished to banish specific, persistent errors. With turnover, there may no longer be that one team member who habitually wrote “irregardless,” but the rule’s still there and still right, so it remains.
Even things you would think indisputable are disputed. Every writing team I’ve ever worked in seemed to have one grammar maven in conflict with at least one person who’d never before been expected to conform to writing rules. Style guides can contain specific grammar rules—hopefully, not many—that provide reinforcement to those who once needed it.
If you’re fortunate enough to have editors, they have a little list, like a scouting report, that they apply to your drafts every time. For example, working for a major network vendor, I not infrequently mistyped “network” as “newtrock.” The scouting report on me surely included that one.
Consider yourself fortunate if you go through a style guide, as you should, and find that you don’t break most of the rules in it. If there are some things you’re not used to doing, either as a matter of style or because you didn’t know that point of grammar, resolve to start doing them. And if you have some personal writing weaknesses, be honest and suggest adding them to the list. You may improve your writing, and you may help someone else improve as well.
On the job, circa 2000
Retirement Day, 31 December 2022
My father worked at many different jobs in his life. He wound up as a technical writer at Raytheon in the late 1960s and early 1970s, documenting the Hawk, Sparrow, and what became the Patriot missile systems. As it has transpired, I became a second-generation technical writer and spent a full, 40+ year working life almost exclusively in that field. I worked for ten companies with hundreds of writers, editors, artists, and production experts, and thousands of managers, developers, testers, product managers, and marketers. I know I wrote hundreds of full manuals and thousands of documents, because midway through I started keeping count. Whether owed to excellence or inertia, few can boast so much experience.
A lot changed during those 40 years. My first writing tool was a typewriter, and my first cubicle mate, an experienced writer, wrote his drafts out longhand, in pencil, on legal paper. The focus of documents was the text because the format was controlled by a production team and because anything non-textual was difficult and expensive to produce. The review, approval, and print production cycle took 13 weeks. My last publication I wrote at home on a laptop; some colleagues worked half a world away and I never laid eyes on them; the source files were stored half a continent away; I was the writer, editor, artist, and production manager; the product manager complained when the review/approval/production cycle took more than a day; and the publication appeared simultaneously in the application help, on the company website as web pages, and as a print PDF. The scope of what we documented, how we documented it, the tools we used, and where we published changed utterly.
But some principles that I was taught as an associate writer, and some things I learned along the way about working with information and people, remained true when I retired as a site manager for the writing team. Quite a few people have written textbooks on technical writing, but I haven’t seen one on how to be a technical writer—to do good work, to do it efficiently and effectively, and to be a good member of a work group. My plan here is to offer occasional series on things I’ve learned over the years that don’t appear in textbooks on technical writing but are still, in my opinion, valuable and important to know.
To declare my biases, almost everything I wrote as a technical writer was in the domain of computer software, and I wrote for end users, system administrators, operators, and architects. I will try to make my posts in this series as widely applicable as possible and to separate facts from my opinions. You can judge for yourself what, if anything, is useful and applicable. I’d love to hear your comments as well.
“The Celebrated No-Hit Inning,” a 1956 science-fiction short story by Frederik Pohl, contains, as a prelude establishing his premise, a vivid sketch of machines taking over a job once thought open only to humans. An inventor showed up at a major-league ballpark with a robot batter. As skeptical players guffawed, the robot whiffed on the first batting-practice pitch it got, but the inventor said it just needed a minor adjustment and applied his screwdriver. Sure enough, the robot drove the next pitch sharply, and every pitch thereafter. In short order robots replaced every major leaguer, and soon the idea of a human ballplayer competing against a robot became laughable.
Robot athletes are still science fiction, but robots performing repetitive tasks, such as assembly-line work, have been around for decades. Many repetitive white-collar tasks have also been automated by computer software. My particular interest is creative pursuits, once thought immune to replacement, where artificial intelligence (AI) is coming on strong and companies are racing to explore and monetize it. Indeed, the tsunami of developments in this field is too much for me to capture in this blog post. But I can offer my opinion.
How did we get here?
If you think about it, we’ve been following this path for thousands of years. The history of civilization consists of the replacement of human labor, first by animals and then by machines. Oxen plowed the fields that replaced foraging; horses carried travelers; looms wove cloth; engines replaced horses. The inevitable result was more output from fewer people, freeing us to pursue other endeavors.
Broadly speaking, these advances have been steady and successful. For example, at one time three quarters of the farmland in the United States was used for growing food for horses, and manure in city streets was an urgent public health threat. We used horses as working animals right into World War II before we could replace their labor with engines. Since then the US horse population has declined by half, but I don’t think they would complain today about their lot as mostly recreational animals or pets, and the farmland is freed up for other crops. Similarly, mechanical devices have completely replaced many categories of manual labor. The overall impact has been great for consumers but not, at least in the short term, for the workers affected. The Industrial Revolution was recent enough that we have stories about the impact of job disruption on the displaced workers. But though we have many more people today, most of us can still earn a living one way or another, because new jobs have been created that tend to be less dangerous and less repetitive.
Until the 1940s, pin setting was done manually, often by children (as in this 1910 photo) [Wikipedia]
Admittedly, on a small scale things are not so rosy. As Harry Truman said, a recession is when your neighbor loses his job; a depression is when you lose yours. Replacement hasn’t happened yet in baseball (though automating ball-and-strike calls can’t come soon enough). But AI is improving at an incredible rate across a huge range of job titles. Hay farmers, pin setters, telephone operators, copyeditors, technical typists, technical illustrators—are we next? I want to focus on creative endeavors in general and writing—OK, technical writing—in particular.
A brief history of artificial intelligence is in order. The first generation of AI programs explored the limited world of game theory. It was trivially easy to code an unbeatable tic-tac-toe program, but more complex games proved challenging. A well-known example is chess. Because the set of all possible positions in a chess game is beyond storage even today, chess programs have to figure things out move by move. The first generation of programs were given the rules of the game, the relative value of the pieces, and a library of known solid openings. Without “knowing” what constituted a promising line, the programs relied on clever algorithms to evaluate a position and select the best move, plus brute hardware power (CPU speed and memory) to evaluate positions as fast as possible. Still, it took nearly 50 years of that approach before Deep Blue, an IBM program running on a supercomputer, defeated world champion Garry Kasparov in an over-the-board match at regular time controls, and even then it needed two tries.
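For the curious, here is a toy illustration of that first-generation approach (look ahead through the legal moves and pick the best outcome), using a simple take-the-last-stick game rather than chess; it is a sketch of the idea, not how Deep Blue actually worked.

# Toy game: players alternately take 1 or 2 sticks; whoever takes the last stick wins.
# The program "figures things out move by move" by searching ahead, just as early
# chess programs did (with hand-written evaluation rules and far more hardware).

def legal_moves(sticks):
    return [m for m in (1, 2) if m <= sticks]

def minimax(sticks, maximizing):
    # Terminal position: the player who just moved took the last stick and won.
    if sticks == 0:
        return -1 if maximizing else 1
    scores = [minimax(sticks - m, not maximizing) for m in legal_moves(sticks)]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Evaluate every legal move and choose the one with the best guaranteed outcome."""
    return max(legal_moves(sticks), key=lambda m: minimax(sticks - m, False))

print(best_move(7))   # prints 1: taking one stick leaves the opponent in a losing position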
Eliza was an early rules-driven natural-language program developed in 1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Eliza was the first chatbot, designed to simulate conversation. Weizenbaum modeled its interaction style on a psychoanalyst, and it worked unexpectedly and disturbingly well: users engaged in personal conversations with Eliza, and Weizenbaum’s own secretary asked to interact with it privately. In a foreshadowing of today’s events, users misconstrued what Eliza was doing.
I once worked for a company that provided building blocks for businesses to write rules-driven decision-making programs. If businesses could codify their decision-making processes, then they could automate those processes. Wouldn’t it be great if an insurance claims adjuster or a bank mortgage officer could have access to a program that always gave expert, by-the-book decisions, never skipping a step or indulging in favoritism? The issues that emerged with expert systems (not from my former employer but in general) were twofold. First, I say “access,” but of course businesses preferred to replace workers. Second, in practice expert systems coded with existing policies and trained on existing data were unwittingly given, and thereafter acted on, past errors and existing biases, either in their algorithms or in the input data. For example, one justice-system program (not based on our software!) designed to recommend pre-trial release decisions and evaluate the risk of recidivism was tougher on Black defendants than on white defendants given the same case facts. These kinds of problems left businesses exposed to lawsuits, the very thing they were trying to avoid.
The next generation of AI programs used huge amounts of data to make their decisions. This technology was exemplified by IBM Watson, which was fed millions of documents and given the raw computational power to search through them in real time. In 2011 Watson impressively defeated “Jeopardy!” champions Ken Jennings and Brad Rutter in regular game play.
The next advance in AI was to dispense with programmed rules in favor of a “neural network” that weighed connections in data and produced a “best” outcome. In 2017 AlphaZero, a purpose-built game program designed by DeepMind, was given the rules of chess and set to play against itself to learn what patterns of play led to victory. Crucially, DeepMind’s engineers gave it no chess knowledge beyond the rules; it developed superhuman chess skill without the benefit of a single programmed evaluation algorithm or even an opening library. After 44 million(!) self-practice games, the AI crushed Stockfish, the best chess program available, in a 100-game test match without losing a game. (In a thousand-game rematch played in 2018 under conditions more favorable to its opponent, AlphaZero routed the then-newest version of Stockfish with 155 wins, 839 draws, and only 6 losses.) The game logs showed that AlphaZero repeatedly evaluated as winning positions and lines that its opponent evaluated as winning for itself. Not even human chess masters understood AlphaZero’s “thinking.”
The newest generation of AI, which has captured the imagination of computer experts and laypersons alike, combines neural networks with training on huge amounts of domain information. What can it do?
In a large study, an AI trained on mammograms more accurately predicted breast cancer risk than human radiologists.
College admissions officers already expect that AI-generated admissions essays will figure prominently this year, and are planning to use AI tools to detect AI essays.
Companies are already deploying AI to screen job applications.
A researcher prototyping AI drug discovery thinks he can reduce the cycle time from two years (and billions of dollars) to two weeks.
It seems no creative job is beyond AI’s reach. In art, photography, interior design, and other creative endeavors, AI’s advances have been stunningly fast, and in some cases have already reached the threshold of professional viability. AI services such as Midjourney, Stable Diffusion, and DALL-E can generate whole images in response to simple text queries. (Imagine using alt-text to generate an entire image.) One man used an AI to generate an image that won first prize in a digital fine-arts competition at the Colorado State Fair, to the outrage of other contestants. In an unsettling recent interview, a game designer recounted balking at a graphic designer’s fee of $50 per image and turning to an AI to get what he wanted in a few minutes and for a few cents. How can anyone compete with that?
“Théâtre D’opéra Spatial” (Jason Allen via Midjourney) [https://tinyurl.com/8n64bd76]
In November 2022, the company OpenAI (founded in 2015 and the creator of DALL-E) released ChatGPT, a chatbot built on its Generative Pre-trained Transformer (GPT) large language model, trained on ten trillion data points. ChatGPT generates responses by predicting, based on its input, the next likely word and building forward. Just six months later, ChatGPT is already responding to 100 million queries a week. Companies are using the latest version, built on GPT-4, to create AI-powered chatbots in a dizzying array of fields. Microsoft, Google, and others are racing to incorporate AI into their core products. (Imagine a Microsoft Word that can not just format your report but write it for you.)
Because of its broad training, ChatGPT displays expertise in a huge range of fields. This infographic summarizes an OpenAI report on GPT-4 and its impressive performance on selected professional examinations.
Copywriters, social-media content creators, and now editors are suffering the effects of AI competition. Scribendi AI is marketed as a productivity tool for overworked copyeditors. AuthorONE is marketed to publishers and offers manuscript assessment. Paperpal is aimed at academic writers, whose work can be formulaic. Sudowrite and Marlowe are aimed at fiction writers; the latter will optionally upload and anonymously store your manuscript in a “research database.” In these and other fields, AI offers—threatens?—to remove the middle folks entirely and generate output on its own, not in days but in minutes.
Issues in artificial intelligence
The capabilities of generative AI confound researchers, who can’t say where the responses are “coming from,” and who wonder whether AIs have developed “theory of mind”—that is, the ability to model what others are thinking. If this sounds unsettling to you, you’re not alone. On May 29, 2023, the nonprofit Center for AI Safety issued a sensational one-sentence open letter: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
It was signed by over 350 AI researchers and tech executives, including OpenAI CEO Sam Altman himself. Microsoft (of all companies!) has called for government regulation of the field. Alrighty then… What are the risks? Off the top of my head I can think of the Skynet scenario and the risk that jobs will be replaced or eliminated faster than workers, and the economy, can adapt to the change. Altman also cited the risk of spreading misinformation. Mark Twain said a lie can travel halfway around the world while the truth is putting on its shoes—and he never saw the Internet.
Like Johnny Five from “Short Circuit,” an AI in training requires a lot of input, but it can’t evaluate the quality of what it’s fed. Garbage in, garbage out. Back in 2016 Microsoft trained an adaptive chatbot on general Twitter input and some comedic riffs and set it loose on Twitter to interact with users. The hope was that the program would learn to sound like just another user, but had they ever used Twitter? The results were disastrous: within hours, an organized troll attack corrupted it into spewing racist and sexist chatter, and in less than a day Microsoft pulled the plug. In theory, a small-scale AI could go through your company’s internal information and then answer queries about it, solving the common problem of navigating poorly organized Confluence pages or SharePoint sites. But as a practical matter I know that regardless of how it’s stored, half the stuff in every company’s internal library is already obsolete. Quickly finding a wrong answer is not an improvement.
Predictive AI is non-deterministic. It produces output one likely word after another, but not necessarily the most likely next word. If you ask the same question twice in a row, you can get slightly different responses. That’s a deliberate feature, meant to keep AI from sounding robotic. But because of this, even OpenAI admits that ChatGPT, while it can create new content, can also “hallucinate” false statements of fact, because it isn’t actually checking anything. This is a fatal flaw in every field of endeavor where correct answers are required. Has Roger Federer won five Wimbledon singles titles or eight? Feed an AI a mix of current and outdated information and it might say both. For AI, hallucination is both a feature and a bug. So far, the quality triangle holds: AI produces content that’s fast and cheap, but too often bad.
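Here is a minimal sketch of why identical questions can yield different answers: the model samples from a probability distribution over the next word rather than always taking the single most likely one. The probabilities below are invented for illustration, not taken from any real model.

import random

# Invented next-word probabilities for "Federer won ___ Wimbledon singles titles"
next_word_probs = {"eight": 0.55, "five": 0.30, "seven": 0.15}

def sample_next_word(probs):
    """Pick a word in proportion to its probability (not simply the most likely one)."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# Ask the same question twice; the answers can differ.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))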
Even high-quality input, taken from websites instead of chats, or amassed from billions of images scraped from the web, can be problematic. Today AI can create an image simply from a prompt describing the desired result, but many current examples don’t pass careful inspection. The prizewinning image above looks fantastic, but the closer you look the weirder it gets. Today AI can produce photorealistic output but has a hard time with hands. The field is advancing so fast that it might just take a minor adjustment, but it’s not perfect yet.
Where did all that information come from? Is it curated, or just indiscriminately collected? Examples have surfaced of AI-generated art that contains stock-photo watermarks. Josh Marshall of Talking Points Memo points out that much of the visual information that’s been hoovered up is copyrighted, meaning that generative art is demonstrably built on intellectual theft. I stand in solidarity with my graphic-arts colleagues to denounce this dishonest practice!
Similarly, I am personally nervous about startups that offer “help” with manuscripts. This may sound like a valuable service for a great price. But are you submitting your great American novel for editing or for harvesting? Are you even writing it?
If you ask a generative AI for a sonnet in the style of Shakespeare, it can easily provide one because it has the data on what words the Bard used and how he strung them together. If you don’t specify a style subset, the results are still coherent but rely instead on the expressive skill of the crowd, which in practice tends toward the trite. Much worse, because statements of fact are predicted, not looked up, they ain’t necessarily so. Like Weizenbaum’s secretary, early adopters of AI are misinterpreting what they’re getting.
In a recent legal case, an attorney in a civil action used ChatGPT to prepare a brief, but the AI actually generated nonexistent case citations and the other side noticed. Another case arose when a journalist trying out ChatGPT asked for a summary of a gun-rights case, and the answer stated that a certain Georgia radio host was accused of embezzling from one of the parties. The man actually had no involvement and filed a libel suit against OpenAI. The first lawsuit from a patient whose cancer goes undetected by an AI is inevitable, even if the AI is objectively better than a human. Conversely, what happens if a human double-checks and mistakenly overrides a correct AI diagnosis?
In the software field we know about “vaporware.” Elon Musk introduced vaporware to the automotive industry by announcing that the Tesla Autopilot was “fully operational” and releasing Full Self-Driving as a beta, which many owners immediately began to use. Subsequent developments showed the technology was far from ready for public use. Autonomous control (definitely not my area of expertise) consists of the vehicle sensing its surroundings, interpreting sensor data, and using those interpretations to make driving decisions. Cruise control, a simple application, has been available for decades; adaptive cruise control, where the vehicle slows or stops depending on what’s happening ahead, has been available for about ten years. But a fully self-driving car requires highly complex software with hundreds of subroutines making hundreds of driving decisions at unpredictable moments. As yet, it hasn’t been safe enough.
Colleges are already looking to set AIs to catch AI-generated admissions essays. If existing data is tainted with human error, can’t we just create an AI to generate data for use by another AI? Researchers warn that an AI feedback loop could lead to “model collapse.”
Getting past AI’s formative years won’t end the problems. The invasion of Ukraine is the first full-scale drone war, and just as with airplanes in WWI, both sides have quickly escalated their role from reconnaissance to weapons delivery. So far, the drones on both sides are remotely piloted. But it’s possible to add AI to create autonomous devices that can identify enemy combatants on the battlefield and make kill/no kill decisions. (God help the civilian who emerges from his basement during a skirmish.) Naturally, people are hesitant to delegate life-and-death decisions to computers. As of this writing, the Pentagon hasn’t actually deployed such devices and instead relies on remote operators. But killer robots are entirely possible and for now are restricted only by doctrine.
How can we differentiate our work?
We’ve fought the battle of quality versus quantity for years, so this is familiar ground. I would suggest that it’s foolish to replace one good worker with ten bad ones for the same price, but in the current environment companies increasingly disagree. What, then, makes our work worth paying for?
So far, at least, ChatGPT struggles with English language and literature. We can’t expect our natural advantage to last forever, but we have it for now. Looking more deeply, large language models draw from many examples of common, trite phrases, which is reflected in their output. (Strunk and White is not overweighted in the samples.) More deeply still, AI only simulates intelligence. Its output isn’t necessarily logical or even correct. AI output will need as much review as, and probably more than, the work of a human writer.
I worked at a startup whose CEO was reluctant to publish useful technical details because “our competitor will just steal it.” I had a problem with not helping our users and I said so. The STC Ethical Principles specifically call on us to create “truthful and accurate communications” under the core principle of honesty. Until now, members have interpreted this to mean “write facts, not marketing hype.” But today it differentiates us from AIs, because no self-respecting tech writer makes up facts. The ethical standards also call on practitioners to work to help users, implicitly guiding us not to work to hurt them. Given the current state of AI, that’s also a differentiator, because we know the difference between helping and hurting.
There’s a lot of input on the public web by both professional and amateur technical writers. I’ve already seen an attempt to use it to generate technical documentation. It’s easy to generate information in the style of Microsoft because the company has published a widely adopted style guide. But technical facts are company specific. If your company wants technical documentation in the style of your company and accurate for the next release of your product, it will first have to input its existing documentation, which may or may not be public, and then the specs for the next release, which are proprietary, copyrighted material. Otherwise, the new documentation will read like everything else on the web in both style and substance. Is your company willing to feed proprietary material to a third party, or to develop its own dedicated small-language-model AI, just to avoid paying a technical writer? That doesn’t seem cost-effective or wise. And to avoid model collapse, it doesn’t seem that humans can be removed from the content-creation equation any time soon. I like the idea that my work might become the gold standard somewhere!
But let’s look past the period of minor adjustment. Let’s assume AI technical output becomes as good as human-written output—cheaper, faster, and just as good. What then?
As a technical communicator, there were topics I could write in my sleep, and possibly sometimes did. For example, documenting a new object in a management tool that follows the CRUD model (create, read, update, and delete a manageable object) meant writing an overview of the object and then topics on how to create, display (or read), update (or edit), and delete it. If the architectural model was well designed and followed, every task proceeded in parallel for each object, so I could copy and paste the text from an existing topic… Admit it, you’re asleep already. Once the development team establishes a model (or pattern), a new error message, set of permissions, API method, or supported disk drive can be coded, tested, and documented by rote by junior workers. Work that follows a strong pattern and can be done by rote is very likely to be given to an AI. Hint: that includes the code. But as far as writing goes, these topics, which made up the bulk of my later assignments, were never the ones I looked forward to writing anyway. The complaint I’ve had, and heard, for years is that the rising volume and quickening pace of our work is overwhelming. If we don’t have to do the boring stuff, is that actually bad?
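As an illustration of just how rote that work is, here is a sketch of a generator that stamps out CRUD topic stubs from a template; the object name and the sentence pattern are invented, not drawn from any real documentation set.

# Toy generator for rote CRUD documentation topics.
# The template sentence and example object are invented for illustration.

CRUD_VERBS = {
    "create": "Creating",
    "display": "Displaying",
    "update": "Updating",
    "delete": "Deleting",
}

def crud_topic_stubs(object_name):
    """Return a stub title and opening sentence for each CRUD operation."""
    return [
        (
            f"{gerund} a {object_name}",
            f"This topic describes how to {verb} a {object_name} in the management console.",
        )
        for verb, gerund in CRUD_VERBS.items()
    ]

for title, opening in crud_topic_stubs("storage pool"):
    print(title)
    print("  " + opening)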
I question whether AI is a current threat to our profession, but we have to think about the future. Another approach to the challenge of AI is to level up, not surrender. As one marketing slogan put it: “AI won’t replace you. A person using AI will.” If you can regard AI as just the latest in a line of productivity tools going back to the typewriter, you can take advantage of it and get through the disruption. (I wonder what to say to new practitioners, since the skills required to enter the profession keep increasing; at least they don’t have old skills to unlearn.)
It’s going to be a bumpy transition. But I think we will get through to the other side. Perhaps there are higher pursuits to which we can turn our attention.
On my last day of work, 31 December 2022, wearing a sweatshirt from my first full-time job 44 years earlier
On New Year’s Eve 2022 I walked out of a deserted office and into another phase of my life. I don’t think of it as going “off” to retirement but rather going on to something new.
Some retirees say they got out “just in time.” My wife, who retired from a hospital lab in January 2020, absolutely did. Back then the world was weeks away from recognizing the coronavirus as a clear and present danger with no vaccines or even effective treatments available. Everyone was anxious, but front-line medical workers became rightfully frightened. Me? I was fortunate enough to continue working full-time at a white-collar job from a home office. Nobody in our house caught covid (knock wood) and my employer never pressured anyone to return to our office. I liked my work and I liked my boss in California, and I had an ergonomically sound setup and a ten-second commute. I could have continued indefinitely. In fact, I know people who have.
But while I was learning how to document microservice applications, single-sourcing from DITA to multiple outputs, in the back of my mind I still remembered how to paste up camera-ready copy. New knowledge kept piling up on top of the old. (This cultural reference is actually before my time, but I didn’t want my brain turning into Fibber McGee’s closet.) I wanted to stop while I could still point to my work with pride. And the economic calculus of retirement is complex: the longer you work, the greater your Social Security payments and the bigger your nest egg, but the less time you leave yourself to enjoy it. I got my first tech-writing job right out of college and I was still working past my full retirement age. I wanted to reclaim my time.
Looking back at my career, I wrote thousands of documents for dozens of products for a succession of employers. True, the audience was never large, and all but my most recent works for hire are already superseded. Also, I admit that technical writing is to writing as military music is to music. Deathless prose it wasn’t. But it was clear, concise, and correct.
STC has a strong educational mission, and over the years I’ve never understood those English departments that spurned our professional outreach offers to work with their students, as if applied writing was somehow ignoble. But as I told more than a few college classes, my career demonstrated that you could earn a good living as a writer. In the final accounting, it took both our incomes and decades of work to accomplish, but my wife and I bought and paid for two houses, put our three children through college, and drove (mostly) new cars. We ended with no debt and enough of a nest egg that we could afford to stop working altogether. Flex? Facts.
I don’t think I got out just in time. I don’t think the profession is in danger, just in its typical state of flux. In that regard, I can attest that technologically, everything has changed in the last forty-plus years, and yet nothing about what we do has changed. Despite the burgeoning capabilities of AI (about which I’ll write more soon), the need to explain technical products and services to users at a human level remains; if anything, it’s greater than ever. Digital technology in particular has passed Arthur C. Clarke’s point of magic; consumers don’t know how any of it works, they just expect it to work. We pay $1000 for a smartphone but only know how to use ten percent of its features (and I may be overly generous here). To explain these complex products clearly; to help people make full use of what they’ve paid for; to describe how they can get things done; all that remains the job of the technical communicator. I leave it to others to carry on that long tradition.
What will I do now? I have a lot of books on my shelves I bought but haven’t read; a lot of movies I added to my wishlist but haven’t watched; and a lot of adjectives and adverbs I have in mind but haven’t written. I’d like to trade cold facts for warm fictions and personal opinions—ooh, and maybe sometimes in complex sentences!
2020 has been an awfully eventful year. I’ve been fortunate enough to work from home since March, and we’ve essentially been hiding in our home since then. I bought a new car in September and haven’t yet put any gas in the tank. (It’s a Prius Prime, but come on—it still shows the tank as full!) I’m politically active, but in the last few weeks I’ve spent more time sleeping and less time doom-scrolling.
Over the last two years I served back-to-back terms as president of STC New England, and my blogging energy went into announcements on the chapter website. Now I’ve handed over the reins and ascended to the post of immediate past president. At the same time I’ve moved this blog to WordPress. So it seems like a good time to start blogging for myself again.
At this point I’m not running for Society offices and I’m not looking for a new job. I don’t know if my new circumstances will allow me to blog more, but if I do, at least my posts won’t be quite so STC-focused and self-promotional.
The annual STC election, for members of the Board of Directors and the Nominating Committee, is underway. If you’re a member, you should have received a link to the election website (send email to stc@stc.org if you have not). We have until 9 March to vote, and it’s important to do so. The Society needs our active involvement. You can find out more about the election slate here.
For what it’s worth, here’s who I voted for this year.
For Vice President: Ben Woelk
Ben Woelk
Ben is a President’s Award winner, an Associate Fellow, and has been very active in his chapter, the Spectrum regional conference, the Society (particularly the scholarship committee), and the Community Affairs Committee on which we both serve. He has generously shared his time and knowledge with many of us. I agree with him that the primary challenge the Society faces is demographic, not just technological. If he approaches the role with the same vigor with which he has approached the campaign, he will accomplish a lot!
For Secretary: Kirsty Taylor (incumbent)
Kirsty Taylor
Kirsty is running unopposed for re-election, but she is a long-time active member who has been an effective Secretary. To use a cliché, consider that she has put her money where her mouth is by traveling to Board events from Australia. Now that’s commitment. She has earned my continued support.
For Director: Ramesh Aiyyangar and Alisa Bonsignore (incumbent)
Ramesh Aiyyangar
Ramesh is a past president and fifteen-year active member of the India Chapter, an Associate Fellow, a Distinguished Chapter Service Award winner, and, as I can attest, an energetic and effective member of the CAC. He recognizes the importance of moving past retention to a growth strategy for STC by expanding both our reach and our breadth. He would bring a welcome and important perspective to the Board. His work ethic, dedication, and perseverance continue to earn my vote.
Alisa Bonsignore
Alisa, who is running for re-election, recognizes the importance of the CPTC program, which of course appeals to me. But she also recognizes that membership and revenue growth, not budget cutting, are the keys to our future.
For Nominating Committee: Jackie Damrau and MaryKay Grueneberg
Jackie Damrau
Everybody in STC knows Jackie, who has been active at the Society level forever, it seems. This is an ideal attribute for a member of the Nominating Committee, and indeed she has served on the committee before. Jackie has also been a chapter president, regional conference manager, SIG leader, and member of multiple Society-level committees. Unsurprisingly in light of her contributions to the Society, she is a Fellow. I admire her ongoing willingness to serve at the chapter and Society levels.
MaryKay Grueneberg
Like me, MaryKay has been a technical writer since she graduated from college. She has been a chapter president and has served on several Society-level committees. She is an Associate Fellow. I think she has the appropriate breadth of experience and contacts to do a good job on the committee.
The annual STC election for members of the Board of Directors is underway. If you’re a member, you should have received a link to the election website (send email to stc@stc.org if you have not). We have until 10 March to vote, and it’s important! The Society needs our active involvement. You can find out more about the election slate here.
For what it’s worth, here’s who I voted for this year.
For Vice President: Craig Baehr
Craig Baehr
Craig, a 25-year practitioner, an academic, a current Director, and an Associate Fellow, has made important contributions to the certification program and the Body of Knowledge, two of the Society’s most important initiatives. He has been published in both Technical Communication and Intercom. His recognition of the importance of volunteers and mentors resonates with me.
For Treasurer: Tim Esposito
Tim Esposito
Tim, an Associate Fellow, is on the Society’s budget review committee. He has extensive experience as president of the Philadelphia Metro Chapter (PMC) and past treasurer. He has also helped organize regional conferences. I’ve worked with him for the last year as part of the Community Affairs Committee (CAC) and found him energetic, responsive, and committed.
For Director: Ramesh Aiyyangar and Jessie Mallory
Ramesh Aiyyangar
If a board is made up of people with similar backgrounds and experiences, they can find it difficult to consider other viewpoints. Fortunately, this year three excellent and varied candidates can help avoid this problem. Ramesh is a long-time technical communicator, a past president and active member of the India Chapter, an Associate Fellow, and a member of the CAC. He recognizes the importance of a growth strategy for STC and of expanding both our reach and our perspective. Ramesh ran a gallant petition campaign last year, and his dedication and perseverance earned my vote this year.
Jessie Mallory
Jessie has been very active in a relatively short time. She’s already served as president of PMC, and now coordinates social media for the BOK committee. She recognizes the importance of student members and young practitioners, and I agree with her approach of directly asking us how to make the Society better. I think she will bring invigorating youth and energy.
For Nominating Committee: Larry Kunz and Grant Hogarth
Larry Kunz
I’ve known Larry for many years. More to the point, a lot of people know Larry, and he in turn knows a lot of people. This is an ideal attribute for a member of the Nominating Committee. Larry is an active and influential blogger with an excellent grasp of the state of the profession and the Society. He is a Fellow and a President’s Award winner. I admire his energy and dedication at the chapter and Society level.
Grant Hogarth
Between two other worthy candidates, I made my second choice on the basis of geographic diversity. Grant has over 25 years of professional experience, has served the Society as a chapter president and ITPC judge, and is active in other nonprofit organizations.