
Topic: What is in scope for criticizing technical writing?


I'm not well versed in technical writing. I'm probably not going to be in a position to give anyone feedback for some time, but I would like to know how to approach it when I get there. I would also like to know how to weigh the feedback I receive as I begin to develop my own technical writing. As a reviewer, what are you looking for? Fact checking? Brevity? Appropriateness? Accessibility of language? Directness of communication?





3 Comments



The most important criteria for technical writing concern the technical content: correctness and completeness, in that order of importance (call this #1). After the content criteria comes, in importance, the criterion of clarity (#2).

#1 is about the message to be delivered: the information the writer tries to get across. #2 is about the message delivery: how well the writer gets the message across.

If the content is correct and complete, and if it is communicated clearly, then 99.9% of the job is done. Considerations of language style or correctness that do not detract from message clarity are not so important in most technical contexts. Readers are there for the technical content.

If, as a reviewer, you are versed in the particular technical area, then you can speak to #1. With respect to completeness, you need to have an idea of what the scope of the writing is: what "complete" means. This might be clear from the context of the writing, or it might be made clear by the writing itself.

If, as a reviewer, you are not very familiar with the given technical area, you can still speak to #2 to some extent.

For both #1 and #2 you need to have an idea whom the writing is for. To judge whether the intended message is appropriate for its audience you need to understand the content and know something about what the audience knows and is interested in.
Judging message clarity requires taking the point of view of an intended reader.

As a reviewer you can pretty much always help by providing some feedback about completeness and clarity. If you can't judge the completeness you can at least ask what the scope is, if that isn't clear. Asking the writer things about the intended message and pointing out what is unclear to you is bound to help.




Another answer here does a good job covering a single work like a paper, and what it says applies to larger works too. Correctness and clarity are the most important factors in any technical work. For larger works, such as a new section in a large documentation set, I look for some additional things:

Consistency of style: Does this part fit seamlessly into the larger body of which it's a part? Ideally there is a house style guide, so what we're checking here is adherence to it. But, beyond that, if there are themes running through the larger work, such as an example domain that all the examples use, does this piece fit into that?
Cross-references: Very few pieces of technical documentation exist in a vacuum. Does this new part link to (or reference) other parts where needed? On my team, for example, we have reference pages for all the commands, functions, system tables, and so on that our product supports; when reviewing task/guide documentation, I expect to see links to the relevant references pages.

When providing feedback on a larger work in its entirety (which is a huge task), I pay particular attention to:

Table of contents: Do the order and divisions into subtopics make sense? Is the structure reasonably balanced? For example, I don't want to see 30 top-level topics alongside two topics that each have children nested six levels deep. You shouldn't impose arbitrary limits (I said reasonably balanced), but if the organization seems really out of whack, that can signal problems with the content itself, too.
Index (if present): I'll scan to see if similar things are in fact grouped together; I don't want to see "get" (15 subentries), "getting" (12 more), and "retrieving" (3 more). I look for a good mix of nouns and verbs. I'll pick a few entries at random, follow them, and see if where I landed makes sense.




I am a professor, and I peer-review scientific articles, one or two per year; in fact, I did one two weeks ago.
The first things I look for are correctness and understandability, particularly in any math or proofs included in the paper. On three different papers I have found fault in the math, including the one I just reviewed. It may be correctable, but as written it used variables the authors never defined, and I can't accept a paper that doesn't make SENSE.
Not all reviewers do, but I print out papers and review them ON paper, with a pencil, then type up a review document. I do that because it can take me literally hours to unravel some mathematical arguments, and I just can't do it without drawing, circling, underlining, and writing notes to remind me what a variable means, right on the equations themselves. Sometimes I'll put Post-its on them for extra room. That's how I roll!
But since I can put checks or X's or circles on words or in the margins, I also note and correct English errors, missing punctuation, typos, wrong word usage, and so on, which the authors can change or not. I'm just obsessive-compulsive in that way; I can't read past them.
I won't reject a paper for those things (editors might), but I will reject a paper if the exposition does not make sense to me, if the formulas don't make sense, or if the logic is flawed. I see bad reasoning quite often, and bad math in perhaps 20% of papers. I also see graphs that I can't make any sense of, and I will reject a paper for that.
(Actually, as a reviewer, I only recommend rejection, acceptance, or acceptance-with-caveats, the caveats being things that can be corrected and MUST be corrected for my approval. Journal editors make all final decisions, and can reject my complaints, but they never do if I find a flaw in the scientific argument.)
As a reviewer, what am I looking for?
Fact checking? Yes.
Brevity? No, but no wandering off topic, no telling me about the dream that inspired this work, no telling me how you felt upon making your discovery. Just the facts. In the "Conclusions" section you can describe how your invention, discovery, or technique works to improve something or why it is useful. In a "Future Work" section you can describe implications or plans to use it (things not yet accomplished, obviously, or they would not be "Future Work").
Appropriateness? Yes. A paper is not a forum for personal crap. We don't want to know. It is not a place to include little fables, or stories, or personal footnotes, or apologies because you were sick, or inspirational quotes you like. Stick to the science; that's IT.
Accessibility of language? To an extent: if I can't understand it, I won't approve it. Authors come from everywhere, and English is quite often something they had to learn as an adult, so I don't expect papers to read fluently. I try to help with the worst transgressions, but papers with good ideas and good proofs should never be rejected over a poor grasp of English, unless the bad English prevents me from understanding that there ARE good ideas and good proofs in there.
Directness of communication? Yes. A scientific paper is not a place to tease, or wander, or ask rhetorical questions to try and make the reader figure out a puzzle. It is not a novel. It is not a story of discovery.
Here is the problem we were trying to address. Here is the previous work on that problem. Here is what we did that worked. Here is our proof of why that worked. Here is why this is better than earlier work, or better in certain conditions, or whatever.
Like that.


