I've covered how to improve the written component of your documentation with automated spell-checking and suggestions for better writing. In this post, we'll explore the code component of good documentation.
The techniques for testing code fall into two distinct camps, depending on what you're testing: API documentation or the code snippets embedded in tutorials. We'll look at one example of each in detail and then link to alternatives with similar functionality.
Whether you generate your API documentation from the source code or write it externally, an API's well-defined structure of endpoints, parameters, and responses makes it easier to test. Tooling options also frequently offer auto-generation of examples as well as tests, automating even more of the mundane tasks for you.
I have had the most experience with Dredd. It works with API documentation written in the Swagger and API Blueprint formats, meaning the documentation lives externally to your API source code.
I'll use the Marvel API as an example for this section. Instead of attempting to create an entire API definition from scratch, I found an existing API Blueprint file floating around the internet and built a demo project around it. You can find the project on GitHub.
I'll only cover the essentials of what's inside the project; otherwise, this article would turn into a fully-fledged API Blueprint and Dredd tutorial.
The API definitions in the file I found are out of date, so running the tests will fail. I decided to leave this as is for a couple of reasons. First, you'll still be able to appreciate the setup process for Dredd and its potential. Second, it's unlikely that you want to test the Marvel API anyway. Unless you work for Marvel, in which case, please release an updated file 😝.
As Dredd integrates with Node and npm, you can use the default setup for a new Codeship project, selecting Node.js as the project's language.
The dependencies defined in package.json are:
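I won't reproduce the exact file, but the shape is roughly this; I'm assuming `dredd` itself plus the `md5` package for Marvel's authentication hash, so the demo project's actual list may differ:

```json
{
  "name": "marvel-dredd-demo",
  "scripts": {
    "test": "dredd"
  },
  "dependencies": {
    "dredd": "^1.0.0",
    "md5": "^2.0.0"
  }
}
```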
Most of these are needed for working with the Marvel API, not for Dredd itself.
You can find more about configuring Dredd in the project's documentation.
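For reference, a minimal dredd.yml for a setup like this might look as follows. `blueprint`, `endpoint`, and `hookfiles` are standard Dredd options; the values here are my assumptions for this project:

```yaml
# Which API description to test, where the live API lives,
# and which hook files to load before running.
blueprint: marvel.apib
endpoint: 'https://gateway.marvel.com'
hookfiles: './hooks.js'
```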
You can find the API definitions in the marvel.apib file: a series of endpoints, the parameters passed to them, and the expected response for each. The API Blueprint format is standard markdown with extras added for API modeling, which makes it well-suited to wrapping good explanatory text around the API definitions.
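To give you a flavor of the format, a single endpoint definition looks something like this (a hand-written illustration rather than an excerpt from marvel.apib):

```markdown
## Characters [/v1/public/characters{?name}]

### List Characters [GET]

Fetches lists of comic characters, optionally filtered by name.

+ Parameters
    + name: `Spider-Man` (string, optional) - Return only characters matching this full name.

+ Response 200 (application/json)

        {
            "code": 200,
            "status": "Ok"
        }
```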
All calls to the Marvel API need authentication parameters appended, constructed in a slightly complicated way. You could add these as parameters to every endpoint definition, but there's a much better way.
Dredd's 'hooks' feature allows you to interrupt the processing flow and make amendments. The project's hooks.js file intercepts every API call before it's sent to the API and adds the authentication parameters that the Marvel API expects.
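The file isn't reproduced verbatim here, but a minimal sketch looks something like the following. Marvel expects `ts`, `apikey`, and `hash` query parameters, where the hash is an MD5 digest of the timestamp concatenated with your private and public keys; the environment variable names below are my assumption:

```javascript
var hooks = require('hooks');
var md5 = require('md5');

// Assumes your Marvel keys are set as environment variables.
var publicKey = process.env.MARVEL_PUBLIC_KEY;
var privateKey = process.env.MARVEL_PRIVATE_KEY;

hooks.beforeEach(function (transaction) {
  // Marvel requires ts, apikey, and hash on every request,
  // where hash = md5(ts + privateKey + publicKey).
  var ts = Date.now().toString();
  var hash = md5(ts + privateKey + publicKey);
  var separator = transaction.fullPath.indexOf('?') === -1 ? '?' : '&';
  transaction.fullPath += separator +
    'ts=' + ts + '&apikey=' + publicKey + '&hash=' + hash;
});
```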
Now, whenever you push to the repository, Codeship will run the tests you've defined. As I mentioned, the build will currently fail, as the expected response doesn't match the actual response. If you want to experiment and learn further, then you can try updating the API Blueprint file to match the new response structure. Or spend that time making Dredd work with your own API.
If you want to use a SaaS product to design, host, and test your APIs, a quick search will show that there are dozens of options. Here's a quick roundup of other open-source and free options, though not all of them work well within a CI context.
Aside from API documentation, I hope you also have a series of tutorials that help users assemble the API components into something useful. These will frequently contain snippets of code that readers follow. How many times have you tried one that didn't work?
Automatically testing these snippets is a fantastic idea, but it's unfortunately hard to do. That's down to the nature of a code snippet: it's a fragment extracted from a wider application, so how can you test it without that context?
It can work with certain languages, such as SQL, where snippets are discrete commands that a CI tool can run and test, or with small, simple snippets that work in isolation.
I have had a couple of ideas suggested to me as a general solution to this problem, but I haven't implemented any yet. One is to extract each code snippet from a document, generate separate files for the code, and test those. Another is to make sure that each snippet can work in isolation from the rest of the application and is thus testable, but this leads to less understandable documentation.
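For the curious, the first idea doesn't need much code. Here's a minimal sketch in Python that pulls fenced code blocks out of a markdown file and writes each to its own file for a CI job to pick up; the file names and layout are hypothetical:

```python
import re
import sys
from pathlib import Path

# Matches fenced code blocks, capturing the optional language tag
# and the block body.
FENCE = re.compile(r"```(\w+)?\n(.*?)```", re.DOTALL)

def extract(markdown_path, out_dir="snippets"):
    text = Path(markdown_path).read_text()
    Path(out_dir).mkdir(exist_ok=True)
    for i, (lang, body) in enumerate(FENCE.findall(text)):
        # Fall back to .txt when a block has no language tag.
        ext = lang or "txt"
        Path(out_dir, f"snippet_{i}.{ext}").write_text(body)

if __name__ == "__main__":
    extract(sys.argv[1])
```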
After that lengthy disclaimer (and food for thought), there is one concrete tool out there for Python developers: Sphinx. Sphinx is a mature documentation project primarily aimed at Python developers but usable (to varying degrees of functionality) by other developers. Generally, the Python community has the best documentation tools; it's a pity I don't work with more Python-based projects.
The Sphinx project has great installation and Getting Started guides, so follow those first.
Follow the quickstart option. The installer will ask if you want to include various extensions; when it offers the `doctest` extension (which automatically tests code snippets embedded in the documentation), select `y`.
Continuing the Getting Started guide will give you more information on reStructuredText and Sphinx. For this article, I will create an example to test. You can find the final project on GitHub.
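With the `doctest` extension enabled, Sphinx runs any `>>>` lines inside a `doctest` directive and compares the output against what's written below them. A block as simple as the following is enough to exercise the pipeline; it's my illustration rather than the exact contents of the repository:

```rst
Verify that Python arithmetic behaves as documented:

.. doctest::

   >>> from math import sqrt
   >>> sqrt(16)
   4.0
```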
To get this working with Codeship, create a new project and set it as a Python project. Leave the commands as default; the GitHub repository contains the requirements.txt file needed.
Remove what's in the test pipeline section and add the `make doctest` command. Save the project and trigger a build.
And voilà, as expected, you can be assured that your code examples are usable by readers.
And that's it! In the next and final installment of this series of posts, I will look at automating screenshots. If you have comments and suggestions, please let me know below. I'd especially like to know if you have solutions to the tooling gaps I identified throughout the article.