As part of my investigation into scaffolding tools like Cookiecutter and Yeoman, I skimmed the documentation of alternatives like Slush and Hygen. All of them left me unsatisfied. It got me thinking about what makes this problem so difficult.
What is a scaffolding tool?
I don't know much about the history of scaffolding tools, but it seems like a young category of software. Other terms that pop up when talking about these tools include "boilerplate", "template", and "generator". I think the industry is converging on "scaffolding tool" as the preferred term, and I will use that here.
Scaffolding tools have a small paragraph on Wikipedia that is brief enough to quote in full here:
Complicated software projects often share certain conventions on project structure and requirements. For example, they often have separate folders for source code, binaries and code tests, as well as files containing license agreements, release notes and contact information. To simplify the creation of projects following those conventions, "scaffolding" tools can automatically generate them at the beginning of each project. Such tools include Yeoman and Cargo.
I feel like this definition is incomplete, for reasons which I'll explain later. For now, let's use it as a starting point and try to design a scaffolding tool from first principles.
In the beginning, we just want a tool that can generate all the boilerplate for a new project.
It might produce just the bare necessities for a package of the given language: a package metadata file and perhaps a source file or directory. Examples include `poetry new` for Python, `stack new` for Haskell, and `cargo new` for Rust.
It might be a whole project that we copy and edit. In its heyday, HTML5 Boilerplate was a famous example.
These classes expand the scope to generating boilerplate for many different features of a project:
- Software license
- Test framework
- Code formatter
- Static analyzers
- Build system
- Continuous integration
After a point, we might want to share our tool with other people, but some of them will want to make different choices (including "none of the above") for the features of their project. For some of the features, we might offer an option that is presented when the user creates a project. For the rest, we might just tell users to delete the files they don't want after the tool runs. (That's certainly easier than offering a binary choice.)
Yeoman tries to make it easy to present options, but it still requires work to code up the interaction and conditionally install a file. That work expands if the option affects more files, e.g. conditionally adding a dependency to the package metadata file.
Some users will not want a given feature at first. They want to keep their project simple at the beginning, only including features that they understand and only as soon as their project needs them. (This might be why the service worker and manifest in Create React App became optional, for example.) They might consider adding the feature later once they've learned more about it or once its need has arisen.
If it is a feature whose configuration file they deleted, and they remembered to commit the file before deleting it, then they can restore it from version control; even then, it may no longer be up to date. If it is a feature for which they want to try a different option, there are no deleted files to restore at all.
These users might love to re-run the scaffolding tool for just that feature. Yeoman supports this to a limited extent with its conflict resolution: the user can choose to ignore all the generated outputs except the relevant feature's configuration file.
Up until now, the tools I've discussed have focused on starting a new project and then stepping out of the way. I call these project scaffolding tools. With this latest capability, we've crossed into a new class of scaffolding tools, ones that can be used after the project's creation to add components as they are needed, generating boilerplate incrementally. I call these component scaffolding tools. Examples include the Angular CLI and Hygen.
What would really support incrementality is if each feature had its own separate generator that could be invoked individually. In this world, a project generator is just a composition of different feature generators. To my knowledge, Yeoman is the only scaffolding tool trying to offer composable generators, but it seems to have missed the mark. If the tool and community were successfully driving this way of thinking about generators, then I would expect the ecosystem to look like NPM: a plethora of highly-focused generators with a few gold standards at the top of every category. Instead, the Yeoman ecosystem looks like a collection of monolithic project generators. To be fair, there are some popular composable generators, e.g. for licenses or Travis CI or Jest, but they don't rule the landscape, and I can't even tell if they are being composed into other generators.
I tried to compose the license generator into my own Python project generator, and had a bad time. The license generator does not return which license the user chose, so I cannot record it in my package metadata file. Worse, the documentation fails to explain the interaction between my generator and those it composes. Each generator has multiple phases (e.g. "initializing", "prompting", "writing", and "conflicts"), and the composition function must be called from one of these phases. Does the composed generator run all of its phases at that point? If so, then why have separate phases at all? If not, and it is interleaving its phases somehow, then that is bound to lead to confusing interactions.
I would rather use a familiar mental model for composition: functions. With my ideal scaffolding tool, each generator behaves like a function:
- Once entered, it runs uninterrupted until it exits.
- It can have an ordered list of parameters. Those parameters have names, types, and optionally default values. Values not passed by the calling generator are filled in through interactive prompts, in order. Default values can be constructed by asynchronous functions, and they can use the values of earlier parameters.
- When it exits, it may return a meaningful value to its calling generator.
- It can have preconditions that must be satisfied by the calling generator, perhaps by calling other generators. The system will helpfully diagnose unfulfilled preconditions and halt execution.
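The properties above can be sketched in Python. All names here are hypothetical, defaults are synchronous for brevity, and the "prompt" is just a callable, but the shape of the model comes through: parameters are filled in order, defaults can use earlier values, preconditions are checked before the body runs, and the return value flows back to the caller:

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Param:
    name: str
    type: type
    default: Optional[Callable[[dict], Any]] = None  # may use earlier params

@dataclass
class Generator:
    params: list          # ordered list of Param
    precondition: Callable[[dict], bool]
    body: Callable[[dict], Any]  # returns a value to the caller

def run(gen: Generator, passed: dict, prompt: Callable[[str], str] = input) -> Any:
    """Invoke a generator like a function call."""
    values: dict = {}
    for p in gen.params:  # fill parameters in order
        if p.name in passed:
            values[p.name] = passed[p.name]
        elif p.default is not None:
            values[p.name] = p.default(values)  # default sees earlier params
        else:
            values[p.name] = p.type(prompt(f"{p.name}? "))
    if not gen.precondition(values):
        raise RuntimeError("precondition not satisfied")
    return gen.body(values)  # runs uninterrupted, then returns

# A license generator that returns the chosen license...
license_gen = Generator(
    params=[Param("license", str, default=lambda v: "MIT")],
    precondition=lambda v: True,
    body=lambda v: v["license"],
)

# ...so a calling generator can compose it like a function call
# and actually use the result:
project_gen = Generator(
    params=[Param("name", str)],
    precondition=lambda v: True,
    body=lambda v: {"name": v["name"], "license": run(license_gen, {})},
)
```

In this model the awkward questions about phases disappear: a composed generator simply runs to completion at its call site, like any function.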
Moreover, a scaffolding tool framework in a given language will need that language's ecosystem to provide comment-preserving parsers and printers. Without them, generators will see limited adoption, and without adoption, few authors will want to contribute generators.
I think this problem is largely cultural. I have never seen a textbook talk about preserving comments when teaching parsing techniques. Few developers have comment preservation in mind when embarking on a new parser. Preserving comments is generally an afterthought, an "advanced" feature, and underprioritized, but it is absolutely necessary if you want to write source transformation tools like incremental generators. Consider this a call to action for the parsing community.
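Python's own standard library illustrates the gap: the `ast` module can parse and unparse code, but comments are not part of the AST, so they never survive the round trip:

```python
import ast

source = """\
# choose a license for the package
license = "MIT"  # TODO: make this configurable
"""

tree = ast.parse(source)
printed = ast.unparse(tree)  # both comments have vanished
```

A generator that rewrote existing files through such a parser would silently strip its users' comments. Third-party projects like LibCST exist precisely to fill this hole for Python, but most language ecosystems have nothing comparable.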