Conveyor is a build system. You give it a configuration file (-f=conveyor.conf) that declaratively defines what you want, and then run tasks. Tasks create files or file trees corresponding to various stages in the production of the packages, and these trees are stored in a disk cache (see below). Tasks run in parallel, and at the end of the build the output directory (--output-dir=output) will contain a copy of whatever the requested task produced.
Output overwrite modes
By default Conveyor will replace the contents of the output directory if that directory was created by Conveyor itself and the contents haven't been changed. If the output directory already exists but either wasn't created by Conveyor itself, or you changed something inside it, then the tool won't proceed.
You can use --overwrite-mode=HARD_REPLACE to replace any files the build produced but leave other files alone. Be aware that this may result in old files hanging around in the output directory, and any read-only files will cause an error.
--overwrite-mode=STOP can be useful in scripts: it will prevent the tool from proceeding if the output directory already exists, even if it was created by Conveyor.
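As a sketch of how these modes might be used (the make site invocation is assumed from the examples elsewhere in these docs):

```
# Refuse to proceed if the output directory exists at all,
# even if Conveyor created it - useful for fail-fast CI scripts.
conveyor --overwrite-mode=STOP make site

# Replace files the build produced, leaving unrelated files alone.
conveyor --overwrite-mode=HARD_REPLACE make site
```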
The first time you use Conveyor you will need to run:
optionally giving this command a --passphrase. This will create a "root key" which is used to derive all the other keys you'll need. It's written to your per-user defaults file, which can be found here:
Config placed in those paths will be merged into every build file. There's nothing special about the signing related config options - anything can be put here.
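For instance, a per-user defaults file is ordinary config that merges into every project, so it could hold non-signing keys too (key and value here are purely illustrative):

```
// Hypothetical per-user defaults: merged into every build's config.
app.vendor = "Example Corp"
```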
Conveyor has a simple project generation command that creates self-contained GUI projects complete with source code, build system and Conveyor configuration:
To learn more see the tutorial.
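The generator's exact invocation isn't reproduced here, but it follows the usual conveyor subcommand shape. A purely illustrative sketch (subcommand and arguments are assumptions, not authoritative - check conveyor --help):

```
# Illustrative only: template name and reverse-DNS app ID are placeholders.
conveyor generate <template> com.example.my-app
```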
Build a download site for all available platforms in a directory called
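The site task name also appears later in these docs (see task-dependencies), so this presumably looks like:

```
# Build the download/update site for every configured target platform.
conveyor make site
```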
Adjust a configuration key for one build only:
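A sketch using the -K override flag documented below (the key and value chosen here are illustrative):

```
# Override a single config key for this build only,
# without editing conveyor.conf.
conveyor -Kapp.version=2.1 make site
```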
Show all invokable task names, using a different config file to the default:
Tasks labelled as "ambiguous" apply to more than one machine. You can run them by temporarily narrowing the machines your config supports with the app.machines key, e.g. by passing -Kapp.machines=mac.amd64 on the command line. The machines you can target are named using simple hierarchical identifiers that look like linux.aarch64.glibc. Learn more.
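Putting that together (the task name here is an assumption for illustration):

```
# Narrow the build to a single target machine so an otherwise
# ambiguous task becomes runnable.
conveyor -Kapp.machines=mac.amd64 make mac-app
```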
Render the config to JSON:
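Presumably along these lines (subcommand name assumed - check conveyor --help):

```
# Print the fully resolved configuration as JSON.
conveyor json
```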
Create a Mac .app directory for Apple Silicon, an unnotarized zip of it, and a notarized zip for Intel CPUs:
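A sketch of the three invocations (the task names are assumptions, not authoritative):

```
# Task names illustrative only:
conveyor -Kapp.machines=mac.aarch64 make mac-app            # .app directory, Apple Silicon
conveyor -Kapp.machines=mac.aarch64 make mac-zip            # unnotarized zip of it
conveyor -Kapp.machines=mac.amd64 make mac-notarized-zip    # notarized zip, Intel
```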
Create a Windows app as a directory tree, a ZIP and an MSIX package:
This doesn't need you to set app.machines because currently only Intel/AMD64 targets are supported for Windows.
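A sketch of the three Windows invocations (task names are assumptions, not authoritative):

```
# Task names illustrative only:
conveyor make windows-app    # directory tree
conveyor make windows-zip    # ZIP
conveyor make windows-msix   # MSIX package
```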
Create a Linux JVM app as a directory tree, tarball and a Debian package:
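A sketch of the three Linux invocations (task names are assumptions, not authoritative):

```
# Task names illustrative only:
conveyor make linux-app        # directory tree
conveyor make linux-tarball    # tarball
conveyor make debian-package   # Debian package
```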
The --parallelism flag allows you to control how many tasks run simultaneously. It defaults to four, which works well enough for us. Be aware that setting this too high may not yield performance improvements and may use too much memory. Experiment a bit and see what works best for you.
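For example (task name assumed from the examples above):

```
# Raise task parallelism from the default of four.
conveyor --parallelism=8 make site
```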
If using a VM or container you should allocate at least 4GB of RAM. With less Conveyor may stall or trigger the kernel out-of-memory killer.
Viewing task dependencies
The task-dependencies command takes a task's command-line name and prints all of its dependencies. Try running conveyor task-dependencies site to see how the site is made up. Dimmed-out tasks are hidden either because they already appeared elsewhere in the tree (it's really a graph), or because the task is disabled for some reason, which will be explained next to the task's entry.
Conveyor makes heavy use of caching. Tasks work in individual cached directories and the results are copied to the output directory at the end. The disk cache can be found here:
You can change this location using the --cache-dir flag, and the maximum number of gigabytes it's allowed to consume using --cache-limit. It's safe to delete this directory, or any of its individual sub-directories, whenever you like. Entries are stored under a hashed key, and an English description in Markdown of what's in each entry can be found in the key file within it, so it's easy to explore if you're curious. You can also find what cache keys are being used by viewing the logs.
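Combining the two flags might look like this (the path and limit are illustrative values):

```
# Relocate the cache and cap it at 20 GB.
conveyor --cache-dir=/big-disk/conveyor-cache --cache-limit=20 make site
```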
You can forcibly re-run tasks. You shouldn't need this unless you encounter a caching bug, but here it is anyway:
There's no way to clear the cache from the CLI. You can just delete the cache directory yourself if you want to free up the space it uses.
If anything goes wrong, or you are just curious to see what was done, use the --show-log flag. On Windows you get Notepad. On UNIX it will display the last execution's log file in a pager, highlighted and colored. By default lines aren't wrapped, so you can scroll left and right with the arrows. If you'd like to enable wrapping, perhaps to copy some long path or URL, type -S (that is, - followed by Shift-S). As always, you can press q to quit.
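A sketch of the flag in use (whether it needs to accompany another command is not stated here, so this standalone form is an assumption):

```
# Re-display the last execution's log in a pager
# (Notepad on Windows).
conveyor --show-log
```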
Logs are kept for more than just the last execution. At the top of each log file is the path where logs are kept. You can view log files by process ID in that directory.