# Manually Beetmove Files

This is fairly rough documentation for a fairly rare request: beetmoving arbitrary artifacts into archive.m.o.

## Steps

  1. Identify what needs to be beetmoved, and where.

We’re going to be pushing files to archive.m.o, which lends them some legitimacy, so let’s make sure this is a valid request.

Generally we want to upload to an existing directory structure, e.g. https://archive.mozilla.org/pub/firefox/nightly/ or https://archive.mozilla.org/pub/mobile/toolchains/ or the like. If we need a new directory structure, we should coordinate with SRE Services to make sure we have the right permissions and the right cleanup rules set.

  2. Fork/clone the repo.

  3. Determine which bucket and AWS creds are needed.

For the bucket, look at the appropriate production beetmoverscript configs. For instance, as of this writing, Firefox files go in the net-mozaws-prod-delivery-firefox bucket.

Then go to the appropriate worker in k8s-sops (e.g. firefoxci-gecko-3), and grab the appropriate id and key. You’ll need these to have write access to the bucket.

Note: you probably want to use the appropriate staging bucket and staging id+key for testing first, so also grab those. These will be in use by the non-prod dep and/or dev pools.

Copy the config_example.json file to config.json and edit it. In the example, maven-staging and maven-production are the script’s nicknames. The buckets dict contains the real bucket name alongside a second nickname that is hardcoded in the script, and the credentials dict holds the AWS creds.
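For reference, here is a minimal sketch of what a filled-in config.json could look like, written out with the Python standard library so the key names are explicit. The nicknames, bucket names, and credential key names are illustrative assumptions, not real values; check the actual layout against config_example.json in the repo.

```python
# Hypothetical sketch of a config.json matching the structure described above.
# Nicknames, bucket names, and credential key names are assumptions, not real values.
import json

config = {
    # Top-level keys are the script's nicknames for each target.
    "maven-staging": {
        # "buckets" maps a second nickname (hardcoded in the script) to the real bucket name.
        "buckets": {"maven": "net-mozaws-stage-delivery-maven"},  # illustrative bucket name
        # "credentials" holds the AWS id/key with write access to that bucket.
        "credentials": {"id": "<AWS_ACCESS_KEY_ID>", "key": "<AWS_SECRET_ACCESS_KEY>"},
    },
    "maven-production": {
        "buckets": {"maven": "net-mozaws-prod-delivery-maven"},  # illustrative bucket name
        "credentials": {"id": "<AWS_ACCESS_KEY_ID>", "key": "<AWS_SECRET_ACCESS_KEY>"},
    },
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```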

  4. Hack the script, e.g. script.py and util.py. Aki made these changes to beetmove an apidoc file rather than some glean files. Adding a --noop or --dry-run flag is recommended so you can test as much as possible without moving any files; a rough sketch follows the note below.

Note: the script assumes you have the files downloaded locally. If you need to dynamically download a file, or if you need to download e.g. 6 months of nightlies a la Bug 1727585, you may want to add automation to do that (and you probably want to verify the downloaded files’ checksums via the chain-of-trust.json artifact for robustness and correctness).
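To make the --dry-run idea from step 4 and the checksum check from the note above concrete, here is a minimal, self-contained sketch. The CLI arguments, function names, and the assumed chain-of-trust.json layout (an "artifacts" dict mapping artifact paths to their sha256 digests) are assumptions for illustration, not the real script.py/util.py internals.

```python
# Hypothetical sketch: verify a downloaded artifact against chain-of-trust.json,
# and guard the actual upload behind a --dry-run/--noop flag.
import argparse
import hashlib
import json


def sha256_of(path):
    """Return the hex sha256 digest of a local file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_against_cot(local_path, artifact_name, cot_path):
    """Check a downloaded file's sha256 against its chain-of-trust.json entry."""
    with open(cot_path) as f:
        cot = json.load(f)
    expected = cot["artifacts"][artifact_name]["sha256"]  # assumed layout
    actual = sha256_of(local_path)
    if actual != expected:
        raise ValueError(f"checksum mismatch for {artifact_name}: {actual} != {expected}")


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--dry-run", "--noop", action="store_true", dest="dry_run",
                        help="log what would be uploaded without touching the bucket")
    parser.add_argument("local_path", help="locally downloaded artifact")
    parser.add_argument("artifact_name", help="artifact path as listed in chain-of-trust.json")
    parser.add_argument("cot_path", help="path to the downloaded chain-of-trust.json")
    args = parser.parse_args()

    verify_against_cot(args.local_path, args.artifact_name, args.cot_path)

    if args.dry_run:
        print(f"[dry-run] would upload {args.local_path}")
        return
    # upload_to_bucket(args.local_path)  # the real upload call lives in the hacked script


if __name__ == "__main__":
    main()
```

Keeping the bucket write as the only thing behind the dry-run guard means a successful --dry-run pass exercises as much of the code as possible before any files actually move.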

  5. Give it a real try, using the staging bucket. If that works, and everyone’s sure the files and paths are correct, push to the production bucket and close the bug.