It does seem like deleting the git branch won't reduce the repo size. It's equivalent to deleting a pointer to a commit -- all the commits will still be there.
If we want to make the old approach work on GitHub/GitLab, we probably want to throw out the repo periodically and then put new Tor Browser files into a fresh repo.
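For illustration, a minimal sketch of that point, assuming a local clone and a hypothetical branch name: the object store size reported by git count-objects -v does not change when a branch ref is deleted, and the space is only reclaimed once the commits become unreachable and are pruned.

{{{#!python
# Illustration only: deleting a branch removes a ref, not the objects it pointed to.
# The branch name below is hypothetical; run this inside a clone of the repo.
import subprocess

def object_store_kib(repo="."):
    # "git count-objects -v" reports loose and packed object sizes in KiB.
    out = subprocess.run(["git", "count-objects", "-v"], cwd=repo,
                         capture_output=True, text=True, check=True).stdout
    return sum(int(line.split()[1]) for line in out.splitlines()
               if line.startswith(("size:", "size-pack:")))

before = object_store_kib()
subprocess.run(["git", "branch", "-D", "torbrowser-old"], check=True)  # hypothetical branch
print(before, object_store_kib())  # the two numbers match: the commits are still stored
}}}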
I just found this bundles2github.py script while looking at what we need to do to update the links...
So it looks like we already had a script to upload Tor Browser as releases. However, it looks old (last edited in 2016) and does not seem to match the current functionality of gettor. For example, the link creation seems outdated:
core.create_links_file('GitHub', readable_fp)
It's a great idea to use releases and the script looks good to me! I left a few comments in the commits.
Are we monitoring the return code of the script? At some point, something will break, and we should notice right away if the script fails to upload new Tor Browser releases.
Trac: Status: needs_review to needs_information; Reviewer: N/A to phw
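For illustration, a minimal sketch of what monitoring the exit status could look like once the script runs unattended (e.g. from cron); the script name, invocation, and mail addresses below are placeholders, not anything gettor currently has.

{{{#!python
# Sketch of exit-status monitoring for an unattended run; all names are placeholders.
import subprocess
import smtplib
from email.message import EmailMessage

result = subprocess.run(["python3", "upload_tor_browser_releases.py"])  # placeholder script
if result.returncode != 0:
    msg = EmailMessage()
    msg["Subject"] = f"gettor release upload failed (exit {result.returncode})"
    msg["From"] = "gettor@example.org"   # placeholder
    msg["To"] = "admins@example.org"     # placeholder
    msg.set_content("The upload script exited with a non-zero status; check its log.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
}}}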
> It's a great idea to use releases and the script looks good to me! I left a few comments in the commits.
Thanks!
> Are we monitoring the return code of the script? At some point, something will break, and we should notice right away if the script fails to upload new Tor Browser releases.
Right now this script is run manually, but this is something we should definitely do if/when it is automated in the future. I'm not quite sure how to handle this, since I don't want an error to stop the update script and force it to start over from scratch the next time it runs; that wastes a lot of bandwidth and time. My current thought is that we can write the missing releases to a log file and manually upload them later, but that's not a very elegant solution.
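A rough sketch of that log-file idea, with hypothetical names throughout (upload_one stands in for whatever function pushes a single bundle): record each failure instead of aborting, so the rest of the run still finishes and the misses can be uploaded by hand afterwards.

{{{#!python
# Sketch of the "log the missing releases" idea; helper names are hypothetical.
import datetime

def record_failed_upload(version, platform, error, logfile="missing_releases.log"):
    """Append one line per failed upload so it can be redone manually later."""
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(logfile, "a") as f:
        f.write(f"{stamp} {version} {platform}: {error}\n")

def upload_all(bundles, upload_one):
    """Try every bundle; log failures instead of aborting the whole run."""
    failed = []
    for version, platform in bundles:
        try:
            upload_one(version, platform)
        except Exception as err:
            record_failed_upload(version, platform, err)
            failed.append((version, platform))
    return failed
}}}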
> Right now this script is run manually, but this is something we should definitely do if/when it is automated in the future. I'm not quite sure how to handle this, since I don't want an error to stop the update script and force it to start over from scratch the next time it runs; that wastes a lot of bandwidth and time. My current thought is that we can write the missing releases to a log file and manually upload them later, but that's not a very elegant solution.
How about we make the script more robust by making it re-attempt failed downloads and uploads two or three times? We may still have to fix issues manually on occasion but I would expect this to eliminate the majority of transient issues.
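A minimal sketch of such a retry wrapper; the attempt count and back-off delay are arbitrary, and the upload helper in the usage comment is hypothetical.

{{{#!python
# Sketch of a small retry wrapper for flaky downloads and uploads.
import time

def with_retries(action, attempts=3, delay=30):
    """Call action(); on failure, wait and try again, up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception:
            if attempt == attempts:
                raise          # persistent failure: let the caller log or handle it
            time.sleep(delay)  # brief pause before the next attempt

# e.g. with_retries(lambda: upload_one(version, platform))  # hypothetical helper
}}}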
> How about we make the script more robust by making it re-attempt failed downloads and uploads two or three times? We may still have to fix issues manually on occasion but I would expect this to eliminate the majority of transient issues.
This commit also fixes a problem I had where GitHub returns a server error when you try to delete a large release. I changed the script so that it removes each release asset one by one first and then deletes the release itself, which seems to have solved the problem.
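That order of operations amounts to roughly the following. The sketch uses PyGithub purely for illustration (the actual script may use a different GitHub client), and the repository and tag names are placeholders.

{{{#!python
# Sketch only: delete each asset first, then the now-empty release.
import os
from github import Github

repo = Github(os.environ["GITHUB_TOKEN"]).get_repo("example-org/tor-browser-releases")  # placeholder
release = repo.get_release("torbrowser-old-release")  # placeholder tag

for asset in release.get_assets():
    asset.delete_asset()      # remove the large binaries one by one
release.delete_release()      # the release itself now deletes without a server error
}}}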
Trac: Status: merge_ready to needs_review; Actualpoints: .5 to 1
Looks good to me! I only had a minor nitpick, which I left in the code. Putting the authentication token into an environment variable seems like a good solution.
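For completeness, the environment-variable approach amounts to something like this; the variable name is just an example.

{{{#!python
# Read the GitHub token from the environment instead of hard-coding it.
import os
import sys

token = os.environ.get("GITHUB_TOKEN")  # example variable name
if not token:
    sys.exit("GITHUB_TOKEN is not set; refusing to run without credentials.")
}}}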