


We found that most users treated the archive as a trash can for issues they wanted to delete. At the same time, auto-archiving moved older completed issues into the archive, which meant the archive served two disparate purposes. We wanted to make the concepts around archiving and deleting issues more straightforward: we think the archive is something Linear should manage for you, while deleting issues is your own choice.

As part of this change, we removed the option to manually archive issues. Deleted issues will be available in the "Recently deleted" view, which you can find through the command menu, and are permanently deleted after 14 days.
Last year we announced auto-closing and auto-archiving of issues. Now we are extending the auto-archive feature to projects and cycles. One of the core principles behind the Linear Method is to "keep a manageable backlog." This applies to cycles and projects as much as to issues, and adding sensible automation to help you achieve it lets you focus on the things that really matter. If a project or cycle was completed more than X months ago (the time period is configurable) and all the issues inside it have been archived, the project or cycle will be archived as well. The feature is enabled automatically for all teams that already have auto-archiving for issues set up. If you want to enable it for your team or change the auto-archiving settings, navigate to Settings > Team > Workflow.

Sometimes issues are created by mistake, and it makes no sense to keep them in the system. You can now delete them by pressing ⌘ ⌫ (or ⌦ for Windows and Linux users).

To manually delete closed events from the database and add them to an archive file, you can use. Scroll to Operations Management - Event Auto Archiving Settings.

#Auto archiver windows#

There is just one necessary command line flag, -sheet name, which is the name of the Google Sheet to check for URLs:

pipenv run python auto_archive.py -sheet archiver-test

This sheet must have been shared with the Google Service account used by gspread. This sheet must also have specific columns in the first row:

- Media URL (required): the location of the media to be archived. This is the only column that should be supplied with data initially.
- Archive status (required): the status of the auto archiver script. Any row with text in this column will be skipped automatically.
- Archive location (required): the location of the archived version. For files that were not able to be auto archived, this can be manually updated.
- Archive date: the date that the auto archiver script ran for this file.
- Upload timestamp: the timestamp extracted from the video. (For YouTube, this unfortunately does not currently include the time.)
- Upload title: the "title" of the video from the original source.
- Thumbnail: an image thumbnail of the video (resize the row height to make this more visible).
- Thumbnail index: a link to a page that shows many thumbnails for the video, useful for quickly seeing video content.

For example, for use with this spreadsheet:

When the auto archiver starts running, it updates the "Archive status" column. The links are downloaded and archived, and the spreadsheet is updated to the following:

Live streaming content is recorded in a separate thread. Note that the first row is skipped, as it is assumed to be a header row. Rows with an empty URL column, or a non-empty archive column, are also skipped. All sheets in the document will be checked.

The auto-archiver can be run automatically via cron. An example crontab entry that runs the archiver every minute is as follows:

* * * * * python auto_archive.py -sheet archiver-test

With this configuration, the archiver should archive and store all media added to the Google Sheet every 60 seconds. Of course, additional logging information, etc.

To make it easier to set up new auto-archiver sheets, the auto-auto-archiver will look at a particular sheet and run the auto-archiver on every sheet name in column A, starting from row 11. (It starts there to support instructional text in the first rows of the sheet, as shown below.) This script takes one command line argument, -sheet, the name of the sheet. It must be shared with the same service account.

Internet Archive credentials can be retrieved from.

So I'm experiencing an unusual problem where the docs that are synced from another server go through 'conversion' upon auto-archiver-import even though there isn't an outgoing provider to an IBR, all file formats are set to Passthru, and I have the 'Copy Web Content' option turned on for the archive.
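Taken together, the row rules above (the first row is a header; rows with an empty "Media URL" are skipped; any text in "Archive status" also means skip) amount to a simple filter. Here is a minimal sketch in Python, assuming rows arrive as a list of lists such as gspread's Worksheet.get_all_values() returns; rows_to_archive is a hypothetical helper for illustration, not part of the real auto-archiver:

```python
# Sketch of the row-selection rules, assuming `rows` is a list of lists
# (e.g. from gspread's Worksheet.get_all_values()). Column positions are
# looked up from the header row by name.

def rows_to_archive(rows):
    """Return (sheet_row_number, url) pairs that still need archiving."""
    if not rows:
        return []
    header = rows[0]  # the first row is assumed to be a header row
    url_col = header.index("Media URL")
    status_col = header.index("Archive status")
    pending = []
    for i, row in enumerate(rows[1:], start=2):  # sheet rows are 1-indexed
        url = row[url_col].strip() if url_col < len(row) else ""
        status = row[status_col].strip() if status_col < len(row) else ""
        if not url:      # rows with an empty URL column are skipped
            continue
        if status:       # any text in "Archive status" means skip
            continue
        pending.append((i, url))
    return pending


rows = [
    ["Media URL", "Archive status", "Archive location"],
    ["https://example.com/a", "", ""],
    ["", "", ""],
    ["https://example.com/b", "archived", "https://web.archive.org/..."],
]
print(rows_to_archive(rows))  # only row 2 still needs archiving
```

Applying the same filter to every worksheet in the document would mirror the "all sheets are checked" behavior.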

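The auto-auto-archiver dispatch described above (one auto-archiver run per sheet name listed in column A, starting from row 11) can be sketched as follows. archive_commands is a hypothetical helper that only builds the command lines, assuming column A's values arrive as a plain list of strings (e.g. from gspread's Worksheet.col_values(1)); the real script's internals may differ:

```python
import sys

# Sketch of the auto-auto-archiver dispatch: build one
# `python auto_archive.py -sheet <name>` invocation per sheet name
# listed in column A, starting at row 11 (rows 1-10 are reserved for
# instructional text).

def archive_commands(column_a, start_row=11):
    """Build a command line for each non-empty sheet name in column A."""
    commands = []
    for name in column_a[start_row - 1:]:  # skip the instructional rows
        name = name.strip()
        if not name:
            continue
        commands.append([sys.executable, "auto_archive.py", "-sheet", name])
    return commands


column_a = ["(instructions)"] * 10 + ["archiver-test", "", "other-sheet"]
for cmd in archive_commands(column_a):
    print(cmd)
```

Each command could then be handed to subprocess.run, mirroring the crontab entry shown earlier.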