The total number of attempts to try the job until it completes.
Backoff setting for automatic retries if the job fails.
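As a sketch of how the two options above fit together: the option object below is illustrative, and the exponential schedule shown (base delay doubled after each attempt) is an assumption about how an exponential backoff type is commonly computed, not a guarantee from this document.

```typescript
// Hypothetical job options: retry up to 5 times, backing off exponentially.
const retryOpts = {
  attempts: 5, // total attempts before the job is considered failed
  backoff: { type: 'exponential', delay: 1000 }, // base delay in ms
};

// Assumed exponential schedule: base * 2^(attemptsMade - 1) milliseconds.
function retryDelay(baseMs: number, attemptsMade: number): number {
  return baseMs * 2 ** (attemptsMade - 1);
}

console.log([1, 2, 3, 4].map((n) => retryDelay(retryOpts.backoff.delay, n)));
// → [ 1000, 2000, 4000, 8000 ]
```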
The number of milliseconds to wait before this job can be processed. Note that for accurate delays, workers and producers should have their clocks synchronized.
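A minimal sketch of the delay option; the option object shape and the 60-second value are illustrative.

```typescript
// Hypothetical delayed job: it becomes processable 60 seconds after being added.
const delayedOpts = { delay: 60_000 }; // milliseconds

const addedAt = Date.now();
const processableAt = addedAt + delayedOpts.delay;
console.log(processableAt - addedAt); // → 60000
```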
A queue can be divided into an unlimited number of groups, which can be defined dynamically as jobs are added to the queue.
The group option specifies the unique group ID that the given job belongs to; jobs in a group are processed according to group mechanics.
For more information about groups, see the groups documentation.
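A sketch of the group option's shape; the group ID `'customer-42'` is a hypothetical value used only for illustration.

```typescript
// Hypothetical group option: all jobs sharing this group ID are processed
// according to group mechanics (e.g. fairness across groups).
const groupedOpts = { group: { id: 'customer-42' } }; // illustrative group ID

console.log(groupedOpts.group.id); // → customer-42
```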
Override the job ID. By default, the job ID is a unique integer, but you can use this setting to override it. If you use this option, it is up to you to ensure the jobId is unique. If you attempt to add a job with an ID that already exists, it will not be added.
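The de-duplication behavior described above can be sketched as follows, with a plain Map standing in for the real queue; the `addJob` helper and the `'invoice-001'` ID are hypothetical.

```typescript
// Sketch of custom-jobId de-duplication: a second add with the same ID
// is silently skipped rather than creating a duplicate job.
const jobs = new Map<string, unknown>();

function addJob(jobId: string, data: unknown): boolean {
  if (jobs.has(jobId)) return false; // ID already exists: job is not added
  jobs.set(jobId, data);
  return true;
}

console.log(addJob('invoice-001', { amount: 10 })); // → true
console.log(addJob('invoice-001', { amount: 10 })); // → false (ignored)
console.log(jobs.size); // → 1
```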
If true, adds the job to the right end of the queue instead of the left (default: false).
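A sketch of why this matters, assuming workers take jobs from the right end of the waiting list (so a LIFO job is processed sooner); the array here is only a stand-in for the queue.

```typescript
// Sketch of FIFO vs LIFO insertion: new jobs normally go to the left end and
// workers take jobs from the right end, so lifo: true makes a job run next.
function addTo(list: string[], job: string, lifo = false): void {
  if (lifo) list.push(job); // right end: processed next
  else list.unshift(job);   // left end: waits its turn
}

const waiting: string[] = [];
addTo(waiting, 'a');
addTo(waiting, 'b');
addTo(waiting, 'c', true);
console.log(waiting); // → [ 'b', 'a', 'c' ]  (next processed: 'c')
```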
Internal property used by repeatable jobs.
Ranges from 1 (highest priority) to MAX_INT (lowest priority). Note that using priorities has a slight impact on performance, so avoid them when not required.
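A sketch of the ordering this implies: lower numbers are picked first. The job names and priority values are illustrative.

```typescript
// Sketch of priority ordering: lower number = higher priority, so waiting
// jobs are picked in ascending priority order.
const waitingJobs = [
  { name: 'report', priority: 10 },
  { name: 'alert', priority: 1 },
  { name: 'cleanup', priority: 2 ** 31 - 1 }, // lowest priority (MAX_INT)
];

const order = [...waitingJobs]
  .sort((a, b) => a.priority - b.priority)
  .map((j) => j.name);
console.log(order); // → [ 'alert', 'report', 'cleanup' ]
```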
Rate limiter key to use if the rate limiter is enabled.
If true, removes the job when it successfully completes. When given a number, it specifies the maximum number of jobs to keep; alternatively, you can provide an object specifying the maximum age and/or count to keep. The default behavior is to keep the job in the completed set.
If true, removes the job when it fails after all attempts. When given a number, it specifies the maximum number of jobs to keep; alternatively, you can provide an object specifying the maximum age and/or count to keep.
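The three accepted shapes for the two removal options above can be sketched as plain objects; the age unit is assumed to be seconds here, and all values are illustrative.

```typescript
// Sketch of the three accepted shapes for removeOnComplete / removeOnFail.
const keepNothing = { removeOnComplete: true };   // drop job as soon as it completes
const keepLast1000 = { removeOnComplete: 1000 };  // keep at most 1000 completed jobs
const keepByAgeAndCount = {
  removeOnComplete: { age: 3600, count: 1000 },   // up to 1000 jobs, max 1 hour old
  removeOnFail: { age: 24 * 3600 },               // failed jobs kept for a day
};

console.log(keepByAgeAndCount.removeOnFail.age); // → 86400
```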
Repeat this job, for example based on a cron pattern or a fixed interval.
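A sketch of one common shape of a repeat configuration; the `{ every, limit }` fields are assumed here (an interval in milliseconds with a maximum number of repetitions), and a cron-style pattern is another common form.

```typescript
// Hypothetical repeat option: run this job every 10 seconds, at most 100 times.
const repeatOpts = {
  repeat: { every: 10_000, limit: 100 },
};

console.log(repeatOpts.repeat.every); // → 10000
console.log(repeatOpts.repeat.limit); // → 100
```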
Internal property used by repeatable jobs to save base repeat job key.
Limits the size in bytes of the job's data payload (as a JSON serialized string).
Limits the number of stack trace lines that will be recorded in the stacktrace.
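A sketch of how a sizeLimit check on the serialized payload might be measured, per the description above: the data is serialized to JSON and its byte length (which can exceed the character count for non-ASCII data) is compared against the limit. The payload and the 1024-byte limit are illustrative.

```typescript
// Sketch of the sizeLimit check: the payload is measured as the byte length
// of its JSON serialization.
const data = { orderId: 123, note: 'café' }; // 'é' is 1 char but 2 UTF-8 bytes
const serialized = JSON.stringify(data);
const bytes = Buffer.byteLength(serialized, 'utf8');

const sizeLimit = 1024; // illustrative limit in bytes
console.log(bytes <= sizeLimit); // payload fits under the limit
```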
Timestamp when the job was created.
Generated using TypeDoc