How to Run Jobs on Laravel Queues and Best Practices
Introduction
Queues are a very important part of high-performance, efficient Laravel applications, enabling background job processing, improving request response times, and handling heavy workloads. However, improper queue management can lead to bottlenecks, failures, and unexpected behavior.
In this article, we'll cover queue essentials, and especially how to run Laravel queues effectively. You'll learn how to configure and monitor jobs, handle failures, optimize queue performance, and avoid common pitfalls. Let's go!
Choosing the Right Queue Driver
Laravel supports multiple queue drivers (database, Redis, Amazon SQS, Beanstalkd, etc.), and selecting the right one is crucial.
- Database: Simple but may not be ideal for high-throughput applications, mostly due to the overhead of managing database transactions.
- Redis: Usually the best choice for most applications, offering speed, reliability, and scalability.
- Amazon SQS: Great for distributed, cloud-based applications needing managed queue services.
- Beanstalkd: Lightweight and fast.
Recommendation: when starting out, you'll most likely be fine with the database driver. Otherwise, use Redis unless you have a specific need for SQS or another provider, or you know exactly what you are doing.
For demonstration purposes, I'll be using the database driver from here on.
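To wire that up, point your queue connection at the database and make sure the jobs table exists. Here is a minimal setup sketch (note that on recent Laravel versions the table command is named make:queue-table, and new applications may already ship with this migration):

# .env
QUEUE_CONNECTION=database

# Create the jobs table migration and run it
php artisan queue:table
php artisan migrate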
Running and Managing Queues Efficiently
Use Supervisor to Keep Workers Running
Laravel queues require persistent workers. Use Supervisor to ensure workers are restarted automatically after crashes or server reboots.
Install Supervisor (Ubuntu example):
sudo apt update && sudo apt install supervisor
Example Supervisor configuration for Laravel:
[program:laravel-queue]
process_name=%(program_name)s_%(process_num)02d
command=php /path-to-project/artisan queue:work --tries=3 --timeout=90
autostart=true
autorestart=true
numprocs=3
redirect_stderr=true
stdout_logfile=/path-to-project/storage/logs/queue.log
Restart Supervisor after configuration:
sudo supervisorctl reread && sudo supervisorctl update && sudo supervisorctl start laravel-queue:*
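Beyond keeping workers alive, it helps to give them resource limits and to restart them on each deployment so they pick up new code. The flags below are standard queue:work options; the values are illustrative, so tune them to your workload:

# Recycle the worker after 128 MB of memory, 1000 jobs, or one hour of runtime
php artisan queue:work --tries=3 --timeout=90 --memory=128 --max-jobs=1000 --max-time=3600

# Gracefully restart all workers after a deployment (Supervisor's autorestart brings them back up)
php artisan queue:restart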
Handling Failed Jobs Gracefully
Even with retries, you will most likely still end up with some failed jobs. Laravel provides a failed jobs table to track these issues.
Store Failed Jobs in the Database
php artisan queue:failed-table
php artisan migrate
Configure Auto-Retry with tries and backoff
Define retry behavior in your job classes:
class ProcessOrder implements ShouldQueue
{
    public int $tries = 5;
    public int $backoff = 30; // Wait 30 seconds between retries
}
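If you want to react when a job exhausts its retries, you can add a failed() hook and dispatch the job as usual. Here is a minimal sketch; the constructor, the orderId property, and the handle() body are illustrative assumptions, not part of the example above:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Throwable;

class ProcessOrder implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public int $tries = 5;
    public int $backoff = 30;

    // Assumed constructor: the job carries the ID of the order it should process.
    public function __construct(public int $orderId) {}

    public function handle(): void
    {
        // Process the order here (illustrative placeholder).
    }

    // Called by Laravel once all retries are exhausted.
    public function failed(Throwable $exception): void
    {
        // Log the failure so it can be investigated later.
        logger()->error('ProcessOrder failed', [
            'order_id' => $this->orderId,
            'error'    => $exception->getMessage(),
        ]);
    }
}

// Dispatching the job from, e.g., a controller:
ProcessOrder::dispatch($orderId);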
Monitor and Manually Retry Failed Jobs
List failed jobs:
php artisan queue:failed
Retry failed jobs:
php artisan queue:retry all
Retries all failed jobs.
php artisan queue:retry 12345
Retries the failed job with the given ID (12345).
Prune old failed jobs:
php artisan queue:prune-failed --hours=24
Deletes failed jobs older than 24 hours to keep the table clean.
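To keep the table clean without manual runs, you can schedule pruning. Here is a minimal sketch using the Schedule facade available in newer Laravel versions (on older versions, the equivalent $schedule->command(...) call lives in app/Console/Kernel.php):

// routes/console.php
use Illuminate\Support\Facades\Schedule;

// Prune failed jobs older than 24 hours, once a day
Schedule::command('queue:prune-failed --hours=24')->daily();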
Conclusion
Laravel queues are a powerful way to handle background processing, but they require careful management to avoid performance bottlenecks and failures.
Key takeaways:
- The database driver will most likely meet your queue needs when starting out.
- Configure Supervisor to keep workers running.
- Set proper timeouts, memory limits, and retries to prevent stuck jobs.
- Monitor failed jobs and retry them when necessary.
- Optimize database queries and API interactions to avoid slow job execution.
Hope you got a hint or two from this piece!