Why Performance Tuning SQL Requires Looking Out for CRON Jobs


Intense database overload can have many causes, one of which is a large number of cron job copies running at the same time. Naturally, an unusually large number of cron jobs has a severe effect on database performance. If the database server slows down, or a job takes longer to complete than usual, it may still be running when its next scheduled copy starts; the two copies then compete with each other for resources, which makes it even less likely that either will finish.

How to Handle the Multiple Cron Jobs Problem - and Improve Database Performance

Although such a dramatic effect is often noticed and resolved in time, it can recur quite frequently, which is why it matters for performance tuning SQL in MySQL. Here, we will discuss a few ways to help users keep their cron jobs under control.

 

1. Avoid Running Several Copies

This one’s apparent and can be done quite easily. Experts suggest making it a production requirement that no cron job is allowed unless it either prevents itself from starting in several copies, or a wrapper script that does so is placed around developer-written jobs as they are moved into production.

File locks are especially helpful in this case - prefer proper file locking over simply creating a lock file, because a stale file left behind after a script crash may keep the job from restarting. You may also use the GET_LOCK function in your MySQL queries before you begin tuning.
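As a rough illustration, here is a minimal Python wrapper sketch that takes an exclusive, non-blocking file lock so a second copy of the job exits immediately instead of piling up. The lock file path and the do_the_work() job body are assumptions made for the example; a real lock (unlike a plain lock file) is released automatically if the script crashes.

    import fcntl
    import sys

    LOCK_PATH = "/var/run/nightly_report.lock"   # hypothetical lock file for this job

    def do_the_work():
        # Placeholder for the actual cron job body.
        pass

    def main():
        # Keep the file handle open for the lifetime of the job; the OS
        # releases the lock automatically when the process exits or crashes.
        lock_file = open(LOCK_PATH, "w")
        try:
            # Non-blocking exclusive lock: fails at once if another copy holds it.
            fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            print("another copy is already running, exiting", file=sys.stderr)
            sys.exit(1)

        do_the_work()

    if __name__ == "__main__":
        main()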

The second alternative, GET_LOCK, may be preferable if you wish to serialize jobs across several servers. It may also be useful if you need to restrict concurrency for some processes for better performance. For instance, you may have fifty web servers that run some cron jobs, but you may not want more than four of those servers running them at the same time.
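One hedged way to cap concurrency with GET_LOCK is to try a small pool of named locks and run only if a free slot is found. The sketch below assumes the mysql-connector-python driver, made-up connection settings, and a hypothetical job name; GET_LOCK returns 1 when the lock is acquired, 0 on timeout, and NULL on error.

    import sys
    import mysql.connector   # assumes the mysql-connector-python package

    MAX_SLOTS = 4            # at most four servers may run this job at once

    def acquire_slot(cursor, job_name):
        """Try each named lock in turn; return the slot number or None."""
        for slot in range(1, MAX_SLOTS + 1):
            lock_name = f"{job_name}_slot_{slot}"
            cursor.execute("SELECT GET_LOCK(%s, 0)", (lock_name,))
            (granted,) = cursor.fetchone()
            if granted == 1:          # 1 = acquired, 0 = busy, NULL = error
                return slot
        return None

    conn = mysql.connector.connect(
        host="db.example.com", user="cron", password="secret", database="jobs"
    )  # hypothetical credentials
    cursor = conn.cursor()

    slot = acquire_slot(cursor, "image_resize")   # hypothetical job name
    if slot is None:
        print("all job slots are busy, skipping this run", file=sys.stderr)
        sys.exit(0)

    # ... run the job while holding the lock; it is released when the
    # connection closes or RELEASE_LOCK() is called.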

 


 

2. Keep an Eye Out for Errors

One of cron’s prominent features is mailing the job’s output to the user. Changing the script so that it prints only error messages helps you catch problems as soon as they start to occur while you are performance tuning SQL, such as a job failing to run because another, unexpected job is already running in the system.

In big systems, you might take a slightly different approach so that a database or server reboot does not trigger a flood of cron job messages, but at least you will be informed about cron errors in time.
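As a sketch of the “mail only on errors” idea, the wrapper below runs the real job, stays silent on success, and prints the captured output to stderr only when the job fails, so cron mails nothing but errors. The job command is a hypothetical placeholder.

    import subprocess
    import sys

    # Hypothetical command for the real cron job.
    JOB = ["/usr/local/bin/nightly_report.sh"]

    result = subprocess.run(JOB, capture_output=True, text=True)

    if result.returncode != 0:
        # Print only on failure, so cron mails us nothing unless something broke.
        sys.stderr.write(f"job failed with exit code {result.returncode}\n")
        sys.stderr.write(result.stdout)
        sys.stderr.write(result.stderr)
        sys.exit(result.returncode)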

 

3. Save Previous Run Times

There have been several instances where a cron job could no longer finish on time. There may be speculation about the reason: did the job suddenly fail to complete, or did it slow down gradually until it simply couldn’t finish in time?

In such cases, you may create a table in the database and save details about each cron job run and how long it took, which will help you with query optimization later. This can also be recorded using a wrapper script, but it is best done inside the job itself, because you can store multiple metrics that way; a short sketch follows the list below.

For instance, you could save information in this manner:

  • Time taken = 30 seconds
  • Images processed = 2000
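A run-log table and an insert at the end of the job might look like the sketch below. The table name, columns, driver (mysql-connector-python), and connection settings are assumptions for illustration.

    import time
    import mysql.connector   # assumes the mysql-connector-python package

    # Hypothetical run-log table:
    #   CREATE TABLE cron_job_runs (
    #       job_name        VARCHAR(64),
    #       started_at      DATETIME,
    #       seconds_taken   DOUBLE,
    #       items_processed INT
    #   );

    def process_images():
        # Placeholder for the real cron job body; returns the number of items done.
        return 2000

    conn = mysql.connector.connect(
        host="db.example.com", user="cron", password="secret", database="jobs"
    )  # hypothetical credentials
    cursor = conn.cursor()

    start = time.time()
    images_processed = process_images()
    seconds_taken = time.time() - start

    cursor.execute(
        "INSERT INTO cron_job_runs (job_name, started_at, seconds_taken, items_processed) "
        "VALUES (%s, NOW(), %s, %s)",
        ("image_resize", seconds_taken, images_processed),
    )
    conn.commit()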

Once you start logging details in this manner, you can easily identify the reason behind an increase in run time, such as a larger amount of work or the system completing the same work more slowly. Hooking monitoring up to this trending information is even more helpful, so you are alerted when a cron job you are recording data for takes five minutes to run instead of its usual 15 seconds.
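Hooking monitoring up to that table can be as simple as the check below, which compares the most recent run against a threshold. The five-minute limit, the table layout from the earlier sketch, and the connection settings are all assumptions; a real setup would feed this into your alerting system instead of printing.

    import sys
    import mysql.connector   # assumes the mysql-connector-python package

    ALERT_SECONDS = 300   # alert when a normally 15-second job takes five minutes

    conn = mysql.connector.connect(
        host="db.example.com", user="cron", password="secret", database="jobs"
    )  # hypothetical credentials
    cursor = conn.cursor()

    cursor.execute(
        "SELECT seconds_taken FROM cron_job_runs "
        "WHERE job_name = %s ORDER BY started_at DESC LIMIT 1",
        ("image_resize",),
    )
    row = cursor.fetchone()

    if row and row[0] > ALERT_SECONDS:
        print(f"image_resize took {row[0]:.0f}s, above the {ALERT_SECONDS}s limit",
              file=sys.stderr)
        sys.exit(1)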

 
