Automated ClamAV Virus Scanning

Automating Linux Anti-Virus Using ClamAV and Cron

Thankfully, Linux isn't a platform with a significant virus problem, but it's always better to be safe than sorry. Luckily, ClamAV is an excellent free anti-virus solution for Linux servers. However, at least on Red Hat Enterprise Linux 5 (RHEL5), the default install doesn't offer any automated scanning and alerting. So here is what I've done:

The following steps assume you are using RHEL5, but should apply to other Linux distributions as well.

First, you’ll want to install ClamAV:

[bash light="true"] yum install clamav clamav-db clamd
/etc/init.d/clamd start[/bash]

On RHEL5, at least, this automatically sets up a daily cron job that uses freshclam to update the virus definitions, so that's good.
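That update job amounts to little more than a freshclam invocation. A minimal hand-rolled equivalent is sketched below; the binary and log paths are assumptions, and the file the RHEL5 package actually ships may differ:

```shell
#!/bin/bash
# Sketch of a daily definition update, e.g. saved as /etc/cron.daily/freshclam.
# The freshclam and log paths below are assumptions; adjust for your install.
/usr/bin/freshclam --quiet --log=/var/log/clamav/freshclam.log
```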

Next I recommend removing the test virus files, although you can save this until after you test the rest of the setup:

[bash light="true"] rm -rf /usr/share/doc/clamav-0.95.3/test/[/bash]

Now we want to set up our automation. I have a daily cron job that scans the entire server, which can take several minutes, and then an hourly cron job that only scans files which were created or modified within the last hour. This should provide rapid notification of any infection without bogging your server down for several minutes every hour. The hourly scans run in a couple of seconds.

Each scanning script then checks the scan logs to see if there were any infected files found, and if so immediately sends you a notification e-mail (you could set this address to your mobile phone’s SMS account if you wanted).
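To make that check concrete, here is the core test run against a fabricated log file whose summary lines merely resemble clamscan's output (the numbers are illustrative, not real scan results):

```shell
# Build a fake scan log; these summary lines are illustrative, not verbatim
# clamscan output.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
----------- SCAN SUMMARY -----------
Known viruses: 692304
Scanned files: 1412
Infected files: 1
Time: 12.345 sec (0 m 12 s)
EOF

# The same test the cron scripts use: any "Infected" count that isn't zero?
COUNT=`tail -n 12 "$LOG" | grep Infected | grep -v 0 | wc -l`
if [ "$COUNT" != 0 ]; then
    echo "ALERT: infection reported, a notification e-mail would be sent"
fi
rm -f "$LOG"
```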

The Daily Scan:

[bash light="true"] emacs /etc/cron.daily/clamscan_daily[/bash]

Paste in:

[bash] #!/bin/bash

# email subject
SUBJECT="VIRUS DETECTED ON `hostname`!!!"
# Email To ?
EMAIL="[email protected]"
# Log location
LOG=/var/log/clamav/scan.log

check_scan () {

    # Check the last set of results. If there are any "Infected" counts that aren't zero, we have a problem.
    if [ `tail -n 12 ${LOG} | grep Infected | grep -v 0 | wc -l` != 0 ]; then
        EMAILMESSAGE=`mktemp /tmp/virus-alert.XXXXX`
        echo "To: ${EMAIL}" >> ${EMAILMESSAGE}
        echo "From: [email protected]" >> ${EMAILMESSAGE}
        echo "Subject: ${SUBJECT}" >> ${EMAILMESSAGE}
        echo "Importance: High" >> ${EMAILMESSAGE}
        echo "X-Priority: 1" >> ${EMAILMESSAGE}
        echo "`tail -n 50 ${LOG}`" >> ${EMAILMESSAGE}
        sendmail -t < ${EMAILMESSAGE}
    fi
}

# Scan the whole filesystem, logging only infected files
clamscan -r / --exclude-dir=/sys/ --quiet --infected --log=${LOG}

check_scan[/bash]


[bash light="true"] chmod +x /etc/cron.daily/clamscan_daily[/bash]

The Hourly Scan:

[bash light="true"] emacs /etc/cron.hourly/clamscan_hourly[/bash]

Paste in:

[bash] #!/bin/bash

# email subject
SUBJECT="VIRUS DETECTED ON `hostname`!!!"
# Email To ?
EMAIL="[email protected]"
# Log location
LOG=/var/log/clamav/scan.log

check_scan () {

    # Check the last set of results. If there are any "Infected" counts that aren't zero, we have a problem.
    if [ `tail -n 12 ${LOG} | grep Infected | grep -v 0 | wc -l` != 0 ]; then
        EMAILMESSAGE=`mktemp /tmp/virus-alert.XXXXX`
        echo "To: ${EMAIL}" >> ${EMAILMESSAGE}
        echo "From: [email protected]" >> ${EMAILMESSAGE}
        echo "Subject: ${SUBJECT}" >> ${EMAILMESSAGE}
        echo "Importance: High" >> ${EMAILMESSAGE}
        echo "X-Priority: 1" >> ${EMAILMESSAGE}
        echo "`tail -n 50 ${LOG}`" >> ${EMAILMESSAGE}
        sendmail -t < ${EMAILMESSAGE}
    fi
}

# Scan files modified within the last 61 minutes
find / -not -wholename '/sys/*' -and -not -wholename '/proc/*' -mmin -61 -type f -print0 | xargs -0 -r clamscan --exclude-dir=/proc/ --exclude-dir=/sys/ --quiet --infected --log=${LOG}

check_scan

# Scan files created (status changed) within the last 61 minutes
find / -not -wholename '/sys/*' -and -not -wholename '/proc/*' -cmin -61 -type f -print0 | xargs -0 -r clamscan --exclude-dir=/proc/ --exclude-dir=/sys/ --quiet --infected --log=${LOG}

check_scan[/bash]

[bash light="true"] chmod +x /etc/cron.hourly/clamscan_hourly[/bash]

Protected System

You should now have a well protected system with low impact to system performance and rapid alerting. Anti-Virus is only one piece of protecting a server, but hopefully this makes it easy to implement for everyone.

By |2010-01-26T16:07:57+00:00January 26th, 2010|Linux|63 Comments

About the Author:

Devon is a husband, amateur photographer, avid reader, and motorcycle enthusiast.


  1. Joshua Glemza May 18, 2010 at 9:17 am - Reply

    Thanks for writing these scripts. Also, there’s an erroneous semicolon on line 19 of the hourly script.

    • Devon May 18, 2010 at 9:30 am - Reply


      great catch on the semi-colon (fixed it) thanks!!

      Glad you like the scripts!


  2. Andrew Kramer May 20, 2010 at 7:55 pm - Reply

    Nice scripts – thanks for posting them. One problem – filenames with spaces. Example:”Time_spent on the queue-all messages.png”

    Naturally, you get the following complaint:

    Time_spent: No such file or directory
    on: No such file or directory
    the: No such file or directory
    queue-all: No such file or directory
    messages.png: No such file or directory


    • Andrew Kramer May 20, 2010 at 7:58 pm - Reply

      Although to be fair, I think that’s Clam being dumb, not you πŸ˜‰

      • Devon May 21, 2010 at 8:58 am - Reply

        Very good point! Assuming you’re only having the issue with the hourly script, it’s not clam’s fault, it’s mine/linux’s:) However, I’ve just updated the hourly script in a way that should handle files with spaces no problem. Give it a test!
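The fix Devon mentions hinges on find's -print0 paired with xargs -0, which delimits filenames with NUL bytes instead of whitespace. A quick sketch with a hypothetical space-laden filename:

```shell
WORK=$(mktemp -d)
touch "$WORK/Time_spent on the queue-all messages.png"

# Whitespace-delimited: xargs splits the single name into five bogus arguments
BROKEN=`find "$WORK" -type f | xargs -n1 echo | wc -l`

# NUL-delimited (-print0 / -0): the name reaches the command intact
FIXED=`find "$WORK" -type f -print0 | xargs -0 -n1 echo | wc -l`

echo "plain pipe saw $BROKEN arguments; -print0 pipe saw $FIXED"
rm -rf "$WORK"
```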



  3. Gerrit Kamp June 4, 2010 at 5:35 am - Reply

    Hi Devon,
Thanks for these great scripts! Very, very useful.

    Two comments that may be helpful: when using -type f I did not pick up viruses in zip files but as soon as I removed -type f altogether, I did find them. Also, the hourly script above runs ‘find’ and ‘clamscan’ twice.


    • Devon June 4, 2010 at 6:49 am - Reply

      Good point on the -type f thing. I didn’t check that. The twice running thing is by design, but I had a cut and paste failure when I updated that script. It should run once for -mmin and once for -cmin to find both files created in the last 61 minutes, and files modified in the last 61 minutes. Fixing the script now, good eye!

  4. Derek June 10, 2010 at 11:00 pm - Reply

    I implemented the scripts for daily and weekly scans, though I have a question regarding the log file setup.

    When I first tried to use the LOG option from the command line, I found that I had to create the scan.log file manually.

    My question is how did you setup the log file permissions so that the cron script will write to it?

    • Devon June 11, 2010 at 11:48 am - Reply

      It should run as root, so it shouldn’t need any special permissions. Just make sure the scan.log file is owned by root, and it can be world readable without too much risk.


  5. Chris September 29, 2010 at 8:01 am - Reply

    The find command can list files that match on separate criteria, i.e. or/union. This should list the same files as your two-pass approach in one pass, without duplicates:

find / \( -wholename '/proc' -o -wholename '/sys' \) -prune -o -type f \( -cmin -61 -o -mmin -61 \) -print0

  6. Dave November 18, 2011 at 6:20 pm - Reply

    Hi There

    When the hourly cron runs, I get the following e-mail:


    /etc/cron.hourly/clamscan_hourly: line 33: unexpected EOF while looking for matching `”
    /etc/cron.hourly/clamscan_hourly: line 37: syntax error: unexpected end of file

    Can you please help me to resolve.


    • Devon November 21, 2011 at 7:11 pm - Reply

      Something is wrong as the clamscan_hourly script only has 31 lines. If you’re getting errors on lines 33 and 37 there must have been an issue with the copy/paste. Try this:

  7. alsahh December 2, 2011 at 8:07 am - Reply

    Can u please explain me what is the hourly script doin in

find / -not -wholename '/sys/*' -and -not -wholename '/proc/*' -mmin -61 -type f -print0

    I wanna understand why this so solution is performing so well. Thank u !

    • Devon December 2, 2011 at 11:54 am - Reply

      Sure. Basically this is finding files which have been modified in the past 61 minutes, ignoring files under /sys/ and /proc/ which are often file representations of system data/ports/etc… and will show up as changed all the time but be unscanable/slow to scan/etc…

      By checking only files created or changed in the past hour the hourly scan can quickly check for potential new viruses, without doing a full hard drive scan, which happens nightly just in case.

      • alsahh December 2, 2011 at 12:18 pm - Reply

        ah ok thank u for the explanation!

        i still workin on getting clamd startet for scanning files on-access. while beeing an linux-noob i cannot get clamd reacting on uploaded test files. clamdscan is working but the daemon itself doesnt react on infected files, dont know what to do so… πŸ™

        • Devon December 2, 2011 at 1:10 pm - Reply

          What do you want it to do? Using my scripts it will just e-mail you if a scan finds something, leaving you to take manual action. You can configure clam to automatically delete or quarantine flagged files as well, but that’s something you should read the clamav docs for.

  8. Daniel December 10, 2011 at 7:13 am - Reply

    Would there be any way that the clamscan_daily script can be put on the same site where the clamscan_hourly script was? The site where you could paste directly from. Having trouble getting the daily script to work properly. Getting the error: /etc/cron.daily/clamscan_daily: line 9: syntax error: unexpected end of file and: /etc/cron.daily/clamscan_daily: line 7: unexpected EOF while looking for matching `”

  9. Dennis Y. December 23, 2011 at 9:35 am - Reply

    Good Stuff. Thanks for the script works perfectly.

  10. […] tutorial (…-scanning.html ) setups daily and hourly scanning and notification if viruses are […]

  11. Peter Nelson July 6, 2012 at 12:56 am - Reply

    Bear in mind that the “grep -v 0” will filter out more than you intend, such as 10 or 105 (deity forbid) infected files.

    • Nate March 14, 2013 at 7:07 am - Reply

I found that "grep -v ': 0'" works better
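The difference between the two patterns is easy to demonstrate with a fabricated summary line:

```shell
# `grep -v 0` drops any line containing a zero anywhere, so a count of 10
# (or 105) is silently filtered out of the alert check:
MISSED=`echo "Infected files: 10" | grep Infected | grep -v 0 | wc -l`

# Anchoring the zero to the count position keeps such lines:
CAUGHT=`echo "Infected files: 10" | grep Infected | grep -v ': 0' | wc -l`

echo "grep -v 0 kept $MISSED line(s); grep -v ': 0' kept $CAUGHT"
```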

  12. Michael Hutchinson July 7, 2012 at 10:31 am - Reply

    I just have a simple question,

    Using the Daily script, how can I have it auto attach the Log file of that Daily Scan to it when sending the email.

    So that I can then see what and where the virus was, without having to log in via Putty every time.

    Thanks a mil,

    • Devon July 8, 2012 at 6:09 pm - Reply


      that’s a good idea. I don’t have time to update the script right now but if anyone does and wants to share it here, that would be great!


  13. Jeff August 14, 2012 at 7:22 am - Reply


    I installed the cron.daily script and it works fine, however, all infected filenames (and full path) end up in the mail headers instead of the mail body. Any idea how to fix this ?

  14. fobriste August 21, 2012 at 9:24 am - Reply

This is a pretty good howto that could greatly be enhanced with a daily run of freshclam:

    23 2 * * * /usr/bin/freshclam

    Would do alright. This requires the clamav-update package.

  15. Dee December 28, 2012 at 8:15 am - Reply

    Thanks for this excellent resource – very well explained and easy to implement. Maybe a side-note about the clamavconnector in cPanel wouldn’t go amiss.
    All the best,

    • Devon December 28, 2012 at 8:21 am - Reply

      I’ve never used cPanel, but I’ll take your word for it!

      • Dee December 31, 2012 at 12:38 pm - Reply

        Sure, it’s just an automated Clam install through cPanel… I made the mistake of installing Clam manually and then installing over it with the cPanel plugin, thereby destroying all my config. Guess you live and learn πŸ™‚

  16. Jo February 9, 2013 at 1:32 pm - Reply

    Hi, Thank you for the scripts. I am having problems with hackers and need to check files to find where the problem is. I am getting an error after setting it up.
    /bin/sh: /etc/cron.hourly/clamscan_hourly: /bin/bash^M: bad interpreter: No such file or directory

    Can anyone see from this what I have done wrong?
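The ^M in that error means the script was saved with DOS (CRLF) line endings, so the kernel is literally looking for an interpreter named "/bin/bash" followed by a carriage return. Stripping the carriage returns (which is what dos2unix does) fixes it; a sketch using GNU sed:

```shell
WORK=$(mktemp -d)
# Simulate a script saved with Windows line endings:
printf '#!/bin/bash\r\necho hi\r\n' > "$WORK/crlf.sh"

CR=$(printf '\r')
BEFORE=`grep -c "$CR" "$WORK/crlf.sh"`   # lines still carrying a carriage return
sed -i 's/\r$//' "$WORK/crlf.sh"         # strip them (GNU sed; dos2unix also works)
AFTER=`grep -c "$CR" "$WORK/crlf.sh"`

echo "CR-terminated lines before: $BEFORE, after: $AFTER"
rm -rf "$WORK"
```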

  17. clamav - Page 4 February 19, 2013 at 12:26 am - Reply

    […] :- [SOLVED] Incoming e-mail virus scan with Evolution – Ubuntu Forums Scan your directories :- Automated ClamAV Virus Scanning | Devon Hillard's Digital Sanctuary kmail,claws it seem supports clam :- Clam AntiVirus openSUSE12.2(Mantis)64-bit/GNOME 3.4.2 […]

  18. Frans March 1, 2013 at 4:02 am - Reply

    THANK YOU for posting these scripts Devon πŸ˜‰

    Jo, you can just replace the shebang part like this:
    #!/usr/bin/env bash

    bash interpreter, as well as many others, is not always located at /bin/bash but in /usr/bin/bash on many distros. Using env utility is a simple way to improve script portability as it points to where the interpreter is located on the current system.
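The point is easy to verify: env resolves the interpreter through $PATH rather than a hard-coded location:

```shell
# `env` searches $PATH for bash, so the same shebang works whether bash
# lives in /bin, /usr/bin, or elsewhere:
OUT=`/usr/bin/env bash -c 'echo "running under bash $BASH_VERSION"'`
echo "$OUT"
```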


  19. IT_Architect March 8, 2013 at 7:30 pm - Reply

    Thanks for making these scripts:
    A few of the changes I made were:
    1. Changed -wholename to -path on the hourly. -path runs everywhere while -wholename does not.
    2. Dropped –exclude-dir on the hourly since clamscan receives a filtered stream from find already.
    4. Dropped the clamscan -r parameter on the hourly because it serves no purpose with a filtered find stream.
    5. Did a cat on the two finds and piped it through sort -zu. (hourly) This avoids scanning the same files twice and produces one entry in the log instead of two.
    6. Made a second version of the hourly where I can specify with wildcards where to scan, rather than where not to scan. This gives fine control of what gets scanned, which is also very useful for web servers.
    7. Parameterized everything so no code needs to be edited outside of header variables.
    *If you see any issues with this logic, let me know.


    • Devon March 11, 2013 at 5:41 am - Reply

      IT Architect,

      that sounds great! Can you share your improved scripts? You can email me: [email protected] if you like. Comments might mess up the formatting.

      • IT_Architect March 13, 2013 at 9:30 am - Reply

>Comments might mess up the formatting. Can you share your improved scripts?<
I can, but about the only code that resembles what is here is the email alert form. LOL!

        Other: I hope you're a Linux guy, because I'm a UNIX guy. I got out of Linux in 2008 only due to the wild server loads we have. One of my goals is that the scripts run on any ?NIX, and about any version, and I don't have a Linux box to try them on. Your timing is also great. Teething issues are over, and I just finished a code cleanup. The code and logic are well commented.

        • Devon March 13, 2013 at 9:35 am - Reply

          Hey that sounds great! Feel free to email it over to me, or if you post it on your own blog I’ll just link to it or whatever is easiest:)

          Yes I’m a Linux guy so happy to test stuff on RHEL 5 for ya! Thanks!

  20. Mike March 28, 2013 at 8:55 am - Reply


    Nice work, your site has been VERY informative. I am currently running ClamAV on UNIX and would love to get a copy of IT_Architects version of the script to test in my environment. Do you think you could email me a copy of it?

    I would greatly appreciate it, thanks!

  21. daniel April 7, 2013 at 5:03 pm - Reply

    dear devon

    thanks for your excellent script

    I’m kind of new on this

    do i need to setup crontab -e or creating daily and hourly scan will be fine?

    my question is,, when these script will start???

    sorry for the dumb question


    • Devon April 7, 2013 at 8:47 pm - Reply


      if you create the scripts in the locations mentioned: /etc/cron.hourly/ and /etc/cron.daily then they will run automatically (assuming you’re running on a system that has those locations – I use RHEL 5). You won’t need to add them to a crontab directly.

  22. daniel April 7, 2013 at 5:06 pm - Reply

    my current crontab -e result are:

    30 5 * * * /usr/local/cpanel/scripts/optimize_eximstats > /dev/null 2>&1
58 4 * * * /usr/local/cpanel/scripts/upcp --cron
    0 1 * * * /usr/local/cpanel/scripts/cpbackup
    35 * * * * /usr/bin/test -x /usr/local/cpanel/bin/tail-check && /usr/local/cpanel/bin/tail-check
    45 */4 * * * /usr/bin/test -x /usr/local/cpanel/scripts/update_mailman_cache && /usr/local/cpanel/scripts/update_mailman_cache
    30 */4 * * * /usr/bin/test -x /usr/local/cpanel/scripts/update_db_cache && /usr/local/cpanel/scripts/update_db_cache
    45 */8 * * * /usr/bin/test -x /usr/local/cpanel/bin/optimizefs && /usr/local/cpanel/bin/optimizefs
    30 */2 * * * /usr/local/cpanel/bin/mysqluserstore >/dev/null 2>&1
    15 */2 * * * /usr/local/cpanel/bin/dbindex >/dev/null 2>&1
    15 */6 * * * /usr/local/cpanel/scripts/autorepair recoverymgmt >/dev/null 2>&1
    */5 * * * * /usr/local/cpanel/bin/dcpumon >/dev/null 2>&1
34 22 * * * /usr/local/cpanel/whostmgr/docroot/cgi/ --notify
    7,22,37,52 * * * * /usr/local/cpanel/whostmgr/bin/dnsqueue > /dev/null 2>&1
    2,58 * * * * /usr/local/bandmin/bandmin
    0 0 * * * /usr/local/bandmin/ipaddrmap
    0 1 * * Wed /usr/local/1h/bin/ >> /usr/local/1h/var/log/1h_updates.log 2>&1
    * * * * * /usr/local/1h/sbin/ >> /usr/local/1h/var/log/data2rrd.log 2>&1
    */30 * * * * /usr/local/1h/sbin/ local >> /usr/local/1h/var/log/rrd2img.log 2>&1
    */5 * * * * /usr/local/1h/sbin/ >> /usr/local/1h/var/log/local-interface.log 2>&1
    7 0 * * * /usr/local/1h/bin/ >> /usr/local/1h/var/log/update_expire_times.log 2>&1
    59 * * * * /usr/local/1h/bin/ > /dev/null 2>&1
    0 * * * * /usr/local/1h/sbin/ >> /usr/local/1h/var/log/take_limits_down.log 2>&1
    45 23 * * 6 /usr/local/bin/clamav-cron /home

  23. Selwyn Orren May 31, 2013 at 3:29 am - Reply

    Hey All,

Love your scripts, they have really added so much benefit to my servers. I do have a question if you don't mind. I noticed the clamscan_daily gives me the following error when it runs

/etc/cron.daily/clamscan_daily: line 27: 30625 Terminated clamscan -r / --exclude-dir=/sys/ --quiet --infected --log=${LOG}

    Any idea why this happens and how to fix it?

    Thanks in advance

  24. Steve Belanger September 19, 2013 at 9:26 pm - Reply

    Simple curiosity, is there a reason why on the hourly script scan the last command ‘find’ is repeated twice with the same parameters ?

    • Devon September 20, 2013 at 5:05 am - Reply

It isn't. You can't see it well on this site, but if you copy/paste you'll see that one line uses -cmin and one uses -mmin, looking for files that were created or modified in the last 61 minutes. It's possible that newly created files have a modified timestamp of their creation, so it may not be needed… I haven't checked that.

  25. kostas peridis January 20, 2014 at 11:03 am - Reply

    I’ve found the “default” clamav rules to be ineffective against the usual trojans/malware uploaded on hacked server accounts.
    I recommend using special rules such as the atomic security clamav signatures.

  26. […] # Email alert cron job script for ClamAV # Original, unmodified script by: Deven Hillard #( # Modified to show infected and/or removed files # Directories to scan SCAN_DIR="/home /tmp /var" […]

  27. Wesley July 21, 2015 at 5:59 am - Reply

Thanks for these scripts. They are very helpful.
    I have only one question.
    How to get in the reports date and time of scanning???
    Thanks in advance

    • Devon July 21, 2015 at 9:24 am - Reply

      You just want to get the date and time inserted into the email or? just add an echo `date` >> in there.

  28. Wesley July 22, 2015 at 8:50 am - Reply

    Where?? there is no link :-))

    • Devon July 23, 2015 at 5:23 am - Reply

      In the set of echos that generate the email. Although really the email should have a date/time sent anyhow right?

  29. Czarek July 23, 2015 at 6:13 am - Reply

    One more question.
    Scan report do not shows the location of the infected file.
    How to add this to the scan report??

  30. Viktor October 30, 2015 at 1:57 am - Reply

    i wonder that no one actually fixed this script since 2010.. with 52 comments…
    first you have issues in this line ” if [ `tail -n 12 ${LOG} | grep Infected | grep -v 0 | wc -l` != 0 ] ”
    you dont have to use “tail”, because you already have “grep Infected”, then “grep -v 0” will match only non-zero, so if Infected files 250 or 105 for example, it will be never reported because of 0, it simply excluded from grep results… then you have to reset log every day, and copy infected log aside also… aand do not relay on mail and even sendmail too much, because of DNS issues or other problems…

    • Devon October 30, 2015 at 5:35 am - Reply


      I’m not sure I agree with you here. You have to use tail since the log file will have the results of MANY runs of the scan, so if you don’t use tail, then you will get false positives on old scans which you may have already addressed (at least until your log rotates). So tail solves that all cleanly with no downside.

      You are correct about the grep -v 0 however. Not sure the easiest way to fix that. It’s too early for regexes:)

      Not sure what you mean by not using mail because of DNS issues? If you don’t want an email notification I guess you can feel free to do whatever you want, but I like get an email notice:)

      • Viktor October 30, 2015 at 12:30 pm - Reply

        you are running clamscan DAILY so reset log after scan, copy infected log aside, and you have –infected option, then only infected files will be logged. so you will have new log every day.
if [ $(grep "Infected.*[1-9].*" ${LOG} | wc -l) != 0 ]
cp ${LOG} ${LOG}_CHECK_ME
echo "" > ${LOG}

        • Devon October 30, 2015 at 12:32 pm - Reply

          Also running it hourly, same log file. I think having a tail -n 12 is a LOT simpler than trying to bake in log rotation (when you should be using logrotate.d anyhow). Your pseudo code is already 10x more complex than the tail -n 12.

        • Viktor October 30, 2015 at 12:32 pm - Reply

          cp ${LOG} ${LOG}_CHECK_ME_${DATE}
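Since both cron scripts append to one shared log, pairing the tail -n 12 check with logrotate (as Devon suggests) keeps the file bounded. A minimal drop-in for /etc/logrotate.d is sketched below; the log path is an assumption and should match whatever you set in the scripts:

```
/var/log/clamav/scan.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```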

  31. Reid December 6, 2015 at 11:43 am - Reply

    Hi Devon,

    Thanks for the scripts! I’m currently trying to implement the clamscan_daily script. Unfortunately its not sending me the email messages. I’m currently using postfix, so I’m not sure what I have to change in order to make the script work. Any help in resolving this issue would be greatly appreciated. And one more thing, how would I enable freshclam updates in the script?


  32. Ron December 18, 2015 at 7:06 am - Reply

    Thanks for this, it worked great! The only adjustment I needed to make was to add –max-filesize parameter to suppress this warning: “LibClamAV Warning: cli_scanxz: decompress file size exceeds limits – only scanning 27262976 bytes”

    • Ron December 18, 2015 at 7:07 am - Reply

      oops, i meant to say max-scansize

  33. George February 28, 2016 at 8:44 am - Reply

    Great tutorial…
    I try setup my server… (VPS CentOS 6.6 with Plesk ODIN 12.0.6)
    I setup chron ect… but have error message:

    /etc/cron.hourly/clamscan_hourly: line 5: Log: command not found
    find: File system loop detected; `/var/named/chroot/var/named’ is part of the same file system loop as `/var/named’.
    find: File system loop detected; `/var/named/chroot/var/named’ is part of the same file system loop as `/var/named’.

    What is a problem? can u help me?

    Thank you for all

  34. vmthunder September 3, 2016 at 10:57 am - Reply

    Great stuff. I use these scripts on my Ubuntu servers. Someone has modified your scripts to show infected/removed files, What do you think about those changes?

  35. Chris Thorn November 17, 2016 at 2:57 am - Reply

    Great script!
    I use this on my desktop PC, but I use ssmtp, not sendmail. Ssmtp does not support the -t option, so I deleted the 5 “echo” lines 16 thru 20 and change line 22 to:
mail -s "${SUBJECT}" "${EMAIL}" < ${EMAILMESSAGE}

    All working fine – thanks.

